Compare commits


184 Commits

Author SHA1 Message Date
Luna Yao 8428a89d2d refactor: introduce HedgeExt for task hedging; rewrite NatDstQuicConnector (#2229) 2026-05-12 20:26:16 +08:00
韩嘉乐 513695297c [OHOS] feat: Enhance Rust kernel with config management and routing improvements (#2227)
* [OHOS, with AI] Move config management, config sharing, route aggregation, and instance state parsing down into the Rust core to consolidate responsibilities and improve performance (#2209)

* feat: add ohrs config store and startup error logging

* feat: full ability core for ohos

* feat: full ability core for ohos

* feat: clean code

---------

Co-authored-by: FrankHan <frankhan@FrankHans-Mac-mini.local>

* fix: add missing files

* fix: prevent TUN from being started twice on route updates, and adjust logging

* fix: rustfmt

* fix: adapt CIDR handling to ignore /32-format routes

* fix: fix Option adaptation bug

* fix: rustfmt

* fix: rustfmt

---------

Co-authored-by: FrankHan <frankhan@FrankHans-Mac-mini.local>
2026-05-10 14:15:31 +08:00
21paradox bfbfa2ef8d fix: reuse conn by dst_peer_id so every peer uses only 1 QUIC conn, fixing the NAT loss problem (#2216) 2026-05-09 22:33:44 +08:00
KKRainbow 8e1d079142 feat: add Windows UDP broadcast relay (#2222)
This may help games find rooms in the virtual network.

- add opt-in Windows UDP broadcast relay config flag and CLI/env plumbing
- capture local UDP broadcasts with Windows raw sockets, normalize packets, and inject them via PeerManager
2026-05-09 09:56:31 +08:00
fanyang 55f15bb6f0 fix(connector): classify manual reconnect timeouts by stage (#2062) 2026-05-08 22:08:51 +08:00
Luna Yao 96fd39649a revert UPX version to 4.2.4 in core.yml (#2221) 2026-05-07 18:49:40 +08:00
KKRainbow 74fc8b300d chore: bump version to 2.6.4 (#2219) 2026-05-07 13:48:51 +08:00
KKRainbow baeee40b79 fix machine uid and easytier-web panic (#2215)
1. fix(web-client): persist and migrate machine id
2. fix panic when easytier-web session receives a malformed packet
2026-05-07 00:57:42 +08:00
fanyang 4342c8d7a2 fix: add missing CLI help text (#2213) 2026-05-05 17:05:34 +08:00
KKRainbow 1178b312fa fix foreign network entry leak (#2211) 2026-05-05 11:01:44 +08:00
fanyang 362aa7a9cd fix: allow omitted ACL config fields (#2206) 2026-05-04 00:47:24 +08:00
KKRainbow 12a7b5a5c5 fix: scope peer center server data to instance (#2198)
Stop sharing PeerCenterServer state through a process-global map so local and foreign-network services cannot mix peer-center data when peer ids overlap.
2026-05-02 01:43:01 +08:00
fanyang 4eba9b07b6 fix(web-client): keep retrying unreachable config server (#2140)
Defer config-server connector creation into the web client retry loop so
service startup does not fail when network or DNS is unavailable.
2026-05-02 00:09:48 +08:00
KKRainbow 1b48029bdc fix: clean stale foreign network state (#2197)
- clear foreign-network traffic metric peer caches on peer removal and network cleanup
- release reserved foreign-network peer IDs on handshake/add-peer error paths
- avoid creating no-op foreign-network token buckets when limits are unlimited
- shrink relay/session maps after cleanup and remove unused peer-center global data entries
2026-05-01 23:30:51 +08:00
KKRainbow 3542e944cb fix(quic): prune stopped endpoints from pool (#2195)
* remove wss port 0 compatibility code
* fix(quic): prune stopped endpoints from pool
2026-05-01 18:51:39 +08:00
KKRainbow 852d1c9e14 feat(gui): add UPnP and public IPv6 advanced options (#2194)
Expose disable-upnp and ipv6_public_addr_auto in the shared web/GUI config editor
and bump release metadata to 2.6.3.
2026-05-01 13:45:19 +08:00
KKRainbow 4958394469 fix: protect self peer during credential refresh and allow need-p2p peers through public server (#2192)
* fix: protect self peer during credential refresh

* fix: allow need-p2p peers through public server
2026-05-01 06:59:30 +08:00
KKRainbow 41b6d65604 fix faketcp filter on windows (#2190) 2026-04-30 23:55:56 +08:00
KKRainbow aae30894dd fix: keep file logger disabled by default (#2189) 2026-04-30 21:42:30 +08:00
fanyang 81d169abfc fix: fall back when CLI manage service is unavailable (#2185) 2026-04-30 19:50:50 +08:00
Luna Yao 9c6c210e89 fix: disable SO_EXCLUSIVEADDRUSE on Windows (#2180) 2026-04-30 19:48:54 +08:00
Mg Pig d1c6dcf754 fix: prevent URL input layout flicker with container queries (#2186) 2026-04-30 19:45:01 +08:00
KKRainbow 97c8c4f55a feat: support disabling relay data forwarding (#2188)
- add a disable_relay_data runtime/config patch option
- reuse the existing avoid_relay_data feature flag when relay data forwarding is disabled
2026-04-30 19:44:40 +08:00
KKRainbow ed8df2d58f prevent EasyTier-managed IPv6 from being used as underlay connections (#2181)
When a node has public IPv6 addresses allocated by EasyTier, those addresses
are installed on the host's network interfaces. The system would then pick
them up as candidate source/destination addresses for underlay connections
(direct peer, UDP hole punch, bind addresses), causing overlay traffic to
loop back into the overlay itself.

Add a central predicate is_ip_easytier_managed_ipv6() and apply it at every
point where IPv6 addresses are selected for underlay use:
- Filter managed IPv6 from DNS-resolved connector addresses, including a
  UDP socket getsockname check to detect whether the OS would route through
  the overlay to reach a destination
- Skip managed IPv6 in bind address selection and STUN candidate filtering
- Strip managed IPv6 from GetIpListResponse RPC so peers never learn them
- Pass pre-resolved addresses to tunnel connectors to avoid re-resolution

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 12:17:22 +08:00
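The "UDP socket getsockname check" mentioned above can be illustrated with the standard library alone: connecting an unbound UDP socket makes the OS pick a source address without sending any packets, which reveals whether the route to a destination would leave via an overlay-managed address. This is a hedged sketch of the technique, not the actual EasyTier code; `preferred_source_for` is a hypothetical name.

```rust
use std::net::{IpAddr, UdpSocket};

/// Ask the OS which source address it would pick for a destination, using
/// the connect-then-getsockname trick described in the commit. No packets
/// are sent: connect() on a UDP socket only sets the default destination.
fn preferred_source_for(dest: IpAddr) -> std::io::Result<IpAddr> {
    let bind_addr = if dest.is_ipv4() { "0.0.0.0:0" } else { "[::]:0" };
    let sock = UdpSocket::bind(bind_addr)?;
    sock.connect((dest, 53))?; // arbitrary port; nothing is transmitted
    Ok(sock.local_addr()?.ip())
}

fn main() -> std::io::Result<()> {
    // Loopback always resolves to itself; for a real peer destination the
    // result could be an EasyTier-managed IPv6, which a caller would then
    // detect (e.g. with a predicate like is_ip_easytier_managed_ipv6) and skip.
    let src = preferred_source_for("127.0.0.1".parse().unwrap())?;
    println!("OS would send from {}", src);
    Ok(())
}
```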
lurenjia f66010e6f9 fix: preserve URL type in matches_scheme (#2179)
Avoid resolving Url::as_ref() to the full URL string before TunnelScheme
conversion. Add regression coverage for owned/borrowed URLs and the UDP
IPv6 hole-punch branch condition.

Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-28 23:23:41 +08:00
Luna Yao d5c4700d32 utils: replace defer, ContextGuard, DetachableTask with guarden crate (#2163) 2026-04-27 18:29:46 +08:00
KKRainbow 969ecfc4ca fix(gui): refresh service after core version upgrade (#2172) 2026-04-27 15:54:52 +08:00
KKRainbow 8f862997eb feat: support allocating public IPv6 addresses from a provider (#2162)
* feat: support allocating public IPv6 addresses from a provider

Add a provider/leaser architecture for public IPv6 address allocation
between nodes in the same network:

- A node with `--ipv6-public-addr-provider` advertises a delegable
  public IPv6 prefix (auto-detected from kernel routes or manually
  configured via `--ipv6-public-addr-prefix`).
- Other nodes with `--ipv6-public-addr-auto` request a /128 lease from
  the selected provider via a new RPC service (PublicIpv6AddrRpc).
- Leases have a 30s TTL, renewed every 10s by the client routine.
- The provider allocates addresses deterministically from its prefix
  using instance-UUID-based hashing to prefer stable assignments.
- Routes to peer leases are installed on the TUN device, and each
  client's own /128 is assigned as its IPv6 address.

Also includes netlink IPv6 route table inspection, integration tests,
and event-driven route/address reconciliation.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-26 21:37:34 +08:00
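The "deterministic allocation from the prefix using instance-UUID-based hashing" described above can be sketched as follows. This is a minimal illustration under assumed behavior: the real provider must also handle hash collisions, and would use a hash that is stable across builds (std's `DefaultHasher` is only guaranteed within one program).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::Ipv6Addr;

/// Deterministically derive a /128 lease from a delegated prefix and an
/// instance UUID, so the same instance tends to get the same address on
/// every lease request. Hypothetical sketch, not the actual provider code.
fn lease_for_instance(prefix: Ipv6Addr, prefix_len: u32, instance_uuid: &str) -> Ipv6Addr {
    let mut hasher = DefaultHasher::new();
    instance_uuid.hash(&mut hasher);
    let suffix = hasher.finish() as u128;

    // Keep the network bits from the prefix, fill the host bits from the hash.
    let host_bits = 128 - prefix_len;
    let host_mask = if host_bits >= 128 { u128::MAX } else { (1u128 << host_bits) - 1 };
    let net = u128::from(prefix) & !host_mask;
    Ipv6Addr::from(net | (suffix & host_mask))
}

fn main() {
    let prefix: Ipv6Addr = "2001:db8:1:2::".parse().unwrap();
    let a = lease_for_instance(prefix, 64, "instance-aaaa");
    let b = lease_for_instance(prefix, 64, "instance-bbbb");
    println!("a = {}, b = {}", a, b);
}
```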
KKRainbow b20075e3dc fix: allow self virtual IP loopback (#2161) 2026-04-25 21:26:16 +08:00
Luna Yao eb3b5aae51 utils: add DetachableTask & ContextGuard (#2138) 2026-04-25 18:24:36 +08:00
datayurei af6b6ab6f1 fix: avoid panic when validating mapped listeners (#2153) 2026-04-25 17:45:57 +08:00
Luna Yao 5a1668c753 refactor: remove ScopedTask (#2125)
* replace ScopedTask with AbortOnDropHandle
2026-04-25 15:20:25 +08:00
Luna Yao 820d9095d3 replace AsyncRuntime with simpler CancellableTask (#2136) 2026-04-25 10:29:53 +08:00
KKRainbow 2fb41ccbba bump version to 2.6.2 (#2158) 2026-04-25 10:22:24 +08:00
Luna Yao b4666be696 fix: disable SO_REUSEADDR & enable SO_EXCLUSIVEADDRUSE on Windows (#2128) 2026-04-25 00:37:34 +08:00
KKRainbow 4688ad74ad Honor credential reusable flag (#2157)
- propagate reusable through credential storage, CLI, RPC, routing, and tests
- enforce reusable=false owner election with current topology
- preserve proof-backed groups when refreshing credential ACL groups
2026-04-25 00:22:40 +08:00
Luna Yao f7ea78d4f0 lower max_udp_payload_size to 1200 (#2156) 2026-04-24 21:20:37 +08:00
james.zhang ac112440c3 fix(UrlInput): update parseUrl and buildUrlValue to handle null ports correctly (#2146) 2026-04-23 13:45:09 +08:00
KKRainbow 958b246f05 improve webclient (#2151) 2026-04-23 13:44:18 +08:00
james.zhang 263f4c3bc9 fix(peer_route): exclude current peer ID from proxy CIDR lists (#2149) 2026-04-22 20:30:38 +08:00
Luna Yao ffddc517e1 fix: listener parsing (#2143)
Fixes a CLI listener parsing regression where url crate special-casing for ws/wss could misinterpret inputs like ws:11011, and adds coverage to prevent future regressions.

Changes:

Refactors listener parsing to avoid url::Url parsing for proto:port forms and to support additional shorthand inputs (port-only / IP-only / SocketAddr).
Centralizes “expand to all IpScheme variants” logic in a helper (gen_listeners) while preserving the “port=0 is dynamic” behavior.
Adds unit tests covering valid/invalid listener inputs and expansion behavior.
2026-04-21 23:45:22 +08:00
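The shorthand forms this commit supports (`proto:port`, port-only, IP-only, full SocketAddr) can be parsed without `url::Url` roughly like this. A hedged sketch under stated assumptions: the function name and the `tcp` default are illustrative, and EasyTier's actual `gen_listeners` helper additionally expands across all IpScheme variants and treats port 0 as dynamic.

```rust
use std::net::{IpAddr, SocketAddr};

/// Expand shorthand listener inputs without url::Url, avoiding the
/// url-crate special-casing that misreads `ws:11011`. Hypothetical sketch.
fn parse_listener_shorthand(input: &str, default_port: u16) -> Option<(String, SocketAddr)> {
    // `ws:11011`-style proto:port form
    if let Some((proto, port)) = input.split_once(':') {
        if let Ok(port) = port.parse::<u16>() {
            if !proto.is_empty() && proto.chars().all(|c| c.is_ascii_alphabetic()) {
                return Some((proto.to_string(), SocketAddr::new("0.0.0.0".parse().unwrap(), port)));
            }
        }
    }
    if let Ok(port) = input.parse::<u16>() {
        // bare port
        return Some(("tcp".into(), SocketAddr::new("0.0.0.0".parse().unwrap(), port)));
    }
    if let Ok(ip) = input.parse::<IpAddr>() {
        // bare IP, default port assumed
        return Some(("tcp".into(), SocketAddr::new(ip, default_port)));
    }
    if let Ok(sa) = input.parse::<SocketAddr>() {
        // full ip:port (including [v6]:port)
        return Some(("tcp".into(), sa));
    }
    None
}

fn main() {
    for s in ["ws:11011", "11010", "10.0.0.1", "10.0.0.1:11010"] {
        println!("{:?}", parse_listener_shorthand(s, 11010));
    }
}
```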
Debugger Chen 5cd0a3e846 feat: add upnp support (#1449) 2026-04-21 17:19:04 +08:00
Luna Yao f4319c4d4f ci(test): always check everything (#2142)
* ci(test): always check everything
* move Cargo.lock check to the last step
2026-04-21 10:08:27 +08:00
Luna Yao 0091a535d5 use mimalloc for FreeBSD (#2144) 2026-04-21 08:40:21 +08:00
Luna Yao d7a5fb8d66 remove --no-deps from lock check (#2134) 2026-04-20 00:46:26 +08:00
KKRainbow f63054e937 fix: resolve Android APK version fallback to 1.0 on CI (#2131) 2026-04-19 19:06:37 +08:00
KKRainbow efc043abbb bump version to v2.6.1 (#2129) 2026-04-19 16:49:45 +08:00
Mg Pig 40c6de8e31 fix(core): restrict implicit config merge to explicit config files (#2127) 2026-04-19 10:39:04 +08:00
KKRainbow 2db655bd6d fix: refresh ACL groups and enable TCP_NODELAY for WebSocket (#2118)
* fix: refresh ACL groups and enable TCP_NODELAY for WebSocket
* add remove_peers to remove list of peer id in ospf route
* fix secure tunnel for unreliable udp tunnel
* fix(web-client): timeout secure tunnel handshake
* fix(web-server): tolerate delayed secure hello
* fix quic endpoint panic
* fix replay check
2026-04-19 10:37:39 +08:00
Mg Pig c49c56612b feat(ui): add ACL graphical configuration interface (#1815) 2026-04-18 20:23:53 +08:00
Mg Pig 6ca074abae feat(nix): add rustfmt and clippy to the Rust toolchain extensions (#2126) 2026-04-18 20:23:26 +08:00
Luna Yao 84430055ab remove hashbrown (#2108) 2026-04-18 11:06:34 +08:00
Mg Pig 432fcb3fc3 build(nix): add mold to the flake dev shell (#2122) 2026-04-18 09:06:45 +08:00
Luna Yao fae32361f2 chore: update Rust to 1.95; replace cfg_if with cfg_select (#2121) 2026-04-17 23:41:31 +08:00
Luna Yao bcb2e512d4 utils: move code to a dedicated mod; add AsyncRuntime (#2072) 2026-04-16 23:32:07 +08:00
Luna Yao 82ca04a8a7 proto(utils): add MessageModel & RepeatedMessageModel (#2068)
* add FromIterator, Extend, AsRef, AsMut, TryFrom<[Message]>
2026-04-15 19:40:09 +08:00
Luna Yao 2ef3b72224 proto: add some conversion for Url (#2067) 2026-04-15 19:39:24 +08:00
Luna Yao 6d319cba1d tests(relay_peer_e2e_encryption): wait for the key of inst3 before ping test (#2069) 2026-04-15 19:39:00 +08:00
Luna Yao 3687519ef3 turn off ansi for file log (#2110)
Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-15 19:38:27 +08:00
Luna Yao 3a4ac59467 log: change default log level of tests to WARNING (#2113) 2026-04-14 18:10:38 +08:00
Luna Yao 1cfc135df3 ci: remove -D warnings from test (#2109)
Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-14 12:35:05 +08:00
KKRainbow 5b35c51da9 fix packet split on udp tunnel and avoid tcp proxy access rpc portal (#2107)
* distinct control / data when forward packets
* fix rpc split for udp tunnel
* feat(easytier-web): pass public ip in validate token webhook
* protect rpc port from subnet proxy
2026-04-13 11:03:09 +08:00
Luna Yao ec7ddd3bad fix: filter overlapped proxy cidrs in ProxyCidrsMonitor (#2079)
* feat(route): add async methods to list proxy CIDRs for IPv4 and IPv6
* refactor(ProxyCidrsMonitor): get proxy cidrs from list_proxy_cidrs
2026-04-12 22:18:54 +08:00
Luna Yao 6f3e708679 tunnel(bind): gather all bind logic to a single function (#2070)
* extract a Bindable trait for binding TcpSocket, TcpListener, and UdpSocket
2026-04-12 22:16:58 +08:00
Luna Yao 869e1b89f5 fix: remove log (file) when level is explicitly set to OFF (#2083)
* fix level filter for OFF
* remove unwrap of file appender creation
2026-04-12 22:16:30 +08:00
Luna Yao 9e0a3b6936 ci: rewrite build workflows (#2089) 2026-04-12 22:14:41 +08:00
Luna Yao c6cb1a77d0 chore: clippy fix some code on Windows (#2106) 2026-04-12 22:13:58 +08:00
deddey 83010861ba Optimize network interface configuration for macOS and FreeBSD to avoid hard-coded IP addresses (#1853)
Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-12 21:00:59 +08:00
Luna Yao daa53e5168 log: auto-init log for tests (#2073) 2026-04-12 13:04:21 +08:00
fanyang 51befdbf87 fix(faketcp): harden packet parsing against malformed frames (#2103)
Discard malformed fake TCP frames instead of panicking so OpenWrt
nodes can survive unexpected or truncated packets.

Also emit the correct IPv6 ethertype and cover the parser with
round-trip and truncation regression tests.
2026-04-12 13:02:23 +08:00
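The "discard instead of panic" idea above comes down to replacing slice indexing with fallible accessors. The sketch below uses a made-up two-port layout purely for illustration; EasyTier's fake-TCP frames are more involved.

```rust
/// Parse source/destination ports out of a TCP-like frame with explicit
/// bounds checks, returning None on truncated input instead of panicking.
/// Hypothetical layout, not the actual fake-TCP frame format.
fn parse_ports(frame: &[u8]) -> Option<(u16, u16)> {
    // Indexing like `frame[0..2]` would panic on short frames; `get`
    // turns the same condition into a recoverable None.
    let src = u16::from_be_bytes(frame.get(0..2)?.try_into().ok()?);
    let dst = u16::from_be_bytes(frame.get(2..4)?.try_into().ok()?);
    if frame.len() < 20 {
        return None; // truncated header: drop the frame
    }
    Some((src, dst))
}

fn main() {
    let mut frame = [0u8; 20];
    frame[..2].copy_from_slice(&11010u16.to_be_bytes());
    frame[2..4].copy_from_slice(&443u16.to_be_bytes());
    println!("{:?}", parse_ports(&frame)); // Some((11010, 443))
    println!("{:?}", parse_ports(&frame[..3])); // None: truncated
}
```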
Luna Yao 8311b11713 refactor: remove NoGroAsyncUdpSocket (#1867) 2026-04-10 23:22:08 +08:00
Luna Yao 19c80c7b9c cli: do not add offset when port = 0 (#2085) 2026-04-10 23:21:15 +08:00
Luna Yao a879dd1b14 chore: update Rust to 2024 edition (#2066) 2026-04-10 00:22:12 +08:00
Luna Yao a8feb9ac2b chore: use Debug to print errors (#2086) 2026-04-09 09:45:55 +08:00
Luna Yao c5fbd29c0e ci: fix skip condition for draft pull requests in CI workflows (#2088)
* ci: run xxx-result only when pre_job is run successfully
* fix get-result steps
2026-04-09 09:45:04 +08:00
Luna Yao 26b1794723 ci: accelerate pipeline (#2078)
* enable concurrency
* do not run build on draft PRs
* enable fail-fast for build workflows
2026-04-08 08:43:03 +08:00
Luna Yao 371b4b70a3 proto(utils): add TransientDigest trait (#2071) 2026-04-08 00:06:48 +08:00
Luna Yao b2cc38ee63 chore(clippy): disallow some methods from itertools (#2075) 2026-04-07 16:27:33 +08:00
Luna Yao 79b562cdc9 drop peer_mgr in time (#2064) 2026-04-06 11:31:05 +08:00
fanyang e3f089251c fix(ospf): mitigate route sync storm under connection flapping (#2063)
Addresses issue #2016 where nodes behind unstable networks
(e.g. campus firewalls) cause excessive traffic that can freeze
the remote node.

Two changes in peer_ospf_route.rs:

- Make do_sync_route_info only trigger reverse sync_now when
  incoming data actually changed the route table or foreign
  network state.  The previous unconditional sync_now created
  an A->B->A->B ping-pong cycle on every RPC exchange.

- Add exponential backoff (50ms..5s) to session_task retry loop.
  The previous fixed 50ms retry produced ~20 RPCs/s during
  sustained network instability.
2026-04-06 11:26:20 +08:00
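The 50ms..5s exponential backoff described above can be sketched as a small helper. The function name is hypothetical; only the constants come from the commit message.

```rust
use std::time::Duration;

/// Exponential backoff for a retry loop: start at 50ms, double on each
/// consecutive failure, cap at 5s. Sketch of the range described in the
/// commit, not the actual peer_ospf_route.rs session_task code.
fn backoff_delay(consecutive_failures: u32) -> Duration {
    const BASE_MS: u64 = 50;
    const MAX_MS: u64 = 5_000;
    let ms = BASE_MS
        .checked_shl(consecutive_failures) // saturate instead of overflowing
        .unwrap_or(MAX_MS)
        .min(MAX_MS);
    Duration::from_millis(ms)
}

fn main() {
    // 0 failures -> 50ms, growing toward the 5s cap.
    for n in 0..10 {
        println!("{} failures -> {:?}", n, backoff_delay(n));
    }
}
```

Compared with the previous fixed 50ms retry (~20 RPCs/s under sustained flapping), the cap bounds steady-state retry traffic to one RPC every 5 seconds.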
fanyang cf6dcbc054 Fix IPv6 TCP tunnel display formatting (#1980)
Normalize composite tunnel display values before rendering peer and
debug output so IPv6 tunnel types no longer append `6` to the port.

- Preserve prefixes like `txt-` while converting tunnel schemes to
  their IPv6 form.
- Recover malformed values such as `txt-tcp://...:110106` into
  `txt-tcp6://...:11010`.
- Reuse the normalized remote address display in CLI debug output.
2026-04-05 22:12:55 +08:00
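The repair of malformed values like `txt-tcp://...:110106` into `txt-tcp6://...:11010` amounts to moving a stray trailing `6` from the port back onto the scheme. A hedged sketch of that normalization, assuming a made-up function name; the real display code handles more scheme prefixes.

```rust
/// Normalize a malformed composite tunnel display value where the IPv6
/// marker `6` was appended to the port instead of the scheme, e.g.
/// `txt-tcp://host:110106` -> `txt-tcp6://host:11010`. Hypothetical sketch.
fn normalize_tunnel_display(s: &str) -> String {
    let Some((scheme, rest)) = s.split_once("://") else {
        return s.to_string();
    };
    // Schemes already in their IPv6 form are left alone.
    if scheme.ends_with('6') {
        return s.to_string();
    }
    let Some((host, port)) = rest.rsplit_once(':') else {
        return s.to_string();
    };
    // A stray trailing `6` makes the port one digit too long (> 5 digits).
    if port.len() > 5 && port.ends_with('6') && port.chars().all(|c| c.is_ascii_digit()) {
        let fixed_port = &port[..port.len() - 1];
        if fixed_port.parse::<u16>().is_ok() {
            return format!("{}6://{}:{}", scheme, host, fixed_port);
        }
    }
    s.to_string()
}

fn main() {
    // txt-tcp6://[fd00::1]:11010
    println!("{}", normalize_tunnel_display("txt-tcp://[fd00::1]:110106"));
}
```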
fanyang 2cf2b0fcac feat(cli): implement connector add/remove, drop peer stubs (#2058)
Implement the previously stubbed connector add/remove CLI commands
using PatchConfig RPC with InstanceConfigPatch.connectors, and
remove the peer add/remove stubs that had incorrect semantics.
2026-04-05 13:56:17 +08:00
dependabot[bot] aa0cca3bb6 build(deps): bump quinn-proto in /easytier-contrib/easytier-ohrs (#2059)
Bumps [quinn-proto](https://github.com/quinn-rs/quinn) from 0.11.13 to 0.11.14.
- [Release notes](https://github.com/quinn-rs/quinn/releases)
- [Commits](https://github.com/quinn-rs/quinn/compare/quinn-proto-0.11.13...quinn-proto-0.11.14)

---
updated-dependencies:
- dependency-name: quinn-proto
  dependency-version: 0.11.14
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-05 13:16:33 +08:00
KKRainbow fb59f01058 fix: reconcile webhook-managed configs and make disable_p2p more intelligent (#2057)
* reconcile infra configs on webhook validate
* make disable_p2p more intelligent
* fix stats
2026-04-04 23:41:57 +08:00
Luna Yao e91a0da70a refactor: listener/connector protocol abstraction (#2026)
* fix listener protocol detection
* replace IpProtocol with IpNextHeaderProtocol
* use an enum to gather all listener schemes
* rename ListenerScheme to TunnelScheme; replace IpNextHeaderProtocols with socket2::Protocol
* move TunnelScheme to tunnel
* add IpScheme, simplify connector creation
* format; fix some typos; remove check_scheme_...;
* remove PROTO_PORT_OFFSET
* rename WSTunnel.. -> WsTunnel.., DNSTunnel.. -> DnsTunnel..
2026-04-04 10:55:58 +08:00
Luna Yao 9cc617ae4c ci: build rpm package (#2044)
* add rpm to ci
* rename build_filter to build-filter
* use prepare-pnpm action
2026-04-04 10:32:08 +08:00
韩嘉乐 e4b0f1f1bb Rename libeasytier_ohrs.so to libeasytier_release.so when building the release package (#2056)
Rename shared library file for release.
2026-04-04 10:29:37 +08:00
Luna Yao 443c3ca0b3 fix: append address of reverse proxy to remote_addr (#2034)
* append address of reverse proxy to remote_addr
* validate proxy address in test
2026-03-30 16:48:23 +08:00
Luna Yao 55a0e5952c chore: use cfg_aliases for mobile (#2033) 2026-03-30 16:38:39 +08:00
KKRainbow 1dff388717 bump version to v2.6.0 (#2039) 2026-03-30 15:50:07 +08:00
Luna Yao 61c741f887 add BoxExt trait (#2036) 2026-03-30 13:25:53 +08:00
ParkGarden 01dd9a05c3 fix: refactor the logic of the Magisk module's easytier_core.sh, action.sh, and uninstall.sh scripts, improving argument parsing and process management and adjusting wording (#1964) 2026-03-30 13:18:42 +08:00
KKRainbow 8c19a2293c fix(windows): avoid pnet interface enumeration panic (#2031) 2026-03-29 23:16:44 +08:00
KKRainbow a1bec48dc9 fix android vpn permission grant (#2023)
* fix android vpn permission grant
* fix url input behaviour
2026-03-29 23:16:32 +08:00
KKRainbow 7e289865b2 fix(faketcp): avoid pnet interface lookup on windows (#2029) 2026-03-29 19:26:29 +08:00
fanyang 742c7edd57 fix: use default connection loss rate for peer stats (#2030) 2026-03-29 19:25:25 +08:00
Luna Yao b71a2889ef suppress clippy warnings when no feature flags are enabled (#2028) 2026-03-29 11:02:23 +08:00
KKRainbow bcd75d6ce3 Add instance recv limiter in peer conn (#2027) 2026-03-29 10:28:02 +08:00
Luna Yao d4c1b0e867 fix: read X-Forwarded-For from HTTP header of WS/WSS (#2019) 2026-03-28 22:20:46 +08:00
KKRainbow b037ea9c3f Relax private mode foreign network secret checks (#2022) 2026-03-28 22:19:23 +08:00
Luna Yao b5f475cd4c filter overlapped proxy cidr (#2024) 2026-03-28 09:40:05 +08:00
Luna Yao eaa4d2c7b8 test: use taiki-e/install-action for cargo-hack (#2020) 2026-03-28 00:07:59 +08:00
Luna Yao e160d9b048 ci: remove aes-gcm from check (#1925) 2026-03-27 22:48:22 +08:00
KKRainbow 0aeea39fbe refactor(gui): collapse public server and standalone into initial peer list (#2017)
The GUI exposed three networking modes: public server, manual, and standalone. In practice EasyTier does not have a server/client role distinction here. Those options only mapped to different peer bootstrap shapes, which made the product model misleading and pushed users toward a non-existent "public server" concept.

This change rewrites the shared configuration UX around initial nodes. Users now add or remove one or more initial node URLs directly, and the UI explains that EasyTier networking works like plugging in a cable: once a node connects to one or more existing nodes, it can join the mesh. Initial nodes may be self-hosted or shared by others.

To preserve compatibility, the frontend keeps the legacy fields and adds normalization helpers in the shared NetworkConfig layer. Old configs are read as initial_node_urls, while saves, runs, validation, config generation, and persisted GUI config sync still denormalize back into the current backend shape: zero initial nodes -> Standalone, one -> PublicServer, many -> Manual. This avoids any proto or backend API change while making old saved configs and imported TOML files load cleanly in the new UI.

Code changes:

- add initial_node_urls plus normalize/denormalize helpers in the shared frontend NetworkConfig model

- remove the mode switch and public-server/manual specific inputs from the shared Config component and replace them with a single initial-node list plus explanatory copy

- update Chinese and English locale strings for the new terminology

- normalize configs received from GUI/web backends and denormalize them before outbound API calls

- normalize GUI save-config events before storing them in localStorage so legacy payloads remain editable under the new model
2026-03-27 11:37:09 +08:00
KKRainbow e000636d83 feat(stats): add by-instance traffic metrics (#2011) 2026-03-26 13:46:33 +08:00
Luna Yao 8e4dc508bb test: improve test_txt_public_stun_server with timeout and retry mechanism (#2014) 2026-03-26 09:32:07 +08:00
Luna Yao e2684a93de refactor: use strum on EncryptionAlgorithm, use Xor as default when AesGcm not available (#1923) 2026-03-25 18:42:34 +08:00
KKRainbow 1d89ddbb16 Add lazy P2P demand tracking and need_p2p override (#2003)
- add lazy_p2p so nodes only start background P2P for peers that actually have recent business traffic
- add need_p2p so specific peers can still request eager background P2P even when other nodes enable lazy mode
- cover the new behavior with focused connector/peer-manager tests plus three-node integration tests that verify relay-to-direct route transition
2026-03-23 09:38:57 +08:00
KKRainbow 2bfdd44759 multi_fix: harden peer/session handling, tighten foreign-network trust, and improve web client metadata (#1999)
* machine-id should be scoped under the same user-id
* feat: report device os metadata to console
* fix root key sync causing packet loss
* fix invalid tun packet handling
* fix faketcp causing latency jitter
* fix some packets not being decrypted
* fix peer info patch; improve performance of updating self info
* fix foreign credential identity mismatch handling
2026-03-21 21:06:07 +08:00
Luna Yao 77966916c4 cargo: add used features for windows-sys (#1924) 2026-03-17 14:10:50 +08:00
TsXor 26b7455c1e ignores eol difference for auto-generated files (#1997) 2026-03-16 23:40:38 +08:00
KKRainbow 8922e7b991 fix: foreign credential handling and trusted key visibility (#1993)
* fix foreign credential handling
* allow list foreign network trusted keys
* fix(gui): delete removed config-server networks
* fix(web): reset managed instances on first sync
2026-03-16 22:19:31 +08:00
KKRainbow e6ac31fb20 feat(web): add webhook-managed machine access and multi-instance CLI support (#1989)
* feat: add webhook-managed access and multi-instance CLI support
* fix(foreign): verify credential of foreign credential peer
2026-03-15 12:08:50 +08:00
KKRainbow c8f3c5d6aa feat(credential): support custom credential ID generation (#1984)
Introduces support for custom credential ID generation, allowing users to specify their own credential IDs instead of relying solely on auto-generated UUIDs.
2026-03-12 00:48:24 +08:00
KKRainbow 330659e449 feat(web): full-power RPC access + typed JSON proxy endpoint (#1983)
- extend web controller bindings to cover full RPC service set
- update rpc_service API wiring and session/controller integration
- generate trait-level json_call_method in rpc codegen
- route restful proxy-rpc requests via scoped typed clients
- add json-call regression tests and required Sync bound fixes
2026-03-11 20:32:37 +08:00
Maxwell 80043df292 script: introduce EasyTier powershell installer (#1975) 2026-03-11 11:57:03 +08:00
KKRainbow ecd1ea6f8c feat(web): implement secure core-web tunnel with Noise protocol (#1976)
Implement end-to-end encryption for core-web connections using the
Noise protocol framework with the following changes:

Client-side (easytier/src/web_client/):
- Add security.rs module with Noise handshake implementation
- Add upgrade_client_tunnel() for client-side handshake
- Add Noise frame encryption/decryption via TunnelFilter
- Integrate GetFeature RPC for capability negotiation
- Support secure_mode option to enforce encrypted connections
- Handle graceful fallback for backward compatibility

Server-side (easytier-web/):
- Accept Noise handshake in client_manager
- Expose encryption support via GetFeature RPC

The implementation uses Noise_NN_25519_ChaChaPoly_SHA256 pattern for
encryption without authentication. Provides backward compatibility
with automatic fallback to plaintext connections.
2026-03-10 08:48:08 +08:00
KKRainbow 694b8d349d feat(credential): enforce signed credential distribution across mixed admin/shared topology (#1972) 2026-03-10 08:37:33 +08:00
KKRainbow ef44027f57 feat(credential): improve credential peer routing and visibility (#1971)
- improve credential peer filtering and related route lookup behavior
- expose credential peer information through CLI and API definitions
- add and refine tests for credential routing and peer interactions
2026-03-08 14:06:33 +08:00
KKRainbow f3db348b01 fix: resolve slow exit and reduce test timeouts (#1970)
- Explicitly shutdown tokio runtime on launcher cleanup to fix slow exit
- Add timeout to tunnel connector in tests to prevent hanging
- Reduce test wait durations from 5s to 100ms for faster test execution
- Bump num-bigint-dig from 0.8.4 to 0.8.6
2026-03-08 12:27:42 +08:00
KKRainbow c4eacf4591 feat(credential): implement credential peer auth and trust propagation (#1968)
- add credential manager and RPC/CLI for generate/list/revoke
- support credential-based Noise authentication and revocation handling
- propagate trusted credential metadata through OSPF route sync
- classify direct peers by auth level in session maintenance
- normalize sender credential flag for legacy non-secure compatibility
- add unit/integration tests for credential join, relay and revocation
2026-03-07 22:58:15 +08:00
KKRainbow 59d4475743 feat: relay peer end-to-end encryption via Noise IK handshake (#1960)
Enable encryption for non-direct nodes requiring relay forwarding.
When secure_mode is enabled, peers perform Noise IK handshake to
establish an encrypted PeerSession. Relay packets are encrypted at
the sender and decrypted at the receiver. Intermediate forwarding
nodes cannot read plaintext data.

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: KKRainbow <5665404+KKRainbow@users.noreply.github.com>
2026-03-07 14:47:22 +08:00
KKRainbow 22b4c4be2c fix: guard macos-ne feature with target_os = "macos" in cfg expressions (#1962)
All 13 occurrences of `any(target_os = "ios", feature = "macos-ne")` are
replaced with `any(target_os = "ios", all(target_os = "macos", feature = "macos-ne"))`.

Previously, enabling `macos-ne` on non-macOS platforms (e.g. `--all-features`
on Linux) would incorrectly compile macOS/mobile-specific code paths, causing
build failures or wrong runtime behavior.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-05 00:06:21 +08:00
Luna Yao 5f31583a84 refactor: use tracing for log output (#1856)
* change all println to tracing
2026-03-04 09:52:23 +08:00
Mg Pig 1d25240d8c refactor(ui): extract URL input components and enhance UI responsiveness (#1819) 2026-03-04 09:49:15 +08:00
fanyang eeb507d6ea fix: register PeerCenterRpc in management API server so CLI peer-center works (#1929)
PeerCenterRpc was only registered in the per-instance peer-to-peer RPC
manager (domain = network_name), but not in the management API server
(domain = ""). The CLI connects to the management API with an empty
domain, causing "Invalid service name: PeerCenterRpc" errors.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-04 09:37:37 +08:00
fanyang 9e9916efa5 fix(connector): skip self-connection when peer shares local interface IPs (#1941)
When two EasyTier instances run on the same machine and share the same
network, the direct connector would expand a remote peer's 0.0.0.0
listener into local interface IPs and then attempt to connect to
itself, causing an infinite loop of failed connection attempts.

The existing `peer_id != my_peer_id` guard does not cover this case
because the two instances have different peer IDs despite sharing the
same physical network interfaces.

Fix by adding a self-connection check in `spawn_direct_connect_task`:
before spawning a connect task, compare the candidate (scheme, IP,
port) against the local running listeners. If a local listener matches
on all three dimensions — accounting for 0.0.0.0/:: wildcards by
checking membership in the local interface IP sets — the candidate is
silently dropped with a DEBUG log message.

The fix covers all four code paths:
- IPv4 unspecified (0.0.0.0) expansion loop
- IPv4 specific-address branch
- IPv6 unspecified (::) expansion loop
- IPv6 specific-address branch

The TESTING flag logic is untouched so existing unit tests are
unaffected.

* refactor(connector): replace is_self_connect closure with GlobalCtx::should_deny_proxy (#1954)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2026-03-04 09:36:35 +08:00
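The three-dimension match described above (scheme, IP, port, with 0.0.0.0/:: wildcards resolved against the local interface IP set) can be sketched as a pure predicate. This is an illustrative stand-in, not the actual `spawn_direct_connect_task` check; the name `is_self_connect` mirrors the closure the follow-up refactor replaced.

```rust
use std::collections::HashSet;
use std::net::IpAddr;

/// Decide whether a connect candidate would hit one of our own listeners.
/// A candidate matches when scheme and port are equal and the listener IP
/// either equals the candidate IP or is a wildcard (0.0.0.0 / ::) that
/// covers one of this host's interface IPs. Hypothetical sketch.
fn is_self_connect(
    candidate: (&str, IpAddr, u16),
    listeners: &[(&str, IpAddr, u16)],
    local_ips: &HashSet<IpAddr>,
) -> bool {
    let (scheme, ip, port) = candidate;
    listeners.iter().any(|&(l_scheme, l_ip, l_port)| {
        l_scheme == scheme
            && l_port == port
            && (l_ip == ip || (l_ip.is_unspecified() && local_ips.contains(&ip)))
    })
}

fn main() {
    let local: HashSet<IpAddr> = ["192.168.1.5".parse().unwrap()].into_iter().collect();
    let listeners = [("tcp", "0.0.0.0".parse::<IpAddr>().unwrap(), 11010u16)];
    // Wildcard listener + local interface IP => would connect to ourselves.
    let hit = is_self_connect(("tcp", "192.168.1.5".parse().unwrap(), 11010), &listeners, &local);
    println!("self-connect: {}", hit);
}
```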
hello db6b9e3684 feat: core config server use last path segment as user name (#1931) 2026-03-03 18:24:28 +08:00
Mg Pig ff24332e23 feat(web): add OIDC SSO login support (#1943) 2026-03-03 18:23:31 +08:00
fanyang d4ff0b1767 build(deps): upgrade vite to 5.4.21 in frontend and gui packages (#1950) 2026-03-01 13:47:02 +08:00
Mg Pig 5716f7f16b fix(web): allow configuring listen address for API and web servers (#1919) (#1948) 2026-03-01 01:02:31 +08:00
fanyang e5bd8f9e24 build(deps): upgrade minimatch to 10.2.4 (#1949) 2026-02-28 22:40:47 +08:00
sky96111 b56bcfb4b0 fix: increase websocket peer connection timeout to 20 seconds (#1939)
- Add ws/wss protocols to long timeout list
2026-02-28 18:26:19 +08:00
fanyang fb95b4827c build(deps): bump axios from 1.11.0 to 1.13.6 in frontend packages (#1947)
Addresses security vulnerabilities in axios <1.13.5. Updates the
declared specifier to ^1.13.5 in all three frontend package.json
files and regenerates both npm and pnpm lock files (resolved: 1.13.6).

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 11:17:18 +08:00
fanyang a8f7226195 fix(foreign_network): set avoid_relay_data when relay_data is false (#1935) 2026-02-25 09:30:24 +08:00
dependabot[bot] e6ee485352 build(deps-dev): bump vite from 5.4.10 to 5.4.21 in /easytier-web/frontend-lib (#1922)
* build(deps-dev): bump vite in /easytier-web/frontend-lib

Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.4.10 to 5.4.21.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.21/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.21/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.21
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 22:47:29 +08:00
hello 73291a3a1c feat: Update Cargo.toml to add support for TLS 1.2 when using wss (#1917) 2026-02-20 18:01:21 +08:00
fanyang f737708f45 fix: avoid panic on malformed short tunnel packets (#1904) 2026-02-18 00:04:30 +08:00
fanyang aa24d09aa2 fix: replace stale magic DNS records on IP change (#1906)
Magic DNS updates are full snapshots, so appending routes keeps old IPs and returns duplicate A records. Replace each client's previous routes on update and add a regression test to ensure hostname resolution keeps only the latest IP.
2026-02-16 13:20:11 +08:00
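The bug class described above is "append on a full-snapshot update". A minimal sketch, with illustrative types and a hypothetical hostname (not EasyTier's actual DNS structures):

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

/// Hostname -> A records. Each update from a client is a full snapshot,
/// so it must *replace* the previous records rather than extend them.
struct MagicDns {
    records: HashMap<String, Vec<Ipv4Addr>>,
}

impl MagicDns {
    fn apply_snapshot(&mut self, host: &str, ips: Vec<Ipv4Addr>) {
        // `insert` drops the old Vec, so records for IPs the client
        // no longer owns cannot linger and produce duplicate A records.
        self.records.insert(host.to_string(), ips);
    }

    fn resolve(&self, host: &str) -> &[Ipv4Addr] {
        self.records.get(host).map(|v| v.as_slice()).unwrap_or(&[])
    }
}
```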
fanyang fe4e77979d fix: avoid panic for quic peer urls using port 0 (#1905)
Prevent crashes when users input quic://...:0 by rejecting port 0 explicitly and propagating connect setup errors. Add a regression test to ensure invalid QUIC targets fail gracefully.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-14 17:10:29 +08:00
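Rejecting port 0 up front, as the commit describes, can be sketched as a simple pre-connect guard (the function and its string-based port extraction are illustrative; the real connector works on parsed URLs):

```rust
/// Validate a peer URL's port before dialing.
/// Port 0 is only meaningful for binding, never for connecting,
/// so reject it explicitly instead of panicking deeper in the stack.
fn validate_peer_port(url: &str) -> Result<u16, String> {
    let port: u16 = url
        .rsplit(':')
        .next()
        .and_then(|p| p.parse().ok())
        .ok_or_else(|| format!("no port in {}", url))?;
    if port == 0 {
        return Err(format!("invalid port 0 in {}", url));
    }
    Ok(port)
}
```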
Chenx Dust 7a26640c26 feat: support macOS Network Extension (#1902)
* feat: support macOS Network Extension
* fix: disable macOS NE feature in cargo hack check
2026-02-14 14:54:36 +08:00
Mg Pig 5a777959e3 ui: clarify encryption checkbox description in locales (#1841) 2026-02-13 16:04:26 +08:00
Mg Pig 3512a80597 feat(web): add --disable-registration flag to disable user registration (#1881) 2026-02-13 16:03:11 +08:00
Zkitefly 011770a601 Update http_connector.rs (#1900) 2026-02-13 16:02:32 +08:00
Chenx Dust 6475724d2e fix: toggle_window_visibility with focus check (#1888)
* refactor: better logics for toggle_window_visibility
2026-02-11 16:50:36 +08:00
Mg Pig 85e9029577 feat: add Nix CI workflow and update flake.lock dependencies (#1872) 2026-02-10 18:11:35 +08:00
Luna Yao b6e292cce3 ci: use shared key for build workflow (#1868) 2026-02-04 09:48:55 +08:00
KKRainbow c58140fb47 update rust to 1.93 (#1865) 2026-02-04 09:48:43 +08:00
Luna Yao aebb7facfa drop permit reserved by poll_reserve (#1858) 2026-02-03 11:14:11 +08:00
Chenx Dust 1e2124cb99 fix: force set tun fd when received (#1860) 2026-02-03 11:13:31 +08:00
Chenx Dust e1cbd07d1f feat: separate zstd and faketcp into features (#1861)
* feat: separate faketcp into a feature
* fix: no need to initialize out_len
* feat: separate zstd into a feature
* clippy: remove unnecessary cast, because for unix size_t always equals usize
2026-02-03 11:12:33 +08:00
韩嘉乐 7750e81168 CI(ohos): add a condition to check for the publish code (#1863)
Added a condition to check for the presence of a release code when running the publish step
2026-02-03 11:11:45 +08:00
KKRainbow bf3edbd28f remove src modified flag from pm hdr (#1857) 2026-02-02 16:47:26 +08:00
Luna Yao cd2cf56358 refactor: handle quic proxy internally instead of use external udp port (#1743)
* deprecate quic_listen_port, add disable_relay_quic and enable_relay_foreign_network_quic
* add set_src_modified to TcpProxyForWrappedSrcTrait
* prioritize quic over kcp
2026-02-02 11:53:40 +08:00
KKRainbow 21f4a944a7 fix perf degraded because of impact of is_empty() of dashmap (#1854) 2026-02-01 08:51:18 +08:00
KKRainbow 9617005136 make udp->ring transmit reliable (#1851) 2026-01-31 17:23:45 +08:00
deddey c85d1d41b3 allow set TUN dev name on FreeBSD (#1823)
Also rename stale interfaces from previous runs before creating new ones.
Works around rust-tun reusing existing tun0 instead of configured name.

Tested on FreeBSD 14.1
2026-01-30 23:51:52 +08:00
KKRainbow 9e3c9228bb improve perf of remove_network in foreign net mgr (#1847) 2026-01-30 23:04:31 +08:00
Luna Yao acd7c85ff6 ci: speed up test with matrix (#1830)
* add an action to install pnpm packages
* add an action to prepare build environment
* rewrite test workflow, using composite actions and matrix
2026-01-30 22:21:27 +08:00
KKRainbow 8727221513 call remove_peer instead of remove_network when peer id not match (#1844) 2026-01-30 16:01:52 +08:00
Luna Yao cdedaf3f63 refactor(quic): remove quinn encryption (#1831)
* use quinn-plaintext
* remove server_cert in QUICTunnelListener
* remove some customized transport config
* leave max_concurrent_bidi_streams as default

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-30 10:21:59 +08:00
KKRainbow ffe5644ddc add token bucket limiter on peer conn recv (#1842)
We should limit peer conn recv to make sure we don't recv too much from peers.
2026-01-29 16:12:26 +08:00
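A token bucket like the one this commit adds refills at a steady rate and caps bursts at the bucket capacity; when it runs dry, the receive loop backs off. A std-only sketch (names and the byte-based accounting are assumptions, not EasyTier's actual limiter):

```rust
use std::time::Instant;

/// Minimal token bucket: `capacity` bounds bursts, `rate` is tokens/sec.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    rate: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, rate: f64) -> Self {
        Self { capacity, tokens: capacity, rate, last: Instant::now() }
    }

    /// Try to take `n` tokens (e.g. a packet's size in bytes); returns
    /// false when the caller should stop reading from the peer for now.
    fn try_acquire(&mut self, n: f64) -> bool {
        let now = Instant::now();
        // refill proportionally to elapsed time, clamped to capacity
        self.tokens = (self.tokens + now.duration_since(self.last).as_secs_f64() * self.rate)
            .min(self.capacity);
        self.last = now;
        if self.tokens >= n {
            self.tokens -= n;
            true
        } else {
            false
        }
    }
}
```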
Chenx Dust ccc684a9ab Fix: Fixed compilation issue after partially removing the feature flag (#1835) 2026-01-28 21:38:34 +08:00
fanyang 977e502150 feat(cli): add column truncation controls (#1838)
- drop low-priority columns when tables exceed terminal width
- truncate optional columns to fit remaining width
- add --no-trunc flag to disable truncation
- compute column widths using unicode display width

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-28 14:50:14 +08:00
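The truncation step above can be sketched as follows. Note the commit computes widths with Unicode display width; this std-only sketch counts chars instead, which is only correct for narrow (non-CJK) text, and the function name is illustrative:

```rust
/// Truncate a cell to at most `max` columns, appending "..." when cut.
/// (Char count stands in for Unicode display width in this sketch.)
fn truncate_cell(s: &str, max: usize) -> String {
    if s.chars().count() <= max {
        return s.to_string();
    }
    // keep room for the 3-char ellipsis marker
    let kept: String = s.chars().take(max.saturating_sub(3)).collect();
    format!("{}...", kept)
}
```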
Mg Pig 518d26b25f feat: add X-Network-Name header to HTTP connector requests (#1839)
This allows HTTP redirect servers to provide network-specific node
lists based on the client's network identity. Updated unit tests
to verify the header is correctly sent.
2026-01-28 14:48:45 +08:00
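At the wire level the change above just adds one request header. A hand-rolled sketch of such a request (illustrative only; the real connector uses a proper HTTP client, and the host/path here are hypothetical):

```rust
/// Build a raw HTTP/1.1 request for fetching a node list.
/// `X-Network-Name` lets the redirect server tailor the node list
/// to the client's network identity.
fn build_request(host: &str, path: &str, network_name: &str) -> String {
    format!(
        "GET {} HTTP/1.1\r\nHost: {}\r\nX-Network-Name: {}\r\n\r\n",
        path, host, network_name
    )
}
```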
KKRainbow 101f416268 Introduce secure mode (part 1) (#1808)
Use noise protocol on handshake. Check peer's public key if needed. Also support rekey and replay attack prevention.

E2EE and temporary password will be implemented based on this.
2026-01-25 20:16:51 +08:00
Chenx Dust ffa08d1c43 feat: add peer_id in MyNodeInfo (#1821) 2026-01-22 22:44:37 +08:00
韩嘉乐 cf3f9169b7 CI(ohos): Enhance CI workflow for release package builds (#1812)
Added support for building and publishing release packages based on tags.
2026-01-20 12:25:10 +08:00
KKRainbow 8343cd5e76 fix config loss when run network (#1802) 2026-01-17 00:58:42 +08:00
KKRainbow 005b321f62 allow open rpc port in gui normal mode (#1795)
* allow open rpc port for gui normal mode
* downgrade dev tool console
2026-01-16 11:12:32 +08:00
KKRainbow 53264f67bf fix peer establish direct conn with subnet proxy to one of local interface (#1782)
* fix peer establish direct conn with subnet proxy to one of local interface

* fix peer mgr ref loop
2026-01-15 01:00:32 +08:00
韩嘉乐 f8b34e3c86 Merge pull request #1787 from EasyTier/FrankHan052176-patch-1
action[ohos] fix the cnt of commit in ohos.yml
2026-01-13 23:58:26 +08:00
韩嘉乐 ce1bdac2bc action[ohos] fix the cnt of commit in ohos.yml 2026-01-13 22:57:43 +08:00
Copilot bd8f01fb26 Add Nushell completion script generation support (#1756)
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
2026-01-11 18:41:02 +08:00
Chenx Dust b590700540 feat: support unix socket tunnel (for ios) (#1779)
Co-authored-by: Page Chen <pagechen04@gmail.com>
2026-01-11 16:37:32 +08:00
Chenx Dust 48c5c23f9b feat: support compile for iOS (#1777) 2026-01-11 16:36:58 +08:00
朝倉水希 f4f591d14c fix: outbound packet not dropped by acl (#1766) 2026-01-08 19:58:23 +08:00
Mg Pig 0c16e2211b feat(gui): persist and restore last used network instance ID (#1762) 2026-01-08 17:03:51 +08:00
Rinne 4bfea06a12 docs: update locales (#1755)
Co-authored-by: KKRainbow <443152178@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-08 11:08:32 +08:00
桜井 ホタル 057ee9f2c5 Resolves the issue of DNS resolution failure after installing KSU modules, resulting in inability to connect to nodes. (#1761) 2026-01-08 11:07:52 +08:00
Burning_TNT 7f48ca54a3 Implement requesting tun_fd with tokio channel. (#1734)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-04 21:04:43 +08:00
hello ee5227130c feat: Update Cargo.toml for easytier-gui and android app to support tls1.2 (#1744) 2026-01-04 21:03:34 +08:00
韩嘉乐 2e0d9a2b54 Refactor EasyTier version resolution in workflow (#1747)
Updated the workflow to resolve the EasyTier version based on the latest commit and tag information.
2026-01-04 21:02:55 +08:00
编程小白 c5d732773f Convert dead URL to ASCII before socket address lookup (#1739) 2026-01-02 18:49:23 +08:00
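Socket-address lookup only accepts ASCII hostnames, which is why a non-ASCII (internationalized) domain must be converted first. A std-only sketch of the guard; real conversion would use an IDNA/punycode library, which this sketch deliberately leaves out:

```rust
/// Return a hostname safe to pass to socket-address lookup.
/// Non-ASCII hosts need IDNA/punycode conversion first; since this
/// sketch is std-only, it surfaces that case as an error instead.
fn to_lookup_host(host: &str) -> Result<String, String> {
    if host.is_ascii() {
        Ok(host.to_string())
    } else {
        Err(format!("non-ASCII host needs punycode conversion: {}", host))
    }
}
```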
334 changed files with 55243 additions and 9732 deletions
+35 -54
@@ -1,29 +1,40 @@
-[target.x86_64-unknown-linux-musl]
-linker = "rust-lld"
-rustflags = ["-C", "linker-flavor=ld.lld"]
+# region Native
+[target.x86_64-unknown-linux-gnu]
+rustflags = ["-C", "link-arg=-fuse-ld=mold"]
 [target.aarch64-unknown-linux-gnu]
-linker = "aarch64-linux-gnu-gcc"
-[target.aarch64-unknown-linux-ohos]
-ar = "/usr/local/ohos-sdk/linux/native/llvm/bin/llvm-ar"
-linker = "/home/runner/sdk/native/llvm/aarch64-unknown-linux-ohos-clang.sh"
-[target.aarch64-unknown-linux-ohos.env]
-PKG_CONFIG_PATH = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib/pkgconfig:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib/pkgconfig"
-PKG_CONFIG_LIBDIR = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib"
-PKG_CONFIG_SYSROOT_DIR = "/usr/local/ohos-sdk/linux/native/sysroot"
-SYSROOT = "/usr/local/ohos-sdk/linux/native/sysroot"
+rustflags = ["-C", "link-arg=-fuse-ld=mold"]
+[target.'cfg(all(windows, target_env = "msvc"))']
+rustflags = ["-C", "target-feature=+crt-static"]
+# region
+# region CI
+[target.x86_64-unknown-linux-musl]
+rustflags = ["-C", "target-feature=+crt-static"]
 [target.aarch64-unknown-linux-musl]
-linker = "aarch64-unknown-linux-musl-gcc"
 rustflags = ["-C", "target-feature=+crt-static"]
 [target.riscv64gc-unknown-linux-musl]
-linker = "riscv64-unknown-linux-musl-gcc"
 rustflags = ["-C", "target-feature=+crt-static"]
-[target.'cfg(all(windows, target_env = "msvc"))']
+[target.armv7-unknown-linux-musleabihf]
+rustflags = ["-C", "target-feature=+crt-static"]
+[target.armv7-unknown-linux-musleabi]
+rustflags = ["-C", "target-feature=+crt-static"]
+[target.arm-unknown-linux-musleabihf]
+rustflags = ["-C", "target-feature=+crt-static"]
+[target.arm-unknown-linux-musleabi]
+rustflags = ["-C", "target-feature=+crt-static"]
+[target.loongarch64-unknown-linux-musl]
 rustflags = ["-C", "target-feature=+crt-static"]
 [target.mipsel-unknown-linux-musl]
@@ -64,44 +75,14 @@ rustflags = [
 "gcc",
 ]
-[target.armv7-unknown-linux-musleabihf]
-linker = "armv7-unknown-linux-musleabihf-gcc"
-rustflags = ["-C", "target-feature=+crt-static"]
-[target.armv7-unknown-linux-musleabi]
-linker = "armv7-unknown-linux-musleabi-gcc"
-rustflags = ["-C", "target-feature=+crt-static"]
-[target.loongarch64-unknown-linux-musl]
-linker = "loongarch64-unknown-linux-musl-gcc"
-rustflags = ["-C", "target-feature=+crt-static"]
-[target.arm-unknown-linux-musleabihf]
-linker = "arm-unknown-linux-musleabihf-gcc"
-rustflags = [
-    "-C",
-    "target-feature=+crt-static",
-    "-L",
-    "./musl_gcc/arm-unknown-linux-musleabihf/arm-unknown-linux-musleabihf/lib",
-    "-L",
-    "./musl_gcc/arm-unknown-linux-musleabihf/lib/gcc/arm-unknown-linux-musleabihf/15.1.0",
-    "-l",
-    "atomic",
-    "-l",
-    "gcc",
-]
-[target.arm-unknown-linux-musleabi]
-linker = "arm-unknown-linux-musleabi-gcc"
-rustflags = [
-    "-C",
-    "target-feature=+crt-static",
-    "-L",
-    "./musl_gcc/arm-unknown-linux-musleabi/arm-unknown-linux-musleabi/lib",
-    "-L",
-    "./musl_gcc/arm-unknown-linux-musleabi/lib/gcc/arm-unknown-linux-musleabi/15.1.0",
-    "-l",
-    "atomic",
-    "-l",
-    "gcc",
-]
+[target.aarch64-unknown-linux-ohos]
+ar = "/usr/local/ohos-sdk/linux/native/llvm/bin/llvm-ar"
+linker = "/home/runner/sdk/native/llvm/aarch64-unknown-linux-ohos-clang.sh"
+[target.aarch64-unknown-linux-ohos.env]
+PKG_CONFIG_PATH = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib/pkgconfig:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib/pkgconfig"
+PKG_CONFIG_LIBDIR = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib"
+PKG_CONFIG_SYSROOT_DIR = "/usr/local/ohos-sdk/linux/native/sysroot"
+SYSROOT = "/usr/local/ohos-sdk/linux/native/sysroot"
+# endregion
+90
@@ -0,0 +1,90 @@
name: prepare-build
author: Luna
description: Prepare build environment
inputs:
  target:
    description: 'The target to build for'
    required: false
  pnpm:
    description: 'Whether to run pnpm build'
    required: true
    default: 'true'
  pnpm-build-filter:
    description: 'The filter argument for pnpm build (e.g. ./easytier-web/*)'
    required: false
    default: './easytier-web/*'
  gui:
    description: 'Whether to prepare the GUI build environment'
    required: true
    default: 'true'
  token:
    description: 'GitHub token, used by setup-protoc action'
    required: false
runs:
  using: 'composite'
  steps:
    - run: mkdir -p easytier-gui/dist
      shell: bash
    - name: Install dependencies
      if: ${{ runner.os == 'Linux' }}
      run: |
        sudo apt-get update
        sudo apt-get install -qqy build-essential mold musl-tools
      shell: bash
    - name: Setup Frontend Environment
      if: ${{ inputs.pnpm == 'true' }}
      uses: ./.github/actions/prepare-pnpm
      with:
        build-filter: ${{ inputs.pnpm-build-filter }}
    - name: Install GUI dependencies (Linux)
      if: ${{ inputs.gui == 'true' && runner.os == 'Linux' }}
      run: |
        sudo apt-get install -qq xdg-utils \
          libappindicator3-dev \
          libgtk-3-dev \
          librsvg2-dev \
          libwebkit2gtk-4.1-dev \
          libxdo-dev
      shell: bash
    - uses: actions-rust-lang/setup-rust-toolchain@v1
      with:
        toolchain: 1.95
        target: ${{ !contains(inputs.target, 'mips') && inputs.target || '' }}
        components: ${{ contains(inputs.target, 'mips') && 'rust-src' || '' }}
        cache: false
        rustflags: ''
    - name: Install Rust (MIPS)
      if: ${{ contains(inputs.target, 'mips') }}
      run: |
        MUSL_TARGET=${{ inputs.target }}sf
        mkdir -p ./musl_gcc
        wget --inet4-only -c https://github.com/cross-tools/musl-cross/releases/download/20250520/${MUSL_TARGET}.tar.xz -P ./musl_gcc/
        tar xf ./musl_gcc/${MUSL_TARGET}.tar.xz -C ./musl_gcc/
        sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/bin/*gcc /usr/bin/
        sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/include/ /usr/include/musl-cross
        sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/${MUSL_TARGET}/sysroot/ ./musl_gcc/sysroot
        sudo chmod -R a+rwx ./musl_gcc
        if [[ -d "./musl_gcc/sysroot" ]]; then
          echo "BINDGEN_EXTRA_CLANG_ARGS=--sysroot=$(readlink -f ./musl_gcc/sysroot)" >> $GITHUB_ENV
        fi
        cd "$PWD/musl_gcc/${MUSL_TARGET}/lib/gcc/${MUSL_TARGET}/15.1.0" || exit 255
        # for panic-abort
        cp libgcc_eh.a libunwind.a
        # for mimalloc
        ar x libgcc.a _ctzsi2.o _clz.o _bswapsi2.o
        ar rcs libctz.a _ctzsi2.o _clz.o _bswapsi2.o
      shell: bash
    - name: Setup protoc
      uses: arduino/setup-protoc@v3
      with:
        # GitHub repo token to use to avoid rate limiter
        repo-token: ${{ inputs.token }}
+48
@@ -0,0 +1,48 @@
name: 'Setup pnpm'
author: Luna
description: 'Setup Node.js, pnpm, and install dependencies'
inputs:
  build-filter:
    description: 'The filter argument for pnpm build (e.g. ./easytier-web/*)'
    required: false
    default: ''
runs:
  using: "composite"
  steps:
    - name: Setup Node.js
      uses: actions/setup-node@v5
      with:
        node-version: 22
    - name: Install pnpm
      uses: pnpm/action-setup@v5
      with:
        version: 10
        run_install: false
    - name: Get pnpm store directory
      shell: bash
      run: |
        echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
    - name: Setup pnpm cache
      uses: actions/cache@v5
      with:
        path: ${{ env.STORE_PATH }}
        key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
        restore-keys: |
          ${{ runner.os }}-pnpm-store-
    - name: Install and build
      shell: bash
      run: |
        pnpm -r install
        if [ -n "${{ inputs.build-filter }}" ]; then
          echo "Building with filter: ${{ inputs.build-filter }}"
          pnpm -r --filter "${{ inputs.build-filter }}" build
        else
          echo "No build filter provided, building all packages"
          pnpm -r build
        fi
+143 -177
@@ -2,9 +2,14 @@ name: EasyTier Core
 on:
   push:
-    branches: ["develop", "main", "releases/**"]
+    branches: [ "develop", "main", "releases/**" ]
   pull_request:
-    branches: ["develop", "main"]
+    branches: [ "develop", "main" ]
+    types: [ opened, synchronize, reopened, ready_for_review ]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
 
 env:
   CARGO_TERM_COLOR: always
@@ -18,6 +23,7 @@ jobs:
   pre_job:
     # continue-on-error: true # Uncomment once integration is finished
     runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
     # Map a step output to a job output
     outputs:
       # do not skip push on branch starts with releases/
@@ -30,85 +36,69 @@ jobs:
       concurrent_skipping: 'same_content_newer'
       skip_after_successful_duplicate: 'true'
       cancel_others: 'true'
-      paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh"]'
+      paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/actions/**", "easytier-web/**"]'
 
   build_web:
     runs-on: ubuntu-latest
     needs: pre_job
     if: needs.pre_job.outputs.should_skip != 'true'
     steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 22
-      - name: Install pnpm
-        uses: pnpm/action-setup@v4
-        with:
-          version: 10
-          run_install: false
-      - name: Get pnpm store directory
-        shell: bash
-        run: |
-          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
-      - name: Setup pnpm cache
-        uses: actions/cache@v4
-        with:
-          path: ${{ env.STORE_PATH }}
-          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
-          restore-keys: |
-            ${{ runner.os }}-pnpm-store-
-      - name: Install frontend dependencies
-        run: |
-          pnpm -r install
-          pnpm -r --filter "./easytier-web/*" build
+      - uses: actions/checkout@v5
+      - name: Setup Frontend Environment
+        uses: ./.github/actions/prepare-pnpm
+        with:
+          build-filter: './easytier-web/*'
       - name: Archive artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v5
         with:
           name: easytier-web-dashboard
           path: |
             easytier-web/frontend/dist/*
 
   build:
     strategy:
-      fail-fast: false
+      fail-fast: true
       matrix:
         include:
-          - TARGET: aarch64-unknown-linux-musl
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-aarch64
           - TARGET: x86_64-unknown-linux-musl
-            OS: ubuntu-22.04
+            OS: ubuntu-24.04
             ARTIFACT_NAME: linux-x86_64
-          - TARGET: riscv64gc-unknown-linux-musl
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-riscv64
-          - TARGET: mips-unknown-linux-musl
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-mips
-          - TARGET: mipsel-unknown-linux-musl
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-mipsel
-          - TARGET: armv7-unknown-linux-musleabihf # raspberry pi 2-3-4, not tested
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-armv7hf
-          - TARGET: armv7-unknown-linux-musleabi # raspberry pi 2-3-4, not tested
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-armv7
-          - TARGET: arm-unknown-linux-musleabihf # raspberry pi 0-1, not tested
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-armhf
-          - TARGET: arm-unknown-linux-musleabi # raspberry pi 0-1, not tested
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: linux-arm
+          - TARGET: aarch64-unknown-linux-musl
+            OS: ubuntu-24.04-arm
+            ARTIFACT_NAME: linux-aarch64
+          - TARGET: riscv64gc-unknown-linux-musl
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-riscv64
           - TARGET: loongarch64-unknown-linux-musl
             OS: ubuntu-24.04
             ARTIFACT_NAME: linux-loongarch64
+          - TARGET: armv7-unknown-linux-musleabihf # raspberry pi 2-3-4, not tested
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-armv7hf
+          - TARGET: armv7-unknown-linux-musleabi # raspberry pi 2-3-4, not tested
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-armv7
+          - TARGET: arm-unknown-linux-musleabihf # raspberry pi 0-1, not tested
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-armhf
+          - TARGET: arm-unknown-linux-musleabi # raspberry pi 0-1, not tested
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-arm
+          - TARGET: mips-unknown-linux-musl
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-mips
+          - TARGET: mipsel-unknown-linux-musl
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: linux-mipsel
+          - TARGET: x86_64-unknown-freebsd
+            OS: ubuntu-24.04
+            ARTIFACT_NAME: freebsd-13.2-x86_64
+            BSD_VERSION: 13.2
           - TARGET: x86_64-apple-darwin
             OS: macos-latest
             ARTIFACT_NAME: macos-x86_64
@@ -119,17 +109,12 @@ jobs:
           - TARGET: x86_64-pc-windows-msvc
             OS: windows-latest
             ARTIFACT_NAME: windows-x86_64
-          - TARGET: aarch64-pc-windows-msvc
-            OS: windows-latest
-            ARTIFACT_NAME: windows-arm64
           - TARGET: i686-pc-windows-msvc
             OS: windows-latest
             ARTIFACT_NAME: windows-i686
-          - TARGET: x86_64-unknown-freebsd
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: freebsd-13.2-x86_64
-            BSD_VERSION: 13.2
+          - TARGET: aarch64-pc-windows-msvc
+            OS: windows-11-arm
+            ARTIFACT_NAME: windows-arm64
 
     runs-on: ${{ matrix.OS }}
     env:
@@ -142,7 +127,7 @@ jobs:
       - build_web
     if: needs.pre_job.outputs.should_skip != 'true'
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v5
       - name: Set current ref as env variable
         run: |
@@ -154,158 +139,131 @@ jobs:
           name: easytier-web-dashboard
           path: easytier-web/frontend/dist/
+      - name: Prepare build environment
+        uses: ./.github/actions/prepare-build
+        with:
+          target: ${{ matrix.TARGET }}
+          gui: true
+          pnpm: true
+          token: ${{ secrets.GITHUB_TOKEN }}
       - uses: Swatinem/rust-cache@v2
-        if: ${{ ! endsWith(matrix.TARGET, 'freebsd') }}
         with:
           # The prefix cache key, this can be changed to start a new cache manually.
           # default: "v0-rust"
           prefix-key: ""
+          shared-key: "core-registry"
           cache-targets: "false"
-      - name: Setup protoc
-        uses: arduino/setup-protoc@v3
-        with:
-          # GitHub repo token to use to avoid rate limiter
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build Core & Cli
-        if: ${{ ! endsWith(matrix.TARGET, 'freebsd') }}
-        run: |
-          bash ./.github/workflows/install_rust.sh
-          # loongarch need llvm-18
-          if [[ $TARGET =~ ^loongarch.*$ ]]; then
-            sudo apt-get install -qq llvm-18 clang-18
-            export LLVM_CONFIG_PATH=/usr/lib/llvm-18/bin/llvm-config
-          fi
-          # we set the sysroot when sysroot is a dir
-          # this dir is a soft link generated by install_rust.sh
-          # kcp-sys need this to gen ffi bindings. without this clang may fail to find some libc headers such as bits/libc-header-start.h
-          if [[ -d "./musl_gcc/sysroot" ]]; then
-            export BINDGEN_EXTRA_CLANG_ARGS=--sysroot=$(readlink -f ./musl_gcc/sysroot)
-          fi
-          if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
-            cargo +nightly-2025-09-01 build -r --target $TARGET -Z build-std=std,panic_abort --package=easytier --features=jemalloc
-          else
-            if [[ $OS =~ ^windows.*$ ]]; then
-              SUFFIX=.exe
-              CORE_FEATURES="--features=mimalloc"
-            elif [[ $TARGET =~ ^riscv64.*$ || $TARGET =~ ^loongarch64.*$ || $TARGET =~ ^aarch64.*$ ]]; then
-              CORE_FEATURES="--features=mimalloc"
-            else
-              CORE_FEATURES="--features=jemalloc"
-            fi
-            cargo build --release --target $TARGET --package=easytier-web --features=embed
-            mv ./target/$TARGET/release/easytier-web"$SUFFIX" ./target/$TARGET/release/easytier-web-embed"$SUFFIX"
-            cargo build --release --target $TARGET $CORE_FEATURES
-          fi
-      # Copied and slightly modified from @lmq8267 (https://github.com/lmq8267)
-      - name: Build Core & Cli (X86_64 FreeBSD)
-        uses: vmactions/freebsd-vm@670398e4236735b8b65805c3da44b7a511fb8b27
-        if: ${{ endsWith(matrix.TARGET, 'freebsd') }}
-        env:
-          TARGET: ${{ matrix.TARGET }}
-        with:
-          envs: TARGET
-          release: ${{ matrix.BSD_VERSION }}
-          arch: x86_64
-          usesh: true
-          mem: 6144
-          cpu: 4
-          run: |
-            uname -a
-            echo $SHELL
-            pwd
-            ls -lah
-            whoami
-            env | sort
-            pkg install -y git protobuf llvm-devel sudo curl
-            curl --proto 'https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
-            . $HOME/.cargo/env
-            rustup set auto-self-update disable
-            rustup install 1.89
-            rustup default 1.89
-            export CC=clang
-            export CXX=clang++
-            export CARGO_TERM_COLOR=always
-            cargo build --release --verbose --target $TARGET --package=easytier-web --features=embed
-            mv ./target/$TARGET/release/easytier-web ./target/$TARGET/release/easytier-web-embed
-            cargo build --release --verbose --target $TARGET --features=mimalloc
+      - uses: mlugg/setup-zig@v2
+        if: ${{ contains(matrix.OS, 'ubuntu') }}
+        with:
+          version: 0.16.0
+          use-cache: true
+      - uses: taiki-e/install-action@v2
+        if: ${{ contains(matrix.OS, 'ubuntu') }}
+        with:
+          tool: cargo-zigbuild
+      - name: Build
+        if: ${{ !contains(matrix.TARGET, 'mips') }}
+        run: |
+          if [[ "$TARGET" == *windows* ]]; then
+            SUFFIX=.exe
+          else
+            SUFFIX=""
+          fi
+          if [[ "$TARGET" =~ (x86_64-unknown-linux-musl|aarch64-unknown-linux-musl|windows|darwin) ]]; then
+            BUILD=build
+          else
+            BUILD=zigbuild
+          fi
+          if [[ "$TARGET" =~ ^(riscv64|loongarch64|aarch64).*$ || "$TARGET" =~ (freebsd|windows) ]]; then
+            FEATURES="mimalloc"
+          else
+            FEATURES="jemalloc"
+          fi
+          cargo $BUILD --release --target $TARGET --package=easytier-web --features=embed
+          mv ./target/$TARGET/release/easytier-web"$SUFFIX" ./target/$TARGET/release/easytier-web-embed"$SUFFIX"
+          cargo $BUILD --release --target $TARGET --features=$FEATURES
+      - name: Build (MIPS)
+        if: ${{ contains(matrix.TARGET, 'mips') }}
+        env:
+          RUSTC_BOOTSTRAP: 1
+        run: |
+          cargo build -r --target $TARGET -Z build-std=std,panic_abort --package=easytier --features=jemalloc
       - name: Compress
         run: |
           mkdir -p ./artifacts/objects/
           # windows is the only OS using a different convention for executable file name
-          if [[ $OS =~ ^windows.*$ && $TARGET =~ ^x86_64.*$ ]]; then
-            SUFFIX=.exe
-            cp easytier/third_party/x86_64/* ./artifacts/objects/
-          elif [[ $OS =~ ^windows.*$ && $TARGET =~ ^i686.*$ ]]; then
-            SUFFIX=.exe
-            cp easytier/third_party/i686/* ./artifacts/objects/
-          elif [[ $OS =~ ^windows.*$ && $TARGET =~ ^aarch64.*$ ]]; then
-            SUFFIX=.exe
-            cp easytier/third_party/arm64/* ./artifacts/objects/
+          if [[ $OS =~ ^windows.*$ ]]; then
+            SUFFIX=.exe
+            case $TARGET in
+              x86_64*) ARCH_DIR=x86_64 ;;
+              i686*) ARCH_DIR=i686 ;;
+              aarch64*) ARCH_DIR=arm64 ;;
+            esac
+            if [[ -n "$ARCH_DIR" ]]; then
+              find "easytier/third_party/${ARCH_DIR}" -maxdepth 1 -type f \( -name "*.dll" -o -name "*.sys" \) -exec cp {} ./artifacts/objects/ \;
+            fi
           fi
           if [[ $GITHUB_REF_TYPE =~ ^tag$ ]]; then
             TAG=$GITHUB_REF_NAME
           else
             TAG=$GITHUB_SHA
           fi
-          if [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ ^.*freebsd$ && ! $TARGET =~ ^loongarch.*$ && ! $TARGET =~ ^riscv64.*$ ]]; then
-            UPX_VERSION=4.2.4
-            curl -L https://github.com/upx/upx/releases/download/v${UPX_VERSION}/upx-${UPX_VERSION}-amd64_linux.tar.xz -s | tar xJvf -
-            cp upx-${UPX_VERSION}-amd64_linux/upx .
-            ./upx --lzma --best ./target/$TARGET/release/easytier-core"$SUFFIX"
-            ./upx --lzma --best ./target/$TARGET/release/easytier-cli"$SUFFIX"
-          fi
-          mv ./target/$TARGET/release/easytier-core"$SUFFIX" ./artifacts/objects/
-          mv ./target/$TARGET/release/easytier-cli"$SUFFIX" ./artifacts/objects/
-          if [[ ! $TARGET =~ ^mips.*$ ]]; then
-            mv ./target/$TARGET/release/easytier-web"$SUFFIX" ./artifacts/objects/
-            mv ./target/$TARGET/release/easytier-web-embed"$SUFFIX" ./artifacts/objects/
-          fi
+          if [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ (loongarch|freebsd) ]]; then
+            HOST_ARCH=$(uname -m)
+            case $HOST_ARCH in
+              x86_64) UPX_ARCH="amd64" ;;
+              aarch64) UPX_ARCH="arm64" ;;
+              *) UPX_ARCH="amd64" ;;
+            esac
+            UPX_VERSION=4.2.4
+            UPX_PKG="upx-${UPX_VERSION}-${UPX_ARCH}_linux"
+            curl -L "https://github.com/upx/upx/releases/download/v${UPX_VERSION}/${UPX_PKG}.tar.xz" -s | tar xJvf -
+            cp "${UPX_PKG}/upx" .
+            UPX_BIN=./upx
+          fi
+          for BIN in ./target/$TARGET/release/easytier-{core,cli,web,web-embed}"$SUFFIX"; do
+            if [[ -f "$BIN" ]]; then
+              if [[ -n "$UPX_BIN" ]]; then
+                $UPX_BIN --lzma --best "$BIN" || true
+              fi
+              mv "$BIN" ./artifacts/objects/
+            fi
+          done
           mv ./artifacts/objects/* ./artifacts/
           rm -rf ./artifacts/objects/
       - name: Archive artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v5
         with:
           name: easytier-${{ matrix.ARTIFACT_NAME }}
           path: |
             ./artifacts/*
 
-  core-result:
-    if: needs.pre_job.outputs.should_skip != 'true' && always()
-    runs-on: ubuntu-latest
-    needs:
-      - pre_job
-      - build_web
-      - build
-    steps:
-      - name: Mark result as failed
-        if: needs.build.result != 'success'
-        run: exit 1
-
-  magisk_build:
-    needs:
-      - pre_job
-      - build_web
-      - build
-    if: needs.pre_job.outputs.should_skip != 'true' && always()
+  build_magisk:
     runs-on: ubuntu-latest
+    needs: [ pre_job, build_web, build ]
+    if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
     steps:
       - name: Checkout Code
-        uses: actions/checkout@v4 # must check out the code first to read the module config
+        uses: actions/checkout@v5 # must check out the code first to read the module config
       # download the binaries into a separate directory
       - name: Download Linux aarch64 binaries
@@ -322,10 +280,9 @@ jobs:
           cp ./downloaded-binaries/easytier-cli ./easytier-contrib/easytier-magisk/
           cp ./downloaded-binaries/easytier-web ./easytier-contrib/easytier-magisk/
       # upload the generated module
       - name: Upload Magisk Module
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v5
         with:
           name: Easytier-Magisk
           path: |
@@ -333,3 +290,12 @@ jobs:
             !./easytier-contrib/easytier-magisk/build.sh
             !./easytier-contrib/easytier-magisk/magisk_update.json
           if-no-files-found: error
+
+  core-result:
+    runs-on: ubuntu-latest
+    needs: [ pre_job, build_web, build, build_magisk ]
+    if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
+    steps:
+      - name: Mark result as failed
+        if: contains(needs.*.result, 'failure')
+        run: exit 1
+2 -2
@@ -11,7 +11,7 @@ on:
       image_tag:
         description: 'Tag for this image build'
         type: string
-        default: 'v2.5.0'
+        default: 'v2.6.4'
         required: true
       mark_latest:
         description: 'Mark this image as latest'
@@ -31,7 +31,7 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
       -
         name: Validate inputs
         run: |
+48 -112
@@ -5,7 +5,12 @@ on:
     branches: ["develop", "main", "releases/**"]
   pull_request:
     branches: ["develop", "main"]
+    types: [opened, synchronize, reopened, ready_for_review]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
 
 env:
   CARGO_TERM_COLOR: always
@@ -18,6 +23,7 @@ jobs:
   pre_job:
     # continue-on-error: true # Uncomment once integration is finished
    runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
     # Map a step output to a job output
     outputs:
       should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
@@ -29,20 +35,20 @@ jobs:
       concurrent_skipping: 'same_content_newer'
       skip_after_successful_duplicate: 'true'
       cancel_others: 'true'
-      paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh", ".github/workflows/install_gui_dep.sh"]'
+      paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/actions/**", "easytier-web/frontend-lib/**"]'
 
   build-gui:
     strategy:
-      fail-fast: false
+      fail-fast: true
       matrix:
         include:
-          - TARGET: aarch64-unknown-linux-musl
-            OS: ubuntu-22.04
-            GUI_TARGET: aarch64-unknown-linux-gnu
-            ARTIFACT_NAME: linux-aarch64
           - TARGET: x86_64-unknown-linux-musl
-            OS: ubuntu-22.04
+            OS: ubuntu-24.04
             GUI_TARGET: x86_64-unknown-linux-gnu
             ARTIFACT_NAME: linux-x86_64
+          - TARGET: aarch64-unknown-linux-musl
+            OS: ubuntu-24.04-arm
+            GUI_TARGET: aarch64-unknown-linux-gnu
+            ARTIFACT_NAME: linux-aarch64
 
           - TARGET: x86_64-apple-darwin
             OS: macos-latest
@@ -57,16 +63,14 @@ jobs:
OS: windows-latest OS: windows-latest
GUI_TARGET: x86_64-pc-windows-msvc GUI_TARGET: x86_64-pc-windows-msvc
ARTIFACT_NAME: windows-x86_64 ARTIFACT_NAME: windows-x86_64
- TARGET: aarch64-pc-windows-msvc
OS: windows-latest
GUI_TARGET: aarch64-pc-windows-msvc
ARTIFACT_NAME: windows-arm64
- TARGET: i686-pc-windows-msvc - TARGET: i686-pc-windows-msvc
OS: windows-latest OS: windows-latest
GUI_TARGET: i686-pc-windows-msvc GUI_TARGET: i686-pc-windows-msvc
ARTIFACT_NAME: windows-i686 ARTIFACT_NAME: windows-i686
- TARGET: aarch64-pc-windows-msvc
OS: windows-11-arm
GUI_TARGET: aarch64-pc-windows-msvc
ARTIFACT_NAME: windows-arm64
runs-on: ${{ matrix.OS }} runs-on: ${{ matrix.OS }}
env: env:
@@ -78,103 +82,39 @@ jobs:
needs: pre_job needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true' if: needs.pre_job.outputs.should_skip != 'true'
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v5
- name: Install GUI dependencies (x86 only)
if: ${{ matrix.TARGET == 'x86_64-unknown-linux-musl' }}
run: bash ./.github/workflows/install_gui_dep.sh
- name: Install GUI cross compile (aarch64 only)
if: ${{ matrix.TARGET == 'aarch64-unknown-linux-musl' }}
run: |
# see https://tauri.app/v1/guides/building/linux/
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
sudo dpkg --add-architecture arm64
sudo apt update
sudo apt install aptitude
sudo aptitude install -y libgstreamer1.0-0:arm64 gstreamer1.0-plugins-base:arm64 gstreamer1.0-plugins-good:arm64 \
libgstreamer-gl1.0-0:arm64 libgstreamer-plugins-base1.0-0:arm64 libgstreamer-plugins-good1.0-0:arm64 libwebkit2gtk-4.1-0:arm64 \
libwebkit2gtk-4.1-dev:arm64 libssl-dev:arm64 gcc-aarch64-linux-gnu libsoup-3.0-dev:arm64 libjavascriptcoregtk-4.1-dev:arm64
echo "PKG_CONFIG_SYSROOT_DIR=/usr/aarch64-linux-gnu/" >> "$GITHUB_ENV"
echo "PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig/" >> "$GITHUB_ENV"
- name: Set current ref as env variable - name: Set current ref as env variable
run: | run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-node@v4 - name: Prepare build environment
uses: ./.github/actions/prepare-build
with: with:
node-version: 22 target: ${{ matrix.TARGET }}
gui: true
- name: Install pnpm pnpm: true
uses: pnpm/action-setup@v4 pnpm-build-filter: ''
with: token: ${{ secrets.GITHUB_TOKEN }}
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install frontend dependencies
run: |
pnpm -r install
pnpm -r build
- uses: Swatinem/rust-cache@v2 - uses: Swatinem/rust-cache@v2
with: with:
# The prefix cache key, this can be changed to start a new cache manually. # The prefix cache key, this can be changed to start a new cache manually.
# default: "v0-rust" # default: "v0-rust"
prefix-key: "" prefix-key: ""
shared-key: "gui-registry"
- name: Install rust target cache-targets: "false"
run: bash ./.github/workflows/install_rust.sh
- name: Setup protoc
uses: arduino/setup-protoc@v3
with:
# GitHub repo token to use to avoid rate limiter
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: copy correct DLLs - name: copy correct DLLs
if: ${{ matrix.OS == 'windows-latest' }} if: ${{ contains(matrix.GUI_TARGET, 'windows') }}
run: | run: |
if [[ $GUI_TARGET =~ ^aarch64.*$ ]]; then case $TARGET in
cp ./easytier/third_party/arm64/* ./easytier-gui/src-tauri/ x86_64*) ARCH_DIR=x86_64 ;;
elif [[ $GUI_TARGET =~ ^i686.*$ ]]; then i686*) ARCH_DIR=i686 ;;
cp ./easytier/third_party/i686/* ./easytier-gui/src-tauri/ aarch64*) ARCH_DIR=arm64 ;;
else esac
cp ./easytier/third_party/x86_64/* ./easytier-gui/src-tauri/ if [[ -n "$ARCH_DIR" ]]; then
find "./easytier/third_party/${ARCH_DIR}" -maxdepth 1 -type f \( -name "*.dll" -o -name "*.sys" \) -exec cp {} ./easytier-gui/src-tauri/ \;
fi fi
- name: Build GUI - name: Build GUI
@@ -182,10 +122,9 @@ jobs:
uses: tauri-apps/tauri-action@v0 uses: tauri-apps/tauri-action@v0
with: with:
projectPath: ./easytier-gui projectPath: ./easytier-gui
# https://tauri.app/v1/guides/building/linux/#cross-compiling-tauri-applications-for-arm-based-devices args: --verbose --target ${{ matrix.GUI_TARGET }}
args: --verbose --target ${{ matrix.GUI_TARGET }} ${{ matrix.OS == 'ubuntu-22.04' && contains(matrix.TARGET, 'aarch64') && '--bundles deb' || '' }}
- name: Compress - name: Collect artifact
run: | run: |
mkdir -p ./artifacts/objects/ mkdir -p ./artifacts/objects/
@@ -194,36 +133,33 @@ jobs:
else else
TAG=$GITHUB_SHA TAG=$GITHUB_SHA
fi fi
# copy gui bundle, gui is built without specific target # copy gui bundle, gui is built without specific target
if [[ $OS =~ ^windows.*$ ]]; then if [[ $GUI_TARGET =~ windows ]]; then
mv ./target/$GUI_TARGET/release/bundle/nsis/*.exe ./artifacts/objects/ mv ./target/$GUI_TARGET/release/bundle/nsis/*.exe ./artifacts/objects/
elif [[ $OS =~ ^macos.*$ ]]; then elif [[ $GUI_TARGET =~ darwin ]]; then
mv ./target/$GUI_TARGET/release/bundle/dmg/*.dmg ./artifacts/objects/ mv ./target/$GUI_TARGET/release/bundle/dmg/*.dmg ./artifacts/objects/
elif [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ ^mips.*$ ]]; then elif [[ $GUI_TARGET =~ linux ]]; then
mv ./target/$GUI_TARGET/release/bundle/deb/*.deb ./artifacts/objects/ mv ./target/$GUI_TARGET/release/bundle/deb/*.deb ./artifacts/objects/
if [[ $GUI_TARGET =~ ^x86_64.*$ ]]; then mv ./target/$GUI_TARGET/release/bundle/rpm/*.rpm ./artifacts/objects/
# currently only x86 appimage is supported mv ./target/$GUI_TARGET/release/bundle/appimage/*.AppImage ./artifacts/objects/
mv ./target/$GUI_TARGET/release/bundle/appimage/*.AppImage ./artifacts/objects/
fi
fi fi
mv ./artifacts/objects/* ./artifacts/ mv ./artifacts/objects/* ./artifacts/
rm -rf ./artifacts/objects/ rm -rf ./artifacts/objects/
- name: Archive artifact - name: Archive artifact
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v5
with: with:
name: easytier-gui-${{ matrix.ARTIFACT_NAME }} name: easytier-gui-${{ matrix.ARTIFACT_NAME }}
path: | path: |
./artifacts/* ./artifacts/*
gui-result: gui-result:
if: needs.pre_job.outputs.should_skip != 'true' && always()
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: needs: [ pre_job, build-gui ]
- pre_job if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
- build-gui
steps: steps:
- name: Mark result as failed - name: Mark result as failed
if: needs.build-gui.result != 'success' if: contains(needs.*.result, 'failure')
run: exit 1 run: exit 1
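The rewritten "copy correct DLLs" step keys off the Rust target triple instead of the runner OS, mapping the triple prefix to a `third_party/` subdirectory. That mapping can be sketched as a standalone shell function (the name `arch_dir_for_target` is illustrative, not part of the workflow):

```shell
# Map a Rust target triple to the third_party/ subdirectory that holds the
# matching prebuilt DLLs, mirroring the case statement in the workflow step.
# An unmatched target yields an empty string, so nothing gets copied.
arch_dir_for_target() {
  case "$1" in
    x86_64*)  echo x86_64 ;;
    i686*)    echo i686 ;;
    aarch64*) echo arm64 ;;
    *)        echo "" ;;
  esac
}

arch_dir_for_target x86_64-pc-windows-msvc   # -> x86_64
arch_dir_for_target aarch64-pc-windows-msvc  # -> arm64
```

Guarding the copy with `[[ -n "$ARCH_DIR" ]]` means a future target outside the three known families silently skips the DLL copy rather than copying the wrong architecture's binaries.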
-11
@@ -1,11 +0,0 @@
-sudo apt update
-sudo apt install -qq libwebkit2gtk-4.1-dev \
-  build-essential \
-  curl \
-  wget \
-  file \
-  libgtk-3-dev \
-  librsvg2-dev \
-  libxdo-dev \
-  libssl-dev \
-  patchelf
-61
@@ -1,61 +0,0 @@
-#!/usr/bin/env bash
-
-# env needed:
-# - TARGET
-# - GUI_TARGET
-# - OS
-
-# dependencies are only needed on ubuntu as that's the only place where
-# we make cross-compilation
-if [[ $OS =~ ^ubuntu.*$ ]]; then
-  sudo apt-get update && sudo apt-get install -qq musl-tools libappindicator3-dev llvm clang
-
-  # https://github.com/cross-tools/musl-cross/releases
-  # if "musl" is a substring of TARGET, we assume that we are using musl
-  MUSL_TARGET=$TARGET
-  # if target is mips or mipsel, we should use soft-float version of musl
-  if [[ $TARGET =~ ^mips.*$ || $TARGET =~ ^mipsel.*$ ]]; then
-    MUSL_TARGET=${TARGET}sf
-  elif [[ $TARGET =~ ^riscv64gc-.*$ ]]; then
-    MUSL_TARGET=${TARGET/#riscv64gc-/riscv64-}
-  fi
-  if [[ $MUSL_TARGET =~ musl ]]; then
-    mkdir -p ./musl_gcc
-    wget --inet4-only -c https://github.com/cross-tools/musl-cross/releases/download/20250520/${MUSL_TARGET}.tar.xz -P ./musl_gcc/
-    tar xf ./musl_gcc/${MUSL_TARGET}.tar.xz -C ./musl_gcc/
-    sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/bin/*gcc /usr/bin/
-    sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/include/ /usr/include/musl-cross
-    sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/${MUSL_TARGET}/sysroot/ ./musl_gcc/sysroot
-    sudo chmod -R a+rwx ./musl_gcc
-  fi
-fi
-
-# see https://github.com/rust-lang/rustup/issues/3709
-rustup set auto-self-update disable
-rustup install 1.89
-rustup default 1.89
-
-# mips/mipsel cannot add target from rustup, need compile by ourselves
-if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
-  cd "$PWD/musl_gcc/${MUSL_TARGET}/lib/gcc/${MUSL_TARGET}/15.1.0" || exit 255
-  # for panic-abort
-  cp libgcc_eh.a libunwind.a
-  # for mimalloc
-  ar x libgcc.a _ctzsi2.o _clz.o _bswapsi2.o
-  ar rcs libctz.a _ctzsi2.o _clz.o _bswapsi2.o
-
-  rustup toolchain install nightly-2025-09-01-x86_64-unknown-linux-gnu
-  rustup component add rust-src --toolchain nightly-2025-09-01-x86_64-unknown-linux-gnu
-
-  # https://github.com/rust-lang/rust/issues/128808
-  # remove it after Cargo or rustc fix this.
-  RUST_LIB_SRC=$HOME/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/
-  if [[ -f $RUST_LIB_SRC/library/Cargo.lock && ! -f $RUST_LIB_SRC/Cargo.lock ]]; then
-    cp -f $RUST_LIB_SRC/library/Cargo.lock $RUST_LIB_SRC/Cargo.lock
-  fi
-else
-  rustup target add $TARGET
-  if [[ $GUI_TARGET != '' ]]; then
-    rustup target add $GUI_TARGET
-  fi
-fi
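The deleted install_rust.sh derived the musl-cross toolchain name from the Rust target triple: mips/mipsel targets get a soft-float `sf` suffix, and a `riscv64gc-` prefix is rewritten to `riscv64-` to match the toolchain release naming. A POSIX-shell sketch of that derivation (the function name is illustrative; the original used bash regex tests and `${TARGET/#…/…}` substitution):

```shell
# Derive the musl-cross toolchain name from a Rust target triple, as the
# removed install_rust.sh did before the prepare-build action replaced it.
musl_toolchain_name() {
  TARGET=$1
  MUSL_TARGET=$TARGET
  case "$TARGET" in
    # mips/mipsel use the soft-float variant of musl
    mips*|mipsel*) MUSL_TARGET="${TARGET}sf" ;;
    # toolchain archives are named riscv64-, not riscv64gc-
    riscv64gc-*)   MUSL_TARGET="riscv64-${TARGET#riscv64gc-}" ;;
  esac
  echo "$MUSL_TARGET"
}

musl_toolchain_name mips-unknown-linux-musl       # -> mips-unknown-linux-muslsf
musl_toolchain_name riscv64gc-unknown-linux-musl  # -> riscv64-unknown-linux-musl
```

Targets outside those two families pass through unchanged, so plain `x86_64-unknown-linux-musl` maps directly to the toolchain archive of the same name.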
+42 -64
@@ -5,7 +5,12 @@ on:
     branches: ["develop", "main", "releases/**"]
   pull_request:
     branches: ["develop", "main"]
+    types: [opened, synchronize, reopened, ready_for_review]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true

 env:
   CARGO_TERM_COLOR: always
@@ -18,6 +23,7 @@ jobs:
   pre_job:
     # continue-on-error: true # Uncomment once integration is finished
     runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
     # Map a step output to a job output
     outputs:
       should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
@@ -29,25 +35,30 @@
           concurrent_skipping: 'same_content_newer'
           skip_after_successful_duplicate: 'true'
           cancel_others: 'true'
-          paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/workflows/install_rust.sh"]'
+          paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/actions/**"]'

   build-mobile:
     strategy:
-      fail-fast: false
+      fail-fast: true
       matrix:
         include:
-          - TARGET: android
-            OS: ubuntu-22.04
-            ARTIFACT_NAME: android
-    runs-on: ${{ matrix.OS }}
+          - TARGET: aarch64-linux-android
+            ARCH: aarch64
+          - TARGET: armv7-linux-androideabi
+            ARCH: armv7
+          - TARGET: i686-linux-android
+            ARCH: i686
+          - TARGET: x86_64-linux-android
+            ARCH: x86_64
+    runs-on: ubuntu-latest
     env:
       NAME: easytier
       TARGET: ${{ matrix.TARGET }}
-      OS: ${{ matrix.OS }}
+      ARCH: ${{ matrix.ARCH }}
       OSS_BUCKET: ${{ secrets.ALIYUN_OSS_BUCKET }}
     needs: pre_job
     if: needs.pre_job.outputs.should_skip != 'true'
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v5
       - name: Set current ref as env variable
         run: |
@@ -61,72 +72,41 @@
       - name: Setup Android SDK
         uses: android-actions/setup-android@v3
         with:
-          cmdline-tools-version: 11076708
-          packages: 'build-tools;34.0.0 ndk;26.0.10792818 tools platform-tools platforms;android-34 '
+          cmdline-tools-version: 12.0
+          packages: 'build-tools;34.0.0 ndk;26.0.10792818 platform-tools platforms;android-34 '
       - name: Setup Android Environment
         run: |
           echo "$ANDROID_HOME/platform-tools" >> $GITHUB_PATH
           echo "$ANDROID_HOME/ndk/26.0.10792818/toolchains/llvm/prebuilt/linux-x86_64/bin" >> $GITHUB_PATH
-          echo "NDK_HOME=$ANDROID_HOME/ndk/26.0.10792818/" > $GITHUB_ENV
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 22
-      - name: Install pnpm
-        uses: pnpm/action-setup@v4
-        with:
-          version: 10
-          run_install: false
-      - name: Get pnpm store directory
-        shell: bash
-        run: |
-          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
-      - name: Setup pnpm cache
-        uses: actions/cache@v4
-        with:
-          path: ${{ env.STORE_PATH }}
-          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
-          restore-keys: |
-            ${{ runner.os }}-pnpm-store-
-      - name: Install frontend dependencies
-        run: |
-          pnpm -r install
-          pnpm -r build
+          echo "NDK_HOME=$ANDROID_HOME/ndk/26.0.10792818/" >> $GITHUB_ENV
+      - name: Prepare build environment
+        uses: ./.github/actions/prepare-build
+        with:
+          target: ${{ matrix.TARGET }}
+          gui: false
+          pnpm: true
+          pnpm-build-filter: ''
+          token: ${{ secrets.GITHUB_TOKEN }}
       - uses: Swatinem/rust-cache@v2
         with:
           # The prefix cache key, this can be changed to start a new cache manually.
           # default: "v0-rust"
           prefix-key: ""
-      - name: Install rust target
-        run: |
-          bash ./.github/workflows/install_rust.sh
-          rustup target add aarch64-linux-android
-          rustup target add armv7-linux-androideabi
-          rustup target add i686-linux-android
-          rustup target add x86_64-linux-android
-      - name: Setup protoc
-        uses: arduino/setup-protoc@v3
-        with:
-          # GitHub repo token to use to avoid rate limiter
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build Android
+          shared-key: "gui-registry"
+          cache-targets: "false"
+      - name: Build
         run: |
           cd easytier-gui
-          pnpm tauri android build
-      - name: Compress
+          pnpm tauri android build --apk --target "$ARCH" --split-per-abi
+      - name: Collect artifact
         run: |
           mkdir -p ./artifacts/objects/
-          mv easytier-gui/src-tauri/gen/android/app/build/outputs/apk/universal/release/app-universal-release.apk ./artifacts/objects/
+          mv easytier-gui/src-tauri/gen/android/app/build/outputs/apk/*/release/*.apk ./artifacts/objects/

           if [[ $GITHUB_REF_TYPE =~ ^tag$ ]]; then
             TAG=$GITHUB_REF_NAME
@@ -134,23 +114,21 @@
           else
             TAG=$GITHUB_SHA
           fi
-          mv ./artifacts/objects/* ./artifacts
+          mv ./artifacts/objects/* ./artifacts/
           rm -rf ./artifacts/objects/
       - name: Archive artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v5
         with:
-          name: easytier-gui-${{ matrix.ARTIFACT_NAME }}
+          name: easytier-mobile-android-${{ matrix.ARCH }}
           path: |
             ./artifacts/*

   mobile-result:
-    if: needs.pre_job.outputs.should_skip != 'true' && always()
     runs-on: ubuntu-latest
-    needs:
-      - pre_job
-      - build-mobile
+    needs: [ pre_job, build-mobile ]
+    if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
     steps:
       - name: Mark result as failed
-        if: needs.build-mobile.result != 'success'
+        if: contains(needs.*.result, 'failure')
         run: exit 1
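The mobile workflow switches from one universal APK to per-ABI builds, so the collect step matches every ABI's output with the `apk/*/release/*.apk` glob instead of a single hard-coded path. A throwaway sketch of that glob against a fabricated directory tree (directory names are illustrative, loosely mirroring the output layout in the diff):

```shell
# Demonstrate the wildcard collection pattern on a temporary fake tree:
# one release APK per ABI directory, all matched by a single glob.
tmp=$(mktemp -d)
mkdir -p "$tmp/apk/arm64/release" "$tmp/apk/x86_64/release"
touch "$tmp/apk/arm64/release/app-arm64-release.apk" \
      "$tmp/apk/x86_64/release/app-x86_64-release.apk"

# The workflow uses mv with the same pattern; here we just count matches.
count=$(ls "$tmp"/apk/*/release/*.apk | wc -l)
echo "$count"   # 2

rm -rf "$tmp"
```

Because the glob expands per matrix job, each `--split-per-abi` build only ever produces APKs for its own `$ARCH`, and the artifact name `easytier-mobile-android-${{ matrix.ARCH }}` keeps the uploads from colliding.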
+44
@@ -0,0 +1,44 @@
+name: Nix Check
+
+on:
+  push:
+    branches: ["main", "develop"]
+    paths:
+      - "**/*.nix"
+      - "flake.lock"
+      - "rust-toolchain.toml"
+  pull_request:
+    branches: ["main", "develop"]
+    types: [opened, synchronize, reopened, ready_for_review]
+    paths:
+      - "**/*.nix"
+      - "flake.lock"
+      - "rust-toolchain.toml"
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
+jobs:
+  check-full-shell:
+    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v5
+      - name: Install Nix
+        uses: cachix/install-nix-action@v27
+        with:
+          nix_path: nixpkgs=channel:nixos-unstable
+      - name: Magic Nix Cache
+        uses: DeterminateSystems/magic-nix-cache-action@v6
+      - name: Warm up full devShell
+        run: nix develop .#full --command true
+      - name: Cargo check in flake environment
+        run: nix develop .#full --command cargo check
+      - name: Cargo build in flake environment
+        run: nix develop .#full --command cargo build
+95 -34
@@ -3,10 +3,18 @@ name: EasyTier OHOS
 on:
   push:
     branches: ["develop", "main", "releases/**"]
+    tags:
+      - 'v*'
+      - '!*-pre'
   pull_request:
     branches: ["develop", "main"]
+    types: [opened, synchronize, reopened, ready_for_review]
   workflow_dispatch:

+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true

 env:
   CARGO_TERM_COLOR: always
@@ -17,18 +25,29 @@ defaults:
 jobs:
   cargo_fmt_check:
+    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
-      - name: fmt check
+      - uses: actions/checkout@v5
+      - name: Prepare build environment
+        uses: ./.github/actions/prepare-build
+        with:
+          gui: false
+          pnpm: false
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
+        with:
+          components: rustfmt
+      - name: Check formatting
         working-directory: ./easytier-contrib/easytier-ohrs
-        run: |
-          bash ../../.github/workflows/install_rust.sh
-          rustup component add rustfmt
-          cargo fmt --all -- --check
+        run: cargo fmt --all -- --check

   pre_job:
     # continue-on-error: true # Uncomment once integration is finished
     runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || !github.event.pull_request.draft
     # Map a step output to a job output
     outputs:
       # do not skip push on branch starts with releases/
@@ -41,55 +60,71 @@
         concurrent_skipping: "same_content_newer"
         skip_after_successful_duplicate: "true"
         cancel_others: "true"
-        paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-contrib/easytier-ohrs/**", ".github/workflows/ohos.yml", ".github/workflows/install_rust.sh"]'
+        paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-contrib/easytier-ohrs/**", ".github/workflows/ohos.yml", ".github/actions/**"]'

   build-ohos:
     runs-on: ubuntu-latest
     needs: pre_job
+    env:
+      OHPM_PUBLISH_CODE: ${{ secrets.OHPM_PUBLISH_CODE }}
     if: needs.pre_job.outputs.should_skip != 'true'
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v5
       - name: Install dependencies
         run: |
           sudo apt-get update
-          sudo apt-get install -y \
+          sudo apt-get install -qq \
             build-essential \
             wget \
             unzip \
             git \
             pkg-config curl libgl1-mesa-dev expect
-          sudo apt-get clean
-      - name: Count commits since last tag on upstream main
+      - name: Resolve easytier version
         run: |
+          set -e
           UPSTREAM_REPO="https://github.com/EasyTier/EasyTier.git"
           git remote add upstream "$UPSTREAM_REPO" 2>/dev/null || true
-          git fetch upstream --tags --force
+          git fetch --unshallow upstream main || git fetch upstream main
+          git fetch --tags upstream --force

-          # latest commit on upstream/main
-          git fetch upstream main
+          # version from Cargo.toml
+          CARGO_VERSION=$(cargo metadata --format-version 1 --no-deps --manifest-path easytier/Cargo.toml \
+            | jq -r '.packages[0].version')

+          # get the latest tag on upstream/main
           LAST_TAG=$(git describe --tags --abbrev=0 upstream/main 2>/dev/null || echo "")
+          LAST_TAG_VERSION="${LAST_TAG#v}"

+          # semantic version comparison
+          version_gt() {
+            [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ] && [ "$1" != "$2" ]
+          }

-          if [ -z "$LAST_TAG" ]; then
+          if [ -z "$LAST_TAG_VERSION" ]; then
+            BASE_VERSION="$CARGO_VERSION"
             DIFF_COUNT=$(git rev-list --count upstream/main)
+          elif version_gt "$CARGO_VERSION" "$LAST_TAG_VERSION"; then
+            BASE_VERSION="$CARGO_VERSION"
+            DIFF_COUNT=0
           else
+            BASE_VERSION="$LAST_TAG_VERSION"
             DIFF_COUNT=$(git rev-list --count "${LAST_TAG}..upstream/main")
           fi

-          echo "TAG_COMMIT_DIFF=$DIFF_COUNT"
-          echo "TAG_COMMIT_DIFF=$DIFF_COUNT" >> $GITHUB_ENV
-      - name: Get easytier version
-        run: |
-          EASYTIER_CARGO_VERSION=$(cargo metadata --format-version 1 --no-deps --manifest-path easytier/Cargo.toml \
-            | jq -r '.packages[0].version')
-          EASYTIER_VERSION="${EASYTIER_CARGO_VERSION}-${TAG_COMMIT_DIFF}"
-          echo "EASYTIER_VERSION=${EASYTIER_VERSION}" >> $GITHUB_ENV
+          COMMIT_HASH=$(git rev-parse --short upstream/main)
+          EASYTIER_VERSION="${BASE_VERSION}-${DIFF_COUNT}-${COMMIT_HASH}"
+          echo "EASYTIER_VERSION=$EASYTIER_VERSION"
+          echo "EASYTIER_VERSION=$EASYTIER_VERSION" >> $GITHUB_ENV

           cd ./easytier-contrib/easytier-ohrs/package
           jq --arg v "$EASYTIER_VERSION" '.version = $v' oh-package.json5 > oh-package.tmp.json5
           mv oh-package.tmp.json5 oh-package.json5

       - name: Generate CHANGELOG.md for current commit
         working-directory: ./easytier-contrib/easytier-ohrs/package
         run: |
@@ -115,6 +150,15 @@
         run: |
           echo "TARGET_ARCH=aarch64-linux-ohos" >> $GITHUB_ENV
+          rustup install stable
+          rustup default stable
+          rustup target add aarch64-unknown-linux-ohos
+      - uses: taiki-e/install-action@v2
+        with:
+          tool: ohrs

       - name: Create clang wrapper script
         run: |
           sudo mkdir -p $OHOS_NDK_HOME/native/llvm
@@ -128,38 +172,50 @@
           EOF
           sudo chmod +x $OHOS_NDK_HOME/native/llvm/aarch64-unknown-linux-ohos-clang.sh

-      - name: Build
+      - name: Build latest Har
         working-directory: ./easytier-contrib/easytier-ohrs
         run: |
           sudo apt-get install -y llvm clang lldb lld
           sudo apt-get install -y protobuf-compiler
-          bash ../../.github/workflows/install_rust.sh
           source env.sh
-          cargo install ohrs
-          rustup target add aarch64-unknown-linux-ohos
+          cargo update easytier
           ohrs doctor
           ohrs build --release --arch aarch
           ohrs artifact
           mv package.har easytier-ohrs.har

+      - name: Build Release Package
+        if: startsWith(github.ref, 'refs/tags/')
+        working-directory: ./easytier-contrib/easytier-ohrs
+        run: |
+          echo "🎉 Official Release detected. Building easytier-release..."
+          TAG_NAME="${{ github.ref_name }}"
+          TAG_VERSION="${TAG_NAME#v}"
+          echo "Release Version: $TAG_VERSION"
+          cd package
+          jq --arg v "$TAG_VERSION" '.name = "easytier-release" | .version = $v' oh-package.json5 > oh-package.tmp.json5 && mv oh-package.tmp.json5 oh-package.json5
+          cd ..
+          ohrs build --release --arch aarch
+          cd dist/arm64-v8a
+          mv libeasytier_ohrs.so libeasytier_release.so
+          cd ../..
+          ohrs artifact
+          mv package.har easytier-release.har

       - name: Upload artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v5
         with:
           name: easytier-ohos
           path: |
             ./easytier-contrib/easytier-ohrs/easytier-ohrs.har
+            ./easytier-contrib/easytier-ohrs/dist/arm64-v8a/libeasytier_ohrs.so
           retention-days: 5
           if-no-files-found: error

       - name: Publish To Center Ohpm
-        if: github.event_name == 'push'
         working-directory: ./easytier-contrib/easytier-ohrs
         env:
-          OHPM_PUBLISH_CODE: ${{ secrets.OHPM_PUBLISH_CODE }}
           OHPM_PRIVATE_KEY: ${{ secrets.OHPM_PRIVATE_KEY }}
           OHPM_KEY_PASSPHRASE: ${{ secrets.OHPM_KEY_PASSPHRASE }}
+        if: ${{ env.OHPM_PUBLISH_CODE != '' && github.event_name == 'push' }}
         run: |
           ohpm config set publish_id "$OHPM_PUBLISH_CODE"
           ohpm config set publish_registry https://ohpm.openharmony.cn/ohpm
@@ -176,10 +232,15 @@
           ohpm publish easytier-ohrs.har

       - name: Publish To Private Ohpm
-        if: github.event_name == 'push'
         working-directory: ./easytier-contrib/easytier-ohrs
+        if: ${{ env.OHPM_PUBLISH_CODE != '' && github.event_name == 'push' }}
         run: |
           printf '%s' "${{ secrets.CODEARTS_PRIVATE_OHPM }}" > ~/.ohpm/.ohpmrc
           ohpm config set strict_ssl false
           ohpm publish easytier-ohrs.har
+          if [ -f "easytier-release.har" ]; then
+            echo "🚀 Publishing Release package..."
+            ohpm publish easytier-release.har
+          fi
           curl --header "Content-Type: application/json" --request POST --data "{}" ${{ secrets.CODEARTS_WEBHOOKS }}
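The new "Resolve easytier version" step picks the larger of the Cargo.toml version and the latest upstream tag, using `sort -V` for the semantic comparison; the chosen base then gets the commit-distance and short-hash suffix. The selection logic, extracted as a standalone sketch (the wrapper `pick_base_version` is illustrative, not part of the workflow):

```shell
# version_gt A B: true if A is strictly newer than B under version sort.
# This is the same helper the workflow defines inline.
version_gt() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ] && [ "$1" != "$2" ]
}

# Pick the base version the same way the workflow does: prefer the Cargo.toml
# version when there is no tag yet or when Cargo.toml is ahead of the tag.
pick_base_version() {
  CARGO_VERSION=$1
  LAST_TAG_VERSION=$2
  if [ -z "$LAST_TAG_VERSION" ] || version_gt "$CARGO_VERSION" "$LAST_TAG_VERSION"; then
    echo "$CARGO_VERSION"
  else
    echo "$LAST_TAG_VERSION"
  fi
}

pick_base_version 2.6.4 2.6.3   # -> 2.6.4 (Cargo.toml is ahead of the tag)
pick_base_version 2.6.4 2.10.0  # -> 2.10.0 (sort -V orders 2.10 after 2.6)
```

Plain string comparison would get the second case wrong (`"2.10.0" < "2.6.4"` lexically), which is why the step relies on `sort -V` rather than `[ "$a" \> "$b" ]`.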
+3 -3
@@ -18,7 +18,7 @@ on:
       version:
         description: 'Version for this release'
         type: string
-        default: 'v2.5.0'
+        default: 'v2.6.4'
         required: true
       make_latest:
         description: 'Mark this release as latest'
@@ -35,7 +35,7 @@ jobs:
     steps:
       -
        name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5

       - name: Download Core Artifact
         uses: dawidd6/action-download-artifact@v11
@@ -92,4 +92,4 @@ jobs:
           files: |
             ./zipped_assets/*
           token: ${{ secrets.GITHUB_TOKEN }}
           tag_name: ${{ inputs.version }}
+116 -68
@@ -2,12 +2,18 @@ name: EasyTier Test
 on:
   push:
-    branches: ["develop", "main"]
+    branches: [ "develop", "main" ]
   pull_request:
-    branches: ["develop", "main"]
+    branches: [ "develop", "main" ]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true

 env:
   CARGO_TERM_COLOR: always
+  # RUSTC_WRAPPER: "sccache"
+  # SCCACHE_GHA_ENABLED: "true"

 defaults:
   run:
@@ -28,22 +34,104 @@ jobs:
           # All of these options are optional, so you can remove them if you are happy with the defaults
           concurrent_skipping: 'never'
           skip_after_successful_duplicate: 'true'
-          paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/test.yml", ".github/workflows/install_gui_dep.sh", ".github/workflows/install_rust.sh"]'
+          paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/test.yml", ".github/actions/**"]'

-  test:
-    runs-on: ubuntu-22.04
-    needs: pre_job
-    if: needs.pre_job.outputs.should_skip != 'true'
-    steps:
-      - uses: actions/checkout@v3
-      - name: Setup protoc
-        uses: arduino/setup-protoc@v3
-        with:
-          # GitHub repo token to use to avoid rate limiter
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
+  check:
+    name: Run linters & check
+    runs-on: ubuntu-latest
+    needs: pre_job
+    if: needs.pre_job.outputs.should_skip != 'true'
+    steps:
+      - uses: actions/checkout@v5
+      - name: Prepare build environment
+        uses: ./.github/actions/prepare-build
+        with:
+          gui: true
+          pnpm: true
+          token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
+        with:
+          components: rustfmt,clippy
+          rustflags: ''
+      - uses: taiki-e/install-action@cargo-hack
+      - name: Check formatting
+        if: ${{ !cancelled() }}
+        run: cargo fmt --all -- --check
+      - name: Check Clippy
+        if: ${{ !cancelled() }}
+        run: cargo clippy --all-targets --features full --all -- -D warnings
+      - name: Check features
+        if: ${{ !cancelled() }}
+        run: cargo hack check --package easytier --each-feature --exclude-features macos-ne --verbose
+      - name: Check Cargo.lock is up to date
+        if: ${{ !cancelled() }}
+        run: |
+          if ! cargo metadata --format-version 1 --locked > /dev/null; then
+            echo "::error::Cargo.lock is out of date. Run cargo generate-lockfile or cargo build locally, then commit Cargo.lock."
+            exit 1
+          fi
+
+  pre-test:
+    name: Build test
+    runs-on: ubuntu-latest
+    needs: pre_job
+    if: needs.pre_job.outputs.should_skip != 'true'
+    steps:
+      - uses: actions/checkout@v5
+      - name: Prepare build environment
+        uses: ./.github/actions/prepare-build
+        with:
+          gui: true
+          pnpm: true
+          token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: Swatinem/rust-cache@v2
+      - uses: taiki-e/install-action@nextest
+      - name: Archive test
+        run: cargo nextest archive --archive-file tests.tar.zst --package easytier --features full
+      - uses: actions/upload-artifact@v5
+        with:
+          name: tests
+          path: tests.tar.zst
+          retention-days: 1
+
+  test_matrix:
+    name: Test (${{ matrix.name }})
+    runs-on: ubuntu-latest
+    needs: [ pre_job, pre-test ]
+    if: needs.pre_job.outputs.should_skip != 'true'
+    strategy:
+      fail-fast: false
+      matrix:
+        include:
+          - name: "easytier"
+            opts: "-E 'not test(tests::three_node)' --test-threads 1 --no-fail-fast"
+          - name: "three_node"
+            opts: "-E 'test(tests::three_node) and not test(subnet_proxy_three_node_test)' --test-threads 1 --no-fail-fast"
+          - name: "three_node::subnet_proxy_three_node_test"
+            opts: "-E 'test(subnet_proxy_three_node_test)' --test-threads 1 --no-fail-fast"
+    steps:
+      - uses: actions/checkout@v5
       - name: Setup tools for test
         run: sudo apt install bridge-utils
+      - name: Setup upnpd for test
+        run: |
+          sudo apt-get update
+          sudo DEBIAN_FRONTEND=noninteractive apt-get install -y miniupnpd miniupnpd-iptables iptables
       - name: Setup system for test
         run: |
@@ -53,63 +141,23 @@ jobs:
           sudo sysctl net.ipv6.conf.lo.disable_ipv6=0
           sudo ip addr add 2001:db8::2/64 dev lo
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 22
-      - name: Install pnpm
-        uses: pnpm/action-setup@v4
-        with:
-          version: 10
-          run_install: false
-      - name: Get pnpm store directory
-        shell: bash
-        run: |
-          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
-      - name: Setup pnpm cache
-        uses: actions/cache@v4
-        with:
-          path: ${{ env.STORE_PATH }}
-          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
-          restore-keys: |
-            ${{ runner.os }}-pnpm-store-
-      - name: Install frontend dependencies
-        run: |
-          pnpm -r install
-          pnpm -r --filter "./easytier-web/*" build
-      - name: Cargo cache
-        uses: actions/cache@v4
-        with:
-          path: |
-            ~/.cargo
-            ./target
-          key: ${{ runner.os }}-cargo-test-${{ hashFiles('**/Cargo.lock') }}
-      - name: Install GUI dependencies (Used by clippy)
-        run: |
-          bash ./.github/workflows/install_gui_dep.sh
-          bash ./.github/workflows/install_rust.sh
-          rustup component add rustfmt
-          rustup component add clippy
-      - name: Check formatting
-        if: ${{ !cancelled() }}
-        run: cargo fmt --all -- --check
-      - name: Check Clippy
-        if: ${{ !cancelled() }}
-        # NOTE: tauri need `dist` dir in build.rs
-        run: |
-          mkdir -p easytier-gui/dist
-          cargo clippy --all-targets --all-features --all -- -D warnings
+      - uses: taiki-e/install-action@nextest
+      - name: Download tests
+        uses: actions/download-artifact@v4
+        with:
+          name: tests
       - name: Run tests
         run: |
           sudo prlimit --pid $$ --nofile=1048576:1048576
-          sudo -E env "PATH=$PATH" cargo test --no-default-features --features=full --verbose -- --test-threads=1
-          sudo chown -R $USER:$USER ./target
-          sudo chown -R $USER:$USER ~/.cargo
+          sudo -E env "PATH=$PATH" cargo nextest run --archive-file tests.tar.zst ${{ matrix.opts }}
+
+  test:
+    runs-on: ubuntu-latest
+    needs: [ pre_job, check, test_matrix ]
+    if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
+    steps:
+      - name: Mark result as failed
+        if: contains(needs.*.result, 'failure')
+        run: exit 1
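The `pre-test`/`test_matrix` split above builds all test binaries once and then replays filtered slices of the archive in parallel jobs. A dry-run sketch of the same two commands (assumes `cargo-nextest` is installed and the `easytier` package exists; set `DRY_RUN=0` to actually execute them inside the workspace):

```shell
#!/bin/sh
# Sketch of the archive-once / run-filtered-slices pattern used by the
# workflow. With DRY_RUN=1 (default) the commands are only printed.
set -eu
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Build every test binary once into a relocatable archive.
run cargo nextest archive --archive-file tests.tar.zst --package easytier --features full

# Replay one filter-expression slice of the archive, as each matrix job does.
run cargo nextest run --archive-file tests.tar.zst \
    -E 'test(tests::three_node) and not test(subnet_proxy_three_node_test)' \
    --test-threads 1 --no-fail-fast
```

The filter expressions after `-E` are nextest's own filterset language; the three matrix entries partition the suite so the slow `three_node` integration tests run on their own runners.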
+3 -3
@@ -26,7 +26,7 @@ Thank you for your interest in contributing to EasyTier! This document provides
 #### Required Tools

 - Node.js v21 or higher
 - pnpm v9 or higher
-- Rust toolchain (version 1.89)
+- Rust toolchain (version 1.95)
 - LLVM and Clang
 - Protoc (Protocol Buffers compiler)
@@ -79,8 +79,8 @@ sudo apt install -y bridge-utils
 2. Install dependencies:

 ```bash
 # Install Rust toolchain
-rustup install 1.89
-rustup default 1.89
+rustup install 1.95
+rustup default 1.95

 # Install project dependencies
 pnpm -r install
+3 -3
@@ -34,7 +34,7 @@
 #### Required Tools

 - Node.js v21 or higher
 - pnpm v9 or higher
-- Rust toolchain (version 1.89)
+- Rust toolchain (version 1.95)
 - LLVM and Clang
 - Protoc (Protocol Buffers compiler)
@@ -87,8 +87,8 @@ sudo apt install -y bridge-utils
 2. Install dependencies:

 ```bash
 # Install Rust toolchain
-rustup install 1.89
-rustup default 1.89
+rustup install 1.95
+rustup default 1.95

 # Install project dependencies
 pnpm -r install
Generated
+2195 -1135
File diff suppressed because it is too large
+4
@@ -14,6 +14,10 @@ exclude = [
     "easytier-contrib/easytier-ohrs", # it needs ohrs sdk
 ]

+[workspace.package]
+edition = "2024"
+rust-version = "1.95"
+
 [profile.dev]
 panic = "unwind"
 debug = 2
+31 -28
@@ -48,40 +48,43 @@
 Choose the installation method that best suits your needs:

+Linux (Recommended):
+
 ```bash
-# 1. Download pre-built binary (Recommended, All platforms supported)
-# Visit https://github.com/EasyTier/EasyTier/releases
-
-# 2. Install via cargo (Latest development version)
-cargo install --git https://github.com/EasyTier/EasyTier.git easytier
-
-# 3. Install via Docker
-# See https://easytier.cn/en/guide/installation.html#installation-methods
-
-# 4. Linux Quick Install
-wget -O- https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/install.sh | sudo bash -s install
-
-# 5. MacOS via Homebrew
+curl -fsSL "https://github.com/EasyTier/EasyTier/blob/main/script/install.sh?raw=true" | sudo bash -s install
+```
+
+Homebrew (MacOS/Linux):
+
+```bash
 brew tap brewforge/chinese
 brew install --cask easytier-gui
-
-# 6. OpenWrt Luci Web UI
-# Visit https://github.com/EasyTier/luci-app-easytier
-
-# 7. (Optional) Install shell completions:
-easytier-core --gen-autocomplete fish > ~/.config/fish/completions/easytier-core.fish
-easytier-cli gen-autocomplete fish > ~/.config/fish/completions/easytier-cli.fish
 ```
+
+Windows (Recommended, run with administrator privileges):
+
+```powershell
+irm "https://github.com/EasyTier/EasyTier/blob/main/script/install.ps1?raw=true" | iex
+```
+
+Install via cargo (Latest development version):
+
+```bash
+cargo install --git https://github.com/EasyTier/EasyTier.git easytier
+```
+
+[Install pre-built binary](https://github.com/EasyTier/EasyTier/releases) (Recommended, All platforms supported)
+[Install via Docker](https://easytier.cn/en/guide/installation.html#installation-methods)
+[Install OpenWrt ipk package](https://github.com/EasyTier/luci-app-easytier)
+
+Additional steps:
+
+[One-Click Register Service](https://easytier.cn/en/guide/network/oneclick-install-as-service.html) (Automatically start when the system boots and run in the background)

 ### 🚀 Basic Usage

 #### Quick Networking with Shared Nodes

 EasyTier supports quick networking using shared public nodes. When you don't have a public IP, you can use the free shared nodes provided by the EasyTier community. Nodes will automatically attempt NAT traversal and establish P2P connections. When P2P fails, data will be relayed through shared nodes.
-
-The currently deployed shared public node is `tcp://public.easytier.cn:11010`.

 When using shared nodes, each node entering the network needs to provide the same `--network-name` and `--network-secret` parameters as the unique identifier of the network.

 Taking two nodes as an example (Please use more complex network name to avoid conflicts):
@@ -90,14 +93,14 @@ Taking two nodes as an example (Please use more complex network name to avoid co
 ```bash
 # Run with administrator privileges
-sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
+sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP>:11010
 ```

 2. Run on Node B:

 ```bash
 # Run with administrator privileges
-sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
+sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP>:11010
 ```

 After successful execution, you can check the network status using `easytier-cli`:
@@ -105,9 +108,9 @@ After successful execution, you can check the network status using `easytier-cli
 ```text
 | ipv4         | hostname       | cost  | lat_ms | loss_rate | rx_bytes | tx_bytes | tunnel_proto | nat_type | id         | version         |
 | ------------ | -------------- | ----- | ------ | --------- | -------- | -------- | ------------ | -------- | ---------- | --------------- |
-| 10.126.126.1 | abc-1          | Local | *      | *         | *        | *        | udp          | FullCone | 439804259  | 2.5.0-70e69a38~ |
+| 10.126.126.1 | abc-1          | Local | *      | *         | *        | *        | udp          | FullCone | 439804259  | 2.6.2-70e69a38~ |
-| 10.126.126.2 | abc-2          | p2p   | 3.452  | 0         | 17.33 kB | 20.42 kB | udp          | FullCone | 390879727  | 2.5.0-70e69a38~ |
+| 10.126.126.2 | abc-2          | p2p   | 3.452  | 0         | 17.33 kB | 20.42 kB | udp          | FullCone | 390879727  | 2.6.2-70e69a38~ |
-|              | PublicServer_a | p2p   | 27.796 | 0.000     | 50.01 kB | 67.46 kB | tcp          | Unknown  | 3771642457 | 2.5.0-70e69a38~ |
+|              | PublicServer_a | p2p   | 27.796 | 0.000     | 50.01 kB | 67.46 kB | tcp          | Unknown  | 3771642457 | 2.6.2-70e69a38~ |
 ```

 You can test connectivity between nodes:
@@ -124,7 +127,7 @@ To improve availability, you can connect to multiple shared nodes simultaneously
 ```bash
 # Connect to multiple shared nodes
-sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010 -p udp://public.easytier.cn:11010
+sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP1>:11010 -p udp://<SharedNodeIP2>:11010
 ```

 Once your network is set up successfully, you can easily configure it to start automatically on system boot. Refer to the [One-Click Register Service guide](https://easytier.cn/en/guide/network/oneclick-install-as-service.html) for step-by-step instructions on registering EasyTier as a system service.
+32 -30
@@ -48,40 +48,42 @@
 Choose the installation method that best suits your needs:

+Linux (Recommended):
+
 ```bash
-# 1. Download pre-built binary (Recommended, all platforms supported)
-# Visit https://github.com/EasyTier/EasyTier/releases
-
-# 2. Install via cargo (latest development version)
-cargo install --git https://github.com/EasyTier/EasyTier.git easytier
-
-# 3. Install via Docker
-# See https://easytier.cn/guide/installation.html#%E5%AE%89%E8%A3%85%E6%96%B9%E5%BC%8F
-
-# 4. Linux quick install
-wget -O- https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/install.sh | sudo bash -s install
-
-# 5. MacOS via Homebrew
+curl -fsSL "https://github.com/EasyTier/EasyTier/blob/main/script/install.sh?raw=true" | sudo bash -s install
+```
+
+Homebrew (MacOS/Linux):
+
+```bash
 brew tap brewforge/chinese
 brew install --cask easytier-gui
-
-# 6. OpenWrt Luci web UI
-# Visit https://github.com/EasyTier/luci-app-easytier
-
-# 7. (Optional) Install shell completions:
-# Fish completions
-easytier-core --gen-autocomplete fish > ~/.config/fish/completions/easytier-core.fish
-easytier-cli gen-autocomplete fish > ~/.config/fish/completions/easytier-cli.fish
 ```
+
+Windows (Recommended, run with administrator privileges):
+
+```powershell
+irm "https://github.com/EasyTier/EasyTier/blob/main/script/install.ps1?raw=true" | iex
+```
+
+Install via cargo (latest development version):
+
+```bash
+cargo install --git https://github.com/EasyTier/EasyTier.git easytier
+```
+
+[Download pre-built binary](https://github.com/EasyTier/EasyTier/releases) (Recommended, all platforms supported)
+[Install via Docker](https://easytier.cn/guide/installation.html#%E5%AE%89%E8%A3%85%E6%96%B9%E5%BC%8F)
+[Install OpenWrt ipk package](https://github.com/EasyTier/luci-app-easytier)
+
+Additional steps:
+
+[One-Click Register Service](https://easytier.cn/guide/network/oneclick-install-as-service.html) (Starts automatically at boot and runs in the background)

 ### 🚀 Basic Usage

 #### Quick Networking with Shared Nodes

-EasyTier supports quick networking using shared public nodes. When you don't have a public IP, you can use the free shared nodes provided by the EasyTier community. Nodes will automatically attempt NAT traversal and establish P2P connections. When P2P fails, data will be relayed through shared nodes.
+EasyTier supports quick networking using shared nodes. When you don't have a public IP, you can use public shared nodes. Nodes will automatically attempt NAT traversal and establish P2P connections. When P2P fails, data will be relayed through shared nodes.
-
-The currently deployed shared public node is `tcp://public.easytier.cn:11010`.

 When using shared nodes, each node entering the network needs to provide the same `--network-name` and `--network-secret` parameters as the unique identifier of the network.
@@ -91,14 +93,14 @@ EasyTier supports quick networking using shared public nodes. When you don't hav
 ```bash
 # Run with administrator privileges
-sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
+sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP>:11010
 ```

 2. Run on Node B:

 ```bash
 # Run with administrator privileges
-sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
+sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP>:11010
 ```

 After successful execution, you can check the network status using `easytier-cli`:
@@ -106,9 +108,9 @@ sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.ea
 ```text
 | ipv4         | hostname       | cost  | lat_ms | loss_rate | rx_bytes | tx_bytes | tunnel_proto | nat_type | id         | version         |
 | ------------ | -------------- | ----- | ------ | --------- | -------- | -------- | ------------ | -------- | ---------- | --------------- |
-| 10.126.126.1 | abc-1          | Local | *      | *         | *        | *        | udp          | FullCone | 439804259  | 2.5.0-70e69a38~ |
+| 10.126.126.1 | abc-1          | Local | *      | *         | *        | *        | udp          | FullCone | 439804259  | 2.6.2-70e69a38~ |
-| 10.126.126.2 | abc-2          | p2p   | 3.452  | 0         | 17.33 kB | 20.42 kB | udp          | FullCone | 390879727  | 2.5.0-70e69a38~ |
+| 10.126.126.2 | abc-2          | p2p   | 3.452  | 0         | 17.33 kB | 20.42 kB | udp          | FullCone | 390879727  | 2.6.2-70e69a38~ |
-|              | PublicServer_a | p2p   | 27.796 | 0.000     | 50.01 kB | 67.46 kB | tcp          | Unknown  | 3771642457 | 2.5.0-70e69a38~ |
+|              | PublicServer_a | p2p   | 27.796 | 0.000     | 50.01 kB | 67.46 kB | tcp          | Unknown  | 3771642457 | 2.6.2-70e69a38~ |
 ```

 You can test connectivity between nodes:
@@ -125,7 +127,7 @@ ping 10.126.126.2
 ```bash
 # Connect to multiple shared nodes
-sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010 -p udp://public.easytier.cn:11010
+sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<PublicNodeIP>:11010 -p udp://<PublicNodeIP>:11010
 ```

 #### Decentralized Networking
@@ -1,7 +1,7 @@
 [package]
 name = "easytier-android-jni"
 version = "0.1.0"
-edition = "2021"
+edition.workspace = true

 [lib]
 crate-type = ["cdylib"]
@@ -1,7 +1,7 @@
 use easytier::proto::api::manage::{NetworkInstanceRunningInfo, NetworkInstanceRunningInfoMap};
+use jni::JNIEnv;
 use jni::objects::{JClass, JObjectArray, JString};
 use jni::sys::{jint, jstring};
-use jni::JNIEnv;
 use once_cell::sync::Lazy;
 use std::ffi::{CStr, CString};
 use std::ptr;
@@ -15,7 +15,7 @@ pub struct KeyValuePair {
 }

 // Declare the external C functions
-extern "C" {
+unsafe extern "C" {
     fn set_tun_fd(inst_name: *const std::ffi::c_char, fd: std::ffi::c_int) -> std::ffi::c_int;
     fn get_error_msg(out: *mut *const std::ffi::c_char);
     fn free_string(s: *const std::ffi::c_char);
@@ -68,7 +68,7 @@ fn throw_exception(env: &mut JNIEnv, message: &str) {
 }

 /// Set the TUN file descriptor
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_setTunFd(
     mut env: JNIEnv,
     _class: JClass,
@@ -87,17 +87,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_setTunFd(
     unsafe {
         let result = set_tun_fd(inst_name_cstr.as_ptr(), fd);
-        if result != 0 {
-            if let Some(error) = get_last_error() {
-                throw_exception(&mut env, &error);
-            }
-        }
+        if result != 0
+            && let Some(error) = get_last_error()
+        {
+            throw_exception(&mut env, &error);
+        }
         result
     }
 }

 /// Parse the config
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_parseConfig(
     mut env: JNIEnv,
     _class: JClass,
@@ -115,17 +115,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_parseConfig(
     unsafe {
         let result = parse_config(config_cstr.as_ptr());
-        if result != 0 {
-            if let Some(error) = get_last_error() {
-                throw_exception(&mut env, &error);
-            }
-        }
+        if result != 0
+            && let Some(error) = get_last_error()
+        {
+            throw_exception(&mut env, &error);
+        }
         result
     }
 }

 /// Run a network instance
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_runNetworkInstance(
     mut env: JNIEnv,
     _class: JClass,
@@ -143,17 +143,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_runNetworkInstance(
     unsafe {
         let result = run_network_instance(config_cstr.as_ptr());
-        if result != 0 {
-            if let Some(error) = get_last_error() {
-                throw_exception(&mut env, &error);
-            }
-        }
+        if result != 0
+            && let Some(error) = get_last_error()
+        {
+            throw_exception(&mut env, &error);
+        }
         result
     }
 }

 /// Retain network instances
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
     mut env: JNIEnv,
     _class: JClass,
@@ -165,10 +165,10 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
     if instance_names.is_null() {
         unsafe {
             let result = retain_network_instance(ptr::null(), 0);
-            if result != 0 {
-                if let Some(error) = get_last_error() {
-                    throw_exception(&mut env, &error);
-                }
-            }
+            if result != 0
+                && let Some(error) = get_last_error()
+            {
+                throw_exception(&mut env, &error);
+            }
             return result;
         }
     }
@@ -187,10 +187,10 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
     if array_length == 0 {
         unsafe {
             let result = retain_network_instance(ptr::null(), 0);
-            if result != 0 {
-                if let Some(error) = get_last_error() {
-                    throw_exception(&mut env, &error);
-                }
-            }
+            if result != 0
+                && let Some(error) = get_last_error()
+            {
+                throw_exception(&mut env, &error);
+            }
             return result;
         }
     }
@@ -234,17 +234,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
     unsafe {
         let result = retain_network_instance(c_string_ptrs.as_ptr(), c_string_ptrs.len());
-        if result != 0 {
-            if let Some(error) = get_last_error() {
-                throw_exception(&mut env, &error);
-            }
-        }
+        if result != 0
+            && let Some(error) = get_last_error()
+        {
+            throw_exception(&mut env, &error);
+        }
         result
     }
 }

 /// Collect network infos
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_collectNetworkInfos(
     mut env: JNIEnv,
     _class: JClass,
@@ -304,7 +304,7 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_collectNetworkInfos(
 }

 /// Get the last error message
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_getLastError(
     env: JNIEnv,
     _class: JClass,
+1 -1
@@ -1,7 +1,7 @@
 [package]
 name = "easytier-ffi"
 version = "0.1.0"
-edition = "2021"
+edition.workspace = true

 [lib]
 crate-type = ["cdylib"]
+9 -9
@@ -30,7 +30,7 @@ fn set_error_msg(msg: &str) {

 /// # Safety
 /// Set the tun fd
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub unsafe extern "C" fn set_tun_fd(
     inst_name: *const std::ffi::c_char,
     fd: std::ffi::c_int,
@@ -59,7 +59,7 @@ pub unsafe extern "C" fn set_tun_fd(

 /// # Safety
 /// Get the last error message
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub unsafe extern "C" fn get_error_msg(out: *mut *const std::ffi::c_char) {
     let msg_buf = ERROR_MSG.lock().unwrap();
     if msg_buf.is_empty() {
@@ -74,7 +74,7 @@ pub unsafe extern "C" fn get_error_msg(out: *mut *const std::ffi::c_char) {
     }
 }

-#[no_mangle]
+#[unsafe(no_mangle)]
 pub extern "C" fn free_string(s: *const std::ffi::c_char) {
     if s.is_null() {
         return;
@@ -86,7 +86,7 @@ pub extern "C" fn free_string(s: *const std::ffi::c_char) {

 /// # Safety
 /// Parse the config
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub unsafe extern "C" fn parse_config(cfg_str: *const std::ffi::c_char) -> std::ffi::c_int {
     let cfg_str = unsafe {
         assert!(!cfg_str.is_null());
@@ -105,7 +105,7 @@ pub unsafe extern "C" fn parse_config(cfg_str: *const std::ffi::c_char) -> std::

 /// # Safety
 /// Run the network instance
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub unsafe extern "C" fn run_network_instance(cfg_str: *const std::ffi::c_char) -> std::ffi::c_int {
     let cfg_str = unsafe {
         assert!(!cfg_str.is_null());
@@ -144,7 +144,7 @@ pub unsafe extern "C" fn run_network_instance(cfg_str: *const std::ffi::c_char)

 /// # Safety
 /// Retain the network instance
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub unsafe extern "C" fn retain_network_instance(
     inst_names: *const *const std::ffi::c_char,
     length: usize,
@@ -188,7 +188,7 @@ pub unsafe extern "C" fn retain_network_instance(

 /// # Safety
 /// Collect the network infos
-#[no_mangle]
+#[unsafe(no_mangle)]
 pub unsafe extern "C" fn collect_network_infos(
     infos: *mut KeyValuePair,
     max_length: usize,
@@ -215,7 +215,7 @@ pub unsafe extern "C" fn collect_network_infos(
         if index >= max_length {
             break;
         }
-        let Some(key) = INSTANCE_MANAGER.get_network_instance_name(instance_id) else {
+        let Some(key) = INSTANCE_MANAGER.get_instance_name(instance_id) else {
            continue;
         };
         // convert value to json string
@@ -228,7 +228,7 @@ pub unsafe extern "C" fn collect_network_infos(
         };
         infos[index] = KeyValuePair {
-            key: std::ffi::CString::new(key.clone()).unwrap().into_raw(),
+            key: std::ffi::CString::new(key).unwrap().into_raw(),
             value: std::ffi::CString::new(value).unwrap().into_raw(),
         };
         index += 1;
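The `#[no_mangle]` to `#[unsafe(no_mangle)]` change throughout this file is the Rust 2024 edition spelling: exporting an unmangled symbol can collide with other symbols in the final binary, so the attribute must now be explicitly marked `unsafe`. Recent compilers accept the new spelling in older editions too, so a crate can migrate ahead of the edition bump. A minimal sketch (the `et_add` function is illustrative, not from the codebase):

```rust
// Export an unmangled C-ABI symbol. `unsafe(...)` acknowledges that the
// chosen symbol name `et_add` must be unique across the whole binary --
// a property the compiler can no longer guarantee once mangling is off.
#[unsafe(no_mangle)]
pub extern "C" fn et_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust as well; C callers would link against `et_add`.
    assert_eq!(et_add(2, 3), 5);
}
```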
+57 -26
@@ -1,43 +1,74 @@
 #!/data/adb/magisk/busybox sh
 MODDIR=${0%/*}
 MODULE_PROP="${MODDIR}/module.prop"
+IP_RULE_SCRIPT="${MODDIR}/hotspot_iprule.sh"
 ET_STATUS=""
 REDIR_STATUS=""
+IS_RUNNING=false
+
+# Make sure the helper script is executable
+chmod +x "${IP_RULE_SCRIPT}" 2>/dev/null

 # Update the description field in module.prop
 update_module_description() {
     local status_message=$1
-    sed -i "/^description=/c\description=[Status]${status_message}" ${MODULE_PROP}
+    # Only write when module.prop exists and the description actually changed
+    if [ -f "${MODULE_PROP}" ]; then
+        local current_desc=$(grep "^description=" "${MODULE_PROP}")
+        local new_desc="description=[Status] ${status_message}"
+        if [ "${current_desc}" != "${new_desc}" ]; then
+            sed -i "s#^description=.*#${new_desc}#" "${MODULE_PROP}"
+        fi
+    fi
 }

+# Determine the main program's state
 if [ -f "${MODDIR}/disable" ]; then
-    ET_STATUS="stopped"
+    IS_RUNNING=false
+    ET_STATUS="main program stopped"
-elif pgrep -f 'easytier-core' >/dev/null; then
+elif pgrep -f "${MODDIR}/easytier-core" >/dev/null; then
+    IS_RUNNING=true
-    if [ -f "${MODDIR}/config/command_args"]; then
-        ET_STATUS="main program started (launch-argument mode)"
+    if [ -f "${MODDIR}/config/command_args" ]; then
+        ET_STATUS="main program running (launch-argument mode)"
     else
-        ET_STATUS="main program started (config-file mode)"
+        ET_STATUS="main program running (config-file mode)"
     fi
+elif [ -z "$ET_STATUS" ]; then
+    # Neither disabled nor running: abnormal stop, or never started
+    ET_STATUS="main program failed to start or is not running"
 fi

-# An empty ET_STATUS means the module did not start normally; leave the status unchanged
-if [ -n "$ET_STATUS" ]; then
-    if [ -f "${MODDIR}/enable_IP_rule" ]; then
-        rm -f "${MODDIR}/enable_IP_rule"
-        ${MODDIR}/hotspot_iprule.sh del
-        REDIR_STATUS="forwarding disabled"
-        echo "Hotspot subnet forwarding disabled"
-        echo "[ET-NAT] IP rule disabled." >> "${MODDIR}/log.log"
-    else
-        touch "${MODDIR}/enable_IP_rule"
-        ${MODDIR}/hotspot_iprule.sh del
-        ${MODDIR}/hotspot_iprule.sh add_once
-        REDIR_STATUS="forwarding enabled"
-        echo "Hotspot subnet forwarding enabled; once the hotspot is up it will be added to the forwarding network automatically (requires the local-network cidr= parameter). The rule follows the hotspot on/off state and persists until forwarding is disabled."
-        echo "[ET-NAT] IP rule enabled." >> "${MODDIR}/log.log"
-    fi
-    update_module_description "${ET_STATUS} | ${REDIR_STATUS}"
+# Allow toggling the flag file whether or not the main program is running,
+# so the change takes effect on the next start
+if [ -f "${MODDIR}/enable_IP_rule" ]; then
+    rm -f "${MODDIR}/enable_IP_rule"
+    "${IP_RULE_SCRIPT}" del >/dev/null 2>&1
+    REDIR_STATUS="forwarding disabled"
+    echo "Hotspot subnet forwarding disabled"
+    echo "[ET-NAT] Action: IP rule disabled." >> "${MODDIR}/log.log"
 else
-    echo "The main program did not start normally; please check the config file first"
+    touch "${MODDIR}/enable_IP_rule"
+    if [ "$IS_RUNNING" = true ]; then
+        "${IP_RULE_SCRIPT}" del >/dev/null 2>&1
+        "${IP_RULE_SCRIPT}" add_once
+        echo "Forwarding rules take effect immediately; no restart needed"
+    else
+        echo "Main program is not running; forwarding rules take effect on next start"
+    fi
+    REDIR_STATUS="forwarding enabled"
+    echo "----------------------------------"
+    echo "Hotspot subnet forwarding enabled"
+    echo "Once the hotspot is up it will be added to the forwarding network automatically"
+    echo "The cidr parameter must be configured in advance"
+    echo "----------------------------------"
+    echo "[ET-NAT] Action: IP rule enabled." >> "${MODDIR}/log.log"
 fi
+
+sync
+update_module_description "${ET_STATUS}| ${REDIR_STATUS}"
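The new `update_module_description` reads the current value and writes only on change, which avoids needless `module.prop` rewrites on every invocation. The same pattern can be exercised standalone against a scratch file (paths and the `[Status]` prefix here are illustrative):

```shell
#!/bin/sh
# Sketch of the write-only-on-change update used by the module scripts.
set -eu
MODULE_PROP="$(mktemp)"
printf 'id=easytier_magisk\ndescription=[Status] old\n' > "${MODULE_PROP}"

update_module_description() {
    status_message=$1
    if [ -f "${MODULE_PROP}" ]; then
        current_desc=$(grep "^description=" "${MODULE_PROP}")
        new_desc="description=[Status] ${status_message}"
        # Skip the write (and the file mtime change) when nothing changed.
        if [ "${current_desc}" != "${new_desc}" ]; then
            sed -i "s#^description=.*#${new_desc}#" "${MODULE_PROP}"
        fi
    fi
}

update_module_description "running | forwarding enabled"
grep "^description=" "${MODULE_PROP}"
# prints: description=[Status] running | forwarding enabled
```

Using `#` as the `sed` delimiter keeps the substitution safe even though the status string contains `|` and spaces.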
+19 -9
@@ -1,9 +1,19 @@
-ui_print 'Installation complete'
-ui_print 'Current architecture:' + $ARCH
-ui_print 'Current system version:' + $API
-ui_print 'Install directory: /data/adb/modules/easytier_magisk'
-ui_print 'Config file location: /data/adb/modules/easytier_magisk/config/config.toml'
-ui_print 'To use custom launch arguments, rename /data/adb/modules/easytier_magisk/config/command_args_sample to command_args and edit its contents; the config file is ignored when custom launch arguments are used'
-ui_print 'After editing the config file, disable and re-enable the module in the Magisk app to apply it'
-ui_print 'Tap the action button to start/stop hotspot subnet forwarding; combined with the easytier subnet proxy this lets hotspot clients reach the easytier network'
-ui_print 'Remember to reboot'
+SKIPMOUNT=false
+PROPFILE=true
+POSTFSDATA=true
+LATESTARTSERVICE=true
+
+set_perm_recursive $MODPATH 0 0 0777 0777
+
+ui_print "System architecture: $ARCH"
+ui_print "System SDK version: $API"
+ui_print "EasyTier install location: /data/adb/modules/easytier_magisk"
+ui_print "Config file location: /data/adb/modules/easytier_magisk/config/config.toml"
+ui_print "To use launch-argument mode, rename /data/adb/modules/easytier_magisk/config/command_args_sample to command_args and edit its contents"
+ui_print "When a command_args file exists in the config directory, the module ignores config.toml"
+ui_print "----------------------------------"
+ui_print "Note: the launch-argument file must not contain \" or '; the config file has no such restriction"
+ui_print "----------------------------------"
+ui_print "No reboot is needed after changing the config: disable the EasyTier module in Magisk, wait 10 seconds, then re-enable it to apply"
+ui_print "Tap the module's Action button in Magisk to toggle hotspot subnet forwarding; the cidr parameter must be configured first"
+ui_print "Installation complete; reboot to take effect"
@@ -2,64 +2,111 @@
MODDIR=${0%/*}
CONFIG_FILE="${MODDIR}/config/config.toml"
COMMAND_ARGS="${MODDIR}/config/command_args"
LOG_FILE="${MODDIR}/log.log"
MODULE_PROP="${MODDIR}/module.prop"
EASYTIER="${MODDIR}/easytier-core"

# Replace any spaces that may appear in the device brand/model
BRAND=$(getprop ro.product.brand | tr ' ' '-')
MODEL=$(getprop ro.product.model | tr ' ' '-')
DEVICE_HOSTNAME="${BRAND}-${MODEL}"
REDIR_STATUS=""

# Update the description field in module.prop
update_module_description() {
    local status_message=$1
    # Only write when module.prop exists and the description actually changed
    if [ -f "${MODULE_PROP}" ]; then
        local current_desc=$(grep "^description=" "${MODULE_PROP}")
        local new_desc="description=[Status] ${status_message}"
        if [ "${current_desc}" != "${new_desc}" ]; then
            sed -i "s#^description=.*#${new_desc}#" "${MODULE_PROP}"
        fi
    fi
}

# Check for and initialize the TUN device
if [ ! -e /dev/net/tun ]; then
    if [ ! -d /dev/net ]; then
        mkdir -p /dev/net
    fi
    ln -s /dev/tun /dev/net/tun
fi

while true; do
    # Read the subnet-forwarding activation state
    if [ -f "${MODDIR}/enable_IP_rule" ]; then
        REDIR_STATUS="forwarding active"
    else
        REDIR_STATUS="forwarding disabled"
    fi

    # Check whether the module has been disabled
    if [ -f "${MODDIR}/disable" ]; then
        update_module_description "core stopped | ${REDIR_STATUS}"
        if pgrep -f "${EASYTIER}" >/dev/null; then
            echo "toggle control $(date "+%Y-%m-%d %H:%M:%S"): process exists, stopping it"
            pkill -f "${EASYTIER}"
        fi
        sleep 10s
        continue
    fi

    # Check whether the process is already running
    if pgrep -f "${EASYTIER}" >/dev/null; then
        sleep 10s
        continue
    fi

    # Check that a config file or a launch-arguments file exists
    if [ ! -f "${CONFIG_FILE}" ] && [ ! -f "${COMMAND_ARGS}" ]; then
        update_module_description "missing config file or command_args file"
        sleep 10s
        continue
    fi

    # If a command_args file exists in the config directory, use its contents as launch arguments
    if [ -f "${COMMAND_ARGS}" ]; then
        # Command-line argument mode
        CMD_CONTENT=$(tr '\r\n' ' ' < "${COMMAND_ARGS}")
        if echo "${CMD_CONTENT}" | grep -q "\-\-hostname"; then
            FINAL_ARGS="${CMD_CONTENT}"
        else
            FINAL_ARGS="${CMD_CONTENT} --hostname ${DEVICE_HOSTNAME}"
        fi
        TZ=Asia/Shanghai "${EASYTIER}" ${FINAL_ARGS} > "${LOG_FILE}" 2>&1 &
        STR_MODE="command_args mode"
    # Otherwise use config.toml as the launch configuration
    else
        # Config-file mode
        if grep -q "^[[:space:]]*hostname[[:space:]]*=" "${CONFIG_FILE}"; then
            TZ=Asia/Shanghai "${EASYTIER}" -c "${CONFIG_FILE}" > "${LOG_FILE}" 2>&1 &
        else
            TZ=Asia/Shanghai "${EASYTIER}" -c "${CONFIG_FILE}" --hostname "${DEVICE_HOSTNAME}" > "${LOG_FILE}" 2>&1 &
        fi
        STR_MODE="config-file mode"
    fi

    # Wait for the process to start
    sleep 5s

    # Post-start checks
    if pgrep -f "${EASYTIER}" >/dev/null; then
        if ! ip rule show | grep -q "lookup main"; then
            ip rule add from all lookup main
        fi
        update_module_description "core running (${STR_MODE}) | ${REDIR_STATUS}"
    else
        update_module_description "core failed to start; check the config file or launch arguments"
    fi
    sleep 10s
done
@@ -1,6 +1,6 @@
id=easytier_magisk
name=EasyTier_Magisk
-version=v2.5.0
+version=v2.6.4
versionCode=1
author=EasyTier
description=easytier magisk module @EasyTier(https://github.com/EasyTier/EasyTier)
@@ -1,3 +1,5 @@
MODDIR=${0%/*}
-pkill easytier-core # kill the easytier-core process
-rm -rf $MODDIR/*
+pkill -f "${MODDIR}/easytier-core"
+# Use ${MODDIR:?} to ensure the variable is non-empty, avoiding an accidental rm -rf /*
+rm -rf "${MODDIR:?}/"*
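The `${MODDIR:?}` guard makes the expansion fail when the variable is unset or empty, so the guarded command never runs instead of expanding to `rm -rf /*`. A small sketch of the behaviour (variable names are stand-ins):

```shell
# ${VAR:?} aborts the (sub)shell when VAR is unset or empty.
MODDIR=""
if ( : "${MODDIR:?MODDIR is empty}" ) 2>/dev/null; then
    empty_case=ran
else
    empty_case=aborted
fi

MODDIR=/tmp/easytier_demo_dir
if ( : "${MODDIR:?}" ) 2>/dev/null; then
    set_case=ran
else
    set_case=aborted
fi
echo "$empty_case $set_case"
```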
File diff suppressed because it is too large
@@ -7,6 +7,10 @@ edition = "2024"
crate-type=["cdylib"]

[dependencies]
async-trait = "0.1"
base64 = "0.22"
flate2 = "1.1"
gethostname = "1.1"
ohos-hilog-binding = {version = "*", features = ["redirect"]}
easytier = { path = "../../easytier" }
napi-derive-ohos = "1.1"
@@ -26,10 +30,16 @@ napi-ohos = { version = "1.1", default-features = false, features = [
    "web_stream",
] }
once_cell = "1.21.3"
ipnet = "2.10"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.125"
prost-reflect = { version = "0.14.5", default-features = false, features = ["derive"] }
rusqlite = { version = "0.32", features = ["bundled"] }
tracing-subscriber = "0.3.19"
tracing-core = "0.1.33"
tracing = "0.1.41"
tokio = { version = "1", features = ["rt-multi-thread", "sync", "time"] }
url = "2.5"
uuid = { version = "1.5.0", features = [
    "v4",
    "fast-rng",
@@ -0,0 +1,4 @@
pub(crate) mod repository;
pub(crate) mod services;
pub(crate) mod storage;
pub(crate) mod types;
@@ -0,0 +1,13 @@
#[path = "../../config_repo/field_store.rs"]
mod field_store;
#[path = "../../config_repo/import_export.rs"]
mod import_export;
#[path = "../../config_repo/legacy_migration.rs"]
mod legacy_migration;
#[path = "../../config_repo/validation.rs"]
mod validation;
#[path = "../../config_repo.rs"]
mod repo;
pub use repo::*;
@@ -0,0 +1,2 @@
pub(crate) mod schema_service;
pub(crate) mod share_link_service;
@@ -0,0 +1,414 @@
use easytier::proto::ALL_DESCRIPTOR_BYTES;
use napi_derive_ohos::napi;
use once_cell::sync::Lazy;
use prost_reflect::{Cardinality, DescriptorPool, FieldDescriptor, Kind, MessageDescriptor};
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct FieldOption {
pub label: String,
pub value: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct ValidationRule {
pub rule_type: String,
pub arg: String,
pub message: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct NetworkConfigSchema {
pub node_kind: String,
pub name: String,
pub field_number: i32,
pub type_name: Option<String>,
pub semantic_type: Option<String>,
pub value_kind: String,
pub is_list: bool,
pub required: bool,
pub default_value_text: Option<String>,
pub enum_options: Vec<FieldOption>,
pub validations: Vec<ValidationRule>,
pub children: Vec<NetworkConfigSchema>,
pub definitions: Vec<NetworkConfigSchema>,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct ConfigFieldMapping {
pub field_name: String,
pub field_number: i32,
}
static DESCRIPTOR_POOL: Lazy<DescriptorPool> = Lazy::new(|| {
DescriptorPool::decode(ALL_DESCRIPTOR_BYTES)
.expect("easytier descriptor pool should decode from embedded protobuf descriptors")
});
const NETWORK_CONFIG_MESSAGE_NAME: &str = "api.manage.NetworkConfig";
fn descriptor_pool() -> &'static DescriptorPool {
&DESCRIPTOR_POOL
}
fn network_config_descriptor() -> MessageDescriptor {
descriptor_pool()
.get_message_by_name(NETWORK_CONFIG_MESSAGE_NAME)
.expect("api.manage.NetworkConfig descriptor should exist")
}
fn field_default_value_text(field: &FieldDescriptor) -> Option<String> {
if field.is_list() || field.is_map() {
return Some("[]".to_string());
}
match field.kind() {
Kind::Bool => Some("false".to_string()),
Kind::String => Some("\"\"".to_string()),
Kind::Bytes => Some("\"\"".to_string()),
Kind::Int32
| Kind::Sint32
| Kind::Sfixed32
| Kind::Int64
| Kind::Sint64
| Kind::Sfixed64
| Kind::Uint32
| Kind::Fixed32
| Kind::Uint64
| Kind::Fixed64
| Kind::Float
| Kind::Double => Some("0".to_string()),
Kind::Enum(enum_desc) => enum_desc
.get_value(0)
.map(|value| value.number().to_string()),
Kind::Message(_) => None,
}
}
fn field_type_name(field: &FieldDescriptor) -> Option<String> {
match field.kind() {
Kind::Enum(enum_desc) => Some(enum_desc.full_name().to_string()),
Kind::Message(message_desc) => Some(message_desc.full_name().to_string()),
_ => None,
}
}
fn field_semantic_type(field: &FieldDescriptor) -> Option<String> {
match field.name() {
"virtual_ipv4" => Some("cidr_ip".to_string()),
"network_length" => Some("cidr_mask".to_string()),
"peer_urls" => Some("peer[]".to_string()),
"proxy_cidrs" => Some("cidr[]".to_string()),
"listener_urls" => Some("listener[]".to_string()),
"routes" => Some("route[]".to_string()),
"exit_nodes" => Some("ip[]".to_string()),
"relay_network_whitelist" => Some("network_name[]".to_string()),
"mapped_listeners" => Some("mapped_listener[]".to_string()),
"port_forwards" => Some("port_forward[]".to_string()),
_ => None,
}
}
fn enum_options(kind: Kind) -> Vec<FieldOption> {
match kind {
Kind::Enum(enum_desc) => enum_desc
.values()
.map(|value| FieldOption {
label: value.name().to_string(),
value: value.number().to_string(),
})
.collect(),
_ => Vec::new(),
}
}
fn should_expose_field(field: &FieldDescriptor) -> bool {
match field.containing_oneof() {
Some(_) => field
.field_descriptor_proto()
.proto3_optional
.unwrap_or(false),
None => true,
}
}
fn build_validations(field: &FieldDescriptor) -> Vec<ValidationRule> {
if field.cardinality() == Cardinality::Required {
return vec![ValidationRule {
rule_type: "required".to_string(),
arg: String::new(),
message: format!("{} is required", field.name()),
}];
}
Vec::new()
}
fn kind_to_value_kind(field: &FieldDescriptor) -> String {
if field.is_map() {
return "object".to_string();
}
match field.kind() {
Kind::Bool => "boolean".to_string(),
Kind::String | Kind::Bytes => "string".to_string(),
Kind::Int32
| Kind::Sint32
| Kind::Sfixed32
| Kind::Int64
| Kind::Sint64
| Kind::Sfixed64
| Kind::Uint32
| Kind::Fixed32
| Kind::Uint64
| Kind::Fixed64
| Kind::Float
| Kind::Double => "number".to_string(),
Kind::Enum(_) => "enum".to_string(),
Kind::Message(_) => "object".to_string(),
}
}
fn build_node(
node_kind: &str,
name: String,
field_number: i32,
type_name: Option<String>,
semantic_type: Option<String>,
value_kind: String,
is_list: bool,
required: bool,
default_value_text: Option<String>,
enum_options: Vec<FieldOption>,
validations: Vec<ValidationRule>,
children: Vec<NetworkConfigSchema>,
definitions: Vec<NetworkConfigSchema>,
) -> NetworkConfigSchema {
NetworkConfigSchema {
node_kind: node_kind.to_string(),
name,
field_number,
type_name,
semantic_type,
value_kind,
is_list,
required,
default_value_text,
enum_options,
validations,
children,
definitions,
}
}
fn build_map_entry_node(message_desc: &MessageDescriptor) -> NetworkConfigSchema {
let key_field = message_desc.map_entry_key_field();
let value_field = message_desc.map_entry_value_field();
build_node(
"object",
message_desc.name().to_string(),
0,
Some(message_desc.full_name().to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
vec![
build_schema_field_node(&key_field),
build_schema_field_node(&value_field),
],
Vec::new(),
)
}
fn field_children(field: &FieldDescriptor) -> Vec<NetworkConfigSchema> {
if field.is_map() {
if let Kind::Message(message_desc) = field.kind() {
return vec![build_map_entry_node(&message_desc)];
}
}
match field.kind() {
Kind::Message(message_desc) => build_message_children(&message_desc),
_ => Vec::new(),
}
}
fn build_message_children(message_desc: &MessageDescriptor) -> Vec<NetworkConfigSchema> {
message_desc
.fields()
.filter(should_expose_field)
.map(|field| build_schema_field_node(&field))
.collect()
}
fn build_schema_field_node(field: &FieldDescriptor) -> NetworkConfigSchema {
build_node(
"field",
field.name().to_string(),
field.number() as i32,
field_type_name(field),
field_semantic_type(field),
kind_to_value_kind(field),
field.is_list() || field.is_map(),
field.cardinality() == Cardinality::Required,
field_default_value_text(field),
enum_options(field.kind()),
build_validations(field),
field_children(field),
Vec::new(),
)
}
fn collect_definitions() -> Vec<NetworkConfigSchema> {
let mut definitions = Vec::new();
for message_desc in descriptor_pool().all_messages() {
let full_name = message_desc.full_name();
if full_name == NETWORK_CONFIG_MESSAGE_NAME || message_desc.is_map_entry() {
continue;
}
definitions.push(build_node(
"object",
full_name.to_string(),
0,
Some(full_name.to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
build_message_children(&message_desc),
Vec::new(),
));
}
for enum_desc in descriptor_pool().all_enums() {
definitions.push(build_node(
"enum",
enum_desc.full_name().to_string(),
0,
Some(enum_desc.full_name().to_string()),
None,
"enum".to_string(),
false,
false,
None,
enum_options(Kind::Enum(enum_desc.clone())),
Vec::new(),
Vec::new(),
Vec::new(),
));
}
definitions.sort_by(|a, b| a.name.cmp(&b.name));
definitions
}
fn build_network_config_schema() -> NetworkConfigSchema {
let network_config = network_config_descriptor();
build_node(
"schema",
network_config.name().to_string(),
0,
Some(network_config.full_name().to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
build_message_children(&network_config),
collect_definitions(),
)
}
fn build_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
network_config_descriptor()
.fields()
.filter(should_expose_field)
.map(|field| ConfigFieldMapping {
field_name: field.name().to_string(),
field_number: field.number() as i32,
})
.collect()
}
pub fn get_network_config_schema() -> NetworkConfigSchema {
build_network_config_schema()
}
pub fn get_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
build_network_config_field_mappings()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn schema_is_exposed_as_single_tree_type() {
let schema = get_network_config_schema();
assert_eq!(schema.node_kind, "schema");
assert_eq!(schema.name, "NetworkConfig");
assert_eq!(
schema.type_name.as_deref(),
Some("api.manage.NetworkConfig")
);
let virtual_ipv4 = schema
.children
.iter()
.find(|field| field.name == "virtual_ipv4")
.expect("virtual_ipv4 field");
assert_eq!(virtual_ipv4.semantic_type.as_deref(), Some("cidr_ip"));
let secure_mode = schema
.children
.iter()
.find(|field| field.name == "secure_mode")
.expect("secure_mode field");
assert!(
secure_mode
.children
.iter()
.any(|field| field.name == "enabled")
);
let secure_mode_definition = schema
.definitions
.iter()
.find(|definition| definition.name == "common.SecureModeConfig")
.expect("secure mode definition");
assert!(
secure_mode_definition
.children
.iter()
.any(|field| field.name == "local_private_key")
);
let networking_method_definition = schema
.definitions
.iter()
.find(|definition| definition.name == "api.manage.NetworkingMethod")
.expect("networking method enum definition");
assert!(
networking_method_definition
.enum_options
.iter()
.any(|option| option.label == "PublicServer")
);
}
}
@@ -0,0 +1,197 @@
use crate::config::repository::{get_config_record, save_config_record};
use crate::config::services::schema_service::get_network_config_field_mappings;
use crate::config::types::stored_config::SharedConfigLinkPayload;
use base64::{Engine as _, engine::general_purpose::URL_SAFE_NO_PAD};
use easytier::proto::api::manage::NetworkConfig;
use flate2::{Compression, read::ZlibDecoder, write::ZlibEncoder};
use gethostname::gethostname;
use std::collections::HashMap;
use std::io::{Read, Write};
use url::Url;
use uuid::Uuid;
const SHARE_LINK_HOST: &str = "easytier.cn";
const SHARE_LINK_PATH: &str = "/comp_cfg";
fn field_name_to_id_map() -> HashMap<String, String> {
get_network_config_field_mappings()
.into_iter()
.map(|mapping| (mapping.field_name, mapping.field_number.to_string()))
.collect()
}
fn field_id_to_name_map() -> HashMap<String, String> {
get_network_config_field_mappings()
.into_iter()
.map(|mapping| (mapping.field_number.to_string(), mapping.field_name))
.collect()
}
fn prune_empty(value: &serde_json::Value) -> Option<serde_json::Value> {
match value {
serde_json::Value::Null => None,
serde_json::Value::Array(values) if values.is_empty() => None,
_ => Some(value.clone()),
}
}
fn map_config_json(config: &NetworkConfig) -> Result<String, String> {
let field_name_to_id = field_name_to_id_map();
let raw = serde_json::to_value(config).map_err(|err| err.to_string())?;
let mut mapped = serde_json::Map::new();
for (key, value) in raw.as_object().cloned().unwrap_or_default() {
let Some(value) = prune_empty(&value) else {
continue;
};
let mapped_key = field_name_to_id.get(&key).cloned().unwrap_or(key);
mapped.insert(mapped_key, value);
}
serde_json::to_string(&mapped).map_err(|err| err.to_string())
}
fn unmap_config_json(raw: &str) -> Result<NetworkConfig, String> {
let field_id_to_name = field_id_to_name_map();
let value = serde_json::from_str::<serde_json::Value>(raw).map_err(|err| err.to_string())?;
let mut mapped = serde_json::Map::new();
for (key, value) in value.as_object().cloned().unwrap_or_default() {
let field_name = field_id_to_name.get(&key).cloned().unwrap_or(key);
mapped.insert(field_name, value);
}
serde_json::from_value(serde_json::Value::Object(mapped)).map_err(|err| err.to_string())
}
fn compress_to_base64url(raw: &str) -> Result<String, String> {
let mut encoder = ZlibEncoder::new(Vec::new(), Compression::best());
encoder
.write_all(raw.as_bytes())
.map_err(|err| err.to_string())?;
let compressed = encoder.finish().map_err(|err| err.to_string())?;
Ok(URL_SAFE_NO_PAD.encode(compressed))
}
fn decompress_from_base64url(raw: &str) -> Result<String, String> {
let compressed = URL_SAFE_NO_PAD.decode(raw).map_err(|err| err.to_string())?;
let mut decoder = ZlibDecoder::new(compressed.as_slice());
let mut out = String::new();
decoder
.read_to_string(&mut out)
.map_err(|err| err.to_string())?;
Ok(out)
}
pub fn build_config_share_link(
config_id: &str,
display_name: Option<String>,
only_start: bool,
) -> Option<String> {
let record = get_config_record(config_id)?;
let config = serde_json::from_str::<NetworkConfig>(&record.config_json).ok()?;
let mapped_json = map_config_json(&config).ok()?;
let compressed = compress_to_base64url(&mapped_json).ok()?;
let final_name = display_name
.or(Some(record.meta.display_name))
.filter(|name| !name.is_empty());
let mut url = Url::parse(&format!("https://{SHARE_LINK_HOST}{SHARE_LINK_PATH}")).ok()?;
url.query_pairs_mut().append_pair("cfg", &compressed);
if let Some(name) = final_name {
url.query_pairs_mut().append_pair("name", &name);
}
if only_start {
url.query_pairs_mut().append_pair("only_start", "true");
}
Some(url.to_string())
}
pub fn parse_config_share_link(share_link: &str) -> Option<SharedConfigLinkPayload> {
let url = Url::parse(share_link).ok()?;
if url.host_str()? != SHARE_LINK_HOST || url.path() != SHARE_LINK_PATH {
return None;
}
let cfg = url
.query_pairs()
.find(|(key, _)| key == "cfg")?
.1
.to_string();
let mapped_json = decompress_from_base64url(&cfg).ok()?;
let mut config = unmap_config_json(&mapped_json).ok()?;
config.instance_id = Some(Uuid::new_v4().to_string());
let hostname = gethostname().to_string_lossy().to_string();
if !hostname.is_empty() {
config.hostname = Some(hostname);
}
let config_json = serde_json::to_string(&config).ok()?;
let display_name = url
.query_pairs()
.find(|(key, _)| key == "name")
.map(|(_, value)| value.to_string())
.filter(|name| !name.is_empty());
let only_start = url
.query_pairs()
.find(|(key, _)| key == "only_start")
.map(|(_, value)| value == "true")
.unwrap_or(false);
Some(SharedConfigLinkPayload {
config_json,
display_name,
only_start,
})
}
pub fn import_config_share_link(
share_link: &str,
display_name_override: Option<String>,
) -> Option<String> {
let payload = parse_config_share_link(share_link)?;
let config = serde_json::from_str::<NetworkConfig>(&payload.config_json).ok()?;
let config_id = config.instance_id.clone()?;
let display_name = display_name_override
.filter(|name| !name.is_empty())
.or(payload.display_name)
.unwrap_or_else(|| config_id.clone());
save_config_record(config_id.clone(), display_name, payload.config_json)?;
Some(config_id)
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config_repo::{create_config_record, init_config_store};
use std::time::{SystemTime, UNIX_EPOCH};
fn test_root() -> String {
let unique = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
std::env::temp_dir()
.join(format!("easytier_ohrs_share_test_{unique}"))
.to_string_lossy()
.into_owned()
}
#[test]
fn share_link_roundtrip_works() {
assert!(init_config_store(test_root()));
create_config_record("cfg-share".to_string(), "share-demo".to_string())
.expect("create config");
let link = build_config_share_link("cfg-share", None, true).expect("share link");
let payload = parse_config_share_link(&link).expect("parse link");
let config =
serde_json::from_str::<NetworkConfig>(&payload.config_json).expect("config json");
assert!(payload.only_start);
assert_eq!(payload.display_name.as_deref(), Some("share-demo"));
assert_ne!(config.instance_id.as_deref(), Some("cfg-share"));
let imported_id = import_config_share_link(&link, None).expect("import link");
assert_ne!(imported_id, "cfg-share");
}
}
@@ -0,0 +1,333 @@
use crate::config::types::stored_config::{StoredConfigList, StoredConfigMeta};
use ohos_hilog_binding::{hilog_debug, hilog_error};
use rusqlite::{Connection, OptionalExtension, params};
use std::path::PathBuf;
use std::sync::Mutex;
use std::time::{SystemTime, UNIX_EPOCH};
static CONFIG_DB_PATH: Mutex<Option<PathBuf>> = Mutex::new(None);
const CONFIG_DB_FILE_NAME: &str = "easytier-config-store.db";
#[derive(Debug, Clone)]
struct StoredConfigMetaRecord {
config_id: String,
display_name: String,
created_at: String,
updated_at: String,
favorite: bool,
temporary: bool,
}
pub(crate) fn now_ts_string() -> String {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.map(|d| d.as_secs().to_string())
.unwrap_or_else(|_| "0".to_string())
}
fn db_file_path() -> Option<PathBuf> {
CONFIG_DB_PATH
.lock()
.ok()
.and_then(|guard| guard.as_ref().cloned())
}
fn init_schema(conn: &Connection) -> rusqlite::Result<()> {
conn.execute_batch(
"PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS stored_configs (
config_id TEXT PRIMARY KEY,
display_name TEXT NOT NULL,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
favorite INTEGER NOT NULL DEFAULT 0,
temporary INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS stored_config_fields (
config_id TEXT NOT NULL,
field_name TEXT NOT NULL,
field_json TEXT NOT NULL,
updated_at TEXT NOT NULL,
PRIMARY KEY (config_id, field_name),
FOREIGN KEY (config_id) REFERENCES stored_configs(config_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_stored_config_fields_config_id
ON stored_config_fields(config_id);",
)
}
pub(crate) fn open_db() -> Option<Connection> {
let path = db_file_path()?;
let conn = match Connection::open(&path) {
Ok(conn) => conn,
Err(e) => {
hilog_error!("[Rust] failed to open config db {}: {}", path.display(), e);
return None;
}
};
if let Err(e) = init_schema(&conn) {
hilog_error!(
"[Rust] failed to initialize config db {}: {}",
path.display(),
e
);
return None;
}
Some(conn)
}
fn row_to_meta(row: &rusqlite::Row<'_>) -> rusqlite::Result<StoredConfigMetaRecord> {
Ok(StoredConfigMetaRecord {
config_id: row.get(0)?,
display_name: row.get(1)?,
created_at: row.get(2)?,
updated_at: row.get(3)?,
favorite: row.get::<_, i64>(4)? != 0,
temporary: row.get::<_, i64>(5)? != 0,
})
}
fn load_meta_record(conn: &Connection, config_id: &str) -> Option<StoredConfigMetaRecord> {
conn.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
}
fn to_meta(record: StoredConfigMetaRecord) -> StoredConfigMeta {
StoredConfigMeta {
config_id: record.config_id,
display_name: record.display_name,
created_at: record.created_at,
updated_at: record.updated_at,
favorite: record.favorite,
temporary: record.temporary,
}
}
pub fn init_config_meta_store(root_dir: String) -> bool {
let root = PathBuf::from(root_dir);
if let Err(e) = std::fs::create_dir_all(&root) {
hilog_error!(
"[Rust] failed to create config db dir {}: {}",
root.display(),
e
);
return false;
}
let db_path = root.join(CONFIG_DB_FILE_NAME);
match CONFIG_DB_PATH.lock() {
Ok(mut guard) => {
*guard = Some(db_path.clone());
}
Err(e) => {
hilog_error!("[Rust] failed to lock config db path: {}", e);
return false;
}
}
if open_db().is_none() {
return false;
}
hilog_debug!("[Rust] initialized config db at {}", db_path.display());
true
}
pub fn list_config_meta_entries() -> StoredConfigList {
let Some(conn) = open_db() else {
return StoredConfigList { configs: vec![] };
};
let mut stmt = match conn.prepare(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs
ORDER BY updated_at DESC, display_name ASC",
) {
Ok(stmt) => stmt,
Err(e) => {
hilog_error!("[Rust] failed to prepare list meta query: {}", e);
return StoredConfigList { configs: vec![] };
}
};
let rows = match stmt.query_map([], row_to_meta) {
Ok(rows) => rows,
Err(e) => {
hilog_error!("[Rust] failed to list config meta rows: {}", e);
return StoredConfigList { configs: vec![] };
}
};
let configs = rows.filter_map(Result::ok).map(to_meta).collect();
StoredConfigList { configs }
}
pub fn get_config_display_name(config_id: &str) -> Option<String> {
let conn = open_db()?;
load_meta_record(&conn, config_id).map(|record| record.display_name)
}
pub fn get_config_meta(config_id: &str) -> Option<StoredConfigMeta> {
let conn = open_db()?;
load_meta_record(&conn, config_id).map(to_meta)
}
pub fn upsert_config_meta(
config_id: String,
display_name: String,
favorite: bool,
temporary: bool,
) -> StoredConfigMeta {
let now = now_ts_string();
let Some(conn) = open_db() else {
return StoredConfigMeta {
config_id,
display_name,
created_at: now.clone(),
updated_at: now,
favorite,
temporary,
};
};
let created_at = load_meta_record(&conn, &config_id)
.map(|record| record.created_at)
.unwrap_or_else(|| now.clone());
if let Err(e) = conn.execute(
"INSERT INTO stored_configs (
config_id, display_name, created_at, updated_at, favorite, temporary
) VALUES (?1, ?2, ?3, ?4, ?5, ?6)
ON CONFLICT(config_id) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at,
favorite = excluded.favorite,
temporary = excluded.temporary",
params![
config_id,
display_name,
created_at,
now,
if favorite { 1 } else { 0 },
if temporary { 1 } else { 0 }
],
) {
hilog_error!("[Rust] failed to upsert config meta: {}", e);
}
get_config_meta(&config_id).unwrap_or(StoredConfigMeta {
config_id,
display_name,
created_at,
updated_at: now,
favorite,
temporary,
})
}
pub(crate) fn upsert_config_meta_in_tx(
tx: &rusqlite::Transaction<'_>,
config_id: String,
display_name: String,
favorite: bool,
temporary: bool,
) -> Option<StoredConfigMeta> {
let now = now_ts_string();
let created_at = tx
.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
.map(|record| record.created_at)
.unwrap_or_else(|| now.clone());
tx.execute(
"INSERT INTO stored_configs (
config_id, display_name, created_at, updated_at, favorite, temporary
) VALUES (?1, ?2, ?3, ?4, ?5, ?6)
ON CONFLICT(config_id) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at,
favorite = excluded.favorite,
temporary = excluded.temporary",
params![
config_id,
display_name,
created_at,
now,
if favorite { 1 } else { 0 },
if temporary { 1 } else { 0 }
],
)
.ok()?;
tx.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
.map(to_meta)
.or(Some(StoredConfigMeta {
config_id,
display_name,
created_at,
updated_at: now,
favorite,
temporary,
}))
}
pub fn set_config_display_name(
config_id: String,
display_name: String,
) -> Option<StoredConfigMeta> {
let conn = open_db()?;
let mut record = load_meta_record(&conn, &config_id)?;
record.display_name = display_name;
record.updated_at = now_ts_string();
conn.execute(
"UPDATE stored_configs
SET display_name = ?2, updated_at = ?3
WHERE config_id = ?1",
params![config_id, record.display_name, record.updated_at],
)
.ok()?;
Some(to_meta(record))
}
pub fn delete_config_meta(config_id: &str) -> bool {
let Some(conn) = open_db() else {
return false;
};
match conn.execute(
"DELETE FROM stored_configs WHERE config_id = ?1",
params![config_id],
) {
Ok(rows) => rows > 0,
Err(e) => {
hilog_error!("[Rust] failed to delete config meta {}: {}", config_id, e);
false
}
}
}
@@ -0,0 +1 @@
pub(crate) mod config_meta;
@@ -0,0 +1 @@
pub(crate) mod stored_config;
@@ -0,0 +1,68 @@
use napi_derive_ohos::napi;
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigMeta {
pub config_id: String,
pub display_name: String,
pub created_at: String,
pub updated_at: String,
pub favorite: bool,
pub temporary: bool,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigRecord {
pub meta: StoredConfigMeta,
pub config_json: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigList {
pub configs: Vec<StoredConfigMeta>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct ExportTomlResult {
pub toml_text: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigSummary {
pub config_id: String,
pub display_name: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct SharedConfigLinkPayload {
pub config_json: String,
pub display_name: Option<String>,
pub only_start: bool,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct LocalSocketSyncMessage {
pub message_type: String,
pub payload_json: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct KeyValuePair {
pub key: String,
pub value: String,
}
@@ -0,0 +1,349 @@
use super::{field_store, import_export, legacy_migration, validation};
use crate::config::storage::config_meta::{
delete_config_meta, get_config_meta, init_config_meta_store, list_config_meta_entries, open_db,
upsert_config_meta_in_tx,
};
use crate::config::types::stored_config::{ExportTomlResult, StoredConfigRecord};
use easytier::common::config::ConfigLoader;
use easytier::proto::api::manage::NetworkConfig;
use ohos_hilog_binding::{hilog_debug, hilog_error};
use rusqlite::params;
use serde_json::Value;
use std::path::PathBuf;
use std::sync::Mutex;
static CONFIG_ROOT_DIR: Mutex<Option<PathBuf>> = Mutex::new(None);
pub(crate) const CONFIG_DIR_NAME: &str = "easytier-configs";
pub(crate) const KERNEL_SOCKET_FILE_NAME: &str = "easytier-kernel.sock";
pub(crate) fn config_root_dir() -> Option<PathBuf> {
CONFIG_ROOT_DIR
.lock()
.ok()
.and_then(|guard| guard.as_ref().cloned())
}
pub(crate) fn kernel_socket_path() -> Option<PathBuf> {
config_root_dir().map(|root| root.join(KERNEL_SOCKET_FILE_NAME))
}
pub(crate) fn legacy_config_file_path(config_id: &str) -> Option<PathBuf> {
legacy_migration::legacy_config_file_path(&config_root_dir(), CONFIG_DIR_NAME, config_id)
}
pub fn init_config_store(root_dir: String) -> bool {
let root = PathBuf::from(root_dir);
let configs_dir = root.join(CONFIG_DIR_NAME);
if let Err(e) = std::fs::create_dir_all(&configs_dir) {
hilog_error!(
"[Rust] failed to create config dir {}: {}",
configs_dir.display(),
e
);
return false;
}
match CONFIG_ROOT_DIR.lock() {
Ok(mut guard) => {
*guard = Some(root.clone());
}
Err(e) => {
hilog_error!("[Rust] failed to lock config root dir: {}", e);
return false;
}
}
if !init_config_meta_store(root.to_string_lossy().into_owned()) {
return false;
}
hilog_debug!(
"[Rust] initialized config repo at {}",
configs_dir.display()
);
true
}
fn migrate_legacy_file_if_needed(config_id: &str) -> Option<()> {
legacy_migration::migrate_legacy_file_if_needed(
&config_root_dir(),
CONFIG_DIR_NAME,
config_id,
save_config_record,
)
}
pub fn save_config_record(
config_id: String,
display_name: String,
config_json: String,
) -> Option<StoredConfigRecord> {
let config = match validation::validate_config_json(&config_json, config_id.clone()) {
Ok(config) => config,
Err(e) => {
hilog_error!("[Rust] save_config_record failed {}", e);
return None;
}
};
let normalized_json = match serde_json::to_string(&config) {
Ok(raw) => raw,
Err(e) => {
hilog_error!(
"[Rust] failed to serialize normalized config {}: {}",
config_id,
e
);
return None;
}
};
let fields = match validation::config_to_top_level_map(&config) {
Some(fields) => fields,
None => return None,
};
let conn = open_db()?;
let tx = conn.unchecked_transaction().ok()?;
let existing_meta = get_config_meta(&config_id);
let favorite = existing_meta
.as_ref()
.map(|meta| meta.favorite)
.unwrap_or(false);
let temporary = existing_meta
.as_ref()
.map(|meta| meta.temporary)
.unwrap_or(false);
let meta = upsert_config_meta_in_tx(&tx, config_id.clone(), display_name, favorite, temporary)?;
field_store::replace_config_fields(&tx, &config_id, fields)?;
tx.commit().ok()?;
if let Some(legacy_path) = legacy_config_file_path(&config_id) {
if legacy_path.exists() {
let _ = std::fs::remove_file(legacy_path);
}
}
Some(StoredConfigRecord {
meta,
config_json: normalized_json,
})
}
pub fn load_config_json(config_id: &str) -> Option<String> {
migrate_legacy_file_if_needed(config_id)?;
let object = field_store::load_config_map_from_db(config_id)?;
serde_json::to_string(&Value::Object(object)).ok()
}
pub fn get_config_record(config_id: &str) -> Option<StoredConfigRecord> {
let config_json = load_config_json(config_id)?;
let meta = get_config_meta(config_id)?;
Some(StoredConfigRecord { meta, config_json })
}
pub fn get_config_field_value(config_id: &str, field: &str) -> Option<String> {
migrate_legacy_file_if_needed(config_id)?;
let conn = open_db()?;
conn.query_row(
"SELECT field_json FROM stored_config_fields
WHERE config_id = ?1 AND field_name = ?2",
params![config_id, field],
|row| row.get::<_, String>(0),
)
.ok()
}
pub fn set_config_field_value(config_id: &str, field: &str, json_value: &str) -> bool {
if field.contains('.') {
return false;
}
let raw = match load_config_json(config_id) {
Some(raw) => raw,
None => return false,
};
let mut value = match serde_json::from_str::<Value>(&raw) {
Ok(value) => value,
Err(_) => return false,
};
let new_field_value = match serde_json::from_str::<Value>(json_value) {
Ok(value) => value,
Err(_) => return false,
};
let object = match value.as_object_mut() {
Some(object) => object,
None => return false,
};
object.insert(field.to_string(), new_field_value);
let normalized = match serde_json::to_string(&value) {
Ok(raw) => raw,
Err(_) => return false,
};
let display_name = get_config_meta(config_id)
.map(|meta| meta.display_name)
.unwrap_or_else(|| config_id.to_string());
save_config_record(config_id.to_string(), display_name, normalized).is_some()
}
pub fn get_display_name(config_id: &str) -> Option<String> {
get_config_meta(config_id).map(|meta| meta.display_name)
}
pub fn get_default_config_json() -> Option<String> {
crate::build_default_network_config_json().ok()
}
pub fn create_config_record(config_id: String, display_name: String) -> Option<StoredConfigRecord> {
let raw = get_default_config_json()?;
let mut config = serde_json::from_str::<NetworkConfig>(&raw).ok()?;
config.instance_id = Some(config_id.clone());
let normalized_json = serde_json::to_string(&config).ok()?;
save_config_record(config_id, display_name, normalized_json)
}
pub fn start_kernel_with_config_id(config_id: &str) -> bool {
let raw = match load_config_json(config_id) {
Some(raw) => raw,
None => return false,
};
crate::run_network_instance_from_json(&raw)
}
pub fn list_config_meta_json() -> String {
serde_json::to_string(&list_config_meta_entries().configs).unwrap_or_else(|_| "[]".to_string())
}
pub fn delete_config_record(config_id: &str) -> bool {
if let Some(path) = legacy_config_file_path(config_id) {
if path.exists() {
let _ = std::fs::remove_file(path);
}
}
let conn = match open_db() {
Some(conn) => conn,
None => return false,
};
if let Err(e) = conn.execute(
"DELETE FROM stored_config_fields WHERE config_id = ?1",
params![config_id],
) {
hilog_error!("[Rust] failed to delete config fields {}: {}", config_id, e);
return false;
}
delete_config_meta(config_id)
}
pub fn export_config_toml(config_id: &str) -> Option<ExportTomlResult> {
let record = get_config_record(config_id)?;
import_export::export_config_toml_from_record(&record)
}
pub fn import_toml_config(
toml_text: String,
display_name: Option<String>,
) -> Option<StoredConfigRecord> {
import_export::import_toml_to_record(toml_text, display_name, save_config_record)
}
#[cfg(test)]
mod tests {
use super::*;
use rusqlite::params;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
fn test_root() -> String {
let unique = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
let dir = std::env::temp_dir().join(format!("easytier_ohrs_test_{}", unique));
dir.to_string_lossy().into_owned()
}
#[test]
fn save_get_export_delete_roundtrip() {
let root = test_root();
assert!(init_config_store(root.clone()));
let config_json = crate::build_default_network_config_json().expect("default config");
let saved = save_config_record("cfg-1".to_string(), "test-config".to_string(), config_json)
.expect("save config");
assert_eq!(saved.meta.config_id, "cfg-1");
assert_eq!(saved.meta.display_name, "test-config");
let loaded = get_config_record("cfg-1").expect("load config");
assert_eq!(loaded.meta.display_name, "test-config");
assert!(loaded.config_json.contains("cfg-1"));
let legacy_json_path = PathBuf::from(&root)
.join(CONFIG_DIR_NAME)
.join("cfg-1.json");
assert!(
!legacy_json_path.exists(),
"config should no longer be persisted as a per-config json file"
);
let conn = open_db().expect("db should be open");
let field_count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM stored_config_fields WHERE config_id = ?1",
params!["cfg-1"],
|row| row.get(0),
)
.expect("count config fields");
assert!(field_count > 0, "config fields should be stored in sqlite");
let exported = export_config_toml("cfg-1").expect("export toml");
assert!(exported.toml_text.contains("instance_id"));
assert!(delete_config_record("cfg-1"));
assert!(get_config_record("cfg-1").is_none());
}
#[test]
fn set_config_field_updates_only_requested_top_level_field() {
let root = test_root();
assert!(init_config_store(root));
let config_json = crate::build_default_network_config_json().expect("default config");
save_config_record(
"cfg-field".to_string(),
"field-config".to_string(),
config_json,
)
.expect("save config");
let before_network_name = get_config_field_value("cfg-field", "network_name");
let before_instance_id = get_config_field_value("cfg-field", "instance_id")
.expect("instance id field should exist");
assert!(set_config_field_value(
"cfg-field",
"network_name",
"\"changed-network\""
));
assert_eq!(
get_config_field_value("cfg-field", "network_name"),
Some("\"changed-network\"".to_string())
);
assert_eq!(
get_config_field_value("cfg-field", "instance_id"),
Some(before_instance_id)
);
assert_ne!(
get_config_field_value("cfg-field", "network_name"),
before_network_name
);
}
}
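The `set_config_field_value` flow above rejects nested paths and performs a read-modify-write on a single top-level key. A minimal stdlib sketch of that contract (a `BTreeMap` stands in for the JSON object, and `set_top_level_field` is an illustrative name, not part of the crate):

```rust
use std::collections::BTreeMap;

// Hedged sketch of set_config_field_value's contract: nested paths ("a.b")
// are rejected, and exactly one top-level key is replaced before the whole
// object would be saved back through save_config_record.
fn set_top_level_field(
    config: &mut BTreeMap<String, String>,
    field: &str,
    json_value: &str,
) -> bool {
    if field.contains('.') {
        return false; // only top-level fields are addressable, as in the source
    }
    config.insert(field.to_string(), json_value.to_string());
    true
}

fn main() {
    let mut cfg = BTreeMap::from([("network_name".to_string(), "\"old\"".to_string())]);
    assert!(set_top_level_field(&mut cfg, "network_name", "\"changed-network\""));
    assert!(!set_top_level_field(&mut cfg, "flags.mtu", "1380"));
    assert_eq!(cfg["network_name"], "\"changed-network\"");
}
```

Rejecting dotted paths keeps the per-field SQLite rows one-to-one with top-level config keys, so a partial update can never leave a row that no longer round-trips through `NetworkConfig`.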
@@ -0,0 +1,67 @@
use crate::config::storage::config_meta::{now_ts_string, open_db};
use ohos_hilog_binding::hilog_error;
use rusqlite::{Connection, params};
use serde_json::{Map, Value};
pub(super) fn load_config_map_from_db(config_id: &str) -> Option<Map<String, Value>> {
let conn = open_db()?;
let mut stmt = conn
.prepare(
"SELECT field_name, field_json
FROM stored_config_fields
WHERE config_id = ?1",
)
.ok()?;
let rows = stmt
.query_map(params![config_id], |row| {
let field_name: String = row.get(0)?;
let field_json: String = row.get(1)?;
Ok((field_name, field_json))
})
.ok()?;
let mut object = Map::new();
for row in rows {
let (field_name, field_json) = row.ok()?;
let value = serde_json::from_str::<Value>(&field_json).ok()?;
object.insert(field_name, value);
}
if object.is_empty() {
None
} else {
Some(object)
}
}
pub(super) fn replace_config_fields(
tx: &Connection,
config_id: &str,
fields: Map<String, Value>,
) -> Option<()> {
if let Err(e) = tx.execute(
"DELETE FROM stored_config_fields WHERE config_id = ?1",
params![config_id],
) {
hilog_error!(
"[Rust] failed to clear existing config fields {}: {}",
config_id,
e
);
return None;
}
for (field_name, value) in fields {
let field_json = serde_json::to_string(&value).ok()?;
if let Err(e) = tx.execute(
"INSERT INTO stored_config_fields (config_id, field_name, field_json, updated_at)
VALUES (?1, ?2, ?3, ?4)",
params![config_id, field_name, field_json, now_ts_string()],
) {
hilog_error!("[Rust] failed to persist config field {}: {}", config_id, e);
return None;
}
}
Some(())
}
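Note the convention in `load_config_map_from_db`: an empty row set yields `None` rather than an empty object, so callers treat "no rows" as "config does not exist". A small stdlib sketch of that rule (`rows_to_map` is an illustrative name):

```rust
use std::collections::BTreeMap;

// Hedged sketch: rebuild a config object from (field_name, field_json) rows,
// following the source's empty-means-None convention so a deleted config is
// indistinguishable from one that never existed.
fn rows_to_map(rows: Vec<(String, String)>) -> Option<BTreeMap<String, String>> {
    let mut object = BTreeMap::new();
    for (field_name, field_json) in rows {
        object.insert(field_name, field_json);
    }
    if object.is_empty() { None } else { Some(object) }
}

fn main() {
    assert!(rows_to_map(vec![]).is_none());
    let map = rows_to_map(vec![("network_name".into(), "\"demo\"".into())]).unwrap();
    assert_eq!(map.get("network_name").map(String::as_str), Some("\"demo\""));
}
```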
@@ -0,0 +1,48 @@
use crate::config::types::stored_config::{ExportTomlResult, StoredConfigRecord};
use easytier::common::config::{ConfigLoader, TomlConfigLoader};
use easytier::proto::api::manage::NetworkConfig;
pub(super) fn export_config_toml_from_record(
record: &StoredConfigRecord,
) -> Option<ExportTomlResult> {
let config = serde_json::from_str::<NetworkConfig>(&record.config_json).ok()?;
let toml = config.gen_config().ok()?;
Some(ExportTomlResult {
toml_text: toml.dump(),
})
}
pub(super) fn import_toml_to_record(
toml_text: String,
display_name: Option<String>,
save_config_record: impl Fn(String, String, String) -> Option<StoredConfigRecord>,
) -> Option<StoredConfigRecord> {
let config =
NetworkConfig::new_from_config(TomlConfigLoader::new_from_str(&toml_text).ok()?).ok()?;
let config_id = config.instance_id.clone()?;
let name_from_toml = toml_text
.lines()
.find_map(|line| {
let trimmed = line.trim();
if !trimmed.starts_with("instance_name") {
return None;
}
trimmed.split_once('=').map(|(_, value)| {
value
.trim()
.trim_matches('"')
.trim_matches('\'')
.to_string()
})
})
.filter(|name| !name.is_empty());
let final_name = display_name
.filter(|name| !name.is_empty())
.or(name_from_toml)
.unwrap_or_else(|| config_id.clone());
let config_json = serde_json::to_string(&config).ok()?;
save_config_record(config_id, final_name, config_json)
}
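`import_toml_to_record` above pulls the display name out of the raw TOML with a line scan rather than a full parse. The same string handling in isolation, as a self-contained sketch (`extract_instance_name` is a hypothetical helper name):

```rust
// Hedged sketch: extract a display name from raw TOML text without a TOML
// parser, mirroring the line-scanning approach in import_toml_to_record.
fn extract_instance_name(toml_text: &str) -> Option<String> {
    toml_text
        .lines()
        .find_map(|line| {
            let trimmed = line.trim();
            if !trimmed.starts_with("instance_name") {
                return None;
            }
            // Split on the first '=' and strip surrounding quotes.
            trimmed.split_once('=').map(|(_, value)| {
                value.trim().trim_matches('"').trim_matches('\'').to_string()
            })
        })
        .filter(|name| !name.is_empty()) // empty names fall back to config_id
}

fn main() {
    let toml = "network_name = \"demo\"\ninstance_name = \"my-node\"\n";
    assert_eq!(extract_instance_name(toml), Some("my-node".to_string()));
    assert_eq!(extract_instance_name("instance_name = \"\""), None);
}
```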
@@ -0,0 +1,45 @@
use crate::config::storage::config_meta::get_config_meta;
use ohos_hilog_binding::hilog_error;
use std::path::PathBuf;
pub(super) fn legacy_config_file_path(
root_dir: &Option<PathBuf>,
config_dir_name: &str,
config_id: &str,
) -> Option<PathBuf> {
root_dir.as_ref().map(|root| {
root.join(config_dir_name)
.join(format!("{}.json", config_id))
})
}
pub(super) fn migrate_legacy_file_if_needed(
root_dir: &Option<PathBuf>,
config_dir_name: &str,
config_id: &str,
save_config_record: impl Fn(
String,
String,
String,
) -> Option<crate::config::types::stored_config::StoredConfigRecord>,
) -> Option<()> {
let legacy_path = legacy_config_file_path(root_dir, config_dir_name, config_id)?;
if !legacy_path.exists() {
return Some(());
}
let raw = std::fs::read_to_string(&legacy_path).ok()?;
let display_name = get_config_meta(config_id)
.map(|meta| meta.display_name)
.unwrap_or_else(|| config_id.to_string());
save_config_record(config_id.to_string(), display_name, raw)?;
if let Err(e) = std::fs::remove_file(&legacy_path) {
hilog_error!(
"[Rust] failed to remove legacy config file {}: {}",
legacy_path.display(),
e
);
}
Some(())
}
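The migration above follows a deliberate ordering: persist first, delete the legacy file only after the save succeeds, and treat a failed delete as non-fatal. A stdlib sketch of that flow (`migrate_file` and the temp path are test fixtures, not crate names):

```rust
use std::fs;
use std::path::PathBuf;

// Hedged sketch of migrate_legacy_file_if_needed's ordering: read the legacy
// file, hand its contents to a save callback, and remove the file only after
// the save succeeds, so a failed save never loses the original data.
fn migrate_file(path: &PathBuf, save: impl Fn(String) -> Option<()>) -> Option<()> {
    if !path.exists() {
        return Some(()); // nothing to migrate counts as success, as in the source
    }
    let raw = fs::read_to_string(path).ok()?;
    save(raw)?; // bail out (keeping the file) if persisting fails
    let _ = fs::remove_file(path); // best-effort cleanup; errors only logged upstream
    Some(())
}

fn main() {
    let path = std::env::temp_dir().join("easytier_legacy_demo.json");
    fs::write(&path, "{\"instance_id\":\"cfg-1\"}").unwrap();
    let ok = migrate_file(&path, |raw| if raw.is_empty() { None } else { Some(()) });
    assert!(ok.is_some());
    assert!(!path.exists(), "legacy file removed after successful save");
}
```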
@@ -0,0 +1,30 @@
use easytier::proto::api::manage::NetworkConfig;
use serde_json::{Map, Value};
pub(super) fn normalize_config_id(
mut config: NetworkConfig,
requested_id: String,
) -> Result<NetworkConfig, String> {
if requested_id.is_empty() {
return Err("config_id is required".to_string());
}
config.instance_id = Some(requested_id);
Ok(config)
}
pub(super) fn validate_config_json(
config_json: &str,
config_id: String,
) -> Result<NetworkConfig, String> {
let config = serde_json::from_str::<NetworkConfig>(config_json)
.map_err(|e| format!("parse config json failed: {}", e))?;
let config = normalize_config_id(config, config_id)?;
config
.gen_config()
.map_err(|e| format!("generate toml failed: {}", e))?;
Ok(config)
}
pub(super) fn config_to_top_level_map(config: &NetworkConfig) -> Option<Map<String, Value>> {
serde_json::to_value(config).ok()?.as_object().cloned()
}
@@ -0,0 +1,2 @@
pub(crate) mod config_api;
pub(crate) mod runtime_api;
@@ -0,0 +1,46 @@
use crate::config;
pub(crate) fn init_config_store(root_dir: String) -> bool {
config::repository::init_config_store(root_dir)
}
pub(crate) fn list_configs() -> String {
config::repository::list_config_meta_json()
}
pub(crate) fn save_config(config_id: String, display_name: String, config_json: String) -> bool {
config::repository::save_config_record(config_id, display_name, config_json).is_some()
}
pub(crate) fn create_config(config_id: String, display_name: String) -> bool {
config::repository::create_config_record(config_id, display_name).is_some()
}
pub(crate) fn delete_stored_config_meta(config_id: String) -> bool {
config::repository::delete_config_record(&config_id)
}
pub(crate) fn get_config(config_id: String) -> Option<String> {
config::repository::load_config_json(&config_id)
}
pub(crate) fn get_default_config() -> Option<String> {
config::repository::get_default_config_json()
}
pub(crate) fn get_config_field(config_id: String, field: String) -> Option<String> {
config::repository::get_config_field_value(&config_id, &field)
}
pub(crate) fn set_config_field(config_id: String, field: String, json_value: String) -> bool {
config::repository::set_config_field_value(&config_id, &field, &json_value)
}
pub(crate) fn import_toml(toml_text: String, display_name: Option<String>) -> Option<String> {
config::repository::import_toml_config(toml_text, display_name)
.map(|record| record.meta.config_id)
}
pub(crate) fn export_toml(config_id: String) -> Option<String> {
config::repository::export_config_toml(&config_id).map(|ret| ret.toml_text)
}
@@ -0,0 +1,184 @@
use crate::config::repository::load_config_json;
use crate::config::storage::config_meta::get_config_display_name;
use crate::config::types::stored_config::KeyValuePair;
use crate::kernel_bridge::{
aggregate_requested_tun_routes, start_local_socket_server as start_local_socket_server_inner,
stop_local_socket_server as stop_local_socket_server_inner,
};
use crate::runtime::state::runtime_state::{
RuntimeAggregateState, TunAggregateState, clear_tun_attached, mark_tun_attached,
runtime_instance_from_running_info,
};
use crate::{ASYNC_RUNTIME, EASYTIER_VERSION, INSTANCE_MANAGER, WEB_CLIENTS};
use easytier::proto::api::manage::NetworkConfig;
use ohos_hilog_binding::{hilog_error, hilog_info};
use std::sync::Arc;
pub(crate) fn start_kernel(
config_id: String,
start_kernel_with_config_id: impl Fn(&str) -> bool,
) -> bool {
start_kernel_with_config_id(&config_id)
}
pub(crate) fn stop_kernel(
config_id: String,
stop_web_client: impl Fn(&str) -> bool,
parse_instance_uuid: impl Fn(&str) -> Option<uuid::Uuid>,
maybe_stop_local_socket_server: impl Fn(),
) -> bool {
clear_tun_attached(&config_id);
if stop_web_client(&config_id) {
return true;
}
let Some(instance_id) = parse_instance_uuid(&config_id) else {
return false;
};
let ret = INSTANCE_MANAGER
.delete_network_instance(vec![instance_id])
.map(|_| true)
.unwrap_or_else(|err| {
hilog_error!("[Rust] stop_kernel failed {}: {}", config_id, err);
false
});
maybe_stop_local_socket_server();
ret
}
pub(crate) fn stop_network_instance(
config_ids: Vec<String>,
stop_kernel: impl Fn(String) -> bool,
) -> bool {
let mut ok = true;
for config_id in config_ids {
ok = stop_kernel(config_id) && ok;
}
ok
}
pub(crate) fn collect_network_infos() -> Vec<KeyValuePair> {
let infos = match INSTANCE_MANAGER.collect_network_infos_sync() {
Ok(infos) => infos,
Err(err) => {
hilog_error!("[Rust] collect network infos failed {}", err);
return vec![];
}
};
infos
.into_iter()
.filter_map(|(key, value)| {
serde_json::to_string(&value)
.ok()
.map(|value_json| KeyValuePair {
key: key.to_string(),
value: value_json,
})
})
.collect()
}
pub(crate) fn set_tun_fd(
config_id: String,
fd: i32,
parse_instance_uuid: impl Fn(&str) -> Option<uuid::Uuid>,
) -> bool {
let Some(instance_id) = parse_instance_uuid(&config_id) else {
hilog_error!("[Rust] set_tun_fd invalid instance id: {}", config_id);
return false;
};
INSTANCE_MANAGER
.set_tun_fd(&instance_id, fd)
.map(|_| {
mark_tun_attached(&config_id);
hilog_info!(
"[Rust] set_tun_fd success instance={} fd={} marked_attached=true",
config_id,
fd
);
true
})
.unwrap_or_else(|err| {
hilog_error!("[Rust] set_tun_fd failed {}: {}", config_id, err);
false
})
}
pub(crate) fn get_runtime_snapshot() -> RuntimeAggregateState {
get_runtime_snapshot_inner()
}
pub(crate) fn get_runtime_snapshot_inner() -> RuntimeAggregateState {
let infos = match INSTANCE_MANAGER.collect_network_infos_sync() {
Ok(infos) => infos,
Err(err) => {
hilog_error!("[Rust] collect network infos failed {}", err);
return RuntimeAggregateState {
instances: vec![],
tun: TunAggregateState {
active: false,
attached_instance_ids: vec![],
aggregated_routes: vec![],
dns_servers: vec![],
need_rebuild: false,
},
running_instance_count: 0,
};
}
};
let mut instances = Vec::with_capacity(infos.len());
for (instance_uuid, info) in infos {
let config_id = instance_uuid.to_string();
let display_name = get_config_display_name(&config_id).unwrap_or_else(|| config_id.clone());
let config_json = load_config_json(&config_id);
let stored_config = config_json
.as_deref()
.and_then(|raw| serde_json::from_str::<NetworkConfig>(raw).ok());
let magic_dns_enabled = stored_config
.as_ref()
.and_then(|cfg| cfg.enable_magic_dns)
.unwrap_or(false);
let need_exit_node = stored_config
.as_ref()
.map(|cfg| !cfg.exit_nodes.is_empty())
.unwrap_or(false);
instances.push(runtime_instance_from_running_info(
config_id,
display_name,
magic_dns_enabled,
need_exit_node,
info,
));
}
instances.sort_by(|a, b| {
a.display_name
.cmp(&b.display_name)
.then_with(|| a.instance_id.cmp(&b.instance_id))
});
let attached_instance_ids = instances
.iter()
.filter(|instance| instance.tun_required)
.map(|instance| instance.instance_id.clone())
.collect::<Vec<_>>();
let aggregated_routes = aggregate_requested_tun_routes(&instances);
let running_instance_count =
instances.iter().filter(|instance| instance.running).count() as i32;
let tun_active = !attached_instance_ids.is_empty();
RuntimeAggregateState {
instances,
tun: TunAggregateState {
active: tun_active,
attached_instance_ids,
aggregated_routes,
dns_servers: vec![],
need_rebuild: false,
},
running_instance_count,
}
}
@@ -0,0 +1,6 @@
mod protocol;
mod routing;
mod socket_server;
pub(crate) use routing::aggregate_requested_tun_routes;
pub use socket_server::{start_local_socket_server, stop_local_socket_server};
@@ -0,0 +1,50 @@
use crate::config::types::stored_config::LocalSocketSyncMessage;
use serde::Serialize;
use std::io::{Error, ErrorKind, Write};
use std::os::unix::net::UnixStream;
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct TunRequestPayload {
pub config_id: String,
pub instance_id: String,
pub display_name: String,
pub virtual_ipv4: Option<String>,
pub virtual_ipv4_cidr: Option<String>,
pub aggregated_routes: Vec<String>,
pub magic_dns_enabled: bool,
pub need_exit_node: bool,
}
pub(crate) fn send_local_socket_message(
stream: &mut UnixStream,
message_type: &str,
payload_json: String,
) -> std::io::Result<()> {
let message = LocalSocketSyncMessage {
message_type: message_type.to_string(),
payload_json,
};
let mut raw = serde_json::to_vec(&message)
.map_err(|err| Error::new(ErrorKind::InvalidData, err.to_string()))?;
raw.push(b'\n');
stream.write_all(&raw)?;
Ok(())
}
pub(crate) fn broadcast_local_socket_message(
clients: &mut Vec<UnixStream>,
message_type: &str,
payload_json: &str,
) -> bool {
let mut active_clients = Vec::with_capacity(clients.len());
let mut delivered = false;
for mut client in clients.drain(..) {
if send_local_socket_message(&mut client, message_type, payload_json.to_string()).is_ok() {
delivered = true;
active_clients.push(client);
}
}
*clients = active_clients;
delivered
}
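`broadcast_local_socket_message` prunes dead clients as a side effect of sending: it drains the list and keeps only streams that accepted the frame. The pattern generalized from `UnixStream` to any `Write` sink, so it runs anywhere (a sketch; `Broken` stands in for a disconnected peer):

```rust
use std::io::{self, Write};

// Hedged sketch of the prune-on-failure broadcast in the source: drain the
// client list, keep only sinks that accepted the frame, report whether at
// least one delivery succeeded.
fn broadcast(clients: &mut Vec<Box<dyn Write>>, frame: &[u8]) -> bool {
    let mut alive = Vec::with_capacity(clients.len());
    let mut delivered = false;
    for mut client in clients.drain(..) {
        if client.write_all(frame).is_ok() {
            delivered = true;
            alive.push(client); // dead clients are silently dropped
        }
    }
    *clients = alive;
    delivered
}

// A sink that always fails, standing in for a disconnected peer.
struct Broken;
impl Write for Broken {
    fn write(&mut self, _: &[u8]) -> io::Result<usize> {
        Err(io::Error::new(io::ErrorKind::BrokenPipe, "gone"))
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

fn main() {
    let mut clients: Vec<Box<dyn Write>> = vec![Box::new(Vec::new()), Box::new(Broken)];
    assert!(broadcast(&mut clients, b"{\"messageType\":\"ping\"}\n"));
    assert_eq!(clients.len(), 1); // the broken peer was pruned
}
```

Pruning during the broadcast means the server never needs a separate liveness check; a peer that closes its socket simply drops out on the next frame.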
@@ -0,0 +1,105 @@
use crate::config::repository::load_config_json;
use crate::runtime::state::runtime_state::RuntimeInstanceState;
use easytier::proto::api::manage::NetworkConfig;
use ipnet::IpNet;
use ohos_hilog_binding::hilog_debug;
use std::collections::HashSet;
use std::net::IpAddr;
pub(crate) fn load_manual_routes(config_id: &str) -> Vec<String> {
load_config_json(config_id)
.and_then(|raw| serde_json::from_str::<NetworkConfig>(&raw).ok())
.map(|config| config.routes)
.unwrap_or_default()
}
fn normalize_route_cidr(route: &str) -> Option<String> {
route
.parse::<IpNet>()
.ok()
.map(|network| match network {
IpNet::V4(net) => net.trunc().to_string(),
IpNet::V6(net) => net.trunc().to_string(),
})
.or_else(|| {
route.parse::<IpAddr>().ok().map(|addr| match addr {
IpAddr::V4(ip) => format!("{}/32", ip),
IpAddr::V6(ip) => format!("{}/128", ip),
})
})
}
fn simplify_routes(routes: Vec<String>) -> Vec<String> {
let mut parsed = routes
.into_iter()
.filter_map(|route| normalize_route_cidr(&route))
.filter_map(|route| route.parse::<IpNet>().ok())
.collect::<Vec<_>>();
parsed.sort_by(|left, right| {
left.prefix_len()
.cmp(&right.prefix_len())
.then_with(|| left.network().to_string().cmp(&right.network().to_string()))
});
let mut simplified = Vec::<IpNet>::new();
'outer: for route in parsed {
for existing in &simplified {
if existing.contains(&route.network()) && existing.prefix_len() <= route.prefix_len() {
continue 'outer;
}
}
simplified.retain(|existing| {
!(route.contains(&existing.network()) && route.prefix_len() <= existing.prefix_len())
});
simplified.push(route);
}
let mut seen = HashSet::new();
simplified
.into_iter()
.map(|route| route.to_string())
.filter(|route| seen.insert(route.clone()))
.collect()
}
pub(crate) fn aggregate_tun_routes(instance: &RuntimeInstanceState) -> Vec<String> {
let virtual_ipv4_cidr = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4_cidr.clone());
let manual_routes = load_manual_routes(&instance.config_id);
let proxy_cidrs = instance
.routes
.iter()
.flat_map(|route| route.proxy_cidrs.iter().cloned())
.collect::<Vec<_>>();
let mut raw_routes = Vec::new();
if let Some(cidr) = virtual_ipv4_cidr.clone() {
raw_routes.push(cidr);
}
raw_routes.extend(manual_routes.iter().cloned());
raw_routes.extend(proxy_cidrs.iter().cloned());
let aggregated_routes = simplify_routes(raw_routes);
hilog_debug!(
"[Rust] aggregate_tun_routes instance={} proxy_cidrs={:?} aggregated_routes={:?}",
instance.instance_id,
proxy_cidrs,
aggregated_routes
);
aggregated_routes
}
pub(crate) fn aggregate_requested_tun_routes(instances: &[RuntimeInstanceState]) -> Vec<String> {
let mut aggregated_routes = Vec::new();
let mut seen_routes = HashSet::new();
for instance in instances.iter().filter(|instance| instance.tun_required) {
for route in aggregate_tun_routes(instance) {
if seen_routes.insert(route.clone()) {
aggregated_routes.push(route);
}
}
}
aggregated_routes
}
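The core of `simplify_routes` is a supernet-absorbs-subnet rule: sort widest prefixes first, then drop any route already covered by a kept one. A stdlib-only IPv4 sketch of that rule using plain `u32` mask arithmetic instead of the `ipnet` crate the real code uses (`parse_cidr`, `covers`, and `simplify` are illustrative names):

```rust
use std::net::Ipv4Addr;

// Hedged sketch of the supernet-absorbs-subnet rule behind simplify_routes,
// restricted to IPv4. A CIDR is (network_u32, prefix_len).
fn parse_cidr(s: &str) -> Option<(u32, u8)> {
    let (addr, len) = s.split_once('/')?;
    let ip: Ipv4Addr = addr.parse().ok()?;
    let len: u8 = len.parse().ok()?;
    if len > 32 { return None; }
    let mask = if len == 0 { 0 } else { u32::MAX << (32 - len) };
    Some((u32::from(ip) & mask, len)) // truncate to the network address
}

// A route covers another when it is no more specific and shares the prefix.
fn covers(sup: (u32, u8), sub: (u32, u8)) -> bool {
    let mask = if sup.1 == 0 { 0 } else { u32::MAX << (32 - sup.1) };
    sup.1 <= sub.1 && (sub.0 & mask) == sup.0
}

fn simplify(routes: &[&str]) -> Vec<(u32, u8)> {
    let mut parsed: Vec<_> = routes.iter().filter_map(|r| parse_cidr(r)).collect();
    parsed.sort_by_key(|r| r.1); // shortest prefix (widest network) first
    let mut kept: Vec<(u32, u8)> = Vec::new();
    for route in parsed {
        if !kept.iter().any(|k| covers(*k, route)) {
            kept.push(route);
        }
    }
    kept
}

fn main() {
    // 10.0.1.0/24 is absorbed by 10.0.0.0/16; 192.168.1.1/32 survives.
    let kept = simplify(&["10.0.1.0/24", "10.0.0.0/16", "192.168.1.1/32"]);
    assert_eq!(kept.len(), 2);
}
```

Sorting by prefix length first is what makes a single pass sufficient: every potential supernet is already in `kept` before any of its subnets is examined.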
@@ -0,0 +1,196 @@
use super::protocol::{TunRequestPayload, broadcast_local_socket_message};
use crate::config::repository::kernel_socket_path;
use crate::get_runtime_snapshot_inner;
use crate::kernel_bridge::routing::aggregate_tun_routes;
use ohos_hilog_binding::{hilog_error, hilog_info};
use once_cell::sync::Lazy;
use std::collections::{HashMap, HashSet};
use std::io::ErrorKind;
use std::os::unix::net::{UnixListener, UnixStream};
use std::path::PathBuf;
use std::sync::Mutex;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread::{self, JoinHandle};
use std::time::Duration;
struct LocalSocketState {
stop_flag: std::sync::Arc<AtomicBool>,
socket_path: PathBuf,
worker: JoinHandle<()>,
}
static LOCAL_SOCKET_STATE: Lazy<Mutex<Option<LocalSocketState>>> = Lazy::new(|| Mutex::new(None));
pub fn start_local_socket_server() -> bool {
let socket_path = match kernel_socket_path() {
Some(path) => path,
None => {
hilog_error!("[Rust] kernel socket path unavailable");
return false;
}
};
match LOCAL_SOCKET_STATE.lock() {
Ok(guard) if guard.is_some() => return true,
Ok(_) => {}
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
return false;
}
}
if socket_path.exists() {
let _ = std::fs::remove_file(&socket_path);
}
let listener = match UnixListener::bind(&socket_path) {
Ok(listener) => listener,
Err(err) => {
hilog_error!(
"[Rust] bind localsocket failed {}: {}",
socket_path.display(),
err
);
return false;
}
};
if let Err(err) = listener.set_nonblocking(true) {
hilog_error!("[Rust] set localsocket nonblocking failed: {}", err);
let _ = std::fs::remove_file(&socket_path);
return false;
}
let stop_flag = std::sync::Arc::new(AtomicBool::new(false));
let worker_stop_flag = stop_flag.clone();
let worker = thread::spawn(move || {
let mut last_snapshot_json = String::new();
let mut delivered_tun_requests = HashSet::new();
let mut last_tun_route_signatures = HashMap::<String, String>::new();
let mut clients = Vec::<UnixStream>::new();
while !worker_stop_flag.load(Ordering::Relaxed) {
let mut accepted_client = false;
loop {
match listener.accept() {
Ok((stream, _addr)) => {
accepted_client = true;
clients.push(stream);
}
Err(err) if err.kind() == ErrorKind::WouldBlock => break,
Err(err) => {
hilog_error!("[Rust] accept localsocket failed: {}", err);
break;
}
}
}
let snapshot = get_runtime_snapshot_inner();
let snapshot_json = match serde_json::to_string(&snapshot) {
Ok(json) => json,
Err(err) => {
hilog_error!("[Rust] serialize runtime snapshot failed: {}", err);
thread::sleep(Duration::from_millis(250));
continue;
}
};
if accepted_client || snapshot_json != last_snapshot_json {
let _ = broadcast_local_socket_message(
&mut clients,
"runtime_snapshot",
&snapshot_json,
);
last_snapshot_json = snapshot_json;
}
for instance in snapshot.instances.iter() {
if instance.running && instance.tun_required {
let virtual_ipv4 = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4.clone());
let virtual_ipv4_cidr = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4_cidr.clone());
if clients.is_empty() {
continue;
}
if virtual_ipv4.is_none() || virtual_ipv4_cidr.is_none() {
continue;
}
let aggregated_routes = aggregate_tun_routes(instance);
let route_signature = serde_json::to_string(&aggregated_routes)
.unwrap_or_else(|_| "[]".to_string());
let should_send = !delivered_tun_requests.contains(&instance.instance_id)
|| last_tun_route_signatures
.get(&instance.instance_id)
.map(|value| value != &route_signature)
.unwrap_or(true);
if !should_send {
continue;
}
let payload = TunRequestPayload {
config_id: instance.config_id.clone(),
instance_id: instance.instance_id.clone(),
display_name: instance.display_name.clone(),
virtual_ipv4,
virtual_ipv4_cidr,
aggregated_routes,
magic_dns_enabled: instance.magic_dns_enabled,
need_exit_node: instance.need_exit_node,
};
let payload_json = match serde_json::to_string(&payload) {
Ok(json) => json,
Err(err) => {
hilog_error!("[Rust] serialize tun request failed: {}", err);
continue;
}
};
if broadcast_local_socket_message(&mut clients, "tun_request", &payload_json) {
delivered_tun_requests.insert(instance.instance_id.clone());
last_tun_route_signatures
.insert(instance.instance_id.clone(), route_signature);
}
} else {
delivered_tun_requests.remove(&instance.instance_id);
last_tun_route_signatures.remove(&instance.instance_id);
}
}
thread::sleep(Duration::from_millis(250));
}
});
match LOCAL_SOCKET_STATE.lock() {
Ok(mut guard) => {
*guard = Some(LocalSocketState {
stop_flag,
socket_path,
worker,
});
true
}
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
false
}
}
}
pub fn stop_local_socket_server() -> bool {
let state = match LOCAL_SOCKET_STATE.lock() {
Ok(mut guard) => guard.take(),
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
return false;
}
};
if let Some(state) = state {
state.stop_flag.store(true, Ordering::Relaxed);
let _ = state.worker.join();
let _ = std::fs::remove_file(state.socket_path);
}
true
}
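The worker loop above only re-broadcasts a `tun_request` when an instance has never been delivered to or its serialized route list changed. That decision isolated as a stdlib sketch (`should_send` is an illustrative name for the inline condition in the loop):

```rust
use std::collections::{HashMap, HashSet};

// Hedged sketch of the re-send decision in the socket server loop: send when
// the instance was never delivered to, or when its route signature (the
// serialized aggregated route list) differs from the last delivered one.
fn should_send(
    delivered: &HashSet<String>,
    last_signatures: &HashMap<String, String>,
    instance_id: &str,
    signature: &str,
) -> bool {
    !delivered.contains(instance_id)
        || last_signatures
            .get(instance_id)
            .map(|prev| prev != signature)
            .unwrap_or(true)
}

fn main() {
    let mut delivered = HashSet::new();
    let mut sigs = HashMap::new();
    assert!(should_send(&delivered, &sigs, "i1", "[\"10.0.0.0/16\"]")); // first time
    delivered.insert("i1".to_string());
    sigs.insert("i1".to_string(), "[\"10.0.0.0/16\"]".to_string());
    assert!(!should_send(&delivered, &sigs, "i1", "[\"10.0.0.0/16\"]")); // unchanged
    assert!(should_send(&delivered, &sigs, "i1", "[\"10.0.0.0/8\"]")); // routes changed
}
```

Comparing whole serialized signatures sidesteps per-route diffing: any change to the aggregated set, including reordering removals, produces a new string and triggers one fresh `tun_request`.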
@@ -1,185 +1,485 @@
mod config;
mod exports;
mod kernel_bridge;
mod platform;
mod runtime;
use config::repository::{
    create_config_record, delete_config_record, export_config_toml, get_config_field_value,
    get_default_config_json, import_toml_config, init_config_store as init_repo_store,
    list_config_meta_json, save_config_record, set_config_field_value, start_kernel_with_config_id,
};
use config::services::schema_service::{
    ConfigFieldMapping, NetworkConfigSchema,
    get_network_config_field_mappings as build_network_config_field_mappings,
    get_network_config_schema as build_network_config_schema,
};
use config::services::share_link_service::{
    build_config_share_link as build_config_share_link_inner,
    import_config_share_link as import_config_share_link_inner,
    parse_config_share_link as parse_config_share_link_inner,
};
use config::storage::config_meta::get_config_display_name;
use config::types::stored_config::{KeyValuePair, SharedConfigLinkPayload};
use easytier::common::constants::EASYTIER_VERSION;
use easytier::common::{
    MachineIdOptions,
    config::{ConfigFileControl, ConfigLoader, TomlConfigLoader},
};
use easytier::instance_manager::NetworkInstanceManager;
use easytier::proto::api::manage::NetworkConfig;
use easytier::proto::api::manage::NetworkingMethod;
use easytier::web_client::{WebClient, WebClientHooks, run_web_client};
use kernel_bridge::{
    aggregate_requested_tun_routes, start_local_socket_server as start_local_socket_server_inner,
    stop_local_socket_server as stop_local_socket_server_inner,
};
use napi_derive_ohos::napi;
use ohos_hilog_binding::{hilog_error, hilog_info};
use runtime::state::runtime_state::{
    RuntimeAggregateState, TunAggregateState, clear_tun_attached, mark_tun_attached,
    runtime_instance_from_running_info,
};
use std::collections::{HashMap, HashSet};
use std::format;
use std::sync::{Arc, Mutex};
use tokio::runtime::{Builder, Runtime};
use uuid::Uuid;
pub(crate) static INSTANCE_MANAGER: once_cell::sync::Lazy<Arc<NetworkInstanceManager>> =
    once_cell::sync::Lazy::new(|| Arc::new(NetworkInstanceManager::new()));
static ASYNC_RUNTIME: once_cell::sync::Lazy<Runtime> = once_cell::sync::Lazy::new(|| {
    Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("tokio runtime for easytier-ohrs")
});
static WEB_CLIENTS: once_cell::sync::Lazy<Mutex<HashMap<String, ManagedWebClient>>> =
    once_cell::sync::Lazy::new(|| Mutex::new(HashMap::new()));
#[derive(Default)]
struct TrackedWebClientHooks {
    instance_ids: Mutex<HashSet<Uuid>>,
}
struct ManagedWebClient {
    _client: WebClient,
    hooks: Arc<TrackedWebClientHooks>,
}
#[async_trait::async_trait]
impl WebClientHooks for TrackedWebClientHooks {
    async fn post_run_network_instance(&self, id: &Uuid) -> Result<(), String> {
        self.instance_ids
            .lock()
            .map_err(|err| err.to_string())?
            .insert(*id);
        Ok(())
    }

    async fn post_remove_network_instances(&self, ids: &[Uuid]) -> Result<(), String> {
        let mut guard = self.instance_ids.lock().map_err(|err| err.to_string())?;
        for id in ids {
            guard.remove(id);
        }
        Ok(())
    }
}
fn is_config_server_config(config: &NetworkConfig) -> bool {
matches!(
NetworkingMethod::try_from(config.networking_method.unwrap_or_default())
.unwrap_or_default(),
NetworkingMethod::PublicServer
) && config
.public_server_url
.as_ref()
.is_some_and(|url| !url.trim().is_empty())
}
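The helper above routes a config to the "config server" (web client) path only when the networking method is `PublicServer` and a non-blank server URL is present. The same dispatch can be mirrored in a self-contained sketch — the types below are simplified stand-ins, not the real `NetworkConfig`/`NetworkingMethod`, and treating `PublicServer` as the default variant is an assumption:

```rust
// Hypothetical, simplified mirror of is_config_server_config.
enum Method {
    PublicServer,
    Standalone,
}

struct Cfg {
    networking_method: Option<Method>,
    public_server_url: Option<String>,
}

fn is_config_server_cfg(cfg: &Cfg) -> bool {
    // An absent method falls back to PublicServer (assumed default here).
    matches!(
        cfg.networking_method.as_ref().unwrap_or(&Method::PublicServer),
        Method::PublicServer
    ) && cfg
        .public_server_url
        .as_ref()
        .is_some_and(|url| !url.trim().is_empty())
}
```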
fn stop_web_client(config_id: &str) -> bool {
let managed = match WEB_CLIENTS.lock() {
Ok(mut guard) => guard.remove(config_id),
Err(err) => {
hilog_error!("[Rust] stop_web_client lock failed {}", err);
return false;
}
};
let Some(managed) = managed else {
return false;
};
let tracked_ids = managed
.hooks
.instance_ids
.lock()
.map(|guard| guard.iter().copied().collect::<Vec<_>>())
.unwrap_or_default();
drop(managed);
if tracked_ids.is_empty() {
maybe_stop_local_socket_server();
return true;
}
let ret = INSTANCE_MANAGER
.delete_network_instance(tracked_ids)
.map(|_| true)
.unwrap_or_else(|err| {
hilog_error!(
"[Rust] stop config server instances failed {}: {}",
config_id,
err
);
false
});
maybe_stop_local_socket_server();
ret
}
fn ensure_local_socket_server_started() -> bool {
start_local_socket_server_inner()
}
fn maybe_stop_local_socket_server() {
let no_local_instances = INSTANCE_MANAGER.list_network_instance_ids().is_empty();
let no_web_clients = WEB_CLIENTS
.lock()
.map(|guard| guard.is_empty())
.unwrap_or(false);
if no_local_instances && no_web_clients {
let _ = stop_local_socket_server_inner();
}
}
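One detail worth noting in `maybe_stop_local_socket_server`: the `unwrap_or(false)` means a poisoned `WEB_CLIENTS` lock reports "not empty", so the shared socket server is kept alive rather than torn down on an inconsistent state. A minimal standalone sketch of that conservative default (illustrative names, not the real registry type):

```rust
use std::sync::Mutex;

// Returns true only when the registry is provably empty; a poisoned lock
// (a panic while it was held) conservatively reports "not empty".
fn registry_is_empty(registry: &Mutex<Vec<String>>) -> bool {
    registry.lock().map(|guard| guard.is_empty()).unwrap_or(false)
}
```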
fn run_config_server_instance(config_id: &str, config: &NetworkConfig) -> bool {
if INSTANCE_MANAGER
.list_network_instance_ids()
.iter()
.next()
.is_some()
{
hilog_error!("[Rust] there is a running instance!");
return false;
}
let Some(config_server_url) = config.public_server_url.clone() else {
hilog_error!("[Rust] public_server_url missing for config server mode");
return false;
};
let hooks = Arc::new(TrackedWebClientHooks::default());
let secure_mode = config
.secure_mode
.as_ref()
.map(|mode| mode.enabled)
.unwrap_or(false);
let hostname = config.hostname.clone();
if !ensure_local_socket_server_started() {
return false;
}
let client = ASYNC_RUNTIME.block_on(run_web_client(
&config_server_url,
MachineIdOptions::default(),
hostname,
secure_mode,
INSTANCE_MANAGER.clone(),
Some(hooks.clone()),
));
let client = match client {
Ok(client) => client,
Err(err) => {
hilog_error!("[Rust] start config server failed {}", err);
return false;
}
};
match WEB_CLIENTS.lock() {
Ok(mut guard) => {
guard.insert(
config_id.to_string(),
ManagedWebClient {
_client: client,
hooks,
},
);
true
}
Err(err) => {
hilog_error!("[Rust] store config server client failed {}", err);
            false
        }
    }
}
pub(crate) fn build_default_network_config_json() -> Result<String, String> {
    let config = NetworkConfig::new_from_config(TomlConfigLoader::default())
        .map_err(|e| format!("default_network_config failed {}", e))?;
    serde_json::to_string(&config).map_err(|e| format!("default_network_config failed {}", e))
}
fn convert_toml_to_network_config_inner(toml_text: &str) -> Result<String, String> {
    let config = NetworkConfig::new_from_config(
        TomlConfigLoader::new_from_str(toml_text).map_err(|e| e.to_string())?,
    )
    .map_err(|e| e.to_string())?;
    serde_json::to_string(&config).map_err(|e| e.to_string())
}
fn parse_network_config_inner(cfg_json: &str) -> bool {
    serde_json::from_str::<NetworkConfig>(cfg_json)
        .ok()
        .and_then(|cfg| cfg.gen_config().ok())
        .is_some()
}
pub(crate) fn run_network_instance_from_json(cfg_json: &str) -> bool {
    let config = match serde_json::from_str::<NetworkConfig>(cfg_json) {
        Ok(cfg) => cfg,
        Err(e) => {
            hilog_error!("[Rust] parse config failed {}", e);
            return false;
        }
    };
    if is_config_server_config(&config) {
let Some(config_id) = config.instance_id.as_deref() else {
hilog_error!("[Rust] config server config missing instance id");
return false;
};
return run_config_server_instance(config_id, &config);
}
let cfg = match config.gen_config() {
Ok(toml) => toml,
Err(e) => {
hilog_error!("[Rust] parse config failed {}", e);
return false;
}
};
if !INSTANCE_MANAGER.list_network_instance_ids().is_empty() {
        hilog_error!("[Rust] there is a running instance!");
        return false;
    }
if !ensure_local_socket_server_started() {
return false;
}
    let inst_id = cfg.get_id();
    if INSTANCE_MANAGER
        .list_network_instance_ids()
        .contains(&inst_id)
    {
        hilog_error!("[Rust] instance {} already exists", inst_id);
        return false;
    }
    match INSTANCE_MANAGER.run_network_instance(cfg, false, ConfigFileControl::STATIC_CONFIG) {
        Ok(_) => true,
        Err(err) => {
            hilog_error!("[Rust] start_kernel failed for {}: {}", inst_id, err);
            false
        }
    }
}
fn parse_instance_uuid(config_id: &str) -> Option<Uuid> {
match Uuid::parse_str(config_id) {
Ok(uuid) => Some(uuid),
Err(err) => {
hilog_error!("[Rust] invalid config_id {}: {}", config_id, err);
None
}
}
}
#[napi]
pub fn init_config_store(root_dir: String) -> bool {
exports::config_api::init_config_store(root_dir)
}
#[napi]
pub fn list_configs() -> String {
exports::config_api::list_configs()
}
#[napi]
pub fn get_config_display_name_by_id(config_id: String) -> Option<String> {
get_config_display_name(&config_id)
}
#[napi]
pub fn save_config(config_id: String, display_name: String, config_json: String) -> bool {
exports::config_api::save_config(config_id, display_name, config_json)
}
#[napi]
pub fn create_config(config_id: String, display_name: String) -> bool {
exports::config_api::create_config(config_id, display_name)
}
#[napi]
pub fn rename_stored_config(config_id: String, display_name: String) -> bool {
config::storage::config_meta::set_config_display_name(config_id, display_name).is_some()
}
#[napi]
pub fn delete_stored_config_meta(config_id: String) -> bool {
exports::config_api::delete_stored_config_meta(config_id)
}
#[napi]
pub fn get_config(config_id: String) -> Option<String> {
exports::config_api::get_config(config_id)
}
#[napi]
pub fn get_default_config() -> Option<String> {
exports::config_api::get_default_config()
}
#[napi]
pub fn get_config_field(config_id: String, field: String) -> Option<String> {
exports::config_api::get_config_field(config_id, field)
}
#[napi]
pub fn set_config_field(config_id: String, field: String, json_value: String) -> bool {
exports::config_api::set_config_field(config_id, field, json_value)
}
#[napi]
pub fn import_toml(toml_text: String, display_name: Option<String>) -> Option<String> {
exports::config_api::import_toml(toml_text, display_name)
}
#[napi]
pub fn export_toml(config_id: String) -> Option<String> {
exports::config_api::export_toml(config_id)
}
#[napi]
pub fn start_kernel(config_id: String) -> bool {
exports::runtime_api::start_kernel(config_id, start_kernel_with_config_id)
}
#[napi]
pub fn stop_kernel(config_id: String) -> bool {
exports::runtime_api::stop_kernel(
config_id,
stop_web_client,
parse_instance_uuid,
maybe_stop_local_socket_server,
)
}
#[napi]
pub fn stop_network_instance(config_ids: Vec<String>) -> bool {
exports::runtime_api::stop_network_instance(config_ids, stop_kernel)
}
#[napi]
pub fn easytier_version() -> String {
EASYTIER_VERSION.to_string()
}
#[napi]
pub fn default_network_config() -> String {
get_default_config().unwrap_or_else(|| "{}".to_string())
}
#[napi]
pub fn convert_toml_to_network_config(toml_text: String) -> String {
convert_toml_to_network_config_inner(&toml_text).unwrap_or_else(|err| format!("ERROR: {err}"))
}
#[napi]
pub fn parse_network_config(cfg_json: String) -> bool {
parse_network_config_inner(&cfg_json)
}
#[napi]
pub fn run_network_instance(cfg_json: String) -> bool {
run_network_instance_from_json(&cfg_json)
}
#[napi]
pub fn collect_network_infos() -> Vec<KeyValuePair> {
exports::runtime_api::collect_network_infos()
}
#[napi]
pub fn set_tun_fd(config_id: String, fd: i32) -> bool {
exports::runtime_api::set_tun_fd(config_id, fd, parse_instance_uuid)
}
#[napi]
pub fn get_network_config_schema() -> NetworkConfigSchema {
build_network_config_schema()
}
#[napi]
pub fn get_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
build_network_config_field_mappings()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn exported_plain_object_schema_contains_core_networkconfig_metadata() {
let schema = get_network_config_schema();
assert_eq!(schema.name, "NetworkConfig");
assert_eq!(schema.node_kind, "schema");
assert!(
schema
.children
.iter()
.any(|field| field.name == "network_name")
);
let secure_mode = schema
.children
.iter()
.find(|field| field.name == "secure_mode")
.expect("secure_mode field");
assert!(
secure_mode
.children
.iter()
.any(|field| field.name == "enabled")
);
}
}
#[napi]
pub fn get_runtime_snapshot() -> RuntimeAggregateState {
exports::runtime_api::get_runtime_snapshot()
}
pub(crate) fn get_runtime_snapshot_inner() -> RuntimeAggregateState {
exports::runtime_api::get_runtime_snapshot_inner()
}
#[napi]
pub fn build_config_share_link(config_id: String, only_start: Option<bool>) -> Option<String> {
build_config_share_link_inner(&config_id, None, only_start.unwrap_or(false))
}
#[napi]
pub fn parse_config_share_link(share_link: String) -> Option<SharedConfigLinkPayload> {
parse_config_share_link_inner(&share_link)
}
#[napi]
pub fn import_config_share_link(
share_link: String,
display_name_override: Option<String>,
) -> Option<String> {
import_config_share_link_inner(&share_link, display_name_override)
}
@@ -0,0 +1 @@
pub(crate) mod logging;
@@ -0,0 +1 @@
pub(crate) mod native_log;
@@ -0,0 +1 @@
pub(crate) mod state;
@@ -0,0 +1 @@
pub(crate) mod runtime_state;
@@ -0,0 +1,293 @@
use easytier::proto::{api, common};
use napi_derive_ohos::napi;
use serde::Serialize;
use std::collections::HashSet;
use std::sync::Mutex;
static ATTACHED_TUN_INSTANCE_IDS: once_cell::sync::Lazy<Mutex<HashSet<String>>> =
once_cell::sync::Lazy::new(|| Mutex::new(HashSet::new()));
pub fn mark_tun_attached(instance_id: &str) {
if let Ok(mut guard) = ATTACHED_TUN_INSTANCE_IDS.lock() {
guard.insert(instance_id.to_string());
}
}
pub fn clear_tun_attached(instance_id: &str) {
if let Ok(mut guard) = ATTACHED_TUN_INSTANCE_IDS.lock() {
guard.remove(instance_id);
}
}
pub fn is_tun_attached(instance_id: &str) -> bool {
ATTACHED_TUN_INSTANCE_IDS
.lock()
.map(|guard| guard.contains(instance_id))
.unwrap_or(false)
}
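The three helpers above can be reproduced with only the standard library — a hedged sketch that swaps the original's `once_cell::sync::Lazy` for `std::sync::OnceLock` (names shortened; behavior, including the best-effort handling of a poisoned lock, mirrors the original):

```rust
use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};

// Process-wide registry of instance ids that currently have a TUN attached.
fn attached_set() -> &'static Mutex<HashSet<String>> {
    static SET: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();
    SET.get_or_init(|| Mutex::new(HashSet::new()))
}

fn mark_attached(instance_id: &str) {
    // Marking is best-effort: a poisoned lock is silently ignored.
    if let Ok(mut guard) = attached_set().lock() {
        guard.insert(instance_id.to_string());
    }
}

fn clear_attached(instance_id: &str) {
    if let Ok(mut guard) = attached_set().lock() {
        guard.remove(instance_id);
    }
}

fn is_attached(instance_id: &str) -> bool {
    attached_set()
        .lock()
        .map(|guard| guard.contains(instance_id))
        .unwrap_or(false)
}
```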
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerConnStats {
pub rx_bytes: i64,
pub tx_bytes: i64,
pub rx_packets: i64,
pub tx_packets: i64,
pub latency_us: i64,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerConnInfo {
pub conn_id: String,
pub my_peer_id: i64,
pub peer_id: i64,
pub features: Vec<String>,
pub tunnel_type: Option<String>,
pub local_addr: Option<String>,
pub remote_addr: Option<String>,
pub resolved_remote_addr: Option<String>,
pub stats: Option<PeerConnStats>,
pub loss_rate: Option<f64>,
pub is_client: bool,
pub network_name: Option<String>,
pub is_closed: bool,
pub secure_auth_level: Option<i32>,
pub peer_identity_type: Option<i32>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerInfo {
pub peer_id: i64,
pub default_conn_id: Option<String>,
pub directly_connected_conns: Vec<String>,
pub conns: Vec<PeerConnInfo>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RouteView {
pub peer_id: i64,
pub hostname: Option<String>,
pub ipv4: Option<String>,
pub ipv4_cidr: Option<String>,
pub ipv6_cidr: Option<String>,
pub proxy_cidrs: Vec<String>,
pub next_hop_peer_id: Option<i64>,
pub cost: Option<i32>,
pub path_latency: Option<i64>,
pub udp_nat_type: Option<i32>,
pub tcp_nat_type: Option<i32>,
pub inst_id: Option<String>,
pub version: Option<String>,
pub is_public_server: Option<bool>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct MyNodeInfo {
pub virtual_ipv4: Option<String>,
pub virtual_ipv4_cidr: Option<String>,
pub hostname: Option<String>,
pub version: Option<String>,
pub peer_id: Option<i64>,
pub listeners: Vec<String>,
pub vpn_portal_cfg: Option<String>,
pub udp_nat_type: Option<i32>,
pub tcp_nat_type: Option<i32>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RuntimeInstanceState {
pub config_id: String,
pub instance_id: String,
pub display_name: String,
pub running: bool,
pub tun_required: bool,
pub tun_attached: bool,
pub magic_dns_enabled: bool,
pub need_exit_node: bool,
pub error_message: Option<String>,
pub my_node_info: Option<MyNodeInfo>,
pub events: Vec<String>,
pub routes: Vec<RouteView>,
pub peers: Vec<PeerInfo>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct TunAggregateState {
pub active: bool,
pub attached_instance_ids: Vec<String>,
pub aggregated_routes: Vec<String>,
pub dns_servers: Vec<String>,
pub need_rebuild: bool,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RuntimeAggregateState {
pub instances: Vec<RuntimeInstanceState>,
pub tun: TunAggregateState,
pub running_instance_count: i32,
}
fn stringify_ipv4_inet(value: Option<common::Ipv4Inet>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_ipv6_inet(value: Option<common::Ipv6Inet>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_url(value: Option<common::Url>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_uuid(value: Option<common::Uuid>) -> Option<String> {
value.map(|v| v.to_string())
}
fn optional_u32_to_i64(value: Option<u32>) -> Option<i64> {
value.map(|v| v as i64)
}
fn optional_i32_to_i64(value: Option<i32>) -> Option<i64> {
value.map(|v| v as i64)
}
fn route_to_view(route: api::instance::Route) -> RouteView {
let stun = route.stun_info;
let feature_flag = route.feature_flag;
RouteView {
peer_id: route.peer_id as i64,
hostname: (!route.hostname.is_empty()).then_some(route.hostname),
ipv4: route
.ipv4_addr
.as_ref()
.and_then(|inet| inet.address.as_ref())
.map(|addr| addr.to_string()),
ipv4_cidr: stringify_ipv4_inet(route.ipv4_addr),
ipv6_cidr: stringify_ipv6_inet(route.ipv6_addr),
proxy_cidrs: route.proxy_cidrs,
next_hop_peer_id: optional_u32_to_i64(route.next_hop_peer_id_latency_first)
.or_else(|| Some(route.next_hop_peer_id as i64)),
cost: Some(route.cost),
path_latency: optional_i32_to_i64(route.path_latency_latency_first)
.or_else(|| Some(route.path_latency as i64)),
udp_nat_type: stun.as_ref().map(|info| info.udp_nat_type),
tcp_nat_type: stun.as_ref().map(|info| info.tcp_nat_type),
inst_id: (!route.inst_id.is_empty()).then_some(route.inst_id),
version: (!route.version.is_empty()).then_some(route.version),
is_public_server: feature_flag.map(|flag| flag.is_public_server),
}
}
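`route_to_view` prefers the latency-first routing metrics and falls back to the hop-count fields that are always present. That two-step lookup, extracted as a standalone helper (illustrative names, not the real proto fields):

```rust
// Prefer the latency-first value when the peer reported one; otherwise
// fall back to the always-present hop-count-first field, widened to i64.
fn pick_next_hop(latency_first: Option<u32>, hop_count_first: u32) -> Option<i64> {
    latency_first
        .map(|v| v as i64)
        .or_else(|| Some(hop_count_first as i64))
}
```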
fn peer_conn_to_view(conn: api::instance::PeerConnInfo) -> PeerConnInfo {
let stats = conn.stats.map(|stats| PeerConnStats {
rx_bytes: stats.rx_bytes as i64,
tx_bytes: stats.tx_bytes as i64,
rx_packets: stats.rx_packets as i64,
tx_packets: stats.tx_packets as i64,
latency_us: stats.latency_us as i64,
});
PeerConnInfo {
conn_id: conn.conn_id,
my_peer_id: conn.my_peer_id as i64,
peer_id: conn.peer_id as i64,
features: conn.features,
tunnel_type: conn.tunnel.as_ref().map(|t| t.tunnel_type.clone()),
local_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.local_addr.clone())),
remote_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.remote_addr.clone())),
resolved_remote_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.resolved_remote_addr.clone())),
stats,
loss_rate: Some(conn.loss_rate as f64),
is_client: conn.is_client,
network_name: (!conn.network_name.is_empty()).then_some(conn.network_name),
is_closed: conn.is_closed,
secure_auth_level: Some(conn.secure_auth_level),
peer_identity_type: Some(conn.peer_identity_type),
}
}
fn peer_to_view(peer: api::instance::PeerInfo) -> PeerInfo {
PeerInfo {
peer_id: peer.peer_id as i64,
default_conn_id: stringify_uuid(peer.default_conn_id),
directly_connected_conns: peer
.directly_connected_conns
.into_iter()
.map(|id| id.to_string())
.collect(),
conns: peer.conns.into_iter().map(peer_conn_to_view).collect(),
}
}
fn my_node_info_to_view(info: api::manage::MyNodeInfo) -> MyNodeInfo {
MyNodeInfo {
virtual_ipv4: info
.virtual_ipv4
.as_ref()
.and_then(|inet| inet.address.as_ref())
.map(|addr| addr.to_string()),
virtual_ipv4_cidr: stringify_ipv4_inet(info.virtual_ipv4),
hostname: (!info.hostname.is_empty()).then_some(info.hostname),
version: (!info.version.is_empty()).then_some(info.version),
peer_id: Some(info.peer_id as i64),
listeners: info
.listeners
.into_iter()
.map(|url| url.to_string())
.collect(),
vpn_portal_cfg: info.vpn_portal_cfg,
udp_nat_type: info.stun_info.as_ref().map(|stun| stun.udp_nat_type),
tcp_nat_type: info.stun_info.as_ref().map(|stun| stun.tcp_nat_type),
}
}
pub fn runtime_instance_from_running_info(
config_id: String,
display_name: String,
magic_dns_enabled: bool,
need_exit_node: bool,
info: api::manage::NetworkInstanceRunningInfo,
) -> RuntimeInstanceState {
let tun_attached = info.running && is_tun_attached(&config_id);
let tun_required = info.running && (info.dev_name != "no_tun" || tun_attached);
RuntimeInstanceState {
config_id: config_id.clone(),
instance_id: config_id,
display_name,
running: info.running,
tun_required,
tun_attached,
magic_dns_enabled,
need_exit_node,
error_message: info.error_msg,
my_node_info: info.my_node_info.map(my_node_info_to_view),
events: info.events,
routes: info.routes.into_iter().map(route_to_view).collect(),
peers: info.peers.into_iter().map(peer_to_view).collect(),
}
}
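A recurring micro-pattern in the view mappers above is `(!s.is_empty()).then_some(s)`, which converts protobuf's empty-string "absent" convention into an `Option`. Isolated as a sketch:

```rust
// An empty string means "absent" in the proto types; map it to None so the
// serde/napi layer emits a real null instead of "".
fn non_empty(value: String) -> Option<String> {
    (!value.is_empty()).then_some(value)
}
```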
+2 -1
@@ -1,7 +1,7 @@
 [package]
 name = "easytier-uptime"
 version = "0.1.0"
-edition = "2021"
+edition.workspace = true

 [dependencies]
 tokio = { version = "1.0", features = ["full"] }
@@ -12,6 +12,7 @@ serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 chrono = { version = "0.4", features = ["serde"] }
 uuid = { version = "1.0", features = ["v4", "serde"] }
+guarden = "0.1"

 # Axum web framework
 axum = { version = "0.8.4", features = ["macros"] }
+9 -9
@@ -9,7 +9,7 @@
     "version": "0.0.0",
     "dependencies": {
       "@element-plus/icons-vue": "^2.3.1",
-      "axios": "^1.7.9",
+      "axios": "^1.13.5",
       "dayjs": "^1.11.13",
       "element-plus": "^2.8.8",
       "vue": "^3.5.18",
@@ -1220,13 +1220,13 @@
       "license": "MIT"
     },
     "node_modules/axios": {
-      "version": "1.11.0",
-      "resolved": "https://registry.npmjs.org/axios/-/axios-1.11.0.tgz",
-      "integrity": "sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA==",
+      "version": "1.13.6",
+      "resolved": "https://registry.npmjs.org/axios/-/axios-1.13.6.tgz",
+      "integrity": "sha512-ChTCHMouEe2kn713WHbQGcuYrr6fXTBiu460OTwWrWob16g1bXn4vtz07Ope7ewMozJAnEquLk5lWQWtBig9DQ==",
       "license": "MIT",
       "dependencies": {
-        "follow-redirects": "^1.15.6",
+        "follow-redirects": "^1.15.11",
-        "form-data": "^4.0.4",
+        "form-data": "^4.0.5",
         "proxy-from-env": "^1.1.0"
       }
     },
@@ -1616,9 +1616,9 @@
       }
     },
     "node_modules/form-data": {
-      "version": "4.0.4",
-      "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
-      "integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
+      "version": "4.0.5",
+      "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.5.tgz",
+      "integrity": "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w==",
       "license": "MIT",
       "dependencies": {
         "asynckit": "^0.4.0",
@@ -10,7 +10,7 @@
     },
     "dependencies": {
       "@element-plus/icons-vue": "^2.3.1",
-      "axios": "^1.7.9",
+      "axios": "^1.13.5",
       "dayjs": "^1.11.13",
       "easytier-uptime-frontend": "link:",
       "element-plus": "^2.8.8",
@@ -1,7 +1,7 @@
 use std::ops::{Div, Mul};

-use axum::extract::{Path, State};
 use axum::Json;
+use axum::extract::{Path, State};
 use sea_orm::{
     ColumnTrait, Condition, EntityTrait, IntoActiveModel, ModelTrait, Order, PaginatorTrait,
     QueryFilter, QueryOrder, QuerySelect, Set, TryIntoModel,
@@ -14,7 +14,7 @@ use crate::api::{
     models::*,
 };
 use crate::db::entity::{self, health_records, shared_nodes};
-use crate::db::{operations::*, Db};
+use crate::db::{Db, operations::*};
 use crate::health_checker_manager::HealthCheckerManager;
 use axum_extra::extract::Query;
 use std::sync::Arc;
@@ -273,7 +273,7 @@ pub struct InstanceFilterParams {
 use crate::config::AppConfig;
 use axum::http::{HeaderMap, StatusCode};
 use chrono::{Duration, Utc};
-use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
+use jsonwebtoken::{DecodingKey, EncodingKey, Header, Validation, decode, encode};
 use serde::Serialize;

 #[derive(Debug, Serialize, Deserialize)]
@@ -370,19 +370,19 @@ pub async fn admin_get_nodes(
         let ids = NodeOperations::filter_node_ids_by_tag(&app_state.db, &tag).await?;
         filtered_ids = Some(ids);
     }
-    if let Some(tags) = filters.tags {
-        if !tags.is_empty() {
-            let ids_any = NodeOperations::filter_node_ids_by_tags_any(&app_state.db, &tags).await?;
-            filtered_ids = match filtered_ids {
-                Some(mut existing) => {
-                    existing.extend(ids_any);
-                    existing.sort();
-                    existing.dedup();
-                    Some(existing)
-                }
-                None => Some(ids_any),
-            };
-        }
-    }
+    if let Some(tags) = filters.tags
+        && !tags.is_empty()
+    {
+        let ids_any = NodeOperations::filter_node_ids_by_tags_any(&app_state.db, &tags).await?;
+        filtered_ids = match filtered_ids {
+            Some(mut existing) => {
+                existing.extend(ids_any);
+                existing.sort();
+                existing.dedup();
+                Some(existing)
+            }
+            None => Some(ids_any),
+        };
+    }
     if let Some(ids) = filtered_ids {
         if ids.is_empty() {
@@ -1,5 +1,5 @@
-use axum::routing::{delete, get, post, put};
 use axum::Router;
+use axum::routing::{delete, get, post, put};
 use tower_http::compression::CompressionLayer;
 use tower_http::cors::CorsLayer;
@@ -1,7 +1,7 @@
-use crate::db::entity::*;
 use crate::db::Db;
+use crate::db::entity::*;
 use sea_orm::*;
-use tokio::time::{sleep, Duration};
+use tokio::time::{Duration, sleep};
 use tracing::{error, info, warn};

 /// Data cleanup policy configuration
@@ -5,12 +5,12 @@ pub mod operations;
 use std::fmt;

 use sea_orm::{
-    prelude::*, sea_query::OnConflict, ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait,
-    QueryFilter as _, Set, SqlxSqliteConnector, Statement, TransactionTrait as _,
+    ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait, QueryFilter as _, Set,
+    SqlxSqliteConnector, Statement, TransactionTrait as _, prelude::*, sea_query::OnConflict,
 };
 use sea_orm_migration::MigratorTrait as _;
 use serde::{Deserialize, Serialize};
-use sqlx::{migrate::MigrateDatabase as _, Sqlite, SqlitePool};
+use sqlx::{Sqlite, SqlitePool, migrate::MigrateDatabase as _};

 use crate::migrator;
@@ -1,8 +1,8 @@
 use crate::api::CreateNodeRequest;
-use crate::db::entity::*;
 use crate::db::Db;
 use crate::db::HealthStats;
 use crate::db::HealthStatus;
+use crate::db::entity::*;
 use sea_orm::*;
 use std::collections::{HashMap, HashSet};
@@ -7,21 +7,21 @@ use std::{
 use anyhow::Context as _;
 use dashmap::DashMap;
 use easytier::{
-    common::{
-        config::{ConfigFileControl, ConfigLoader, NetworkIdentity, PeerConfig, TomlConfigLoader},
-        scoped_task::ScopedTask,
-    },
-    defer,
+    common::config::{
+        ConfigFileControl, ConfigLoader, NetworkIdentity, PeerConfig, TomlConfigLoader,
+    },
     instance_manager::NetworkInstanceManager,
 };
+use guarden::defer;
 use serde::{Deserialize, Serialize};
 use sqlx::any;
+use tokio_util::task::AbortOnDropHandle;
 use tracing::{debug, error, info, instrument, warn};

 use crate::db::{
+    Db, HealthStatus,
     entity::shared_nodes,
     operations::{HealthOperations, NodeOperations},
-    Db, HealthStatus,
 };

 pub struct HealthCheckOneNode {
@@ -240,7 +240,7 @@ pub struct HealthChecker {
     db: Db,
     instance_mgr: Arc<NetworkInstanceManager>,
     inst_id_map: DashMap<i32, uuid::Uuid>,
-    node_tasks: DashMap<i32, ScopedTask<()>>,
+    node_tasks: DashMap<i32, AbortOnDropHandle<()>>,
     node_records: Arc<DashMap<i32, HealthyMemRecord>>,
     node_cfg: Arc<DashMap<i32, TomlConfigLoader>>,
 }
@@ -359,6 +359,7 @@ impl HealthChecker {
             )
             .parse()
             .with_context(|| "failed to parse peer uri")?,
+            peer_public_key: None,
         }]);

         let inst_id = inst_id.unwrap_or(uuid::Uuid::new_v4());
@@ -464,7 +465,7 @@ impl HealthChecker {
         }

         // Start the health check task
-        let task = ScopedTask::from(tokio::spawn(Self::node_health_check_task(
+        let task = AbortOnDropHandle::new(tokio::spawn(Self::node_health_check_task(
             node_id,
             cfg.get_id(),
             Arc::clone(&self.instance_mgr),
@@ -1,11 +1,11 @@
 use std::{collections::HashSet, sync::Arc, time::Duration};

 use anyhow::Context as _;
-use tokio::time::{interval, Interval};
+use tokio::time::{Interval, interval};
 use tracing::{error, info};

 use crate::{
-    db::{entity::shared_nodes, operations::NodeOperations, Db},
+    db::{Db, entity::shared_nodes, operations::NodeOperations},
     health_checker::HealthChecker,
 };
+6 -4
@@ -10,8 +10,8 @@ mod migrator;
 use api::routes::create_routes;
 use clap::Parser;
 use config::AppConfig;
-use db::{operations::NodeOperations, Db};
-use easytier::utils::init_logger;
+use db::{Db, operations::NodeOperations};
+use easytier::common::log;
 use health_checker::HealthChecker;
 use health_checker_manager::HealthCheckerManager;
 use std::env;
@@ -42,14 +42,16 @@ async fn main() -> anyhow::Result<()> {
     let config = AppConfig::default();

     // Initialize logging
-    let _ = init_logger(&config.logging, false);
+    let _ = log::init(&config.logging, false);

     // Parse command-line arguments
     let args = Args::parse();

     // If an admin password was provided, set it as an environment variable
     if let Some(password) = args.admin_password {
-        env::set_var("ADMIN_PASSWORD", password);
+        unsafe {
+            env::set_var("ADMIN_PASSWORD", password);
+        }
     }

     tracing::info!(
+4 -4
@@ -1,7 +1,7 @@
{ {
"name": "easytier-gui", "name": "easytier-gui",
"type": "module", "type": "module",
"version": "2.5.0", "version": "2.6.4",
"private": true, "private": true,
"packageManager": "pnpm@9.12.1+sha512.e5a7e52a4183a02d5931057f7a0dbff9d5e9ce3161e33fa68ae392125b79282a8a8a470a51dfc8a0ed86221442eb2fb57019b0990ed24fab519bf0e1bc5ccfc4", "packageManager": "pnpm@9.12.1+sha512.e5a7e52a4183a02d5931057f7a0dbff9d5e9ce3161e33fa68ae392125b79282a8a8a470a51dfc8a0ed86221442eb2fb57019b0990ed24fab519bf0e1bc5ccfc4",
"scripts": { "scripts": {
@@ -53,10 +53,10 @@
"unplugin-vue-markdown": "^0.26.2", "unplugin-vue-markdown": "^0.26.2",
"unplugin-vue-router": "^0.10.8", "unplugin-vue-router": "^0.10.8",
"uuid": "^10.0.0", "uuid": "^10.0.0",
"vite": "^5.4.8", "vite": "^5.4.21",
"vite-plugin-vue-devtools": "^8.0.5", "vite-plugin-vue-devtools": "^7.4.6",
"vite-plugin-vue-layouts": "^0.11.0", "vite-plugin-vue-layouts": "^0.11.0",
"vue-i18n": "^10.0.0", "vue-i18n": "^10.0.0",
"vue-tsc": "^2.1.10" "vue-tsc": "^2.1.10"
} }
} }
+12 -11
@@ -1,9 +1,9 @@
[package] [package]
name = "easytier-gui" name = "easytier-gui"
version = "2.5.0" version = "2.6.4"
description = "EasyTier GUI" description = "EasyTier GUI"
authors = ["you"] authors = ["you"]
edition = "2021" edition.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@@ -11,15 +11,6 @@ edition = "2021"
name = "app_lib" name = "app_lib"
crate-type = ["staticlib", "cdylib", "rlib"] crate-type = ["staticlib", "cdylib", "rlib"]
[build-dependencies]
tauri-build = { version = "2.0.0-rc", features = [] }
# enable thunk-rs when compiling for x86_64 or i686 windows
[target.x86_64-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
[target.i686-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
[dependencies] [dependencies]
# wry 0.47 may crash on android, see https://github.com/EasyTier/EasyTier/issues/527 # wry 0.47 may crash on android, see https://github.com/EasyTier/EasyTier/issues/527
@@ -54,6 +45,8 @@ tauri-plugin-os = "2.3.0"
uuid = "1.17.0" uuid = "1.17.0"
async-trait = "0.1.89" async-trait = "0.1.89"
url = { version = "2.5", features = ["serde"] }
[target.'cfg(target_os = "windows")'.dependencies] [target.'cfg(target_os = "windows")'.dependencies]
windows = { version = "0.52", features = ["Win32_Foundation", "Win32_UI_Shell", "Win32_UI_WindowsAndMessaging"] } windows = { version = "0.52", features = ["Win32_Foundation", "Win32_UI_Shell", "Win32_UI_WindowsAndMessaging"] }
winapi = { version = "0.3.9", features = ["securitybaseapi", "processthreadsapi"] } winapi = { version = "0.3.9", features = ["securitybaseapi", "processthreadsapi"] }
@@ -64,6 +57,14 @@ libc = "0.2"
[target.'cfg(target_os = "macos")'.dependencies] [target.'cfg(target_os = "macos")'.dependencies]
security-framework-sys = "2.9.0" security-framework-sys = "2.9.0"
[build-dependencies]
tauri-build = { version = "2.0.0-rc", features = [] }
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = [
"win7",
] }
[features] [features]
# This feature is used for production builds or when a dev server is not specified, DO NOT REMOVE!! # This feature is used for production builds or when a dev server is not specified, DO NOT REMOVE!!
custom-protocol = ["tauri/custom-protocol"] custom-protocol = ["tauri/custom-protocol"]
+12 -12
@@ -1,12 +1,12 @@
fn main() { use std::env;
// enable thunk-rs when target os is windows and arch is x86_64 or i686
#[cfg(target_os = "windows")] fn main() {
if !std::env::var("TARGET") let target_os = env::var("CARGO_CFG_TARGET_OS").unwrap_or_default();
.unwrap_or_default() let target_arch = env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default();
.contains("aarch64") // enable thunk-rs when target os is windows and arch is x86_64 or i686
{ if target_os == "windows" && (target_arch == "x86" || target_arch == "x86_64") {
thunk::thunk(); thunk::thunk();
} }
tauri_build::build(); tauri_build::build();
} }
@@ -36,6 +36,7 @@
"core:tray:allow-set-show-menu-on-left-click", "core:tray:allow-set-show-menu-on-left-click",
"core:tray:allow-set-tooltip", "core:tray:allow-set-tooltip",
"vpnservice:allow-ping", "vpnservice:allow-ping",
"vpnservice:allow-get-vpn-status",
"vpnservice:allow-prepare-vpn", "vpnservice:allow-prepare-vpn",
"vpnservice:allow-start-vpn", "vpnservice:allow-start-vpn",
"vpnservice:allow-stop-vpn", "vpnservice:allow-stop-vpn",
@@ -47,4 +48,4 @@
"os:allow-platform", "os:allow-platform",
"os:allow-locale" "os:allow-locale"
] ]
} }
@@ -1,5 +1,6 @@
import java.util.Properties import java.util.Properties
import java.io.FileInputStream import java.io.FileInputStream
import groovy.json.JsonSlurper
plugins { plugins {
id("com.android.application") id("com.android.application")
@@ -14,6 +15,35 @@ val tauriProperties = Properties().apply {
} }
} }
val versionPattern = Regex("""^(\d+)\.(\d+)\.(\d+)$""")
val tauriVersionName = tauriProperties.getProperty("tauri.android.versionName")?.ifBlank { null } ?: run {
val tauriConfFile = file("../../../tauri.conf.json")
check(tauriConfFile.exists()) { "Missing tauri.conf.json at ${tauriConfFile.path}" }
val tauriConf = tauriConfFile.reader(Charsets.UTF_8).use { JsonSlurper().parse(it) as? Map<*, *> }
?: error("Failed to parse ${tauriConfFile.path} as a JSON object")
tauriConf["version"] as? String
?: error("Missing string field \"version\" in ${tauriConfFile.path}")
}
val tauriVersionMatch = versionPattern.matchEntire(tauriVersionName)
?: error("Android version must use x.y.z format, but got \"$tauriVersionName\"")
val tauriVersionCode = if (tauriProperties.getProperty("tauri.android.versionName")?.ifBlank { null } != null) {
val versionCodeProp = tauriProperties.getProperty("tauri.android.versionCode")
if (versionCodeProp != null) {
versionCodeProp.toIntOrNull()
?: error("Property \"tauri.android.versionCode\" must be an integer, but got \"$versionCodeProp\"")
} else {
val (major, minor, patch) = tauriVersionMatch.destructured
major.toInt() * 1_000_000 + minor.toInt() * 1_000 + patch.toInt()
}
} else {
val (major, minor, patch) = tauriVersionMatch.destructured
major.toInt() * 1_000_000 + minor.toInt() * 1_000 + patch.toInt()
}
android { android {
compileSdk = 34 compileSdk = 34
namespace = "com.kkrainbow.easytier" namespace = "com.kkrainbow.easytier"
@@ -22,8 +52,8 @@ android {
applicationId = "com.kkrainbow.easytier" applicationId = "com.kkrainbow.easytier"
minSdk = 24 minSdk = 24
targetSdk = 34 targetSdk = 34
versionCode = tauriProperties.getProperty("tauri.android.versionCode", "1").toInt() versionCode = tauriVersionCode
versionName = tauriProperties.getProperty("tauri.android.versionName", "1.0") versionName = tauriVersionName
} }
signingConfigs { signingConfigs {
create("release") { create("release") {
@@ -82,4 +112,4 @@ dependencies {
androidTestImplementation("androidx.test.espresso:espresso-core:3.5.0") androidTestImplementation("androidx.test.espresso:espresso-core:3.5.0")
} }
apply(from = "tauri.build.gradle.kts") apply(from = "tauri.build.gradle.kts")
+1 -1
@@ -4,7 +4,7 @@
*--------------------------------------------------------------------------------------------*/ *--------------------------------------------------------------------------------------------*/
use super::Command; use super::Command;
use anyhow::{anyhow, Result}; use anyhow::{Result, anyhow};
use std::env; use std::env;
use std::ffi::OsStr; use std::ffi::OsStr;
use std::process::{Command as StdCommand, Output}; use std::process::{Command as StdCommand, Output};
+57 -9
@@ -16,6 +16,8 @@
use super::Command; use super::Command;
use anyhow::Result; use anyhow::Result;
use std::env; use std::env;
use std::fs::File;
use std::io::Read as _;
use std::path::PathBuf; use std::path::PathBuf;
use std::process::{ExitStatus, Output}; use std::process::{ExitStatus, Output};
@@ -23,13 +25,15 @@ use std::ffi::{CString, OsString};
use std::io; use std::io;
use std::mem; use std::mem;
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use std::os::unix::io::FromRawFd;
use std::os::unix::process::ExitStatusExt;
use std::path::Path; use std::path::Path;
use std::ptr; use std::ptr;
use libc::{fcntl, fileno, waitpid, EINTR, F_GETOWN}; use libc::{EINTR, SHUT_WR, fileno, wait};
use security_framework_sys::authorization::{ use security_framework_sys::authorization::{
errAuthorizationSuccess, kAuthorizationFlagDefaults, kAuthorizationFlagDestroyRights,
AuthorizationCreate, AuthorizationExecuteWithPrivileges, AuthorizationFree, AuthorizationRef, AuthorizationCreate, AuthorizationExecuteWithPrivileges, AuthorizationFree, AuthorizationRef,
errAuthorizationSuccess, kAuthorizationFlagDefaults, kAuthorizationFlagDestroyRights,
}; };
const ENV_PATH: &str = "PATH"; const ENV_PATH: &str = "PATH";
@@ -71,7 +75,7 @@ macro_rules! make_cstring {
}; };
} }
unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> i32 { unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> io::Result<ExitStatus> {
let mut authref: AuthorizationRef = ptr::null_mut(); let mut authref: AuthorizationRef = ptr::null_mut();
let mut pipe: *mut libc::FILE = ptr::null_mut(); let mut pipe: *mut libc::FILE = ptr::null_mut();
@@ -82,7 +86,7 @@ unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> i32 {
&mut authref, &mut authref,
) != errAuthorizationSuccess ) != errAuthorizationSuccess
{ {
return -1; return Err(io::Error::last_os_error());
} }
if AuthorizationExecuteWithPrivileges( if AuthorizationExecuteWithPrivileges(
authref, authref,
@@ -93,22 +97,66 @@ unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> i32 {
) != errAuthorizationSuccess ) != errAuthorizationSuccess
{ {
AuthorizationFree(authref, kAuthorizationFlagDestroyRights); AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return -1; return Err(io::Error::last_os_error());
}
let fd = fileno(pipe);
if fd == -1 {
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(io::Error::last_os_error());
}
// We never send input to the elevated GUI. Close the parent write half so
// the child sees EOF on stdin instead of waiting forever.
if libc::shutdown(fd, SHUT_WR) == -1 {
let err = io::Error::last_os_error();
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
}
// AuthorizationExecuteWithPrivileges wires the tool's stdin/stdout to a
// bidirectional pipe. Drain stdout so the child can't block on a full pipe.
let read_fd = libc::dup(fd);
if read_fd == -1 {
let err = io::Error::last_os_error();
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
}
let mut pipe_file = unsafe { File::from_raw_fd(read_fd) };
let mut sink = [0_u8; 8192];
loop {
match pipe_file.read(&mut sink) {
Ok(0) => break,
Ok(_) => {}
Err(err) if err.kind() == io::ErrorKind::Interrupted => continue,
Err(err) => {
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
}
}
} }
let pid = fcntl(fileno(pipe), F_GETOWN, 0);
let mut status = 0; let mut status = 0;
loop { loop {
let r = waitpid(pid, &mut status, 0); let r = wait(&mut status);
if r == -1 && io::Error::last_os_error().raw_os_error() == Some(EINTR) { if r == -1 && io::Error::last_os_error().raw_os_error() == Some(EINTR) {
continue; continue;
} else if r == -1 {
let err = io::Error::last_os_error();
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
} else { } else {
break; break;
} }
} }
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights); AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
status Ok(ExitStatus::from_raw(status))
} }
fn runas_root_gui(cmd: &Command) -> io::Result<ExitStatus> { fn runas_root_gui(cmd: &Command) -> io::Result<ExitStatus> {
@@ -126,7 +174,7 @@ fn runas_root_gui(cmd: &Command) -> io::Result<ExitStatus> {
let mut argv: Vec<_> = args.iter().map(|x| x.as_ptr()).collect(); let mut argv: Vec<_> = args.iter().map(|x| x.as_ptr()).collect();
argv.push(ptr::null()); argv.push(ptr::null());
unsafe { Ok(mem::transmute(gui_runas(prog.as_ptr(), argv.as_ptr()))) } unsafe { gui_runas(prog.as_ptr(), argv.as_ptr()) }
} }
/// The implementation of state check and elevated executing varies on each platform /// The implementation of state check and elevated executing varies on each platform
@@ -11,11 +11,11 @@ use std::process::{ExitStatus, Output};
use winapi::shared::minwindef::{DWORD, LPVOID}; use winapi::shared::minwindef::{DWORD, LPVOID};
use winapi::um::processthreadsapi::{GetCurrentProcess, OpenProcessToken}; use winapi::um::processthreadsapi::{GetCurrentProcess, OpenProcessToken};
use winapi::um::securitybaseapi::GetTokenInformation; use winapi::um::securitybaseapi::GetTokenInformation;
use winapi::um::winnt::{TokenElevation, HANDLE, TOKEN_ELEVATION, TOKEN_QUERY}; use winapi::um::winnt::{HANDLE, TOKEN_ELEVATION, TOKEN_QUERY, TokenElevation};
use windows::core::{w, HSTRING, PCWSTR};
use windows::Win32::Foundation::HWND; use windows::Win32::Foundation::HWND;
use windows::Win32::UI::Shell::ShellExecuteW; use windows::Win32::UI::Shell::ShellExecuteW;
use windows::Win32::UI::WindowsAndMessaging::SW_HIDE; use windows::Win32::UI::WindowsAndMessaging::SW_HIDE;
use windows::core::{HSTRING, PCWSTR, w};
/// The implementation of state check and elevated executing varies on each platform /// The implementation of state check and elevated executing varies on each platform
impl Command { impl Command {
+394 -111
@@ -14,16 +14,23 @@ use easytier::rpc_service::remote_client::{
}; };
use easytier::web_client::{self, WebClient}; use easytier::web_client::{self, WebClient};
use easytier::{ use easytier::{
common::config::{ConfigLoader, FileLoggerConfig, LoggingConfigBuilder, TomlConfigLoader}, common::{
config::{
ConfigLoader, ConfigSource, FileLoggerConfig, LoggingConfigBuilder, TomlConfigLoader,
},
log,
},
instance_manager::NetworkInstanceManager, instance_manager::NetworkInstanceManager,
launcher::NetworkConfig, launcher::NetworkConfig,
rpc_service::ApiRpcServer, rpc_service::ApiRpcServer,
tunnel::TunnelListener,
tunnel::ring::RingTunnelListener, tunnel::ring::RingTunnelListener,
utils::{self}, tunnel::tcp::TcpTunnelListener,
utils::panic::setup_panic_handler,
}; };
use std::ops::Deref; use std::ops::Deref;
use std::sync::Arc; use std::sync::Arc;
use tokio::sync::{RwLock, RwLockReadGuard}; use tokio::sync::{Mutex, RwLock, RwLockReadGuard};
use uuid::Uuid; use uuid::Uuid;
use tauri::{AppHandle, Emitter, Manager as _}; use tauri::{AppHandle, Emitter, Manager as _};
@@ -40,8 +47,21 @@ static RPC_RING_UUID: once_cell::sync::Lazy<uuid::Uuid> =
static CLIENT_MANAGER: once_cell::sync::Lazy<RwLock<Option<manager::GUIClientManager>>> = static CLIENT_MANAGER: once_cell::sync::Lazy<RwLock<Option<manager::GUIClientManager>>> =
once_cell::sync::Lazy::new(|| RwLock::new(None)); once_cell::sync::Lazy::new(|| RwLock::new(None));
static RING_RPC_SERVER: once_cell::sync::Lazy<RwLock<Option<ApiRpcServer<RingTunnelListener>>>> = type BoxedTunnelListener = Box<dyn TunnelListener>;
once_cell::sync::Lazy::new(|| RwLock::new(None));
#[derive(Clone, Copy, PartialEq, Eq)]
enum RpcServerKind {
Ring,
Tcp,
}
struct RpcServer {
kind: RpcServerKind,
_server: ApiRpcServer<BoxedTunnelListener>,
bind_url: Option<url::Url>,
}
static RPC_SERVER: once_cell::sync::Lazy<Mutex<Option<RpcServer>>> =
once_cell::sync::Lazy::new(|| Mutex::new(None));
static WEB_CLIENT: once_cell::sync::Lazy<RwLock<Option<WebClient>>> = static WEB_CLIENT: once_cell::sync::Lazy<RwLock<Option<WebClient>>> =
once_cell::sync::Lazy::new(|| RwLock::new(None)); once_cell::sync::Lazy::new(|| RwLock::new(None));
@@ -100,7 +120,7 @@ async fn run_network_instance(
let client_manager = get_client_manager!()?; let client_manager = get_client_manager!()?;
let toml_config = cfg.gen_config().map_err(|e| e.to_string())?; let toml_config = cfg.gen_config().map_err(|e| e.to_string())?;
client_manager client_manager
.pre_run_network_instance_hook(&app, &toml_config) .pre_run_network_instance_hook(&app, &toml_config, manager::PersistedConfigSource::User)
.await?; .await?;
client_manager client_manager
.handle_run_network_instance(app.clone(), cfg, save) .handle_run_network_instance(app.clone(), cfg, save)
@@ -128,7 +148,6 @@ async fn collect_network_info(
#[tauri::command] #[tauri::command]
async fn set_logging_level(level: String) -> Result<(), String> { async fn set_logging_level(level: String) -> Result<(), String> {
println!("Setting logging level to: {}", level);
get_client_manager!()? get_client_manager!()?
.set_logging_level(level.clone()) .set_logging_level(level.clone())
.await .await
@@ -173,7 +192,7 @@ async fn remove_network_instance(app: AppHandle, instance_id: String) -> Result<
.await .await
.map_err(|e| e.to_string())?; .map_err(|e| e.to_string())?;
client_manager client_manager
.post_remove_network_instances_hook(&app, &[instance_id]) .post_stop_network_instances_hook(&app)
.await?; .await?;
Ok(()) Ok(())
@@ -189,6 +208,20 @@ async fn update_network_config_state(
.parse() .parse()
.map_err(|e: uuid::Error| e.to_string())?; .map_err(|e: uuid::Error| e.to_string())?;
let client_manager = get_client_manager!()?; let client_manager = get_client_manager!()?;
if !disabled {
let (cfg, source) = client_manager
.handle_get_network_config_with_source(app.clone(), instance_id)
.await
.map_err(|e| e.to_string())?;
let toml_config = cfg.gen_config().map_err(|e| e.to_string())?;
client_manager
.pre_run_network_instance_hook(
&app,
&toml_config,
manager::PersistedConfigSource::from_runtime_source(source),
)
.await?;
}
client_manager client_manager
.handle_update_network_state(app.clone(), instance_id, disabled) .handle_update_network_state(app.clone(), instance_id, disabled)
.await .await
@@ -196,7 +229,11 @@ async fn update_network_config_state(
if disabled { if disabled {
client_manager client_manager
.post_remove_network_instances_hook(&app, &[instance_id]) .post_stop_network_instances_hook(&app)
.await?;
} else {
client_manager
.post_run_network_instance_hook(&app, &instance_id)
.await?; .await?;
} }
@@ -241,7 +278,7 @@ async fn get_config(app: AppHandle, instance_id: String) -> Result<NetworkConfig
#[tauri::command] #[tauri::command]
async fn load_configs( async fn load_configs(
app: AppHandle, app: AppHandle,
configs: Vec<NetworkConfig>, configs: Vec<manager::StoredGuiConfig>,
enabled_networks: Vec<String>, enabled_networks: Vec<String>,
) -> Result<(), String> { ) -> Result<(), String> {
get_client_manager!()? get_client_manager!()?
@@ -322,8 +359,25 @@ fn get_service_status() -> Result<&'static str, String> {
} }
} }
fn normalize_normal_mode_rpc_portal(portal: &str) -> Result<(url::Url, url::Url), String> {
let portal_url: url::Url = portal
.parse()
.map_err(|e| format!("invalid rpc portal: {:#}", e))?;
let bind_url = portal_url.clone();
let mut connect_url = portal_url.clone();
// if the bind address is 0.0.0.0, rewrite the connect URL to 127.0.0.1
if connect_url.host_str() == Some("0.0.0.0") {
connect_url.set_host(Some("127.0.0.1")).unwrap();
}
Ok((bind_url, connect_url))
}
#[tauri::command] #[tauri::command]
async fn init_rpc_connection(_app: AppHandle, url: Option<String>) -> Result<(), String> { async fn init_rpc_connection(
_app: AppHandle,
is_normal_mode: bool,
url: Option<String>,
) -> Result<(), String> {
let mut client_manager_guard = let mut client_manager_guard =
tokio::time::timeout(std::time::Duration::from_secs(5), CLIENT_MANAGER.write()) tokio::time::timeout(std::time::Duration::from_secs(5), CLIENT_MANAGER.write())
.await .await
@@ -331,41 +385,72 @@ async fn init_rpc_connection(_app: AppHandle, url: Option<String>) -> Result<(),
let mut instance_manager_guard = INSTANCE_MANAGER let mut instance_manager_guard = INSTANCE_MANAGER
.try_write() .try_write()
.map_err(|_| "Failed to acquire write lock for instance manager")?; .map_err(|_| "Failed to acquire write lock for instance manager")?;
let mut ring_rpc_server_guard = RING_RPC_SERVER let mut rpc_server_guard = RPC_SERVER
.try_write() .try_lock()
.map_err(|_| "Failed to acquire write lock for ring rpc server")?; .map_err(|_| "Failed to acquire lock for rpc server")?;
let normal_mode = url.is_none(); let mut client_url = url.clone();
if normal_mode { if is_normal_mode {
let instance_manager = if let Some(im) = instance_manager_guard.take() { let instance_manager = if let Some(im) = instance_manager_guard.take() {
im im
} else { } else {
Arc::new(NetworkInstanceManager::new()) Arc::new(NetworkInstanceManager::new())
}; };
let rpc_server = if let Some(rpc_server) = ring_rpc_server_guard.take() {
rpc_server let portal = url.and_then(|s| {
let trimmed = s.trim().to_string();
if trimmed.is_empty() {
None
} else {
Some(trimmed)
}
});
let (desired_kind, bind_url, connect_url) = if let Some(portal) = portal {
let (bind_url, connect_url) = normalize_normal_mode_rpc_portal(&portal)?;
(RpcServerKind::Tcp, Some(bind_url), Some(connect_url))
} else { } else {
ApiRpcServer::from_tunnel( (RpcServerKind::Ring, None, None)
RingTunnelListener::new(
format!("ring://{}", RPC_RING_UUID.deref()).parse().unwrap(),
),
instance_manager.clone(),
)
.with_rx_timeout(None)
.serve()
.await
.map_err(|e| e.to_string())?
}; };
let need_restart = rpc_server_guard
.as_ref()
.map(|x| x.kind != desired_kind || x.bind_url != bind_url)
.unwrap_or(true);
if need_restart {
*rpc_server_guard = None;
let tunnel: BoxedTunnelListener = match desired_kind {
RpcServerKind::Ring => Box::new(RingTunnelListener::new(
format!("ring://{}", RPC_RING_UUID.deref()).parse().unwrap(),
)),
RpcServerKind::Tcp => Box::new(TcpTunnelListener::new(
bind_url.clone().expect("tcp rpc must have bind url"),
)),
};
let rpc_server = ApiRpcServer::from_tunnel(tunnel, instance_manager.clone())
.with_rx_timeout(None)
.serve()
.await
.map_err(|e| e.to_string())?;
*rpc_server_guard = Some(RpcServer {
kind: desired_kind,
_server: rpc_server,
bind_url,
});
}
*instance_manager_guard = Some(instance_manager); *instance_manager_guard = Some(instance_manager);
*ring_rpc_server_guard = Some(rpc_server); client_url = connect_url.map(|u| u.to_string());
} else { } else {
*ring_rpc_server_guard = None; *rpc_server_guard = None;
} }
let client_manager = tokio::time::timeout( let client_manager = tokio::time::timeout(
std::time::Duration::from_millis(1000), std::time::Duration::from_millis(1000),
manager::GUIClientManager::new(url), manager::GUIClientManager::new(client_url),
) )
.await .await
.map_err(|_| "connect remote rpc timed out".to_string())? .map_err(|_| "connect remote rpc timed out".to_string())?
@@ -373,7 +458,7 @@ async fn init_rpc_connection(_app: AppHandle, url: Option<String>) -> Result<(),
.map_err(|e| format!("{:#}", e))?; .map_err(|e| format!("{:#}", e))?;
*client_manager_guard = Some(client_manager); *client_manager_guard = Some(client_manager);
if !normal_mode { if !is_normal_mode {
drop(WEB_CLIENT.write().await.take()); drop(WEB_CLIENT.write().await.take());
if let Some(instance_manager) = instance_manager_guard.take() { if let Some(instance_manager) = instance_manager_guard.take() {
instance_manager instance_manager
@@ -405,12 +490,26 @@ async fn init_web_client(app: AppHandle, url: Option<String>) -> Result<(), Stri
.ok_or_else(|| "Instance manager is not available".to_string())?; .ok_or_else(|| "Instance manager is not available".to_string())?;
let hooks = Arc::new(manager::GuiHooks { app: app.clone() }); let hooks = Arc::new(manager::GuiHooks { app: app.clone() });
let machine_id_state_dir = app
.path()
.app_data_dir()
.with_context(|| "Failed to resolve machine id state directory")
.map_err(|e| format!("{:#}", e))?;
let web_client = let web_client = web_client::run_web_client(
web_client::run_web_client(url.as_str(), None, None, instance_manager, Some(hooks)) url.as_str(),
.await easytier::common::MachineIdOptions {
.with_context(|| "Failed to initialize web client") explicit_machine_id: None,
.map_err(|e| format!("{:#}", e))?; state_dir: Some(machine_id_state_dir),
},
None,
false,
instance_manager,
Some(hooks),
)
.await
.with_context(|| "Failed to initialize web client")
.map_err(|e| format!("{:#}", e))?;
*web_client_guard = Some(web_client); *web_client_guard = Some(web_client);
Ok(()) Ok(())
} }
@@ -450,31 +549,34 @@ async fn get_log_dir_path(app: tauri::AppHandle) -> Result<String, String> {
#[cfg(not(target_os = "android"))] #[cfg(not(target_os = "android"))]
fn toggle_window_visibility(app: &tauri::AppHandle) { fn toggle_window_visibility(app: &tauri::AppHandle) {
if let Some(window) = app.get_webview_window("main") { if let Some(window) = app.get_webview_window("main") {
let visible = if window.is_visible().unwrap_or_default() { let visible = window.is_visible().unwrap_or_default();
if window.is_minimized().unwrap_or_default() { let minimized = window.is_minimized().unwrap_or_default();
let _ = window.unminimize(); let focused = window.is_focused().unwrap_or_default();
false
} else { let should_show = !visible || minimized || !focused;
true if should_show {
if !visible {
let _ = window.show();
} }
if minimized {
let _ = window.unminimize();
}
if !focused {
let _ = window.set_focus();
}
let _ = set_dock_visibility(app.clone(), true);
} else { } else {
let _ = window.show();
false
};
if visible {
let _ = window.hide(); let _ = window.hide();
} else { let _ = set_dock_visibility(app.clone(), false);
let _ = window.set_focus();
} }
let _ = set_dock_visibility(app.clone(), !visible);
} }
} }
fn get_exe_path() -> String { fn get_exe_path() -> String {
if let Ok(appimage_path) = std::env::var("APPIMAGE") { if let Ok(appimage_path) = std::env::var("APPIMAGE")
if !appimage_path.is_empty() { && !appimage_path.is_empty()
return appimage_path; {
} return appimage_path;
} }
std::env::current_exe() std::env::current_exe()
.map(|p| p.to_string_lossy().to_string()) .map(|p| p.to_string_lossy().to_string())
@@ -508,8 +610,8 @@ mod manager {
use easytier::proto::rpc_types::controller::BaseController; use easytier::proto::rpc_types::controller::BaseController;
use easytier::rpc_service::logger::LoggerRpcService; use easytier::rpc_service::logger::LoggerRpcService;
use easytier::rpc_service::remote_client::PersistentConfig; use easytier::rpc_service::remote_client::PersistentConfig;
use easytier::tunnel::ring::RingTunnelConnector;
use easytier::tunnel::TunnelConnector; use easytier::tunnel::TunnelConnector;
use easytier::tunnel::ring::RingTunnelConnector;
use easytier::web_client::WebClientHooks; use easytier::web_client::WebClientHooks;
pub(super) struct GuiHooks { pub(super) struct GuiHooks {
@@ -524,7 +626,11 @@ mod manager {
) -> Result<(), String> { ) -> Result<(), String> {
let client_manager = get_client_manager!()?; let client_manager = get_client_manager!()?;
client_manager client_manager
.pre_run_network_instance_hook(&self.app, cfg) .pre_run_network_instance_hook(
&self.app,
cfg,
PersistedConfigSource::from_runtime_source(cfg.get_network_config_source()),
)
.await .await
} }
@@ -538,19 +644,92 @@ mod manager {
async fn post_remove_network_instances(&self, ids: &[uuid::Uuid]) -> Result<(), String> { async fn post_remove_network_instances(&self, ids: &[uuid::Uuid]) -> Result<(), String> {
let client_manager = get_client_manager!()?; let client_manager = get_client_manager!()?;
client_manager client_manager
.post_remove_network_instances_hook(&self.app, ids) .post_remote_remove_network_instances_hook(&self.app, ids)
.await .await
} }
} }
#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "snake_case")]
#[derive(Default)]
pub(super) enum PersistedConfigSource {
User,
Webhook,
#[serde(other)]
#[default]
Legacy,
}
impl PersistedConfigSource {
pub(super) fn from_runtime_source(source: ConfigSource) -> Self {
match source {
ConfigSource::User => Self::User,
ConfigSource::Webhook => Self::Webhook,
}
}
fn merge_persisted(self, incoming: Self) -> Self {
match (self, incoming) {
// Older runtimes report missing source as `user`. Keep the stronger persisted
// ownership until webhook sync or an explicit user save repairs it.
(Self::Webhook, Self::User) | (Self::Legacy, Self::User) => self,
(_, next) => next,
}
}
fn to_runtime_source(self) -> ConfigSource {
match self {
Self::User | Self::Legacy => ConfigSource::User,
Self::Webhook => ConfigSource::Webhook,
}
}
#[cfg(any(test, target_os = "android"))]
fn is_webhook_like(self) -> bool {
matches!(self, Self::Webhook)
}
}
#[derive(Clone)] #[derive(Clone)]
pub(super) struct GUIConfig(String, pub(crate) NetworkConfig); pub(super) struct GUIConfig {
inst_id: String,
pub(crate) config: NetworkConfig,
source: PersistedConfigSource,
}
#[derive(Clone, serde::Serialize, serde::Deserialize)]
pub(super) struct StoredGuiConfig {
config: NetworkConfig,
#[serde(default)]
source: PersistedConfigSource,
}
impl GUIConfig {
fn new(inst_id: String, config: NetworkConfig, source: PersistedConfigSource) -> Self {
Self {
inst_id,
config,
source,
}
}
fn into_stored(self) -> StoredGuiConfig {
StoredGuiConfig {
config: self.config,
source: self.source,
}
}
}
impl PersistentConfig<anyhow::Error> for GUIConfig { impl PersistentConfig<anyhow::Error> for GUIConfig {
fn get_network_inst_id(&self) -> &str { fn get_network_inst_id(&self) -> &str {
&self.0 &self.inst_id
} }
fn get_network_config(&self) -> Result<NetworkConfig, anyhow::Error> { fn get_network_config(&self) -> Result<NetworkConfig, anyhow::Error> {
Ok(self.1.clone()) Ok(self.config.clone())
}
fn get_network_config_source(&self) -> ConfigSource {
self.source.to_runtime_source()
} }
} }
@@ -567,13 +746,12 @@ mod manager {
} }
fn save_configs(&self, app: &AppHandle) -> anyhow::Result<()> { fn save_configs(&self, app: &AppHandle) -> anyhow::Result<()> {
let configs: Result<Vec<String>, _> = self let configs = self
.network_configs .network_configs
.iter() .iter()
.map(|entry| serde_json::to_string(&entry.value().1)) .map(|entry| entry.value().clone().into_stored())
.collect(); .collect::<Vec<_>>();
let payload = format!("[{}]", configs?.join(",")); app.emit("save_configs", configs)?;
app.emit_str("save_configs", payload)?;
Ok(()) Ok(())
} }
@@ -592,8 +770,14 @@ mod manager {
app: &AppHandle, app: &AppHandle,
inst_id: Uuid, inst_id: Uuid,
cfg: NetworkConfig, cfg: NetworkConfig,
source: PersistedConfigSource,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let config = GUIConfig(inst_id.to_string(), cfg); let source = self
.network_configs
.get(&inst_id)
.map(|existing| existing.source.merge_persisted(source))
.unwrap_or(source);
let config = GUIConfig::new(inst_id.to_string(), cfg, source);
self.network_configs.insert(inst_id, config); self.network_configs.insert(inst_id, config);
self.save_configs(app) self.save_configs(app)
} }
@@ -605,8 +789,14 @@ mod manager {
app: AppHandle, app: AppHandle,
network_inst_id: Uuid, network_inst_id: Uuid,
network_config: NetworkConfig, network_config: NetworkConfig,
source: ConfigSource,
) -> Result<(), anyhow::Error> { ) -> Result<(), anyhow::Error> {
self.save_config(&app, network_inst_id, network_config)?; self.save_config(
&app,
network_inst_id,
network_config,
PersistedConfigSource::from_runtime_source(source),
)?;
self.enabled_networks.insert(network_inst_id); self.enabled_networks.insert(network_inst_id);
self.save_enabled_networks(&app)?; self.save_enabled_networks(&app)?;
Ok(()) Ok(())
@@ -621,7 +811,9 @@ mod manager {
self.network_configs.remove(network_inst_id); self.network_configs.remove(network_inst_id);
self.enabled_networks.remove(network_inst_id); self.enabled_networks.remove(network_inst_id);
} }
self.save_configs(&app) self.save_configs(&app)?;
self.save_enabled_networks(&app)?;
Ok(())
} }
async fn update_network_config_state( async fn update_network_config_state(
@@ -721,17 +913,36 @@ mod manager {
.network_configs .network_configs
.iter() .iter()
.filter(|v| self.storage.enabled_networks.contains(v.key())) .filter(|v| self.storage.enabled_networks.contains(v.key()))
.filter(|v| !v.1.no_tun()) .filter(|v| !v.config.no_tun())
.filter_map(|c| c.1.instance_id().parse::<uuid::Uuid>().ok()) .filter_map(|c| c.config.instance_id().parse::<uuid::Uuid>().ok())
}
#[cfg(target_os = "android")]
pub fn get_enabled_instances_with_webhook_like_tun_ids(
&self,
) -> impl Iterator<Item = uuid::Uuid> + '_ {
self.storage
.network_configs
.iter()
.filter(|v| self.storage.enabled_networks.contains(v.key()))
.filter(|v| !v.config.no_tun())
.filter(|v| v.source.is_webhook_like())
.filter_map(|c| c.config.instance_id().parse::<uuid::Uuid>().ok())
    }
    #[cfg(target_os = "android")]
    pub(super) async fn disable_instances_with_tun(
        &self,
        app: &AppHandle,
+       webhook_only: bool,
    ) -> Result<(), easytier::rpc_service::remote_client::RemoteClientError<anyhow::Error>>
    {
-       let inst_ids: Vec<uuid::Uuid> = self.get_enabled_instances_with_tun_ids().collect();
+       let inst_ids: Vec<uuid::Uuid> = if webhook_only {
+           self.get_enabled_instances_with_webhook_like_tun_ids()
+               .collect()
+       } else {
+           self.get_enabled_instances_with_tun_ids().collect()
+       };
        for inst_id in inst_ids {
            self.handle_update_network_state(app.clone(), inst_id, true)
                .await?;
@@ -752,16 +963,32 @@ mod manager {
        &self,
        app: &AppHandle,
        cfg: &easytier::common::config::TomlConfigLoader,
+       source: PersistedConfigSource,
    ) -> Result<(), String> {
        let instance_id = cfg.get_id();
-       app.emit("pre_run_network_instance", instance_id)
+       app.emit("pre_run_network_instance", instance_id.to_string())
            .map_err(|e| e.to_string())?;

        #[cfg(target_os = "android")]
        if !cfg.get_flags().no_tun {
-           self.disable_instances_with_tun(app)
-               .await
-               .map_err(|e| e.to_string())?;
+           match source {
+               PersistedConfigSource::User | PersistedConfigSource::Legacy => {
+                   self.disable_instances_with_tun(app, false)
+                       .await
+                       .map_err(|e| e.to_string())?;
+               }
+               PersistedConfigSource::Webhook => {
+                   self.disable_instances_with_tun(app, true)
+                       .await
+                       .map_err(|e| e.to_string())?;
+                   if self.get_enabled_instances_with_tun_ids().next().is_some() {
+                       return Err(
+                           "Android only supports one active TUN network; user-managed VPN remains active"
+                               .to_string(),
+                       );
+                   }
+               }
+           }
        }

        self.storage
@@ -769,6 +996,7 @@ mod manager {
                app,
                instance_id,
                NetworkConfig::new_from_config(cfg).map_err(|e| e.to_string())?,
+               source,
            )
            .map_err(|e| e.to_string())?;
@@ -791,20 +1019,21 @@ mod manager {
        let app_clone = app.clone();
        let instance_id_clone = *instance_id;
        tokio::spawn(async move {
+           let instance_id_str = instance_id_clone.to_string();
            loop {
                match event_receiver.recv().await {
                    Ok(easytier::common::global_ctx::GlobalCtxEvent::DhcpIpv4Changed(_, _)) => {
-                       let _ = app_clone.emit("dhcp_ip_changed", instance_id_clone);
+                       let _ = app_clone.emit("dhcp_ip_changed", &instance_id_str);
                    }
                    Ok(easytier::common::global_ctx::GlobalCtxEvent::ProxyCidrsUpdated(_, _)) => {
-                       let _ = app_clone.emit("proxy_cidrs_updated", instance_id_clone);
+                       let _ = app_clone.emit("proxy_cidrs_updated", &instance_id_str);
                    }
                    Ok(_) => {}
                    Err(tokio::sync::broadcast::error::RecvError::Closed) => {
                        break;
                    }
                    Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
-                       let _ = app_clone.emit("event_lagged", instance_id_clone);
+                       let _ = app_clone.emit("event_lagged", &instance_id_str);
                        event_receiver = event_receiver.resubscribe();
                    }
                }
@@ -816,20 +1045,29 @@ mod manager {
        self.storage.enabled_networks.insert(*instance_id);
-       app.emit("post_run_network_instance", instance_id)
+       app.emit("post_run_network_instance", instance_id.to_string())
            .map_err(|e| e.to_string())?;
        Ok(())
    }

-   pub(super) async fn post_remove_network_instances_hook(
+   pub(super) async fn post_remote_remove_network_instances_hook(
        &self,
        app: &AppHandle,
-       _ids: &[uuid::Uuid],
+       ids: &[uuid::Uuid],
    ) -> Result<(), String> {
        self.storage
-           .enabled_networks
-           .retain(|id| !_ids.contains(id));
+           .delete_network_configs(app.clone(), ids)
+           .await
+           .map_err(|e| e.to_string())?;
+       self.notify_vpn_stop_if_no_tun(app)?;
+       Ok(())
+   }
+
+   pub(super) async fn post_stop_network_instances_hook(
+       &self,
+       app: &AppHandle,
+   ) -> Result<(), String> {
        self.notify_vpn_stop_if_no_tun(app)?;
        Ok(())
    }
@@ -862,15 +1100,15 @@ mod manager {
    pub(super) async fn load_configs(
        &self,
        app: AppHandle,
-       configs: Vec<NetworkConfig>,
+       configs: Vec<StoredGuiConfig>,
        enabled_networks: Vec<String>,
    ) -> anyhow::Result<()> {
        self.storage.network_configs.clear();
-       for cfg in configs {
-           let instance_id = cfg.instance_id();
+       for stored in configs {
+           let instance_id = stored.config.instance_id();
            self.storage.network_configs.insert(
                instance_id.parse()?,
-               GUIConfig(instance_id.to_string(), cfg),
+               GUIConfig::new(instance_id.to_string(), stored.config, stored.source),
            );
        }
@@ -879,28 +1117,35 @@ mod manager {
            .get_rpc_client(app.clone())
            .ok_or_else(|| anyhow::anyhow!("RPC client not found"))?;
        for id in enabled_networks {
-           if let Ok(uuid) = id.parse() {
-               if !self.storage.enabled_networks.contains(&uuid) {
-                   let config = self
-                       .storage
-                       .network_configs
-                       .get(&uuid)
-                       .map(|i| i.value().1.clone());
-                   if config.is_none() {
-                       continue;
-                   }
-                   client
-                       .run_network_instance(
-                           BaseController::default(),
-                           RunNetworkInstanceRequest {
-                               inst_id: None,
-                               config,
-                               overwrite: false,
-                           },
-                       )
-                       .await?;
-                   self.storage.enabled_networks.insert(uuid);
-               }
+           if let Ok(uuid) = id.parse()
+               && !self.storage.enabled_networks.contains(&uuid)
+           {
+               let config = self
+                   .storage
+                   .network_configs
+                   .get(&uuid)
+                   .map(|i| (i.value().config.clone(), i.value().source));
+               let Some((config, source)) = config else {
+                   continue;
+               };
+               let toml_config = config.gen_config()?;
+               self.pre_run_network_instance_hook(&app, &toml_config, source)
+                   .await
+                   .map_err(|e| anyhow::anyhow!(e))?;
+               client
+                   .run_network_instance(
+                       BaseController::default(),
+                       RunNetworkInstanceRequest {
+                           inst_id: None,
+                           config: Some(config),
+                           overwrite: false,
+                           source: source.to_runtime_source().to_rpc(),
+                       },
+                   )
+                   .await?;
+               self.post_run_network_instance_hook(&app, &uuid)
+                   .await
+                   .map_err(|e| anyhow::anyhow!(e))?;
            }
        }
        Ok(())
@@ -926,6 +1171,44 @@ mod manager {
            &self.storage
        }
    }
#[cfg(test)]
mod tests {
use super::{PersistedConfigSource, StoredGuiConfig};
use easytier::proto::api::manage::NetworkConfig;
#[test]
fn stored_gui_config_defaults_missing_source_to_legacy() {
let stored: StoredGuiConfig = serde_json::from_value(serde_json::json!({
"config": NetworkConfig::default(),
}))
.unwrap();
assert_eq!(stored.source, PersistedConfigSource::Legacy);
}
#[test]
fn persisted_source_merge_keeps_legacy_and_webhook_over_ambiguous_user() {
assert_eq!(
PersistedConfigSource::Legacy.merge_persisted(PersistedConfigSource::User),
PersistedConfigSource::Legacy
);
assert_eq!(
PersistedConfigSource::Webhook.merge_persisted(PersistedConfigSource::User),
PersistedConfigSource::Webhook
);
assert_eq!(
PersistedConfigSource::Legacy.merge_persisted(PersistedConfigSource::Webhook),
PersistedConfigSource::Webhook
);
}
#[test]
fn only_webhook_configs_are_webhook_like() {
assert!(!PersistedConfigSource::Legacy.is_webhook_like());
assert!(!PersistedConfigSource::User.is_webhook_like());
assert!(PersistedConfigSource::Webhook.is_webhook_like());
}
}
}

#[cfg(not(target_os = "android"))]
@@ -1014,7 +1297,7 @@ pub fn run_gui() -> std::process::ExitCode {
        process::exit(0);
    }

-   utils::setup_panic_handler();
+   setup_panic_handler();

    let mut builder = tauri::Builder::default();
@@ -1053,7 +1336,7 @@ pub fn run_gui() -> std::process::ExitCode {
        })
        .build()
        .map_err(|e| e.to_string())?;

-   let Ok(_) = utils::init_logger(&config, true) else {
+   let Ok(_) = log::init(&config, true) else {
        return Ok(());
    };
+2 -2
@@ -17,7 +17,7 @@
    "createUpdaterArtifacts": false
  },
  "productName": "easytier-gui",
- "version": "2.5.0",
+ "version": "2.6.4",
  "identifier": "com.kkrainbow.easytier",
  "plugins": {
    "shell": {
@@ -36,4 +36,4 @@
    "csp": null
  }
}
}
+6
@@ -43,6 +43,7 @@ declare global {
  const isWebClientConnected: typeof import('./composables/backend')['isWebClientConnected']
  const listNetworkInstanceIds: typeof import('./composables/backend')['listNetworkInstanceIds']
  const listenGlobalEvents: typeof import('./composables/event')['listenGlobalEvents']
+ const loadLastNetworkInstanceId: typeof import('./composables/config')['loadLastNetworkInstanceId']
  const loadMode: typeof import('./composables/mode')['loadMode']
  const mapActions: typeof import('pinia')['mapActions']
  const mapGetters: typeof import('pinia')['mapGetters']
@@ -76,6 +77,7 @@ declare global {
  const ref: typeof import('vue')['ref']
  const resolveComponent: typeof import('vue')['resolveComponent']
  const runNetworkInstance: typeof import('./composables/backend')['runNetworkInstance']
+ const saveLastNetworkInstanceId: typeof import('./composables/config')['saveLastNetworkInstanceId']
  const saveMode: typeof import('./composables/mode')['saveMode']
  const saveNetworkConfig: typeof import('./composables/backend')['saveNetworkConfig']
  const sendConfigs: typeof import('./composables/backend')['sendConfigs']
@@ -91,6 +93,7 @@ declare global {
  const shallowReadonly: typeof import('vue')['shallowReadonly']
  const shallowRef: typeof import('vue')['shallowRef']
  const storeToRefs: typeof import('pinia')['storeToRefs']
+ const syncMobileVpnService: typeof import('./composables/mobile_vpn')['syncMobileVpnService']
  const toRaw: typeof import('vue')['toRaw']
  const toRef: typeof import('vue')['toRef']
  const toRefs: typeof import('vue')['toRefs']
@@ -165,6 +168,7 @@ declare module 'vue' {
    readonly isWebClientConnected: UnwrapRef<typeof import('./composables/backend')['isWebClientConnected']>
    readonly listNetworkInstanceIds: UnwrapRef<typeof import('./composables/backend')['listNetworkInstanceIds']>
    readonly listenGlobalEvents: UnwrapRef<typeof import('./composables/event')['listenGlobalEvents']>
+   readonly loadLastNetworkInstanceId: UnwrapRef<typeof import('./composables/config')['loadLastNetworkInstanceId']>
    readonly loadMode: UnwrapRef<typeof import('./composables/mode')['loadMode']>
    readonly mapActions: UnwrapRef<typeof import('pinia')['mapActions']>
    readonly mapGetters: UnwrapRef<typeof import('pinia')['mapGetters']>
@@ -198,6 +202,7 @@ declare module 'vue' {
    readonly ref: UnwrapRef<typeof import('vue')['ref']>
    readonly resolveComponent: UnwrapRef<typeof import('vue')['resolveComponent']>
    readonly runNetworkInstance: UnwrapRef<typeof import('./composables/backend')['runNetworkInstance']>
+   readonly saveLastNetworkInstanceId: UnwrapRef<typeof import('./composables/config')['saveLastNetworkInstanceId']>
    readonly saveMode: UnwrapRef<typeof import('./composables/mode')['saveMode']>
    readonly saveNetworkConfig: UnwrapRef<typeof import('./composables/backend')['saveNetworkConfig']>
    readonly sendConfigs: UnwrapRef<typeof import('./composables/backend')['sendConfigs']>
@@ -213,6 +218,7 @@ declare module 'vue' {
    readonly shallowReadonly: UnwrapRef<typeof import('vue')['shallowReadonly']>
    readonly shallowRef: UnwrapRef<typeof import('vue')['shallowRef']>
    readonly storeToRefs: UnwrapRef<typeof import('pinia')['storeToRefs']>
+   readonly syncMobileVpnService: UnwrapRef<typeof import('./composables/mobile_vpn')['syncMobileVpnService']>
    readonly toRaw: UnwrapRef<typeof import('vue')['toRaw']>
    readonly toRef: UnwrapRef<typeof import('vue')['toRef']>
    readonly toRefs: UnwrapRef<typeof import('vue')['toRefs']>
+82 -1
@@ -1,6 +1,6 @@
<script setup lang="ts">
import { computed, watch, onMounted, ref } from 'vue';
-import type { Mode, ServiceMode, RemoteMode } from '~/composables/mode';
+import type { Mode, ServiceMode, RemoteMode, NormalMode } from '~/composables/mode';
import { appConfigDir, appLogDir } from '@tauri-apps/api/path';
import { join } from '@tauri-apps/api/path';
import { getServiceStatus, type ServiceStatus } from '~/composables/backend';
@@ -15,6 +15,14 @@ const defaultLogDir = ref('')
const serviceStatus = ref<ServiceStatus>('NotInstalled')
const isServiceStatusLoaded = ref(false)
function normalizeRpcListenPort(port: unknown): number {
const defaultPort = 15999
const numericPort = typeof port === 'number' ? port : Number.parseInt(String(port ?? ''), 10)
if (Number.isNaN(numericPort))
return defaultPort
return Math.min(65535, Math.max(1, Math.floor(numericPort)))
}
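The clamping behaviour of `normalizeRpcListenPort` can be checked in isolation; the function below is copied verbatim from the component above:

```typescript
// Copied from the component above: coerce arbitrary input to a valid TCP port,
// falling back to the default RPC port 15999 when parsing fails.
function normalizeRpcListenPort(port: unknown): number {
  const defaultPort = 15999
  const numericPort = typeof port === 'number' ? port : Number.parseInt(String(port ?? ''), 10)
  if (Number.isNaN(numericPort))
    return defaultPort
  return Math.min(65535, Math.max(1, Math.floor(numericPort)))
}

// Out-of-range values clamp to [1, 65535]; unparseable input yields 15999.
```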
onMounted(async () => {
  defaultConfigDir.value = await join(await appConfigDir(), 'config.d')
  defaultLogDir.value = await appLogDir()
@@ -26,6 +34,43 @@ const modeOptions = computed(() => [
  { label: t('mode.remote'), value: 'remote' },
]);
const normalMode = computed({
get: () => model.value.mode === 'normal' ? model.value as NormalMode : undefined,
set: (value) => {
if (value) {
model.value = value
}
}
})
const rpcListenOptions = computed(() => [
{ label: t('web.common.disable'), value: false },
{ label: t('web.common.enable'), value: true },
])
const rpcListenEnabled = computed<boolean>({
get: () => !!normalMode.value?.enable_rpc_port_listen,
set: (value) => {
if (!normalMode.value)
return
normalMode.value.enable_rpc_port_listen = value
},
})
const rpcListenPort = computed<string>({
get: () => String(normalMode.value?.rpc_listen_port ?? 15999),
set: (value) => {
if (!normalMode.value)
return
const trimmed = value.trim()
if (trimmed === '')
return
if (!/^\d+$/.test(trimmed))
return
normalMode.value.rpc_listen_port = Number.parseInt(trimmed, 10)
},
})
const serviceMode = computed({
  get: () => model.value.mode === 'service' ? model.value as ServiceMode : undefined,
  set: (value) => {
@@ -57,6 +102,24 @@ const statusColorClass = computed(() => {
  }
})
watch(() => [normalMode.value?.enable_rpc_port_listen, normalMode.value?.rpc_listen_port], ([enabled, port]) => {
if (!normalMode.value)
return
if (!enabled) {
normalMode.value.rpc_portal = undefined
return
}
const normalizedPort = normalizeRpcListenPort(port)
if (normalMode.value.rpc_listen_port !== normalizedPort)
normalMode.value.rpc_listen_port = normalizedPort
const desiredPortal = `tcp://0.0.0.0:${normalizedPort}`
if (normalMode.value.rpc_portal !== desiredPortal)
normalMode.value.rpc_portal = desiredPortal
}, { immediate: true })
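The watcher above collapses the toggle and port fields into a single `rpc_portal` string; the derivation can be stated as a pure function (`deriveRpcPortal` is a hypothetical name, extracted here for illustration):

```typescript
// Derive the rpc_portal value the watcher writes: disabled -> undefined,
// enabled -> a TCP listener on all interfaces at the given port.
function deriveRpcPortal(enabled: boolean, port: number): string | undefined {
  return enabled ? `tcp://0.0.0.0:${port}` : undefined
}
```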
watch(() => model.value.mode, async (newMode, oldMode) => {
  if (newMode === oldMode)
    return
@@ -69,8 +132,12 @@ watch(() => model.value.mode, async (newMode, oldMode) => {
  const oldModelValue = { ...model.value }

  if (newMode === 'normal') {
+   const portal = normalMode.value?.rpc_portal?.trim()
    model.value = {
      ...oldModelValue,
+     rpc_portal: portal || undefined,
+     enable_rpc_port_listen: normalMode.value?.enable_rpc_port_listen,
+     rpc_listen_port: normalMode.value?.rpc_listen_port,
      mode: 'normal',
    }
  }
@@ -113,6 +180,20 @@ watch(() => model.value.mode, async (newMode, oldMode) => {
      {{ t('mode.remote_description') }}
    </div>
<div v-if="normalMode" class="flex flex-col gap-2">
<div class="flex items-center gap-2">
<label for="rpc-listen-toggle">{{ t('mode.enable_rpc_tcp_listen') }}</label>
<SelectButton id="rpc-listen-toggle" v-model="rpcListenEnabled" :options="rpcListenOptions" option-label="label"
option-value="value" />
</div>
<div v-if="rpcListenEnabled" class="flex flex-col gap-2">
<div class="flex items-center gap-2">
<label for="rpc-listen-port">{{ t('mode.rpc_listen_port') }}</label>
<InputText id="rpc-listen-port" v-model="rpcListenPort" class="flex-1" inputmode="numeric" />
</div>
</div>
</div>
    <div v-if="serviceMode" class="flex flex-col gap-2">
      <div class="flex items-center gap-2">
        <label for="config-dir">{{ t('mode.config_dir') }}</label>
+53 -11
@@ -1,11 +1,12 @@
import { invoke } from '@tauri-apps/api/core'
-import { Api, type NetworkTypes } from 'easytier-frontend-lib'
+import { Api, NetworkTypes } from 'easytier-frontend-lib'
import { GetNetworkMetasResponse } from 'node_modules/easytier-frontend-lib/dist/modules/api'

type NetworkConfig = NetworkTypes.NetworkConfig
type ValidateConfigResponse = Api.ValidateConfigResponse
type ListNetworkInstanceIdResponse = Api.ListNetworkInstanceIdResponse
+type ConfigSource = 'user' | 'webhook' | 'legacy'

interface ServiceOptions {
  config_dir: string
  rpc_portal: string
@@ -16,16 +17,50 @@ interface ServiceOptions {
export type ServiceStatus = "Running" | "Stopped" | "NotInstalled"
interface StoredGuiConfig {
config: NetworkConfig
source: ConfigSource
}
function parseStoredConfigs(raw: string | null): StoredGuiConfig[] {
const parsed: unknown = JSON.parse(raw || '[]')
if (!Array.isArray(parsed)) {
return []
}
return parsed.flatMap((entry): StoredGuiConfig[] => {
if (entry && typeof entry === 'object' && 'config' in entry) {
const { config, source } = entry as {
config?: NetworkConfig
source?: ConfigSource
}
if (!config) {
return []
}
return [{
config: NetworkTypes.normalizeNetworkConfig(config),
source: source === 'user' || source === 'webhook' ? source : 'legacy',
}]
}
return [{
config: NetworkTypes.normalizeNetworkConfig(entry as NetworkConfig),
source: 'legacy',
}]
})
}
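`parseStoredConfigs` above migrates both localStorage formats: the old one held bare config objects, the new one wraps each config with its source tag. A standalone sketch of that logic, with `NetworkTypes.normalizeNetworkConfig` replaced by an identity placeholder so it can run without the library:

```typescript
type ConfigSource = 'user' | 'webhook' | 'legacy'
interface StoredGuiConfig { config: Record<string, unknown>, source: ConfigSource }

// Placeholder for NetworkTypes.normalizeNetworkConfig; identity for this sketch.
const normalize = (c: Record<string, unknown>) => c

// Accept both the legacy format (bare config objects) and the tagged format
// ({ config, source }); anything without a recognized source becomes 'legacy'.
function parseStoredConfigs(raw: string | null): StoredGuiConfig[] {
  const parsed: unknown = JSON.parse(raw || '[]')
  if (!Array.isArray(parsed))
    return []
  return parsed.flatMap((entry): StoredGuiConfig[] => {
    if (entry && typeof entry === 'object' && 'config' in entry) {
      const { config, source } = entry as { config?: Record<string, unknown>, source?: ConfigSource }
      if (!config)
        return []
      return [{ config: normalize(config), source: source === 'user' || source === 'webhook' ? source : 'legacy' }]
    }
    return [{ config: normalize(entry as Record<string, unknown>), source: 'legacy' }]
  })
}
```

A bare entry is tagged `'legacy'`, while a tagged entry keeps its source, so old installs keep working after the upgrade.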
export async function parseNetworkConfig(cfg: NetworkConfig) {
- return invoke<string>('parse_network_config', { cfg })
+ return invoke<string>('parse_network_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
export async function generateNetworkConfig(tomlConfig: string) {
- return invoke<NetworkConfig>('generate_network_config', { tomlConfig })
+ const config = await invoke<NetworkConfig>('generate_network_config', { tomlConfig })
+ return NetworkTypes.normalizeNetworkConfig(config)
}
export async function runNetworkInstance(cfg: NetworkConfig, save: boolean) {
- return invoke('run_network_instance', { cfg, save })
+ return invoke('run_network_instance', { cfg: NetworkTypes.toBackendNetworkConfig(cfg), save })
}
export async function collectNetworkInfo(instanceId: string) {
@@ -57,20 +92,27 @@ export async function updateNetworkConfigState(instanceId: string, disabled: boo
}
export async function saveNetworkConfig(cfg: NetworkConfig) {
- return await invoke('save_network_config', { cfg })
+ return await invoke('save_network_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
export async function validateConfig(cfg: NetworkConfig) {
- return await invoke<ValidateConfigResponse>('validate_config', { cfg })
+ return await invoke<ValidateConfigResponse>('validate_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
export async function getConfig(instanceId: string) {
- return await invoke<NetworkConfig>('get_config', { instanceId })
+ const config = await invoke<NetworkConfig>('get_config', { instanceId })
+ return NetworkTypes.normalizeNetworkConfig(config)
}
export async function sendConfigs(enabledNetworks: string[]) {
- let networkList: NetworkConfig[] = JSON.parse(localStorage.getItem('networkList') || '[]');
- return await invoke('load_configs', { configs: networkList, enabledNetworks })
+ const networkList = parseStoredConfigs(localStorage.getItem('networkList'))
+ return await invoke('load_configs', {
+   configs: networkList.map(({ config, source }) => ({
+     config: NetworkTypes.toBackendNetworkConfig(config),
+     source,
+   })),
+   enabledNetworks
+ })
}
export async function getNetworkMetas(instanceIds: string[]) {
@@ -89,8 +131,8 @@ export async function getServiceStatus() {
  return await invoke<ServiceStatus>('get_service_status')
}
-export async function initRpcConnection(url?: string) {
- return await invoke('init_rpc_connection', { url })
+export async function initRpcConnection(isNormalMode: boolean, url?: string) {
+ return await invoke('init_rpc_connection', { isNormalMode, url })
}
export async function isClientRunning() {
+20
@@ -0,0 +1,20 @@
/**
 * Persist the last used network instance ID across app restarts.
 */

/**
 * Save the last used network instance ID.
 * @param instanceId the instance ID to remember
 */
export function saveLastNetworkInstanceId(instanceId: string) {
  localStorage.setItem('last_network_instance_id', instanceId)
}

/**
 * Load the last used network instance ID.
 * @returns the last used instance ID, or null if none was saved
 */
export function loadLastNetworkInstanceId(): string | null {
  return localStorage.getItem('last_network_instance_id')
}
+60 -16
@@ -1,6 +1,12 @@
import { Event, listen } from "@tauri-apps/api/event";
import { type } from "@tauri-apps/plugin-os";
import { NetworkTypes } from "easytier-frontend-lib"
import { Utils } from "easytier-frontend-lib";
interface StoredGuiConfig {
config: NetworkTypes.NetworkConfig
source?: 'user' | 'webhook' | 'legacy'
}
const EVENTS = Object.freeze({
  SAVE_CONFIGS: 'save_configs',
@@ -12,44 +18,82 @@ const EVENTS = Object.freeze({
  EVENT_LAGGED: 'event_lagged',
});
-function onSaveConfigs(event: Event<NetworkTypes.NetworkConfig[]>) {
+function onSaveConfigs(event: Event<StoredGuiConfig[]>) {
  console.log(`Received event '${EVENTS.SAVE_CONFIGS}': ${event.payload}`);
- localStorage.setItem('networkList', JSON.stringify(event.payload));
+ localStorage.setItem(
+   'networkList',
+   JSON.stringify(event.payload.map(({ config, source }) => ({
+     config: NetworkTypes.normalizeNetworkConfig(config),
+     source: source ?? 'legacy',
+   }))),
+ );
}
-async function onPreRunNetworkInstance(event: Event<string>) {
+function normalizeInstanceIdPayload(payload: unknown): string {
if (typeof payload === 'string') {
return payload
}
if (payload && typeof payload === 'object') {
const uuid = payload as Partial<Utils.UUID>
if (
typeof uuid.part1 === 'number'
&& typeof uuid.part2 === 'number'
&& typeof uuid.part3 === 'number'
&& typeof uuid.part4 === 'number'
) {
return Utils.UuidToStr(uuid as Utils.UUID)
}
}
if (payload == null) {
return ''
}
const fallback = String(payload)
return fallback === '[object Object]' ? '' : fallback
}
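The non-UUID paths of `normalizeInstanceIdPayload` can be exercised on their own. The sketch below omits the `Utils.UuidToStr` branch, whose exact formatting belongs to easytier-frontend-lib, and keeps only the string, null, and fallback handling from above:

```typescript
// Normalize an event payload into an instance-id string; null/undefined and
// objects that are not recognized UUIDs degrade to an empty string.
function normalizeInstanceIdPayload(payload: unknown): string {
  if (typeof payload === 'string')
    return payload
  // Utils.UuidToStr branch for { part1..part4 } payloads omitted in this sketch.
  if (payload == null)
    return ''
  const fallback = String(payload)
  return fallback === '[object Object]' ? '' : fallback
}
```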
async function onPreRunNetworkInstance(event: Event<unknown>) {
const instanceId = normalizeInstanceIdPayload(event.payload)
console.log(`Received event '${EVENTS.PRE_RUN_NETWORK_INSTANCE}', raw payload:`, event.payload, 'normalized:', instanceId)
  if (type() === 'android') {
-   await prepareVpnService(event.payload);
+   await prepareVpnService(instanceId);
  }
}
-async function onPostRunNetworkInstance(event: Event<string>) {
+async function onPostRunNetworkInstance(event: Event<unknown>) {
+ const instanceId = normalizeInstanceIdPayload(event.payload)
+ console.log(`Received event '${EVENTS.POST_RUN_NETWORK_INSTANCE}', raw payload:`, event.payload, 'normalized:', instanceId)
  if (type() === 'android') {
-   await onNetworkInstanceChange(event.payload);
+   await onNetworkInstanceChange(instanceId);
  }
}
-async function onVpnServiceStop(event: Event<string>) {
- await onNetworkInstanceChange(event.payload);
+async function onVpnServiceStop(event: Event<unknown>) {
+ console.log(`Received event '${EVENTS.VPN_SERVICE_STOP}', raw payload:`, event.payload)
+ await syncMobileVpnService();
}
-async function onDhcpIpChanged(event: Event<string>) {
- console.log(`Received event '${EVENTS.DHCP_IP_CHANGED}' for instance: ${event.payload}`);
+async function onDhcpIpChanged(event: Event<unknown>) {
+ const instanceId = normalizeInstanceIdPayload(event.payload)
+ console.log(`Received event '${EVENTS.DHCP_IP_CHANGED}' for instance: ${instanceId}`);
  if (type() === 'android') {
-   await onNetworkInstanceChange(event.payload);
+   await onNetworkInstanceChange(instanceId);
  }
}
-async function onProxyCidrsUpdated(event: Event<string>) {
- console.log(`Received event '${EVENTS.PROXY_CIDRS_UPDATED}' for instance: ${event.payload}`);
+async function onProxyCidrsUpdated(event: Event<unknown>) {
+ const instanceId = normalizeInstanceIdPayload(event.payload)
+ console.log(`Received event '${EVENTS.PROXY_CIDRS_UPDATED}' for instance: ${instanceId}`);
  if (type() === 'android') {
-   await onNetworkInstanceChange(event.payload);
+   await onNetworkInstanceChange(instanceId);
  }
}
-async function onEventLagged(event: Event<string>) {
+async function onEventLagged(event: Event<unknown>) {
  if (type() === 'android') {
-   await onNetworkInstanceChange(event.payload);
+   await onNetworkInstanceChange(normalizeInstanceIdPayload(event.payload));
  }
}
+140 -26
@@ -1,7 +1,7 @@
import type { NetworkTypes } from 'easytier-frontend-lib'
import { addPluginListener } from '@tauri-apps/api/core'
import { Utils } from 'easytier-frontend-lib'
-import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'
+import { get_vpn_status, prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'

type Route = NetworkTypes.Route
@@ -24,6 +24,53 @@ const curVpnStatus: vpnStatus = {
  dns: undefined,
}
async function requestVpnPermission() {
console.log('prepare vpn')
const prepare_ret = await prepare_vpn()
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) {
throw new Error(prepare_ret.errorMsg)
}
const granted = prepare_ret?.granted ?? true
if (!granted) {
console.info('vpn permission request was denied or dismissed')
}
return granted
}
function resetVpnConfigStatus() {
curVpnStatus.ipv4Addr = undefined
curVpnStatus.ipv4Cidr = undefined
curVpnStatus.routes = []
curVpnStatus.dns = undefined
}
function syncVpnStatusFromNative(status: Awaited<ReturnType<typeof get_vpn_status>>) {
curVpnStatus.running = status?.running ?? false
if (!curVpnStatus.running) {
resetVpnConfigStatus()
return
}
const ipv4WithCidr = status?.ipv4Addr
if (ipv4WithCidr?.length) {
const [ipv4Addr, cidr] = ipv4WithCidr.split('/')
curVpnStatus.ipv4Addr = ipv4Addr
const parsedCidr = Number(cidr)
curVpnStatus.ipv4Cidr = Number.isInteger(parsedCidr) ? parsedCidr : undefined
}
else {
curVpnStatus.ipv4Addr = undefined
curVpnStatus.ipv4Cidr = undefined
}
curVpnStatus.routes = [...(status?.routes ?? [])]
curVpnStatus.dns = status?.dns ?? undefined
}
async function waitVpnStatus(target_status: boolean, timeout_sec: number) { async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
const start_time = Date.now() const start_time = Date.now()
while (curVpnStatus.running !== target_status) { while (curVpnStatus.running !== target_status) {
@@ -34,18 +81,19 @@ async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
} }
} }
async function doStopVpn() { async function doStopVpn(force = false) {
if (!curVpnStatus.running) { const wasRunning = curVpnStatus.running
if (!force && !wasRunning) {
return return
} }
console.log('stop vpn') console.log('stop vpn')
const stop_ret = await stop_vpn() const stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret))) console.log('stop vpn', JSON.stringify((stop_ret)))
await waitVpnStatus(false, 3) if (wasRunning) {
await waitVpnStatus(false, 3)
}
curVpnStatus.ipv4Addr = undefined resetVpnConfigStatus()
curVpnStatus.routes = []
curVpnStatus.dns = undefined
} }
async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[], dns?: string) { async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[], dns?: string) {
@@ -54,19 +102,32 @@ async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[], dns?
} }
console.log('start vpn service', ipv4Addr, cidr, routes, dns) console.log('start vpn service', ipv4Addr, cidr, routes, dns)
const start_ret = await start_vpn({ const request = {
ipv4Addr: `${ipv4Addr}/${cidr}`, ipv4Addr: `${ipv4Addr}/${cidr}`,
routes, routes,
dns, dns,
disallowedApplications: ['com.kkrainbow.easytier'], disallowedApplications: ['com.kkrainbow.easytier'],
mtu: 1300, mtu: 1300,
}) }
let start_ret = await start_vpn(request)
console.log('start vpn response', JSON.stringify(start_ret))
if (start_ret?.errorMsg === 'need_prepare') {
const granted = await requestVpnPermission()
if (!granted) {
throw new Error('vpn_permission_denied')
}
start_ret = await start_vpn(request)
console.log('start vpn retry response', JSON.stringify(start_ret))
}
if (start_ret?.errorMsg?.length) { if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg) throw new Error(start_ret.errorMsg)
} }
await waitVpnStatus(true, 3) await waitVpnStatus(true, 3)
curVpnStatus.ipv4Addr = ipv4Addr curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.ipv4Cidr = cidr
curVpnStatus.routes = routes curVpnStatus.routes = routes
curVpnStatus.dns = dns curVpnStatus.dns = dns
} }
@@ -75,13 +136,16 @@ async function onVpnServiceStart(payload: any) {
console.log('vpn service start', JSON.stringify(payload)) console.log('vpn service start', JSON.stringify(payload))
curVpnStatus.running = true curVpnStatus.running = true
if (payload.fd) { if (payload.fd) {
setTunFd(payload.fd) await setTunFd(payload.fd).catch((e) => {
console.error('set tun fd failed', e)
})
} }
} }
async function onVpnServiceStop(payload: any) { async function onVpnServiceStop(payload: any) {
console.log('vpn service stop', JSON.stringify(payload)) console.log('vpn service stop', JSON.stringify(payload))
curVpnStatus.running = false curVpnStatus.running = false
resetVpnConfigStatus()
} }
async function registerVpnServiceListener() { async function registerVpnServiceListener() {
@@ -135,15 +199,25 @@ export async function onNetworkInstanceChange(instanceId: string) {
} }
if (!instanceId) { if (!instanceId) {
await doStopVpn() console.warn('vpn service skipped because instance id is empty')
if (curVpnStatus.running) {
await doStopVpn()
}
return return
} }
const config = await getConfig(instanceId) const config = await getConfig(instanceId)
console.log('vpn service loaded config', instanceId, JSON.stringify({
no_tun: config.no_tun,
dhcp: config.dhcp,
enable_magic_dns: config.enable_magic_dns,
}))
if (config.no_tun) { if (config.no_tun) {
console.log('vpn service skipped because no_tun is enabled', instanceId)
return return
} }
const curNetworkInfo = (await collectNetworkInfo(instanceId)).info.map[instanceId] const curNetworkInfo = (await collectNetworkInfo(instanceId)).info.map[instanceId]
if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) { if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) {
console.warn('vpn service skipped because network info is unavailable', instanceId, curNetworkInfo?.error_msg)
await doStopVpn() await doStopVpn()
return return
} }
@@ -170,27 +244,39 @@ export async function onNetworkInstanceChange(instanceId: string) {
const routes = getRoutesForVpn(curNetworkInfo?.routes, config) const routes = getRoutesForVpn(curNetworkInfo?.routes, config)
const dns = config.enable_magic_dns ? '100.100.100.101' : undefined; const dns = config.enable_magic_dns ? '100.100.100.101' : undefined
const ipChanged = virtual_ip !== curVpnStatus.ipv4Addr const ipChanged = virtual_ip !== curVpnStatus.ipv4Addr
const cidrChanged = network_length !== curVpnStatus.ipv4Cidr
const routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes) const routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes)
const dnsChanged = dns != curVpnStatus.dns const dnsChanged = dns != curVpnStatus.dns
const configChanged = ipChanged || cidrChanged || routesChanged || dnsChanged
const shouldStartVpn = !curVpnStatus.running
if (ipChanged || routesChanged || dnsChanged) { if (shouldStartVpn || configChanged) {
console.info('vpn service virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip) console.info('vpn service virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip)
try { if (curVpnStatus.running) {
await doStopVpn() try {
} await doStopVpn()
catch (e) { }
console.error(e) catch (e) {
console.error(e)
}
} }
try { try {
await doStartVpn(virtual_ip, network_length, routes, dns) await doStartVpn(virtual_ip, network_length, routes, dns)
} }
catch (e) { catch (e) {
console.error('start vpn service failed, stop all other network insts.', e) if (e instanceof Error && e.message === 'need_prepare') {
await runNetworkInstance(config, true); //on android config should always be saved console.info('vpn permission is required before starting the Android VPN service')
return
}
if (e instanceof Error && e.message === 'vpn_permission_denied') {
console.info('vpn permission request was denied or dismissed')
return
}
console.error('start vpn service failed', e)
} }
} }
} }
@@ -202,6 +288,22 @@ async function isNoTunEnabled(instanceId: string | undefined) {
return (await getConfig(instanceId)).no_tun ?? false return (await getConfig(instanceId)).no_tun ?? false
} }
async function findRunningTunInstanceId() {
const instanceIds = await listNetworkInstanceIds()
const runningIds = instanceIds.running_inst_ids.map(Utils.UuidToStr)
console.log('vpn service sync running instances', JSON.stringify(runningIds))
for (const instanceId of runningIds) {
if (await isNoTunEnabled(instanceId)) {
continue
}
return instanceId
}
return undefined
}
export async function initMobileVpnService() { export async function initMobileVpnService() {
await registerVpnServiceListener() await registerVpnServiceListener()
} }
@@ -210,10 +312,22 @@ export async function prepareVpnService(instanceId: string) {
if (await isNoTunEnabled(instanceId)) { if (await isNoTunEnabled(instanceId)) {
return return
} }
console.log('prepare vpn') await requestVpnPermission()
const prepare_ret = await prepare_vpn() }
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) { export async function syncMobileVpnService() {
throw new Error(prepare_ret.errorMsg) syncVpnStatusFromNative(await get_vpn_status())
} const instanceId = await findRunningTunInstanceId()
if (instanceId) {
console.log('vpn service sync selected instance', instanceId)
await onNetworkInstanceChange(instanceId)
return
}
if (dhcpPollingTimer) {
clearTimeout(dhcpPollingTimer)
dhcpPollingTimer = null
}
await doStopVpn(true)
} }
+6 -1
@@ -4,8 +4,12 @@ export interface WebClientConfig {
   config_server_url?: string
 }

-interface NormalMode extends WebClientConfig {
+export interface NormalMode extends WebClientConfig {
   mode: 'normal'
+  // if not provided will use ring tunnel rpc server
+  rpc_portal?: string
+  enable_rpc_port_listen?: boolean
+  rpc_listen_port?: number
 }

 export interface ServiceMode extends WebClientConfig {
@@ -14,6 +18,7 @@ export interface ServiceMode extends WebClientConfig {
   rpc_portal: string
   file_log_level: 'off' | 'warn' | 'info' | 'debug' | 'trace'
   file_log_dir: string
+  installed_core_version?: string
 }

 export interface RemoteMode {
+65 -25
@@ -9,12 +9,14 @@ import { exit } from '@tauri-apps/plugin-process'
 import { I18nUtils, RemoteManagement, Utils } from "easytier-frontend-lib"
 import type { MenuItem } from 'primevue/menuitem'
 import { useTray } from '~/composables/tray'
+import { initMobileVpnService } from '~/composables/mobile_vpn'
 import { GUIRemoteClient } from '~/modules/api'
 import { useToast, useConfirm } from 'primevue'
 import { loadMode, saveMode, WebClientConfig, type Mode } from '~/composables/mode'
+import { saveLastNetworkInstanceId, loadLastNetworkInstanceId } from '~/composables/config'
 import ModeSwitcher from '~/components/ModeSwitcher.vue'
-import { getServiceStatus } from '~/composables/backend'
+import { getEasytierVersion, getServiceStatus } from '~/composables/backend'

 const { t, locale } = useI18n()
 const confirm = useConfirm()
@@ -83,6 +85,20 @@ async function onUninstallService() {
   });
 }

+function stripModeMetadata(mode: Mode) {
+  if (mode.mode !== 'service') {
+    return mode
+  }
+  const serviceConfig = { ...mode }
+  delete serviceConfig.installed_core_version
+  return serviceConfig
+}
+
+function modeConfigChanged(next: Mode) {
+  return JSON.stringify(stripModeMetadata(next)) !== JSON.stringify(stripModeMetadata(currentMode.value))
+}
+
 async function onStopService() {
   isModeSaving.value = true
   manualDisconnect.value = true
@@ -132,13 +148,14 @@ async function initWithMode(mode: Mode) {
       }
       url = mode.remote_rpc_address
       break;
-    case 'service':
+    case 'service': {
       if (!mode.config_dir || !mode.file_log_dir || !mode.file_log_level || !mode.rpc_portal) {
        toast.add({ severity: 'error', summary: t('error'), detail: t('mode.service_config_empty'), life: 10000 })
        return initWithMode({ ...mode, mode: 'normal' });
      }
      let serviceStatus = await getServiceStatus()
-      if (serviceStatus === "NotInstalled" || JSON.stringify(mode) !== JSON.stringify(currentMode.value)) {
+      const coreVersion = await getEasytierVersion()
+      if (serviceStatus === "NotInstalled" || modeConfigChanged(mode) || mode.installed_core_version !== coreVersion) {
        mode.config_server_url = mode.config_server_url || undefined
        await initService({
          config_dir: mode.config_dir,
@@ -147,6 +164,7 @@ async function initWithMode(mode: Mode) {
          rpc_portal: mode.rpc_portal,
          config_server: mode.config_server_url,
        })
+        mode.installed_core_version = coreVersion
        serviceStatus = await getServiceStatus()
      }
      if (serviceStatus === "Stopped") {
@@ -155,13 +173,24 @@ async function initWithMode(mode: Mode) {
      url = "tcp://" + mode.rpc_portal.replace("0.0.0.0", "127.0.0.1")
      retrys = 5
      break;
+    }
+    case 'normal':
+      url = mode.rpc_portal;
+      break;
   }
   for (let i = 0; i < retrys; i++) {
     try {
-      await connectRpcClient(url)
+      await connectRpcClient(mode.mode === 'normal', url)
       break;
     } catch (e) {
       if (i === retrys - 1) {
+        const errMsg = e instanceof Error ? e.message : String(e)
+        toast.add({
+          severity: 'error',
+          summary: t('error'),
+          detail: t('mode.rpc_connection_failed', { error: errMsg }),
+          life: 1000,
+        })
         throw e;
       }
       console.error("Error connecting rpc client, retrying...", e)
@@ -178,9 +207,25 @@ async function initWithMode(mode: Mode) {
   clientRunning.value = await isClientRunning()
 }

-onMounted(() => {
+onMounted(async () => {
+  const cleanupFns: Array<() => void> = []
+  if (type() === 'android') {
+    try {
+      await initMobileVpnService()
+      console.error("easytier init vpn service done")
+    } catch (e: any) {
+      console.error("easytier init vpn service failed", e)
+    }
+  }
+  cleanupFns.push(await listenGlobalEvents())
   currentMode.value = loadMode()
-  initWithMode(currentMode.value);
+  await initWithMode(currentMode.value);
+
+  onUnmounted(() => {
+    cleanupFns.forEach(unlisten => unlisten())
+  })
 });

 useTray(true)
@@ -190,6 +235,12 @@ const remoteClient = computed(() => new GUIRemoteClient());
 const instanceId = ref<string | undefined>(undefined);
 const clientRunning = ref(false);

+watch(instanceId, (newVal) => {
+  if (newVal) {
+    saveLastNetworkInstanceId(newVal);
+  }
+});
+
 watch(clientRunning, async (newVal, oldVal) => {
   if (!newVal && oldVal) {
     if (manualDisconnect.value) {
@@ -197,6 +248,11 @@ watch(clientRunning, async (newVal, oldVal) => {
       return
     }
     await reconnectClient()
+  } else if (newVal && !oldVal) {
+    const lastInstanceId = loadLastNetworkInstanceId();
+    if (lastInstanceId) {
+      instanceId.value = lastInstanceId;
+    }
   }
 })
@@ -320,27 +376,11 @@ const setting_menu_items: Ref<MenuItem[]> = ref([
   },
 ])

-async function connectRpcClient(url?: string) {
-  await initRpcConnection(url)
-  console.log("easytier rpc connection established")
+async function connectRpcClient(isNormalMode: boolean, url?: string) {
+  await initRpcConnection(isNormalMode, url)
+  console.log("easytier rpc connection established, isNormalMode: ", isNormalMode)
 }

-onMounted(async () => {
-  if (type() === 'android') {
-    try {
-      await initMobileVpnService()
-      console.error("easytier init vpn service done")
-    } catch (e: any) {
-      console.error("easytier init vpn service failed", e)
-    }
-  }
-  const unlisten = await listenGlobalEvents()
-  onUnmounted(() => {
-    unlisten()
-  })
-})
-
 async function openConfigServerDialog() {
   editingMode.value = JSON.parse(JSON.stringify(loadMode()))
   configServerDialogVisible.value = true
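The `stripModeMetadata` / `modeConfigChanged` pair in the diff above decides whether the service needs reinstalling: `installed_core_version` is bookkeeping, not configuration, so it is dropped before comparing. A self-contained sketch of that comparison — the `Mode` type is simplified here, and `modeConfigChanged` takes the current mode as a parameter instead of reading `currentMode.value`:

```typescript
// Simplified Mode type (assumption; the real one carries more fields).
interface ServiceMode { mode: 'service'; rpc_portal: string; installed_core_version?: string }
type Mode = ServiceMode | { mode: 'normal' }

// Drop metadata that must not trigger a reinstall when it differs.
function stripModeMetadata(mode: Mode): Mode {
  if (mode.mode !== 'service') return mode
  const serviceConfig = { ...mode }
  delete serviceConfig.installed_core_version // bookkeeping only
  return serviceConfig
}

// Two modes count as "changed" only if their real config differs.
function modeConfigChanged(next: Mode, current: Mode): boolean {
  return JSON.stringify(stripModeMetadata(next)) !== JSON.stringify(stripModeMetadata(current))
}
```

Note that `JSON.stringify` comparison is key-order sensitive; it works here because both objects are built from the same shape. The core-version check is then a separate `||` term in the reinstall condition, so a version bump alone still forces `initService` without counting as a config change.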
+1 -2
@@ -2,13 +2,12 @@
 name = "easytier-rpc-build"
 description = "Protobuf RPC Service Generator for EasyTier"
 version = "0.1.0"
-edition = "2021"
+edition.workspace = true
 homepage = "https://github.com/EasyTier/EasyTier"
 repository = "https://github.com/EasyTier/EasyTier"
 authors = ["kkrainbow"]
 keywords = ["vpn", "p2p", "network", "easytier"]
 categories = ["network-programming", "command-line-utilities"]
-rust-version = "1.89.0"
 license-file = "LICENSE"
 readme = "README.md"
+70 -1
@@ -29,6 +29,7 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
         let method_descriptor_name = format!("{}MethodDescriptor", service.name);

         let mut trait_methods = String::new();
+        let mut weak_impl_methods = String::new();
         let mut enum_methods = String::new();
         let mut list_enum_methods = String::new();
         let mut client_methods = String::new();
@@ -40,6 +41,8 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
         let mut match_output_type_methods = String::new();
         let mut match_output_proto_type_methods = String::new();
         let mut match_handle_methods = String::new();
+        // generate trait default method Xxx::json_call_method match branch
+        let mut match_trait_json_methods = String::new();

         let mut match_method_try_from = String::new();
@@ -66,6 +69,21 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
             )
             .unwrap();

+            writeln!(
+                weak_impl_methods,
+                r#"    async fn {method_name}(&self, ctrl: Self::Controller, input: {input_type}) -> {namespace}::error::Result<{output_type}> {{
+        let Some(service) = self.upgrade() else {{
+            return Err({namespace}::error::Error::Shutdown);
+        }};
+        service.{method_name}(ctrl, input).await
+    }}"#,
+                method_name = method.name,
+                input_type = method.input_type,
+                output_type = method.output_type,
+                namespace = NAMESPACE,
+            )
+            .unwrap();
+
             ServiceGenerator::write_comments(&mut enum_methods, 4, &method.comments).unwrap();
             writeln!(
                 enum_methods,
@@ -164,6 +182,22 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
                 namespace = NAMESPACE,
             )
             .unwrap();
+
+            write!(
+                match_trait_json_methods,
+                r#"            "{name}" | "{proto_name}" => {{
+                let req: {input_type} = ::serde_json::from_value(json).map_err(|e| {namespace}::error::Error::MalformatRpcPacket(format!("json error: {{}}", e)))?;
+                let resp = self.{typed_method}(ctrl, req).await?;
+                Ok(::serde_json::to_value(resp).map_err(|e| {namespace}::error::Error::MalformatRpcPacket(format!("json error: {{}}", e)))?)
+            }}
+"#,
+                name = method.name,
+                proto_name = method.proto_name,
+                input_type = method.input_type,
+                typed_method = method.name,
+                namespace = NAMESPACE,
+            )
+            .unwrap();
         }

         ServiceGenerator::write_comments(&mut buf, 0, &service.comments).unwrap();
@@ -176,6 +210,29 @@
 pub trait {name} {{
     type Controller: {namespace}::controller::Controller;
 {trait_methods}
+
+    async fn json_call_method(
+        &self,
+        ctrl: Self::Controller,
+        method_name: &str,
+        json: ::serde_json::Value,
+    ) -> {namespace}::error::Result<::serde_json::Value> {{
+        match method_name {{
+{match_trait_json_methods}
+            _ => Err({namespace}::error::Error::InvalidMethodIndex(0, method_name.to_string())),
+        }}
+    }}
+}}
+
+#[async_trait::async_trait]
+impl<T> {name} for ::std::sync::Weak<T>
+where
+    T: Send + Sync + 'static,
+    ::std::sync::Arc<T>: {name},
+{{
+    type Controller = <::std::sync::Arc<T> as {name}>::Controller;
+{weak_impl_methods}
 }}

 /// A service descriptor for a `{name}`.
@@ -235,7 +292,7 @@ impl<C: {namespace}::controller::Controller> Clone for {client_name}Factory<C> {
 impl<C> {namespace}::__rt::RpcClientFactory for {client_name}Factory<C> where C: {namespace}::controller::Controller {{
     type Descriptor = {descriptor_name};
-    type ClientImpl = Box<dyn {name}<Controller = C> + Send + 'static>;
+    type ClientImpl = Box<dyn {name}<Controller = C> + Send + Sync + 'static>;
     type Controller = C;

     fn new(handler: impl {namespace}::handler::Handler<Descriptor = Self::Descriptor, Controller = Self::Controller>) -> Self::ClientImpl {{
@@ -250,6 +307,16 @@ impl<C> {namespace}::__rt::RpcClientFactory for {client_name}Factory<C> where C:
 #[derive(Clone, Debug)]
 pub struct {server_name}<A>(A) where A: {name} + Clone + Send + 'static;

+impl<T> {server_name}<::std::sync::Weak<T>>
+where
+    T: Send + Sync + 'static,
+    ::std::sync::Arc<T>: {name},
+{{
+    pub fn new_arc(service: ::std::sync::Arc<T>) -> {server_name}<::std::sync::Weak<T>> {{
+        {server_name}(::std::sync::Arc::downgrade(&service))
+    }}
+}}
+
 impl<A> {server_name}<A> where A: {name} + Clone + Send + 'static {{
     /// Creates a new server instance that dispatches all calls to the supplied service.
     pub fn new(service: A) -> {server_name}<A> {{
@@ -345,6 +412,7 @@ impl {namespace}::descriptor::MethodDescriptor for {method_descriptor_name} {{
             proto_name = service.proto_name,
             package = service.package,
             trait_methods = trait_methods,
+            weak_impl_methods = weak_impl_methods,
             enum_methods = enum_methods,
             list_enum_methods = list_enum_methods,
             client_own_methods = client_own_methods,
@@ -356,6 +424,7 @@ impl {namespace}::descriptor::MethodDescriptor for {method_descriptor_name} {{
             match_output_type_methods = match_output_type_methods,
             match_output_proto_type_methods = match_output_proto_type_methods,
             match_handle_methods = match_handle_methods,
+            match_trait_json_methods = match_trait_json_methods,
             namespace = NAMESPACE,
         ).unwrap();
     }
+11 -9
@@ -1,7 +1,7 @@
 [package]
 name = "easytier-web"
-version = "2.5.0"
+version = "2.6.4"
-edition = "2021"
+edition.workspace = true
 description = "Config server for easytier. easytier-core gets config from this and web frontend use it as restful api server."

 [dependencies]
@@ -10,6 +10,7 @@ tracing = { version = "0.1", features = ["log"] }
 anyhow = { version = "1.0" }
 thiserror = "1.0"
 tokio = { version = "1", features = ["full"] }
+tokio-util = { version = "0.7", features = ["rt"] }
 dashmap = "6.1"
 url = "2.2"
 async-trait = "0.1"
@@ -63,16 +64,17 @@ uuid = { version = "1.5.0", features = [
 ] }
 chrono = { version = "0.4.37", features = ["serde"] }

+openidconnect = { version = "4.0", default-features = false, features = ["accept-rfc3339-timestamps", "reqwest"] }
+reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
+subtle = "2.6"
+
 mimalloc = { version = "*" }

-[build-dependencies]
-thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = [
-    "win7",
-] }
-
 [features]
 default = []
 embed = ["dep:axum-embed"]
+
+# enable thunk-rs when compiling for x86_64 or i686 windows
+[target.x86_64-pc-windows-msvc.build-dependencies]
+thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
+
+[target.i686-pc-windows-msvc.build-dependencies]
+thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
+5 -5
@@ -1,10 +1,10 @@
+use std::env;
+
 fn main() {
+    let target_os = env::var("CARGO_CFG_TARGET_OS").unwrap_or_default();
+    let target_arch = env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default();
     // enable thunk-rs when target os is windows and arch is x86_64 or i686
-    #[cfg(target_os = "windows")]
-    if !std::env::var("TARGET")
-        .unwrap_or_default()
-        .contains("aarch64")
-    {
+    if target_os == "windows" && (target_arch == "x86" || target_arch == "x86_64") {
         thunk::thunk();
     }
 }
+2 -2
@@ -20,7 +20,7 @@
   "dependencies": {
     "@primeuix/themes": "^1.2.3",
     "@vueuse/core": "^11.1.0",
-    "axios": "^1.7.7",
+    "axios": "^1.13.5",
     "chart.js": "^4.5.0",
     "floating-vue": "^5.2",
     "ip-num": "1.5.1",
@@ -41,7 +41,7 @@
     "postcss-nested": "^7.0.2",
     "tailwindcss": "=3.4.17",
     "typescript": "~5.6.3",
-    "vite": "^5.4.10",
+    "vite": "^5.4.21",
     "vite-plugin-dts": "^4.3.0",
     "vue-tsc": "^2.1.10"
   },
@@ -1,16 +1,18 @@
<script setup lang="ts"> <script setup lang="ts">
import { AutoComplete, Button, Checkbox, Dialog, Divider, InputNumber, InputText, Panel, Password, SelectButton, ToggleButton } from 'primevue'
import InputGroup from 'primevue/inputgroup' import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon' import InputGroupAddon from 'primevue/inputgroupaddon'
import { SelectButton, Checkbox, InputText, InputNumber, AutoComplete, Panel, Divider, ToggleButton, Button, Password, Dialog } from 'primevue'
import { import {
addRow, addRow,
DEFAULT_NETWORK_CONFIG, DEFAULT_NETWORK_CONFIG,
NetworkConfig, NetworkConfig,
NetworkingMethod, normalizeNetworkConfig,
removeRow removeRow
} from '../types/network' } from '../types/network'
import { defineProps, defineEmits, ref, onMounted, onUnmounted } from 'vue' import { ref, onMounted, onUnmounted, watch } from 'vue'
import { useI18n } from 'vue-i18n' import { useI18n } from 'vue-i18n'
import AclManager from './acl/AclManager.vue'
import UrlListInput from './UrlListInput.vue'
const props = defineProps<{ const props = defineProps<{
configInvalid?: boolean configInvalid?: boolean
@@ -26,63 +28,18 @@ const curNetwork = defineModel('curNetwork', {
const { t } = useI18n() const { t } = useI18n()
const networking_methods = ref([ const protos: { [proto: string]: number } = {
{ value: NetworkingMethod.PublicServer, label: () => t('public_server') }, tcp: 11010,
{ value: NetworkingMethod.Manual, label: () => t('manual') }, udp: 11010,
{ value: NetworkingMethod.Standalone, label: () => t('standalone') }, wg: 11011,
]) ws: 11011,
wss: 11012,
const protos: { [proto: string]: number } = { tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012 } quic: 11012,
faketcp: 11013,
function searchUrlSuggestions(e: { query: string }): string[] { http: 80,
const query = e.query https: 443,
const ret = [] txt: 0,
// if query match "^\w+:.*", then no proto prefix srv: 0,
if (query.match(/^\w+:.*/)) {
// if query is a valid url, then add to suggestions
try {
// eslint-disable-next-line no-new
new URL(query)
ret.push(query)
}
catch { }
}
else {
for (const proto in protos) {
let item = `${proto}://${query}`
// if query match ":\d+$", then no port suffix
if (!query.match(/:\d+$/)) {
item += `:${protos[proto]}`
}
ret.push(item)
}
}
return ret
}
const publicServerSuggestions = ref([''])
function searchPresetPublicServers(e: { query: string }) {
const presetPublicServers = [
'tcp://public.easytier.top:11010',
]
const query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter(item => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
publicServerSuggestions.value = ret
}
const peerSuggestions = ref([''])
function searchPeerSuggestions(e: { query: string }) {
peerSuggestions.value = searchUrlSuggestions(e)
} }
const inetSuggestions = ref(['']) const inetSuggestions = ref([''])
@@ -99,34 +56,6 @@ function searchInetSuggestions(e: { query: string }) {
} }
} }
const listenerSuggestions = ref([''])
function searchListenerSuggestions(e: { query: string }) {
const ret = []
for (const proto in protos) {
let item = `${proto}://0.0.0.0:`
// if query is a number, use it as port
if (e.query.match(/^\d+$/)) {
item += e.query
}
else {
item += protos[proto]
}
if (item.includes(e.query)) {
ret.push(item)
}
}
if (ret.length === 0) {
ret.push(e.query)
}
listenerSuggestions.value = ret
}
const exitNodesSuggestions = ref(['']) const exitNodesSuggestions = ref([''])
function searchExitNodesSuggestions(e: { query: string }) { function searchExitNodesSuggestions(e: { query: string }) {
@@ -152,21 +81,26 @@ const bool_flags: BoolFlag[] = [
{ field: 'latency_first', help: 'latency_first_help' },
{ field: 'use_smoltcp', help: 'use_smoltcp_help' },
{ field: 'disable_ipv6', help: 'disable_ipv6_help' },
{ field: 'ipv6_public_addr_auto', help: 'ipv6_public_addr_auto_help' },
{ field: 'enable_kcp_proxy', help: 'enable_kcp_proxy_help' },
{ field: 'disable_kcp_input', help: 'disable_kcp_input_help' },
{ field: 'enable_quic_proxy', help: 'enable_quic_proxy_help' },
{ field: 'disable_quic_input', help: 'disable_quic_input_help' },
{ field: 'disable_p2p', help: 'disable_p2p_help' },
{ field: 'p2p_only', help: 'p2p_only_help' },
{ field: 'lazy_p2p', help: 'lazy_p2p_help' },
{ field: 'bind_device', help: 'bind_device_help' },
{ field: 'no_tun', help: 'no_tun_help' },
{ field: 'enable_exit_node', help: 'enable_exit_node_help' },
{ field: 'relay_all_peer_rpc', help: 'relay_all_peer_rpc_help' },
{ field: 'need_p2p', help: 'need_p2p_help' },
{ field: 'multi_thread', help: 'multi_thread_help' },
{ field: 'proxy_forward_by_system', help: 'proxy_forward_by_system_help' },
{ field: 'disable_encryption', help: 'disable_encryption_help' },
{ field: 'disable_tcp_hole_punching', help: 'disable_tcp_hole_punching_help' },
{ field: 'disable_udp_hole_punching', help: 'disable_udp_hole_punching_help' },
{ field: 'enable_udp_broadcast_relay', help: 'enable_udp_broadcast_relay_help' },
{ field: 'disable_upnp', help: 'disable_upnp_help' },
{ field: 'disable_sym_hole_punching', help: 'disable_sym_hole_punching_help' },
{ field: 'enable_magic_dns', help: 'enable_magic_dns_help' },
{ field: 'enable_private_mode', help: 'enable_private_mode_help' },
@@ -217,6 +151,16 @@ onMounted(() => {
});
}
});
function syncNormalizedNetwork(network: NetworkConfig | undefined): void {
if (!network) {
return
}
Object.assign(network, normalizeNetworkConfig(network))
}
watch(() => curNetwork.value, syncNormalizedNetwork, { immediate: true, deep: false })
</script>
<template>
@@ -263,17 +207,14 @@ onMounted(() => {
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 basis-5/12 grow">
-<label for="nm">{{ t('networking_method') }}</label>
-<SelectButton v-model="curNetwork.networking_method" :options="networking_methods"
-:option-label="(v) => v.label()" option-value="value" />
-<div class="items-center flex flex-row p-fluid gap-x-1">
-<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
-v-model="curNetwork.peer_urls" :placeholder="t('chips_placeholder', ['tcp://8.8.8.8:11010'])"
-class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions" />
-<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.PublicServer"
-v-model="curNetwork.public_server_url" :suggestions="publicServerSuggestions" class="grow"
-dropdown :complete-on-focus="false" @complete="searchPresetPublicServers" />
+<div class="flex items-center">
+<label for="initial_nodes">{{ t('initial_nodes') }}</label>
+<span class="pi pi-question-circle ml-2 self-center" v-tooltip="t('initial_nodes_help')"></span>
+</div>
+<div class="items-center flex flex-col p-fluid gap-y-2">
+<UrlListInput id="initial_nodes" v-model="curNetwork.peer_urls" :protos="protos"
+defaultUrl="tcp://:11010" :add-label="t('add_initial_node')"
+:placeholder="t('initial_node_placeholder')" />
</div>
</div>
</div>
@@ -345,10 +286,8 @@ onMounted(() => {
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 grow p-fluid">
<label for="listener_urls">{{ t('listener_urls') }}</label>
-<AutoComplete id="listener_urls" v-model="curNetwork.listener_urls" :suggestions="listenerSuggestions"
-class="w-full" dropdown :complete-on-focus="true"
-:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])" multiple
-@complete="searchListenerSuggestions" />
+<UrlListInput v-model="curNetwork.listener_urls" :protos="protos" :add-label="t('add_listener_url')"
+placeholder="0.0.0.0" />
</div>
</div>
@@ -371,6 +310,19 @@ onMounted(() => {
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 basis-5/12 grow">
<div class="flex">
<label for="instance_recv_bps_limit">{{ t('instance_recv_bps_limit') }}</label>
<span class="pi pi-question-circle ml-2 self-center"
v-tooltip="t('instance_recv_bps_limit_help')"></span>
</div>
<InputNumber id="instance_recv_bps_limit" v-model="curNetwork.instance_recv_bps_limit"
aria-describedby="instance_recv_bps_limit-help" :format="false"
:placeholder="t('instance_recv_bps_limit_placeholder')" :min="1" fluid />
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 basis-5/12 grow">
<div class="flex">
@@ -443,9 +395,8 @@ onMounted(() => {
<label for="mapped_listeners">{{ t('mapped_listeners') }}</label>
<span class="pi pi-question-circle ml-2 self-center" v-tooltip="t('mapped_listeners_help')"></span>
</div>
-<AutoComplete id="mapped_listeners" v-model="curNetwork.mapped_listeners"
-:placeholder="t('chips_placeholder', ['tcp://123.123.123.123:11223'])" class="w-full" multiple fluid
-:suggestions="peerSuggestions" @complete="searchPeerSuggestions" />
+<UrlListInput v-model="curNetwork.mapped_listeners" :protos="protos"
+:add-label="t('add_mapped_listener')" />
</div>
</div>
@@ -541,6 +492,18 @@ onMounted(() => {
</div>
</Panel>
<Divider />
<Panel :header="t('acl.title')" toggleable collapsed>
<div v-if="curNetwork.acl" class="flex flex-col gap-y-2">
<AclManager v-model="curNetwork.acl" />
</div>
<div v-else class="flex justify-center p-4">
<Button :label="t('acl.enabled')"
@click="curNetwork.acl = { acl_v1: { chains: [], group: { declares: [], members: [] } } }" />
</div>
</Panel>
<div class="flex pt-6 justify-center">
<Button :label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)" />
@@ -206,27 +206,39 @@ const confirmDeleteNetwork = (event: any) => {
});
};
-const saveAndRunNewNetwork = async () => {
-if (!currentNetworkConfig.value) {
+const saveAndRunNewNetwork = async (config?: NetworkTypes.NetworkConfig) => {
+const cfg = config ?? currentNetworkConfig.value;
+if (!cfg) {
return;
}
+const targetInstanceId = instanceId.value ?? cfg.instance_id;
+if (targetInstanceId && cfg.instance_id !== targetInstanceId) {
+cfg.instance_id = targetInstanceId;
+}
try {
-await props.api.delete_network(instanceId.value!);
-let ret = await props.api.run_network(currentNetworkConfig.value, currentNetworkControl.remoteSave.value);
-console.debug("saveAndRunNewNetwork", ret);
+if (networkIsDisabled.value) {
+await props.api.save_config(cfg);
+await props.api.update_network_instance_state(cfg.instance_id, false);
+} else {
+await props.api.run_network(cfg, currentNetworkControl.remoteSave.value);
+}
-delete networkMetaCache.value[currentNetworkConfig.value.instance_id];
-await loadNetworkMetas([currentNetworkConfig.value.instance_id]);
-selectedInstanceId.value = { uuid: currentNetworkConfig.value.instance_id };
+delete networkMetaCache.value[cfg.instance_id];
+await loadNetworkMetas([cfg.instance_id]);
+selectedInstanceId.value = { uuid: cfg.instance_id };
+await loadNetworkInstanceIds();
+await loadCurrentNetworkInfo();
} catch (e: any) {
console.error(e);
-toast.add({ severity: 'error', summary: 'Error', detail: 'Failed to create network, error: ' + JSON.stringify(e.response.data), life: 2000 });
+toast.add({ severity: 'error', summary: 'Error', detail: 'Failed to run network, error: ' + JSON.stringify(e.response?.data ?? e), life: 2000 });
return;
}
emits('update');
-// showCreateNetworkDialog.value = false;
-isEditingNetwork.value = false;
+isEditingNetwork.value = false;
+// Exit creation mode after successful network creation
}
const saveNetworkConfig = async () => {
@@ -388,18 +400,18 @@ const updateScreenWidth = () => {
const menuRef = ref();
const actionMenu: Ref<MenuItem[]> = ref([
{
-label: t('web.device_management.edit_network'),
+label: () => t('web.device_management.edit_network'),
icon: 'pi pi-pencil',
visible: () => !(networkIsDisabled.value ?? true) && currentNetworkControl.editable.value,
command: () => editNetwork()
},
{
-label: t('web.device_management.export_config'),
+label: () => t('web.device_management.export_config'),
icon: 'pi pi-download',
command: () => exportConfig()
},
{
-label: t('web.device_management.delete_network'),
+label: () => t('web.device_management.delete_network'),
icon: 'pi pi-trash',
class: 'p-error',
visible: () => currentNetworkControl.deletable.value,
@@ -539,13 +551,15 @@ onUnmounted(() => {
:label="t('web.device_management.edit_as_file')" iconPos="left" severity="secondary" />
<Button @click="importConfig" icon="pi pi-upload" :label="t('web.device_management.import_config')"
iconPos="left" severity="help" />
-<Button v-if="networkIsDisabled" @click="saveNetworkConfig" icon="pi pi-save"
-:label="t('web.device_management.save_config')" iconPos="left" severity="success" />
+<Button v-if="networkIsDisabled" @click="saveNetworkConfig" :disabled="!currentNetworkConfig"
+icon="pi pi-save" :label="t('web.device_management.save_config')" iconPos="left"
+severity="success" />
</div>
<Divider />
-<Config :cur-network="currentNetworkConfig" @run-network="saveAndRunNewNetwork"></Config>
+<Config :cur-network="currentNetworkConfig" :config-invalid="!currentNetworkConfig"
+@run-network="saveAndRunNewNetwork"></Config>
</div>
<!-- Network Status (for running networks) -->
@@ -183,6 +183,12 @@ const myNodeInfoChips = computed(() => {
if (!my_node_info)
return chips
// peer id
chips.push({
label: `Peer ID: ${my_node_info.peer_id}`,
icon: '',
} as Chip)
// TUN Device Name
const dev_name = props.curNetworkInst.detail?.dev_name
if (dev_name) {
@@ -0,0 +1,242 @@
<script setup lang="ts">
import { AutoComplete, Button, Dialog, InputNumber, InputText } from 'primevue'
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { computed, ref, watch } from 'vue'
import { useI18n } from 'vue-i18n'
const props = defineProps<{
placeholder?: string
protos: { [proto: string]: number }
}>()
const { t } = useI18n()
const url = defineModel<string>({ required: true })
const editing = ref(false)
const hostFocused = ref(false)
const parseUrl = (val: string | null | undefined): { proto: string; host: string; port: number | null } => {
const getValidPort = (portStr: string, proto: string) => {
const p = parseInt(portStr)
return isNaN(p) ? (props.protos[proto] ?? 11010) : p
}
const parseByPattern = (input: string) => {
const trimmed = input.trim()
if (!trimmed) {
return null
}
const match = trimmed.match(/^(\w+):\/\/(.*)$/)
const proto = match ? match[1] : 'tcp'
const rest = match ? match[2] : trimmed
const authority = rest.split(/[/?#]/)[0]
if (!authority) {
return null
}
const hostAndMaybePort = authority.includes('@') ? authority.slice(authority.lastIndexOf('@') + 1) : authority
if (hostAndMaybePort.startsWith('[')) {
const ipv6End = hostAndMaybePort.indexOf(']')
if (ipv6End > 0) {
const host = hostAndMaybePort.slice(0, ipv6End + 1)
const remain = hostAndMaybePort.slice(ipv6End + 1)
// null = no explicit port in URL; do not fabricate a default
const port: number | null = remain.startsWith(':') ? getValidPort(remain.slice(1), proto) : null
return { proto, host, port }
}
}
const portMatch = hostAndMaybePort.match(/^(.*):(\d+)$/)
const host = portMatch ? portMatch[1] : hostAndMaybePort
// null = no explicit port in URL; buildUrlValue will omit the port entirely,
// preserving the protocol's implied standard port (e.g. 443 for wss://).
const port: number | null = portMatch ? parseInt(portMatch[2]) : null
return { proto, host, port }
}
if (!val) {
return { proto: 'tcp', host: '', port: props.protos['tcp'] ?? 11010 }
}
const parsedByPattern = parseByPattern(val)
if (parsedByPattern) {
return parsedByPattern
}
return { proto: 'tcp', host: '', port: null }
}
const internalValue = ref(parseUrl(url.value))
const defaultHost = '0.0.0.0'
const buildUrlValue = (value: { proto: string, host: string, port: number | null }, forceDefaultHost = false) => {
const proto = value.proto || 'tcp'
const rawHost = (value.host ?? '').trim()
const host = rawHost || (forceDefaultHost ? defaultHost : '')
if (!host) {
return null
}
// Omit the port when the protocol uses no port (protos value = 0), or when the
// original URL had no explicit port (port === null); this avoids overwriting an
// implicit standard port (e.g. 443 for wss) with an EasyTier default (11012).
if (props.protos[proto] === 0 || value.port === null) {
return `${proto}://${host}`
}
return `${proto}://${host}:${value.port}`
}
const syncUrlFromInternal = (forceDefaultHost = false) => {
const nextUrl = buildUrlValue(internalValue.value, forceDefaultHost)
if (!nextUrl || nextUrl === url.value) {
return
}
url.value = nextUrl
}
const onHostBlur = () => {
hostFocused.value = false
syncUrlFromInternal(true)
}
const onHostFocus = () => {
hostFocused.value = true
}
const onDialogConfirm = () => {
syncUrlFromInternal(true)
editing.value = false
}
const isNoPortProto = computed(() => {
return props.protos[internalValue.value.proto] === 0
})
// Sync from external
watch(() => url.value, (newVal) => {
if (hostFocused.value) {
return
}
const parsed = parseUrl(newVal)
const internalHost = internalValue.value.host ?? ''
const sameHost = parsed.host === internalHost || (!internalHost.trim() && parsed.host === defaultHost)
if (parsed.proto !== internalValue.value.proto ||
!sameHost ||
parsed.port !== internalValue.value.port) {
internalValue.value = parsed
}
})
// Sync to external
watch(internalValue, () => {
syncUrlFromInternal(false)
}, { deep: true })
const protoOptions = computed(() => Object.keys(props.protos))
const filteredProtos = ref<string[]>([])
const searchProtos = (event: { query: string }) => {
if (!event.query.trim().length) {
filteredProtos.value = [...protoOptions.value]
} else {
filteredProtos.value = protoOptions.value.filter((proto) => {
return proto.toLowerCase().startsWith(event.query.toLowerCase())
})
}
}
const onProtoChange = (newProto: string) => {
const oldProto = internalValue.value.proto
const oldDefault = props.protos[oldProto]
const newDefault = props.protos[newProto]
if (oldDefault !== undefined && internalValue.value.port === oldDefault && newDefault !== undefined) {
internalValue.value.port = newDefault
}
internalValue.value.proto = newProto
}
</script>
<template>
<div class="url-input-container w-full min-w-0 overflow-hidden">
<InputGroup class="url-input-full w-full min-w-0">
<AutoComplete :model-value="internalValue.proto" :suggestions="filteredProtos" dropdown
class="max-w-32 proto-autocomplete-in-group" @complete="searchProtos"
@update:model-value="onProtoChange" />
<InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="grow min-w-0"
@focus="onHostFocus" @blur="onHostBlur" />
<template v-if="!isNoPortProto">
<InputGroupAddon>
<span style="font-weight: bold">:</span>
</InputGroupAddon>
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="max-w-24"
:placeholder="String(protos[internalValue.proto] ?? 11010)" fluid />
</template>
<!-- Rendered in both responsive branches; keep action slot content free of side effects and duplicate IDs. -->
<slot name="actions"></slot>
</InputGroup>
<div
class="url-input-compact flex justify-between items-center p-2 border rounded w-full min-w-0 overflow-hidden">
<span class="truncate mr-2 min-w-0 flex-1 overflow-hidden">{{ url }}</span>
<div class="flex items-center shrink-0">
<Button icon="pi pi-pencil" class="p-button-sm p-button-text" :aria-label="t('web.common.edit')"
@click="editing = true" />
<slot name="actions"></slot>
</div>
</div>
<Dialog v-model:visible="editing" modal :header="placeholder" :style="{ width: '90vw', maxWidth: '500px' }">
<div class="flex flex-col gap-4 py-4">
<div class="flex flex-col gap-2">
<label>{{ t('tunnel_proto') }}</label>
<AutoComplete :model-value="internalValue.proto" :suggestions="filteredProtos" dropdown fluid
@complete="searchProtos" @update:model-value="onProtoChange" />
</div>
<div class="flex flex-col gap-2">
<label>{{ t('web.common.address') || 'Address' }}</label>
<InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="w-full"
@focus="onHostFocus" @blur="onHostBlur" />
</div>
<div v-if="!isNoPortProto" class="flex flex-col gap-2">
<label>{{ t('port') }}</label>
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="w-full"
:placeholder="String(protos[internalValue.proto] ?? 11010)" />
</div>
</div>
<template #footer>
<Button :label="t('web.common.confirm') || 'Done'" icon="pi pi-check" @click="onDialogConfirm"
autofocus />
</template>
</Dialog>
</div>
</template>
<style scoped>
.url-input-container {
container-type: inline-size;
}
.url-input-full {
display: none;
}
.url-input-compact {
display: flex;
}
@container (min-width: 400px) {
.url-input-full {
display: flex;
}
.url-input-compact {
display: none;
}
}
.proto-autocomplete-in-group,
.proto-autocomplete-in-group :deep(.p-autocomplete-input),
.proto-autocomplete-in-group :deep(.p-autocomplete-dropdown) {
border-top-right-radius: 0 !important;
border-bottom-right-radius: 0 !important;
}
.proto-autocomplete-in-group :deep(.p-autocomplete-dropdown) {
border-right: 0 !important;
}
</style>
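The parse/build round-trip above is the core of `UrlInput.vue`. A standalone sketch of that logic, stripped of Vue reactivity: `DEFAULT_PORTS` is a hypothetical stand-in for the component's `protos` prop, and the NaN-port fallback is simplified away.

```typescript
// Hypothetical proto -> default-port table; 0 marks a port-less protocol.
const DEFAULT_PORTS: Record<string, number> = { tcp: 11010, udp: 11010, wss: 0 }

interface ParsedUrl { proto: string; host: string; port: number | null }

function parseUrlSketch(input: string): ParsedUrl | null {
  const trimmed = input.trim()
  if (!trimmed) return null
  const match = trimmed.match(/^(\w+):\/\/(.*)$/)
  const proto = match ? match[1] : 'tcp' // bare "host:port" defaults to tcp
  const rest = match ? match[2] : trimmed
  const authority = rest.split(/[/?#]/)[0]
  if (!authority) return null
  // Strip userinfo, then handle bracketed IPv6 literals before splitting the port.
  const hostPort = authority.includes('@') ? authority.slice(authority.lastIndexOf('@') + 1) : authority
  if (hostPort.startsWith('[')) {
    const end = hostPort.indexOf(']')
    if (end > 0) {
      const remain = hostPort.slice(end + 1)
      const port = remain.startsWith(':') ? parseInt(remain.slice(1)) : null
      return { proto, host: hostPort.slice(0, end + 1), port }
    }
  }
  const pm = hostPort.match(/^(.*):(\d+)$/)
  return { proto, host: pm ? pm[1] : hostPort, port: pm ? parseInt(pm[2]) : null }
}

function buildUrlSketch(v: ParsedUrl): string | null {
  if (!v.host) return null
  // Omit the port for port-less protocols and when none was given explicitly.
  if (DEFAULT_PORTS[v.proto] === 0 || v.port === null) return `${v.proto}://${v.host}`
  return `${v.proto}://${v.host}:${v.port}`
}
```

A `null` port survives the round-trip, so a URL entered without a port is rebuilt without one rather than having a default appended.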
@@ -0,0 +1,38 @@
<script setup lang="ts">
import { Button } from 'primevue'
import UrlInput from './UrlInput.vue'
const props = defineProps<{
protos: { [proto: string]: number }
addLabel: string
placeholder?: string
defaultUrl?: string
}>()
const list = defineModel<string[]>({ required: true })
const addUrl = () => {
list.value.push(props.defaultUrl || 'tcp://0.0.0.0:11010')
}
const removeUrl = (index: number) => {
list.value.splice(index, 1)
}
</script>
<template>
<div class="flex flex-col gap-y-2 w-full">
<div v-for="(_, index) in list" :key="index" class="flex gap-2 items-center w-full">
<UrlInput v-model="list[index]" :protos="protos" :placeholder="placeholder">
<template #actions>
<Button icon="pi pi-trash" severity="danger" text rounded @click="removeUrl(index)" />
</template>
</UrlInput>
</div>
<div class="flex justify-center items-center w-full h-10 border-2 border-dashed border-surface-300 dark:border-surface-600 rounded-lg cursor-pointer hover:border-primary hover:bg-surface-50 dark:hover:bg-surface-800 transition-colors duration-200 gap-2 text-surface-500 dark:text-surface-400"
@click="addUrl">
<i class="pi pi-plus text-sm"></i>
<span class="text-sm font-medium">{{ addLabel }}</span>
</div>
</div>
</template>
@@ -0,0 +1,218 @@
<script setup lang="ts">
import { Button, Column, DataTable, Divider, InputText, Select, SelectButton, ToggleButton } from 'primevue'
import { ref, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import { AclAction, AclChain, AclChainType, AclProtocol, AclRule } from '../../types/network'
import AclRuleDialog from './AclRuleDialog.vue'
const props = defineProps<{
groupNames?: string[]
}>()
const chain = defineModel<AclChain>({ required: true })
const { t } = useI18n()
watch(() => chain.value.rules, (newRules) => {
if (!newRules) return
const isSorted = newRules.every((rule, i) => i === 0 || (rule.priority || 0) <= (newRules[i - 1].priority || 0))
if (!isSorted) {
chain.value.rules.sort((a, b) => (b.priority || 0) - (a.priority || 0))
}
}, { deep: true, immediate: true })
const actionOptions = [
{ label: () => t('acl.allow'), value: AclAction.Allow },
{ label: () => t('acl.drop'), value: AclAction.Drop },
]
const chainTypeOptions = [
{ label: () => t('acl.inbound'), value: AclChainType.Inbound },
{ label: () => t('acl.outbound'), value: AclChainType.Outbound },
{ label: () => t('acl.forward'), value: AclChainType.Forward },
]
const editingRule = ref<AclRule | null>(null)
const editingRuleIndex = ref(-1)
const showRuleDialog = ref(false)
function getProtocolLabel(proto: AclProtocol) {
switch (proto) {
case AclProtocol.Any: return t('acl.any')
case AclProtocol.TCP: return 'TCP'
case AclProtocol.UDP: return 'UDP'
case AclProtocol.ICMP: return 'ICMP'
case AclProtocol.ICMPv6: return 'ICMPv6'
default: return t('event.Unknown')
}
}
function getActionLabel(action: AclAction) {
switch (action) {
case AclAction.Allow: return t('acl.allow')
case AclAction.Drop: return t('acl.drop')
default: return t('event.Unknown')
}
}
function addRule() {
editingRuleIndex.value = -1
editingRule.value = {
name: '',
description: '',
priority: chain.value.rules.length,
enabled: true,
protocol: AclProtocol.Any,
ports: [],
source_ips: [],
destination_ips: [],
source_ports: [],
action: AclAction.Allow,
rate_limit: 0,
burst_limit: 0,
stateful: false,
source_groups: [],
destination_groups: [],
}
showRuleDialog.value = true
}
function editRule(index: number) {
editingRuleIndex.value = index
editingRule.value = JSON.parse(JSON.stringify(chain.value.rules[index]))
showRuleDialog.value = true
}
function deleteRule(index: number) {
chain.value.rules.splice(index, 1)
}
function saveRule(rule: AclRule) {
if (editingRuleIndex.value === -1) {
chain.value.rules.push(rule)
} else {
chain.value.rules[editingRuleIndex.value] = rule
}
chain.value.rules.sort((a, b) => (b.priority || 0) - (a.priority || 0))
}
function onRowReorder(event: any) {
chain.value.rules = event.value
// Update priorities based on new order (higher priority at top)
chain.value.rules.forEach((rule, index) => {
rule.priority = chain.value.rules.length - index - 1
})
}
</script>
<template>
<div class="flex flex-col gap-6">
<!-- Chain Metadata Section -->
<div
class="grid grid-cols-1 md:grid-cols-2 gap-4 p-4 bg-gray-50 rounded-lg border border-gray-200 dark:bg-gray-900 dark:border-gray-700">
<div class="flex flex-col gap-2">
<label class="font-bold text-sm">{{ t('acl.chain.name') }}</label>
<InputText v-model="chain.name" size="small" />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold text-sm">{{ t('acl.rule.description') }}</label>
<InputText v-model="chain.description" size="small" />
</div>
<div class="flex items-center gap-6 col-span-full border-t pt-2 mt-2 dark:border-gray-700">
<div class="flex items-center gap-2">
<label class="font-bold text-sm">{{ t('acl.rule.enabled') }}</label>
<ToggleButton v-model="chain.enabled" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('web.common.enable')" :off-label="t('web.common.disable')" class="w-24" />
</div>
<div class="flex items-center gap-2">
<label class="font-bold text-sm">{{ t('acl.chain.type') }}</label>
<Select v-model="chain.chain_type" :options="chainTypeOptions" :option-label="opt => opt.label()"
option-value="value" size="small" class="w-40" />
</div>
<div class="flex items-center gap-2 ml-auto">
<label class="font-bold text-sm">{{ t('acl.default_action') }}</label>
<SelectButton v-model="chain.default_action" :options="actionOptions" :option-label="opt => opt.label()"
option-value="value" :allow-empty="false" />
</div>
</div>
</div>
<div class="flex flex-row items-center gap-4 justify-between">
<h4 class="text-md font-bold">{{ t('acl.rules') }}</h4>
<Button icon="pi pi-plus" :label="t('acl.add_rule')" severity="success" size="small" @click="addRule" />
</div>
<DataTable :value="chain.rules" @row-reorder="onRowReorder" responsiveLayout="scroll">
<Column rowReorder headerStyle="width: 3rem" />
<Column field="enabled" :header="t('acl.rule.enabled')">
<template #body="{ data }">
<i class="pi" :class="data.enabled ? 'pi-check-circle text-green-500' : 'pi-times-circle text-red-500'"></i>
</template>
</Column>
<Column field="name" :header="t('acl.rule.name')" />
<Column :header="t('acl.match')">
<template #body="{ data }">
<div class="flex flex-col gap-2 py-1">
<div class="flex items-center gap-2">
<span
class="px-2 py-0.5 bg-blue-100 text-blue-700 dark:bg-blue-900/30 dark:text-blue-400 rounded-md text-[10px] font-bold uppercase tracking-wider">
{{ getProtocolLabel(data.protocol) }}
</span>
</div>
<div class="flex flex-col sm:flex-row sm:items-center gap-1 sm:gap-3">
<div class="flex items-center gap-1.5 min-w-0">
<span class="text-[10px] font-bold text-gray-400 uppercase w-7">Src</span>
<div class="flex flex-wrap gap-1 items-center overflow-hidden">
<span v-for="ip in data.source_ips" :key="ip"
class="font-mono text-xs bg-surface-100 dark:bg-surface-800 px-1.5 py-0.5 rounded">{{ ip }}</span>
<span v-for="grp in data.source_groups" :key="grp"
class="text-xs font-bold text-purple-600 dark:text-purple-400">@{{ grp }}</span>
<span v-if="data.source_ports.length" class="text-xs text-blue-600 dark:text-blue-400 font-mono">:{{
data.source_ports.join(',') }}</span>
<span v-if="!data.source_ips.length && !data.source_groups.length" class="text-gray-400">*</span>
</div>
</div>
<i class="pi pi-arrow-right hidden sm:block text-gray-300 text-xs"></i>
<Divider layout="horizontal" class="sm:hidden my-1" />
<div class="flex items-center gap-1.5 min-w-0">
<span class="text-[10px] font-bold text-gray-400 uppercase w-7">Dst</span>
<div class="flex flex-wrap gap-1 items-center overflow-hidden">
<span v-for="ip in data.destination_ips" :key="ip"
class="font-mono text-xs bg-surface-100 dark:bg-surface-800 px-1.5 py-0.5 rounded">{{ ip }}</span>
<span v-for="grp in data.destination_groups" :key="grp"
class="text-xs font-bold text-purple-600 dark:text-purple-400">@{{ grp }}</span>
<span v-if="data.ports.length" class="text-xs text-blue-600 dark:text-blue-400 font-mono">:{{
data.ports.join(',') }}</span>
<span v-if="!data.destination_ips.length && !data.destination_groups.length"
class="text-gray-400">*</span>
</div>
</div>
</div>
</div>
</template>
</Column>
<Column field="action" :header="t('acl.rule.action')">
<template #body="{ data }">
<span :class="data.action === AclAction.Allow ? 'text-green-600' : 'text-red-600 font-bold'">
{{ getActionLabel(data.action) }}
</span>
</template>
</Column>
<Column :header="t('web.common.edit')">
<template #body="{ index }">
<div class="flex gap-2">
<Button icon="pi pi-pencil" text rounded @click="editRule(index)" />
<Button icon="pi pi-trash" severity="danger" text rounded @click="deleteRule(index)" />
</div>
</template>
</Column>
</DataTable>
<AclRuleDialog v-if="showRuleDialog && editingRule" v-model:visible="showRuleDialog" v-model:rule="editingRule"
:group-names="props.groupNames" @save="saveRule" />
</div>
</template>
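The chain editor above maintains two invariants: rules stay sorted by descending `priority`, and a drag-reorder rewrites priorities from row order, top row highest. A minimal sketch of both, using a pared-down `RuleLike` shape rather than the full `AclRule`:

```typescript
interface RuleLike { name: string; priority: number }

// Returns a new array sorted by descending priority, as the watch enforces.
function sortByPriority(rules: RuleLike[]): RuleLike[] {
  return [...rules].sort((a, b) => (b.priority || 0) - (a.priority || 0))
}

// After a row reorder, the top row gets priority length - 1, the bottom row 0.
function renumberAfterReorder(rules: RuleLike[]): RuleLike[] {
  rules.forEach((rule, index) => { rule.priority = rules.length - index - 1 })
  return rules
}
```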
@@ -0,0 +1,115 @@
<script setup lang="ts">
import { Button, Column, DataTable, Dialog, InputText, MultiSelect, Password } from 'primevue';
import { ref } from 'vue';
import { useI18n } from 'vue-i18n';
import { GroupIdentity, GroupInfo } from '../../types/network';
const props = defineProps<{
groupNames?: string[]
}>()
const group = defineModel<GroupInfo>({ required: true })
const emit = defineEmits(['rename-group'])
const { t } = useI18n()
const editingGroup = ref<GroupIdentity | null>(null)
const editingGroupIndex = ref(-1)
const showGroupDialog = ref(false)
const oldGroupName = ref('')
function addGroup() {
editingGroupIndex.value = -1
editingGroup.value = {
group_name: '',
group_secret: '',
}
oldGroupName.value = ''
showGroupDialog.value = true
}
function editGroup(index: number) {
editingGroupIndex.value = index
editingGroup.value = JSON.parse(JSON.stringify(group.value.declares[index]))
oldGroupName.value = editingGroup.value?.group_name || ''
showGroupDialog.value = true
}
function deleteGroup(index: number) {
group.value.declares.splice(index, 1)
}
function saveGroup() {
if (!editingGroup.value) return
const newName = editingGroup.value.group_name
if (editingGroupIndex.value === -1) {
group.value.declares.push(editingGroup.value)
} else {
if (oldGroupName.value && oldGroupName.value !== newName) {
// Sync in members
group.value.members = group.value.members.map(m => m === oldGroupName.value ? newName : m)
// Notify parent to sync in rules
emit('rename-group', { oldName: oldGroupName.value, newName })
}
group.value.declares[editingGroupIndex.value] = editingGroup.value
}
showGroupDialog.value = false
}
</script>
<template>
<div class="flex flex-col gap-6">
<div class="flex flex-col gap-2">
<div class="flex justify-between items-center">
<div class="flex flex-col">
<label class="font-bold text-lg">{{ t('acl.group.declares') }}</label>
<small class="text-gray-500">{{ t('acl.group.help') }}</small>
</div>
<Button icon="pi pi-plus" :label="t('web.common.add')" severity="success" @click="addGroup" />
</div>
<DataTable :value="group.declares" responsiveLayout="scroll">
<Column field="group_name" :header="t('acl.group.name')" />
<Column field="group_secret" :header="t('acl.group.secret')">
<template #body="{ data }">
<Password v-model="data.group_secret" :feedback="false" toggleMask readonly plain class="w-full" />
</template>
</Column>
<Column :header="t('web.common.edit')" headerStyle="width: 8rem">
<template #body="{ index }">
<div class="flex gap-2">
<Button icon="pi pi-pencil" text rounded @click="editGroup(index)" />
<Button icon="pi pi-trash" severity="danger" text rounded @click="deleteGroup(index)" />
</div>
</template>
</Column>
</DataTable>
</div>
<div class="flex flex-col gap-2">
<label class="font-bold text-lg">{{ t('acl.group.members') }}</label>
<MultiSelect v-model="group.members" :options="props.groupNames" multiple fluid filter
:placeholder="t('acl.group.members')" />
</div>
<!-- Group Identity Dialog -->
<Dialog v-model:visible="showGroupDialog" modal :header="t('acl.groups')" :style="{ width: '400px' }">
<div v-if="editingGroup" class="flex flex-col gap-4 pt-2">
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.group.name') }}</label>
<InputText v-model="editingGroup.group_name" fluid />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.group.secret') }}</label>
<Password v-model="editingGroup.group_secret" :feedback="false" toggleMask fluid />
</div>
</div>
<template #footer>
<Button :label="t('web.common.cancel')" icon="pi pi-times" @click="showGroupDialog = false" text />
<Button :label="t('web.common.save')" icon="pi pi-save" @click="saveGroup" />
</template>
</Dialog>
</div>
</template>
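Renaming a declared group has to be propagated, or rules referencing the old name silently stop matching: `AclGroupEditor` rewrites `members` itself and emits `rename-group` so the parent can rewrite every rule's group lists. A combined sketch of that propagation (a hypothetical free function; the real code splits this across the two components):

```typescript
interface RuleGroups { source_groups: string[]; destination_groups: string[] }

// Rewrites oldName -> newName in every rule's group lists and returns the
// updated members list; rules are mutated in place, as in the components.
function renameGroup(members: string[], rules: RuleGroups[], oldName: string, newName: string): string[] {
  for (const rule of rules) {
    rule.source_groups = rule.source_groups.map(g => g === oldName ? newName : g)
    rule.destination_groups = rule.destination_groups.map(g => g === oldName ? newName : g)
  }
  return members.map(m => m === oldName ? newName : m)
}
```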
@@ -0,0 +1,150 @@
<script setup lang="ts">
import { Button, Menu, Tab, TabList, TabPanel, TabPanels, Tabs } from 'primevue'
import { computed, ref } from 'vue'
import { useI18n } from 'vue-i18n'
import { Acl, AclAction, AclChainType } from '../../types/network'
import AclChainEditor from './AclChainEditor.vue'
import AclGroupEditor from './AclGroupEditor.vue'
const acl = defineModel<Acl>({ required: true })
const { t } = useI18n()
const activeTab = ref(0)
const menu = ref()
const addMenuModel = ref([
{ label: () => t('acl.inbound'), command: () => addChain(AclChainType.Inbound) },
{ label: () => t('acl.outbound'), command: () => addChain(AclChainType.Outbound) },
{ label: () => t('acl.forward'), command: () => addChain(AclChainType.Forward) },
])
function addChain(type: AclChainType) {
if (!acl.value.acl_v1) {
acl.value.acl_v1 = { chains: [], group: { declares: [], members: [] } }
}
let defaultName = ''
switch (type) {
  case AclChainType.Inbound: defaultName = 'Inbound'; break
  case AclChainType.Outbound: defaultName = 'Outbound'; break
  case AclChainType.Forward: defaultName = 'Forward'; break
}
acl.value.acl_v1.chains.push({
name: defaultName,
chain_type: type,
description: '',
enabled: true,
rules: [],
default_action: AclAction.Allow
})
activeTab.value = acl.value.acl_v1.chains.length - 1
}
function removeChain(index: number) {
  if (confirm(t('acl.delete_chain_confirm'))) {
    acl.value.acl_v1?.chains.splice(index, 1)
    // Clamp the active tab to the last remaining chain (or 0 when none are left);
    // without the `- 1` the index would point one past the chain tabs.
    if (activeTab.value >= (acl.value.acl_v1?.chains.length || 0)) {
      activeTab.value = Math.max(0, (acl.value.acl_v1?.chains.length || 0) - 1)
    }
  }
}
function handleRenameGroup({ oldName, newName }: { oldName: string, newName: string }) {
if (!acl.value.acl_v1) return
acl.value.acl_v1.chains.forEach(chain => {
chain.rules.forEach(rule => {
rule.source_groups = rule.source_groups.map(g => g === oldName ? newName : g)
rule.destination_groups = rule.destination_groups.map(g => g === oldName ? newName : g)
})
})
}
const groupNames = computed(() => {
return acl.value.acl_v1?.group?.declares.map(g => g.group_name) || []
})
const tabs = computed(() => {
const chains = acl.value.acl_v1?.chains || []
const result: { type: string, label: string, index: number }[] = []
if (chains.length === 0) {
result.push({ type: 'empty', label: t('acl.chains'), index: 0 })
}
else {
chains.forEach((c, index) => {
result.push({
type: 'chain',
label: c.name || `Chain ${index}`,
index
})
})
}
result.push({ type: 'groups', label: t('acl.groups'), index: result.length })
return result
})
</script>
<template>
<div class="flex flex-col gap-4">
<Tabs v-model:value="activeTab">
<div class="flex items-center border-b border-surface-200 dark:border-surface-700">
<TabList class="flex-grow min-w-0 overflow-x-auto" style="border-bottom: none;">
<Tab v-for="tab in tabs" :key="tab.type + tab.index" :value="tab.index">
<div class="flex items-center gap-2 whitespace-nowrap">
{{ tab.label }}
<Button v-if="tab.type === 'chain'" icon="pi pi-times" severity="danger" text rounded size="small"
class="w-6 h-6 p-0" @click.stop="removeChain(tab.index)" />
</div>
</Tab>
</TabList>
<div
class="flex-shrink-0 flex items-center px-2 bg-white dark:bg-gray-900 border-l border-surface-100 dark:border-surface-800">
<Button icon="pi pi-plus" text rounded size="small" class="w-8 h-8 p-0"
@click="(event) => menu.toggle(event)" />
<Menu ref="menu" :model="addMenuModel" :popup="true" />
</div>
</div>
<TabPanels>
<TabPanel v-for="tab in tabs" :key="'panel' + tab.type + tab.index" :value="tab.index">
<!-- Empty State within TabPanel -->
<div v-if="tab.type === 'empty'"
class="py-8 flex flex-col items-center justify-center border-2 border-dashed border-surface-200 rounded-lg bg-surface-50 dark:bg-surface-900 dark:border-surface-700">
<i class="pi pi-shield text-5xl mb-4 text-primary" />
<div class="text-xl font-bold mb-2">{{ t('acl.chains') }}</div>
<p class="text-surface-500 mb-8 text-center max-w-sm px-4">{{ t('acl.help') }}</p>
<div class="flex flex-wrap gap-3 justify-center">
<Button :label="t('acl.inbound')" icon="pi pi-arrow-down-left" @click="addChain(AclChainType.Inbound)" />
<Button :label="t('acl.outbound')" icon="pi pi-arrow-up-right" @click="addChain(AclChainType.Outbound)" />
<Button :label="t('acl.forward')" icon="pi pi-directions" @click="addChain(AclChainType.Forward)" />
</div>
</div>
<!-- Rule Chains -->
<div v-if="tab.type === 'chain' && acl.acl_v1 && acl.acl_v1.chains[tab.index]" class="py-4">
<AclChainEditor v-model="acl.acl_v1.chains[tab.index]" :group-names="groupNames" />
</div>
<!-- Group Management -->
<div v-if="tab.type === 'groups'" class="py-4">
<template v-if="acl.acl_v1">
<AclGroupEditor v-if="acl.acl_v1.group" v-model="acl.acl_v1.group" :group-names="groupNames"
@rename-group="handleRenameGroup" />
<div v-else class="flex justify-center p-4">
<Button :label="t('web.common.add') + ' ' + t('acl.groups')"
@click="acl.acl_v1.group = { declares: [], members: [] }" />
</div>
</template>
<div v-else class="flex justify-center p-4">
<Button :label="t('acl.enabled')"
@click="acl.acl_v1 = { chains: [], group: { declares: [], members: [] } }" />
</div>
</div>
</TabPanel>
</TabPanels>
</Tabs>
</div>
</template>
