Compare commits


107 Commits

Author SHA1 Message Date
fanyang 349dbf7d8d fix(web): avoid false default-password reminders
Only flag seeded accounts that still use the shipped password hash,
and keep auth status and password change responses stable during
review follow-up.
2026-04-05 17:54:12 +08:00
fanyang 7707b1cf5e fix(web): require password confirmation in auth forms
Require users to enter new passwords twice in the registration
and password change forms so typos are caught before credentials
are stored.
2026-04-05 17:31:22 +08:00
fanyang 2490bb9808 fix(web): enforce password strength in auth forms
Apply the same password policy to registration and password
changes so operators cannot replace default credentials with
another weak password and users see consistent guidance.
2026-04-05 17:31:22 +08:00
fanyang 3f3e36e653 feat(web): warn on default-password accounts
Track built-in admin and user accounts that still use their
seeded password so the web UI can prompt operators to
rotate credentials after deployment.

- Persist must-change-password state for seeded accounts.
- Clear the reminder after password changes and validate
  empty-password updates.
- Keep the migration and auth API behavior explicit.
2026-04-05 17:31:22 +08:00
fanyang 2cf2b0fcac feat(cli): implement connector add/remove, drop peer stubs (#2058)
Implement the previously stubbed connector add/remove CLI commands
using PatchConfig RPC with InstanceConfigPatch.connectors, and
remove the peer add/remove stubs that had incorrect semantics.
2026-04-05 13:56:17 +08:00
dependabot[bot] aa0cca3bb6 build(deps): bump quinn-proto in /easytier-contrib/easytier-ohrs (#2059)
Bumps [quinn-proto](https://github.com/quinn-rs/quinn) from 0.11.13 to 0.11.14.
- [Release notes](https://github.com/quinn-rs/quinn/releases)
- [Commits](https://github.com/quinn-rs/quinn/compare/quinn-proto-0.11.13...quinn-proto-0.11.14)

---
updated-dependencies:
- dependency-name: quinn-proto
  dependency-version: 0.11.14
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-05 13:16:33 +08:00
KKRainbow fb59f01058 fix: reconcile webhook-managed configs and make disable_p2p more intelligent (#2057)
* reconcile infra configs on webhook validate
* make disable_p2p more intelligent
* fix stats
2026-04-04 23:41:57 +08:00
Luna Yao e91a0da70a refactor: listener/connector protocol abstraction (#2026)
* fix listener protocol detection
* replace IpProtocol with IpNextHeaderProtocol
* use an enum to gather all listener schemes
* rename ListenerScheme to TunnelScheme; replace IpNextHeaderProtocols with socket2::Protocol
* move TunnelScheme to tunnel
* add IpScheme, simplify connector creation
* format; fix some typos; remove check_scheme_...;
* remove PROTO_PORT_OFFSET
* rename WSTunnel.. -> WsTunnel.., DNSTunnel.. -> DnsTunnel..
2026-04-04 10:55:58 +08:00
Luna Yao 9cc617ae4c ci: build rpm package (#2044)
* add rpm to ci
* rename build_filter to build-filter
* use prepare-pnpm action
2026-04-04 10:32:08 +08:00
韩嘉乐 e4b0f1f1bb Rename libeasytier_ohrs.so to libeasytier_release.so when build release package (#2056)
Rename shared library file for release.
2026-04-04 10:29:37 +08:00
Luna Yao 443c3ca0b3 fix: append address of reverse proxy to remote_addr (#2034)
* append address of reverse proxy to remote_addr
* validate proxy address in test
2026-03-30 16:48:23 +08:00
Luna Yao 55a0e5952c chore: use cfg_aliases for mobile (#2033) 2026-03-30 16:38:39 +08:00
KKRainbow 1dff388717 bump version to v2.6.0 (#2039) 2026-03-30 15:50:07 +08:00
Luna Yao 61c741f887 add BoxExt trait (#2036) 2026-03-30 13:25:53 +08:00
ParkGarden 01dd9a05c3 fix: rework the logic of the Magisk module scripts easytier_core.sh, action.sh, and uninstall.sh; improve argument parsing and process management; adjust wording (#1964) 2026-03-30 13:18:42 +08:00
KKRainbow 8c19a2293c fix(windows): avoid pnet interface enumeration panic (#2031) 2026-03-29 23:16:44 +08:00
KKRainbow a1bec48dc9 fix android vpn permission grant (#2023)
* fix android vpn permission grant
* fix url input behaviour
2026-03-29 23:16:32 +08:00
KKRainbow 7e289865b2 fix(faketcp): avoid pnet interface lookup on windows (#2029) 2026-03-29 19:26:29 +08:00
fanyang 742c7edd57 fix: use default connection loss rate for peer stats (#2030) 2026-03-29 19:25:25 +08:00
Luna Yao b71a2889ef suppress clippy warnings when no feature flags are enabled (#2028) 2026-03-29 11:02:23 +08:00
KKRainbow bcd75d6ce3 Add instance recv limiter in peer conn (#2027) 2026-03-29 10:28:02 +08:00
Luna Yao d4c1b0e867 fix: read X-Forwarded-For from HTTP header of WS/WSS (#2019) 2026-03-28 22:20:46 +08:00
KKRainbow b037ea9c3f Relax private mode foreign network secret checks (#2022) 2026-03-28 22:19:23 +08:00
Luna Yao b5f475cd4c filter overlapped proxy cidr (#2024) 2026-03-28 09:40:05 +08:00
Luna Yao eaa4d2c7b8 test: use taiki-e/install-action for cargo-hack (#2020) 2026-03-28 00:07:59 +08:00
Luna Yao e160d9b048 ci: remove aes-gcm from check (#1925) 2026-03-27 22:48:22 +08:00
KKRainbow 0aeea39fbe refactor(gui): collapse public server and standalone into initial peer list (#2017)
The GUI exposed three networking modes: public server, manual, and standalone. In practice EasyTier does not have a server/client role distinction here. Those options only mapped to different peer bootstrap shapes, which made the product model misleading and pushed users toward a non-existent "public server" concept.

This change rewrites the shared configuration UX around initial nodes. Users now add or remove one or more initial node URLs directly, and the UI explains that EasyTier networking works like plugging in a cable: once a node connects to one or more existing nodes, it can join the mesh. Initial nodes may be self-hosted or shared by others.

To preserve compatibility, the frontend keeps the legacy fields and adds normalization helpers in the shared NetworkConfig layer. Old configs are read as initial_node_urls, while saves, runs, validation, config generation, and persisted GUI config sync still denormalize back into the current backend shape: zero initial nodes -> Standalone, one -> PublicServer, many -> Manual. This avoids any proto or backend API change while making old saved configs and imported TOML files load cleanly in the new UI.

Code changes:

- add initial_node_urls plus normalize/denormalize helpers in the shared frontend NetworkConfig model

- remove the mode switch and public-server/manual specific inputs from the shared Config component and replace them with a single initial-node list plus explanatory copy

- update Chinese and English locale strings for the new terminology

- normalize configs received from GUI/web backends and denormalize them before outbound API calls

- normalize GUI save-config events before storing them in localStorage so legacy payloads remain editable under the new model
2026-03-27 11:37:09 +08:00
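The zero/one/many mapping described in that commit can be sketched as follows. This is an illustrative Rust rendering of the normalize/denormalize helpers (the real helpers live in the frontend's shared TypeScript NetworkConfig layer; the enum and function names here are assumptions, not the actual code):

```rust
// Hypothetical sketch of the legacy-mode <-> initial-node-list mapping.
// Names are illustrative; the real code is frontend TypeScript.
#[derive(Debug, PartialEq)]
enum NetworkingMethod {
    Standalone,
    PublicServer(String),
    Manual(Vec<String>),
}

// Legacy backend shape -> unified initial node URL list.
fn normalize(method: &NetworkingMethod) -> Vec<String> {
    match method {
        NetworkingMethod::Standalone => vec![],
        NetworkingMethod::PublicServer(url) => vec![url.clone()],
        NetworkingMethod::Manual(urls) => urls.clone(),
    }
}

// Unified list -> legacy backend shape: zero nodes -> Standalone,
// one -> PublicServer, many -> Manual (as the commit message states).
fn denormalize(urls: &[String]) -> NetworkingMethod {
    match urls {
        [] => NetworkingMethod::Standalone,
        [one] => NetworkingMethod::PublicServer(one.clone()),
        many => NetworkingMethod::Manual(many.to_vec()),
    }
}
```

Because `denormalize(normalize(m))` reproduces the legacy shape, old saved configs and outbound API calls keep working while the UI only ever shows the unified list.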
KKRainbow e000636d83 feat(stats): add by-instance traffic metrics (#2011) 2026-03-26 13:46:33 +08:00
Luna Yao 8e4dc508bb test: improve test_txt_public_stun_server with timeout and retry mechanism (#2014) 2026-03-26 09:32:07 +08:00
Luna Yao e2684a93de refactor: use strum on EncryptionAlgorithm, use Xor as default when AesGcm not available (#1923) 2026-03-25 18:42:34 +08:00
KKRainbow 1d89ddbb16 Add lazy P2P demand tracking and need_p2p override (#2003)
- add lazy_p2p so nodes only start background P2P for peers that actually have recent business traffic
- add need_p2p so specific peers can still request eager background P2P even when other nodes enable lazy mode
- cover the new behavior with focused connector/peer-manager tests plus three-node integration tests that verify relay-to-direct route transition
2026-03-23 09:38:57 +08:00
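The lazy_p2p/need_p2p interaction above boils down to a small decision function. This sketch is an assumption about the shape of that logic (field names and the traffic window are invented for illustration), not the actual connector code:

```rust
// Hypothetical decision: should we start background P2P punching for a peer?
struct PeerState {
    lazy_p2p_enabled: bool,          // lazy_p2p option on this node
    need_p2p_override: bool,         // peer explicitly requests eager P2P
    secs_since_traffic: Option<u64>, // None = no business traffic seen yet
}

// Assumed "recent traffic" window; the real threshold is not shown here.
const RECENT_TRAFFIC_WINDOW_SECS: u64 = 30;

fn should_start_background_p2p(p: &PeerState) -> bool {
    if !p.lazy_p2p_enabled || p.need_p2p_override {
        // Eager mode, or the peer demands P2P even under lazy mode.
        return true;
    }
    // Lazy mode: only punch for peers with recent business traffic.
    matches!(p.secs_since_traffic, Some(s) if s <= RECENT_TRAFFIC_WINDOW_SECS)
}
```

Relay still carries traffic in the meantime, which is what the three-node integration tests (relay-to-direct route transition) verify.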
KKRainbow 2bfdd44759 multi_fix: harden peer/session handling, tighten foreign-network trust, and improve web client metadata (#1999)
* machine-id should be scoped under same user-id
* feat: report device os metadata to console
* fix sync root key cause packet loss
* fix tun packet not invalid
* fix faketcp cause lat jitter
* fix some packet not decrypt
* fix peer info patch, improve performance of update self info
* fix foreign credential identity mismatch handling
2026-03-21 21:06:07 +08:00
Luna Yao 77966916c4 cargo: add used features for windows-sys (#1924) 2026-03-17 14:10:50 +08:00
TsXor 26b7455c1e ignores eol difference for auto-generated files (#1997) 2026-03-16 23:40:38 +08:00
KKRainbow 8922e7b991 fix: foreign credential handling and trusted key visibility (#1993)
* fix foreign credential handling
* allow list foreign network trusted keys
* fix(gui): delete removed config-server networks
* fix(web): reset managed instances on first sync
2026-03-16 22:19:31 +08:00
KKRainbow e6ac31fb20 feat(web): add webhook-managed machine access and multi-instance CLI support (#1989)
* feat: add webhook-managed access and multi-instance CLI support
* fix(foreign): verify credential of foreign credential peer
2026-03-15 12:08:50 +08:00
KKRainbow c8f3c5d6aa feat(credential): support custom credential ID generation (#1984)
Introduces support for custom credential ID generation, allowing users to specify their own credential IDs instead of relying solely on auto-generated UUIDs.
2026-03-12 00:48:24 +08:00
KKRainbow 330659e449 feat(web): full-power RPC access + typed JSON proxy endpoint (#1983)
- extend web controller bindings to cover full RPC service set
- update rpc_service API wiring and session/controller integration
- generate trait-level json_call_method in rpc codegen
- route restful proxy-rpc requests via scoped typed clients
- add json-call regression tests and required Sync bound fixes
2026-03-11 20:32:37 +08:00
Maxwell 80043df292 script: introduce EasyTier powershell installer (#1975) 2026-03-11 11:57:03 +08:00
KKRainbow ecd1ea6f8c feat(web): implement secure core-web tunnel with Noise protocol (#1976)
Implement end-to-end encryption for core-web connections using the
Noise protocol framework with the following changes:

Client-side (easytier/src/web_client/):
- Add security.rs module with Noise handshake implementation
- Add upgrade_client_tunnel() for client-side handshake
- Add Noise frame encryption/decryption via TunnelFilter
- Integrate GetFeature RPC for capability negotiation
- Support secure_mode option to enforce encrypted connections
- Handle graceful fallback for backward compatibility

Server-side (easytier-web/):
- Accept Noise handshake in client_manager
- Expose encryption support via GetFeature RPC

The implementation uses Noise_NN_25519_ChaChaPoly_SHA256 pattern for
encryption without authentication. Provides backward compatibility
with automatic fallback to plaintext connections.
2026-03-10 08:48:08 +08:00
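After the Noise handshake above completes, each tunnel message becomes a ciphertext frame. The sketch below shows only a plausible length-prefixed framing layer such as a TunnelFilter might apply; the cipher is a stand-in closure, since the real code encrypts with a Noise transport state (Noise_NN_25519_ChaChaPoly_SHA256 via a Noise library), and the 2-byte prefix is an assumed layout, not EasyTier's actual wire format:

```rust
// Framing sketch only: cipher/decipher stand in for the real Noise
// transport-state encrypt/decrypt. Frame = 2-byte big-endian length + body.
fn encode_frame(cipher: impl Fn(&[u8]) -> Vec<u8>, plaintext: &[u8]) -> Vec<u8> {
    let ct = cipher(plaintext);
    let mut out = Vec::with_capacity(2 + ct.len());
    out.extend_from_slice(&(ct.len() as u16).to_be_bytes()); // length prefix
    out.extend_from_slice(&ct);
    out
}

fn decode_frame(decipher: impl Fn(&[u8]) -> Vec<u8>, frame: &[u8]) -> Option<Vec<u8>> {
    let len = u16::from_be_bytes([*frame.first()?, *frame.get(1)?]) as usize;
    let ct = frame.get(2..2 + len)?; // bail out on truncated frames
    Some(decipher(ct))
}
```

The NN pattern provides encryption without authenticating either side, which matches the commit's stated trade-off: capability negotiation via GetFeature, with graceful fallback to plaintext for old peers.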
KKRainbow 694b8d349d feat(credential): enforce signed credential distribution across mixed admin/shared topology (#1972) 2026-03-10 08:37:33 +08:00
KKRainbow ef44027f57 feat(credential): improve credential peer routing and visibility (#1971)
- improve credential peer filtering and related route lookup behavior
- expose credential peer information through CLI and API definitions
- add and refine tests for credential routing and peer interactions
2026-03-08 14:06:33 +08:00
KKRainbow f3db348b01 fix: resolve slow exit and reduce test timeouts (#1970)
- Explicitly shutdown tokio runtime on launcher cleanup to fix slow exit
- Add timeout to tunnel connector in tests to prevent hanging
- Reduce test wait durations from 5s to 100ms for faster test execution
- Bump num-bigint-dig from 0.8.4 to 0.8.6
2026-03-08 12:27:42 +08:00
KKRainbow c4eacf4591 feat(credential): implement credential peer auth and trust propagation (#1968)
- add credential manager and RPC/CLI for generate/list/revoke
- support credential-based Noise authentication and revocation handling
- propagate trusted credential metadata through OSPF route sync
- classify direct peers by auth level in session maintenance
- normalize sender credential flag for legacy non-secure compatibility
- add unit/integration tests for credential join, relay and revocation
2026-03-07 22:58:15 +08:00
KKRainbow 59d4475743 feat: relay peer end-to-end encryption via Noise IK handshake (#1960)
Enable encryption for non-direct nodes requiring relay forwarding.
When secure_mode is enabled, peers perform Noise IK handshake to
establish an encrypted PeerSession. Relay packets are encrypted at
the sender and decrypted at the receiver. Intermediate forwarding
nodes cannot read plaintext data.

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: KKRainbow <5665404+KKRainbow@users.noreply.github.com>
2026-03-07 14:47:22 +08:00
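The relay-encryption property described above (intermediate forwarders cannot read plaintext) is end-to-end: only the two endpoints hold the session keys. A minimal way to state the invariant, with the actual Noise IK session replaced by an opaque pair of closures, purely for illustration:

```rust
// Invariant sketch: a relay node only ever sees the sealed bytes.
// seal/open stand in for the real Noise IK PeerSession; the "relay" is a
// plain byte-forwarding function with no access to keys.
fn relay_forward(sealed: Vec<u8>) -> Vec<u8> {
    sealed // an intermediate node can route, but not decrypt
}

fn roundtrip(
    seal: impl Fn(&[u8]) -> Vec<u8>,
    open: impl Fn(&[u8]) -> Vec<u8>,
    msg: &[u8],
) -> Vec<u8> {
    let sealed = seal(msg);
    let forwarded = relay_forward(sealed);
    open(&forwarded)
}
```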
KKRainbow 22b4c4be2c fix: guard macos-ne feature with target_os = "macos" in cfg expressions (#1962)
All 13 occurrences of `any(target_os = "ios", feature = "macos-ne")` are
replaced with `any(target_os = "ios", all(target_os = "macos", feature = "macos-ne"))`.

Previously, enabling `macos-ne` on non-macOS platforms (e.g. `--all-features`
on Linux) would incorrectly compile macOS/mobile-specific code paths, causing
build failures or wrong runtime behavior.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-05 00:06:21 +08:00
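The corrected guard can be seen on a toy function: with the `all(...)` form, enabling the `macos-ne` feature on Linux no longer selects the macOS/iOS code path, because the cfg also requires `target_os = "macos"`. (The function body here is invented for illustration; only the cfg expression is from the commit.)

```rust
// New guard from the commit: feature alone is no longer sufficient.
#[cfg(any(target_os = "ios", all(target_os = "macos", feature = "macos-ne")))]
fn backend_name() -> &'static str {
    "network-extension"
}

// Everywhere else (including Linux with --all-features) this compiles instead.
#[cfg(not(any(target_os = "ios", all(target_os = "macos", feature = "macos-ne"))))]
fn backend_name() -> &'static str {
    "default"
}
```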
Luna Yao 5f31583a84 refactor: use tracing for log output (#1856)
* change all println to tracing
2026-03-04 09:52:23 +08:00
Mg Pig 1d25240d8c refactor(ui): extract URL input components and enhance UI responsiveness (#1819) 2026-03-04 09:49:15 +08:00
fanyang eeb507d6ea fix: register PeerCenterRpc in management API server so CLI peer-center works (#1929)
PeerCenterRpc was only registered in the per-instance peer-to-peer RPC
manager (domain = network_name), but not in the management API server
(domain = ""). The CLI connects to the management API with an empty
domain, causing "Invalid service name: PeerCenterRpc" errors.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-04 09:37:37 +08:00
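The failure mode above comes from per-domain service lookup. A simplified sketch (data shapes assumed; the real RPC manager is more involved) of why registering PeerCenterRpc only under `domain = network_name` breaks a CLI that connects with `domain = ""`:

```rust
use std::collections::HashSet;

// Services are looked up by (domain, service name). The management API
// server uses the empty domain; per-instance services use the network name.
#[derive(Default)]
struct RpcRegistry {
    services: HashSet<(String, String)>,
}

impl RpcRegistry {
    fn register(&mut self, domain: &str, service: &str) {
        self.services.insert((domain.to_string(), service.to_string()));
    }
    fn call(&self, domain: &str, service: &str) -> Result<(), String> {
        if self.services.contains(&(domain.to_string(), service.to_string())) {
            Ok(())
        } else {
            Err(format!("Invalid service name: {service}"))
        }
    }
}
```

The fix is simply to register the service under the management domain as well, so both lookup paths resolve.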
fanyang 9e9916efa5 fix(connector): skip self-connection when peer shares local interface IPs (#1941)
When two EasyTier instances run on the same machine and share the same
network, the direct connector would expand a remote peer's 0.0.0.0
listener into local interface IPs and then attempt to connect to
itself, causing an infinite loop of failed connection attempts.

The existing `peer_id != my_peer_id` guard does not cover this case
because the two instances have different peer IDs despite sharing the
same physical network interfaces.

Fix by adding a self-connection check in `spawn_direct_connect_task`:
before spawning a connect task, compare the candidate (scheme, IP,
port) against the local running listeners. If a local listener matches
on all three dimensions — accounting for 0.0.0.0/:: wildcards by
checking membership in the local interface IP sets — the candidate is
silently dropped with a DEBUG log message.

The fix covers all four code paths:
- IPv4 unspecified (0.0.0.0) expansion loop
- IPv4 specific-address branch
- IPv6 unspecified (::) expansion loop
- IPv6 specific-address branch

The TESTING flag logic is untouched so existing unit tests are
unaffected.

* refactor(connector): replace is_self_connect closure with GlobalCtx::should_deny_proxy (#1954)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2026-03-04 09:36:35 +08:00
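The three-dimension match described above can be sketched with simplified types (the real check lives in `spawn_direct_connect_task` / `GlobalCtx::should_deny_proxy` and works over richer listener structures; this is an assumption-laden illustration):

```rust
use std::collections::HashSet;
use std::net::IpAddr;

// Drop a connect candidate when a local listener matches on scheme, IP and
// port, treating a 0.0.0.0/:: listener as matching any local interface IP.
fn is_self_connect(
    candidate: (&str, IpAddr, u16),
    listeners: &[(&str, IpAddr, u16)],
    local_ips: &HashSet<IpAddr>,
) -> bool {
    let (scheme, ip, port) = candidate;
    listeners.iter().any(|(l_scheme, l_ip, l_port)| {
        *l_scheme == scheme
            && *l_port == port
            && (*l_ip == ip || (l_ip.is_unspecified() && local_ips.contains(&ip)))
    })
}
```

Note the wildcard case: the remote peer advertises `0.0.0.0`, the connector expands it into concrete interface IPs, and without the guard one of those expansions is this very machine.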
hello db6b9e3684 feat: core config server use last path segment as user name (#1931) 2026-03-03 18:24:28 +08:00
Mg Pig ff24332e23 feat(web): add OIDC SSO login support (#1943) 2026-03-03 18:23:31 +08:00
fanyang d4ff0b1767 build(deps): upgrade vite to 5.4.21 in frontend and gui packages (#1950) 2026-03-01 13:47:02 +08:00
Mg Pig 5716f7f16b fix(web): allow configuring listen address for API and web servers (#1919) (#1948) 2026-03-01 01:02:31 +08:00
fanyang e5bd8f9e24 build(deps): upgrade minimatch to 10.2.4 (#1949) 2026-02-28 22:40:47 +08:00
sky96111 b56bcfb4b0 fix: increase websocket peer connection timeout to 20 seconds (#1939)
- Add ws/wss protocols to long timeout list
2026-02-28 18:26:19 +08:00
fanyang fb95b4827c build(deps): bump axios from 1.11.0 to 1.13.6 in frontend packages (#1947)
Addresses security vulnerabilities in axios <1.13.5. Updates the
declared specifier to ^1.13.5 in all three frontend package.json
files and regenerates both npm and pnpm lock files (resolved: 1.13.6).

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 11:17:18 +08:00
fanyang a8f7226195 fix(foreign_network): set avoid_relay_data when relay_data is false (#1935) 2026-02-25 09:30:24 +08:00
dependabot[bot] e6ee485352 build(deps-dev): bump vite from 5.4.10 to 5.4.21 in /easytier-web/frontend-lib (#1922)
* build(deps-dev): bump vite in /easytier-web/frontend-lib

Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.4.10 to 5.4.21.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.21/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.21/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.21
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-23 22:47:29 +08:00
hello 73291a3a1c feat: Update Cargo.toml to add support for tls1.2 when use wss (#1917) 2026-02-20 18:01:21 +08:00
fanyang f737708f45 fix: avoid panic on malformed short tunnel packets (#1904) 2026-02-18 00:04:30 +08:00
fanyang aa24d09aa2 fix: replace stale magic DNS records on IP change (#1906)
Magic DNS updates are full snapshots, so appending routes keeps old IPs and returns duplicate A records. Replace each client's previous routes on update and add a regression test to ensure hostname resolution keeps only the latest IP.
2026-02-16 13:20:11 +08:00
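The snapshot-replace semantics can be shown in miniature. This is not the Magic DNS implementation, just a sketch of the insert-vs-append distinction the fix is about (types and names assumed):

```rust
use std::collections::HashMap;

// Each update is a full snapshot of one client's records, so the store
// replaces that client's previous records rather than extending them.
#[derive(Default)]
struct DnsStore {
    // client id -> (hostname, IP) records from the latest snapshot
    records: HashMap<u32, Vec<(String, String)>>,
}

impl DnsStore {
    fn apply_snapshot(&mut self, client: u32, snapshot: Vec<(String, String)>) {
        // insert() replaces; the old buggy behavior was effectively extend().
        self.records.insert(client, snapshot);
    }
    fn resolve(&self, host: &str) -> Vec<&str> {
        self.records
            .values()
            .flatten()
            .filter(|(h, _)| h == host)
            .map(|(_, ip)| ip.as_str())
            .collect()
    }
}
```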
fanyang fe4e77979d fix: avoid panic for quic peer urls using port 0 (#1905)
Prevent crashes when users input quic://...:0 by rejecting port 0 explicitly and propagating connect setup errors. Add a regression test to ensure invalid QUIC targets fail gracefully.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-14 17:10:29 +08:00
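Rejecting port 0 up front is a one-line guard once the target is parsed. The parsing below is hand-rolled for illustration only (the real connector has its own URL handling); the error strings are invented:

```rust
// Validate a quic:// peer URL, rejecting port 0 instead of letting
// connection setup fail later in a less graceful way.
fn validate_quic_target(url: &str) -> Result<(String, u16), String> {
    let rest = url
        .strip_prefix("quic://")
        .ok_or_else(|| "not a quic url".to_string())?;
    let (host, port_str) = rest
        .rsplit_once(':')
        .ok_or_else(|| "missing port".to_string())?;
    let port: u16 = port_str.parse().map_err(|_| "invalid port".to_string())?;
    if port == 0 {
        // Port 0 means "pick any port" for binding, never a connect target.
        return Err("port 0 is not a valid connect target".to_string());
    }
    Ok((host.to_string(), port))
}
```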
Chenx Dust 7a26640c26 feat: support macOS Network Extension (#1902)
* feat: support macOS Network Extension
* fix: disable macOS NE feature in cargo hack check
2026-02-14 14:54:36 +08:00
Mg Pig 5a777959e3 ui: clarify encryption checkbox description in locales (#1841) 2026-02-13 16:04:26 +08:00
Mg Pig 3512a80597 feat(web): add --disable-registration flag to disable user registration (#1881) 2026-02-13 16:03:11 +08:00
Zkitefly 011770a601 Update http_connector.rs (#1900) 2026-02-13 16:02:32 +08:00
Chenx Dust 6475724d2e fix: toggle_window_visibility with focus check (#1888)
* refactor: better logics for toggle_window_visibility
2026-02-11 16:50:36 +08:00
Mg Pig 85e9029577 feat: add Nix CI workflow and update flake.lock dependencies (#1872) 2026-02-10 18:11:35 +08:00
Luna Yao b6e292cce3 ci: use shared key for build workflow (#1868) 2026-02-04 09:48:55 +08:00
KKRainbow c58140fb47 update rust to 1.93 (#1865) 2026-02-04 09:48:43 +08:00
Luna Yao aebb7facfa drop permit reserved by poll_reserve (#1858) 2026-02-03 11:14:11 +08:00
Chenx Dust 1e2124cb99 fix: force set tun fd when received (#1860) 2026-02-03 11:13:31 +08:00
Chenx Dust e1cbd07d1f feat: separate zstd and faketcp into features (#1861)
* feat: separate faketcp into a feature
* fix: no need to initialize out_len
* feat: separate zstd into a feature
* clippy: remove unnecessary cast, because for unix size_t always equals usize
2026-02-03 11:12:33 +08:00
韩嘉乐 7750e81168 CI(ohos): add a condition to check for the publish code (#1863)
Added a condition to check for the presence of a release code when running the publish step
2026-02-03 11:11:45 +08:00
KKRainbow bf3edbd28f remove src modified flag from pm hdr (#1857) 2026-02-02 16:47:26 +08:00
Luna Yao cd2cf56358 refactor: handle quic proxy internally instead of use external udp port (#1743)
* deprecate quic_listen_port, add disable_relay_quic and enable_relay_foreign_network_quic
* add set_src_modified to TcpProxyForWrappedSrcTrait
* prioritize quic over kcp
2026-02-02 11:53:40 +08:00
KKRainbow 21f4a944a7 fix perf degraded because of impact of is_empty() of dashmap (#1854) 2026-02-01 08:51:18 +08:00
KKRainbow 9617005136 make udp->ring transmit reliable (#1851) 2026-01-31 17:23:45 +08:00
deddey c85d1d41b3 allow set TUN dev name on FreeBSD (#1823)
Also rename stale interfaces from previous runs before creating new ones.
Works around rust-tun reusing existing tun0 instead of configured name.

Tested on FreeBSD 14.1
2026-01-30 23:51:52 +08:00
KKRainbow 9e3c9228bb improve perf of remove_network in foreign net mgr (#1847) 2026-01-30 23:04:31 +08:00
Luna Yao acd7c85ff6 ci: speed up test with matrix (#1830)
* add an action to install pnpm packages
* add an action to prepare build environment
* rewrite test workflow, using composite actions and matrix
2026-01-30 22:21:27 +08:00
KKRainbow 8727221513 call remove_peer instead of remove_network when peer id not match (#1844) 2026-01-30 16:01:52 +08:00
Luna Yao cdedaf3f63 refactor(quic): remove quinn encryption (#1831)
* use quinn-plaintext
* remove server_cert in QUICTunnelListener
* remove some customized transport config
* leave max_concurrent_bidi_streams as default

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-30 10:21:59 +08:00
KKRainbow ffe5644ddc add token bucket limiter on peer conn recv (#1842)
Limit peer connection receive throughput so a single peer cannot flood us with data.
2026-01-29 16:12:26 +08:00
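The core accounting of a token bucket is small enough to sketch directly. The real limiter is asynchronous and attached per peer connection; this version takes elapsed time as a parameter so the refill math is visible and testable (all numbers are illustrative):

```rust
// Minimal token bucket: capacity caps bursts, refill_per_sec caps the
// sustained rate. try_consume refills for the elapsed time, then spends.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    fn try_consume(&mut self, cost: f64, elapsed_secs: f64) -> bool {
        // Refill first, clamped to capacity so idle time can't bank
        // unbounded burst credit.
        self.tokens = (self.tokens + self.refill_per_sec * elapsed_secs).min(self.capacity);
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false // caller drops or defers the recv
        }
    }
}
```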
Chenx Dust ccc684a9ab Fix: Fixed compilation issue after partially removing the feature flag (#1835) 2026-01-28 21:38:34 +08:00
fanyang 977e502150 feat(cli): add column truncation controls (#1838)
- drop low-priority columns when tables exceed terminal width
- truncate optional columns to fit remaining width
- add --no-trunc flag to disable truncation
- compute column widths using unicode display width

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-28 14:50:14 +08:00
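The drop-low-priority-columns pass can be sketched as a greedy fit. Character counts stand in for Unicode display width here (the real code measures display width, per the commit); struct and field names are assumptions:

```rust
// Keep columns in descending priority while they fit the terminal width;
// lower-priority columns are dropped first when space runs out.
struct Column {
    name: &'static str,
    width: usize,
    priority: u8, // higher = kept longer
}

fn fit_columns(mut cols: Vec<Column>, term_width: usize) -> Vec<&'static str> {
    cols.sort_by_key(|c| std::cmp::Reverse(c.priority));
    let mut used = 0;
    let mut kept = Vec::new();
    for c in &cols {
        if used + c.width <= term_width {
            used += c.width;
            kept.push(c.name);
        }
    }
    kept
}
```

A `--no-trunc` style flag simply bypasses this pass and emits every column at full width.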
Mg Pig 518d26b25f feat: add X-Network-Name header to HTTP connector requests (#1839)
This allows HTTP redirect servers to provide network-specific node
lists based on the client's network identity. Updated unit tests
to verify the header is correctly sent.
2026-01-28 14:48:45 +08:00
KKRainbow 101f416268 Introduce secure mode (part 1) (#1808)
Use noise protocol on handshake. Check peer's public key if needed. Also support rekey and replay attack prevention.

E2EE and temporary password will be implemented based on this.
2026-01-25 20:16:51 +08:00
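Replay prevention for nonce-numbered transport messages is commonly done with a sliding bitmap window; the sketch below shows that standard technique (a 64-entry window is an assumption — EasyTier's actual window size and bookkeeping are not shown in this log):

```rust
// Sliding-window anti-replay: track the highest nonce seen plus a 64-bit
// bitmap of recently seen nonces below it.
struct ReplayWindow {
    highest: u64,
    bitmap: u64, // bit i set => nonce (highest - i) already seen
}

impl ReplayWindow {
    fn new() -> Self {
        Self { highest: 0, bitmap: 0 }
    }

    /// Returns true if the nonce is fresh (and records it), false on replay
    /// or when the nonce is too old to be tracked.
    fn check_and_update(&mut self, nonce: u64) -> bool {
        if nonce > self.highest {
            let shift = nonce - self.highest;
            self.bitmap = if shift >= 64 { 0 } else { self.bitmap << shift };
            self.bitmap |= 1; // bit 0 now represents the new highest
            self.highest = nonce;
            return true;
        }
        let offset = self.highest - nonce;
        if offset >= 64 {
            return false; // older than the window: reject
        }
        let mask = 1u64 << offset;
        if self.bitmap & mask != 0 {
            return false; // replayed
        }
        self.bitmap |= mask;
        true
    }
}
```

Combined with periodic rekeying, this bounds both how long a key lives and how far out of order a packet may arrive.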
Chenx Dust ffa08d1c43 feat: add peer_id in MyNodeInfo (#1821) 2026-01-22 22:44:37 +08:00
韩嘉乐 cf3f9169b7 CI(ohos): Enhance CI workflow for release package builds (#1812)
Added support for building and publishing release packages based on tags.
2026-01-20 12:25:10 +08:00
KKRainbow 8343cd5e76 fix config loss when run network (#1802) 2026-01-17 00:58:42 +08:00
KKRainbow 005b321f62 allow open rpc port in gui normal mode (#1795)
* allow open rpc port for gui normal mode
* downgrade dev tool console
2026-01-16 11:12:32 +08:00
KKRainbow 53264f67bf fix peer establish direct conn with subnet proxy to one of local interface (#1782)
* fix peer establish direct conn with subnet proxy to one of local interface

* fix peer mgr ref loop
2026-01-15 01:00:32 +08:00
韩嘉乐 f8b34e3c86 Merge pull request #1787 from EasyTier/FrankHan052176-patch-1
action[ohos] fix the cnt of commit in ohos.yml
2026-01-13 23:58:26 +08:00
韩嘉乐 ce1bdac2bc action[ohos] fix the cnt of commit in ohos.yml 2026-01-13 22:57:43 +08:00
Copilot bd8f01fb26 Add Nushell completion script generation support (#1756)
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
2026-01-11 18:41:02 +08:00
Chenx Dust b590700540 feat: support unix socket tunnel (for ios) (#1779)
Co-authored-by: Page Chen <pagechen04@gmail.com>
2026-01-11 16:37:32 +08:00
Chenx Dust 48c5c23f9b feat: support compile for iOS (#1777) 2026-01-11 16:36:58 +08:00
朝倉水希 f4f591d14c fix: outbound packet not dropped by acl (#1766) 2026-01-08 19:58:23 +08:00
Mg Pig 0c16e2211b feat(gui): persist and restore last used network instance ID (#1762) 2026-01-08 17:03:51 +08:00
Rinne 4bfea06a12 docs: update locales (#1755)
Co-authored-by: KKRainbow <443152178@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-08 11:08:32 +08:00
桜井 ホタル 057ee9f2c5 Fix DNS resolution failure after installing KSU modules, which made it impossible to connect to nodes. (#1761) 2026-01-08 11:07:52 +08:00
Burning_TNT 7f48ca54a3 Implement requesting tun_fd with tokio channel. (#1734)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-04 21:04:43 +08:00
hello ee5227130c feat: Update Cargo.toml for easytier-gui and android app to support tls1.2 (#1744) 2026-01-04 21:03:34 +08:00
韩嘉乐 2e0d9a2b54 Refactor EasyTier version resolution in workflow (#1747)
Updated the workflow to resolve the EasyTier version based on the latest commit and tag information.
2026-01-04 21:02:55 +08:00
编程小白 c5d732773f Convert dead URL to ASCII before socket address lookup (#1739) 2026-01-02 18:49:23 +08:00
217 changed files with 29599 additions and 5410 deletions
+43
@@ -0,0 +1,43 @@
name: prepare-build
author: Luna
description: Prepare build environment
inputs:
web:
description: 'Whether to prepare the web build environment'
required: true
default: 'true'
gui:
description: 'Whether to prepare the GUI build environment'
required: true
default: 'true'
token:
description: 'GitHub token, used by setup-protoc action'
required: false
runs:
using: 'composite'
steps:
- run: mkdir -p easytier-gui/dist
shell: bash
- name: Setup Frontend Environment
if: ${{ inputs.web == 'true' }}
uses: ./.github/actions/prepare-pnpm
with:
build-filter: './easytier-web/*'
- name: Install GUI dependencies (Used by clippy)
if: ${{ inputs.gui == 'true' }}
run: |
bash ./.github/workflows/install_gui_dep.sh
shell: bash
- name: Install Rust
run: |
bash ./.github/workflows/install_rust.sh
shell: bash
- name: Setup protoc
uses: arduino/setup-protoc@v3
with:
# GitHub repo token to use to avoid rate limiter
repo-token: ${{ inputs.token }}
+48
@@ -0,0 +1,48 @@
name: 'Setup pnpm'
author: Luna
description: 'Setup Node.js, pnpm, and install dependencies'
inputs:
build-filter:
description: 'The filter argument for pnpm build (e.g. ./easytier-web/*)'
required: false
default: ''
runs:
using: "composite"
steps:
- name: Setup Node.js
uses: actions/setup-node@v5
with:
node-version: 22
- name: Install pnpm
uses: pnpm/action-setup@v5
with:
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v5
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install and build
shell: bash
run: |
pnpm -r install
if [ -n "${{ inputs.build-filter }}" ]; then
echo "Building with filter: ${{ inputs.build-filter }}"
pnpm -r --filter "${{ inputs.build-filter }}" build
else
echo "No build filter provided, building all packages"
pnpm -r build
fi
+33 -44
@@ -30,44 +30,21 @@ jobs:
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh", "easytier-web/**"]'
build_web:
runs-on: ubuntu-latest
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: actions/setup-node@v4
- name: Setup Frontend Environment
uses: ./.github/actions/prepare-pnpm
with:
node-version: 22
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install frontend dependencies
run: |
pnpm -r install
pnpm -r --filter "./easytier-web/*" build
build-filter: './easytier-web/*'
- name: Archive artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: easytier-web-dashboard
path: |
@@ -142,7 +119,7 @@ jobs:
- build_web
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Set current ref as env variable
run: |
@@ -160,6 +137,7 @@ jobs:
# The prefix cache key, this can be changed to start a new cache manually.
# default: "v0-rust"
prefix-key: ""
shared-key: "core-registry"
cache-targets: "false"
- name: Setup protoc
@@ -186,7 +164,7 @@ jobs:
fi
if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
cargo +nightly-2025-09-01 build -r --target $TARGET -Z build-std=std,panic_abort --package=easytier --features=jemalloc
cargo +nightly-2026-02-02 build -r --target $TARGET -Z build-std=std,panic_abort --package=easytier --features=jemalloc
else
if [[ $OS =~ ^windows.*$ ]]; then
SUFFIX=.exe
@@ -228,8 +206,8 @@ jobs:
rustup set auto-self-update disable
rustup install 1.89
rustup default 1.89
rustup install 1.93
rustup default 1.93
export CC=clang
export CXX=clang++
@@ -239,19 +217,30 @@ jobs:
mv ./target/$TARGET/release/easytier-web ./target/$TARGET/release/easytier-web-embed
cargo build --release --verbose --target $TARGET --features=mimalloc
mkdir -p built-bins/$TARGET/release/
mv ./target/$TARGET/release/easytier-web-embed ./built-bins/$TARGET/release/easytier-web-embed
mv ./target/$TARGET/release/easytier-web ./built-bins/$TARGET/release/easytier-web
mv ./target/$TARGET/release/easytier-core ./built-bins/$TARGET/release/easytier-core
mv ./target/$TARGET/release/easytier-cli ./built-bins/$TARGET/release/easytier-cli
# remove dirs to avoid copy many files back
rm -rf ./target ~/.cargo
mv ./built-bins ./target
- name: Compress
run: |
mkdir -p ./artifacts/objects/
# windows is the only OS using a different convention for executable file name
if [[ $OS =~ ^windows.*$ && $TARGET =~ ^x86_64.*$ ]]; then
if [[ $OS =~ ^windows.*$ ]]; then
SUFFIX=.exe
cp easytier/third_party/x86_64/* ./artifacts/objects/
elif [[ $OS =~ ^windows.*$ && $TARGET =~ ^i686.*$ ]]; then
SUFFIX=.exe
cp easytier/third_party/i686/* ./artifacts/objects/
elif [[ $OS =~ ^windows.*$ && $TARGET =~ ^aarch64.*$ ]]; then
SUFFIX=.exe
cp easytier/third_party/arm64/* ./artifacts/objects/
case $TARGET in
x86_64*) ARCH_DIR=x86_64 ;;
i686*) ARCH_DIR=i686 ;;
aarch64*) ARCH_DIR=arm64 ;;
esac
if [[ -n "$ARCH_DIR" ]]; then
find "easytier/third_party/${ARCH_DIR}" -maxdepth 1 -type f \( -name "*.dll" -o -name "*.sys" \) -exec cp {} ./artifacts/objects/ \;
fi
fi
if [[ $GITHUB_REF_TYPE =~ ^tag$ ]]; then
TAG=$GITHUB_REF_NAME
@@ -278,7 +267,7 @@ jobs:
rm -rf ./artifacts/objects/
- name: Archive artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: easytier-${{ matrix.ARTIFACT_NAME }}
path: |
@@ -305,7 +294,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4 # code must be checked out first to get the module configuration
uses: actions/checkout@v5 # code must be checked out first to get the module configuration
# download binaries into separate directories
- name: Download Linux aarch64 binaries
@@ -325,7 +314,7 @@ jobs:
# upload the generated module
- name: Upload Magisk Module
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: Easytier-Magisk
path: |
+2 -2
@@ -11,7 +11,7 @@ on:
image_tag:
description: 'Tag for this image build'
type: string
default: 'v2.5.0'
default: 'v2.6.0'
required: true
mark_latest:
description: 'Mark this image as latest'
@@ -31,7 +31,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Validate inputs
run: |
+20 -37
@@ -29,7 +29,7 @@ jobs:
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh", ".github/workflows/install_gui_dep.sh"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh", ".github/workflows/install_gui_dep.sh", "easytier-web/frontend-lib/**"]'
build-gui:
strategy:
fail-fast: false
@@ -78,7 +78,7 @@ jobs:
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install GUI dependencies (x86 only)
if: ${{ matrix.TARGET == 'x86_64-unknown-linux-musl' }}
@@ -119,37 +119,18 @@ jobs:
echo "PKG_CONFIG_SYSROOT_DIR=/usr/aarch64-linux-gnu/" >> "$GITHUB_ENV"
echo "PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig/" >> "$GITHUB_ENV"
- name: Install rpm package (Linux target only)
if: ${{ contains(matrix.TARGET, '-linux-') }}
run: |
sudo apt update
sudo apt install -y rpm
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-node@v4
with:
node-version: 22
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install frontend dependencies
run: |
pnpm -r install
pnpm -r build
- name: Setup Frontend Environment
uses: ./.github/actions/prepare-pnpm
- uses: Swatinem/rust-cache@v2
with:
@@ -169,12 +150,13 @@ jobs:
- name: copy correct DLLs
if: ${{ matrix.OS == 'windows-latest' }}
run: |
if [[ $GUI_TARGET =~ ^aarch64.*$ ]]; then
cp ./easytier/third_party/arm64/* ./easytier-gui/src-tauri/
elif [[ $GUI_TARGET =~ ^i686.*$ ]]; then
cp ./easytier/third_party/i686/* ./easytier-gui/src-tauri/
else
cp ./easytier/third_party/x86_64/* ./easytier-gui/src-tauri/
case $TARGET in
x86_64*) ARCH_DIR=x86_64 ;;
i686*) ARCH_DIR=i686 ;;
aarch64*) ARCH_DIR=arm64 ;;
esac
if [[ -n "$ARCH_DIR" ]]; then
find "./easytier/third_party/${ARCH_DIR}" -maxdepth 1 -type f \( -name "*.dll" -o -name "*.sys" \) -exec cp {} ./easytier-gui/src-tauri/ \;
fi
- name: Build GUI
@@ -183,7 +165,7 @@ jobs:
with:
projectPath: ./easytier-gui
# https://tauri.app/v1/guides/building/linux/#cross-compiling-tauri-applications-for-arm-based-devices
args: --verbose --target ${{ matrix.GUI_TARGET }} ${{ matrix.OS == 'ubuntu-22.04' && contains(matrix.TARGET, 'aarch64') && '--bundles deb' || '' }}
args: --verbose --target ${{ matrix.GUI_TARGET }} ${{ contains(matrix.TARGET, '-linux-') && contains(matrix.TARGET, 'aarch64') && '--bundles deb,rpm' || '' }}
- name: Compress
run: |
@@ -201,6 +183,7 @@ jobs:
mv ./target/$GUI_TARGET/release/bundle/dmg/*.dmg ./artifacts/objects/
elif [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ ^mips.*$ ]]; then
mv ./target/$GUI_TARGET/release/bundle/deb/*.deb ./artifacts/objects/
mv ./target/$GUI_TARGET/release/bundle/rpm/*.rpm ./artifacts/objects/
if [[ $GUI_TARGET =~ ^x86_64.*$ ]]; then
# currently only x86 appimage is supported
mv ./target/$GUI_TARGET/release/bundle/appimage/*.AppImage ./artifacts/objects/
@@ -211,7 +194,7 @@ jobs:
rm -rf ./artifacts/objects/
- name: Archive artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: easytier-gui-${{ matrix.ARTIFACT_NAME }}
path: |
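Several workflows in this diff replace the repeated setup-node / pnpm / cache steps with a shared `./.github/actions/prepare-pnpm` composite action. The action file itself is not shown here, so the following is a hedged reconstruction assembled from the removed steps (names, versions, and step order are taken from the deleted lines; the file path and `description` text are assumptions):

```yaml
# .github/actions/prepare-pnpm/action.yml (hypothetical reconstruction)
name: Prepare pnpm
description: Set up Node, pnpm, the pnpm store cache, and build frontend deps
runs:
  using: composite
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: 22
    - uses: pnpm/action-setup@v4
      with:
        version: 10
        run_install: false
    - name: Get pnpm store directory
      shell: bash
      run: echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
    - uses: actions/cache@v4
      with:
        path: ${{ env.STORE_PATH }}
        key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
        restore-keys: |
          ${{ runner.os }}-pnpm-store-
    - name: Install frontend dependencies
      shell: bash
      run: |
        pnpm -r install
        pnpm -r build
```

Note that composite-action steps must declare `shell:` explicitly on every `run:` step, unlike steps in a workflow job.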
+4 -4
@@ -31,8 +31,8 @@ fi
# see https://github.com/rust-lang/rustup/issues/3709
rustup set auto-self-update disable
rustup install 1.89
rustup default 1.89
rustup install 1.93
rustup default 1.93
# mips/mipsel cannot add target from rustup, need compile by ourselves
if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
@@ -44,8 +44,8 @@ if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
ar x libgcc.a _ctzsi2.o _clz.o _bswapsi2.o
ar rcs libctz.a _ctzsi2.o _clz.o _bswapsi2.o
rustup toolchain install nightly-2025-09-01-x86_64-unknown-linux-gnu
rustup component add rust-src --toolchain nightly-2025-09-01-x86_64-unknown-linux-gnu
rustup toolchain install nightly-2026-02-02-x86_64-unknown-linux-gnu
rustup component add rust-src --toolchain nightly-2026-02-02-x86_64-unknown-linux-gnu
# https://github.com/rust-lang/rust/issues/128808
# remove it after Cargo or rustc fix this.
+4 -29
@@ -47,7 +47,7 @@ jobs:
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Set current ref as env variable
run: |
@@ -70,33 +70,8 @@ jobs:
echo "$ANDROID_HOME/ndk/26.0.10792818/toolchains/llvm/prebuilt/linux-x86_64/bin" >> $GITHUB_PATH
echo "NDK_HOME=$ANDROID_HOME/ndk/26.0.10792818/" > $GITHUB_ENV
- uses: actions/setup-node@v4
with:
node-version: 22
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install frontend dependencies
run: |
pnpm -r install
pnpm -r build
- name: Setup Frontend Environment
uses: ./.github/actions/prepare-pnpm
- uses: Swatinem/rust-cache@v2
with:
@@ -138,7 +113,7 @@ jobs:
rm -rf ./artifacts/objects/
- name: Archive artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: easytier-gui-${{ matrix.ARTIFACT_NAME }}
path: |
+30
@@ -0,0 +1,30 @@
name: Nix Check
on:
push:
branches: ["main", "develop"]
paths:
- "**/*.nix"
- "flake.lock"
pull_request:
branches: ["main", "develop"]
paths:
- "**/*.nix"
- "flake.lock"
jobs:
check-full-shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: Magic Nix Cache
uses: DeterminateSystems/magic-nix-cache-action@v6
- name: Check full devShell
run: nix develop .#full --command true
+62 -22
@@ -3,6 +3,9 @@ name: EasyTier OHOS
on:
push:
branches: ["develop", "main", "releases/**"]
tags:
- 'v*'
- '!*-pre'
pull_request:
branches: ["develop", "main"]
workflow_dispatch:
@@ -19,7 +22,7 @@ jobs:
cargo_fmt_check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: fmt check
working-directory: ./easytier-contrib/easytier-ohrs
run: |
@@ -45,9 +48,11 @@ jobs:
build-ohos:
runs-on: ubuntu-latest
needs: pre_job
env:
OHPM_PUBLISH_CODE: ${{ secrets.OHPM_PUBLISH_CODE }}
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Install dependencies
run: |
sudo apt-get update
@@ -59,37 +64,51 @@ jobs:
pkg-config curl libgl1-mesa-dev expect
sudo apt-get clean
- name: Count commits since last tag on upstream main
- name: Resolve easytier version
run: |
set -e
UPSTREAM_REPO="https://github.com/EasyTier/EasyTier.git"
git remote add upstream "$UPSTREAM_REPO" 2>/dev/null || true
git fetch upstream --tags --force
git fetch --unshallow upstream main || git fetch upstream main
git fetch --tags upstream --force
# upstream/main 最新提交
git fetch upstream main
# cargo 版本
CARGO_VERSION=$(cargo metadata --format-version 1 --no-deps --manifest-path easytier/Cargo.toml \
| jq -r '.packages[0].version')
# 获取 upstream/main 最新 tag
LAST_TAG=$(git describe --tags --abbrev=0 upstream/main 2>/dev/null || echo "")
LAST_TAG_VERSION="${LAST_TAG#v}"
if [ -z "$LAST_TAG" ]; then
# 语义版本比较
version_gt() {
[ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ] && [ "$1" != "$2" ]
}
if [ -z "$LAST_TAG_VERSION" ]; then
BASE_VERSION="$CARGO_VERSION"
DIFF_COUNT=$(git rev-list --count upstream/main)
elif version_gt "$CARGO_VERSION" "$LAST_TAG_VERSION"; then
BASE_VERSION="$CARGO_VERSION"
DIFF_COUNT=0
else
BASE_VERSION="$LAST_TAG_VERSION"
DIFF_COUNT=$(git rev-list --count "${LAST_TAG}..upstream/main")
fi
echo "TAG_COMMIT_DIFF=$DIFF_COUNT"
echo "TAG_COMMIT_DIFF=$DIFF_COUNT" >> $GITHUB_ENV
- name: Get easytier version
run: |
EASYTIER_CARGO_VERSION=$(cargo metadata --format-version 1 --no-deps --manifest-path easytier/Cargo.toml \
| jq -r '.packages[0].version')
EASYTIER_VERSION="${EASYTIER_CARGO_VERSION}-${TAG_COMMIT_DIFF}"
echo "EASYTIER_VERSION=${EASYTIER_VERSION}" >> $GITHUB_ENV
COMMIT_HASH=$(git rev-parse --short upstream/main)
EASYTIER_VERSION="${BASE_VERSION}-${DIFF_COUNT}-${COMMIT_HASH}"
echo "EASYTIER_VERSION=$EASYTIER_VERSION"
echo "EASYTIER_VERSION=$EASYTIER_VERSION" >> $GITHUB_ENV
cd ./easytier-contrib/easytier-ohrs/package
jq --arg v "$EASYTIER_VERSION" '.version = $v' oh-package.json5 > oh-package.tmp.json5
mv oh-package.tmp.json5 oh-package.json5
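The `version_gt` helper in the step above leans on `sort -V` (version sort, available in GNU coreutils and busybox — an assumption about the runner) for semantic-version ordering. A standalone sketch of the same comparison:

```shell
# version_gt succeeds when $1 is strictly newer than $2, exactly as in
# the workflow step: version-sort both, check $1 sorts last and differs.
version_gt() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ] && [ "$1" != "$2" ]
}

version_gt 2.6.0 2.5.9  && A=newer || A=not
version_gt 2.5.0 2.5.0  && B=newer || B=not
version_gt 2.5.0 2.10.0 && C=newer || C=not   # 2.10 > 2.5 under version sort
echo "$A $B $C"
```

Plain lexical `sort` would get the third case wrong (`2.5.0` > `2.10.0`), which is why `-V` matters here.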
- name: Generate CHANGELOG.md for current commit
working-directory: ./easytier-contrib/easytier-ohrs/package
run: |
@@ -128,7 +147,7 @@ jobs:
EOF
sudo chmod +x $OHOS_NDK_HOME/native/llvm/aarch64-unknown-linux-ohos-clang.sh
- name: Build
- name: Build latest Har
working-directory: ./easytier-contrib/easytier-ohrs
run: |
sudo apt-get install -y llvm clang lldb lld
@@ -143,23 +162,39 @@ jobs:
ohrs artifact
mv package.har easytier-ohrs.har
- name: Build Release Package
if: startsWith(github.ref, 'refs/tags/')
working-directory: ./easytier-contrib/easytier-ohrs
run: |
echo "🎉 Official Release detected. Building easytier-release..."
TAG_NAME="${{ github.ref_name }}"
TAG_VERSION="${TAG_NAME#v}"
echo "Release Version: $TAG_VERSION"
cd package
jq --arg v "$TAG_VERSION" '.name = "easytier-release" | .version = $v' oh-package.json5 > oh-package.tmp.json5 && mv oh-package.tmp.json5 oh-package.json5
cd ..
ohrs build --release --arch aarch
cd dist/arm64-v8a
mv libeasytier_ohrs.so libeasytier_release.so
cd ../..
ohrs artifact
mv package.har easytier-release.har
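The release step rewrites fields in `oh-package.json5` with `jq`, which works because the manifest content is JSON-compatible. A minimal sketch of the same rename-and-reversion edit, against a made-up manifest (the content below is illustrative, not the real `oh-package.json5`; assumes `jq` is installed):

```shell
# Rewrite name/version in a JSON manifest the same way the release step does.
WORK=$(mktemp -d)
cat > "$WORK/oh-package.json5" <<'EOF'
{"name": "easytier-ohrs", "version": "0.0.0"}
EOF

TAG_VERSION=2.6.0
jq --arg v "$TAG_VERSION" '.name = "easytier-release" | .version = $v' \
  "$WORK/oh-package.json5" > "$WORK/oh-package.tmp.json5" \
  && mv "$WORK/oh-package.tmp.json5" "$WORK/oh-package.json5"

NAME=$(jq -r '.name' "$WORK/oh-package.json5")
VER=$(jq -r '.version' "$WORK/oh-package.json5")
echo "$NAME@$VER"
```

Writing to a temp file and `mv`-ing it back is the usual way to "edit in place" with `jq`, which cannot modify its input file directly.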
- name: Upload artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: easytier-ohos
path: |
./easytier-contrib/easytier-ohrs/easytier-ohrs.har
./easytier-contrib/easytier-ohrs/dist/arm64-v8a/libeasytier_ohrs.so
retention-days: 5
if-no-files-found: error
- name: Publish To Center Ohpm
if: github.event_name == 'push'
working-directory: ./easytier-contrib/easytier-ohrs
env:
OHPM_PUBLISH_CODE: ${{ secrets.OHPM_PUBLISH_CODE }}
OHPM_PRIVATE_KEY: ${{ secrets.OHPM_PRIVATE_KEY }}
OHPM_KEY_PASSPHRASE: ${{ secrets.OHPM_KEY_PASSPHRASE }}
if: ${{ env.OHPM_PUBLISH_CODE != '' && github.event_name == 'push' }}
run: |
ohpm config set publish_id "$OHPM_PUBLISH_CODE"
ohpm config set publish_registry https://ohpm.openharmony.cn/ohpm
@@ -176,10 +211,15 @@ jobs:
ohpm publish easytier-ohrs.har
- name: Publish To Private Ohpm
if: github.event_name == 'push'
working-directory: ./easytier-contrib/easytier-ohrs
if: ${{ env.OHPM_PUBLISH_CODE != '' && github.event_name == 'push' }}
run: |
printf '%s' "${{ secrets.CODEARTS_PRIVATE_OHPM }}" > ~/.ohpm/.ohpmrc
ohpm config set strict_ssl false
ohpm publish easytier-ohrs.har
if [ -f "easytier-release.har" ]; then
echo "🚀 Publishing Release package..."
ohpm publish easytier-release.har
fi
curl --header "Content-Type: application/json" --request POST --data "{}" ${{ secrets.CODEARTS_WEBHOOKS }}
+2 -2
@@ -18,7 +18,7 @@ on:
version:
description: 'Version for this release'
type: string
default: 'v2.5.0'
default: 'v2.6.0'
required: true
make_latest:
description: 'Mark this release as latest'
@@ -35,7 +35,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Download Core Artifact
uses: dawidd6/action-download-artifact@v11
+106 -67
@@ -2,12 +2,14 @@ name: EasyTier Test
on:
push:
branches: ["develop", "main"]
branches: [ "develop", "main" ]
pull_request:
branches: ["develop", "main"]
branches: [ "develop", "main" ]
env:
CARGO_TERM_COLOR: always
# RUSTC_WRAPPER: "sccache"
# SCCACHE_GHA_ENABLED: "true"
defaults:
run:
@@ -29,18 +31,95 @@ jobs:
concurrent_skipping: 'never'
skip_after_successful_duplicate: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/test.yml", ".github/workflows/install_gui_dep.sh", ".github/workflows/install_rust.sh"]'
test:
runs-on: ubuntu-22.04
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v3
- name: Setup protoc
uses: arduino/setup-protoc@v3
check:
name: Run linters & check
runs-on: ubuntu-latest
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v5
- name: Prepare build environment
uses: ./.github/actions/prepare-build
with:
# GitHub repo token to use to avoid rate limiter
repo-token: ${{ secrets.GITHUB_TOKEN }}
gui: true
web: true
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
- name: Install rustfmt and clippy
run: |
rustup component add rustfmt
rustup component add clippy
- uses: taiki-e/install-action@cargo-hack
- name: Check Cargo.lock is up to date
run: |
if ! cargo metadata --format-version 1 --locked --no-deps > /dev/null; then
echo "::error::Cargo.lock is out of date. Run cargo generate-lockfile or cargo build locally, then commit Cargo.lock."
exit 1
fi
- name: Check formatting
run: cargo fmt --all -- --check
- name: Check Clippy
run: cargo clippy --all-targets --features full --all -- -D warnings
- name: Check features
if: ${{ !cancelled() }}
run: cargo hack check --package easytier --each-feature --exclude-features macos-ne --verbose
pre-test:
name: Build test
runs-on: ubuntu-latest
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
steps:
- uses: actions/checkout@v5
- name: Prepare build environment
uses: ./.github/actions/prepare-build
with:
gui: true
web: true
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
- uses: taiki-e/install-action@nextest
- name: Archive test
run: cargo nextest archive --archive-file tests.tar.zst --package easytier --features full
- uses: actions/upload-artifact@v5
with:
name: tests
path: tests.tar.zst
retention-days: 1
test_matrix:
name: Test (${{ matrix.name }})
runs-on: ubuntu-latest
needs: [ pre_job, pre-test ]
if: needs.pre_job.outputs.should_skip != 'true'
strategy:
fail-fast: false
matrix:
include:
- name: "easytier"
opts: "-E 'not test(tests::three_node)' --test-threads 1 --no-fail-fast"
- name: "three_node"
opts: "-E 'test(tests::three_node) and not test(subnet_proxy_three_node_test)' --test-threads 1 --no-fail-fast"
- name: "three_node::subnet_proxy_three_node_test"
opts: "-E 'test(subnet_proxy_three_node_test)' --test-threads 1 --no-fail-fast"
steps:
- uses: actions/checkout@v5
- name: Setup tools for test
run: sudo apt install bridge-utils
@@ -53,63 +132,23 @@ jobs:
sudo sysctl net.ipv6.conf.lo.disable_ipv6=0
sudo ip addr add 2001:db8::2/64 dev lo
- uses: actions/setup-node@v4
- uses: taiki-e/install-action@nextest
- name: Download tests
uses: actions/download-artifact@v4
with:
node-version: 22
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install frontend dependencies
run: |
pnpm -r install
pnpm -r --filter "./easytier-web/*" build
- name: Cargo cache
uses: actions/cache@v4
with:
path: |
~/.cargo
./target
key: ${{ runner.os }}-cargo-test-${{ hashFiles('**/Cargo.lock') }}
- name: Install GUI dependencies (Used by clippy)
run: |
bash ./.github/workflows/install_gui_dep.sh
bash ./.github/workflows/install_rust.sh
rustup component add rustfmt
rustup component add clippy
- name: Check formatting
if: ${{ !cancelled() }}
run: cargo fmt --all -- --check
- name: Check Clippy
if: ${{ !cancelled() }}
# NOTE: tauri need `dist` dir in build.rs
run: |
mkdir -p easytier-gui/dist
cargo clippy --all-targets --all-features --all -- -D warnings
name: tests
- name: Run tests
run: |
sudo prlimit --pid $$ --nofile=1048576:1048576
sudo -E env "PATH=$PATH" cargo test --no-default-features --features=full --verbose -- --test-threads=1
sudo chown -R $USER:$USER ./target
sudo chown -R $USER:$USER ~/.cargo
sudo -E env "PATH=$PATH" cargo nextest run --archive-file tests.tar.zst ${{ matrix.opts }}
test:
runs-on: ubuntu-latest
needs: [ pre_job, test_matrix ]
if: needs.pre_job.outputs.should_skip != 'true' && always()
steps:
- name: Mark result as failed
if: needs.test_matrix.result != 'success'
run: exit 1
+3 -3
@@ -26,7 +26,7 @@ Thank you for your interest in contributing to EasyTier! This document provides
#### Required Tools
- Node.js v21 or higher
- pnpm v9 or higher
- Rust toolchain (version 1.89)
- Rust toolchain (version 1.93)
- LLVM and Clang
- Protoc (Protocol Buffers compiler)
@@ -79,8 +79,8 @@ sudo apt install -y bridge-utils
2. Install dependencies:
```bash
# Install Rust toolchain
rustup install 1.89
rustup default 1.89
rustup install 1.93
rustup default 1.93
# Install project dependencies
pnpm -r install
+3 -3
@@ -34,7 +34,7 @@
#### 必需工具
- Node.js v21 或更高版本
- pnpm v9 或更高版本
- Rust 工具链(版本 1.89)
- Rust 工具链(版本 1.93)
- LLVM 和 Clang
- Protoc(Protocol Buffers 编译器)
@@ -87,8 +87,8 @@ sudo apt install -y bridge-utils
2. 安装依赖:
```bash
# 安装 Rust 工具链
rustup install 1.89
rustup default 1.89
rustup install 1.93
rustup default 1.93
# 安装项目依赖
pnpm -r install
+545 -57
File diff suppressed because it is too large
+31 -28
@@ -48,40 +48,43 @@
Choose the installation method that best suits your needs:
Linux (Recommended):
```bash
# 1. Download pre-built binary (Recommended, All platforms supported)
# Visit https://github.com/EasyTier/EasyTier/releases
curl -fsSL "https://github.com/EasyTier/EasyTier/blob/main/script/install.sh?raw=true" | sudo bash -s install
```
# 2. Install via cargo (Latest development version)
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
# 3. Install via Docker
# See https://easytier.cn/en/guide/installation.html#installation-methods
# 4. Linux Quick Install
wget -O- https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/install.sh | sudo bash -s install
# 5. MacOS via Homebrew
Homebrew (MacOS/Linux):
```bash
brew tap brewforge/chinese
brew install --cask easytier-gui
# 6. OpenWrt Luci Web UI
# Visit https://github.com/EasyTier/luci-app-easytier
# 7. (Optional) Install shell completions:
easytier-core --gen-autocomplete fish > ~/.config/fish/completions/easytier-core.fish
easytier-cli gen-autocomplete fish > ~/.config/fish/completions/easytier-cli.fish
```
Windows (Recommended, run with administrator privileges):
```powershell
irm "https://github.com/EasyTier/EasyTier/blob/main/script/install.ps1?raw=true" | iex
```
Install via cargo (Latest development version):
```bash
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
[Install pre-built binary](https://github.com/EasyTier/EasyTier/releases) (Recommended, All platforms supported)
[Install via Docker](https://easytier.cn/en/guide/installation.html#installation-methods)
[Install OpenWrt ipk package](https://github.com/EasyTier/luci-app-easytier)
Additional steps:
[One-Click Register Service](https://easytier.cn/en/guide/network/oneclick-install-as-service.html) (Automatically start when the system boots and run in the background)
### 🚀 Basic Usage
#### Quick Networking with Shared Nodes
EasyTier supports quick networking using shared public nodes. When you don't have a public IP, you can use the free shared nodes provided by the EasyTier community. Nodes will automatically attempt NAT traversal and establish P2P connections. When P2P fails, data will be relayed through shared nodes.
The currently deployed shared public node is `tcp://public.easytier.cn:11010`.
When using shared nodes, each node entering the network needs to provide the same `--network-name` and `--network-secret` parameters as the unique identifier of the network.
Taking two nodes as an example (Please use more complex network name to avoid conflicts):
@@ -90,14 +93,14 @@ Taking two nodes as an example (Please use more complex network name to avoid co
```bash
# Run with administrator privileges
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP>:11010
```
2. Run on Node B:
```bash
# Run with administrator privileges
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP>:11010
```
After successful execution, you can check the network status using `easytier-cli`:
@@ -105,9 +108,9 @@ After successful execution, you can check the network status using `easytier-cli
```text
| ipv4 | hostname | cost | lat_ms | loss_rate | rx_bytes | tx_bytes | tunnel_proto | nat_type | id | version |
| ------------ | -------------- | ----- | ------ | --------- | -------- | -------- | ------------ | -------- | ---------- | --------------- |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.5.0-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.5.0-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.5.0-70e69a38~ |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.6.0-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.6.0-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.6.0-70e69a38~ |
```
You can test connectivity between nodes:
@@ -124,7 +127,7 @@ To improve availability, you can connect to multiple shared nodes simultaneously
```bash
# Connect to multiple shared nodes
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010 -p udp://public.easytier.cn:11010
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<SharedNodeIP1>:11010 -p udp://<SharedNodeIP2>:11010
```
Once your network is set up successfully, you can easily configure it to start automatically on system boot. Refer to the [One-Click Register Service guide](https://easytier.cn/en/guide/network/oneclick-install-as-service.html) for step-by-step instructions on registering EasyTier as a system service.
+32 -30
@@ -48,40 +48,42 @@
选择最适合您需求的安装方式:
Linux(推荐):
```bash
# 1. 下载预编译二进制文件(推荐,支持所有平台)
# 访问 https://github.com/EasyTier/EasyTier/releases
curl -fsSL "https://github.com/EasyTier/EasyTier/blob/main/script/install.sh?raw=true" | sudo bash -s install
```
# 2. 通过 cargo 安装(最新开发版本)
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
# 3. 通过 Docker 安装
# 参见 https://easytier.cn/guide/installation.html#%E5%AE%89%E8%A3%85%E6%96%B9%E5%BC%8F
# 4. Linux 快速安装
wget -O- https://raw.githubusercontent.com/EasyTier/EasyTier/main/script/install.sh | sudo bash -s install
# 5. MacOS 通过 Homebrew 安装
Homebrew(MacOS/Linux):
```bash
brew tap brewforge/chinese
brew install --cask easytier-gui
# 6. OpenWrt Luci Web 界面
# 访问 https://github.com/EasyTier/luci-app-easytier
# 7.(可选)安装 Shell 补全功能:
# Fish 补全
easytier-core --gen-autocomplete fish > ~/.config/fish/completions/easytier-core.fish
easytier-cli gen-autocomplete fish > ~/.config/fish/completions/easytier-cli.fish
```
Windows(推荐,请以管理员权限运行):
```powershell
irm "https://github.com/EasyTier/EasyTier/blob/main/script/install.ps1?raw=true" | iex
```
通过 cargo 安装(最新开发版本):
```bash
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
[下载预编译文件](https://github.com/EasyTier/EasyTier/releases)(推荐,支持所有平台)
[通过 Docker 安装](https://easytier.cn/guide/installation.html#%E5%AE%89%E8%A3%85%E6%96%B9%E5%BC%8F)
[安装 OpenWrt ipk 软件包](https://github.com/EasyTier/luci-app-easytier)
附加步骤:
[一键注册系统服务](https://easytier.cn/guide/network/oneclick-install-as-service.html)(系统启动时自动后台运行)
### 🚀 基本用法
#### 使用共享节点快速组网
EasyTier 支持使用共享公共节点快速组网。当您没有公网 IP 时,可以使用 EasyTier 社区提供的免费共享节点。节点会自动尝试 NAT 穿透并建立 P2P 连接。当 P2P 失败时,数据将通过共享节点中继。
当前部署的共享公共节点是 `tcp://public.easytier.cn:11010`
EasyTier 支持使用共享节点快速组网。当您没有公网 IP 时,可以使用公共共享节点。节点会自动尝试 NAT 穿透并建立 P2P 连接。当 P2P 失败时,数据将通过共享节点中继。
使用共享节点时,每个进入网络的节点需要提供相同的 `--network-name` 和 `--network-secret` 参数作为网络的唯一标识符。
@@ -91,14 +93,14 @@ EasyTier 支持使用共享公共节点快速组网。当您没有公网 IP 时
```bash
# 以管理员权限运行
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<共享节点IP>:11010
```
2. 在节点 B 上运行:
```bash
# 以管理员权限运行
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<共享节点IP>:11010
```
执行成功后,可以使用 `easytier-cli` 检查网络状态:
@@ -106,9 +108,9 @@ sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.ea
```text
| ipv4 | hostname | cost | lat_ms | loss_rate | rx_bytes | tx_bytes | tunnel_proto | nat_type | id | version |
| ------------ | -------------- | ----- | ------ | --------- | -------- | -------- | ------------ | -------- | ---------- | --------------- |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.5.0-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.5.0-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.5.0-70e69a38~ |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.6.0-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.6.0-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.6.0-70e69a38~ |
```
您可以测试节点之间的连通性:
@@ -125,7 +127,7 @@ ping 10.126.126.2
```bash
# 连接多个共享节点
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://public.easytier.cn:11010 -p udp://public.easytier.cn:11010
sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<公共节点IP>:11010 -p udp://<公共节点IP>:11010
```
#### 去中心化组网
+2 -2
@@ -215,7 +215,7 @@ pub unsafe extern "C" fn collect_network_infos(
if index >= max_length {
break;
}
let Some(key) = INSTANCE_MANAGER.get_network_instance_name(instance_id) else {
let Some(key) = INSTANCE_MANAGER.get_instance_name(instance_id) else {
continue;
};
// convert value to json string
@@ -228,7 +228,7 @@ pub unsafe extern "C" fn collect_network_infos(
};
infos[index] = KeyValuePair {
key: std::ffi::CString::new(key.clone()).unwrap().into_raw(),
key: std::ffi::CString::new(key).unwrap().into_raw(),
value: std::ffi::CString::new(value).unwrap().into_raw(),
};
index += 1;
+57 -26
@@ -1,43 +1,74 @@
#!/data/adb/magisk/busybox sh
MODDIR=${0%/*}
MODULE_PROP="${MODDIR}/module.prop"
IP_RULE_SCRIPT="${MODDIR}/hotspot_iprule.sh"
ET_STATUS=""
REDIR_STATUS=""
# 更新module.prop文件中的description
IS_RUNNING=false
# 确保辅助脚本有执行权限
chmod +x "${IP_RULE_SCRIPT}" 2>/dev/null
# 更新 module.prop 文件中的 description
update_module_description() {
local status_message=$1
sed -i "/^description=/c\description=[状态]${status_message}" ${MODULE_PROP}
# 检查 module.prop 文件存在且 description 发生变化了再写入
if [ -f "${MODULE_PROP}" ]; then
local current_desc=$(grep "^description=" "${MODULE_PROP}")
local new_desc="description=[状态] ${status_message}"
if [ "${current_desc}" != "${new_desc}" ]; then
sed -i "s#^description=.*#${new_desc}#" "${MODULE_PROP}"
fi
fi
}
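The rewritten `update_module_description` only touches `module.prop` when the description actually changed, avoiding needless flash writes on the device. A standalone sketch of the same guard, run against a scratch prop file (assumes GNU/busybox `sed -i`; the status strings are examples):

```shell
# Update description= in a prop file only when the value changed,
# mirroring the guarded sed in the Magisk action script.
WORK=$(mktemp -d)
MODULE_PROP="$WORK/module.prop"
printf 'id=easytier_magisk\ndescription=[状态] old\n' > "$MODULE_PROP"

update_module_description() {
  status_message=$1
  if [ -f "$MODULE_PROP" ]; then
    current_desc=$(grep "^description=" "$MODULE_PROP")
    new_desc="description=[状态] ${status_message}"
    if [ "$current_desc" != "$new_desc" ]; then
      sed -i "s#^description=.*#${new_desc}#" "$MODULE_PROP"
    fi
  fi
}

update_module_description "running"
cp "$MODULE_PROP" "$WORK/snapshot"
update_module_description "running"   # same value: the guard skips the write
cmp -s "$MODULE_PROP" "$WORK/snapshot" && SAME=yes || SAME=no
DESC=$(grep '^description=' "$MODULE_PROP")
echo "$DESC ($SAME)"
```

One caveat inherited from the original: a status message containing `#` would break the `sed` expression, since `#` is used as the delimiter.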
# 判断程序启动状态
if [ -f "${MODDIR}/disable" ]; then
ET_STATUS="已关闭"
elif pgrep -f 'easytier-core' >/dev/null; then
if [ -f "${MODDIR}/config/command_args"]; then
ET_STATUS="主程序已开启(启动参数模式)"
IS_RUNNING=false
ET_STATUS="主程序已关闭"
elif pgrep -f "${MODDIR}/easytier-core" >/dev/null; then
IS_RUNNING=true
if [ -f "${MODDIR}/config/command_args" ]; then
ET_STATUS="主程序正在运行(启动参数模式)"
else
ET_STATUS="主程序已开启(配置文件模式)"
ET_STATUS="主程序正在运行(配置文件模式)"
fi
elif [ -z "$ET_STATUS" ]; then
# 既没 disable 也没运行,说明是异常停止或未启动
ET_STATUS="主程序启动失败或未运行"
fi
#ET_STATUS不存在说明开启模块未正常运行,不修改状态
if [ -n "$ET_STATUS" ]; then
if [ -f "${MODDIR}/enable_IP_rule" ]; then
rm -f "${MODDIR}/enable_IP_rule"
${MODDIR}/hotspot_iprule.sh del
REDIR_STATUS="转发已禁用"
echo "热点子网转发已禁用"
echo "[ET-NAT] IP rule disabled." >> "${MODDIR}/log.log"
else
touch "${MODDIR}/enable_IP_rule"
${MODDIR}/hotspot_iprule.sh del
${MODDIR}/hotspot_iprule.sh add_once
REDIR_STATUS="转发已激活"
echo "热点子网转发已激活,热点开启后将自动将热点加入转发网络(要求已配置本地网络cidr=参数)。转发规则将随着热点开关而自动开关。该状态将保持到转发被禁用为止。"
echo "[ET-NAT] IP rule enabled." >> "${MODDIR}/log.log"
fi
update_module_description "${ET_STATUS} | ${REDIR_STATUS}"
# 无论主程序是否运行,都允许切换“开关文件”的状态,以便下次生效
if [ -f "${MODDIR}/enable_IP_rule" ]; then
rm -f "${MODDIR}/enable_IP_rule"
"${IP_RULE_SCRIPT}" del >/dev/null 2>&1
REDIR_STATUS="转发已禁用"
echo "热点子网转发已禁用"
echo "[ET-NAT] Action: IP rule disabled." >> "${MODDIR}/log.log"
else
echo "主程序未正常启动,请先检查配置文件"
touch "${MODDIR}/enable_IP_rule"
if [ "$IS_RUNNING" = true ]; then
"${IP_RULE_SCRIPT}" del >/dev/null 2>&1
"${IP_RULE_SCRIPT}" add_once
echo "转发规则将立即生效,无需重启"
else
echo "主程序未运行,转发规则将在下次启动时生效"
fi
REDIR_STATUS="转发已激活"
echo "----------------------------------"
echo "热点子网转发已激活"
echo "热点开启后将自动将热点加入转发网络"
echo "需要在配置中提前配置好 cidr 参数"
echo "----------------------------------"
echo "[ET-NAT] Action: IP rule enabled." >> "${MODDIR}/log.log"
fi
sync
update_module_description "${ET_STATUS} | ${REDIR_STATUS}"
+19 -9
@@ -1,9 +1,19 @@
ui_print '安装完成'
ui_print '当前架构为' + $ARCH
ui_print '当前系统版本为' + $API
ui_print '安装目录为: /data/adb/modules/easytier_magisk'
ui_print '配置文件位置: /data/adb/modules/easytier_magisk/config/config.toml'
ui_print '如果需要自定义启动参数,可将 /data/adb/modules/easytier_magisk/config/command_args_sample 重命名为 command_args,并修改其中内容,使用自定义启动参数时会忽略配置文件'
ui_print '修改配置文件后在magisk app禁用应用再启动即可生效'
ui_print '点击操作按钮可启动/关闭热点子网转发,配合easytier的子网代理功能实现手机热点访问easytier网络'
ui_print '记得重启'
SKIPMOUNT=false
PROPFILE=true
POSTFSDATA=true
LATESTARTSERVICE=true
set_perm_recursive $MODPATH 0 0 0777 0777
ui_print "系统架构为:$ARCH"
ui_print "系统 SDK 版本:$API"
ui_print "EasyTier 安装位置:/data/adb/modules/easytier_magisk"
ui_print "配置文件位置:/data/adb/modules/easytier_magisk/config/config.toml"
ui_print "如需使用启动参数模式,请将 /data/adb/modules/easytier_magisk/config/command_args_sample 重命名为 command_args,并修改其中的内容"
ui_print "config 目录中存在 command_args 文件时,模块会自动忽略 config.toml 文件"
ui_print "----------------------------------"
ui_print "注意!启动参数文件中不能存在 \" 和 ',配置文件则没有这个限制"
ui_print "----------------------------------"
ui_print "修改配置后无需重启设备,在 Magisk 中禁用 EasyTier 模块,等待 10 秒后重新启用即可让新配置生效"
ui_print "点击 Magisk 中模块左下角的“操作”按钮可以禁用或激活热点子网转发,使用该功能前需要在配置中提前配置好 cidr 参数"
ui_print "模块安装完成,重启设备生效"
@@ -2,64 +2,111 @@
MODDIR=${0%/*}
CONFIG_FILE="${MODDIR}/config/config.toml"
COMMAND_ARGS="${MODDIR}/config/command_args"
LOG_FILE="${MODDIR}/log.log"
MODULE_PROP="${MODDIR}/module.prop"
EASYTIER="${MODDIR}/easytier-core"
# 处理获取到的设备型号中可能出现的空格
BRAND=$(getprop ro.product.brand | tr ' ' '-')
MODEL=$(getprop ro.product.model | tr ' ' '-')
DEVICE_HOSTNAME="${BRAND}-${MODEL}"
REDIR_STATUS=""
# 更新module.prop文件中的description
# 更新 module.prop 文件中的 description
update_module_description() {
local status_message=$1
sed -i "/^description=/c\description=[状态]${status_message}" ${MODULE_PROP}
# 检查 module.prop 文件存在且 description 发生变化了再写入
if [ -f "${MODULE_PROP}" ]; then
local current_desc=$(grep "^description=" "${MODULE_PROP}")
local new_desc="description=[状态] ${status_message}"
if [ "${current_desc}" != "${new_desc}" ]; then
sed -i "s#^description=.*#${new_desc}#" "${MODULE_PROP}"
fi
fi
}
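The updated function uses `s#^description=.*#...#` rather than the usual `/`-delimited form; with `#` as the delimiter, slashes inside the status text (log paths, for example) cannot terminate the sed expression early. A minimal sketch of that choice, assuming GNU sed's `-i` behavior:

```shell
# Why the "#" delimiter matters: the replacement text contains "/",
# which would break a /-delimited s command.
prop_file=$(mktemp)
printf 'id=demo\ndescription=[Status] old\n' > "$prop_file"
new_desc='description=[Status] running, log at /data/log.log'
sed -i "s#^description=.*#${new_desc}#" "$prop_file"
grep '^description=' "$prop_file"
rm -f "$prop_file"
```

The same edit written as `s/^description=.*/${new_desc}/` would fail with "unknown option to `s'" because of the slashes inside `${new_desc}`.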
if [ -f "${MODDIR}/enable_IP_rule" ]; then
REDIR_STATUS="Forwarding enabled"
else
REDIR_STATUS="Forwarding disabled"
fi
# Check for and initialize the TUN device
if [ ! -e /dev/net/tun ]; then
if [ ! -d /dev/net ]; then
mkdir -p /dev/net
fi
ln -s /dev/tun /dev/net/tun
fi
while true; do
if ls $MODDIR | grep -q "disable"; then
update_module_description "Stopping | ${REDIR_STATUS}"
if pgrep -f 'easytier-core' >/dev/null; then
echo "Toggle control $(date "+%Y-%m-%d %H:%M:%S") process already running, stopping ..."
pkill easytier-core # kill the process
fi
# Read the subnet-forwarding activation state
if [ -f "${MODDIR}/enable_IP_rule" ]; then
REDIR_STATUS="Forwarding enabled"
else
if ! pgrep -f 'easytier-core' >/dev/null; then
if [ ! -f "$CONFIG_FILE" ]; then
update_module_description "config.toml not found"
sleep 3s
continue
fi
REDIR_STATUS="Forwarding disabled"
fi
# If a command_args file exists in the config directory, use its contents as launch arguments
if [ -f "${MODDIR}/config/command_args" ]; then
TZ=Asia/Shanghai ${EASYTIER} $(cat ${MODDIR}/config/command_args) --hostname "$(getprop ro.product.brand)-$(getprop ro.product.model)" > ${LOG_FILE} &
sleep 5s # wait for easytier-core to finish starting
update_module_description "Core started (launch-args mode) | ${REDIR_STATUS}"
else
TZ=Asia/Shanghai ${EASYTIER} -c ${CONFIG_FILE} --hostname "$(getprop ro.product.brand)-$(getprop ro.product.model)" > ${LOG_FILE} &
sleep 5s # wait for easytier-core to finish starting
update_module_description "Core started (config-file mode) | ${REDIR_STATUS}"
fi
ip rule add from all lookup main
if ! pgrep -f 'easytier-core' >/dev/null; then
update_module_descriptio "Core failed to start, check the config file"
fi
else
echo "Toggle control $(date "+%Y-%m-%d %H:%M:%S") process already running"
# Check whether the module has been disabled
if [ -f "${MODDIR}/disable" ]; then
update_module_description "Core stopped | ${REDIR_STATUS}"
if pgrep -f "${EASYTIER}" >/dev/null; then
echo "Toggle control $(date "+%Y-%m-%d %H:%M:%S") process already running, stopping"
pkill -f "${EASYTIER}"
fi
sleep 10s
continue
fi
sleep 3s # pause 3 seconds before the next loop iteration
done
# Check whether the process is already running
if pgrep -f "${EASYTIER}" >/dev/null; then
sleep 10s
continue
fi
# Check whether a config file exists
if [ ! -f "${CONFIG_FILE}" ] && [ ! -f "${COMMAND_ARGS}" ]; then
update_module_description "Missing config file or launch-arguments file"
sleep 10s
continue
fi
# If a command_args file exists in the config directory, use its contents as launch arguments
if [ -f "${COMMAND_ARGS}" ]; then
# Launch-args mode
CMD_CONTENT=$(tr '\r\n' ' ' < "${COMMAND_ARGS}")
if echo "${CMD_CONTENT}" | grep -q "\-\-hostname"; then
FINAL_ARGS="${CMD_CONTENT}"
else
FINAL_ARGS="${CMD_CONTENT} --hostname ${DEVICE_HOSTNAME}"
fi
TZ=Asia/Shanghai "${EASYTIER}" ${FINAL_ARGS} > "${LOG_FILE}" 2>&1 &
STR_MODE="launch-args mode"
# Otherwise, read config.toml as the launch configuration
else
# Config-file mode
if grep -q "^[[:space:]]*hostname[[:space:]]*=" "${CONFIG_FILE}"; then
TZ=Asia/Shanghai "${EASYTIER}" -c "${CONFIG_FILE}" > "${LOG_FILE}" 2>&1 &
else
TZ=Asia/Shanghai "${EASYTIER}" -c "${CONFIG_FILE}" --hostname "${DEVICE_HOSTNAME}" > "${LOG_FILE}" 2>&1 &
fi
STR_MODE="config-file mode"
fi
# Wait for the process to start
sleep 5s
# Post-start wrap-up work
if pgrep -f "${EASYTIER}" >/dev/null; then
if ! ip rule show | grep -q "lookup main"; then
ip rule add from all lookup main
fi
update_module_description "Core running (${STR_MODE}) | ${REDIR_STATUS}"
else
update_module_description "Core failed to start; check the config file or launch arguments"
fi
sleep 10s
done
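The launch-argument branch above flattens command_args to a single line and appends `--hostname` only when the user has not supplied one. A standalone sketch of that fallback (`build_args` is a hypothetical helper, not part of service.sh):

```shell
# Flatten CR/LF in the args file content to spaces, then append
# --hostname with the device name only if the user did not set one.
build_args() {
  cmd_content=$(printf '%s' "$1" | tr '\r\n' ' ')
  if printf '%s' "$cmd_content" | grep -q -- '--hostname'; then
    printf '%s\n' "$cmd_content"
  else
    printf '%s --hostname %s\n' "$cmd_content" "$2"
  fi
}

build_args '-d --network-name demo' 'Pixel-8-Pro'
build_args '--hostname custom -d' 'Pixel-8-Pro'
```

The first call gets `--hostname Pixel-8-Pro` appended; the second keeps its user-supplied hostname untouched.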
+1 -1
@@ -1,6 +1,6 @@
id=easytier_magisk
name=EasyTier_Magisk
version=v2.5.0
version=v2.6.0
versionCode=1
author=EasyTier
description=easytier magisk module @EasyTier(https://github.com/EasyTier/EasyTier)
@@ -1,3 +1,5 @@
MODDIR=${0%/*}
pkill easytier-core # stop the easytier-core process
rm -rf $MODDIR/*
pkill -f "${MODDIR}/easytier-core"
# ${MODDIR:?} ensures the variable is non-empty, preventing an accidental rm -rf /*
rm -rf "${MODDIR:?}/"*
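The rewritten uninstaller leans on the POSIX `${parameter:?word}` expansion: if MODDIR were ever empty, the shell aborts the expansion instead of letting the command collapse into `rm -rf /*`. A minimal demonstration of the guard:

```shell
# With an empty variable, plain expansion silently degrades to "/*" ...
MODDIR=""
echo "unguarded target: ${MODDIR}/*"
# ... while the :? form makes the shell fail before rm could ever run.
if ( : "${MODDIR:?MODDIR is empty}" ) 2>/dev/null; then
  echo "guard passed"
else
  echo "guard blocked the empty MODDIR"
fi
```

The subshell is only for the demo; in the real script the failed expansion aborts the `rm` command itself.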
+434 -46
@@ -38,6 +38,20 @@ dependencies = [
"cpufeatures",
]
[[package]]
name = "aes-gcm"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "831010a0f742e1209b3bcea8fab6a8e149051ba6099432c8cb2cc117dec3ead1"
dependencies = [
"aead",
"aes",
"cipher",
"ctr",
"ghash",
"subtle",
]
[[package]]
name = "aho-corasick"
version = "1.1.3"
@@ -133,6 +147,12 @@ version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69f7f8c3906b62b754cd5326047894316021dcfe5a194c8ea52bdd94934a3457"
[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "async-recursion"
version = "1.1.1"
@@ -202,6 +222,12 @@ version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
[[package]]
name = "atomic_refcell"
version = "0.1.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "41e67cd8309bbd06cd603a9e693a784ac2e5d1e955f11286e355089fcab3047c"
[[package]]
name = "auto_impl"
version = "1.3.0"
@@ -254,14 +280,14 @@ checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "bindgen"
version = "0.71.1"
version = "0.72.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f58bf3d7db68cfbac37cfc485a8d711e87e064c3d0fe0435b92f7a407f9d6b3"
checksum = "993776b509cfb49c750f11b8f07a46fa23e0a1386ffc01fb1e7d343efc387895"
dependencies = [
"bitflags 2.9.4",
"cexpr",
"clang-sys",
"itertools 0.13.0",
"itertools 0.11.0",
"proc-macro2",
"quote",
"regex",
@@ -540,6 +566,16 @@ dependencies = [
"clap",
]
[[package]]
name = "clap_complete_nushell"
version = "4.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "685bc86fd34b7467e0532a4f8435ab107960d69a243785ef0275e571b35b641a"
dependencies = [
"clap",
"clap_complete",
]
[[package]]
name = "clap_derive"
version = "4.5.47"
@@ -598,6 +634,15 @@ dependencies = [
"unicode-segmentation",
]
[[package]]
name = "convert_case"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "633458d4ef8c78b72454de2d54fd6ab2e60f9e02be22f3c6104cdc8a4e0fceb9"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "core-foundation"
version = "0.9.4"
@@ -746,6 +791,15 @@ version = "0.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2931af7e13dc045d8e9d26afccc6fa115d64e115c9c84b1166288b46f6782c2"
[[package]]
name = "ctr"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0369ee1ad671834580515889b80f2ea915f23b8be8d0daa4bbaf2ac5c7590835"
dependencies = [
"cipher",
]
[[package]]
name = "curve25519-dalek"
version = "4.1.3"
@@ -894,6 +948,17 @@ dependencies = [
"powerfmt",
]
[[package]]
name = "derivative"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fcc3dd5e9e9c0b295d6e1e4d811fb6f157d5ffd784b8d202fc62eac8035a770b"
dependencies = [
"proc-macro2",
"quote",
"syn 1.0.109",
]
[[package]]
name = "derive_arbitrary"
version = "1.4.2"
@@ -936,6 +1001,29 @@ dependencies = [
"syn 2.0.106",
]
[[package]]
name = "derive_more"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d751e9e49156b02b44f9c1815bcb94b984cdcc4396ecc32521c739452808b134"
dependencies = [
"derive_more-impl",
]
[[package]]
name = "derive_more-impl"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "799a97264921d8623a957f6c3b9011f3b5492f557bbb7a5a19b7fa6d06ba8dcb"
dependencies = [
"convert_case 0.10.0",
"proc-macro2",
"quote",
"rustc_version",
"syn 2.0.106",
"unicode-xid",
]
[[package]]
name = "digest"
version = "0.10.7"
@@ -995,7 +1083,7 @@ checksum = "7454e41ff9012c00d53cf7f475c5e3afa3b91b7c90568495495e8d9bf47a1055"
[[package]]
name = "easytier"
version = "2.4.5"
version = "2.6.0"
dependencies = [
"anyhow",
"arc-swap",
@@ -1004,6 +1092,7 @@ dependencies = [
"async-stream",
"async-trait",
"atomic-shim",
"atomic_refcell",
"auto_impl",
"base64 0.22.1",
"bitflags 2.9.4",
@@ -1011,16 +1100,23 @@ dependencies = [
"bytecodec",
"byteorder",
"bytes",
"cfg-if",
"cfg_aliases",
"chrono",
"cidr",
"clap",
"clap_complete",
"clap_complete_nushell",
"crossbeam",
"dashmap",
"dbus",
"derivative",
"derive_builder",
"derive_more",
"easytier-rpc-build",
"encoding",
"flume",
"forwarded-header-value",
"futures",
"gethostname",
"git-version",
@@ -1036,6 +1132,8 @@ dependencies = [
"humansize",
"humantime-serde",
"idna",
"indoc",
"itertools 0.14.0",
"kcp-sys",
"machine-uid",
"multimap",
@@ -1046,7 +1144,9 @@ dependencies = [
"network-interface",
"nix 0.29.0",
"once_cell",
"ordered_hash_map",
"parking_lot",
"paste",
"percent-encoding",
"petgraph 0.8.2",
"pin-project-lite",
@@ -1056,8 +1156,11 @@ dependencies = [
"prost-build",
"prost-reflect",
"prost-reflect-build",
"prost-types",
"prost-wkt",
"prost-wkt-build",
"prost-wkt-types",
"quinn",
"quinn-plaintext",
"rand 0.8.5",
"rcgen",
"regex",
@@ -1073,10 +1176,13 @@ dependencies = [
"sha2",
"shellexpand",
"smoltcp",
"snow",
"socket2 0.5.10",
"strum",
"stun_codec",
"sys-locale",
"tabled",
"terminal_size",
"thiserror 1.0.69",
"thunk-rs",
"time",
@@ -1091,16 +1197,19 @@ dependencies = [
"tracing",
"tracing-subscriber",
"tun-easytier",
"unicode-width 0.1.11",
"url",
"uuid",
"version-compare",
"which 7.0.3",
"wildmatch",
"winapi",
"windivert",
"windows 0.52.0",
"windows-service",
"windows-sys 0.52.0",
"winreg 0.52.0",
"x25519-dalek",
"zerocopy 0.7.35",
"zip",
"zstd",
@@ -1126,8 +1235,6 @@ dependencies = [
[[package]]
name = "easytier-rpc-build"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24829168c28f6a448f57d18116c255dcbd2b8c25e76dbc60f6cd16d68ad2cf07"
dependencies = [
"heck 0.5.0",
"prost-build",
@@ -1253,6 +1360,17 @@ version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
[[package]]
name = "erased-serde"
version = "0.4.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2add8a07dd6a8d93ff627029c51de145e12686fbc36ecb298ac22e74cf02dec"
dependencies = [
"serde",
"serde_core",
"typeid",
]
[[package]]
name = "errno"
version = "0.3.14"
@@ -1264,10 +1382,19 @@ dependencies = [
]
[[package]]
name = "fastbloom"
version = "0.14.0"
name = "etherparse"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18c1ddb9231d8554c2d6bdf4cfaabf0c59251658c68b6c95cd52dd0c513a912a"
checksum = "827292ea592108849932ad8e30218f8b1f21c0dfd0696698a18b5d0aed62d990"
dependencies = [
"arrayvec",
]
[[package]]
name = "fastbloom"
version = "0.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e7f34442dbe69c60fe8eaf58a8cafff81a1f278816d8ab4db255b3bef4ac3c4"
dependencies = [
"getrandom 0.3.3",
"libm",
@@ -1280,6 +1407,9 @@ name = "fastrand"
version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
dependencies = [
"getrandom 0.2.16",
]
[[package]]
name = "fiat-crypto"
@@ -1310,6 +1440,18 @@ dependencies = [
"miniz_oxide",
]
[[package]]
name = "flume"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e139bc46ca777eb5efaf62df0ab8cc5fd400866427e56c68b22e414e53bd3be"
dependencies = [
"fastrand",
"futures-core",
"futures-sink",
"spin",
]
[[package]]
name = "fnv"
version = "1.0.7"
@@ -1346,6 +1488,16 @@ dependencies = [
"percent-encoding",
]
[[package]]
name = "forwarded-header-value"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8835f84f38484cc86f110a805655697908257fb9a7af005234060891557198e9"
dependencies = [
"nonempty",
"thiserror 1.0.69",
]
[[package]]
name = "futures"
version = "0.3.31"
@@ -1496,6 +1648,16 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "ghash"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0d8a4362ccb29cb0b265253fb0a2728f592895ee6854fd9bc13f2ffda266ff1"
dependencies = [
"opaque-debug",
"polyval",
]
[[package]]
name = "gimli"
version = "0.31.1"
@@ -1605,9 +1767,9 @@ checksum = "5419bdc4f6a9207fbeba6d11b604d481addf78ecd10c11ad51e76c2f6482748d"
[[package]]
name = "heapless"
version = "0.9.1"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1edcd5a338e64688fbdcb7531a846cfd3476a54784dcb918a0844682bc7ada5"
checksum = "2af2455f757db2b292a9b1768c4b70186d443bcb3b316252d6b540aec1cd89ed"
dependencies = [
"hash32",
"stable_deref_trait",
@@ -2064,6 +2226,15 @@ dependencies = [
"hashbrown 0.16.0",
]
[[package]]
name = "indoc"
version = "2.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79cf5c93f93228cf8efb3ba362535fb11199ac548a09ce117c9b1adc3030d706"
dependencies = [
"rustversion",
]
[[package]]
name = "inout"
version = "0.1.4"
@@ -2073,6 +2244,15 @@ dependencies = [
"generic-array",
]
[[package]]
name = "inventory"
version = "0.3.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4f0c30c76f2f4ccee3fe55a2435f691ca00c0e4bd87abe4f4a851b1d4dac39b"
dependencies = [
"rustversion",
]
[[package]]
name = "io-uring"
version = "0.7.10"
@@ -2161,15 +2341,6 @@ dependencies = [
"either",
]
[[package]]
name = "itertools"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "413ee7dfc52ee1a4949ceeb7dbc8a33f2d6c088194d9f922fb8318faf1f01186"
dependencies = [
"either",
]
[[package]]
name = "itertools"
version = "0.14.0"
@@ -2230,7 +2401,7 @@ dependencies = [
[[package]]
name = "kcp-sys"
version = "0.1.0"
source = "git+https://github.com/EasyTier/kcp-sys?rev=71eff18c573a4a71bf99c7fabc6a8b9f211c84c1#71eff18c573a4a71bf99c7fabc6a8b9f211c84c1"
source = "git+https://github.com/EasyTier/kcp-sys?rev=94964794caaed5d388463137da59b97499619e5f#94964794caaed5d388463137da59b97499619e5f"
dependencies = [
"anyhow",
"auto_impl",
@@ -2503,7 +2674,7 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b27250baa967a15214e57384dd6228c59afbccb15ab8f97207c9758917544bf5"
dependencies = [
"convert_case",
"convert_case 0.8.0",
"proc-macro2",
"quote",
"semver",
@@ -2516,7 +2687,7 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c844efa85d53b5adc3b326520f3a108c3a737b7534ee10d406f81884e7e71b3c"
dependencies = [
"convert_case",
"convert_case 0.8.0",
"ctor",
"napi-derive-backend-ohos",
"proc-macro2",
@@ -2564,7 +2735,7 @@ dependencies = [
"libc",
"log",
"openssl",
"openssl-probe",
"openssl-probe 0.1.6",
"openssl-sys",
"schannel",
"security-framework 2.11.1",
@@ -2689,6 +2860,12 @@ dependencies = [
"minimal-lexical",
]
[[package]]
name = "nonempty"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9e591e719385e6ebaeb5ce5d3887f7d5676fceca6411d1925ccc95745f3d6f7"
[[package]]
name = "normpath"
version = "1.5.0"
@@ -2810,6 +2987,12 @@ version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
[[package]]
name = "openssl-probe"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c87def4c32ab89d880effc9e097653c8da5d6ef28e6b539d313baaacfbafcbe"
[[package]]
name = "openssl-sys"
version = "0.9.109"
@@ -2822,6 +3005,15 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "ordered_hash_map"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6c699f8a30f345785be969deed7eee4c73a5de58c7faf61d6a3251ef798ff61"
dependencies = [
"hashbrown 0.15.5",
]
[[package]]
name = "papergrid"
version = "0.12.0"
@@ -2874,12 +3066,12 @@ dependencies = [
[[package]]
name = "pem"
version = "3.0.5"
version = "3.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38af38e8470ac9dee3ce1bae1af9c1671fffc44ddfd8bd1d0a3445bf349a8ef3"
checksum = "1d30c53c26bc5b31a98cd02d20f25a7c8567146caf63ed593a9d87b2775291be"
dependencies = [
"base64 0.22.1",
"serde",
"serde_core",
]
[[package]]
@@ -3045,6 +3237,18 @@ dependencies = [
"universal-hash",
]
[[package]]
name = "polyval"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d1fe60d06143b2430aa532c94cfe9e29783047f06c0d7fd359a9a51b729fa25"
dependencies = [
"cfg-if",
"cpufeatures",
"opaque-debug",
"universal-hash",
]
[[package]]
name = "portable-atomic"
version = "1.11.1"
@@ -3251,6 +3455,52 @@ dependencies = [
"prost",
]
[[package]]
name = "prost-wkt"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "497e1e938f0c09ef9cabe1d49437b4016e03e8f82fbbe5d1c62a9b61b9decae1"
dependencies = [
"chrono",
"inventory",
"prost",
"serde",
"serde_derive",
"serde_json",
"typetag",
]
[[package]]
name = "prost-wkt-build"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07b8bf115b70a7aa5af1fd5d6e9418492e9ccb6e4785e858c938e28d132a884b"
dependencies = [
"heck 0.5.0",
"prost",
"prost-build",
"prost-types",
"quote",
]
[[package]]
name = "prost-wkt-types"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8cdde6df0a98311c839392ca2f2f0bcecd545f86a62b4e3c6a49c336e970fe5"
dependencies = [
"chrono",
"prost",
"prost-build",
"prost-types",
"prost-wkt",
"prost-wkt-build",
"regex",
"serde",
"serde_derive",
"serde_json",
]
[[package]]
name = "quick-xml"
version = "0.38.3"
@@ -3281,10 +3531,22 @@ dependencies = [
]
[[package]]
name = "quinn-proto"
version = "0.11.13"
name = "quinn-plaintext"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1906b49b0c3bc04b5fe5d86a77925ae6524a19b816ae38ce1e426255f1d8a31"
checksum = "f3e617feaeb6493018fa35fc47ae8b630ac8903d8159e9e747018841b99bad3d"
dependencies = [
"bytes",
"quinn-proto",
"seahash",
"tracing",
]
[[package]]
name = "quinn-proto"
version = "0.11.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "434b42fec591c96ef50e21e886936e66d3cc3f737104fdb9b737c40ffb94c098"
dependencies = [
"bytes",
"fastbloom",
@@ -3652,14 +3914,14 @@ dependencies = [
[[package]]
name = "rustls-native-certs"
version = "0.8.1"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcff2dd52b58a8d98a70243663a0d234c4e2b79235637849d15913394a247d3"
checksum = "612460d5f7bea540c490b2b6395d8e34a953e52b491accd6c86c8164c5932a63"
dependencies = [
"openssl-probe",
"openssl-probe 0.2.1",
"rustls-pki-types",
"schannel",
"security-framework 3.5.0",
"security-framework 3.5.1",
]
[[package]]
@@ -3683,9 +3945,9 @@ dependencies = [
[[package]]
name = "rustls-platform-verifier"
version = "0.6.1"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be59af91596cac372a6942530653ad0c3a246cdd491aaa9dcaee47f88d67d5a0"
checksum = "1d99feebc72bae7ab76ba994bb5e121b8d83d910ca40b36e0921f53becc41784"
dependencies = [
"core-foundation 0.10.1",
"core-foundation-sys",
@@ -3696,10 +3958,10 @@ dependencies = [
"rustls-native-certs",
"rustls-platform-verifier-android",
"rustls-webpki",
"security-framework 3.5.0",
"security-framework 3.5.1",
"security-framework-sys",
"webpki-root-certs",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@@ -3761,6 +4023,12 @@ version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "seahash"
version = "4.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
[[package]]
name = "security-framework"
version = "2.11.1"
@@ -3776,9 +4044,9 @@ dependencies = [
[[package]]
name = "security-framework"
version = "3.5.0"
version = "3.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc198e42d9b7510827939c9a15f5062a0c913f3371d765977e586d2fe6c16f4a"
checksum = "b3297343eaf830f66ede390ea39da1d462b6b0c1b000f420d0a83f898bbbe6ef"
dependencies = [
"bitflags 2.9.4",
"core-foundation 0.10.1",
@@ -3956,6 +4224,12 @@ version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d66dc143e6b11c1eddc06d5c423cfc97062865baf299914ab64caa38182078fe"
[[package]]
name = "simdutf8"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3a9fe34e3e7a50316060351f37187a3f546bce95496156754b601a5fa71b76e"
[[package]]
name = "siphasher"
version = "1.0.1"
@@ -3987,6 +4261,23 @@ dependencies = [
"managed",
]
[[package]]
name = "snow"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "599b506ccc4aff8cf7844bc42cf783009a434c1e26c964432560fb6d6ad02d82"
dependencies = [
"aes-gcm",
"blake2",
"chacha20poly1305",
"curve25519-dalek",
"getrandom 0.3.3",
"ring",
"rustc_version",
"sha2",
"subtle",
]
[[package]]
name = "socket2"
version = "0.5.10"
@@ -4007,6 +4298,15 @@ dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "spin"
version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67"
dependencies = [
"lock_api",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
@@ -4019,6 +4319,27 @@ version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
[[package]]
name = "strum"
version = "0.27.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "af23d6f6c1a224baef9d3f61e287d2761385a5b88fdab4eb4c6f11aeb54c4bcf"
dependencies = [
"strum_macros",
]
[[package]]
name = "strum_macros"
version = "0.27.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7695ce3845ea4b33927c055a39dc438a45b059f7c1b3d91d38d10355fb8cbca7"
dependencies = [
"heck 0.5.0",
"proc-macro2",
"quote",
"syn 2.0.106",
]
[[package]]
name = "stun_codec"
version = "0.3.5"
@@ -4369,9 +4690,9 @@ dependencies = [
[[package]]
name = "tokio-websockets"
version = "0.8.3"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "842e11addde61da7c37ef205cd625ebcd7b607076ea62e4698f06bfd5fd01a03"
checksum = "dad543404f98bfc969aeb71994105c592acfc6c43323fddcd016bb208d1c65cb"
dependencies = [
"base64 0.22.1",
"bytes",
@@ -4382,10 +4703,11 @@ dependencies = [
"httparse",
"ring",
"rustls-pki-types",
"simdutf8",
"tokio",
"tokio-rustls",
"tokio-util",
"webpki-roots 0.26.11",
"webpki-roots 1.0.2",
]
[[package]]
@@ -4617,12 +4939,42 @@ dependencies = [
"wintun",
]
[[package]]
name = "typeid"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc7d623258602320d5c55d1bc22793b57daff0ec7efc270ea7d55ce1d5f5471c"
[[package]]
name = "typenum"
version = "1.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1dccffe3ce07af9386bfd29e80c0ab1a8205a2fc34e4bcd40364df902cfa8f3f"
[[package]]
name = "typetag"
version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be2212c8a9b9bcfca32024de14998494cf9a5dfa59ea1b829de98bac374b86bf"
dependencies = [
"erased-serde",
"inventory",
"once_cell",
"serde",
"typetag-impl",
]
[[package]]
name = "typetag-impl"
version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "27a7a9b72ba121f6f1f6c3632b85604cac41aedb5ddc70accbebb6cac83de846"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.106",
]
[[package]]
name = "unicase"
version = "2.8.1"
@@ -4653,6 +5005,12 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a1a07cc7db3810833284e8d372ccdc6da29741639ecc70c9ec107df0fa6154c"
[[package]]
name = "unicode-xid"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853"
[[package]]
name = "universal-hash"
version = "0.5.1"
@@ -4895,9 +5253,9 @@ dependencies = [
[[package]]
name = "webpki-root-certs"
version = "1.0.2"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e4ffd8df1c57e87c325000a3d6ef93db75279dc3a231125aac571650f22b12a"
checksum = "36a29fc0408b113f68cf32637857ab740edfafdf460c326cd2afaa2d84cc05dc"
dependencies = [
"rustls-pki-types",
]
@@ -4987,6 +5345,36 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windivert"
version = "0.6.0"
source = "git+https://github.com/EasyTier/windivert-rust.git?rev=adcc56d1550f7b5377ec2b3429f413ee24a77375#adcc56d1550f7b5377ec2b3429f413ee24a77375"
dependencies = [
"etherparse",
"thiserror 1.0.69",
"windivert-sys",
"windows 0.48.0",
]
[[package]]
name = "windivert-sys"
version = "0.10.0"
source = "git+https://github.com/EasyTier/windivert-rust.git?rev=adcc56d1550f7b5377ec2b3429f413ee24a77375#adcc56d1550f7b5377ec2b3429f413ee24a77375"
dependencies = [
"cc",
"thiserror 1.0.69",
"windows 0.48.0",
]
[[package]]
name = "windows"
version = "0.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e686886bc078bc1b0b600cac0147aadb815089b6e4da64016cbd754b6342700f"
dependencies = [
"windows-targets 0.48.5",
]
[[package]]
name = "windows"
version = "0.52.0"
+9 -9
@@ -9,7 +9,7 @@
"version": "0.0.0",
"dependencies": {
"@element-plus/icons-vue": "^2.3.1",
"axios": "^1.7.9",
"axios": "^1.13.5",
"dayjs": "^1.11.13",
"element-plus": "^2.8.8",
"vue": "^3.5.18",
@@ -1220,13 +1220,13 @@
"license": "MIT"
},
"node_modules/axios": {
"version": "1.11.0",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.11.0.tgz",
"integrity": "sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA==",
"version": "1.13.6",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.13.6.tgz",
"integrity": "sha512-ChTCHMouEe2kn713WHbQGcuYrr6fXTBiu460OTwWrWob16g1bXn4vtz07Ope7ewMozJAnEquLk5lWQWtBig9DQ==",
"license": "MIT",
"dependencies": {
"follow-redirects": "^1.15.6",
"form-data": "^4.0.4",
"follow-redirects": "^1.15.11",
"form-data": "^4.0.5",
"proxy-from-env": "^1.1.0"
}
},
@@ -1616,9 +1616,9 @@
}
},
"node_modules/form-data": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"version": "4.0.5",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.5.tgz",
"integrity": "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w==",
"license": "MIT",
"dependencies": {
"asynckit": "^0.4.0",
@@ -10,7 +10,7 @@
},
"dependencies": {
"@element-plus/icons-vue": "^2.3.1",
"axios": "^1.7.9",
"axios": "^1.13.5",
"dayjs": "^1.11.13",
"easytier-uptime-frontend": "link:",
"element-plus": "^2.8.8",
@@ -359,6 +359,7 @@ impl HealthChecker {
)
.parse()
.with_context(|| "failed to parse peer uri")?,
peer_public_key: None,
}]);
let inst_id = inst_id.unwrap_or(uuid::Uuid::new_v4());
+2 -2
@@ -11,7 +11,7 @@ use api::routes::create_routes;
use clap::Parser;
use config::AppConfig;
use db::{operations::NodeOperations, Db};
use easytier::utils::init_logger;
use easytier::common::log;
use health_checker::HealthChecker;
use health_checker_manager::HealthCheckerManager;
use std::env;
@@ -42,7 +42,7 @@ async fn main() -> anyhow::Result<()> {
let config = AppConfig::default();
// Initialize logging
let _ = init_logger(&config.logging, false);
let _ = log::init(&config.logging, false);
// Parse command-line arguments
let args = Args::parse();
+3 -3
@@ -1,7 +1,7 @@
{
"name": "easytier-gui",
"type": "module",
"version": "2.5.0",
"version": "2.6.0",
"private": true,
"packageManager": "pnpm@9.12.1+sha512.e5a7e52a4183a02d5931057f7a0dbff9d5e9ce3161e33fa68ae392125b79282a8a8a470a51dfc8a0ed86221442eb2fb57019b0990ed24fab519bf0e1bc5ccfc4",
"scripts": {
@@ -53,8 +53,8 @@
"unplugin-vue-markdown": "^0.26.2",
"unplugin-vue-router": "^0.10.8",
"uuid": "^10.0.0",
"vite": "^5.4.8",
"vite-plugin-vue-devtools": "^8.0.5",
"vite": "^5.4.21",
"vite-plugin-vue-devtools": "^7.4.6",
"vite-plugin-vue-layouts": "^0.11.0",
"vue-i18n": "^10.0.0",
"vue-tsc": "^2.1.10"
+3 -1
@@ -1,6 +1,6 @@
[package]
name = "easytier-gui"
version = "2.5.0"
version = "2.6.0"
description = "EasyTier GUI"
authors = ["you"]
edition = "2021"
@@ -54,6 +54,8 @@ tauri-plugin-os = "2.3.0"
uuid = "1.17.0"
async-trait = "0.1.89"
url = { version = "2.5", features = ["serde"] }
[target.'cfg(target_os = "windows")'.dependencies]
windows = { version = "0.52", features = ["Win32_Foundation", "Win32_UI_Shell", "Win32_UI_WindowsAndMessaging"] }
winapi = { version = "0.3.9", features = ["securitybaseapi", "processthreadsapi"] }
@@ -36,6 +36,7 @@
"core:tray:allow-set-show-menu-on-left-click",
"core:tray:allow-set-tooltip",
"vpnservice:allow-ping",
"vpnservice:allow-get-vpn-status",
"vpnservice:allow-prepare-vpn",
"vpnservice:allow-start-vpn",
"vpnservice:allow-stop-vpn",
@@ -47,4 +48,4 @@
"os:allow-platform",
"os:allow-locale"
]
}
}
+56 -8
@@ -16,6 +16,8 @@
use super::Command;
use anyhow::Result;
use std::env;
use std::fs::File;
use std::io::Read as _;
use std::path::PathBuf;
use std::process::{ExitStatus, Output};
@@ -23,10 +25,12 @@ use std::ffi::{CString, OsString};
use std::io;
use std::mem;
use std::os::unix::ffi::OsStrExt;
use std::os::unix::io::FromRawFd;
use std::os::unix::process::ExitStatusExt;
use std::path::Path;
use std::ptr;
use libc::{fcntl, fileno, waitpid, EINTR, F_GETOWN};
use libc::{fileno, wait, EINTR, SHUT_WR};
use security_framework_sys::authorization::{
errAuthorizationSuccess, kAuthorizationFlagDefaults, kAuthorizationFlagDestroyRights,
AuthorizationCreate, AuthorizationExecuteWithPrivileges, AuthorizationFree, AuthorizationRef,
@@ -71,7 +75,7 @@ macro_rules! make_cstring {
};
}
unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> i32 {
unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> io::Result<ExitStatus> {
let mut authref: AuthorizationRef = ptr::null_mut();
let mut pipe: *mut libc::FILE = ptr::null_mut();
@@ -82,7 +86,7 @@ unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> i32 {
&mut authref,
) != errAuthorizationSuccess
{
return -1;
return Err(io::Error::last_os_error());
}
if AuthorizationExecuteWithPrivileges(
authref,
@@ -93,22 +97,66 @@ unsafe fn gui_runas(prog: *const i8, argv: *const *const i8) -> i32 {
) != errAuthorizationSuccess
{
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return -1;
return Err(io::Error::last_os_error());
}
let fd = fileno(pipe);
if fd == -1 {
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(io::Error::last_os_error());
}
// We never send input to the elevated GUI. Close the parent write half so
// the child sees EOF on stdin instead of waiting forever.
if libc::shutdown(fd, SHUT_WR) == -1 {
let err = io::Error::last_os_error();
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
}
// AuthorizationExecuteWithPrivileges wires the tool's stdin/stdout to a
// bidirectional pipe. Drain stdout so the child can't block on a full pipe.
let read_fd = libc::dup(fd);
if read_fd == -1 {
let err = io::Error::last_os_error();
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
}
let mut pipe_file = unsafe { File::from_raw_fd(read_fd) };
let mut sink = [0_u8; 8192];
loop {
match pipe_file.read(&mut sink) {
Ok(0) => break,
Ok(_) => {}
Err(err) if err.kind() == io::ErrorKind::Interrupted => continue,
Err(err) => {
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
}
}
}
let pid = fcntl(fileno(pipe), F_GETOWN, 0);
let mut status = 0;
loop {
let r = waitpid(pid, &mut status, 0);
let r = wait(&mut status);
if r == -1 && io::Error::last_os_error().raw_os_error() == Some(EINTR) {
continue;
} else if r == -1 {
let err = io::Error::last_os_error();
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
return Err(err);
} else {
break;
}
}
libc::fclose(pipe);
AuthorizationFree(authref, kAuthorizationFlagDestroyRights);
-status
+Ok(ExitStatus::from_raw(status))
}
fn runas_root_gui(cmd: &Command) -> io::Result<ExitStatus> {
@@ -126,7 +174,7 @@ fn runas_root_gui(cmd: &Command) -> io::Result<ExitStatus> {
let mut argv: Vec<_> = args.iter().map(|x| x.as_ptr()).collect();
argv.push(ptr::null());
-unsafe { Ok(mem::transmute(gui_runas(prog.as_ptr(), argv.as_ptr()))) }
+unsafe { gui_runas(prog.as_ptr(), argv.as_ptr()) }
}
/// The implementation of state check and elevated executing varies on each platform
+169 -63

@@ -14,16 +14,21 @@ use easytier::rpc_service::remote_client::{
};
use easytier::web_client::{self, WebClient};
use easytier::{
-common::config::{ConfigLoader, FileLoggerConfig, LoggingConfigBuilder, TomlConfigLoader},
+common::{
+config::{ConfigLoader, FileLoggerConfig, LoggingConfigBuilder, TomlConfigLoader},
+log,
+},
instance_manager::NetworkInstanceManager,
launcher::NetworkConfig,
rpc_service::ApiRpcServer,
tunnel::ring::RingTunnelListener,
tunnel::tcp::TcpTunnelListener,
tunnel::TunnelListener,
utils::{self},
};
use std::ops::Deref;
use std::sync::Arc;
-use tokio::sync::{RwLock, RwLockReadGuard};
+use tokio::sync::{Mutex, RwLock, RwLockReadGuard};
use uuid::Uuid;
use tauri::{AppHandle, Emitter, Manager as _};
@@ -40,8 +45,21 @@ static RPC_RING_UUID: once_cell::sync::Lazy<uuid::Uuid> =
static CLIENT_MANAGER: once_cell::sync::Lazy<RwLock<Option<manager::GUIClientManager>>> =
once_cell::sync::Lazy::new(|| RwLock::new(None));
-static RING_RPC_SERVER: once_cell::sync::Lazy<RwLock<Option<ApiRpcServer<RingTunnelListener>>>> =
-once_cell::sync::Lazy::new(|| RwLock::new(None));
+type BoxedTunnelListener = Box<dyn TunnelListener>;
+#[derive(Clone, Copy, PartialEq, Eq)]
+enum RpcServerKind {
+Ring,
+Tcp,
+}
+struct RpcServer {
+kind: RpcServerKind,
+_server: ApiRpcServer<BoxedTunnelListener>,
+bind_url: Option<url::Url>,
+}
+static RPC_SERVER: once_cell::sync::Lazy<Mutex<Option<RpcServer>>> =
+once_cell::sync::Lazy::new(|| Mutex::new(None));
static WEB_CLIENT: once_cell::sync::Lazy<RwLock<Option<WebClient>>> =
once_cell::sync::Lazy::new(|| RwLock::new(None));
@@ -128,7 +146,6 @@ async fn collect_network_info(
#[tauri::command]
async fn set_logging_level(level: String) -> Result<(), String> {
println!("Setting logging level to: {}", level);
get_client_manager!()?
.set_logging_level(level.clone())
.await
@@ -173,7 +190,7 @@ async fn remove_network_instance(app: AppHandle, instance_id: String) -> Result<
.await
.map_err(|e| e.to_string())?;
client_manager
-.post_remove_network_instances_hook(&app, &[instance_id])
+.post_stop_network_instances_hook(&app)
.await?;
Ok(())
@@ -189,6 +206,16 @@ async fn update_network_config_state(
.parse()
.map_err(|e: uuid::Error| e.to_string())?;
let client_manager = get_client_manager!()?;
if !disabled {
let cfg = client_manager
.handle_get_network_config(app.clone(), instance_id)
.await
.map_err(|e| e.to_string())?;
let toml_config = cfg.gen_config().map_err(|e| e.to_string())?;
client_manager
.pre_run_network_instance_hook(&app, &toml_config)
.await?;
}
client_manager
.handle_update_network_state(app.clone(), instance_id, disabled)
.await
@@ -196,7 +223,11 @@ async fn update_network_config_state(
if disabled {
client_manager
-.post_remove_network_instances_hook(&app, &[instance_id])
+.post_stop_network_instances_hook(&app)
.await?;
} else {
client_manager
.post_run_network_instance_hook(&app, &instance_id)
.await?;
}
@@ -322,8 +353,25 @@ fn get_service_status() -> Result<&'static str, String> {
}
}
fn normalize_normal_mode_rpc_portal(portal: &str) -> Result<(url::Url, url::Url), String> {
let portal_url: url::Url = portal
.parse()
.map_err(|e| format!("invalid rpc portal: {:#}", e))?;
let bind_url = portal_url.clone();
let mut connect_url = portal_url.clone();
// If the bind address is 0.0.0.0, connect via 127.0.0.1 instead.
if connect_url.host_str() == Some("0.0.0.0") {
connect_url.set_host(Some("127.0.0.1")).unwrap();
}
Ok((bind_url, connect_url))
}
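The Rust helper above keeps the bind URL verbatim and only rewrites a wildcard host for the connect URL. A small TypeScript mirror can illustrate the same rule (`normalizeRpcPortal` is a name introduced here for illustration, not part of the patch, and it uses a simple regex rather than a full URL parser):

```typescript
// Hypothetical TypeScript mirror of normalize_normal_mode_rpc_portal:
// keep the bind URL as-is, but connect via 127.0.0.1 when the portal
// binds the wildcard address 0.0.0.0.
function normalizeRpcPortal(portal: string): { bindUrl: string; connectUrl: string } {
  const match = portal.match(/^(\w+):\/\/([^:/]+)(:\d+)?$/)
  if (!match) {
    throw new Error(`invalid rpc portal: ${portal}`)
  }
  const [, scheme, host, port] = match
  const connectHost = host === '0.0.0.0' ? '127.0.0.1' : host
  return {
    bindUrl: portal,
    connectUrl: `${scheme}://${connectHost}${port ?? ''}`,
  }
}
```

A non-wildcard host passes through unchanged, so `tcp://192.168.1.2:15999` yields the same bind and connect URLs.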
#[tauri::command]
-async fn init_rpc_connection(_app: AppHandle, url: Option<String>) -> Result<(), String> {
+async fn init_rpc_connection(
+_app: AppHandle,
+is_normal_mode: bool,
+url: Option<String>,
+) -> Result<(), String> {
let mut client_manager_guard =
tokio::time::timeout(std::time::Duration::from_secs(5), CLIENT_MANAGER.write())
.await
@@ -331,41 +379,72 @@ async fn init_rpc_connection(_app: AppHandle, url: Option<String>) -> Result<(),
let mut instance_manager_guard = INSTANCE_MANAGER
.try_write()
.map_err(|_| "Failed to acquire write lock for instance manager")?;
-let mut ring_rpc_server_guard = RING_RPC_SERVER
-.try_write()
-.map_err(|_| "Failed to acquire write lock for ring rpc server")?;
+let mut rpc_server_guard = RPC_SERVER
+.try_lock()
+.map_err(|_| "Failed to acquire lock for rpc server")?;
-let normal_mode = url.is_none();
-if normal_mode {
+let mut client_url = url.clone();
+if is_normal_mode {
let instance_manager = if let Some(im) = instance_manager_guard.take() {
im
} else {
Arc::new(NetworkInstanceManager::new())
};
-let rpc_server = if let Some(rpc_server) = ring_rpc_server_guard.take() {
-rpc_server
-} else {
-ApiRpcServer::from_tunnel(
-RingTunnelListener::new(
-format!("ring://{}", RPC_RING_UUID.deref()).parse().unwrap(),
-),
-instance_manager.clone(),
-)
-.with_rx_timeout(None)
-.serve()
-.await
-.map_err(|e| e.to_string())?
-};
+let portal = url.and_then(|s| {
+let trimmed = s.trim().to_string();
+if trimmed.is_empty() {
+None
+} else {
+Some(trimmed)
+}
+});
+let (desired_kind, bind_url, connect_url) = if let Some(portal) = portal {
+let (bind_url, connect_url) = normalize_normal_mode_rpc_portal(&portal)?;
+(RpcServerKind::Tcp, Some(bind_url), Some(connect_url))
+} else {
+(RpcServerKind::Ring, None, None)
+};
let need_restart = rpc_server_guard
.as_ref()
.map(|x| x.kind != desired_kind || x.bind_url != bind_url)
.unwrap_or(true);
if need_restart {
*rpc_server_guard = None;
let tunnel: BoxedTunnelListener = match desired_kind {
RpcServerKind::Ring => Box::new(RingTunnelListener::new(
format!("ring://{}", RPC_RING_UUID.deref()).parse().unwrap(),
)),
RpcServerKind::Tcp => Box::new(TcpTunnelListener::new(
bind_url.clone().expect("tcp rpc must have bind url"),
)),
};
let rpc_server = ApiRpcServer::from_tunnel(tunnel, instance_manager.clone())
.with_rx_timeout(None)
.serve()
.await
.map_err(|e| e.to_string())?;
*rpc_server_guard = Some(RpcServer {
kind: desired_kind,
_server: rpc_server,
bind_url,
});
}
*instance_manager_guard = Some(instance_manager);
-*ring_rpc_server_guard = Some(rpc_server);
+client_url = connect_url.map(|u| u.to_string());
} else {
-*ring_rpc_server_guard = None;
+*rpc_server_guard = None;
}
let client_manager = tokio::time::timeout(
std::time::Duration::from_millis(1000),
-manager::GUIClientManager::new(url),
+manager::GUIClientManager::new(client_url),
)
.await
.map_err(|_| "connect remote rpc timed out".to_string())?
@@ -373,7 +452,7 @@ async fn init_rpc_connection(_app: AppHandle, url: Option<String>) -> Result<(),
.map_err(|e| format!("{:#}", e))?;
*client_manager_guard = Some(client_manager);
-if !normal_mode {
+if !is_normal_mode {
drop(WEB_CLIENT.write().await.take());
if let Some(instance_manager) = instance_manager_guard.take() {
instance_manager
@@ -406,11 +485,17 @@ async fn init_web_client(app: AppHandle, url: Option<String>) -> Result<(), Stri
let hooks = Arc::new(manager::GuiHooks { app: app.clone() });
-let web_client =
-web_client::run_web_client(url.as_str(), None, None, instance_manager, Some(hooks))
-.await
-.with_context(|| "Failed to initialize web client")
-.map_err(|e| format!("{:#}", e))?;
+let web_client = web_client::run_web_client(
+url.as_str(),
+None,
+None,
+false,
+instance_manager,
+Some(hooks),
+)
+.await
+.with_context(|| "Failed to initialize web client")
+.map_err(|e| format!("{:#}", e))?;
*web_client_guard = Some(web_client);
Ok(())
}
@@ -450,23 +535,26 @@ async fn get_log_dir_path(app: tauri::AppHandle) -> Result<String, String> {
#[cfg(not(target_os = "android"))]
fn toggle_window_visibility(app: &tauri::AppHandle) {
if let Some(window) = app.get_webview_window("main") {
-let visible = if window.is_visible().unwrap_or_default() {
-if window.is_minimized().unwrap_or_default() {
-let _ = window.unminimize();
-false
-} else {
-true
-}
-} else {
-let _ = window.show();
-false
-};
-if visible {
-let _ = window.hide();
-} else {
-let _ = window.set_focus();
-}
-let _ = set_dock_visibility(app.clone(), !visible);
+let visible = window.is_visible().unwrap_or_default();
+let minimized = window.is_minimized().unwrap_or_default();
+let focused = window.is_focused().unwrap_or_default();
+let should_show = !visible || minimized || !focused;
+if should_show {
+if !visible {
+let _ = window.show();
+}
+if minimized {
+let _ = window.unminimize();
+}
+if !focused {
+let _ = window.set_focus();
+}
+let _ = set_dock_visibility(app.clone(), true);
+} else {
+let _ = window.hide();
+let _ = set_dock_visibility(app.clone(), false);
+}
}
}
@@ -538,7 +626,7 @@ mod manager {
async fn post_remove_network_instances(&self, ids: &[uuid::Uuid]) -> Result<(), String> {
let client_manager = get_client_manager!()?;
client_manager
-.post_remove_network_instances_hook(&self.app, ids)
+.post_remote_remove_network_instances_hook(&self.app, ids)
.await
}
}
@@ -621,7 +709,9 @@ mod manager {
self.network_configs.remove(network_inst_id);
self.enabled_networks.remove(network_inst_id);
}
-self.save_configs(&app)
+self.save_configs(&app)?;
+self.save_enabled_networks(&app)?;
+Ok(())
}
async fn update_network_config_state(
@@ -754,7 +844,7 @@ mod manager {
cfg: &easytier::common::config::TomlConfigLoader,
) -> Result<(), String> {
let instance_id = cfg.get_id();
-app.emit("pre_run_network_instance", instance_id)
+app.emit("pre_run_network_instance", instance_id.to_string())
.map_err(|e| e.to_string())?;
#[cfg(target_os = "android")]
@@ -791,20 +881,21 @@ mod manager {
let app_clone = app.clone();
let instance_id_clone = *instance_id;
tokio::spawn(async move {
let instance_id_str = instance_id_clone.to_string();
loop {
match event_receiver.recv().await {
Ok(easytier::common::global_ctx::GlobalCtxEvent::DhcpIpv4Changed(_, _)) => {
-let _ = app_clone.emit("dhcp_ip_changed", instance_id_clone);
+let _ = app_clone.emit("dhcp_ip_changed", &instance_id_str);
}
Ok(easytier::common::global_ctx::GlobalCtxEvent::ProxyCidrsUpdated(_, _)) => {
-let _ = app_clone.emit("proxy_cidrs_updated", instance_id_clone);
+let _ = app_clone.emit("proxy_cidrs_updated", &instance_id_str);
}
Ok(_) => {}
Err(tokio::sync::broadcast::error::RecvError::Closed) => {
break;
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
-let _ = app_clone.emit("event_lagged", instance_id_clone);
+let _ = app_clone.emit("event_lagged", &instance_id_str);
event_receiver = event_receiver.resubscribe();
}
}
@@ -816,20 +907,29 @@ mod manager {
self.storage.enabled_networks.insert(*instance_id);
-app.emit("post_run_network_instance", instance_id)
+app.emit("post_run_network_instance", instance_id.to_string())
.map_err(|e| e.to_string())?;
Ok(())
}
-pub(super) async fn post_remove_network_instances_hook(
+pub(super) async fn post_remote_remove_network_instances_hook(
&self,
app: &AppHandle,
-_ids: &[uuid::Uuid],
+ids: &[uuid::Uuid],
) -> Result<(), String> {
self.storage
-.enabled_networks
-.retain(|id| !_ids.contains(id));
+.delete_network_configs(app.clone(), ids)
+.await
+.map_err(|e| e.to_string())?;
self.notify_vpn_stop_if_no_tun(app)?;
Ok(())
}
pub(super) async fn post_stop_network_instances_hook(
&self,
app: &AppHandle,
) -> Result<(), String> {
self.notify_vpn_stop_if_no_tun(app)?;
Ok(())
}
@@ -886,20 +986,26 @@ mod manager {
.network_configs
.get(&uuid)
.map(|i| i.value().1.clone());
-if config.is_none() {
+let Some(config) = config else {
continue;
-}
+};
let toml_config = config.gen_config()?;
self.pre_run_network_instance_hook(&app, &toml_config)
.await
.map_err(|e| anyhow::anyhow!(e))?;
client
.run_network_instance(
BaseController::default(),
RunNetworkInstanceRequest {
inst_id: None,
-config,
+config: Some(config),
overwrite: false,
},
)
.await?;
self.storage.enabled_networks.insert(uuid);
self.post_run_network_instance_hook(&app, &uuid)
.await
.map_err(|e| anyhow::anyhow!(e))?;
}
}
}
@@ -1053,7 +1159,7 @@ pub fn run_gui() -> std::process::ExitCode {
})
.build()
.map_err(|e| e.to_string())?;
-let Ok(_) = utils::init_logger(&config, true) else {
+let Ok(_) = log::init(&config, true) else {
return Ok(());
};
+1 -1
@@ -17,7 +17,7 @@
"createUpdaterArtifacts": false
},
"productName": "easytier-gui",
-"version": "2.5.0",
+"version": "2.6.0",
"identifier": "com.kkrainbow.easytier",
"plugins": {
"shell": {
+6
@@ -43,6 +43,7 @@ declare global {
const isWebClientConnected: typeof import('./composables/backend')['isWebClientConnected']
const listNetworkInstanceIds: typeof import('./composables/backend')['listNetworkInstanceIds']
const listenGlobalEvents: typeof import('./composables/event')['listenGlobalEvents']
const loadLastNetworkInstanceId: typeof import('./composables/config')['loadLastNetworkInstanceId']
const loadMode: typeof import('./composables/mode')['loadMode']
const mapActions: typeof import('pinia')['mapActions']
const mapGetters: typeof import('pinia')['mapGetters']
@@ -76,6 +77,7 @@ declare global {
const ref: typeof import('vue')['ref']
const resolveComponent: typeof import('vue')['resolveComponent']
const runNetworkInstance: typeof import('./composables/backend')['runNetworkInstance']
const saveLastNetworkInstanceId: typeof import('./composables/config')['saveLastNetworkInstanceId']
const saveMode: typeof import('./composables/mode')['saveMode']
const saveNetworkConfig: typeof import('./composables/backend')['saveNetworkConfig']
const sendConfigs: typeof import('./composables/backend')['sendConfigs']
@@ -91,6 +93,7 @@ declare global {
const shallowReadonly: typeof import('vue')['shallowReadonly']
const shallowRef: typeof import('vue')['shallowRef']
const storeToRefs: typeof import('pinia')['storeToRefs']
const syncMobileVpnService: typeof import('./composables/mobile_vpn')['syncMobileVpnService']
const toRaw: typeof import('vue')['toRaw']
const toRef: typeof import('vue')['toRef']
const toRefs: typeof import('vue')['toRefs']
@@ -165,6 +168,7 @@ declare module 'vue' {
readonly isWebClientConnected: UnwrapRef<typeof import('./composables/backend')['isWebClientConnected']>
readonly listNetworkInstanceIds: UnwrapRef<typeof import('./composables/backend')['listNetworkInstanceIds']>
readonly listenGlobalEvents: UnwrapRef<typeof import('./composables/event')['listenGlobalEvents']>
readonly loadLastNetworkInstanceId: UnwrapRef<typeof import('./composables/config')['loadLastNetworkInstanceId']>
readonly loadMode: UnwrapRef<typeof import('./composables/mode')['loadMode']>
readonly mapActions: UnwrapRef<typeof import('pinia')['mapActions']>
readonly mapGetters: UnwrapRef<typeof import('pinia')['mapGetters']>
@@ -198,6 +202,7 @@ declare module 'vue' {
readonly ref: UnwrapRef<typeof import('vue')['ref']>
readonly resolveComponent: UnwrapRef<typeof import('vue')['resolveComponent']>
readonly runNetworkInstance: UnwrapRef<typeof import('./composables/backend')['runNetworkInstance']>
readonly saveLastNetworkInstanceId: UnwrapRef<typeof import('./composables/config')['saveLastNetworkInstanceId']>
readonly saveMode: UnwrapRef<typeof import('./composables/mode')['saveMode']>
readonly saveNetworkConfig: UnwrapRef<typeof import('./composables/backend')['saveNetworkConfig']>
readonly sendConfigs: UnwrapRef<typeof import('./composables/backend')['sendConfigs']>
@@ -213,6 +218,7 @@ declare module 'vue' {
readonly shallowReadonly: UnwrapRef<typeof import('vue')['shallowReadonly']>
readonly shallowRef: UnwrapRef<typeof import('vue')['shallowRef']>
readonly storeToRefs: UnwrapRef<typeof import('pinia')['storeToRefs']>
readonly syncMobileVpnService: UnwrapRef<typeof import('./composables/mobile_vpn')['syncMobileVpnService']>
readonly toRaw: UnwrapRef<typeof import('vue')['toRaw']>
readonly toRef: UnwrapRef<typeof import('vue')['toRef']>
readonly toRefs: UnwrapRef<typeof import('vue')['toRefs']>
+82 -1
@@ -1,6 +1,6 @@
<script setup lang="ts">
import { computed, watch, onMounted, ref } from 'vue';
import type { Mode, ServiceMode, RemoteMode } from '~/composables/mode';
import type { Mode, ServiceMode, RemoteMode, NormalMode } from '~/composables/mode';
import { appConfigDir, appLogDir } from '@tauri-apps/api/path';
import { join } from '@tauri-apps/api/path';
import { getServiceStatus, type ServiceStatus } from '~/composables/backend';
@@ -15,6 +15,14 @@ const defaultLogDir = ref('')
const serviceStatus = ref<ServiceStatus>('NotInstalled')
const isServiceStatusLoaded = ref(false)
function normalizeRpcListenPort(port: unknown): number {
const defaultPort = 15999
const numericPort = typeof port === 'number' ? port : Number.parseInt(String(port ?? ''), 10)
if (Number.isNaN(numericPort))
return defaultPort
return Math.min(65535, Math.max(1, Math.floor(numericPort)))
}
onMounted(async () => {
defaultConfigDir.value = await join(await appConfigDir(), 'config.d')
defaultLogDir.value = await appLogDir()
@@ -26,6 +34,43 @@ const modeOptions = computed(() => [
{ label: t('mode.remote'), value: 'remote' },
]);
const normalMode = computed({
get: () => model.value.mode === 'normal' ? model.value as NormalMode : undefined,
set: (value) => {
if (value) {
model.value = value
}
}
})
const rpcListenOptions = computed(() => [
{ label: t('web.common.disable'), value: false },
{ label: t('web.common.enable'), value: true },
])
const rpcListenEnabled = computed<boolean>({
get: () => !!normalMode.value?.enable_rpc_port_listen,
set: (value) => {
if (!normalMode.value)
return
normalMode.value.enable_rpc_port_listen = value
},
})
const rpcListenPort = computed<string>({
get: () => String(normalMode.value?.rpc_listen_port ?? 15999),
set: (value) => {
if (!normalMode.value)
return
const trimmed = value.trim()
if (trimmed === '')
return
if (!/^\d+$/.test(trimmed))
return
normalMode.value.rpc_listen_port = Number.parseInt(trimmed, 10)
},
})
const serviceMode = computed({
get: () => model.value.mode === 'service' ? model.value as ServiceMode : undefined,
set: (value) => {
@@ -57,6 +102,24 @@ const statusColorClass = computed(() => {
}
})
watch(() => [normalMode.value?.enable_rpc_port_listen, normalMode.value?.rpc_listen_port], ([enabled, port]) => {
if (!normalMode.value)
return
if (!enabled) {
normalMode.value.rpc_portal = undefined
return
}
const normalizedPort = normalizeRpcListenPort(port)
if (normalMode.value.rpc_listen_port !== normalizedPort)
normalMode.value.rpc_listen_port = normalizedPort
const desiredPortal = `tcp://0.0.0.0:${normalizedPort}`
if (normalMode.value.rpc_portal !== desiredPortal)
normalMode.value.rpc_portal = desiredPortal
}, { immediate: true })
watch(() => model.value.mode, async (newMode, oldMode) => {
if (newMode === oldMode)
return
@@ -69,8 +132,12 @@ watch(() => model.value.mode, async (newMode, oldMode) => {
const oldModelValue = { ...model.value }
if (newMode === 'normal') {
const portal = normalMode.value?.rpc_portal?.trim()
model.value = {
...oldModelValue,
rpc_portal: portal || undefined,
enable_rpc_port_listen: normalMode.value?.enable_rpc_port_listen,
rpc_listen_port: normalMode.value?.rpc_listen_port,
mode: 'normal',
}
}
@@ -113,6 +180,20 @@ watch(() => model.value.mode, async (newMode, oldMode) => {
{{ t('mode.remote_description') }}
</div>
<div v-if="normalMode" class="flex flex-col gap-2">
<div class="flex items-center gap-2">
<label for="rpc-listen-toggle">{{ t('mode.enable_rpc_tcp_listen') }}</label>
<SelectButton id="rpc-listen-toggle" v-model="rpcListenEnabled" :options="rpcListenOptions" option-label="label"
option-value="value" />
</div>
<div v-if="rpcListenEnabled" class="flex flex-col gap-2">
<div class="flex items-center gap-2">
<label for="rpc-listen-port">{{ t('mode.rpc_listen_port') }}</label>
<InputText id="rpc-listen-port" v-model="rpcListenPort" class="flex-1" inputmode="numeric" />
</div>
</div>
</div>
<div v-if="serviceMode" class="flex flex-col gap-2">
<div class="flex items-center gap-2">
<label for="config-dir">{{ t('mode.config_dir') }}</label>
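The `normalizeRpcListenPort` helper above clamps the configured port into `[1, 65535]` and falls back to 15999 on non-numeric input, and the watcher then derives the `tcp://0.0.0.0:<port>` portal only while TCP listening is enabled. A standalone sketch of that combined behavior (`deriveRpcPortal` is a name introduced here for illustration; the clamp is copied from the diff):

```typescript
// Clamp the configured RPC listen port into [1, 65535]; fall back to
// the default 15999 when the input is not numeric.
function normalizeRpcListenPort(port: unknown): number {
  const defaultPort = 15999
  const numericPort = typeof port === 'number' ? port : Number.parseInt(String(port ?? ''), 10)
  if (Number.isNaN(numericPort))
    return defaultPort
  return Math.min(65535, Math.max(1, Math.floor(numericPort)))
}

// Build the portal string only when TCP listening is enabled, mirroring
// the watcher in the settings component.
function deriveRpcPortal(enabled: boolean, port: unknown): string | undefined {
  if (!enabled)
    return undefined
  return `tcp://0.0.0.0:${normalizeRpcListenPort(port)}`
}
```

So a disabled toggle clears the portal, an out-of-range port is clamped, and garbage input falls back to the default.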
+16 -11
@@ -1,5 +1,5 @@
import { invoke } from '@tauri-apps/api/core'
-import { Api, type NetworkTypes } from 'easytier-frontend-lib'
+import { Api, NetworkTypes } from 'easytier-frontend-lib'
import { GetNetworkMetasResponse } from 'node_modules/easytier-frontend-lib/dist/modules/api'
@@ -17,15 +17,16 @@ interface ServiceOptions {
export type ServiceStatus = "Running" | "Stopped" | "NotInstalled"
export async function parseNetworkConfig(cfg: NetworkConfig) {
-return invoke<string>('parse_network_config', { cfg })
+return invoke<string>('parse_network_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
export async function generateNetworkConfig(tomlConfig: string) {
-return invoke<NetworkConfig>('generate_network_config', { tomlConfig })
+const config = await invoke<NetworkConfig>('generate_network_config', { tomlConfig })
+return NetworkTypes.normalizeNetworkConfig(config)
}
export async function runNetworkInstance(cfg: NetworkConfig, save: boolean) {
-return invoke('run_network_instance', { cfg, save })
+return invoke('run_network_instance', { cfg: NetworkTypes.toBackendNetworkConfig(cfg), save })
}
export async function collectNetworkInfo(instanceId: string) {
@@ -57,20 +58,24 @@ export async function updateNetworkConfigState(instanceId: string, disabled: boo
}
export async function saveNetworkConfig(cfg: NetworkConfig) {
-return await invoke('save_network_config', { cfg })
+return await invoke('save_network_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
export async function validateConfig(cfg: NetworkConfig) {
-return await invoke<ValidateConfigResponse>('validate_config', { cfg })
+return await invoke<ValidateConfigResponse>('validate_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
export async function getConfig(instanceId: string) {
-return await invoke<NetworkConfig>('get_config', { instanceId })
+const config = await invoke<NetworkConfig>('get_config', { instanceId })
+return NetworkTypes.normalizeNetworkConfig(config)
}
export async function sendConfigs(enabledNetworks: string[]) {
-let networkList: NetworkConfig[] = JSON.parse(localStorage.getItem('networkList') || '[]');
-return await invoke('load_configs', { configs: networkList, enabledNetworks })
+const networkList: NetworkConfig[] = JSON.parse(localStorage.getItem('networkList') || '[]');
+return await invoke('load_configs', {
+configs: networkList.map((config) => NetworkTypes.toBackendNetworkConfig(NetworkTypes.normalizeNetworkConfig(config))),
+enabledNetworks
+})
}
export async function getNetworkMetas(instanceIds: string[]) {
@@ -89,8 +94,8 @@ export async function getServiceStatus() {
return await invoke<ServiceStatus>('get_service_status')
}
-export async function initRpcConnection(url?: string) {
-return await invoke('init_rpc_connection', { url })
+export async function initRpcConnection(isNormalMode: boolean, url?: string) {
+return await invoke('init_rpc_connection', { isNormalMode, url })
}
export async function isClientRunning() {
+20
@@ -0,0 +1,20 @@
/**
* Helpers for persisting application configuration state:
* save and load the app's various settings.
*/
/**
* Save the most recently used network instance ID.
* @param instanceId the network instance ID
*/
export function saveLastNetworkInstanceId(instanceId: string) {
localStorage.setItem('last_network_instance_id', instanceId)
}
/**
* Load the most recently used network instance ID.
* @returns the last-used network instance ID, or null if none is stored
*/
export function loadLastNetworkInstanceId(): string | null {
return localStorage.getItem('last_network_instance_id')
}
+48 -15
@@ -1,6 +1,7 @@
import { Event, listen } from "@tauri-apps/api/event";
import { type } from "@tauri-apps/plugin-os";
import { NetworkTypes } from "easytier-frontend-lib"
import { Utils } from "easytier-frontend-lib";
const EVENTS = Object.freeze({
SAVE_CONFIGS: 'save_configs',
@@ -14,42 +15,74 @@ const EVENTS = Object.freeze({
function onSaveConfigs(event: Event<NetworkTypes.NetworkConfig[]>) {
console.log(`Received event '${EVENTS.SAVE_CONFIGS}': ${event.payload}`);
-localStorage.setItem('networkList', JSON.stringify(event.payload));
+localStorage.setItem('networkList', JSON.stringify(event.payload.map((config) => NetworkTypes.normalizeNetworkConfig(config))));
}
-async function onPreRunNetworkInstance(event: Event<string>) {
+function normalizeInstanceIdPayload(payload: unknown): string {
+if (typeof payload === 'string') {
+return payload
+}
+if (payload && typeof payload === 'object') {
+const uuid = payload as Partial<Utils.UUID>
+if (
+typeof uuid.part1 === 'number'
+&& typeof uuid.part2 === 'number'
+&& typeof uuid.part3 === 'number'
+&& typeof uuid.part4 === 'number'
+) {
+return Utils.UuidToStr(uuid as Utils.UUID)
+}
+}
+if (payload == null) {
+return ''
+}
+const fallback = String(payload)
+return fallback === '[object Object]' ? '' : fallback
+}
+async function onPreRunNetworkInstance(event: Event<unknown>) {
+const instanceId = normalizeInstanceIdPayload(event.payload)
+console.log(`Received event '${EVENTS.PRE_RUN_NETWORK_INSTANCE}', raw payload:`, event.payload, 'normalized:', instanceId)
if (type() === 'android') {
-await prepareVpnService(event.payload);
+await prepareVpnService(instanceId);
}
}
-async function onPostRunNetworkInstance(event: Event<string>) {
+async function onPostRunNetworkInstance(event: Event<unknown>) {
+const instanceId = normalizeInstanceIdPayload(event.payload)
+console.log(`Received event '${EVENTS.POST_RUN_NETWORK_INSTANCE}', raw payload:`, event.payload, 'normalized:', instanceId)
if (type() === 'android') {
-await onNetworkInstanceChange(event.payload);
+await onNetworkInstanceChange(instanceId);
}
}
-async function onVpnServiceStop(event: Event<string>) {
-await onNetworkInstanceChange(event.payload);
+async function onVpnServiceStop(event: Event<unknown>) {
+console.log(`Received event '${EVENTS.VPN_SERVICE_STOP}', raw payload:`, event.payload)
+await syncMobileVpnService();
}
-async function onDhcpIpChanged(event: Event<string>) {
-console.log(`Received event '${EVENTS.DHCP_IP_CHANGED}' for instance: ${event.payload}`);
+async function onDhcpIpChanged(event: Event<unknown>) {
+const instanceId = normalizeInstanceIdPayload(event.payload)
+console.log(`Received event '${EVENTS.DHCP_IP_CHANGED}' for instance: ${instanceId}`);
if (type() === 'android') {
-await onNetworkInstanceChange(event.payload);
+await onNetworkInstanceChange(instanceId);
}
}
-async function onProxyCidrsUpdated(event: Event<string>) {
-console.log(`Received event '${EVENTS.PROXY_CIDRS_UPDATED}' for instance: ${event.payload}`);
+async function onProxyCidrsUpdated(event: Event<unknown>) {
+const instanceId = normalizeInstanceIdPayload(event.payload)
+console.log(`Received event '${EVENTS.PROXY_CIDRS_UPDATED}' for instance: ${instanceId}`);
if (type() === 'android') {
-await onNetworkInstanceChange(event.payload);
+await onNetworkInstanceChange(instanceId);
}
}
-async function onEventLagged(event: Event<string>) {
+async function onEventLagged(event: Event<unknown>) {
if (type() === 'android') {
-await onNetworkInstanceChange(event.payload);
+await onNetworkInstanceChange(normalizeInstanceIdPayload(event.payload));
}
}
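The `normalizeInstanceIdPayload` helper above accepts either a plain string or a UUID-parts object and always hands the listeners a string. A self-contained variant can be exercised standalone; here `uuidPartsToString` is a hypothetical stand-in for `Utils.UuidToStr`, whose exact formatting is library-defined and may differ:

```typescript
interface UuidParts { part1: number; part2: number; part3: number; part4: number }

// Hypothetical stand-in for Utils.UuidToStr: pack four 32-bit parts into
// the canonical 8-4-4-4-12 hex layout. The real helper may format differently.
function uuidPartsToString(u: UuidParts): string {
  const hex = [u.part1, u.part2, u.part3, u.part4]
    .map(p => (p >>> 0).toString(16).padStart(8, '0'))
    .join('')
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`
}

// Same shape as normalizeInstanceIdPayload above: accept a string, a
// UUID-parts object, or anything else, and always return a string.
function normalizeIdPayload(payload: unknown): string {
  if (typeof payload === 'string')
    return payload
  if (payload && typeof payload === 'object') {
    const u = payload as Partial<UuidParts>
    if ([u.part1, u.part2, u.part3, u.part4].every(p => typeof p === 'number'))
      return uuidPartsToString(u as UuidParts)
  }
  if (payload == null)
    return ''
  const fallback = String(payload)
  return fallback === '[object Object]' ? '' : fallback
}
```

The `'[object Object]'` check keeps unknown object payloads from leaking a useless string into the VPN-service code paths.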
+140 -26
@@ -1,7 +1,7 @@
import type { NetworkTypes } from 'easytier-frontend-lib'
import { addPluginListener } from '@tauri-apps/api/core'
import { Utils } from 'easytier-frontend-lib'
-import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'
+import { get_vpn_status, prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'
type Route = NetworkTypes.Route
@@ -24,6 +24,53 @@ const curVpnStatus: vpnStatus = {
dns: undefined,
}
async function requestVpnPermission() {
console.log('prepare vpn')
const prepare_ret = await prepare_vpn()
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) {
throw new Error(prepare_ret.errorMsg)
}
const granted = prepare_ret?.granted ?? true
if (!granted) {
console.info('vpn permission request was denied or dismissed')
}
return granted
}
function resetVpnConfigStatus() {
curVpnStatus.ipv4Addr = undefined
curVpnStatus.ipv4Cidr = undefined
curVpnStatus.routes = []
curVpnStatus.dns = undefined
}
function syncVpnStatusFromNative(status: Awaited<ReturnType<typeof get_vpn_status>>) {
curVpnStatus.running = status?.running ?? false
if (!curVpnStatus.running) {
resetVpnConfigStatus()
return
}
const ipv4WithCidr = status?.ipv4Addr
if (ipv4WithCidr?.length) {
const [ipv4Addr, cidr] = ipv4WithCidr.split('/')
curVpnStatus.ipv4Addr = ipv4Addr
const parsedCidr = Number(cidr)
curVpnStatus.ipv4Cidr = Number.isInteger(parsedCidr) ? parsedCidr : undefined
}
else {
curVpnStatus.ipv4Addr = undefined
curVpnStatus.ipv4Cidr = undefined
}
curVpnStatus.routes = [...(status?.routes ?? [])]
curVpnStatus.dns = status?.dns ?? undefined
}
async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
const start_time = Date.now()
while (curVpnStatus.running !== target_status) {
@@ -34,18 +81,19 @@ async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
}
}
-async function doStopVpn() {
-if (!curVpnStatus.running) {
+async function doStopVpn(force = false) {
+const wasRunning = curVpnStatus.running
+if (!force && !wasRunning) {
return
}
console.log('stop vpn')
const stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret)))
-await waitVpnStatus(false, 3)
+if (wasRunning) {
+await waitVpnStatus(false, 3)
+}
-curVpnStatus.ipv4Addr = undefined
-curVpnStatus.routes = []
-curVpnStatus.dns = undefined
+resetVpnConfigStatus()
}
async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[], dns?: string) {
@@ -54,19 +102,32 @@ async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[], dns?
}
console.log('start vpn service', ipv4Addr, cidr, routes, dns)
-const start_ret = await start_vpn({
+const request = {
ipv4Addr: `${ipv4Addr}/${cidr}`,
routes,
dns,
disallowedApplications: ['com.kkrainbow.easytier'],
mtu: 1300,
-})
+}
+let start_ret = await start_vpn(request)
console.log('start vpn response', JSON.stringify(start_ret))
if (start_ret?.errorMsg === 'need_prepare') {
const granted = await requestVpnPermission()
if (!granted) {
throw new Error('vpn_permission_denied')
}
start_ret = await start_vpn(request)
console.log('start vpn retry response', JSON.stringify(start_ret))
}
if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg)
}
await waitVpnStatus(true, 3)
curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.ipv4Cidr = cidr
curVpnStatus.routes = routes
curVpnStatus.dns = dns
}
@@ -75,13 +136,16 @@ async function onVpnServiceStart(payload: any) {
console.log('vpn service start', JSON.stringify(payload))
curVpnStatus.running = true
if (payload.fd) {
-setTunFd(payload.fd)
+await setTunFd(payload.fd).catch((e) => {
+console.error('set tun fd failed', e)
+})
}
}
async function onVpnServiceStop(payload: any) {
console.log('vpn service stop', JSON.stringify(payload))
curVpnStatus.running = false
resetVpnConfigStatus()
}
async function registerVpnServiceListener() {
@@ -135,15 +199,25 @@ export async function onNetworkInstanceChange(instanceId: string) {
}
if (!instanceId) {
-await doStopVpn()
+console.warn('vpn service skipped because instance id is empty')
+if (curVpnStatus.running) {
+await doStopVpn()
+}
return
}
const config = await getConfig(instanceId)
console.log('vpn service loaded config', instanceId, JSON.stringify({
no_tun: config.no_tun,
dhcp: config.dhcp,
enable_magic_dns: config.enable_magic_dns,
}))
if (config.no_tun) {
console.log('vpn service skipped because no_tun is enabled', instanceId)
return
}
const curNetworkInfo = (await collectNetworkInfo(instanceId)).info.map[instanceId]
if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) {
console.warn('vpn service skipped because network info is unavailable', instanceId, curNetworkInfo?.error_msg)
await doStopVpn()
return
}
@@ -170,27 +244,39 @@ export async function onNetworkInstanceChange(instanceId: string) {
const routes = getRoutesForVpn(curNetworkInfo?.routes, config)
-const dns = config.enable_magic_dns ? '100.100.100.101' : undefined;
+const dns = config.enable_magic_dns ? '100.100.100.101' : undefined
const ipChanged = virtual_ip !== curVpnStatus.ipv4Addr
const cidrChanged = network_length !== curVpnStatus.ipv4Cidr
const routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes)
const dnsChanged = dns != curVpnStatus.dns
const configChanged = ipChanged || cidrChanged || routesChanged || dnsChanged
const shouldStartVpn = !curVpnStatus.running
if (ipChanged || routesChanged || dnsChanged) {
if (shouldStartVpn || configChanged) {
console.info('vpn service virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip)
try {
await doStopVpn()
}
catch (e) {
console.error(e)
if (curVpnStatus.running) {
try {
await doStopVpn()
}
catch (e) {
console.error(e)
}
}
try {
await doStartVpn(virtual_ip, network_length, routes, dns)
}
catch (e) {
console.error('start vpn service failed, stop all other network insts.', e)
await runNetworkInstance(config, true); //on android config should always be saved
if (e instanceof Error && e.message === 'need_prepare') {
console.info('vpn permission is required before starting the Android VPN service')
return
}
if (e instanceof Error && e.message === 'vpn_permission_denied') {
console.info('vpn permission request was denied or dismissed')
return
}
console.error('start vpn service failed', e)
}
}
}
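The restart decision in the hunk above compares the new virtual IP, CIDR, routes, and DNS against the cached status, with routes compared by JSON serialization. A minimal standalone sketch of that rule (the `VpnStatus` shape and `needsRestart` name are illustrative, not part of the diff):

```typescript
// Sketch of the change detection above: routes are compared by JSON
// serialization, so any ordering or content change triggers a restart.
// The loose `!=` on dns mirrors the diff and treats null/undefined alike.
interface VpnStatus {
  running: boolean
  ipv4Addr?: string
  ipv4Cidr?: number
  routes?: string[]
  dns?: string
}

function needsRestart(
  cur: VpnStatus,
  ip: string,
  cidr: number,
  routes: string[],
  dns?: string,
): boolean {
  const changed =
    ip !== cur.ipv4Addr ||
    cidr !== cur.ipv4Cidr ||
    JSON.stringify(routes) !== JSON.stringify(cur.routes) ||
    dns != cur.dns
  // Start when not running at all, or restart when any field changed.
  return !cur.running || changed
}
```

Note that a VPN that is not running always restarts, even if the cached fields happen to match.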
@@ -202,6 +288,22 @@ async function isNoTunEnabled(instanceId: string | undefined) {
return (await getConfig(instanceId)).no_tun ?? false
}
async function findRunningTunInstanceId() {
const instanceIds = await listNetworkInstanceIds()
const runningIds = instanceIds.running_inst_ids.map(Utils.UuidToStr)
console.log('vpn service sync running instances', JSON.stringify(runningIds))
for (const instanceId of runningIds) {
if (await isNoTunEnabled(instanceId)) {
continue
}
return instanceId
}
return undefined
}
export async function initMobileVpnService() {
await registerVpnServiceListener()
}
@@ -210,10 +312,22 @@ export async function prepareVpnService(instanceId: string) {
if (await isNoTunEnabled(instanceId)) {
return
}
console.log('prepare vpn')
const prepare_ret = await prepare_vpn()
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) {
throw new Error(prepare_ret.errorMsg)
}
await requestVpnPermission()
}
export async function syncMobileVpnService() {
syncVpnStatusFromNative(await get_vpn_status())
const instanceId = await findRunningTunInstanceId()
if (instanceId) {
console.log('vpn service sync selected instance', instanceId)
await onNetworkInstanceChange(instanceId)
return
}
if (dhcpPollingTimer) {
clearTimeout(dhcpPollingTimer)
dhcpPollingTimer = null
}
await doStopVpn(true)
}
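`syncMobileVpnService` above prefers the first running instance whose config still has a TUN device and tears the VPN down otherwise. A small sketch of that selection policy, with the dependency on per-instance config injected as a callback (`pickTunInstance` and `noTunEnabled` are hypothetical names, not from the diff):

```typescript
// Sketch of the sync policy above: scan running instance ids in order and
// return the first one that actually owns a TUN device; undefined means
// no candidate, i.e. the VPN should be stopped.
async function pickTunInstance(
  runningIds: string[],
  noTunEnabled: (id: string) => Promise<boolean>,
): Promise<string | undefined> {
  for (const id of runningIds) {
    if (await noTunEnabled(id)) continue // instance runs in no_tun mode
    return id
  }
  return undefined
}
```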
+5 -1

@@ -4,8 +4,12 @@ export interface WebClientConfig {
config_server_url?: string
}
interface NormalMode extends WebClientConfig {
export interface NormalMode extends WebClientConfig {
mode: 'normal'
// if not provided will use ring tunnel rpc server
rpc_portal?: string
enable_rpc_port_listen?: boolean
rpc_listen_port?: number
}
export interface ServiceMode extends WebClientConfig {
+45 -22
@@ -9,10 +9,12 @@ import { exit } from '@tauri-apps/plugin-process'
import { I18nUtils, RemoteManagement, Utils } from "easytier-frontend-lib"
import type { MenuItem } from 'primevue/menuitem'
import { useTray } from '~/composables/tray'
import { initMobileVpnService } from '~/composables/mobile_vpn'
import { GUIRemoteClient } from '~/modules/api'
import { useToast, useConfirm } from 'primevue'
import { loadMode, saveMode, WebClientConfig, type Mode } from '~/composables/mode'
import { saveLastNetworkInstanceId, loadLastNetworkInstanceId } from '~/composables/config'
import ModeSwitcher from '~/components/ModeSwitcher.vue'
import { getServiceStatus } from '~/composables/backend'
@@ -155,13 +157,23 @@ async function initWithMode(mode: Mode) {
url = "tcp://" + mode.rpc_portal.replace("0.0.0.0", "127.0.0.1")
retrys = 5
break;
case 'normal':
url = mode.rpc_portal;
break;
}
for (let i = 0; i < retrys; i++) {
try {
await connectRpcClient(url)
await connectRpcClient(mode.mode === 'normal', url)
break;
} catch (e) {
if (i === retrys - 1) {
const errMsg = e instanceof Error ? e.message : String(e)
toast.add({
severity: 'error',
summary: t('error'),
detail: t('mode.rpc_connection_failed', { error: errMsg }),
life: 1000,
})
throw e;
}
console.error("Error connecting rpc client, retrying...", e)
@@ -178,9 +190,25 @@ async function initWithMode(mode: Mode) {
clientRunning.value = await isClientRunning()
}
onMounted(() => {
onMounted(async () => {
const cleanupFns: Array<() => void> = []
if (type() === 'android') {
try {
await initMobileVpnService()
console.error("easytier init vpn service done")
} catch (e: any) {
console.error("easytier init vpn service failed", e)
}
}
cleanupFns.push(await listenGlobalEvents())
currentMode.value = loadMode()
initWithMode(currentMode.value);
await initWithMode(currentMode.value);
onUnmounted(() => {
cleanupFns.forEach(unlisten => unlisten())
})
});
useTray(true)
@@ -190,6 +218,12 @@ const remoteClient = computed(() => new GUIRemoteClient());
const instanceId = ref<string | undefined>(undefined);
const clientRunning = ref(false);
watch(instanceId, (newVal) => {
if (newVal) {
saveLastNetworkInstanceId(newVal);
}
});
watch(clientRunning, async (newVal, oldVal) => {
if (!newVal && oldVal) {
if (manualDisconnect.value) {
@@ -197,6 +231,11 @@ watch(clientRunning, async (newVal, oldVal) => {
return
}
await reconnectClient()
} else if (newVal && !oldVal) {
const lastInstanceId = loadLastNetworkInstanceId();
if (lastInstanceId) {
instanceId.value = lastInstanceId;
}
}
})
@@ -320,27 +359,11 @@ const setting_menu_items: Ref<MenuItem[]> = ref([
},
])
async function connectRpcClient(url?: string) {
await initRpcConnection(url)
console.log("easytier rpc connection established")
async function connectRpcClient(isNormalMode: boolean, url?: string) {
await initRpcConnection(isNormalMode, url)
console.log("easytier rpc connection established, isNormalMode: ", isNormalMode)
}
onMounted(async () => {
if (type() === 'android') {
try {
await initMobileVpnService()
console.error("easytier init vpn service done")
} catch (e: any) {
console.error("easytier init vpn service failed", e)
}
}
const unlisten = await listenGlobalEvents()
onUnmounted(() => {
unlisten()
})
})
async function openConfigServerDialog() {
editingMode.value = JSON.parse(JSON.stringify(loadMode()))
configServerDialogVisible.value = true
+1 -1
@@ -8,7 +8,7 @@ repository = "https://github.com/EasyTier/EasyTier"
authors = ["kkrainbow"]
keywords = ["vpn", "p2p", "network", "easytier"]
categories = ["network-programming", "command-line-utilities"]
rust-version = "1.89.0"
rust-version = "1.93.0"
license-file = "LICENSE"
readme = "README.md"
+70 -1
@@ -29,6 +29,7 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
let method_descriptor_name = format!("{}MethodDescriptor", service.name);
let mut trait_methods = String::new();
let mut weak_impl_methods = String::new();
let mut enum_methods = String::new();
let mut list_enum_methods = String::new();
let mut client_methods = String::new();
@@ -40,6 +41,8 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
let mut match_output_type_methods = String::new();
let mut match_output_proto_type_methods = String::new();
let mut match_handle_methods = String::new();
// generate trait default method Xxx::json_call_method match branch
let mut match_trait_json_methods = String::new();
let mut match_method_try_from = String::new();
@@ -66,6 +69,21 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
)
.unwrap();
writeln!(
weak_impl_methods,
r#" async fn {method_name}(&self, ctrl: Self::Controller, input: {input_type}) -> {namespace}::error::Result<{output_type}> {{
let Some(service) = self.upgrade() else {{
return Err({namespace}::error::Error::Shutdown);
}};
service.{method_name}(ctrl, input).await
}}"#,
method_name = method.name,
input_type = method.input_type,
output_type = method.output_type,
namespace = NAMESPACE,
)
.unwrap();
ServiceGenerator::write_comments(&mut enum_methods, 4, &method.comments).unwrap();
writeln!(
enum_methods,
@@ -164,6 +182,22 @@ impl prost_build::ServiceGenerator for ServiceGenerator {
namespace = NAMESPACE,
)
.unwrap();
write!(
match_trait_json_methods,
r#" "{name}" | "{proto_name}" => {{
let req: {input_type} = ::serde_json::from_value(json).map_err(|e| {namespace}::error::Error::MalformatRpcPacket(format!("json error: {{}}", e)))?;
let resp = self.{typed_method}(ctrl, req).await?;
Ok(::serde_json::to_value(resp).map_err(|e| {namespace}::error::Error::MalformatRpcPacket(format!("json error: {{}}", e)))?)
}}
"#,
name = method.name,
proto_name = method.proto_name,
input_type = method.input_type,
typed_method = method.name,
namespace = NAMESPACE,
)
.unwrap();
}
ServiceGenerator::write_comments(&mut buf, 0, &service.comments).unwrap();
@@ -176,6 +210,29 @@ pub trait {name} {{
type Controller: {namespace}::controller::Controller;
{trait_methods}
async fn json_call_method(
&self,
ctrl: Self::Controller,
method_name: &str,
json: ::serde_json::Value,
) -> {namespace}::error::Result<::serde_json::Value> {{
match method_name {{
{match_trait_json_methods}
_ => Err({namespace}::error::Error::InvalidMethodIndex(0, method_name.to_string())),
}}
}}
}}
#[async_trait::async_trait]
impl<T> {name} for ::std::sync::Weak<T>
where
T: Send + Sync + 'static,
::std::sync::Arc<T>: {name},
{{
type Controller = <::std::sync::Arc<T> as {name}>::Controller;
{weak_impl_methods}
}}
/// A service descriptor for a `{name}`.
@@ -235,7 +292,7 @@ impl<C: {namespace}::controller::Controller> Clone for {client_name}Factory<C> {
impl<C> {namespace}::__rt::RpcClientFactory for {client_name}Factory<C> where C: {namespace}::controller::Controller {{
type Descriptor = {descriptor_name};
type ClientImpl = Box<dyn {name}<Controller = C> + Send + 'static>;
type ClientImpl = Box<dyn {name}<Controller = C> + Send + Sync + 'static>;
type Controller = C;
fn new(handler: impl {namespace}::handler::Handler<Descriptor = Self::Descriptor, Controller = Self::Controller>) -> Self::ClientImpl {{
@@ -250,6 +307,16 @@ impl<C> {namespace}::__rt::RpcClientFactory for {client_name}Factory<C> where C:
#[derive(Clone, Debug)]
pub struct {server_name}<A>(A) where A: {name} + Clone + Send + 'static;
impl<T> {server_name}<::std::sync::Weak<T>>
where
T: Send + Sync + 'static,
::std::sync::Arc<T>: {name},
{{
pub fn new_arc(service: ::std::sync::Arc<T>) -> {server_name}<::std::sync::Weak<T>> {{
{server_name}(::std::sync::Arc::downgrade(&service))
}}
}}
impl<A> {server_name}<A> where A: {name} + Clone + Send + 'static {{
/// Creates a new server instance that dispatches all calls to the supplied service.
pub fn new(service: A) -> {server_name}<A> {{
@@ -345,6 +412,7 @@ impl {namespace}::descriptor::MethodDescriptor for {method_descriptor_name} {{
proto_name = service.proto_name,
package = service.package,
trait_methods = trait_methods,
weak_impl_methods = weak_impl_methods,
enum_methods = enum_methods,
list_enum_methods = list_enum_methods,
client_own_methods = client_own_methods,
@@ -356,6 +424,7 @@ impl {namespace}::descriptor::MethodDescriptor for {method_descriptor_name} {{
match_output_type_methods = match_output_type_methods,
match_output_proto_type_methods = match_output_proto_type_methods,
match_handle_methods = match_handle_methods,
match_trait_json_methods = match_trait_json_methods,
namespace = NAMESPACE,
).unwrap();
}
+4 -1
@@ -1,6 +1,6 @@
[package]
name = "easytier-web"
version = "2.5.0"
version = "2.6.0"
edition = "2021"
description = "Config server for easytier. easytier-core gets config from this and web frontend use it as restful api server."
@@ -63,6 +63,9 @@ uuid = { version = "1.5.0", features = [
] }
chrono = { version = "0.4.37", features = ["serde"] }
openidconnect = { version = "4.0", default-features = false, features = ["accept-rfc3339-timestamps", "reqwest"] }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
subtle = "2.6"
mimalloc = { version = "*" }
+2 -2
@@ -20,7 +20,7 @@
"dependencies": {
"@primeuix/themes": "^1.2.3",
"@vueuse/core": "^11.1.0",
"axios": "^1.7.7",
"axios": "^1.13.5",
"chart.js": "^4.5.0",
"floating-vue": "^5.2",
"ip-num": "1.5.1",
@@ -41,7 +41,7 @@
"postcss-nested": "^7.0.2",
"tailwindcss": "=3.4.17",
"typescript": "~5.6.3",
"vite": "^5.4.10",
"vite": "^5.4.21",
"vite-plugin-dts": "^4.3.0",
"vue-tsc": "^2.1.10"
},
@@ -1,16 +1,17 @@
<script setup lang="ts">
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { SelectButton, Checkbox, InputText, InputNumber, AutoComplete, Panel, Divider, ToggleButton, Button, Password, Dialog } from 'primevue'
import { Checkbox, InputText, InputNumber, AutoComplete, Panel, Divider, ToggleButton, Button, Password, Dialog } from 'primevue'
import {
addRow,
DEFAULT_NETWORK_CONFIG,
NetworkConfig,
NetworkingMethod,
normalizeNetworkConfig,
removeRow
} from '../types/network'
import { defineProps, defineEmits, ref, onMounted, onUnmounted } from 'vue'
import { ref, onMounted, onUnmounted, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import UrlListInput from './UrlListInput.vue'
const props = defineProps<{
configInvalid?: boolean
@@ -26,63 +27,18 @@ const curNetwork = defineModel('curNetwork', {
const { t } = useI18n()
const networking_methods = ref([
{ value: NetworkingMethod.PublicServer, label: () => t('public_server') },
{ value: NetworkingMethod.Manual, label: () => t('manual') },
{ value: NetworkingMethod.Standalone, label: () => t('standalone') },
])
const protos: { [proto: string]: number } = { tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012 }
function searchUrlSuggestions(e: { query: string }): string[] {
const query = e.query
const ret = []
// if query match "^\w+:.*", then no proto prefix
if (query.match(/^\w+:.*/)) {
// if query is a valid url, then add to suggestions
try {
// eslint-disable-next-line no-new
new URL(query)
ret.push(query)
}
catch { }
}
else {
for (const proto in protos) {
let item = `${proto}://${query}`
// if query match ":\d+$", then no port suffix
if (!query.match(/:\d+$/)) {
item += `:${protos[proto]}`
}
ret.push(item)
}
}
return ret
}
const publicServerSuggestions = ref([''])
function searchPresetPublicServers(e: { query: string }) {
const presetPublicServers = [
'tcp://public.easytier.top:11010',
]
const query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter(item => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
publicServerSuggestions.value = ret
}
const peerSuggestions = ref([''])
function searchPeerSuggestions(e: { query: string }) {
peerSuggestions.value = searchUrlSuggestions(e)
const protos: { [proto: string]: number } = {
tcp: 11010,
udp: 11010,
wg: 11011,
ws: 11011,
wss: 11012,
quic: 11012,
faketcp: 11013,
http: 80,
https: 443,
txt: 0,
srv: 0,
}
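The expanded `protos` map above pairs each protocol with a default port, using `0` as a sentinel for port-less schemes (`txt`, `srv`). A sketch of how such a map can drive URL construction (the `buildUrl` helper is illustrative, not code from the diff):

```typescript
// Sketch (not part of the diff): building candidate URLs from a
// proto → default-port map. A default of 0 marks protocols whose URLs
// carry no port component at all.
const defaultPorts: { [proto: string]: number } = {
  tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012,
  quic: 11012, faketcp: 11013, http: 80, https: 443, txt: 0, srv: 0,
}

function buildUrl(proto: string, host: string, port?: number): string {
  const def = defaultPorts[proto] ?? 11010
  if (def === 0) return `${proto}://${host}` // port-less protocol
  return `${proto}://${host}:${port ?? def}`
}

// buildUrl('tcp', '0.0.0.0')     → 'tcp://0.0.0.0:11010'
// buildUrl('txt', 'example.com') → 'txt://example.com'
```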
const inetSuggestions = ref([''])
@@ -99,34 +55,6 @@ function searchInetSuggestions(e: { query: string }) {
}
}
const listenerSuggestions = ref([''])
function searchListenerSuggestions(e: { query: string }) {
const ret = []
for (const proto in protos) {
let item = `${proto}://0.0.0.0:`
// if query is a number, use it as port
if (e.query.match(/^\d+$/)) {
item += e.query
}
else {
item += protos[proto]
}
if (item.includes(e.query)) {
ret.push(item)
}
}
if (ret.length === 0) {
ret.push(e.query)
}
listenerSuggestions.value = ret
}
const exitNodesSuggestions = ref([''])
function searchExitNodesSuggestions(e: { query: string }) {
@@ -158,10 +86,12 @@ const bool_flags: BoolFlag[] = [
{ field: 'disable_quic_input', help: 'disable_quic_input_help' },
{ field: 'disable_p2p', help: 'disable_p2p_help' },
{ field: 'p2p_only', help: 'p2p_only_help' },
{ field: 'lazy_p2p', help: 'lazy_p2p_help' },
{ field: 'bind_device', help: 'bind_device_help' },
{ field: 'no_tun', help: 'no_tun_help' },
{ field: 'enable_exit_node', help: 'enable_exit_node_help' },
{ field: 'relay_all_peer_rpc', help: 'relay_all_peer_rpc_help' },
{ field: 'need_p2p', help: 'need_p2p_help' },
{ field: 'multi_thread', help: 'multi_thread_help' },
{ field: 'proxy_forward_by_system', help: 'proxy_forward_by_system_help' },
{ field: 'disable_encryption', help: 'disable_encryption_help' },
@@ -217,6 +147,16 @@ onMounted(() => {
});
}
});
function syncNormalizedNetwork(network: NetworkConfig | undefined): void {
if (!network) {
return
}
Object.assign(network, normalizeNetworkConfig(network))
}
watch(() => curNetwork.value, syncNormalizedNetwork, { immediate: true, deep: false })
</script>
<template>
@@ -263,17 +203,14 @@ onMounted(() => {
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 basis-5/12 grow">
<label for="nm">{{ t('networking_method') }}</label>
<SelectButton v-model="curNetwork.networking_method" :options="networking_methods"
:option-label="(v) => v.label()" option-value="value" />
<div class="items-center flex flex-row p-fluid gap-x-1">
<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
v-model="curNetwork.peer_urls" :placeholder="t('chips_placeholder', ['tcp://8.8.8.8:11010'])"
class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions" />
<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.PublicServer"
v-model="curNetwork.public_server_url" :suggestions="publicServerSuggestions" class="grow"
dropdown :complete-on-focus="false" @complete="searchPresetPublicServers" />
<div class="flex items-center">
<label for="initial_nodes">{{ t('initial_nodes') }}</label>
<span class="pi pi-question-circle ml-2 self-center" v-tooltip="t('initial_nodes_help')"></span>
</div>
<div class="items-center flex flex-col p-fluid gap-y-2">
<UrlListInput id="initial_nodes" v-model="curNetwork.peer_urls" :protos="protos"
defaultUrl="tcp://:11010" :add-label="t('add_initial_node')"
:placeholder="t('initial_node_placeholder')" />
</div>
</div>
</div>
@@ -345,10 +282,8 @@ onMounted(() => {
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 grow p-fluid">
<label for="listener_urls">{{ t('listener_urls') }}</label>
<AutoComplete id="listener_urls" v-model="curNetwork.listener_urls" :suggestions="listenerSuggestions"
class="w-full" dropdown :complete-on-focus="true"
:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])" multiple
@complete="searchListenerSuggestions" />
<UrlListInput v-model="curNetwork.listener_urls" :protos="protos" :add-label="t('add_listener_url')"
placeholder="0.0.0.0" />
</div>
</div>
@@ -371,6 +306,19 @@ onMounted(() => {
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 basis-5/12 grow">
<div class="flex">
<label for="instance_recv_bps_limit">{{ t('instance_recv_bps_limit') }}</label>
<span class="pi pi-question-circle ml-2 self-center"
v-tooltip="t('instance_recv_bps_limit_help')"></span>
</div>
<InputNumber id="instance_recv_bps_limit" v-model="curNetwork.instance_recv_bps_limit"
aria-describedby="instance_recv_bps_limit-help" :format="false"
:placeholder="t('instance_recv_bps_limit_placeholder')" :min="1" fluid />
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-col gap-2 basis-5/12 grow">
<div class="flex">
@@ -443,9 +391,8 @@ onMounted(() => {
<label for="mapped_listeners">{{ t('mapped_listeners') }}</label>
<span class="pi pi-question-circle ml-2 self-center" v-tooltip="t('mapped_listeners_help')"></span>
</div>
<AutoComplete id="mapped_listeners" v-model="curNetwork.mapped_listeners"
:placeholder="t('chips_placeholder', ['tcp://123.123.123.123:11223'])" class="w-full" multiple fluid
:suggestions="peerSuggestions" @complete="searchPeerSuggestions" />
<UrlListInput v-model="curNetwork.mapped_listeners" :protos="protos"
:add-label="t('add_mapped_listener')" />
</div>
</div>
@@ -206,27 +206,39 @@ const confirmDeleteNetwork = (event: any) => {
});
};
const saveAndRunNewNetwork = async () => {
if (!currentNetworkConfig.value) {
const saveAndRunNewNetwork = async (config?: NetworkTypes.NetworkConfig) => {
const cfg = config ?? currentNetworkConfig.value;
if (!cfg) {
return;
}
const targetInstanceId = instanceId.value ?? cfg.instance_id;
if (targetInstanceId && cfg.instance_id !== targetInstanceId) {
cfg.instance_id = targetInstanceId;
}
try {
await props.api.delete_network(instanceId.value!);
let ret = await props.api.run_network(currentNetworkConfig.value, currentNetworkControl.remoteSave.value);
console.debug("saveAndRunNewNetwork", ret);
if (networkIsDisabled.value) {
await props.api.save_config(cfg);
await props.api.update_network_instance_state(cfg.instance_id, false);
} else {
await props.api.run_network(cfg, currentNetworkControl.remoteSave.value);
}
delete networkMetaCache.value[currentNetworkConfig.value.instance_id];
await loadNetworkMetas([currentNetworkConfig.value.instance_id]);
delete networkMetaCache.value[cfg.instance_id];
await loadNetworkMetas([cfg.instance_id]);
selectedInstanceId.value = { uuid: currentNetworkConfig.value.instance_id };
selectedInstanceId.value = { uuid: cfg.instance_id };
await loadNetworkInstanceIds();
await loadCurrentNetworkInfo();
} catch (e: any) {
console.error(e);
toast.add({ severity: 'error', summary: 'Error', detail: 'Failed to create network, error: ' + JSON.stringify(e.response.data), life: 2000 });
toast.add({ severity: 'error', summary: 'Error', detail: 'Failed to run network, error: ' + JSON.stringify(e.response?.data ?? e), life: 2000 });
return;
}
emits('update');
// showCreateNetworkDialog.value = false;
isEditingNetwork.value = false; // Exit creation mode after successful network creation
isEditingNetwork.value = false;
}
const saveNetworkConfig = async () => {
@@ -388,18 +400,18 @@ const updateScreenWidth = () => {
const menuRef = ref();
const actionMenu: Ref<MenuItem[]> = ref([
{
label: t('web.device_management.edit_network'),
label: () => t('web.device_management.edit_network'),
icon: 'pi pi-pencil',
visible: () => !(networkIsDisabled.value ?? true) && currentNetworkControl.editable.value,
command: () => editNetwork()
},
{
label: t('web.device_management.export_config'),
label: () => t('web.device_management.export_config'),
icon: 'pi pi-download',
command: () => exportConfig()
},
{
label: t('web.device_management.delete_network'),
label: () => t('web.device_management.delete_network'),
icon: 'pi pi-trash',
class: 'p-error',
visible: () => currentNetworkControl.deletable.value,
@@ -539,13 +551,15 @@ onUnmounted(() => {
:label="t('web.device_management.edit_as_file')" iconPos="left" severity="secondary" />
<Button @click="importConfig" icon="pi pi-upload" :label="t('web.device_management.import_config')"
iconPos="left" severity="help" />
<Button v-if="networkIsDisabled" @click="saveNetworkConfig" icon="pi pi-save"
:label="t('web.device_management.save_config')" iconPos="left" severity="success" />
<Button v-if="networkIsDisabled" @click="saveNetworkConfig" :disabled="!currentNetworkConfig"
icon="pi pi-save" :label="t('web.device_management.save_config')" iconPos="left"
severity="success" />
</div>
<Divider />
<Config :cur-network="currentNetworkConfig" @run-network="saveAndRunNewNetwork"></Config>
<Config :cur-network="currentNetworkConfig" :config-invalid="!currentNetworkConfig"
@run-network="saveAndRunNewNetwork"></Config>
</div>
<!-- Network Status (for running networks) -->
@@ -183,6 +183,12 @@ const myNodeInfoChips = computed(() => {
if (!my_node_info)
return chips
// peer id
chips.push({
label: `Peer ID: ${my_node_info.peer_id}`,
icon: '',
} as Chip)
// TUN Device Name
const dev_name = props.curNetworkInst.detail?.dev_name
if (dev_name) {
@@ -0,0 +1,232 @@
<script setup lang="ts">
import { AutoComplete, Button, Dialog, InputNumber, InputText } from 'primevue'
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { computed, onMounted, onUnmounted, ref, watch } from 'vue'
import { useI18n } from 'vue-i18n'
const props = defineProps<{
placeholder?: string
protos: { [proto: string]: number }
}>()
const { t } = useI18n()
const url = defineModel<string>({ required: true })
const editing = ref(false)
const container = ref<HTMLElement | null>(null)
const internalCompact = ref(false)
const hostFocused = ref(false)
onMounted(() => {
if (container.value) {
const observer = new ResizeObserver(entries => {
for (const entry of entries) {
internalCompact.value = entry.contentRect.width < 400
}
})
observer.observe(container.value)
onUnmounted(() => {
observer.disconnect()
})
}
})
const parseUrl = (val: string | null | undefined) => {
const getValidPort = (portStr: string, proto: string) => {
const p = parseInt(portStr)
return isNaN(p) ? (props.protos[proto] ?? 11010) : p
}
const parseByPattern = (input: string) => {
const trimmed = input.trim()
if (!trimmed) {
return null
}
const match = trimmed.match(/^(\w+):\/\/(.*)$/)
const proto = match ? match[1] : 'tcp'
const rest = match ? match[2] : trimmed
const authority = rest.split(/[/?#]/)[0]
if (!authority) {
return null
}
const hostAndMaybePort = authority.includes('@') ? authority.slice(authority.lastIndexOf('@') + 1) : authority
if (hostAndMaybePort.startsWith('[')) {
const ipv6End = hostAndMaybePort.indexOf(']')
if (ipv6End > 0) {
const host = hostAndMaybePort.slice(0, ipv6End + 1)
const remain = hostAndMaybePort.slice(ipv6End + 1)
const port = remain.startsWith(':') ? getValidPort(remain.slice(1), proto) : (props.protos[proto] ?? 11010)
return { proto, host, port }
}
}
const portMatch = hostAndMaybePort.match(/^(.*):(\d+)$/)
const host = portMatch ? portMatch[1] : hostAndMaybePort
const port = portMatch ? parseInt(portMatch[2]) : (props.protos[proto] ?? 11010)
return { proto, host, port }
}
if (!val) {
return { proto: 'tcp', host: '', port: props.protos['tcp'] ?? 11010 }
}
const parsedByPattern = parseByPattern(val)
if (parsedByPattern) {
return parsedByPattern
}
return { proto: 'tcp', host: '', port: 11010 }
}
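The `parseUrl` logic above strips an optional scheme, drops userinfo, special-cases bracketed IPv6 hosts, and falls back to per-protocol default ports. A simplified standalone sketch of that authority parsing (the `DEFAULTS` table is truncated for illustration, and the fallback on a missing port match is less defensive than the component's `getValidPort`):

```typescript
// Simplified sketch of the authority parsing above: scheme is optional
// (default 'tcp'), userinfo before '@' is dropped, '[…]' IPv6 hosts keep
// their brackets, and a missing port falls back to the protocol default.
const DEFAULTS: { [proto: string]: number } = { tcp: 11010, wg: 11011 }

function parse(input: string): { proto: string, host: string, port: number } {
  const m = input.trim().match(/^(\w+):\/\/(.*)$/)
  const proto = m ? m[1] : 'tcp'
  const rest = m ? m[2] : input.trim()
  const authority = rest.split(/[/?#]/)[0]
  const hostPort = authority.includes('@')
    ? authority.slice(authority.lastIndexOf('@') + 1)
    : authority
  if (hostPort.startsWith('[')) {
    const end = hostPort.indexOf(']')
    const host = hostPort.slice(0, end + 1)
    const remain = hostPort.slice(end + 1)
    const port = remain.startsWith(':')
      ? parseInt(remain.slice(1))
      : (DEFAULTS[proto] ?? 11010)
    return { proto, host, port }
  }
  const pm = hostPort.match(/^(.*):(\d+)$/)
  return {
    proto,
    host: pm ? pm[1] : hostPort,
    port: pm ? parseInt(pm[2]) : (DEFAULTS[proto] ?? 11010),
  }
}
```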
const internalValue = ref(parseUrl(url.value))
const defaultHost = '0.0.0.0'
const buildUrlValue = (value: { proto: string, host: string, port: number }, forceDefaultHost = false) => {
const proto = value.proto || 'tcp'
const rawHost = (value.host ?? '').trim()
const host = rawHost || (forceDefaultHost ? defaultHost : '')
if (!host) {
return null
}
let port = value.port
if (isNaN(parseInt(port as any))) {
port = props.protos[proto] ?? 11010
}
if (props.protos[proto] === 0) {
return `${proto}://${host}`
}
return `${proto}://${host}:${port}`
}
const syncUrlFromInternal = (forceDefaultHost = false) => {
const nextUrl = buildUrlValue(internalValue.value, forceDefaultHost)
if (!nextUrl || nextUrl === url.value) {
return
}
url.value = nextUrl
}
const onHostBlur = () => {
hostFocused.value = false
syncUrlFromInternal(true)
}
const onHostFocus = () => {
hostFocused.value = true
}
const onDialogConfirm = () => {
syncUrlFromInternal(true)
editing.value = false
}
const isNoPortProto = computed(() => {
return props.protos[internalValue.value.proto] === 0
})
// Sync from external
watch(() => url.value, (newVal) => {
if (hostFocused.value) {
return
}
const parsed = parseUrl(newVal)
const internalHost = internalValue.value.host ?? ''
const sameHost = parsed.host === internalHost || (!internalHost.trim() && parsed.host === defaultHost)
if (parsed.proto !== internalValue.value.proto ||
!sameHost ||
parsed.port !== internalValue.value.port) {
internalValue.value = parsed
}
})
// Sync to external
watch(internalValue, () => {
syncUrlFromInternal(false)
}, { deep: true })
const protoOptions = computed(() => Object.keys(props.protos))
const filteredProtos = ref<string[]>([])
const searchProtos = (event: { query: string }) => {
if (!event.query.trim().length) {
filteredProtos.value = [...protoOptions.value]
} else {
filteredProtos.value = protoOptions.value.filter((proto) => {
return proto.toLowerCase().startsWith(event.query.toLowerCase())
})
}
}
const onProtoChange = (newProto: string) => {
const oldProto = internalValue.value.proto
const oldDefault = props.protos[oldProto]
const newDefault = props.protos[newProto]
if (oldDefault !== undefined && internalValue.value.port === oldDefault && newDefault !== undefined) {
internalValue.value.port = newDefault
}
internalValue.value.proto = newProto
}
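`onProtoChange` above only rewrites the port when it still equals the old protocol's default, so a user-customized port survives protocol switches. The rule in isolation (the `nextPort` helper is illustrative, not code from the diff):

```typescript
// Sketch of the proto-change rule above: follow the new protocol's default
// only when the current port is still the old protocol's default.
const PORTS: { [p: string]: number } = { tcp: 11010, wss: 11012 }

function nextPort(oldProto: string, newProto: string, port: number): number {
  const oldDef = PORTS[oldProto]
  const newDef = PORTS[newProto]
  return (oldDef !== undefined && port === oldDef && newDef !== undefined)
    ? newDef
    : port // custom or unknown port: keep it
}

// nextPort('tcp', 'wss', 11010) → 11012  (default follows the protocol)
// nextPort('tcp', 'wss', 9000)  → 9000   (custom port preserved)
```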
</script>
<template>
<div ref="container" class="w-full">
<InputGroup v-if="!internalCompact" class="w-full">
<AutoComplete :model-value="internalValue.proto" :suggestions="filteredProtos" dropdown
class="max-w-32 proto-autocomplete-in-group" @complete="searchProtos"
@update:model-value="onProtoChange" />
<InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="grow"
@focus="onHostFocus" @blur="onHostBlur" />
<template v-if="!isNoPortProto">
<InputGroupAddon>
<span style="font-weight: bold">:</span>
</InputGroupAddon>
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="max-w-24"
fluid />
</template>
<slot name="actions"></slot>
</InputGroup>
<div v-else class="flex justify-between items-center p-2 border rounded w-full">
<span class="truncate mr-2">{{ url }}</span>
<div class="flex items-center">
<Button icon="pi pi-pencil" class="p-button-sm p-button-text" @click="editing = true" />
<slot name="actions"></slot>
</div>
</div>
<Dialog v-model:visible="editing" modal :header="placeholder" :style="{ width: '90vw', maxWidth: '500px' }">
<div class="flex flex-col gap-4 py-4">
<div class="flex flex-col gap-2">
<label>{{ t('tunnel_proto') }}</label>
<AutoComplete :model-value="internalValue.proto" :suggestions="filteredProtos" dropdown fluid
@complete="searchProtos" @update:model-value="onProtoChange" />
</div>
<div class="flex flex-col gap-2">
<label>{{ t('web.common.address') || 'Address' }}</label>
<InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="w-full"
@focus="onHostFocus" @blur="onHostBlur" />
</div>
<div v-if="!isNoPortProto" class="flex flex-col gap-2">
<label>{{ t('port') }}</label>
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="w-full" />
</div>
</div>
<template #footer>
<Button :label="t('web.common.confirm') || 'Done'" icon="pi pi-check" @click="onDialogConfirm"
autofocus />
</template>
</Dialog>
</div>
</template>
<style scoped>
.proto-autocomplete-in-group,
.proto-autocomplete-in-group :deep(.p-autocomplete-input),
.proto-autocomplete-in-group :deep(.p-autocomplete-dropdown) {
border-top-right-radius: 0 !important;
border-bottom-right-radius: 0 !important;
}
.proto-autocomplete-in-group :deep(.p-autocomplete-dropdown) {
border-right: 0 !important;
}
</style>
@@ -0,0 +1,38 @@
<script setup lang="ts">
import { Button } from 'primevue'
import UrlInput from './UrlInput.vue'
const props = defineProps<{
protos: { [proto: string]: number }
addLabel: string
placeholder?: string
defaultUrl?: string
}>()
const list = defineModel<string[]>({ required: true })
const addUrl = () => {
list.value.push(props.defaultUrl || 'tcp://0.0.0.0:11010')
}
const removeUrl = (index: number) => {
list.value.splice(index, 1)
}
</script>
<template>
<div class="flex flex-col gap-y-2 w-full">
<div v-for="(_, index) in list" :key="index" class="flex gap-2 items-center w-full">
<UrlInput v-model="list[index]" :protos="protos" :placeholder="placeholder">
<template #actions>
<Button icon="pi pi-trash" severity="danger" text rounded @click="removeUrl(index)" />
</template>
</UrlInput>
</div>
<div class="flex justify-center items-center w-full h-10 border-2 border-dashed border-surface-300 dark:border-surface-600 rounded-lg cursor-pointer hover:border-primary hover:bg-surface-50 dark:hover:bg-surface-800 transition-colors duration-200 gap-2 text-surface-500 dark:text-surface-400"
@click="addUrl">
<i class="pi pi-plus text-sm"></i>
<span class="text-sm font-medium">{{ addLabel }}</span>
</div>
</div>
</template>
+42 -2
@@ -3,6 +3,14 @@ networking_method: 网络方式
public_server: 公共服务器
manual: 手动
standalone: 独立
initial_nodes: 初始节点
initial_nodes_help: |
EasyTier 不分服务端/客户端。
• 填“初始节点” = 插上网线,直接加入已有网络。
• 留空 = 节点独立启动,等别人来连,或你后续手动连。
• 无论直接还是间接连通(通过其他节点搭桥),都能组网互通。
初始节点可以用自己的,也可以用别人分享的。
initial_node_placeholder: 例如:node.example.com
virtual_ipv4: 虚拟IPv4地址
virtual_ipv4_dhcp: DHCP
network_name: 网络名称
@@ -18,12 +26,17 @@ advanced_settings: 高级设置
basic_settings: 基础设置
listener_urls: 监听地址
rpc_port: RPC端口
port: 端口
rpc_portal_whitelists: RPC白名单
config_network: 配置网络
running: 运行中
error_msg: 错误信息
detail: 详情
add_new_network: 添加新网络
add_peer_url: 添加节点
add_initial_node: 添加初始节点
add_listener_url: 添加监听地址
add_mapped_listener: 添加监听映射
del_cur_network: 删除当前网络
select_network: 选择网络
network_instances: 网络实例
@@ -104,11 +117,14 @@ disable_quic_input: 禁用 QUIC 输入
disable_quic_input_help: 禁用 QUIC 入站流量,其他开启 QUIC 代理的节点仍然使用 TCP 连接到本节点。
disable_p2p: 禁用 P2P
disable_p2p_help: 禁用 P2P 模式,所有流量通过手动指定的服务器中转
disable_p2p_help: 禁用普通自动 P2P。开启 need-p2p 的节点仍可与当前节点建立 P2P
p2p_only: 仅 P2P
p2p_only_help: 仅与已经建立P2P连接的对等节点通信,不通过其他节点中转。
lazy_p2p: 延迟 P2P
lazy_p2p_help: 仅在实际流量需要某个对等节点时才尝试建立 P2P。开启 need-p2p 的节点仍会被主动连接。
bind_device: 仅使用物理网卡
bind_device_help: 仅使用物理网卡,避免 EasyTier 通过其他虚拟网建立连接。
@@ -123,6 +139,9 @@ relay_all_peer_rpc_help: |
允许转发所有对等节点的RPC数据包,即使对等节点不在转发网络白名单中。
这可以帮助白名单外网络中的对等节点建立P2P连接。
need_p2p: 需要 P2P
need_p2p_help: 即使其他节点启用了 lazy p2p,也要求它们主动与当前节点建立 P2P 连接。
multi_thread: 启用多线程
multi_thread_help: 使用多线程运行时
@@ -130,7 +149,7 @@ proxy_forward_by_system: 系统转发
proxy_forward_by_system_help: 通过系统内核转发子网代理数据包,禁用内置NAT
disable_encryption: 禁用加密
disable_encryption_help: 禁用对等节点通信的加密,默认为false,必须与对等节点相同
disable_encryption_help: 禁用对等节点通信的加密。注意:默认启用加密,若勾选此项则关闭,必须与对等节点设置一致。
disable_tcp_hole_punching: 禁用TCP打洞
disable_tcp_hole_punching_help: 禁用TCP打洞功能
@@ -177,6 +196,12 @@ mtu_help: |
TUN设备的MTU,默认为非加密时为1380,加密时为1360。范围:400-1380
mtu_placeholder: 留空为默认值1380
instance_recv_bps_limit: 实例接收限速
instance_recv_bps_limit_help: |
限制当前实例整体入站流量的总接收速率,单位为字节每秒。
留空表示不限速。
instance_recv_bps_limit_placeholder: 留空表示不限速
mapped_listeners: 监听映射
mapped_listeners_help: |
手动指定监听器的公网地址,其他节点可以使用该地址连接到本节点。
@@ -242,6 +267,7 @@ web:
captcha: 验证码
back_to_login: 返回登录
login: 登录
sso_login: "SSO 登录"
register:
title: 注册
@@ -260,6 +286,9 @@ web:
logout: 退出登录
language: 语言
change_password: 修改密码
change_password_now: 立即修改密码
default_password_warning: 当前账号仍在使用系统默认密码。为保障安全,请部署完成后立即修改密码。
password_changed_relogin: 密码已修改,请重新登录。
device:
list: 设备列表
@@ -334,6 +363,14 @@ web:
success: 成功
warning: 警告
info: 提示
password_empty: 密码不能为空
password_min_length: 密码至少需要 8 位
password_too_weak: 密码强度不足
password_mismatch: 两次输入的密码不一致
password_strength_hint: 密码至少 8 位,且需包含大小写字母、数字、特殊字符中的至少 2 类
enable: 开启
disable: 关闭
address: 地址
settings:
title: 设置
@@ -350,6 +387,8 @@ mode:
switch_mode: 切换模式
config_dir: 配置目录
rpc_portal: RPC端口
enable_rpc_tcp_listen: 开启 RPC 端口监听(TCP)
rpc_listen_port: RPC 监听端口
log_level: 日志级别
log_dir: 日志目录
remote_rpc_address: 远程RPC地址
@@ -370,6 +409,7 @@ mode:
stop_service_success: 服务停止成功
remote_rpc_address_empty: 远程RPC地址不能为空
service_config_empty: 服务配置不能为空
rpc_connection_failed: "RPC 连接失败:{error}"
config-server:
title: 配置服务器
+42 -2
@@ -3,6 +3,14 @@ networking_method: Networking Method
public_server: Public Server
manual: Manual
standalone: Standalone
initial_nodes: Initial Nodes
initial_nodes_help: |
EasyTier does not distinguish between server and client roles.
• Filling in Initial Nodes = plugging in the cable and joining an existing network.
• Leaving it empty = the node starts alone until others connect to it, or you connect it later yourself.
• Direct or indirect connectivity, including through relay nodes, can form one network.
Initial nodes can be your own nodes or ones shared by others.
initial_node_placeholder: "Example: node.example.com"
virtual_ipv4: Virtual IPv4
virtual_ipv4_dhcp: DHCP
network_name: Network Name
@@ -18,12 +26,17 @@ advanced_settings: Advanced Settings
basic_settings: Basic Settings
listener_urls: Listener URLs
rpc_port: RPC Port
port: Port
rpc_portal_whitelists: RPC Whitelist
config_network: Config Network
running: Running
error_msg: Error Message
detail: Detail
add_new_network: New Network
add_peer_url: Add Peer
add_initial_node: Add Initial Node
add_listener_url: Add Listener
add_mapped_listener: Add Mapped Listener
del_cur_network: Delete Current Network
select_network: Select Network
network_instances: Network Instances
@@ -103,11 +116,14 @@ disable_quic_input: Disable QUIC Input
disable_quic_input_help: Disable inbound QUIC traffic, while nodes with QUIC proxy enabled continue to connect using TCP.
disable_p2p: Disable P2P
disable_p2p_help: Disable P2P mode; route all traffic through a manually specified relay server.
disable_p2p_help: Disable ordinary automatic P2P. Nodes with need-p2p enabled can still establish P2P with this node.
p2p_only: P2P Only
p2p_only_help: Only communicate with peers that have already established P2P connections, do not relay through other nodes.
lazy_p2p: Lazy P2P
lazy_p2p_help: Only try to establish P2P when traffic actually targets a peer. Peers with need-p2p enabled are still connected proactively.
bind_device: Bind to Physical Device Only
bind_device_help: Use only the physical network interface to prevent EasyTier from connecting via virtual networks.
@@ -122,6 +138,9 @@ relay_all_peer_rpc_help: |
Relay all peer rpc packets, even if the peer is not in the relay network whitelist.
This can help peers not in relay network whitelist to establish p2p connection.
need_p2p: Need P2P
need_p2p_help: Ask other peers to proactively establish P2P connections to this node even when they enable lazy P2P.
multi_thread: Multi Thread
multi_thread_help: Use multi-thread runtime
@@ -129,7 +148,7 @@ proxy_forward_by_system: System Forward
proxy_forward_by_system_help: Forward packet to proxy networks via system kernel, disable internal nat for network proxy
disable_encryption: Disable Encryption
disable_encryption_help: Disable encryption for peers communication, default is false, must be same with peers
disable_encryption_help: Disable encryption for peers communication. Encryption is enabled by default, this option must be same with peers.
disable_tcp_hole_punching: Disable TCP Hole Punching
disable_tcp_hole_punching_help: Disable tcp hole punching
@@ -177,6 +196,12 @@ mtu_help: |
MTU of the TUN device, default is 1380 for non-encryption, 1360 for encryption. Range:400-1380
mtu_placeholder: Leave blank as default value 1380
instance_recv_bps_limit: Instance Receive Limit
instance_recv_bps_limit_help: |
Limit the total receive bandwidth for the whole instance. Unit: bytes per second.
Leave blank for no limit.
instance_recv_bps_limit_placeholder: Leave blank for no limit
mapped_listeners: Map Listeners
mapped_listeners_help: |
Manually specify the public address of the listener, other nodes can use this address to connect to this node.
@@ -242,6 +267,7 @@ web:
captcha: Captcha
back_to_login: Back to Login
login: Login
sso_login: "SSO Login"
register:
title: Register
@@ -260,6 +286,9 @@ web:
logout: Logout
language: Language
change_password: Change Password
change_password_now: Change Password Now
default_password_warning: This account is still using the default password. Change it immediately after deployment to keep your instance secure.
password_changed_relogin: Password changed. Please log in again.
device:
list: Device List
@@ -334,6 +363,14 @@ web:
success: Success
warning: Warning
info: Info
password_empty: Password cannot be empty
password_min_length: Password must be at least 8 characters long
password_too_weak: Password is too weak
password_mismatch: Passwords do not match
password_strength_hint: Password must be at least 8 characters and include at least 2 of uppercase letters, lowercase letters, numbers, or special characters
enable: Enable
disable: Disable
address: Address
settings:
title: Settings
@@ -350,6 +387,8 @@ mode:
switch_mode: Switch Mode
config_dir: Config Dir
rpc_portal: RPC Portal
enable_rpc_tcp_listen: Enable RPC port listening (TCP)
rpc_listen_port: RPC Listen Port
log_level: Log Level
log_dir: Log Dir
remote_rpc_address: Remote RPC Address
@@ -370,6 +409,7 @@ mode:
stop_service_success: Service stopped successfully
remote_rpc_address_empty: Remote RPC Address cannot be empty
service_config_empty: Service Config cannot be empty
rpc_connection_failed: "RPC connection failed: {error}"
config-server:
title: Config Server
+3 -1
@@ -49,4 +49,6 @@
.v-popper__inner {
white-space: pre-wrap;
max-width: 32rem;
line-height: 1.5;
}
+54 -4
@@ -6,6 +6,14 @@ export enum NetworkingMethod {
Standalone = 2,
}
export interface SecureModeConfig {
enabled: boolean
// Keep protocol compatibility with backend/import-export flows even though the GUI
// does not render secure-mode or credential inputs.
local_private_key?: string
local_public_key?: string
}
export interface NetworkConfig {
instance_id: string
@@ -14,7 +22,9 @@ export interface NetworkConfig {
network_length: number
hostname?: string
network_name: string
network_secret: string
network_secret?: string
credential_file?: string
secure_mode?: SecureModeConfig
networking_method: NetworkingMethod
@@ -43,10 +53,12 @@ export interface NetworkConfig {
disable_quic_input?: boolean
disable_p2p?: boolean
p2p_only?: boolean
lazy_p2p?: boolean
bind_device?: boolean
no_tun?: boolean
enable_exit_node?: boolean
relay_all_peer_rpc?: boolean
need_p2p?: boolean
multi_thread?: boolean
proxy_forward_by_system?: boolean
disable_encryption?: boolean
@@ -66,6 +78,7 @@ export interface NetworkConfig {
socks5_port: number
mtu: number | null
instance_recv_bps_limit: number | null
mapped_listeners: string[]
enable_magic_dns?: boolean
@@ -83,10 +96,10 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
network_length: 24,
network_name: 'easytier',
network_secret: '',
credential_file: '',
networking_method: NetworkingMethod.PublicServer,
public_server_url: 'tcp://public.easytier.top:11010',
networking_method: NetworkingMethod.Manual,
public_server_url: '',
peer_urls: [],
proxy_cidrs: [],
@@ -114,10 +127,12 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
disable_quic_input: false,
disable_p2p: false,
p2p_only: false,
lazy_p2p: false,
bind_device: true,
no_tun: false,
enable_exit_node: false,
relay_all_peer_rpc: false,
need_p2p: false,
multi_thread: true,
proxy_forward_by_system: false,
disable_encryption: false,
@@ -132,6 +147,7 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
enable_socks5: false,
socks5_port: 1080,
mtu: null,
instance_recv_bps_limit: null,
mapped_listeners: [],
enable_magic_dns: false,
enable_private_mode: false,
@@ -139,6 +155,39 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
}
}
function cleanPeerUrls(urls: string[] | undefined): string[] {
return (urls ?? []).map((url) => url.trim()).filter((url) => url.length > 0)
}
export function normalizeNetworkConfig(config: NetworkConfig): NetworkConfig {
const normalized: NetworkConfig = {
...config,
peer_urls: cleanPeerUrls(config.peer_urls),
}
const publicServerUrl = normalized.public_server_url?.trim() ?? ''
switch (normalized.networking_method) {
case NetworkingMethod.PublicServer:
normalized.peer_urls = publicServerUrl ? [publicServerUrl] : []
break
case NetworkingMethod.Manual:
break
case NetworkingMethod.Standalone:
default:
normalized.peer_urls = []
break
}
normalized.networking_method = NetworkingMethod.Manual
normalized.public_server_url = ''
return normalized
}
export function toBackendNetworkConfig(config: NetworkConfig): NetworkConfig {
return normalizeNetworkConfig(config)
}
export interface NetworkInstance {
instance_id: string
@@ -204,6 +253,7 @@ export interface NodeInfo {
stun_info: StunInfo
listeners: Url[]
vpn_portal_cfg?: string
peer_id: number
}
export interface StunInfo {
+2 -2
@@ -11,7 +11,7 @@
"dependencies": {
"@modyfi/vite-plugin-yaml": "^1.1.0",
"@primeuix/themes": "^1.2.3",
"axios": "^1.7.7",
"axios": "^1.13.5",
"easytier-frontend-lib": "workspace:*",
"primevue": "^4.3.9",
"tailwindcss-primeui": "^0.3.4",
@@ -28,7 +28,7 @@
"postcss": "^8.4.47",
"tailwindcss": "=3.4.17",
"typescript": "~5.6.2",
"vite": "^5.4.10",
"vite": "^5.4.21",
"vite-plugin-singlefile": "^2.0.3",
"vue-tsc": "^2.1.10"
}
@@ -1,17 +1,80 @@
<script lang="ts" setup>
import { computed, inject, ref } from 'vue';
import { Card, Password, Button } from 'primevue';
import { useToast } from 'primevue/usetoast';
import { useRouter } from 'vue-router';
import { useI18n } from 'vue-i18n';
import ApiClient from '../modules/api';
import { clearMustChangePasswordFlag } from '../modules/auth-status';
import { validatePasswordStrength } from '../modules/password-policy';
const dialogRef = inject<any>('dialogRef');
const api = computed<ApiClient>(() => dialogRef.value.data.api);
const password = ref('');
const confirmPassword = ref('');
const toast = useToast();
const router = useRouter();
const { t } = useI18n();
const passwordValidation = computed(() => validatePasswordStrength(password.value));
const passwordMatches = computed(() => password.value === confirmPassword.value);
const passwordErrorMessage = computed(() => {
if (password.value.length === 0 || passwordValidation.value.valid) {
return '';
}
return t(passwordValidation.value.reasonKey!);
});
const confirmPasswordErrorMessage = computed(() => {
if (confirmPassword.value.length === 0 || passwordMatches.value) {
return '';
}
return t('web.common.password_mismatch');
});
const canSubmit = computed(() => passwordValidation.value.valid && passwordMatches.value);
const changePassword = async () => {
await api.value.change_password(password.value);
dialogRef.value.close();
if (!passwordValidation.value.valid) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t(passwordValidation.value.reasonKey!),
life: 3000,
});
return;
}
if (!passwordMatches.value) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t('web.common.password_mismatch'),
life: 3000,
});
return;
}
try {
await api.value.change_password(password.value);
toast.add({
severity: 'success',
summary: t('web.common.success'),
detail: t('web.main.password_changed_relogin'),
life: 3000,
});
clearMustChangePasswordFlag();
dialogRef.value.close();
router.push({ name: 'login' });
} catch (error) {
toast.add({
severity: 'error',
summary: t('web.common.error'),
detail: error instanceof Error ? error.message : String(error),
life: 3000,
});
}
}
</script>
@@ -19,15 +82,28 @@ const changePassword = async () => {
<div class="flex items-center justify-center">
<Card class="w-full max-w-md p-6">
<template #header>
<h2 class="text-2xl font-semibold text-center">Change Password
<h2 class="text-2xl font-semibold text-center">{{ t('web.main.change_password') }}
</h2>
</template>
<template #content>
<div class="flex flex-col space-y-4">
<Password v-model="password" placeholder="New Password" :feedback="false" toggleMask />
<Button @click="changePassword" label="Ok" />
<Password v-model="password" :placeholder="t('web.settings.new_password')" :feedback="false"
toggleMask />
<Password v-model="confirmPassword" :placeholder="t('web.settings.confirm_password')"
:feedback="false" toggleMask />
<small class="text-surface-500 dark:text-surface-400">
{{ t('web.common.password_strength_hint') }}
</small>
<small v-if="passwordErrorMessage" class="text-red-500 dark:text-red-400">
{{ passwordErrorMessage }}
</small>
<small v-if="confirmPasswordErrorMessage" class="text-red-500 dark:text-red-400">
{{ confirmPasswordErrorMessage }}
</small>
<Button @click="changePassword" :label="t('web.common.confirm')"
:disabled="!canSubmit" />
</div>
</template>
</Card>
</div>
</template>
</template>
+102 -4
@@ -1,5 +1,5 @@
<script setup lang="ts">
import { computed, onMounted, ref } from 'vue';
import { computed, onBeforeUnmount, onMounted, ref, watch } from 'vue';
import { Card, InputText, Password, Button, AutoComplete } from 'primevue';
import { useRouter } from 'vue-router';
import { useToast } from 'primevue/usetoast';
@@ -7,6 +7,8 @@ import { I18nUtils } from 'easytier-frontend-lib';
import { getInitialApiHost, cleanAndLoadApiHosts, saveApiHost } from "../modules/api-host"
import { useI18n } from 'vue-i18n'
import ApiClient, { Credential, RegisterData } from '../modules/api';
import { setMustChangePasswordFlag } from '../modules/auth-status';
import { validatePasswordStrength } from '../modules/password-policy';
const { t } = useI18n()
@@ -22,8 +24,26 @@ const username = ref('');
const password = ref('');
const registerUsername = ref('');
const registerPassword = ref('');
const registerConfirmPassword = ref('');
const captcha = ref('');
const captchaSrc = computed(() => api.value.captcha_url());
const registerPasswordValidation = computed(() => validatePasswordStrength(registerPassword.value));
const registerPasswordsMatch = computed(() => registerPassword.value === registerConfirmPassword.value);
const registerPasswordErrorMessage = computed(() => {
if (registerPassword.value.length === 0 || registerPasswordValidation.value.valid) {
return '';
}
return t(registerPasswordValidation.value.reasonKey!);
});
const registerConfirmPasswordErrorMessage = computed(() => {
if (registerConfirmPassword.value.length === 0 || registerPasswordsMatch.value) {
return '';
}
return t('web.common.password_mismatch');
});
const canRegister = computed(() => registerPasswordValidation.value.valid && registerPasswordsMatch.value);
const onSubmit = async () => {
@@ -33,6 +53,7 @@ const onSubmit = async () => {
let ret = await api.value?.login(credential);
if (ret.success) {
localStorage.setItem('apiHost', btoa(apiHost.value));
setMustChangePasswordFlag(Boolean(ret.mustChangePassword));
router.push({
name: 'dashboard',
params: { apiHost: btoa(apiHost.value) },
@@ -43,6 +64,26 @@ const onSubmit = async () => {
};
const onRegister = async () => {
if (!registerPasswordValidation.value.valid) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t(registerPasswordValidation.value.reasonKey!),
life: 3000,
});
return;
}
if (!registerPasswordsMatch.value) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t('web.common.password_mismatch'),
life: 3000,
});
return;
}
saveApiHost(apiHost.value);
const credential: Credential = { username: registerUsername.value, password: registerPassword.value };
const registerReq: RegisterData = { credentials: credential, captcha: captcha.value };
@@ -68,8 +109,43 @@ const apiHostSearch = async (event: { query: string }) => {
});
}
onMounted(() => {
const oidcEnabled = ref(false);
const lastCheckedHost = ref('');
const oidcCheckTimer = ref<ReturnType<typeof setTimeout> | null>(null);
const checkOidcConfig = () => {
if (oidcCheckTimer.value) clearTimeout(oidcCheckTimer.value);
oidcCheckTimer.value = setTimeout(async () => {
const host = apiHost.value;
if (host === lastCheckedHost.value) return;
const enabled = (await new ApiClient(host).getOidcConfig()).enabled;
// If host changes while request is in-flight, do not overwrite UI state.
if (apiHost.value !== host) return;
lastCheckedHost.value = host;
oidcEnabled.value = enabled;
}, 300);
};
watch(apiHost, () => {
checkOidcConfig();
});
const onSsoLogin = () => {
saveApiHost(apiHost.value);
localStorage.setItem('apiHost', btoa(apiHost.value));
window.location.href = api.value.oidcLoginUrl();
};
onMounted(() => {
checkOidcConfig();
});
onBeforeUnmount(() => {
if (oidcCheckTimer.value) {
clearTimeout(oidcCheckTimer.value);
oidcCheckTimer.value = null;
}
});
</script>
@@ -104,6 +180,10 @@ onMounted(() => {
<Button :label="t('web.login.register')" type="button" class="w-full"
@click="saveApiHost(apiHost); $router.replace({ name: 'register' })" severity="secondary" />
</div>
<div v-if="oidcEnabled" class="flex items-center justify-between">
<Button :label="t('web.login.sso_login')" type="button" class="w-full" severity="info"
@click="onSsoLogin" />
</div>
</form>
<form v-else @submit.prevent="onRegister" class="space-y-4">
@@ -117,6 +197,23 @@ onMounted(() => {
}}</label>
<Password id="register-password" v-model="registerPassword" required toggleMask
:feedback="false" class="w-full" />
<small class="text-surface-500 dark:text-surface-400">
{{ t('web.common.password_strength_hint') }}
</small>
<small v-if="registerPasswordErrorMessage" class="block text-red-500 dark:text-red-400">
{{ registerPasswordErrorMessage }}
</small>
</div>
<div class="p-field">
<label for="register-confirm-password" class="block text-sm font-medium">
{{ t('web.settings.confirm_password') }}
</label>
<Password id="register-confirm-password" v-model="registerConfirmPassword" required toggleMask
:feedback="false" class="w-full" />
<small v-if="registerConfirmPasswordErrorMessage"
class="block text-red-500 dark:text-red-400">
{{ registerConfirmPasswordErrorMessage }}
</small>
</div>
<div class="p-field">
<label for="captcha" class="block text-sm font-medium">{{ t('web.login.captcha') }}</label>
@@ -124,7 +221,8 @@ onMounted(() => {
<img :src="captchaSrc" alt="Captcha" class="mt-2 mb-2" />
</div>
<div class="flex items-center justify-between">
<Button :label="t('web.login.register')" type="submit" class="w-full" />
<Button :label="t('web.login.register')" type="submit" class="w-full"
:disabled="!canRegister" />
</div>
<div class="flex items-center justify-between">
<Button :label="t('web.login.back_to_login')" type="button" class="w-full"
@@ -144,4 +242,4 @@ onMounted(() => {
</div>
</template>
<style scoped></style>
<style scoped></style>
@@ -1,13 +1,18 @@
<script setup lang="ts">
import { I18nUtils } from 'easytier-frontend-lib'
import { computed, onMounted, ref, onUnmounted, nextTick } from 'vue';
import { Button, TieredMenu } from 'primevue';
import { Button, Message, TieredMenu } from 'primevue';
import { useRoute, useRouter } from 'vue-router';
import { useDialog } from 'primevue/usedialog';
import ChangePassword from './ChangePassword.vue';
import Icon from '../assets/easytier.png'
import { useI18n } from 'vue-i18n'
import ApiClient from '../modules/api';
import {
clearMustChangePasswordFlag,
getMustChangePasswordFlag,
setMustChangePasswordFlag,
} from '../modules/auth-status';
const { t } = useI18n()
const route = useRoute();
@@ -15,6 +20,7 @@ const router = useRouter();
const api = computed<ApiClient | undefined>(() => {
try {
return new ApiClient(atob(route.params.apiHost as string), () => {
clearMustChangePasswordFlag();
router.push({ name: 'login' });
})
} catch (e) {
@@ -23,25 +29,42 @@ const api = computed<ApiClient | undefined>(() => {
});
const dialog = useDialog();
const mustChangePassword = ref(false);
const openChangePasswordDialog = () => {
dialog.open(ChangePassword, {
props: {
modal: true,
},
data: {
api: api.value,
}
});
};
const loadAuthStatus = async () => {
const cachedStatus = getMustChangePasswordFlag();
if (cachedStatus !== null) {
mustChangePassword.value = cachedStatus;
}
try {
const status = await api.value?.check_login_status();
mustChangePassword.value = Boolean(
status?.loggedIn && status?.mustChangePassword,
);
setMustChangePasswordFlag(mustChangePassword.value);
} catch (e) {
console.error('Failed to load auth status', e);
}
};
const userMenu = ref();
const userMenuItems = ref([
{
label: t('web.main.change_password'),
icon: 'pi pi-key',
command: () => {
console.log('File');
let ret = dialog.open(ChangePassword, {
props: {
modal: true,
},
data: {
api: api.value,
}
});
console.log("return", ret)
},
command: openChangePasswordDialog,
},
{
label: t('web.main.logout'),
@@ -52,6 +75,7 @@ const userMenuItems = ref([
} catch (e) {
console.error("logout failed", e);
}
clearMustChangePasswordFlag();
router.push({ name: 'login' });
},
},
@@ -92,6 +116,7 @@ onMounted(async () => {
// DOM
await nextTick();
document.addEventListener('click', handleClickOutside);
await loadAuthStatus();
});
onUnmounted(() => {
@@ -171,6 +196,13 @@ onUnmounted(() => {
<div class="p-4 sm:ml-64">
<div class="p-4 border-2 border-gray-200 border-dashed rounded-lg dark:border-gray-700">
<div class="grid grid-cols-1 gap-4">
<Message v-if="mustChangePassword" severity="warn" :closable="false">
<div class="flex flex-col gap-3 sm:flex-row sm:items-center sm:justify-between">
<span>{{ t('web.main.default_password_warning') }}</span>
<Button size="small" icon="pi pi-key" :label="t('web.main.change_password_now')"
@click="openChangePasswordDialog" />
</div>
</Message>
<RouterView v-slot="{ Component }">
<component :is="Component" :api="api" />
</RouterView>
+68 -21
@@ -1,15 +1,31 @@
import axios, { AxiosError, AxiosInstance, AxiosResponse, InternalAxiosRequestConfig } from 'axios';
import { type Api, type NetworkTypes, Utils } from 'easytier-frontend-lib';
import { type Api, NetworkTypes, Utils } from 'easytier-frontend-lib';
import { Md5 } from 'ts-md5';
const hashAuthPassword = (password: string) => Md5.hashStr(password);
export interface ValidateConfigResponse {
toml_config: string;
}
export interface OidcConfigResponse {
enabled: boolean;
}
// Data structures returned by the auth API
export interface LoginResponse {
success: boolean;
message: string;
mustChangePassword?: boolean;
}
export interface AuthStatusResponse {
must_change_password: boolean;
}
export interface CheckLoginStatusResponse {
loggedIn: boolean;
mustChangePassword: boolean;
}
export interface RegisterResponse {
@@ -78,7 +94,6 @@ export class ApiClient {
// Add a response interceptor
this.client.interceptors.response.use((response: AxiosResponse) => {
console.debug('Axios Response:', response);
return response.data; // assume the server always wraps payloads in the data property
}, (error: any) => {
if (error.response) {
@@ -104,9 +119,8 @@ export class ApiClient {
// Register
public async register(data: RegisterData): Promise<RegisterResponse> {
try {
data.credentials.password = Md5.hashStr(data.credentials.password);
const response = await this.client.post<RegisterResponse>('/auth/register', data);
console.log("register response:", response);
data.credentials.password = hashAuthPassword(data.credentials.password);
await this.client.post<RegisterResponse>('/auth/register', data);
return { success: true, message: 'Register success', };
} catch (error) {
if (error instanceof AxiosError) {
@@ -119,10 +133,13 @@ export class ApiClient {
// Login
public async login(data: Credential): Promise<LoginResponse> {
try {
data.password = Md5.hashStr(data.password);
const response = await this.client.post<any>('/auth/login', data);
console.log("login response:", response);
return { success: true, message: 'Login success', };
data.password = hashAuthPassword(data.password);
const response = await this.client.post<any, AuthStatusResponse>('/auth/login', data);
return {
success: true,
message: 'Login success',
mustChangePassword: response.must_change_password,
};
} catch (error) {
if (error instanceof AxiosError) {
if (error.response?.status === 401) {
@@ -143,16 +160,26 @@ export class ApiClient {
}
public async change_password(new_password: string) {
await this.client.put('/auth/password', { new_password: Md5.hashStr(new_password) });
await this.client.put('/auth/password', { new_password: hashAuthPassword(new_password) });
}
public async check_login_status() {
public async check_login_status(): Promise<CheckLoginStatusResponse> {
try {
await this.client.get('/auth/check_login_status');
return true;
const response = await this.client.get<any, AuthStatusResponse>('/auth/check_login_status');
return {
loggedIn: true,
mustChangePassword: response.must_change_password,
};
} catch (error) {
return false;
}
if (error instanceof AxiosError && error.response?.status === 401) {
return {
loggedIn: false,
mustChangePassword: false,
};
}
throw error;
};
}
public async list_session() {
@@ -174,6 +201,19 @@ export class ApiClient {
return this.client.defaults.baseURL + '/auth/captcha';
}
public async getOidcConfig(): Promise<OidcConfigResponse> {
try {
const response = await this.client.get<any, OidcConfigResponse>('/auth/oidc/config');
return response;
} catch (error) {
return { enabled: false };
}
}
public oidcLoginUrl() {
return this.client.defaults.baseURL + '/auth/oidc/login';
}
public get_remote_client(machine_id: string): Api.RemoteClient {
return new WebRemoteClient(machine_id, this.client);
}
@@ -189,13 +229,13 @@ class WebRemoteClient implements Api.RemoteClient {
}
async validate_config(config: NetworkTypes.NetworkConfig): Promise<Api.ValidateConfigResponse> {
const response = await this.client.post<NetworkTypes.NetworkConfig, ValidateConfigResponse>(`/machines/${this.machine_id}/validate-config`, {
config: config,
config: NetworkTypes.toBackendNetworkConfig(config),
});
return response;
}
async run_network(config: NetworkTypes.NetworkConfig, save: boolean): Promise<undefined> {
await this.client.post<string>(`/machines/${this.machine_id}/networks`, {
config: config,
config: NetworkTypes.toBackendNetworkConfig(config),
save: save
});
}
@@ -216,15 +256,19 @@ class WebRemoteClient implements Api.RemoteClient {
});
}
async save_config(config: NetworkTypes.NetworkConfig): Promise<undefined> {
await this.client.put(`/machines/${this.machine_id}/networks/config/${config.instance_id}`, { config });
await this.client.put(`/machines/${this.machine_id}/networks/config/${config.instance_id}`, {
config: NetworkTypes.toBackendNetworkConfig(config)
});
}
async get_network_config(inst_id: string): Promise<NetworkTypes.NetworkConfig> {
const response = await this.client.get<any, NetworkTypes.NetworkConfig>('/machines/' + this.machine_id + '/networks/config/' + inst_id);
return response;
return NetworkTypes.normalizeNetworkConfig(response);
}
async generate_config(config: NetworkTypes.NetworkConfig): Promise<Api.GenerateConfigResponse> {
try {
const response = await this.client.post<any, GenerateConfigResponse>('/generate-config', { config });
const response = await this.client.post<any, GenerateConfigResponse>('/generate-config', {
config: NetworkTypes.toBackendNetworkConfig(config)
});
return response;
} catch (error) {
if (error instanceof AxiosError) {
@@ -236,6 +280,9 @@ class WebRemoteClient implements Api.RemoteClient {
async parse_config(toml_config: string): Promise<Api.ParseConfigResponse> {
try {
const response = await this.client.post<any, ParseConfigResponse>('/parse-config', { toml_config });
if (response.config) {
response.config = NetworkTypes.normalizeNetworkConfig(response.config);
}
return response;
} catch (error) {
if (error instanceof AxiosError) {
@@ -252,4 +299,4 @@ class WebRemoteClient implements Api.RemoteClient {
}
}
export default ApiClient;
export default ApiClient;
@@ -0,0 +1,18 @@
const MUST_CHANGE_PASSWORD_STORAGE_KEY = 'auth.mustChangePassword';
export const getMustChangePasswordFlag = (): boolean | null => {
const value = sessionStorage.getItem(MUST_CHANGE_PASSWORD_STORAGE_KEY);
if (value === null) {
return null;
}
return value === 'true';
};
export const setMustChangePasswordFlag = (value: boolean) => {
sessionStorage.setItem(MUST_CHANGE_PASSWORD_STORAGE_KEY, value ? 'true' : 'false');
};
export const clearMustChangePasswordFlag = () => {
sessionStorage.removeItem(MUST_CHANGE_PASSWORD_STORAGE_KEY);
};
@@ -0,0 +1,55 @@
export type PasswordValidationReasonKey =
| 'web.common.password_empty'
| 'web.common.password_min_length'
| 'web.common.password_too_weak';
export interface PasswordValidationResult {
valid: boolean;
reasonKey?: PasswordValidationReasonKey;
}
const PASSWORD_MIN_LENGTH = 8;
export const countPasswordClasses = (password: string) => {
let count = 0;
if (/[a-z]/.test(password)) {
count += 1;
}
if (/[A-Z]/.test(password)) {
count += 1;
}
if (/\d/.test(password)) {
count += 1;
}
if (/[^A-Za-z0-9\s]/.test(password)) {
count += 1;
}
return count;
};
export const validatePasswordStrength = (password: string): PasswordValidationResult => {
if (password.trim().length === 0) {
return {
valid: false,
reasonKey: 'web.common.password_empty',
};
}
if (password.length < PASSWORD_MIN_LENGTH) {
return {
valid: false,
reasonKey: 'web.common.password_min_length',
};
}
if (countPasswordClasses(password) < 2) {
return {
valid: false,
reasonKey: 'web.common.password_too_weak',
};
}
return { valid: true };
};
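The rule the validator above enforces — at least 8 characters and at least 2 of the 4 character classes — can be restated as a compact standalone sketch, handy for eyeballing edge cases:

```typescript
// Inline restatement of the policy above: non-blank, >= 8 chars, and
// >= 2 of the four classes (lower, upper, digit, non-alphanumeric).
const classCount = (pw: string): number =>
  [/[a-z]/, /[A-Z]/, /\d/, /[^A-Za-z0-9\s]/].filter((re) => re.test(pw)).length;

const isAcceptable = (pw: string): boolean =>
  pw.trim().length > 0 && pw.length >= 8 && classCount(pw) >= 2;

console.log(isAcceptable('Short1'));        // false: only 6 characters
console.log(isAcceptable('lowercaseonly')); // false: long enough, but 1 class
console.log(isAcceptable('Passw0rd'));      // true: 3 classes, 8 characters
```

Note that whitespace is excluded from the special-character class, so a long run of spaces neither counts as a class nor passes the non-blank check.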
+39 -3
@@ -17,14 +17,20 @@ cli:
en: "The port to listen for the config server, used by the easytier-core to connect to"
zh-CN: "配置服务器的监听端口,用于被 easytier-core 连接"
config_server_protocol:
en: "The protocol to listen for the config server, used by the easytier-core to connect to"
zh-CN: "配置服务器的监听协议,用于被 easytier-core 连接, 可能的值:udp, tcp"
en: "The protocol to listen for the config server, used by the easytier-core to connect to, possible values: udp, tcp, ws"
zh-CN: "配置服务器的监听协议,用于被 easytier-core 连接, 可能的值:udp, tcp, ws"
api_server_port:
en: "The port to listen for the restful server, acting as ApiHost and used by the web frontend"
zh-CN: "restful 服务器的监听端口,作为 ApiHost 并被 web 前端使用"
api_server_addr:
en: "The listen address for the restful server, e.g. 0.0.0.0, ::, 127.0.0.1"
zh-CN: "restful 服务器的监听地址, 例如 0.0.0.0, ::, 127.0.0.1"
web_server_port:
en: "The port to listen for the web dashboard server, default is same as the api server port"
zh-CN: "web dashboard 服务器的监听端口, 默认为与 api 服务器端口相同"
web_server_addr:
en: "The listen address for the web dashboard server (only effective when web_server_port differs from api_server_port or web_server_addr differs from api_server_addr), e.g. 0.0.0.0, ::, 127.0.0.1"
zh-CN: "web dashboard 服务器的监听地址(仅在 web_server_port 与 api_server_port 不同,或 web_server_addr 与 api_server_addr 不同时生效), 例如 0.0.0.0, ::, 127.0.0.1"
no_web:
en: "Do not run the web dashboard server"
zh-CN: "不运行 web dashboard 服务器"
@@ -33,4 +39,34 @@ cli:
zh-CN: "API 服务器的 URL,用于 web 前端连接"
geoip_db:
en: "The path to the GeoIP2 database file, used to lookup the location of the client, default is the embedded file (only country information) , recommend https://github.com/P3TERX/GeoLite.mmdb"
zh-CN: "GeoIP2 数据库文件路径,用于查找客户端的位置,默认为嵌入文件(仅国家信息),推荐 https://github.com/P3TERX/GeoLite.mmdb"
disable_registration:
en: "Disable user registration"
zh-CN: "禁用用户注册"
oidc_issuer_url:
en: "The OIDC issuer URL for single sign-on authentication"
zh-CN: "OIDC 签发者 URL,用于单点登录认证"
oidc_client_id:
en: "The OIDC client ID"
zh-CN: "OIDC 客户端 ID"
oidc_client_secret:
en: "The OIDC client secret (can also be set via OIDC_CLIENT_SECRET env var)"
zh-CN: "OIDC 客户端密钥(也可通过 OIDC_CLIENT_SECRET 环境变量设置)"
oidc_username_claim:
en: "The OIDC claim to use as the local username, default: preferred_username"
zh-CN: "用作本地用户名的 OIDC claim 字段,默认: preferred_username"
oidc_scopes:
en: "OIDC scopes to request during login. Supports comma-separated values or repeated --oidc-scopes flags, default: openid,profile"
zh-CN: "登录时请求的 OIDC scopes。支持逗号分隔或多次指定 --oidc-scopes,默认: openid,profile"
oidc_redirect_url:
en: "The OIDC redirect URL (callback URL), must match exactly what is registered with your Identity Provider. Required when using OIDC. Example: http://your-domain.com:11211/api/v1/auth/oidc/callback"
zh-CN: "OIDC 重定向 URL(回调 URL),必须与身份提供商注册的地址完全一致。使用 OIDC 时必须提供。示例: http://your-domain.com:11211/api/v1/auth/oidc/callback"
allow_auto_create_user:
en: "Allow auto-creating local user when easytier-core connects with an unknown username"
zh-CN: "当 easytier-core 使用未知用户名连接时,允许自动创建本地用户"
oidc_disable_pkce:
en: "Disable PKCE (Proof Key for Code Exchange) for OIDC authentication"
zh-CN: "禁用 OIDC 认证的 PKCE(授权码交换证明密钥)"
oidc_frontend_base_url:
en: "Frontend base URL to redirect to after successful OIDC callback. Required when frontend and API are deployed separately (non-embed build, --no-web mode, or different web_server_port)"
zh-CN: "OIDC 回调成功后跳转的前端入口地址。当前端与 API 分离部署时必须提供(非 embed 构建、--no-web 模式、或 web_server_port 与 api_server_port 不同)"
+83 -17
@@ -13,10 +13,14 @@ use easytier::{
},
rpc_service::remote_client::{self, RemoteClientManager},
tunnel::TunnelListener,
web_client::security,
};
use maxminddb::geoip2;
use session::{Location, Session};
use storage::{Storage, StorageToken};
use crate::webhook::SharedWebhookConfig;
use crate::FeatureFlags;
use tokio::task::JoinSet;
use crate::db::{entity::user_running_network_configs, Db, UserIdInDb};
@@ -55,11 +59,19 @@ pub struct ClientManager {
client_sessions: Arc<DashMap<url::Url, Arc<Session>>>,
storage: Storage,
feature_flags: Arc<FeatureFlags>,
webhook_config: SharedWebhookConfig,
geoip_db: Arc<Option<maxminddb::Reader<Vec<u8>>>>,
}
impl ClientManager {
pub fn new(db: Db, geoip_db: Option<String>) -> Self {
pub fn new(
db: Db,
geoip_db: Option<String>,
feature_flags: Arc<FeatureFlags>,
webhook_config: SharedWebhookConfig,
) -> Self {
let client_sessions = Arc::new(DashMap::new());
let sessions: Arc<DashMap<url::Url, Arc<Session>>> = client_sessions.clone();
let mut tasks = JoinSet::new();
@@ -76,6 +88,9 @@ impl ClientManager {
client_sessions,
storage: Storage::new(db),
feature_flags,
webhook_config,
geoip_db: Arc::new(load_geoip_db(geoip_db)),
}
}
@@ -90,17 +105,33 @@ impl ClientManager {
let storage = self.storage.weak_ref();
let listeners_cnt = self.listeners_cnt.clone();
let geoip_db = self.geoip_db.clone();
let feature_flags = self.feature_flags.clone();
let webhook_config = self.webhook_config.clone();
self.tasks.spawn(async move {
while let Ok(tunnel) = listener.accept().await {
let (tunnel, secure) = match security::accept_or_upgrade_server_tunnel(tunnel).await {
Ok(v) => v,
Err(error) => {
tracing::warn!(%error, "failed to accept secure tunnel, dropping connection");
continue;
}
};
let info = tunnel.info().unwrap();
let client_url: url::Url = info.remote_addr.unwrap().into();
let location = Self::lookup_location(&client_url, geoip_db.clone());
tracing::info!(
"New session from {:?}, location: {:?}",
"New session from {:?}, secure: {}, location: {:?}",
client_url,
secure,
location
);
let mut session = Session::new(storage.clone(), client_url.clone(), location);
let mut session = Session::new(
storage.clone(),
client_url.clone(),
location,
feature_flags.clone(),
webhook_config.clone(),
);
session.serve(tunnel).await;
sessions.insert(client_url, Arc::new(session));
}
@@ -144,6 +175,24 @@ impl ClientManager {
.map(|item| item.value().clone())
}
pub async fn disconnect_session_by_machine_id(
&self,
user_id: UserIdInDb,
machine_id: &uuid::Uuid,
) -> bool {
let Some(client_url) = self
.storage
.get_client_url_by_machine_id(user_id, machine_id)
else {
return false;
};
let Some((_, session)) = self.client_sessions.remove(&client_url) else {
return false;
};
session.stop().await;
true
}
pub async fn list_machine_by_user_id(&self, user_id: UserIdInDb) -> Vec<url::Url> {
self.storage.list_user_clients(user_id)
}
@@ -291,12 +340,19 @@ mod tests {
};
use sqlx::Executor;
use crate::{client_manager::ClientManager, db::Db};
use crate::{client_manager::ClientManager, db::Db, FeatureFlags};
#[tokio::test]
async fn test_client() {
let listener = UdpTunnelListener::new("udp://0.0.0.0:54333".parse().unwrap());
let mut mgr = ClientManager::new(Db::memory_db().await, None);
let mut mgr = ClientManager::new(
Db::memory_db().await,
None,
Arc::new(FeatureFlags::default()),
Arc::new(crate::webhook::WebhookConfig::new(
None, None, None, None, None,
)),
);
mgr.add_listener(Box::new(listener)).await.unwrap();
mgr.db()
@@ -310,26 +366,36 @@ mod tests {
connector,
"test",
"test",
false,
Arc::new(NetworkInstanceManager::new()),
None,
);
wait_for_condition(
|| async { mgr.client_sessions.len() == 1 },
Duration::from_secs(6),
|| async { !mgr.client_sessions.is_empty() },
Duration::from_secs(12),
)
.await;
let mut a = mgr
.client_sessions
.iter()
.next()
.unwrap()
.data()
.read()
.await
.heartbeat_waiter();
let req = a.recv().await.unwrap();
let req = tokio::time::timeout(Duration::from_secs(12), async {
loop {
let session = mgr
.client_sessions
.iter()
.next()
.map(|item| item.value().clone());
let Some(session) = session else {
tokio::time::sleep(Duration::from_millis(100)).await;
continue;
};
let mut waiter = session.data().read().await.heartbeat_waiter();
if let Ok(req) = waiter.recv().await {
break req;
}
}
})
.await
.unwrap();
println!("{:?}", req);
println!("{:?}", mgr);
}
+428 -21
@@ -1,4 +1,9 @@
use std::{fmt::Debug, str::FromStr as _, sync::Arc};
use std::{
collections::{HashMap, HashSet},
fmt::Debug,
str::FromStr as _,
sync::Arc,
};
use anyhow::Context;
use easytier::{
@@ -18,6 +23,8 @@ use easytier::{
use tokio::sync::{broadcast, RwLock};
use super::storage::{Storage, StorageToken, WeakRefStorage};
use crate::webhook::SharedWebhookConfig;
use crate::FeatureFlags;
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct Location {
@@ -29,22 +36,36 @@ pub struct Location {
#[derive(Debug)]
pub struct SessionData {
storage: WeakRefStorage,
feature_flags: Arc<FeatureFlags>,
webhook_config: SharedWebhookConfig,
client_url: url::Url,
storage_token: Option<StorageToken>,
binding_version: Option<u64>,
applied_config_revision: Option<String>,
notifier: broadcast::Sender<HeartbeatRequest>,
req: Option<HeartbeatRequest>,
location: Option<Location>,
}
impl SessionData {
fn new(storage: WeakRefStorage, client_url: url::Url, location: Option<Location>) -> Self {
fn new(
storage: WeakRefStorage,
client_url: url::Url,
location: Option<Location>,
feature_flags: Arc<FeatureFlags>,
webhook_config: SharedWebhookConfig,
) -> Self {
let (tx, _rx1) = broadcast::channel(2);
SessionData {
storage,
feature_flags,
webhook_config,
client_url,
storage_token: None,
binding_version: None,
applied_config_revision: None,
notifier: tx,
req: None,
location,
@@ -69,6 +90,27 @@ impl Drop for SessionData {
if let Ok(storage) = Storage::try_from(self.storage.clone()) {
if let Some(token) = self.storage_token.as_ref() {
storage.remove_client(token);
// Notify the webhook receiver when a node disconnects.
if self.webhook_config.is_enabled() {
let webhook = self.webhook_config.clone();
let machine_id = token.machine_id.to_string();
let user_id = Some(token.user_id);
let token_value = token.token.clone();
let web_instance_id = webhook.web_instance_id.clone();
let binding_version = self.binding_version;
tokio::spawn(async move {
webhook
.notify_node_disconnected(&crate::webhook::NodeDisconnectedRequest {
machine_id,
token: token_value,
user_id,
web_instance_id,
binding_version,
})
.await;
});
}
}
}
}
@@ -82,6 +124,89 @@ struct SessionRpcService {
}
impl SessionRpcService {
fn normalize_network_config(
mut network_config: serde_json::Value,
inst_id: uuid::Uuid,
) -> anyhow::Result<NetworkConfig> {
let network_name = network_config
.get("network_name")
.and_then(|v| v.as_str())
.filter(|v| !v.is_empty())
.ok_or_else(|| anyhow::anyhow!("webhook response missing network_name"))?
.to_string();
let config_obj = network_config
.as_object_mut()
.ok_or_else(|| anyhow::anyhow!("webhook network_config must be a JSON object"))?;
config_obj.insert(
"instance_id".to_string(),
serde_json::Value::String(inst_id.to_string()),
);
config_obj
.entry("instance_name".to_string())
.or_insert_with(|| serde_json::Value::String(network_name));
Ok(serde_json::from_value::<NetworkConfig>(network_config)?)
}
async fn reconcile_managed_network_configs(
storage: &Storage,
user_id: i32,
machine_id: uuid::Uuid,
desired_configs: Vec<crate::webhook::ManagedNetworkConfig>,
) -> anyhow::Result<()> {
let existing_configs = storage
.db()
.list_network_configs((user_id, machine_id), ListNetworkProps::All)
.await
.map_err(|e| anyhow::anyhow!("failed to list existing network configs: {:?}", e))?;
let existing_ids = existing_configs
.iter()
.filter_map(|cfg| uuid::Uuid::parse_str(&cfg.network_instance_id).ok())
.collect::<HashSet<_>>();
let mut desired_ids = HashSet::with_capacity(desired_configs.len());
let mut normalized = HashMap::with_capacity(desired_configs.len());
for desired in desired_configs {
let inst_id = uuid::Uuid::parse_str(&desired.instance_id).with_context(|| {
format!(
"invalid desired managed instance id: {}",
desired.instance_id
)
})?;
let config = Self::normalize_network_config(desired.network_config, inst_id)?;
desired_ids.insert(inst_id);
normalized.insert(inst_id, config);
}
for (inst_id, config) in normalized {
storage
.db()
.insert_or_update_user_network_config((user_id, machine_id), inst_id, config)
.await
.map_err(|e| {
anyhow::anyhow!(
"failed to persist managed network config {}: {:?}",
inst_id,
e
)
})?;
}
let stale_ids = existing_ids
.difference(&desired_ids)
.copied()
.collect::<Vec<_>>();
if !stale_ids.is_empty() {
storage
.db()
.delete_network_configs((user_id, machine_id), &stale_ids)
.await
.map_err(|e| anyhow::anyhow!("failed to delete stale network configs: {:?}", e))?;
}
Ok(())
}
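`reconcile_managed_network_configs` makes the stored set match the webhook's desired set exactly: every desired config is upserted, and any stored config absent from the desired set is deleted as stale. The set arithmetic can be sketched as follows (TypeScript, with plain maps standing in for the database; names are illustrative):

```typescript
type Config = { name: string };

// existing: what the DB currently holds; desired: what the webhook reports.
// Returns the reconciled map plus the ids deleted as stale.
function reconcile(
  existing: Map<string, Config>,
  desired: Map<string, Config>,
): { next: Map<string, Config>; staleIds: string[] } {
  const next = new Map(existing);
  // Upsert: desired entries overwrite or extend the existing set.
  for (const [id, cfg] of desired) next.set(id, cfg);
  // Stale: stored but no longer desired.
  const staleIds = [...existing.keys()].filter((id) => !desired.has(id));
  for (const id of staleIds) next.delete(id);
  return { next, staleIds };
}
```

The unit test at the bottom of this file exercises the same three cases: an id that is kept and updated, a new id, and a stale id that must disappear.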
async fn handle_heartbeat(
&self,
req: HeartbeatRequest,
@@ -98,20 +223,98 @@ impl SessionRpcService {
req.machine_id
))?;
let user_id = storage
.db()
.get_user_id_by_token(req.user_token.clone())
.await
.with_context(|| {
format!(
"Failed to get user id by token from db: {:?}",
let (
user_id,
webhook_managed_network_configs,
webhook_config_revision,
webhook_validated,
binding_version,
) = if data.webhook_config.is_enabled() {
let webhook_req = crate::webhook::ValidateTokenRequest {
token: req.user_token.clone(),
machine_id: machine_id.to_string(),
hostname: req.hostname.clone(),
version: req.easytier_version.clone(),
os_type: req.device_os.as_ref().map(|info| info.os_type.clone()),
os_version: req.device_os.as_ref().map(|info| info.version.clone()),
os_distribution: req.device_os.as_ref().map(|info| info.distribution.clone()),
web_instance_id: data.webhook_config.web_instance_id.clone(),
web_instance_api_base_url: data.webhook_config.web_instance_api_base_url.clone(),
};
let resp = data
.webhook_config
.validate_token(&webhook_req)
.await
.map_err(|e| anyhow::anyhow!("Webhook token validation failed: {:?}", e))?;
if resp.valid {
let user_id = match storage
.db()
.get_user_id_by_token(req.user_token.clone())
.await
.map_err(|e| anyhow::anyhow!("DB error: {:?}", e))?
{
Some(id) => id,
None => storage
.auto_create_user(&req.user_token)
.await
.with_context(|| {
format!("Failed to auto-create webhook user: {:?}", req.user_token)
})?,
};
(
user_id,
resp.managed_network_configs,
resp.config_revision,
true,
Some(resp.binding_version),
)
} else {
return Err(anyhow::anyhow!(
"Webhook rejected token for machine {:?}: {:?}",
machine_id,
req.user_token
)
})?
.ok_or(anyhow::anyhow!(
"User not found by token: {:?}",
req.user_token
))?;
.into());
}
} else {
let user_id = match storage
.db()
.get_user_id_by_token(req.user_token.clone())
.await
.with_context(|| {
format!(
"Failed to get user id by token from db: {:?}",
req.user_token
)
})? {
Some(id) => id,
None if data.feature_flags.allow_auto_create_user => storage
.auto_create_user(&req.user_token)
.await
.with_context(|| format!("Failed to auto-create user: {:?}", req.user_token))?,
None => {
return Err(
anyhow::anyhow!("User not found by token: {:?}", req.user_token).into(),
);
}
};
(user_id, Vec::new(), String::new(), false, None)
};
if webhook_validated
&& data.applied_config_revision.as_deref() != Some(webhook_config_revision.as_str())
{
Self::reconcile_managed_network_configs(
&storage,
user_id,
machine_id,
webhook_managed_network_configs,
)
.await
.map_err(rpc_types::error::Error::from)?;
data.applied_config_revision = Some(webhook_config_revision);
}
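Reconciliation is gated on the webhook's `config_revision`: a heartbeat only triggers a reconcile when the reported revision differs from the last applied one, so repeated heartbeats with an unchanged config are cheap. A sketch of that gating (illustrative names):

```typescript
// Last revision we applied, and a counter standing in for the reconcile call.
let applied: string | null = null;
let reconcileCount = 0;

function onHeartbeat(revision: string): void {
  if (applied !== revision) {
    reconcileCount += 1; // stand-in for reconcile_managed_network_configs
    applied = revision;  // only advance after a successful reconcile
  }
}
```

Because `applied` is only updated after the reconcile succeeds, a failed attempt is retried on the next heartbeat rather than silently skipped.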
if data.req.replace(req.clone()).is_none() {
assert!(data.storage_token.is_none());
@@ -121,6 +324,27 @@ impl SessionRpcService {
machine_id,
user_id,
});
data.binding_version = binding_version;
// Notify the webhook receiver on the first successful heartbeat.
if data.webhook_config.is_enabled() {
let webhook = data.webhook_config.clone();
let connect_req = crate::webhook::NodeConnectedRequest {
machine_id: machine_id.to_string(),
token: req.user_token.clone(),
user_id: Some(user_id),
hostname: req.hostname.clone(),
version: req.easytier_version.clone(),
os_type: req.device_os.as_ref().map(|info| info.os_type.clone()),
os_version: req.device_os.as_ref().map(|info| info.version.clone()),
os_distribution: req.device_os.as_ref().map(|info| info.distribution.clone()),
web_instance_id: webhook.web_instance_id.clone(),
binding_version,
};
tokio::spawn(async move {
webhook.notify_node_connected(&connect_req).await;
});
}
}
let Ok(report_time) = chrono::DateTime::<chrono::Local>::from_str(&req.report_time) else {
@@ -154,6 +378,16 @@ impl WebServerService for SessionRpcService {
}
ret
}
async fn get_feature(
&self,
_: BaseController,
_: easytier::proto::web::GetFeatureRequest,
) -> rpc_types::error::Result<easytier::proto::web::GetFeatureResponse> {
Ok(easytier::proto::web::GetFeatureResponse {
support_encryption: true,
})
}
}
pub struct Session {
@@ -173,8 +407,15 @@ impl Debug for Session {
type SessionRpcClient = Box<dyn WebClientService<Controller = BaseController> + Send>;
impl Session {
pub fn new(storage: WeakRefStorage, client_url: url::Url, location: Option<Location>) -> Self {
let session_data = SessionData::new(storage, client_url, location);
pub fn new(
storage: WeakRefStorage,
client_url: url::Url,
location: Option<Location>,
feature_flags: Arc<FeatureFlags>,
webhook_config: SharedWebhookConfig,
) -> Self {
let session_data =
SessionData::new(storage, client_url, location, feature_flags, webhook_config);
let data = Arc::new(RwLock::new(session_data));
let rpc_mgr =
@@ -211,6 +452,8 @@ impl Session {
storage: WeakRefStorage,
rpc_client: SessionRpcClient,
) {
let mut cleaned_web_managed_instances = false;
let mut last_desired_inst_ids: Option<HashSet<String>> = None;
loop {
heartbeat_waiter = heartbeat_waiter.resubscribe();
let req = heartbeat_waiter.recv().await;
@@ -232,7 +475,7 @@ impl Session {
.running_network_instances
.iter()
.map(|x| x.to_string())
.collect::<Vec<_>>();
.collect::<HashSet<_>>();
let Some(storage) = storage.upgrade() else {
tracing::error!("Failed to get storage");
return;
@@ -267,6 +510,63 @@ impl Session {
};
let mut has_failed = false;
let should_be_alive_inst_ids = local_configs
.iter()
.map(|cfg| cfg.network_instance_id.clone())
.collect::<HashSet<_>>();
let desired_changed = last_desired_inst_ids
.as_ref()
.is_none_or(|last| last != &should_be_alive_inst_ids);
if !cleaned_web_managed_instances || desired_changed {
let all_local_configs = match storage
.db
.list_network_configs((user_id, machine_id.into()), ListNetworkProps::All)
.await
{
Ok(configs) => configs,
Err(e) => {
tracing::error!("Failed to list all network configs, error: {:?}", e);
return;
}
};
let all_inst_ids = all_local_configs
.iter()
.map(|cfg| cfg.network_instance_id.clone())
.collect::<HashSet<_>>();
let should_delete_ids = running_inst_ids
.iter()
.chain(all_inst_ids.iter())
.filter(|inst_id| !should_be_alive_inst_ids.contains(*inst_id))
.filter_map(|inst_id| uuid::Uuid::parse_str(inst_id).ok())
.map(Into::into)
.collect::<Vec<_>>();
if !should_delete_ids.is_empty() {
let ret = rpc_client
.delete_network_instance(
BaseController::default(),
easytier::proto::api::manage::DeleteNetworkInstanceRequest {
inst_ids: should_delete_ids,
},
)
.await;
tracing::info!(
?user_id,
"Clean non-web-managed network instances on start: {:?}, user_token: {:?}",
ret,
req.user_token
);
has_failed |= ret.is_err();
}
if !has_failed {
cleaned_web_managed_instances = true;
last_desired_inst_ids = Some(should_be_alive_inst_ids.clone());
}
}
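The cleanup above computes the instances to delete as (running ∪ stored) minus desired, so both orphaned running instances on the client and orphaned stored configs are removed with one RPC. A set-difference sketch (illustrative names):

```typescript
// running: instance ids the client reports as running
// stored: instance ids persisted locally for this device
// desired: instance ids that should stay alive
function idsToDelete(
  running: Set<string>,
  stored: Set<string>,
  desired: Set<string>,
): string[] {
  const candidates = new Set([...running, ...stored]); // union of both sources
  return [...candidates].filter((id) => !desired.has(id));
}
```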
for c in local_configs {
if running_inst_ids.contains(&c.network_instance_id) {
@@ -295,8 +595,7 @@ impl Session {
}
if !has_failed {
tracing::info!(?req, "All network instances are running");
break;
last_desired_inst_ids = Some(should_be_alive_inst_ids);
}
}
}
@@ -305,14 +604,22 @@ impl Session {
self.rpc_mgr.is_running()
}
pub async fn stop(&self) {
self.rpc_mgr.stop().await;
}
pub fn data(&self) -> SharedSessionData {
self.data.clone()
}
pub fn scoped_rpc_client(&self) -> SessionRpcClient {
pub fn scoped_client<F: rpc_types::__rt::RpcClientFactory>(&self) -> F::ClientImpl {
self.rpc_mgr
.rpc_client()
.scoped_client::<WebClientServiceClientFactory<BaseController>>(1, 1, "".to_string())
.scoped_client::<F>(1, 1, "".to_string())
}
pub fn scoped_rpc_client(&self) -> SessionRpcClient {
self.scoped_client::<WebClientServiceClientFactory<BaseController>>()
}
pub async fn get_token(&self) -> Option<StorageToken> {
@@ -323,3 +630,103 @@ impl Session {
self.data.read().await.req()
}
}
#[cfg(test)]
mod tests {
use easytier::rpc_service::remote_client::{ListNetworkProps, Storage as _};
use serde_json::json;
use super::{super::storage::Storage, *};
#[tokio::test]
async fn reconcile_managed_network_configs_upserts_and_deletes_exact_set() {
let storage = Storage::new(crate::db::Db::memory_db().await);
let user_id = storage
.db()
.auto_create_user("webhook-user")
.await
.unwrap()
.id;
let machine_id = uuid::Uuid::new_v4();
let keep_id = uuid::Uuid::new_v4();
let stale_id = uuid::Uuid::new_v4();
let new_id = uuid::Uuid::new_v4();
storage
.db()
.insert_or_update_user_network_config(
(user_id, machine_id),
keep_id,
NetworkConfig {
network_name: Some("old-name".to_string()),
..Default::default()
},
)
.await
.unwrap();
storage
.db()
.insert_or_update_user_network_config(
(user_id, machine_id),
stale_id,
NetworkConfig {
network_name: Some("stale".to_string()),
..Default::default()
},
)
.await
.unwrap();
SessionRpcService::reconcile_managed_network_configs(
&storage,
user_id,
machine_id,
vec![
crate::webhook::ManagedNetworkConfig {
instance_id: keep_id.to_string(),
network_config: json!({
"instance_id": keep_id.to_string(),
"network_name": "updated-name"
}),
},
crate::webhook::ManagedNetworkConfig {
instance_id: new_id.to_string(),
network_config: json!({
"instance_id": new_id.to_string(),
"network_name": "new-name"
}),
},
],
)
.await
.unwrap();
let configs = storage
.db()
.list_network_configs((user_id, machine_id), ListNetworkProps::All)
.await
.unwrap();
let config_ids = configs
.iter()
.map(|cfg| cfg.network_instance_id.clone())
.collect::<HashSet<_>>();
assert_eq!(configs.len(), 2);
assert!(config_ids.contains(&keep_id.to_string()));
assert!(config_ids.contains(&new_id.to_string()));
assert!(!config_ids.contains(&stale_id.to_string()));
let updated_keep = storage
.db()
.get_network_config((user_id, machine_id), &keep_id.to_string())
.await
.unwrap()
.unwrap();
let updated_keep_config: NetworkConfig =
serde_json::from_str(&updated_keep.network_config).unwrap();
assert_eq!(
updated_keep_config.network_name.as_deref(),
Some("updated-name")
);
}
}
+65 -14
@@ -21,7 +21,6 @@ struct ClientInfo {
#[derive(Debug)]
pub struct StorageInner {
// some map for indexing
user_clients_map: DashMap<UserIdInDb, DashMap<uuid::Uuid, ClientInfo>>,
pub db: Db,
}
@@ -46,18 +45,14 @@ impl Storage {
}))
}
fn remove_mid_to_client_info_map(
map: &DashMap<uuid::Uuid, ClientInfo>,
machine_id: &uuid::Uuid,
client_url: &url::Url,
) {
map.remove_if(machine_id, |_, v| v.storage_token.client_url == *client_url);
fn remove_client_info_map(map: &DashMap<uuid::Uuid, ClientInfo>, stoken: &StorageToken) {
map.remove_if(&stoken.machine_id, |_, v| {
v.storage_token.client_url == stoken.client_url
&& v.storage_token.user_id == stoken.user_id
});
}
fn update_mid_to_client_info_map(
map: &DashMap<uuid::Uuid, ClientInfo>,
client_info: &ClientInfo,
) {
fn update_client_info_map(map: &DashMap<uuid::Uuid, ClientInfo>, client_info: &ClientInfo) {
map.entry(client_info.storage_token.machine_id)
.and_modify(|e| {
if e.report_time < client_info.report_time {
@@ -78,15 +73,14 @@ impl Storage {
storage_token: stoken.clone(),
report_time,
};
Self::update_mid_to_client_info_map(&inner, &client_info);
Self::update_client_info_map(&inner, &client_info);
}
pub fn remove_client(&self, stoken: &StorageToken) {
self.0
.user_clients_map
.remove_if(&stoken.user_id, |_, set| {
Self::remove_mid_to_client_info_map(set, &stoken.machine_id, &stoken.client_url);
Self::remove_client_info_map(set, stoken);
set.is_empty()
});
}
@@ -123,4 +117,61 @@ impl Storage {
pub fn db(&self) -> &Db {
&self.0.db
}
pub async fn auto_create_user(&self, username: &str) -> anyhow::Result<UserIdInDb> {
let new_user = self.db().auto_create_user(username).await?;
tracing::info!("Auto-created user '{}' with id {}", username, new_user.id);
Ok(new_user.id)
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_storage_token(
user_id: UserIdInDb,
machine_id: uuid::Uuid,
client_url: &str,
) -> StorageToken {
StorageToken {
token: format!("token-{machine_id}"),
client_url: client_url.parse().unwrap(),
machine_id,
user_id,
}
}
#[tokio::test]
async fn machine_id_is_scoped_within_each_user() {
let storage = Storage::new(Db::memory_db().await);
let machine_id = uuid::Uuid::new_v4();
let user1_token = make_storage_token(1, machine_id, "tcp://127.0.0.1:1001");
let user2_token = make_storage_token(2, machine_id, "tcp://127.0.0.1:1002");
storage.update_client(user1_token.clone(), 10);
storage.update_client(user2_token.clone(), 20);
assert_eq!(
storage.get_client_url_by_machine_id(1, &machine_id),
Some(user1_token.client_url.clone())
);
assert_eq!(
storage.get_client_url_by_machine_id(2, &machine_id),
Some(user2_token.client_url.clone())
);
storage.remove_client(&user1_token);
assert_eq!(storage.get_client_url_by_machine_id(1, &machine_id), None);
assert_eq!(
storage.get_client_url_by_machine_id(2, &machine_id),
Some(user2_token.client_url.clone())
);
storage.remove_client(&user2_token);
assert_eq!(storage.get_client_url_by_machine_id(2, &machine_id), None);
}
}
+1
@@ -11,6 +11,7 @@ pub struct Model {
#[sea_orm(unique)]
pub username: String,
pub password: String,
pub must_change_password: bool,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
+146 -11
@@ -9,7 +9,7 @@ use easytier::{
use entity::user_running_network_configs;
use sea_orm::{
prelude::Expr, sea_query::OnConflict, ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait,
QueryFilter as _, SqlxSqliteConnector, TransactionTrait as _,
QueryFilter as _, Set, SqlxSqliteConnector, TransactionTrait as _,
};
use sea_orm_migration::MigratorTrait as _;
use sqlx::{migrate::MigrateDatabase as _, types::chrono, Sqlite, SqlitePool};
@@ -82,6 +82,58 @@ impl Db {
Ok(user.map(|u| u.id))
}
/// `password_hash` must be pre-hashed by the caller.
/// Creates user + joins "users" group in one transaction. Returns the created user model.
pub async fn create_user_and_join_users_group(
&self,
username: &str,
password_hash: String,
) -> Result<entity::users::Model, DbErr> {
use entity::{groups, users, users_groups};
let txn = self.orm_db().begin().await?;
let user_active = users::ActiveModel {
username: Set(username.to_string()),
password: Set(password_hash),
must_change_password: Set(false),
..Default::default()
};
let insert_result = users::Entity::insert(user_active).exec(&txn).await?;
let new_user = users::Entity::find_by_id(insert_result.last_insert_id)
.one(&txn)
.await?
.ok_or_else(|| DbErr::Custom("Failed to find newly created user".to_string()))?;
let users_group = groups::Entity::find()
.filter(groups::Column::Name.eq("users"))
.one(&txn)
.await?
.ok_or_else(|| DbErr::Custom("Users group not found".to_string()))?;
let ug_active = users_groups::ActiveModel {
user_id: Set(new_user.id),
group_id: Set(users_group.id),
..Default::default()
};
users_groups::Entity::insert(ug_active).exec(&txn).await?;
txn.commit().await?;
Ok(new_user)
}
pub async fn auto_create_user(&self, username: &str) -> Result<entity::users::Model, DbErr> {
let random_password = uuid::Uuid::new_v4().to_string();
let hashed_password =
tokio::task::spawn_blocking(move || password_auth::generate_hash(&random_password))
.await
.map_err(|e| DbErr::Custom(format!("Failed to hash password: {}", e)))?;
self.create_user_and_join_users_group(username, hashed_password)
.await
}
// TODO: currently we don't have a token system, so we just use the user name as token
pub async fn get_user_id_by_token<T: ToString>(
&self,
@@ -103,13 +155,17 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
use entity::user_running_network_configs as urnc;
let on_conflict = OnConflict::column(urnc::Column::NetworkInstanceId)
.update_columns([
urnc::Column::NetworkConfig,
urnc::Column::Disabled,
urnc::Column::UpdateTime,
])
.to_owned();
let on_conflict = OnConflict::columns([
urnc::Column::UserId,
urnc::Column::DeviceId,
urnc::Column::NetworkInstanceId,
])
.update_columns([
urnc::Column::NetworkConfig,
urnc::Column::Disabled,
urnc::Column::UpdateTime,
])
.to_owned();
let insert_m = urnc::ActiveModel {
user_id: sea_orm::Set(user_id),
device_id: sea_orm::Set(device_id.to_string()),
@@ -133,13 +189,14 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
async fn delete_network_configs(
&self,
(user_id, _): (UserIdInDb, Uuid),
(user_id, device_id): (UserIdInDb, Uuid),
network_inst_ids: &[Uuid],
) -> Result<(), DbErr> {
use entity::user_running_network_configs as urnc;
urnc::Entity::delete_many()
.filter(urnc::Column::UserId.eq(user_id))
.filter(urnc::Column::DeviceId.eq(device_id.to_string()))
.filter(
urnc::Column::NetworkInstanceId
.is_in(network_inst_ids.iter().map(|id| id.to_string())),
@@ -152,7 +209,7 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
async fn update_network_config_state(
&self,
(user_id, _): (UserIdInDb, Uuid),
(user_id, device_id): (UserIdInDb, Uuid),
network_inst_id: Uuid,
disabled: bool,
) -> Result<(), DbErr> {
@@ -160,6 +217,7 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
urnc::Entity::update_many()
.filter(urnc::Column::UserId.eq(user_id))
.filter(urnc::Column::DeviceId.eq(device_id.to_string()))
.filter(urnc::Column::NetworkInstanceId.eq(network_inst_id.to_string()))
.col_expr(urnc::Column::Disabled, Expr::value(disabled))
.col_expr(
@@ -223,7 +281,28 @@ mod tests {
use easytier::{proto::api::manage::NetworkConfig, rpc_service::remote_client::Storage};
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter as _};
use crate::db::{entity::user_running_network_configs, Db, ListNetworkProps};
use crate::db::{
entity::{user_running_network_configs, users},
Db, ListNetworkProps,
};
#[tokio::test]
async fn created_users_default_to_not_requiring_password_change() {
let db = Db::memory_db().await;
let user = db
.create_user_and_join_users_group("created-user", "pre-hashed-password".to_string())
.await
.unwrap();
let stored = users::Entity::find_by_id(user.id)
.one(db.orm_db())
.await
.unwrap()
.unwrap();
assert!(!stored.must_change_password);
}
#[tokio::test]
async fn test_user_network_config_management() {
@@ -290,4 +369,60 @@ mod tests {
.unwrap();
assert!(result3.is_none());
}
#[tokio::test]
async fn test_user_network_config_same_instance_id_is_scoped_by_device() {
let db = Db::memory_db().await;
let user_id = db.auto_create_user("user-1").await.unwrap().id;
let device1 = uuid::Uuid::new_v4();
let device2 = uuid::Uuid::new_v4();
let inst_id = uuid::Uuid::new_v4();
db.insert_or_update_user_network_config(
(user_id, device1),
inst_id,
NetworkConfig {
network_name: Some("cfg-1".to_string()),
..Default::default()
},
)
.await
.unwrap();
db.insert_or_update_user_network_config(
(user_id, device2),
inst_id,
NetworkConfig {
network_name: Some("cfg-2".to_string()),
..Default::default()
},
)
.await
.unwrap();
let first = db
.get_network_config((user_id, device1), &inst_id.to_string())
.await
.unwrap()
.unwrap();
let second = db
.get_network_config((user_id, device2), &inst_id.to_string())
.await
.unwrap()
.unwrap();
assert_eq!(first.user_id, user_id);
assert_eq!(first.device_id, device1.to_string());
assert_eq!(second.user_id, user_id);
assert_eq!(second.device_id, device2.to_string());
let device1_configs = db
.list_network_configs((user_id, device1), ListNetworkProps::All)
.await
.unwrap();
let device2_configs = db
.list_network_configs((user_id, device2), ListNetworkProps::All)
.await
.unwrap();
assert_eq!(device1_configs.len(), 1);
assert_eq!(device2_configs.len(), 1);
}
}
+153 -27
@@ -3,28 +3,32 @@
#[macro_use]
extern crate rust_i18n;
use std::net::IpAddr;
use std::sync::Arc;
use clap::Parser;
use easytier::tunnel::websocket::WsTunnelListener;
use easytier::{
common::{
config::{ConsoleLoggerConfig, FileLoggerConfig, LoggingConfigLoader},
constants::EASYTIER_VERSION,
error::Error,
log,
network::{local_ipv4, local_ipv6},
},
tunnel::{
tcp::TcpTunnelListener, udp::UdpTunnelListener, websocket::WSTunnelListener, TunnelListener,
},
utils::{init_logger, setup_panic_handler},
tunnel::{tcp::TcpTunnelListener, udp::UdpTunnelListener, TunnelListener},
utils::setup_panic_handler,
};
use easytier::tunnel::IpScheme;
use easytier::utils::BoxExt;
use mimalloc::MiMalloc;
mod client_manager;
mod db;
mod migrator;
mod restful;
mod webhook;
#[cfg(feature = "embed")]
mod web;
@@ -82,6 +86,13 @@ struct Cli {
)]
api_server_port: u16,
#[arg(
long,
default_value = "0.0.0.0",
help = t!("cli.api_server_addr").to_string(),
)]
api_server_addr: IpAddr,
#[arg(
long,
help = t!("cli.geoip_db").to_string(),
@@ -96,6 +107,14 @@ struct Cli {
)]
web_server_port: Option<u16>,
#[cfg(feature = "embed")]
#[arg(
long,
default_value = "0.0.0.0",
help = t!("cli.web_server_addr").to_string(),
)]
web_server_addr: IpAddr,
#[cfg(feature = "embed")]
#[arg(
long,
@@ -110,6 +129,51 @@ struct Cli {
help = t!("cli.api_host").to_string()
)]
api_host: Option<url::Url>,
#[command(flatten)]
feature_flags: FeatureFlags,
#[command(flatten)]
oidc: restful::oidc::OidcOptions,
#[command(flatten)]
webhook: WebhookOptions,
}
#[derive(Debug, Clone, Default, clap::Args)]
pub struct WebhookOptions {
/// Base URL of the webhook endpoint for token validation and event delivery.
/// When set, incoming tokens are validated via this webhook before local fallback.
#[arg(long)]
pub webhook_url: Option<String>,
/// Shared secret used to authenticate outbound webhook calls.
#[arg(long)]
pub webhook_secret: Option<String>,
/// Token for X-Internal-Auth header. When set, API requests with this header
/// bypass session authentication.
#[arg(long)]
pub internal_auth_token: Option<String>,
/// Stable identifier for this easytier-web instance when routing webhook callbacks.
#[arg(long)]
pub web_instance_id: Option<String>,
/// Reachable base URL for this easytier-web instance's internal REST API.
#[arg(long)]
pub web_instance_api_base_url: Option<String>,
}
#[derive(Debug, Clone, Default, clap::Args)]
pub struct FeatureFlags {
/// Whether user registration via the web UI is disabled.
#[arg(long, default_value = "false", help = t!("cli.disable_registration").to_string())]
pub disable_registration: bool,
/// Whether to auto-create users when they connect via heartbeat with an unknown token.
#[arg(long, default_value = "false", help = t!("cli.allow_auto_create_user").to_string())]
pub allow_auto_create_user: bool,
}
impl LoggingConfigLoader for &Cli {
@@ -130,14 +194,12 @@ impl LoggingConfigLoader for &Cli {
}
}
pub fn get_listener_by_url(l: &url::Url) -> Result<Box<dyn TunnelListener>, Error> {
Ok(match l.scheme() {
"tcp" => Box::new(TcpTunnelListener::new(l.clone())),
"udp" => Box::new(UdpTunnelListener::new(l.clone())),
"ws" => Box::new(WSTunnelListener::new(l.clone())),
_ => {
return Err(Error::InvalidUrl(l.to_string()));
}
pub fn get_listener_by_url(scheme: IpScheme, l: &url::Url) -> Option<Box<dyn TunnelListener>> {
Some(match scheme {
IpScheme::Tcp => TcpTunnelListener::new(l.clone()).boxed(),
IpScheme::Udp => UdpTunnelListener::new(l.clone()).boxed(),
IpScheme::Ws => WsTunnelListener::new(l.clone()).boxed(),
_ => return None,
})
}
@@ -151,15 +213,23 @@ async fn get_dual_stack_listener(
),
Error,
> {
let is_protocol_support_dual_stack =
protocol.trim().to_lowercase() == "tcp" || protocol.trim().to_lowercase() == "udp";
let v6_listener = if is_protocol_support_dual_stack && local_ipv6().await.is_ok() {
get_listener_by_url(&format!("{}://[::0]:{}", protocol, port).parse().unwrap()).ok()
} else {
None
};
let scheme = protocol
.parse()
.map_err(|_| Error::InvalidUrl(protocol.to_string()))?;
let v6_listener =
if local_ipv6().await.is_ok() && matches!(scheme, IpScheme::Tcp | IpScheme::Udp) {
get_listener_by_url(
scheme,
&format!("{protocol}://[::]:{port}").parse().unwrap(),
)
} else {
None
};
let v4_listener = if local_ipv4().await.is_ok() {
get_listener_by_url(&format!("{}://0.0.0.0:{}", protocol, port).parse().unwrap()).ok()
get_listener_by_url(
scheme,
&format!("{protocol}://0.0.0.0:{port}").parse().unwrap(),
)
} else {
None
};
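The dual-stack selection above reduces to a small predicate: an IPv6 listener is only attempted for TCP and UDP (the schemes that support dual-stack here), and each address family additionally requires a usable local address of that family. A standalone sketch of that decision — the `IpScheme` enum below is a toy stand-in for easytier's type, not the real one:

```rust
// Toy stand-in for easytier's IpScheme; only the variants used here.
#[derive(Clone, Copy, Debug)]
enum IpScheme {
    Tcp,
    Udp,
    Ws,
}

// Returns (bind_v6, bind_v4): v6 requires a dual-stack-capable scheme
// plus a local IPv6 address; v4 only requires a local IPv4 address.
fn should_bind(scheme: IpScheme, has_local_v6: bool, has_local_v4: bool) -> (bool, bool) {
    let v6 = has_local_v6 && matches!(scheme, IpScheme::Tcp | IpScheme::Udp);
    let v4 = has_local_v4;
    (v6, v4)
}

fn main() {
    assert_eq!(should_bind(IpScheme::Udp, true, true), (true, true));
    // WebSocket never gets a dedicated v6 listener in this scheme.
    assert_eq!(should_bind(IpScheme::Ws, true, true), (false, true));
    assert_eq!(should_bind(IpScheme::Tcp, false, true), (false, true));
    println!("ok");
}
```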
@@ -173,11 +243,50 @@ async fn main() {
setup_panic_handler();
let cli = Cli::parse();
init_logger(&cli, false).unwrap();
log::init(&cli, false).unwrap();
// Validate OIDC configuration: check split-deploy specific requirements
// Basic OIDC parameter validation is handled in OidcConfig::from_params
if cli.oidc.any_param_provided() {
let is_split_deploy = {
#[cfg(feature = "embed")]
{
let embed_split_by_port = cli.web_server_port.is_some()
&& cli.web_server_port != Some(cli.api_server_port);
cli.no_web || embed_split_by_port
}
#[cfg(not(feature = "embed"))]
{
true
}
};
if is_split_deploy && cli.oidc.oidc_frontend_base_url.is_none() {
eprintln!("Error: --oidc-frontend-base-url is required in split-deploy mode");
eprintln!(
"When frontend and API are deployed separately, you must specify the frontend URL"
);
eprintln!("Example: --oidc-frontend-base-url http://your-frontend-domain.com");
std::process::exit(1);
}
}
// let db = db::Db::new(":memory:").await.unwrap();
let db = db::Db::new(cli.db).await.unwrap();
let mut mgr = client_manager::ClientManager::new(db.clone(), cli.geoip_db);
let feature_flags = Arc::new(cli.feature_flags);
let webhook_config = Arc::new(webhook::WebhookConfig::new(
cli.webhook.webhook_url,
cli.webhook.webhook_secret,
cli.webhook.internal_auth_token,
cli.webhook.web_instance_id,
cli.webhook.web_instance_api_base_url,
));
let mut mgr = client_manager::ClientManager::new(
db.clone(),
cli.geoip_db,
feature_flags.clone(),
webhook_config.clone(),
);
let (v6_listener, v4_listener) =
get_dual_stack_listener(&cli.config_server_protocol, cli.config_server_port)
.await
@@ -199,7 +308,10 @@ async fn main() {
(None, None)
} else {
let web_router = web::build_router(cli.api_host.clone());
if cli.web_server_port.is_none() || cli.web_server_port == Some(cli.api_server_port) {
if cli.web_server_port.is_none()
|| (cli.web_server_port == Some(cli.api_server_port)
&& cli.web_server_addr == cli.api_server_addr)
{
(Some(web_router), None)
} else {
(None, Some(web_router))
@@ -208,11 +320,27 @@ async fn main() {
#[cfg(not(feature = "embed"))]
let web_router_restful = None;
let oidc_config = if cli.oidc.oidc_issuer_url.is_some() {
match restful::oidc::OidcConfig::from_params(cli.oidc).await {
Ok(config) => config,
Err(e) => {
eprintln!("Failed to initialize OIDC: {:?}", e);
eprintln!("Please check your OIDC configuration (issuer URL, client ID, etc.)");
std::process::exit(1);
}
}
} else {
restful::oidc::OidcConfig::disabled()
};
let _restful_server_tasks = restful::RestfulServer::new(
format!("0.0.0.0:{}", cli.api_server_port).parse().unwrap(),
std::net::SocketAddr::new(cli.api_server_addr, cli.api_server_port),
mgr.clone(),
db,
web_router_restful,
feature_flags,
oidc_config,
webhook_config,
)
.await
.unwrap()
@@ -224,9 +352,7 @@ async fn main() {
let _web_server_task = if let Some(web_router) = web_router_static {
Some(
web::WebServer::new(
format!("0.0.0.0:{}", cli.web_server_port.unwrap_or(0))
.parse()
.unwrap(),
std::net::SocketAddr::new(cli.web_server_addr, cli.web_server_port.unwrap_or(0)),
web_router,
)
.await
@@ -0,0 +1,120 @@
use sea_orm_migration::prelude::*;
pub struct Migration;
impl MigrationName for Migration {
fn name(&self) -> &str {
"m20260403_000002_scope_network_config_unique"
}
}
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
let db = manager.get_connection();
db.execute_unprepared(
r#"
CREATE TABLE user_running_network_configs_new (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
user_id INTEGER NOT NULL,
device_id TEXT NOT NULL,
network_instance_id TEXT NOT NULL,
network_config TEXT NOT NULL,
disabled BOOLEAN NOT NULL DEFAULT FALSE,
create_time TEXT NOT NULL,
update_time TEXT NOT NULL,
CONSTRAINT fk_user_running_network_configs_user_id_to_users_id
FOREIGN KEY (user_id) REFERENCES users(id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
INSERT INTO user_running_network_configs_new (
id,
user_id,
device_id,
network_instance_id,
network_config,
disabled,
create_time,
update_time
)
SELECT
id,
user_id,
device_id,
network_instance_id,
network_config,
disabled,
create_time,
update_time
FROM user_running_network_configs;
DROP TABLE user_running_network_configs;
ALTER TABLE user_running_network_configs_new RENAME TO user_running_network_configs;
CREATE INDEX idx_user_running_network_configs_user_id
ON user_running_network_configs(user_id);
CREATE UNIQUE INDEX idx_user_running_network_configs_scope_inst
ON user_running_network_configs(user_id, device_id, network_instance_id);
"#,
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
let db = manager.get_connection();
db.execute_unprepared(
r#"
CREATE TABLE user_running_network_configs_old (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
user_id INTEGER NOT NULL,
device_id TEXT NOT NULL,
network_instance_id TEXT NOT NULL UNIQUE,
network_config TEXT NOT NULL,
disabled BOOLEAN NOT NULL DEFAULT FALSE,
create_time TEXT NOT NULL,
update_time TEXT NOT NULL,
CONSTRAINT fk_user_running_network_configs_user_id_to_users_id
FOREIGN KEY (user_id) REFERENCES users(id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
INSERT INTO user_running_network_configs_old (
id,
user_id,
device_id,
network_instance_id,
network_config,
disabled,
create_time,
update_time
)
SELECT
id,
user_id,
device_id,
network_instance_id,
network_config,
disabled,
create_time,
update_time
FROM user_running_network_configs;
DROP TABLE user_running_network_configs;
ALTER TABLE user_running_network_configs_old RENAME TO user_running_network_configs;
CREATE INDEX idx_user_running_network_configs_user_id
ON user_running_network_configs(user_id);
"#,
)
.await?;
Ok(())
}
}
@@ -0,0 +1,129 @@
use sea_orm_migration::prelude::*;
pub struct Migration;
const DEFAULT_USER_PASSWORD_HASH: &str =
"$argon2i$v=19$m=16,t=2,p=1$aGVyRDBrcnRycnlaMDhkbw$449SEcv/qXf+0fnI9+fYVQ";
const DEFAULT_ADMIN_PASSWORD_HASH: &str =
"$argon2i$v=19$m=16,t=2,p=1$bW5idXl0cmY$61n+JxL4r3dwLPAEDlDdtg";
#[derive(DeriveIden)]
enum Users {
Table,
Username,
Password,
MustChangePassword,
}
impl MigrationName for Migration {
fn name(&self) -> &str {
"m20260405_000003_add_must_change_password"
}
}
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Users::Table)
.add_column(
ColumnDef::new(Users::MustChangePassword)
.boolean()
.not_null()
.default(false),
)
.to_owned(),
)
.await?;
manager
.exec_stmt(
Query::update()
.table(Users::Table)
.value(Users::MustChangePassword, true)
.cond_where(any![
Expr::col(Users::Username)
.eq("admin")
.and(Expr::col(Users::Password).eq(DEFAULT_ADMIN_PASSWORD_HASH)),
Expr::col(Users::Username)
.eq("user")
.and(Expr::col(Users::Password).eq(DEFAULT_USER_PASSWORD_HASH)),
])
.to_owned(),
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Users::Table)
.drop_column(Users::MustChangePassword)
.to_owned(),
)
.await?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter as _, SqlxSqliteConnector};
use sea_orm_migration::prelude::SchemaManager;
use sqlx::sqlite::SqlitePoolOptions;
use super::{Migration, MigrationTrait, DEFAULT_USER_PASSWORD_HASH};
use crate::db::entity::users;
async fn find_user(db: &sea_orm::DatabaseConnection, username: &str) -> users::Model {
users::Entity::find()
.filter(users::Column::Username.eq(username))
.one(db)
.await
.unwrap()
.unwrap()
}
#[tokio::test]
async fn migration_only_marks_seeded_accounts_still_using_default_passwords() {
let pool = SqlitePoolOptions::new()
.max_connections(1)
.connect("sqlite::memory:")
.await
.unwrap();
sqlx::query(
"CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT NOT NULL UNIQUE,
password TEXT NOT NULL
)",
)
.execute(&pool)
.await
.unwrap();
let changed_admin_password = password_auth::generate_hash("already-changed");
sqlx::query("INSERT INTO users (username, password) VALUES (?, ?), (?, ?)")
.bind("admin")
.bind(changed_admin_password)
.bind("user")
.bind(DEFAULT_USER_PASSWORD_HASH)
.execute(&pool)
.await
.unwrap();
let db = SqlxSqliteConnector::from_sqlx_sqlite_pool(pool);
Migration.up(&SchemaManager::new(&db)).await.unwrap();
assert!(!find_user(&db, "admin").await.must_change_password);
assert!(find_user(&db, "user").await.must_change_password);
}
}
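The `UPDATE` in the migration above encodes a simple predicate: only the built-in `admin` and `user` accounts that still carry their shipped password hash get flagged. A standalone sketch of that predicate, reusing the same constants (the function name is illustrative, not part of the migration):

```rust
// Shipped password hashes for the seeded accounts (copied from the migration).
const DEFAULT_USER_PASSWORD_HASH: &str =
    "$argon2i$v=19$m=16,t=2,p=1$aGVyRDBrcnRycnlaMDhkbw$449SEcv/qXf+0fnI9+fYVQ";
const DEFAULT_ADMIN_PASSWORD_HASH: &str =
    "$argon2i$v=19$m=16,t=2,p=1$bW5idXl0cmY$61n+JxL4r3dwLPAEDlDdtg";

// True only for a seeded account whose stored hash is still the shipped one.
fn must_change_password(username: &str, password_hash: &str) -> bool {
    (username == "admin" && password_hash == DEFAULT_ADMIN_PASSWORD_HASH)
        || (username == "user" && password_hash == DEFAULT_USER_PASSWORD_HASH)
}

fn main() {
    assert!(must_change_password("admin", DEFAULT_ADMIN_PASSWORD_HASH));
    // A rotated hash clears the condition, matching the migration test above.
    assert!(!must_change_password("admin", "some-rotated-hash"));
    // Non-seeded accounts are never flagged, even with a default hash.
    assert!(!must_change_password("alice", DEFAULT_ADMIN_PASSWORD_HASH));
    println!("ok");
}
```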
@@ -1,12 +1,18 @@
use sea_orm_migration::prelude::*;
mod m20241029_000001_init;
mod m20260403_000002_scope_network_config_unique;
mod m20260405_000003_add_must_change_password;
pub struct Migrator;
#[async_trait::async_trait]
impl MigratorTrait for Migrator {
fn migrations() -> Vec<Box<dyn MigrationTrait>> {
vec![Box::new(m20241029_000001_init::Migration)]
vec![
Box::new(m20241029_000001_init::Migration),
Box::new(m20260403_000002_scope_network_config_unique::Migration),
Box::new(m20260405_000003_add_must_change_password::Migration),
]
}
}
@@ -4,19 +4,22 @@ use axum::{
Router,
};
use axum_login::login_required;
use axum_messages::Message;
use serde::{Deserialize, Serialize};
use serde::Serialize;
use crate::restful::users::Backend;
use std::sync::Arc;
use crate::FeatureFlags;
use super::{
users::{AuthSession, Credentials},
AppStateInner,
};
#[derive(Debug, Deserialize, Serialize)]
pub struct LoginResult {
messages: Vec<Message>,
#[derive(Debug, Serialize)]
pub struct AuthStatusResponse {
must_change_password: bool,
}
pub fn router() -> Router<AppStateInner> {
@@ -36,12 +39,15 @@ pub fn router() -> Router<AppStateInner> {
}
mod put {
use crate::restful::{
other_error,
users::{ChangePassword, ChangePasswordError},
HttpHandleError,
};
use axum::Json;
use axum_login::AuthUser;
use easytier::proto::common::Void;
use crate::restful::{other_error, users::ChangePassword, HttpHandleError};
use super::*;
pub async fn change_password(
@@ -54,20 +60,26 @@ mod put {
.await
{
tracing::error!("Failed to change password: {:?}", e);
return Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json::from(other_error(format!("{:?}", e))),
));
let (status, message) = match &e {
ChangePasswordError::EmptyPassword => {
(StatusCode::BAD_REQUEST, "password cannot be empty")
}
ChangePasswordError::UserNotFound | ChangePasswordError::Db(_) => (
StatusCode::INTERNAL_SERVER_ERROR,
"failed to change password",
),
};
return Err((status, Json::from(other_error(message.to_string()))));
}
let _ = auth_session.logout().await;
Ok(Void::default().into())
Ok(Json(Void::default()))
}
}
mod post {
use axum::Json;
use axum::{extract::Extension, Json};
use easytier::proto::common::Void;
use crate::restful::{
@@ -82,7 +94,7 @@ mod post {
pub async fn login(
mut auth_session: AuthSession,
Json(creds): Json<Credentials>,
) -> Result<Json<Void>, HttpHandleError> {
) -> Result<Json<AuthStatusResponse>, HttpHandleError> {
let user = match auth_session.authenticate(creds.clone()).await {
Ok(Some(user)) => user,
Ok(None) => {
@@ -106,14 +118,26 @@ mod post {
));
}
Ok(Void::default().into())
Ok(Json(AuthStatusResponse {
must_change_password: user.db_user.must_change_password,
}))
}
pub async fn register(
Extension(feature_flags): Extension<Arc<FeatureFlags>>,
auth_session: AuthSession,
captcha_session: tower_sessions::Session,
Json(req): Json<RegisterNewUser>,
) -> Result<Json<Void>, HttpHandleError> {
// Check if registration is disabled
if feature_flags.disable_registration {
tracing::warn!("Registration attempt blocked: registration is disabled");
return Err((
StatusCode::FORBIDDEN,
other_error("Registration is disabled").into(),
));
}
// Verify the captcha via CaptchaUtil's static method
if !CaptchaUtil::ver(&req.captcha, &captcha_session).await {
return Err((
@@ -175,9 +199,11 @@ mod get {
pub async fn check_login_status(
auth_session: AuthSession,
) -> Result<Json<Void>, HttpHandleError> {
if auth_session.user.is_some() {
Ok(Json(Void::default()))
) -> Result<Json<AuthStatusResponse>, HttpHandleError> {
if let Some(user) = auth_session.user {
Ok(Json(AuthStatusResponse {
must_change_password: user.db_user.must_change_password,
}))
} else {
Err((
StatusCode::UNAUTHORIZED,
@@ -1,13 +1,18 @@
mod auth;
pub(crate) mod captcha;
mod network;
pub(crate) mod oidc;
mod rpc;
mod users;
use std::{net::SocketAddr, sync::Arc};
use axum::http::StatusCode;
use axum::routing::post;
use axum::{extract::State, routing::get, Json, Router};
use axum::extract::Path;
use axum::http::{header, Request, StatusCode};
use axum::middleware::{self as axum_mw, Next};
use axum::response::Response;
use axum::routing::{delete, post};
use axum::{extract::State, routing::get, Extension, Json, Router};
use axum_login::tower_sessions::{ExpiredDeletion, SessionManagerLayer};
use axum_login::{login_required, AuthManagerLayerBuilder, AuthUser, AuthzBackend};
use axum_messages::MessagesManagerLayer;
@@ -19,14 +24,16 @@ use network::NetworkApi;
use sea_orm::DbErr;
use tokio::net::TcpListener;
use tower_sessions::cookie::time::Duration;
use tower_sessions::cookie::Key;
use tower_sessions::cookie::{Key, SameSite};
use tower_sessions::Expiry;
use tower_sessions_sqlx_store::SqliteStore;
use users::{AuthSession, Backend};
use crate::client_manager::storage::StorageToken;
use crate::client_manager::ClientManager;
use crate::db::Db;
use crate::db::{Db, UserIdInDb};
use crate::webhook::SharedWebhookConfig;
use crate::FeatureFlags;
/// Embed assets for web dashboard, build frontend first
#[cfg(feature = "embed")]
@@ -37,11 +44,10 @@ struct Assets;
pub struct RestfulServer {
bind_addr: SocketAddr,
client_mgr: Arc<ClientManager>,
feature_flags: Arc<FeatureFlags>,
webhook_config: SharedWebhookConfig,
db: Db,
// serve_task: Option<ScopedTask<()>>,
// delete_task: Option<ScopedTask<tower_sessions::session_store::Result<()>>>,
// network_api: NetworkApi<WebClientManager>,
oidc_config: oidc::OidcConfig,
web_router: Option<Router>,
}
@@ -104,18 +110,19 @@ impl RestfulServer {
client_mgr: Arc<ClientManager>,
db: Db,
web_router: Option<Router>,
feature_flags: Arc<FeatureFlags>,
oidc_config: oidc::OidcConfig,
webhook_config: SharedWebhookConfig,
) -> anyhow::Result<Self> {
assert!(client_mgr.is_running());
// let network_api = NetworkApi::new();
Ok(RestfulServer {
bind_addr,
client_mgr,
feature_flags,
webhook_config,
db,
// serve_task: None,
// delete_task: None,
// network_api,
oidc_config,
web_router,
})
}
@@ -219,6 +226,7 @@ impl RestfulServer {
let session_layer = SessionManagerLayer::new(session_store)
.with_secure(false)
.with_same_site(SameSite::Lax)
.with_expiry(Expiry::OnInactivity(Duration::days(1)))
.with_signed(key);
@@ -235,23 +243,54 @@ impl RestfulServer {
.zstd(true)
.quality(tower_http::compression::CompressionLevel::Default);
let app = Router::new()
// Token-authenticated management routes that bypass session auth.
let internal_app = if self.webhook_config.has_internal_auth() {
let internal_token = self.webhook_config.internal_auth_token.clone().unwrap();
let internal_routes = Router::new()
.route(
"/api/internal/sessions",
get(Self::handle_list_all_sessions_internal),
)
.route(
"/api/internal/users/:user-id/sessions/:machine-id",
delete(Self::handle_disconnect_session_internal),
)
.merge(NetworkApi::build_route_internal())
.merge(rpc::router_internal())
.with_state(self.client_mgr.clone())
.layer(axum_mw::from_fn(move |req, next| {
let token = internal_token.clone();
internal_auth_middleware(token, req, next)
}));
Some(internal_routes)
} else {
None
};
let mut app = Router::new()
.route("/api/v1/summary", get(Self::handle_get_summary))
.route("/api/v1/sessions", get(Self::handle_list_all_sessions))
.merge(NetworkApi::build_route())
.merge(rpc::router())
.route_layer(login_required!(Backend))
.merge(auth::router())
.merge(auth::router().layer(Extension(self.feature_flags.clone())))
.merge(oidc::router())
.with_state(self.client_mgr.clone())
.route(
"/api/v1/generate-config",
post(Self::handle_generate_config),
)
.route("/api/v1/parse-config", post(Self::handle_parse_config))
.layer(Extension(self.oidc_config.clone()))
.layer(MessagesManagerLayer)
.layer(auth_layer)
.layer(tower_http::cors::CorsLayer::very_permissive())
.layer(compression_layer);
if let Some(internal_routes) = internal_app {
app = app.merge(internal_routes);
}
#[cfg(feature = "embed")]
let app = if let Some(web_router) = self.web_router.take() {
app.merge(web_router)
@@ -266,4 +305,52 @@ impl RestfulServer {
Ok((serve_task, delete_task))
}
/// Session listing endpoint for token-authenticated management clients.
async fn handle_list_all_sessions_internal(
State(client_mgr): AppState,
) -> Result<Json<ListSessionJsonResp>, HttpHandleError> {
let ret = client_mgr.list_sessions().await;
Ok(ListSessionJsonResp(ret).into())
}
async fn handle_disconnect_session_internal(
Path((user_id, machine_id)): Path<(UserIdInDb, uuid::Uuid)>,
State(client_mgr): AppState,
) -> Result<StatusCode, HttpHandleError> {
if client_mgr
.disconnect_session_by_machine_id(user_id, &machine_id)
.await
{
Ok(StatusCode::NO_CONTENT)
} else {
Err((
StatusCode::NOT_FOUND,
other_error("session not found").into(),
))
}
}
}
/// Middleware that validates X-Internal-Auth for token-authenticated routes.
async fn internal_auth_middleware(
expected_token: String,
req: Request<axum::body::Body>,
next: Next,
) -> Response {
let auth_header = req
.headers()
.get("X-Internal-Auth")
.and_then(|v| v.to_str().ok());
match auth_header {
Some(token) if token == expected_token => next.run(req).await,
_ => Response::builder()
.status(StatusCode::UNAUTHORIZED)
.header(header::CONTENT_TYPE, "application/json")
.body(axum::body::Body::from(
r#"{"error":"unauthorized: invalid or missing X-Internal-Auth header"}"#,
))
.unwrap(),
}
}
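The middleware's accept/reject decision can be reduced to a pure function over the header value, which makes the matching rule easy to test without axum types. This is an illustrative sketch, not the project's code; note that the diff compares the internal token with plain `==` (unlike the CSRF check, which uses a constant-time comparison):

```rust
// Pure form of the X-Internal-Auth check: accept only when the header is
// present and exactly equals the configured token.
fn is_authorized(header: Option<&str>, expected_token: &str) -> bool {
    matches!(header, Some(token) if token == expected_token)
}

fn main() {
    assert!(is_authorized(Some("s3cret"), "s3cret"));
    // Missing or mismatched headers are both rejected as 401.
    assert!(!is_authorized(None, "s3cret"));
    assert!(!is_authorized(Some("wrong"), "s3cret"));
    println!("ok");
}
```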
@@ -295,6 +295,70 @@ impl NetworkApi {
.into())
}
// --- Token-authenticated machine-scoped handlers (no AuthSession) ---
async fn handle_run_network_instance_internal(
State(client_mgr): AppState,
Path((user_id, machine_id)): Path<(UserIdInDb, uuid::Uuid)>,
Json(payload): Json<RunNetworkJsonReq>,
) -> Result<Json<Void>, HttpHandleError> {
client_mgr
.handle_run_network_instance((user_id, machine_id), payload.config, payload.save)
.await
.map_err(convert_error)?;
Ok(Void::default().into())
}
async fn handle_remove_network_instance_internal(
State(client_mgr): AppState,
Path((user_id, machine_id, inst_id)): Path<(UserIdInDb, uuid::Uuid, uuid::Uuid)>,
) -> Result<(), HttpHandleError> {
client_mgr
.handle_remove_network_instances((user_id, machine_id), vec![inst_id])
.await
.map_err(convert_error)
}
async fn handle_list_network_instance_ids_internal(
State(client_mgr): AppState,
Path((user_id, machine_id)): Path<(UserIdInDb, uuid::Uuid)>,
) -> Result<Json<ListNetworkInstanceIdsJsonResp>, HttpHandleError> {
Ok(client_mgr
.handle_list_network_instance_ids((user_id, machine_id))
.await
.map_err(convert_error)?
.into())
}
async fn handle_collect_network_info_internal(
State(client_mgr): AppState,
Path((user_id, machine_id)): Path<(UserIdInDb, uuid::Uuid)>,
Json(payload): Json<CollectNetworkInfoJsonReq>,
) -> Result<Json<CollectNetworkInfoResponse>, HttpHandleError> {
Ok(client_mgr
.handle_collect_network_info((user_id, machine_id), payload.inst_ids)
.await
.map_err(convert_error)?
.into())
}
pub fn build_route_internal() -> Router<AppStateInner> {
Router::new()
.route(
"/api/internal/users/:user-id/machines/:machine-id/networks",
post(Self::handle_run_network_instance_internal)
.get(Self::handle_list_network_instance_ids_internal),
)
.route(
"/api/internal/users/:user-id/machines/:machine-id/networks/:inst-id",
delete(Self::handle_remove_network_instance_internal),
)
.route(
"/api/internal/users/:user-id/machines/:machine-id/networks/info",
get(Self::handle_collect_network_info_internal),
)
}
pub fn build_route() -> Router<AppStateInner> {
Router::new()
.route("/api/v1/machines", get(Self::handle_list_machines))
@@ -0,0 +1,734 @@
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
use subtle::ConstantTimeEq;
use axum::routing::get;
use axum::Router;
use openidconnect::core::{
CoreAuthDisplay, CoreAuthPrompt, CoreErrorResponseType, CoreGenderClaim, CoreJsonWebKey,
CoreJweContentEncryptionAlgorithm, CoreJwsSigningAlgorithm, CoreProviderMetadata,
CoreRevocableToken, CoreRevocationErrorResponse, CoreTokenIntrospectionResponse, CoreTokenType,
};
use openidconnect::{
Client, ClientId, ClientSecret, EmptyExtraTokenFields, EndpointMaybeSet, EndpointNotSet,
EndpointSet, IdTokenFields, IssuerUrl, RedirectUrl, StandardErrorResponse,
StandardTokenResponse,
};
use serde::{Deserialize, Serialize};
use super::AppStateInner;
const DEFAULT_OIDC_SCOPES: [&str; 2] = ["openid", "profile"];
fn normalize_oidc_scopes(scopes: &[String]) -> Vec<String> {
let mut normalized: Vec<String> = scopes
.iter()
.map(|scope| scope.trim().to_string())
.filter(|scope| !scope.is_empty())
.collect();
if normalized.is_empty() {
normalized = DEFAULT_OIDC_SCOPES
.iter()
.map(|scope| scope.to_string())
.collect();
}
if !normalized.iter().any(|scope| scope == "openid") {
normalized.insert(0, "openid".to_string());
}
normalized
}
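`normalize_oidc_scopes` trims each entry, drops empties, falls back to the defaults when nothing survives, and guarantees `openid` is always present. A self-contained sketch of that behavior (same logic as above, reproduced so it runs standalone):

```rust
const DEFAULT_OIDC_SCOPES: [&str; 2] = ["openid", "profile"];

// Trim, drop empties, default when empty, and force "openid" to the front
// when it is missing.
fn normalize_oidc_scopes(scopes: &[String]) -> Vec<String> {
    let mut normalized: Vec<String> = scopes
        .iter()
        .map(|scope| scope.trim().to_string())
        .filter(|scope| !scope.is_empty())
        .collect();
    if normalized.is_empty() {
        normalized = DEFAULT_OIDC_SCOPES.iter().map(|s| s.to_string()).collect();
    }
    if !normalized.iter().any(|scope| scope == "openid") {
        normalized.insert(0, "openid".to_string());
    }
    normalized
}

fn main() {
    // Whitespace-only input collapses to the defaults.
    assert_eq!(
        normalize_oidc_scopes(&["  ".to_string()]),
        vec!["openid".to_string(), "profile".to_string()]
    );
    // "openid" is prepended when the operator omits it.
    assert_eq!(
        normalize_oidc_scopes(&[" email ".to_string()]),
        vec!["openid".to_string(), "email".to_string()]
    );
    println!("ok");
}
```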
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct JsonAdditionalClaims {
#[serde(flatten)]
pub claims: HashMap<String, serde_json::Value>,
}
impl openidconnect::AdditionalClaims for JsonAdditionalClaims {}
pub type AppIdTokenFields = IdTokenFields<
JsonAdditionalClaims,
EmptyExtraTokenFields,
CoreGenderClaim,
CoreJweContentEncryptionAlgorithm,
CoreJwsSigningAlgorithm,
>;
pub type AppTokenResponse = StandardTokenResponse<AppIdTokenFields, CoreTokenType>;
pub type AppClient<
HasAuthUrl = EndpointNotSet,
HasDeviceAuthUrl = EndpointNotSet,
HasIntrospectionUrl = EndpointNotSet,
HasRevocationUrl = EndpointNotSet,
HasTokenUrl = EndpointNotSet,
HasUserInfoUrl = EndpointNotSet,
> = Client<
JsonAdditionalClaims,
CoreAuthDisplay,
CoreGenderClaim,
CoreJweContentEncryptionAlgorithm,
CoreJsonWebKey,
CoreAuthPrompt,
StandardErrorResponse<CoreErrorResponseType>,
AppTokenResponse,
CoreTokenIntrospectionResponse,
CoreRevocableToken,
CoreRevocationErrorResponse,
HasAuthUrl,
HasDeviceAuthUrl,
HasIntrospectionUrl,
HasRevocationUrl,
HasTokenUrl,
HasUserInfoUrl,
>;
pub type ConfiguredAppClient = AppClient<
EndpointSet,
EndpointNotSet,
EndpointNotSet,
EndpointNotSet,
EndpointMaybeSet,
EndpointMaybeSet,
>;
/// Convert a dot-path (e.g. `realm_access.roles.0`) to a JSON Pointer (e.g. `/realm_access/roles/0`).
/// Each segment is escaped per RFC 6901: `~` → `~0`, `/` → `~1`.
fn dot_path_to_json_pointer(dot_path: &str) -> String {
let mut pointer = String::new();
for segment in dot_path.split('.') {
pointer.push('/');
for ch in segment.chars() {
match ch {
'~' => pointer.push_str("~0"),
'/' => pointer.push_str("~1"),
_ => pointer.push(ch),
}
}
}
pointer
}
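The RFC 6901 escaping above is easy to sanity-check with a couple of inputs: plain dot paths map segment-by-segment, and literal `~` or `/` inside a segment become `~0` and `~1`. A standalone copy of the conversion for illustration:

```rust
// Convert a dot-path (e.g. "realm_access.roles.0") to a JSON Pointer
// ("/realm_access/roles/0"), escaping '~' -> "~0" and '/' -> "~1" per RFC 6901.
fn dot_path_to_json_pointer(dot_path: &str) -> String {
    let mut pointer = String::new();
    for segment in dot_path.split('.') {
        pointer.push('/');
        for ch in segment.chars() {
            match ch {
                '~' => pointer.push_str("~0"),
                '/' => pointer.push_str("~1"),
                _ => pointer.push(ch),
            }
        }
    }
    pointer
}

fn main() {
    assert_eq!(
        dot_path_to_json_pointer("realm_access.roles.0"),
        "/realm_access/roles/0"
    );
    // Escapes apply inside a segment, never across segment boundaries.
    assert_eq!(dot_path_to_json_pointer("a~b.c/d"), "/a~0b/c~1d");
    println!("ok");
}
```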
/// Timing-safe string comparison via constant-time equality check.
/// Prevents timing side-channel attacks on CSRF token verification.
fn timing_safe_eq(a: &str, b: &str) -> bool {
if a.len() != b.len() {
return false;
}
a.as_bytes().ct_eq(b.as_bytes()).into()
}
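The same check can be written without the `subtle` crate by XOR-folding every byte pair, so the loop always runs to full length regardless of where the first mismatch occurs (the real code above relies on `ConstantTimeEq`; this is an equivalent sketch for illustration):

```rust
// Constant-time string equality: accumulate XOR differences over all bytes
// instead of returning on the first mismatch. Length still short-circuits,
// as in the original.
fn timing_safe_eq(a: &str, b: &str) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.bytes().zip(b.bytes()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(timing_safe_eq("csrf-token", "csrf-token"));
    assert!(!timing_safe_eq("csrf-token", "csrf-tokem"));
    assert!(!timing_safe_eq("short", "longer-string"));
    println!("ok");
}
```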
#[derive(Debug, Clone, clap::Args)]
pub struct OidcOptions {
#[arg(long, help = t!("cli.oidc_issuer_url").to_string())]
pub oidc_issuer_url: Option<String>,
#[arg(long, help = t!("cli.oidc_client_id").to_string())]
pub oidc_client_id: Option<String>,
#[arg(long, env = "OIDC_CLIENT_SECRET", help = t!("cli.oidc_client_secret").to_string())]
pub oidc_client_secret: Option<String>,
#[arg(long, default_value = "preferred_username", help = t!("cli.oidc_username_claim").to_string())]
pub oidc_username_claim: String,
#[arg(
long,
value_delimiter = ',',
default_values = DEFAULT_OIDC_SCOPES,
help = t!("cli.oidc_scopes").to_string()
)]
pub oidc_scopes: Vec<String>,
#[arg(long, help = t!("cli.oidc_redirect_url").to_string())]
pub oidc_redirect_url: Option<String>,
#[arg(long, default_value = "false", help = t!("cli.oidc_disable_pkce").to_string())]
pub oidc_disable_pkce: bool,
#[arg(long, help = t!("cli.oidc_frontend_base_url").to_string())]
pub oidc_frontend_base_url: Option<String>,
}
impl OidcOptions {
pub fn any_param_provided(&self) -> bool {
self.oidc_issuer_url.is_some()
|| self.oidc_client_id.is_some()
|| self.oidc_client_secret.is_some()
|| self.oidc_redirect_url.is_some()
|| self.oidc_frontend_base_url.is_some()
|| self.oidc_username_claim != "preferred_username"
|| self.oidc_scopes != DEFAULT_OIDC_SCOPES
|| self.oidc_disable_pkce
}
}
#[derive(Clone)]
pub struct OidcConfig {
pub enabled: bool,
pub provider_metadata: Option<Arc<CoreProviderMetadata>>,
pub client_id: String,
pub client_secret: Option<String>,
pub redirect_url: Option<RedirectUrl>,
pub username_claim: String,
pub scopes: Vec<String>,
pub pkce_enabled: bool,
pub frontend_base_url: Option<String>,
pub http_client: Option<reqwest::Client>,
cached_client: Option<Arc<ConfiguredAppClient>>,
}
impl OidcConfig {
pub fn disabled() -> Self {
Self {
enabled: false,
provider_metadata: None,
client_id: String::new(),
client_secret: None,
redirect_url: None,
username_claim: "preferred_username".to_string(),
scopes: DEFAULT_OIDC_SCOPES
.iter()
.map(|scope| scope.to_string())
.collect(),
pkce_enabled: false,
frontend_base_url: None,
http_client: None,
cached_client: None,
}
}
pub async fn from_params(opts: OidcOptions) -> anyhow::Result<Self> {
let OidcOptions {
oidc_issuer_url,
oidc_client_id,
oidc_client_secret,
oidc_username_claim,
oidc_scopes,
oidc_redirect_url,
oidc_disable_pkce,
oidc_frontend_base_url,
} = opts;
if oidc_issuer_url.is_none() || oidc_client_id.is_none() || oidc_redirect_url.is_none() {
return Err(anyhow::anyhow!("--oidc-issuer-url, --oidc-client-id and --oidc-redirect-url are required when using OIDC authentication"));
}
if oidc_username_claim.trim().is_empty() {
return Err(anyhow::anyhow!("--oidc-username-claim cannot be empty"));
}
let http_client = reqwest::ClientBuilder::new()
.redirect(reqwest::redirect::Policy::none())
.timeout(Duration::from_secs(30))
.build()?;
let issuer_url = oidc_issuer_url.ok_or_else(|| {
anyhow::anyhow!("--oidc-issuer-url is required when using OIDC authentication")
})?;
let provider_metadata =
CoreProviderMetadata::discover_async(IssuerUrl::new(issuer_url)?, &http_client).await?;
let client_id = oidc_client_id.ok_or_else(|| {
anyhow::anyhow!("--oidc-client-id is required when using OIDC authentication")
})?;
let redirect_url = oidc_redirect_url
.ok_or_else(|| anyhow::anyhow!("--oidc-redirect-url is required when using OIDC authentication. The redirect URL must match exactly what is registered with your Identity Provider. Example: --oidc-redirect-url http://your-domain.com:11211/api/v1/auth/oidc/callback"))?;
let provider_metadata = Arc::new(provider_metadata);
let redirect_url = RedirectUrl::new(redirect_url)?;
let client_secret = oidc_client_secret;
let cached_client = {
let c = AppClient::from_provider_metadata(
provider_metadata.as_ref().clone(),
ClientId::new(client_id.clone()),
client_secret.as_ref().map(|s| ClientSecret::new(s.clone())),
)
.set_redirect_uri(redirect_url.clone());
Arc::new(c)
};
Ok(Self {
enabled: true,
provider_metadata: Some(provider_metadata),
client_id,
client_secret,
redirect_url: Some(redirect_url),
username_claim: oidc_username_claim,
scopes: normalize_oidc_scopes(&oidc_scopes),
pkce_enabled: !oidc_disable_pkce,
frontend_base_url: oidc_frontend_base_url,
http_client: Some(http_client),
cached_client: Some(cached_client),
})
}
pub fn client(&self) -> Option<&ConfiguredAppClient> {
self.cached_client.as_deref()
}
}
pub fn router() -> Router<AppStateInner> {
Router::new()
.route("/api/v1/auth/oidc/config", get(self::route::oidc_config))
.route("/api/v1/auth/oidc/login", get(self::route::oidc_login))
.route(
"/api/v1/auth/oidc/callback",
get(self::route::oidc_callback),
)
}
mod route {
use axum::extract::Query;
use axum::http::StatusCode;
use axum::response::{IntoResponse, Redirect, Response};
use axum::{Extension, Json};
use openidconnect::core::CoreAuthenticationFlow;
use openidconnect::{
AccessTokenHash, AuthorizationCode, CsrfToken, Nonce, OAuth2TokenResponse,
PkceCodeChallenge, PkceCodeVerifier, Scope, TokenResponse,
};
use serde::Deserialize;
use crate::restful::other_error;
use crate::restful::users::AuthSession;
use super::OidcConfig;
pub async fn oidc_config(Extension(oidc): Extension<OidcConfig>) -> Json<serde_json::Value> {
Json(serde_json::json!({ "enabled": oidc.enabled }))
}
pub async fn oidc_login(
Extension(oidc): Extension<OidcConfig>,
session: tower_sessions::Session,
) -> Response {
if !oidc.enabled {
return (
StatusCode::BAD_REQUEST,
Json(other_error("OIDC is not enabled")),
)
.into_response();
}
let client = match oidc.client() {
Some(c) => c,
None => {
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("OIDC client not initialized")),
)
.into_response();
}
};
let scopes = oidc.scopes.clone();
let pkce_enabled = oidc.pkce_enabled;
let (pkce_challenge, pkce_verifier) = if pkce_enabled {
let (challenge, verifier) = PkceCodeChallenge::new_random_sha256();
(Some(challenge), Some(verifier))
} else {
(None, None)
};
let mut auth_request = client.authorize_url(
CoreAuthenticationFlow::AuthorizationCode,
CsrfToken::new_random,
Nonce::new_random,
);
for scope in &scopes {
auth_request = auth_request.add_scope(Scope::new(scope.clone()));
}
if let Some(challenge) = pkce_challenge {
auth_request = auth_request.set_pkce_challenge(challenge);
}
let (auth_url, csrf_token, nonce) = auth_request.url();
if let Err(e) = session
.insert("oidc_csrf_token", csrf_token.secret().clone())
.await
{
tracing::error!("Failed to store csrf_token in session: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Session error")),
)
.into_response();
}
if let Err(e) = session.insert("oidc_nonce", nonce.secret().clone()).await {
tracing::error!("Failed to store nonce in session: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Session error")),
)
.into_response();
}
if let Some(verifier) = pkce_verifier {
if let Err(e) = session
.insert("oidc_pkce_verifier", verifier.secret().clone())
.await
{
tracing::error!("Failed to store pkce_verifier in session: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Session error")),
)
.into_response();
}
}
if let Err(e) = session.insert("oidc_pkce_used", pkce_enabled).await {
tracing::error!("Failed to store pkce_used in session: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Session error")),
)
.into_response();
}
Redirect::temporary(auth_url.as_str()).into_response()
}
#[derive(Deserialize)]
pub struct CallbackParams {
code: Option<String>,
state: Option<String>,
error: Option<String>,
error_description: Option<String>,
}
async fn cleanup_oidc_session(session: &tower_sessions::Session) {
let _ = session.remove::<String>("oidc_csrf_token").await;
let _ = session.remove::<String>("oidc_nonce").await;
let _ = session.remove::<String>("oidc_pkce_verifier").await;
let _ = session.remove::<bool>("oidc_pkce_used").await;
}
pub async fn oidc_callback(
Extension(oidc): Extension<OidcConfig>,
Query(params): Query<CallbackParams>,
session: tower_sessions::Session,
mut auth_session: AuthSession,
) -> Response {
if !oidc.enabled {
return (
StatusCode::BAD_REQUEST,
Json(other_error("OIDC is not enabled")),
)
.into_response();
}
if let Some(ref error) = params.error {
tracing::error!(
"OIDC provider returned error: {}, description: {:?}",
error,
params.error_description
);
return (
StatusCode::BAD_REQUEST,
Json(other_error(
"Authentication failed at the identity provider",
)),
)
.into_response();
}
let code = match params.code {
Some(ref c) => c.clone(),
None => {
return (
StatusCode::BAD_REQUEST,
Json(other_error("Missing authorization code")),
)
.into_response();
}
};
let callback_state = match params.state {
Some(ref s) => s.clone(),
None => {
return (
StatusCode::BAD_REQUEST,
Json(other_error("Missing state parameter in callback")),
)
.into_response();
}
};
let stored_csrf: String = match session.get("oidc_csrf_token").await {
Ok(Some(v)) => v,
_ => {
return (
StatusCode::BAD_REQUEST,
Json(other_error("Missing or invalid CSRF token in session")),
)
.into_response();
}
};
if !super::timing_safe_eq(&stored_csrf, &callback_state) {
return (
StatusCode::BAD_REQUEST,
Json(other_error("CSRF state mismatch")),
)
.into_response();
}
let stored_nonce: String = match session.get("oidc_nonce").await {
Ok(Some(v)) => v,
_ => {
return (
StatusCode::BAD_REQUEST,
Json(other_error("Missing nonce in session")),
)
.into_response();
}
};
let stored_pkce_verifier: Option<String> =
session.get("oidc_pkce_verifier").await.ok().flatten();
let pkce_was_used: Option<bool> = session.get("oidc_pkce_used").await.ok().flatten();
cleanup_oidc_session(&session).await;
let client = match oidc.client() {
Some(c) => c,
None => {
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("OIDC client not initialized")),
)
.into_response();
}
};
let http_client = match oidc.http_client.as_ref() {
Some(c) => c,
None => {
tracing::error!("HTTP client not initialized in OIDC config");
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("OIDC internal error")),
)
.into_response();
}
};
let mut token_request = match client.exchange_code(AuthorizationCode::new(code)) {
Ok(req) => req,
Err(e) => {
tracing::error!("Failed to create token request: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Failed to create token exchange request")),
)
.into_response();
}
};
if let Some(stored_pkce_verifier) = stored_pkce_verifier {
token_request =
token_request.set_pkce_verifier(PkceCodeVerifier::new(stored_pkce_verifier));
} else if pkce_was_used == Some(true) {
return (
StatusCode::BAD_REQUEST,
Json(other_error(
"PKCE was enabled but verifier is missing from session (session may have expired)",
)),
)
.into_response();
}
let token_response = match token_request.request_async(http_client).await {
Ok(resp) => resp,
Err(e) => {
tracing::error!("Failed to exchange code for token: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Token exchange failed")),
)
.into_response();
}
};
let id_token = match token_response.id_token() {
Some(t) => t,
None => {
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("No ID token in response")),
)
.into_response();
}
};
let claims = match id_token.claims(&client.id_token_verifier(), &Nonce::new(stored_nonce)) {
Ok(c) => c,
Err(e) => {
tracing::error!("Failed to verify ID token: {:?}", e);
return (
StatusCode::UNAUTHORIZED,
Json(other_error("ID token verification failed")),
)
.into_response();
}
};
if let Some(expected_at_hash) = claims.access_token_hash() {
let id_token_verifier = client.id_token_verifier();
let (Ok(signing_alg), Ok(signing_key)) = (
id_token.signing_alg(),
id_token.signing_key(&id_token_verifier),
) else {
tracing::error!("Failed to get signing algorithm or key for at_hash verification");
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Failed to determine token signing algorithm")),
)
.into_response();
};
let actual_at_hash = match AccessTokenHash::from_token(
token_response.access_token(),
signing_alg,
signing_key,
) {
Ok(hash) => hash,
Err(e) => {
tracing::error!("Failed to compute access token hash: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Failed to verify access token hash")),
)
.into_response();
}
};
if actual_at_hash != *expected_at_hash {
tracing::error!("Access token hash mismatch");
return (
StatusCode::UNAUTHORIZED,
Json(other_error("Access token hash mismatch")),
)
.into_response();
}
}
let claims_json = match serde_json::to_value(claims) {
Ok(v) => v,
Err(e) => {
tracing::error!("Failed to serialize claims: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Failed to process ID token claims")),
)
.into_response();
}
};
let pointer = super::dot_path_to_json_pointer(&oidc.username_claim);
let username: Option<String> = claims_json
.pointer(&pointer)
.and_then(|v| v.as_str())
.map(|s| s.to_string());
let username = match username {
Some(u) if !u.is_empty() => u,
_ => {
tracing::error!(
"Could not extract username from claim '{}' in token",
oidc.username_claim
);
return (
StatusCode::BAD_REQUEST,
Json(other_error("Could not extract username from token claims")),
)
.into_response();
}
};
let user = match auth_session
.backend
.find_or_create_oidc_user(&username)
.await
{
Ok(u) => u,
Err(e) => {
tracing::error!("Failed to find or create OIDC user '{}': {:?}", username, e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Failed to provision user account")),
)
.into_response();
}
};
if let Err(e) = auth_session.login(&user).await {
tracing::error!("Failed to login user via OIDC: {:?}", e);
return (
StatusCode::INTERNAL_SERVER_ERROR,
Json(other_error("Failed to establish session")),
)
.into_response();
}
if let Err(e) = session.cycle_id().await {
tracing::error!("Failed to cycle session ID after OIDC login: {:?}", e);
}
if let Some(frontend_url) = &oidc.frontend_base_url {
Redirect::temporary(frontend_url).into_response()
} else {
Redirect::temporary("/").into_response()
}
}
}
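The `super::timing_safe_eq` call in the callback handler compares the stored CSRF token against the callback `state` without short-circuiting. Its definition sits outside this hunk; a minimal sketch of such a constant-time comparison (names assumed, not the project's actual implementation):

```rust
// Sketch only: XOR-accumulating every byte pair keeps the comparison time
// independent of where the first mismatch occurs, unlike a short-circuiting
// `==` on strings.
fn timing_safe_eq(a: &str, b: &str) -> bool {
    let (a, b) = (a.as_bytes(), b.as_bytes());
    if a.len() != b.len() {
        return false; // the length may leak, but the contents do not
    }
    a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
}

fn main() {
    assert!(timing_safe_eq("state-abc123", "state-abc123"));
    assert!(!timing_safe_eq("state-abc123", "state-abc124"));
    assert!(!timing_safe_eq("short", "a-longer-state"));
    println!("ok");
}
```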
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_dot_path_to_json_pointer() {
use serde_json::json;
let cases = vec![
(
"realm_access.roles.0",
"/realm_access/roles/0",
json!({ "realm_access": { "roles": ["admin", "user"] } }),
"admin",
),
(
"preferred_username",
"/preferred_username",
json!({ "preferred_username": "bob" }),
"bob",
),
("a~b.c", "/a~0b/c", json!({ "a~b": { "c": "v" } }), "v"),
("a/b.c", "/a~1b/c", json!({ "a/b": { "c": "w" } }), "w"),
("~/.x", "/~0~1/x", json!({ "~/": { "x": "z" } }), "z"),
("a..b", "/a//b", json!({ "a": { "": { "b": "x" } } }), "x"),
("", "/", json!({ "": "root" }), "root"),
];
for (path, expected_ptr, json_val, expected_val) in cases {
let ptr = dot_path_to_json_pointer(path);
assert_eq!(ptr, expected_ptr, "Pointer mismatch for path: {}", path);
assert_eq!(
json_val.pointer(&ptr).and_then(|v| v.as_str()),
Some(expected_val),
"Value extraction failed for path: {}, pointer: {}",
path,
ptr
);
}
}
}
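The `dot_path_to_json_pointer` helper exercised by the tests above is defined earlier in the module, outside this hunk. A sketch consistent with those cases: each dot-separated segment is escaped per RFC 6901 (`~` becomes `~0`, `/` becomes `~1`) and the segments are joined with `/`.

```rust
// Sketch of the dot-path -> JSON Pointer conversion (RFC 6901 escaping).
// Empty segments are preserved, so "a..b" maps to "/a//b" and "" to "/".
fn dot_path_to_json_pointer(path: &str) -> String {
    let escaped: Vec<String> = path
        .split('.')
        .map(|seg| seg.replace('~', "~0").replace('/', "~1"))
        .collect();
    format!("/{}", escaped.join("/"))
}

fn main() {
    assert_eq!(dot_path_to_json_pointer("realm_access.roles.0"), "/realm_access/roles/0");
    assert_eq!(dot_path_to_json_pointer("a~b.c"), "/a~0b/c");
    assert_eq!(dot_path_to_json_pointer("a/b.c"), "/a~1b/c");
    assert_eq!(dot_path_to_json_pointer("~/.x"), "/~0~1/x");
    assert_eq!(dot_path_to_json_pointer("a..b"), "/a//b");
    assert_eq!(dot_path_to_json_pointer(""), "/");
    println!("ok");
}
```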
@@ -0,0 +1,184 @@
use axum::{
extract::{Path, State},
http::StatusCode,
routing::post,
Json, Router,
};
use axum_login::AuthUser as _;
use easytier::proto::rpc_types::controller::BaseController;
use crate::db::UserIdInDb;
use super::{other_error, AppState, HttpHandleError};
#[derive(Debug, serde::Deserialize)]
pub struct ProxyRpcRequest {
pub service_name: String,
pub method_name: String,
pub payload: serde_json::Value,
}
macro_rules! match_service {
($factory:ty, $method_name:expr, $payload:expr, $session:expr) => {{
let client = $session.scoped_client::<$factory>();
client
.json_call_method(BaseController::default(), &$method_name, $payload)
.await
}};
}
async fn handle_proxy_rpc_by_session(
session: &crate::client_manager::session::Session,
req: ProxyRpcRequest,
) -> Result<Json<serde_json::Value>, HttpHandleError> {
let ProxyRpcRequest {
service_name,
method_name,
payload,
} = req;
let resp = match service_name.as_str() {
"api.manage.WebClientService" => match_service!(
easytier::proto::api::manage::WebClientServiceClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.PeerManageRpcService" => match_service!(
easytier::proto::api::instance::PeerManageRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.PeerCenterManageRpcService" => match_service!(
easytier::proto::peer_rpc::PeerCenterRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.ConnectorManageRpcService" => match_service!(
easytier::proto::api::instance::ConnectorManageRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.MappedListenerManageRpcService" => match_service!(
easytier::proto::api::instance::MappedListenerManageRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.VpnPortalRpcService" => match_service!(
easytier::proto::api::instance::VpnPortalRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.TcpProxyRpcService" => match_service!(
easytier::proto::api::instance::TcpProxyRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.AclManageRpcService" => match_service!(
easytier::proto::api::instance::AclManageRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.PortForwardManageRpcService" => match_service!(
easytier::proto::api::instance::PortForwardManageRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.StatsRpcService" => match_service!(
easytier::proto::api::instance::StatsRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.instance.CredentialManageRpcService" => match_service!(
easytier::proto::api::instance::CredentialManageRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.logger.LoggerRpcService" => match_service!(
easytier::proto::api::logger::LoggerRpcClientFactory<BaseController>,
method_name,
payload,
session
),
"api.config.ConfigRpcService" => match_service!(
easytier::proto::api::config::ConfigRpcClientFactory<BaseController>,
method_name,
payload,
session
),
_ => {
return Err((
StatusCode::BAD_REQUEST,
other_error(format!("Unknown service: {}", service_name)).into(),
))
}
};
match resp {
Ok(v) => Ok(Json(v)),
Err(e) => Err((
StatusCode::INTERNAL_SERVER_ERROR,
other_error(format!("RPC Error: {:?}", e)).into(),
)),
}
}
pub async fn handle_proxy_rpc(
auth_session: super::users::AuthSession,
State(client_mgr): AppState,
Path(machine_id): Path<uuid::Uuid>,
Json(req): Json<ProxyRpcRequest>,
) -> Result<Json<serde_json::Value>, HttpHandleError> {
let user_id = auth_session
.user
.as_ref()
.ok_or((StatusCode::UNAUTHORIZED, other_error("Unauthorized").into()))?
.id();
let session = client_mgr
.get_session_by_machine_id(user_id, &machine_id)
.ok_or((
StatusCode::NOT_FOUND,
other_error("Session not found").into(),
))?;
handle_proxy_rpc_by_session(session.as_ref(), req).await
}
pub fn router() -> Router<super::AppStateInner> {
Router::new().route(
"/api/v1/machines/:machine-id/proxy-rpc",
post(handle_proxy_rpc),
)
}
/// Internal proxy-rpc handler: no AuthSession, resolves the active session by machine_id.
pub async fn handle_proxy_rpc_internal(
State(client_mgr): AppState,
Path((user_id, machine_id)): Path<(UserIdInDb, uuid::Uuid)>,
Json(req): Json<ProxyRpcRequest>,
) -> Result<Json<serde_json::Value>, HttpHandleError> {
let session = client_mgr
.get_session_by_machine_id(user_id, &machine_id)
.ok_or((
StatusCode::NOT_FOUND,
other_error("Session not found").into(),
))?;
handle_proxy_rpc_by_session(session.as_ref(), req).await
}
pub fn router_internal() -> Router<super::AppStateInner> {
Router::new().route(
"/api/internal/users/:user-id/machines/:machine-id/proxy-rpc",
post(handle_proxy_rpc_internal),
)
}
@@ -4,17 +4,19 @@ use async_trait::async_trait;
use axum_login::{AuthUser, AuthnBackend, AuthzBackend, UserId};
use password_auth::verify_password;
use sea_orm::{
ActiveModelTrait as _, ColumnTrait, EntityTrait, FromQueryResult, IntoActiveModel, JoinType,
QueryFilter, QuerySelect as _, RelationTrait, Set, TransactionTrait,
ColumnTrait, EntityTrait, FromQueryResult, IntoActiveModel, JoinType, QueryFilter,
QuerySelect as _, RelationTrait, Set,
};
use serde::{Deserialize, Serialize};
use tokio::task;
use crate::db::{self, entity};
const EMPTY_PASSWORD_MD5: &str = "d41d8cd98f00b204e9800998ecf8427e";
#[derive(Clone, Serialize, Deserialize)]
pub struct User {
db_user: entity::users::Model,
pub(crate) db_user: entity::users::Model,
pub tokens: Vec<String>,
}
@@ -64,6 +66,18 @@ pub struct ChangePassword {
pub new_password: String,
}
#[derive(Debug, thiserror::Error)]
pub enum ChangePasswordError {
#[error("Password cannot be empty")]
EmptyPassword,
#[error("User not found")]
UserNotFound,
#[error(transparent)]
Db(#[from] sea_orm::DbErr),
}
#[derive(Debug, Clone)]
pub struct Backend {
db: db::Db,
@@ -74,45 +88,59 @@ impl Backend {
Self { db }
}
pub fn db(&self) -> &db::Db {
&self.db
}
pub async fn register_new_user(&self, new_user: &RegisterNewUser) -> anyhow::Result<()> {
let hashed_password = password_auth::generate_hash(new_user.credentials.password.as_str());
let txn = self.db.orm_db().begin().await?;
entity::users::ActiveModel {
username: Set(new_user.credentials.username.clone()),
password: Set(hashed_password.clone()),
..Default::default()
}
.save(&txn)
.await?;
entity::users_groups::ActiveModel {
user_id: Set(entity::users::Entity::find()
.filter(entity::users::Column::Username.eq(new_user.credentials.username.as_str()))
.one(&txn)
.await?
.unwrap()
.id),
group_id: Set(entity::groups::Entity::find()
.filter(entity::groups::Column::Name.eq("users"))
.one(&txn)
.await?
.unwrap()
.id),
..Default::default()
}
.save(&txn)
.await?;
txn.commit().await?;
self.db
.create_user_and_join_users_group(&new_user.credentials.username, hashed_password)
.await?;
Ok(())
}
/// Find a user by username, or auto-create one for OIDC-authenticated users.
///
/// Unlike the heartbeat auto-creation path (controlled by `allow_auto_create_user`),
/// OIDC users are always provisioned automatically because their identity has already
/// been verified by a trusted external Identity Provider (IdP).
pub async fn find_or_create_oidc_user(&self, username: &str) -> anyhow::Result<User> {
use entity::users;
// Try to find an existing user first.
if let Some(db_user) = users::Entity::find()
.filter(users::Column::Username.eq(username))
.one(self.db.orm_db())
.await?
{
return Ok(User {
tokens: vec![db_user.username.clone()],
db_user,
});
}
// User not found; auto-provision a local account backed by the IdP identity.
let db_user = self.db.auto_create_user(username).await?;
tracing::info!("Auto-provisioned OIDC user '{username}'");
Ok(User {
tokens: vec![db_user.username.clone()],
db_user,
})
}
pub async fn change_password(
&self,
id: <User as AuthUser>::Id,
req: &ChangePassword,
) -> anyhow::Result<()> {
) -> Result<(), ChangePasswordError> {
// With the existing pre-hashed protocol the backend can only reject the
// exact empty-string digest; whitespace-only passwords must be blocked
// on the client before hashing.
if req.new_password == EMPTY_PASSWORD_MD5 {
return Err(ChangePasswordError::EmptyPassword);
}
let hashed_password = password_auth::generate_hash(req.new_password.as_str());
use entity::users;
@@ -120,9 +148,10 @@ impl Backend {
let mut user = users::Entity::find_by_id(id)
.one(self.db.orm_db())
.await?
.ok_or(anyhow::anyhow!("User not found"))?
.ok_or(ChangePasswordError::UserNotFound)?
.into_active_model();
user.password = Set(hashed_password.clone());
user.must_change_password = Set(false);
entity::users::Entity::update(user)
.exec(self.db.orm_db())
@@ -235,6 +264,107 @@ impl AuthzBackend for Backend {
}
}
#[cfg(test)]
mod tests {
use axum_login::AuthnBackend;
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter as _};
use super::{Backend, ChangePassword, ChangePasswordError, EMPTY_PASSWORD_MD5};
use crate::db::{entity::users, Db};
async fn find_user(db: &Db, username: &str) -> users::Model {
users::Entity::find()
.filter(users::Column::Username.eq(username))
.one(db.orm_db())
.await
.unwrap()
.unwrap()
}
#[tokio::test]
async fn seeded_default_users_require_password_change() {
let db = Db::memory_db().await;
assert!(find_user(&db, "admin").await.must_change_password);
assert!(find_user(&db, "user").await.must_change_password);
}
#[tokio::test]
async fn auto_created_user_does_not_require_password_change() {
let db = Db::memory_db().await;
db.auto_create_user("oidc-user").await.unwrap();
assert!(!find_user(&db, "oidc-user").await.must_change_password);
}
#[tokio::test]
async fn change_password_clears_must_change_password_flag() {
let db = Db::memory_db().await;
let backend = Backend::new(db.clone());
let admin = find_user(&db, "admin").await;
backend
.change_password(
admin.id,
&ChangePassword {
new_password: "f1086f68460b65771de50a970cd1242d".to_string(),
},
)
.await
.unwrap();
assert!(!find_user(&db, "admin").await.must_change_password);
}
#[tokio::test]
async fn change_password_rejects_empty_password_digest() {
let db = Db::memory_db().await;
let backend = Backend::new(db.clone());
let admin = find_user(&db, "admin").await;
let error = backend
.change_password(
admin.id,
&ChangePassword {
new_password: EMPTY_PASSWORD_MD5.to_string(),
},
)
.await
.unwrap_err();
assert!(matches!(error, ChangePasswordError::EmptyPassword));
assert!(find_user(&db, "admin").await.must_change_password);
}
#[tokio::test]
async fn can_authenticate_with_new_password_after_change() {
let db = Db::memory_db().await;
let backend = Backend::new(db.clone());
let admin = find_user(&db, "admin").await;
backend
.change_password(
admin.id,
&ChangePassword {
new_password: "f1086f68460b65771de50a970cd1242d".to_string(),
},
)
.await
.unwrap();
let authenticated = backend
.authenticate(super::Credentials {
username: "admin".to_string(),
password: "f1086f68460b65771de50a970cd1242d".to_string(),
})
.await
.unwrap();
assert!(authenticated.is_some());
}
}
// We use a type alias for convenience.
//
// Note that we've supplied our concrete backend here.
@@ -0,0 +1,185 @@
use std::sync::Arc;
use serde::{Deserialize, Serialize};
/// Webhook configuration for external integrations.
#[derive(Debug, Clone)]
pub struct WebhookConfig {
pub webhook_url: Option<String>,
pub webhook_secret: Option<String>,
pub internal_auth_token: Option<String>,
pub web_instance_id: Option<String>,
pub web_instance_api_base_url: Option<String>,
client: reqwest::Client,
}
impl WebhookConfig {
pub fn new(
webhook_url: Option<String>,
webhook_secret: Option<String>,
internal_auth_token: Option<String>,
web_instance_id: Option<String>,
web_instance_api_base_url: Option<String>,
) -> Self {
WebhookConfig {
webhook_url,
webhook_secret,
internal_auth_token,
web_instance_id,
web_instance_api_base_url,
client: reqwest::Client::new(),
}
}
pub fn is_enabled(&self) -> bool {
self.webhook_url
.as_deref()
.is_some_and(|url| !url.trim().is_empty())
}
pub fn has_internal_auth(&self) -> bool {
self.internal_auth_token.is_some()
}
}
// --- Request/Response types ---
#[derive(Debug, Serialize)]
pub struct ValidateTokenRequest {
pub token: String,
pub machine_id: String,
pub hostname: String,
pub version: String,
pub os_type: Option<String>,
pub os_version: Option<String>,
pub os_distribution: Option<String>,
pub web_instance_id: Option<String>,
pub web_instance_api_base_url: Option<String>,
}
#[derive(Debug, Deserialize)]
pub struct ValidateTokenResponse {
pub valid: bool,
#[serde(default)]
pub pre_approved: bool,
#[serde(default)]
pub binding_version: u64,
pub managed_network_configs: Vec<ManagedNetworkConfig>,
pub config_revision: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct ManagedNetworkConfig {
pub instance_id: String,
pub network_config: serde_json::Value,
}
#[derive(Debug, Serialize)]
pub struct NodeConnectedRequest {
pub machine_id: String,
pub token: String,
pub user_id: Option<i32>,
pub hostname: String,
pub version: String,
pub os_type: Option<String>,
pub os_version: Option<String>,
pub os_distribution: Option<String>,
pub web_instance_id: Option<String>,
pub binding_version: Option<u64>,
}
#[derive(Debug, Serialize)]
pub struct NodeDisconnectedRequest {
pub machine_id: String,
pub token: String,
pub user_id: Option<i32>,
pub web_instance_id: Option<String>,
pub binding_version: Option<u64>,
}
// --- Webhook client ---
impl WebhookConfig {
fn webhook_base_url(&self) -> anyhow::Result<&str> {
self.webhook_url
.as_deref()
.map(str::trim)
.filter(|url| !url.is_empty())
.ok_or_else(|| anyhow::anyhow!("webhook_url is not configured"))
}
fn webhook_endpoint(&self, path: &str) -> anyhow::Result<String> {
Ok(format!(
"{}/{}",
self.webhook_base_url()?.trim_end_matches('/'),
path.trim_start_matches('/'),
))
}
/// Validate a token through the configured webhook endpoint.
pub async fn validate_token(
&self,
req: &ValidateTokenRequest,
) -> anyhow::Result<ValidateTokenResponse> {
let url = self.webhook_endpoint("validate-token")?;
let resp = self
.client
.post(&url)
.header("X-Internal-Auth", self.webhook_auth_secret())
.json(req)
.send()
.await?;
if !resp.status().is_success() {
anyhow::bail!("webhook validate-token returned status {}", resp.status());
}
Ok(resp.json().await?)
}
/// Notify the webhook receiver that a node has connected.
pub async fn notify_node_connected(&self, req: &NodeConnectedRequest) {
if !self.is_enabled() {
return;
}
let Ok(url) = self.webhook_endpoint("webhook/node-connected") else {
tracing::warn!("skip node-connected webhook because webhook_url is not configured");
return;
};
let _ = self
.client
.post(&url)
.header("X-Internal-Auth", self.webhook_auth_secret())
.json(req)
.send()
.await;
}
/// Notify the webhook receiver that a node has disconnected.
pub async fn notify_node_disconnected(&self, req: &NodeDisconnectedRequest) {
if !self.is_enabled() {
return;
}
let Ok(url) = self.webhook_endpoint("webhook/node-disconnected") else {
tracing::warn!("skip node-disconnected webhook because webhook_url is not configured");
return;
};
let _ = self
.client
.post(&url)
.header("X-Internal-Auth", self.webhook_auth_secret())
.json(req)
.send()
.await;
}
fn webhook_auth_secret(&self) -> &str {
self.webhook_secret
.as_deref()
.or(self.internal_auth_token.as_deref())
.unwrap_or("")
}
}
pub type SharedWebhookConfig = Arc<WebhookConfig>;
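The `webhook_endpoint` join above is slash-insensitive on both sides: trailing slashes on the configured base URL and leading slashes on the path are trimmed so exactly one separator remains. A standalone sketch of the same normalization (the base URL is a hypothetical example):

```rust
// Sketch of the webhook URL join used by validate_token and the
// node-connected/disconnected notifications.
fn webhook_endpoint(base: &str, path: &str) -> String {
    format!("{}/{}", base.trim_end_matches('/'), path.trim_start_matches('/'))
}

fn main() {
    assert_eq!(
        webhook_endpoint("https://hooks.example.com/", "/validate-token"),
        "https://hooks.example.com/validate-token"
    );
    assert_eq!(
        webhook_endpoint("https://hooks.example.com", "webhook/node-connected"),
        "https://hooks.example.com/webhook/node-connected"
    );
    println!("ok");
}
```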
@@ -3,12 +3,12 @@ name = "easytier"
description = "A full meshed p2p VPN, connecting all your devices in one network with one command."
homepage = "https://github.com/EasyTier/EasyTier"
repository = "https://github.com/EasyTier/EasyTier"
version = "2.5.0"
version = "2.6.0"
edition = "2021"
authors = ["kkrainbow"]
keywords = ["vpn", "p2p", "network", "easytier"]
categories = ["network-programming", "command-line-utilities"]
rust-version = "1.89.0"
rust-version = "1.93.0"
license-file = "LICENSE"
readme = "README.md"
@@ -36,7 +36,12 @@ tracing-subscriber = { version = "0.3", features = [
"local-time",
"time",
] }
derivative = "2.2.0"
derive_more = { version = "2.1.1", features = ["full"] }
console-subscriber = { version = "0.4.1", optional = true }
indoc = "2.0.7"
regex = "1.8"
paste = "1.0"
thiserror = "1.0"
auto_impl = "1.1.0"
crossbeam = "0.8.4"
@@ -45,6 +50,12 @@ time = "0.3"
toml = "0.8.12"
chrono = { version = "0.4.37", features = ["serde"] }
cfg-if = "1.0"
itertools = "0.14.0"
strum = { version = "0.27.2", features = ["derive"] }
gethostname = "0.5.0"
futures = { version = "0.3", features = ["bilock", "unstable"] }
@@ -64,15 +75,18 @@ zerocopy = { version = "0.7.32", features = ["derive", "simd"] }
bytes = "1.5.0"
pin-project-lite = "0.2.13"
atomic_refcell = "0.1.13"
quinn = { version = "0.11.8", optional = true, features = ["ring"] }
quinn-plaintext = { version = "0.3.0", optional = true }
rustls = { version = "0.23.0", features = [
"ring",
"ring", "tls12"
], default-features = false, optional = true }
rcgen = { version = "0.12.1", optional = true }
# for websocket
tokio-websockets = { version = "0.8", optional = true, features = [
tokio-websockets = { version = "0.13.2", optional = true, features = [
"rustls-webpki-roots",
"client",
"server",
@@ -82,14 +96,21 @@ tokio-websockets = { version = "0.8", optional = true, features = [
http = { version = "1", default-features = false, features = [
"std",
], optional = true }
forwarded-header-value = { version = "0.1.1", optional = true }
tokio-rustls = { version = "0.26", default-features = false, optional = true }
# for tap device
tun = { package = "tun-easytier", git="https://github.com/EasyTier/rust-tun", features = [
tun = { package = "tun-easytier", git = "https://github.com/EasyTier/rust-tun", features = [
"async",
], optional = true }
# for net ns
nix = { version = "0.29.0", features = ["sched", "socket", "ioctl", "net", "fs"] }
nix = { version = "0.29.0", features = [
"sched",
"socket",
"ioctl",
"net",
"fs",
] }
uuid = { version = "1.5.0", features = [
"v4",
@@ -102,8 +123,9 @@ uuid = { version = "1.5.0", features = [
once_cell = "1.18.0"
# for rpc
prost = "0.13"
prost-types = "0.13"
prost = "0.13.5"
prost-wkt = "0.6"
prost-wkt-types = "0.6"
anyhow = "1.0"
url = { version = "2.5", features = ["serde"] }
@@ -135,6 +157,7 @@ clap = { version = "4.5.30", features = [
"env",
] }
clap_complete = { version = "4.5.55" }
clap_complete_nushell = { version = "4.5.10" }
async-recursion = "1.0.5"
@@ -153,10 +176,14 @@ ring = { version = "0.17", optional = true }
bitflags = "2.5"
aes-gcm = { version = "0.10.3", optional = true }
openssl = { version = "0.10", optional = true, features = ["vendored"] }
snow = "0.10.0"
x25519-dalek = { version = "2.0", features = ["static_secrets"] }
# for cli
tabled = "0.16"
humansize = "2.1.3"
terminal_size = "0.4"
unicode-width = "0.1"
base64 = "0.22"
@@ -180,7 +207,7 @@ smoltcp = { git = "https://github.com/smoltcp-rs/smoltcp.git", rev = "0a926767a6
# "socket-tcp-cubic",
"async",
] }
parking_lot = { version = "0.12.0", optional = true }
parking_lot = { version = "0.12.0" }
wildmatch = "2.3.4"
@@ -192,9 +219,9 @@ async-ringbuf = "0.3.1"
service-manager = { git = "https://github.com/EasyTier/service-manager-rs.git", branch = "main" }
zstd = { version = "0.13" }
zstd = { version = "0.13", optional = true }
kcp-sys = { git = "https://github.com/EasyTier/kcp-sys", rev = "71eff18c573a4a71bf99c7fabc6a8b9f211c84c1" }
kcp-sys = { git = "https://github.com/EasyTier/kcp-sys", rev = "94964794caaed5d388463137da59b97499619e5f", optional = true }
prost-reflect = { version = "0.14.5", default-features = false, features = [
"derive",
@@ -210,8 +237,11 @@ hickory-resolver = "0.25.2"
hickory-proto = "0.25.2"
# for magic dns
hickory-client = "0.25.2"
hickory-server = { version = "0.25.2", features = ["resolver"] }
hickory-client = { version = "0.25.2", optional = true }
hickory-server = { version = "0.25.2", features = [
"resolver",
], optional = true }
derive_builder = "0.20.2"
humantime-serde = "1.1.1"
multimap = "0.10.1"
@@ -221,8 +251,7 @@ sha2 = "0.10.8"
shellexpand = "3.1.1"
# for fake tcp
flume = "0.12"
cfg-if = "1.0"
flume = { version = "0.12", optional = true }
[target.'cfg(any(target_os = "linux", target_os = "macos", target_os = "windows", target_os = "freebsd"))'.dependencies]
machine-uid = "0.5.3"
@@ -238,7 +267,9 @@ dbus = { version = "0.9.7", features = ["vendored"] }
which = "7.0.3"
[target.'cfg(all(windows, any(target_arch = "x86_64", target_arch = "x86")))'.dependencies]
windivert = { git = "https://github.com/EasyTier/windivert-rust.git", rev = "adcc56d1550f7b5377ec2b3429f413ee24a77375", features = ["static"] }
windivert = { git = "https://github.com/EasyTier/windivert-rust.git", rev = "adcc56d1550f7b5377ec2b3429f413ee24a77375", features = [
"static",
] }
[target.'cfg(windows)'.dependencies]
windows = { version = "0.52.0", features = [
@@ -256,18 +287,20 @@ winreg = "0.52"
windows-service = "0.7.0"
windows-sys = { version = "0.52", features = [
"Win32_NetworkManagement_IpHelper",
"Win32_NetworkManagement_Ndis",
"Win32_NetworkManagement_Ndis",
"Win32_Networking_WinSock",
"Win32_Foundation"
]}
"Win32_Foundation",
"Win32_System_Diagnostics",
"Win32_System_Diagnostics_Debug",
] }
winapi = { version = "0.3.9", features = ["impl-default"] }
[target.'cfg(not(windows))'.dependencies]
jemallocator = { package = "tikv-jemallocator", version = "0.6.0", optional = true, features = [
"unprefixed_malloc_on_supported_platforms"
"unprefixed_malloc_on_supported_platforms",
] }
jemalloc-ctl = { package = "tikv-jemalloc-ctl", version = "0.6.0", optional = true, features = [
"use_std"
"use_std",
] }
[target.'cfg(not(target_os = "macos"))'.dependencies]
@@ -281,11 +314,13 @@ jemalloc-sys = { package = "tikv-jemalloc-sys", version = "0.6.0", features = [
], optional = true }
[build-dependencies]
cfg_aliases = "0.2.1"
tonic-build = "0.12"
globwalk = "0.8.1"
regex = "1"
prost-build = "0.13.2"
rpc_build = { package = "easytier-rpc-build", version = "0.1.0", features = [
prost-build = "0.13.5"
prost-wkt-build = "0.6"
easytier-rpc-build = { path = "../easytier-rpc-build", features = [
"internal-namespace",
] }
prost-reflect-build = { version = "0.14.0" }
@@ -296,10 +331,14 @@ zip = "4.0.0"
# enable thunk-rs when compiling for x86_64 or i686 windows
[target.x86_64-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = [
"win7",
] }
[target.i686-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = [
"win7",
] }
[dev-dependencies]
@@ -315,7 +354,18 @@ tokio-socks = "0.5.2"
[features]
default = ["wireguard", "websocket", "smoltcp", "tun", "socks5", "quic"]
default = [
"wireguard",
"websocket",
"smoltcp",
"tun",
"socks5",
"kcp",
"quic",
"faketcp",
"magic-dns",
"zstd",
]
full = [
"websocket",
"wireguard",
@@ -324,9 +374,15 @@ full = [
"smoltcp",
"tun",
"socks5",
"kcp",
"quic",
"faketcp",
"magic-dns",
"zstd",
]
wireguard = ["dep:boringtun", "dep:ring"]
quic = ["dep:quinn", "dep:rustls", "dep:rcgen"]
quic = ["dep:quinn", "dep:quinn-plaintext", "dep:rustls", "dep:rcgen"]
kcp = ["dep:kcp-sys"]
mimalloc = ["dep:mimalloc"]
aes-gcm = ["dep:aes-gcm"]
openssl-crypto = ["dep:openssl"]
@@ -334,12 +390,24 @@ tun = ["dep:tun"]
websocket = [
"dep:tokio-websockets",
"dep:http",
"dep:forwarded-header-value",
"dep:tokio-rustls",
"dep:rustls",
"dep:rcgen",
]
smoltcp = ["dep:smoltcp", "dep:parking_lot"]
socks5 = ["dep:smoltcp"]
smoltcp = ["dep:smoltcp"]
socks5 = ["smoltcp"]
jemalloc = ["dep:jemallocator", "dep:jemalloc-sys"]
jemalloc-prof = ["jemalloc", "dep:jemalloc-ctl", "jemalloc-ctl/stats", "jemalloc-sys/profiling", "jemalloc-sys/stats"]
jemalloc-prof = [
"jemalloc",
"dep:jemalloc-ctl",
"jemalloc-ctl/stats",
"jemalloc-sys/profiling",
"jemalloc-sys/stats",
]
tracing = ["tokio/tracing", "dep:console-subscriber"]
magic-dns = ["dep:hickory-client", "dep:hickory-server"]
faketcp = ["dep:flume"]
zstd = ["dep:zstd"]
# For Network Extension on macOS
macos-ne = []
@@ -1,5 +1,8 @@
use cfg_aliases::cfg_aliases;
use prost_wkt_build::{FileDescriptorSet, Message as _};
#[cfg(target_os = "windows")]
use std::{env, io::Cursor, path::PathBuf};
use std::io::Cursor;
use std::{env, path::PathBuf};
#[cfg(target_os = "windows")]
struct WindowsBuild {}
@@ -127,6 +130,17 @@ fn check_locale() {
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
cfg_aliases! {
mobile: {
any(
target_os = "android",
target_os = "ios",
all(target_os = "macos", feature = "macos-ne"),
target_env = "ohos"
)
}
}
// enable thunk-rs when target os is windows and arch is x86_64 or i686
#[cfg(target_os = "windows")]
if !std::env::var("TARGET")
@@ -157,33 +171,28 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("cargo:rerun-if-changed={proto_file}");
}
let out = PathBuf::from(env::var("OUT_DIR").unwrap());
let descriptor_file = out.join("descriptors.bin");
let mut config = prost_build::Config::new();
config
.type_attribute(".", "#[derive(serde::Serialize,serde::Deserialize)]")
.extern_path(".google.protobuf.Any", "::prost_wkt_types::Any")
.extern_path(".google.protobuf.Timestamp", "::prost_wkt_types::Timestamp")
.extern_path(".google.protobuf.Value", "::prost_wkt_types::Value")
.file_descriptor_set_path(&descriptor_file)
.protoc_arg("--experimental_allow_proto3_optional")
.type_attribute(".acl", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".common", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".error", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".api", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".web", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".config", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(
"peer_rpc.GetIpListResponse",
"#[derive(serde::Serialize, serde::Deserialize)]",
)
.type_attribute("peer_rpc.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("peer_rpc.PeerInfoForGlobalMap", "#[derive(Hash)]")
.type_attribute("peer_rpc.ForeignNetworkRouteInfoKey", "#[derive(Hash, Eq)]")
.type_attribute(
"peer_rpc.RouteForeignNetworkSummary.Info",
"#[derive(Hash, Eq, serde::Serialize, serde::Deserialize)]",
)
.type_attribute(
"peer_rpc.RouteForeignNetworkSummary",
"#[derive(Hash, Eq, serde::Serialize, serde::Deserialize)]",
"#[derive(Hash, Eq)]",
)
.type_attribute("peer_rpc.RouteForeignNetworkSummary", "#[derive(Hash, Eq)]")
.type_attribute("common.RpcDescriptor", "#[derive(Hash, Eq)]")
.field_attribute(".api.manage.NetworkConfig", "#[serde(default)]")
.service_generator(Box::new(rpc_build::ServiceGenerator::new()))
.service_generator(Box::new(easytier_rpc_build::ServiceGenerator::default()))
.btree_map(["."])
.skip_debug([".common.Ipv4Addr", ".common.Ipv6Addr", ".common.UUID"]);
@@ -193,6 +202,10 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
.file_descriptor_set_bytes("crate::proto::DESCRIPTOR_POOL_BYTES")
.compile_protos_with_config(config, &proto_files_reflect, &["src/proto/"])?;
let descriptor_bytes = std::fs::read(descriptor_file).unwrap();
let descriptor = FileDescriptorSet::decode(&descriptor_bytes[..]).unwrap();
prost_wkt_build::add_serde(out, descriptor);
check_locale();
Ok(())
}
@@ -0,0 +1,724 @@
# Temporary Credential System Implementation Plan
## Context
EasyTier's secure mode already implements the Noise XX handshake with X25519 static-pubkey authentication, and nodes currently confirm each other's identity bidirectionally via `network_secret`. Users need a "temporary credential" mechanism:
- A **management node** (any node holding the network_secret) can generate credentials for the current network
- A **new node** can join the network with a credential instead of the `network_secret`
- A **management node** can revoke a credential
- **After revocation**, every node that joined with that credential is kicked from the whole network
**Core design**: a credential is an X25519 key pair. The existing Noise `Noise_XX_25519_ChaChaPoly_SHA256` handshake is reused as-is, so the handshake message format does not change. The trusted public-key list is propagated through OSPF route sync, and revocation naturally disconnects the node network-wide.
## Overall Architecture
```
Credential = X25519 key pair
- The management node generates the key pair and adds the public key to the trusted list
- The temporary node holds the private key and uses it as its Noise static key
- The trusted public-key list is synced network-wide via OSPF routing

Management node (holds network_secret):
1. generate_credential() → generates an X25519 key pair
2. The public key is recorded in trusted_credential_pubkeys → propagated with RoutePeerInfo via OSPF
3. revoke → removed from the trusted list → OSPF sync → the whole network learns of it

Temporary node (holds the credential private key):
1. Uses the credential private key as SecureModeConfig.local_private_key
2. The Noise handshake follows the existing flow unchanged (the XX pattern exchanges static pubkeys)
3. It holds no network_secret, so secret_proof verification fails, but having its pubkey in the trusted list suffices
4. RoutePeerInfo.noise_static_pubkey naturally carries the credential public key

Validation logic (run by every node during route sync):
1. Collect the trusted_credential_pubkeys published by management nodes across the network (take the union)
   **Security constraint: only trust lists published by nodes with secure_auth_level=NetworkSecretConfirmed.**
   trusted_credential_pubkeys published by temporary nodes (CredentialAuthenticated) must be ignored.
2. For each peer whose secure_auth_level < NetworkSecretConfirmed:
   - Check whether its noise_static_pubkey is in the trusted pubkey set
   - If not → remove it from the routing table → disconnect
```
## Detailed Design
### Step 1: Protobuf Definitions
**File: `easytier/src/proto/peer_rpc.proto`**
New message and field on `RoutePeerInfo` (building on the existing `noise_static_pubkey` field #18):
```protobuf
message TrustedCredentialPubkey {
  bytes pubkey = 1;                        // X25519 public key (32 bytes)
  repeated string groups = 2;              // ACL groups associated with this credential (declared by the management node, no proof needed)
  bool allow_relay = 3;                    // whether this temporary node may provide peer-relay capability
  int64 expiry_unix = 4;                   // required: expiry time (Unix timestamp); the credential is invalid afterwards
  repeated string allowed_proxy_cidrs = 5; // proxy_cidrs ranges this temporary node may announce
}
message RoutePeerInfo {
  // ... existing fields 1-18 ...
  // trusted credential pubkeys published by management nodes (with group associations)
  repeated TrustedCredentialPubkey trusted_credential_pubkeys = 19;
}
```
Temporary nodes need no new field: their `noise_static_pubkey` (field 18) already propagates via OSPF, so validators only need to check whether that pubkey appears in the trusted list.
New `SecureAuthLevel` enum value:
```protobuf
enum SecureAuthLevel {
  None = 0;
  EncryptedUnauthenticated = 1;
  SharedNodePubkeyVerified = 2;
  NetworkSecretConfirmed = 3;
  CredentialAuthenticated = 4; // new: credential pubkey verified
}
```
**File: `easytier/src/proto/api_instance.proto`**
New credential management RPC:
```protobuf
message GenerateCredentialRequest {
  repeated string groups = 1;              // optional: ACL groups associated with the credential
  bool allow_relay = 2;                    // optional: whether the temporary node may provide peer relay
  repeated string allowed_proxy_cidrs = 3; // optional: limit the proxy_cidrs it may announce
  int64 ttl_seconds = 4;                   // required: credential lifetime (seconds)
}
message GenerateCredentialResponse {
  string credential_id = 1;     // base64 of the public key
  string credential_secret = 2; // base64 of the private key
}
message RevokeCredentialRequest { string credential_id = 1; }
message RevokeCredentialResponse { bool success = 1; }
message ListCredentialsRequest {}
message CredentialInfo {
  string credential_id = 1; // pubkey base64
  google.protobuf.Timestamp created_at = 2;
}
message ListCredentialsResponse { repeated CredentialInfo credentials = 1; }
service CredentialManageRpc {
  rpc GenerateCredential(GenerateCredentialRequest) returns (GenerateCredentialResponse);
  rpc RevokeCredential(RevokeCredentialRequest) returns (RevokeCredentialResponse);
  rpc ListCredentials(ListCredentialsRequest) returns (ListCredentialsResponse);
}
```
### Step 2: Credential Manager Module
**New file: `easytier/src/peers/credential_manager.rs`**
```rust
use x25519_dalek::{StaticSecret, PublicKey};
pub struct CredentialManager {
    // trusted credentials managed by this node
    credentials: DashMap<String, CredentialEntry>, // credential_id (pubkey base64) -> entry
    storage_path: Option<PathBuf>, // optional: path to the credential JSON file
}
struct CredentialEntry {
    pubkey_bytes: [u8; 32],
    groups: Vec<String>,              // associated ACL groups (declared by the management node)
    allow_relay: bool,                // whether relay is allowed
    allowed_proxy_cidrs: Vec<String>, // proxy_cidrs ranges it may announce
    expiry: SystemTime,               // expiry time (required)
    created_at: SystemTime,
}
impl CredentialManager {
    /// Generate a new credential (with group associations).
    /// Returns (credential_id = pubkey base64, credential_secret = private key base64).
    pub fn generate_credential(
        &self,
        groups: Vec<String>,
        allow_relay: bool,
        allowed_proxy_cidrs: Vec<String>,
        expiry: SystemTime,
    ) -> (String, String) {
        let private = StaticSecret::random_from_rng(OsRng);
        let public = PublicKey::from(&private);
        let id = BASE64_STANDARD.encode(public.as_bytes());
        let secret = BASE64_STANDARD.encode(private.as_bytes());
        self.credentials.insert(id.clone(), CredentialEntry {
            pubkey_bytes: *public.as_bytes(),
            groups,
            allow_relay,
            allowed_proxy_cidrs,
            expiry, // supplied by the caller
            created_at: SystemTime::now(),
        });
        self.save_to_disk(); // persist
        (id, secret)
    }
    /// Revoke a credential.
    pub fn revoke_credential(&self, credential_id: &str) -> bool;
    /// Trusted credential list (for RoutePeerInfo.trusted_credential_pubkeys).
    pub fn get_trusted_pubkeys(&self) -> Vec<TrustedCredentialPubkey>;
    /// List all credentials.
    pub fn list_credentials(&self) -> Vec<CredentialInfo>;
}
```
### Step 3: Noise Handshake Adaptation (Minimal Changes)
**File: `easytier/src/peers/peer_conn.rs`**
The temporary node's handshake flow **needs no changes at all**, because:
- the temporary node is configured with `SecureModeConfig { enabled: true, local_private_key: <credential private key>, local_public_key: <credential public key> }`
- `get_keypair()` (line 434) naturally returns the credential key pair
- the Noise XX handshake exchanges static pubkeys as usual
- the only difference: `secret_proof_32` verification fails (the temporary node holds no network_secret)
`do_noise_handshake_as_server()` (line 934) must change:
- **Current behavior**: failed `secret_proof` verification → return an error and disconnect (line 1059)
- **New behavior**: on `secret_proof` failure, do not disconnect immediately; keep `secure_auth_level` at `EncryptedUnauthenticated`
- the later OSPF route-sync stage decides whether the peer is trusted (i.e. whether its pubkey is in the trusted list)
Change `do_noise_handshake_as_client()` (line 680) likewise:
- when a temporary node connects to a management node, failed `secret_proof` verification must not be an error
- the temporary node can rely on `pinned_remote_pubkey` or skip verification
New field on **NoiseHandshakeResult**:
```rust
// marks that this connection used a credential rather than the network_secret
is_credential_conn: bool,
```
### Step 4: Propagating Credential Info in RoutePeerInfo
**File: `easytier/src/peers/peer_ospf_route.rs`**
Modify `RoutePeerInfo::new_updated_self()` (line 164):
- Management node (holds network_secret): fetch the list from `CredentialManager.get_trusted_pubkeys()` and fill `trusted_credential_pubkeys`
- Temporary node: **leave `trusted_credential_pubkeys` empty**, and never re-publish lists received from management nodes
  - Implementation: `new_updated_self()` checks the node's identity; temporary nodes skip filling trusted_credential_pubkeys
- Temporary node: nothing else to do; `noise_static_pubkey` already carries the credential public key
### Step 5: Network-Wide Validation and Automatic Eviction (Core Logic)
**File: `easytier/src/peers/peer_ospf_route.rs`**
New field on `SyncedRouteInfo`:
```rust
// trusted credential pubkeys aggregated from management nodes across the network
trusted_credential_pubkeys: DashSet<Vec<u8>>, // pubkey bytes
```
New validation method (modeled on `verify_and_update_group_trusts`, line 743):
```rust
fn verify_credential_peers(&self, peer_infos: &[RoutePeerInfo]) {
    // 1. collect the trusted_credential_pubkeys from management nodes (take the union)
    // **security constraint: only trust nodes whose secret_digest matches this network
    //   (i.e. management nodes holding the network_secret)**
    // trusted_credential_pubkeys from temporary nodes are ignored outright,
    // so a malicious temporary node cannot authorize itself
    let mut all_trusted = HashSet::new();
    for info in peer_infos {
        if self.is_peer_secret_verified(info.peer_id) {
            // this peer passed bidirectional network_secret confirmation: a legitimate management node
            for tc in &info.trusted_credential_pubkeys {
                all_trusted.insert(tc.pubkey.clone());
            }
        }
        // else: the peer did not pass network_secret confirmation (incl. temporary nodes); ignore its trusted list
    }
    self.trusted_credential_pubkeys = all_trusted;
    // 2. check every peer's credential status
    for info in peer_infos {
        if !self.is_peer_secret_verified(info.peer_id)
            && !info.noise_static_pubkey.is_empty()
        {
            if !self.trusted_credential_pubkeys.contains(&info.noise_static_pubkey) {
                // the peer neither holds the network_secret nor has a trusted pubkey
                // → mark it untrusted; it will be removed from the route table
                self.mark_peer_untrusted(info.peer_id);
            }
        }
    }
}
```
Call this validation from `do_sync_route_info()` (line 2614).
When building the route table (`update_route_table_and_cached_local_conn_bitmap()`):
- untrusted peers are not added to the routing graph
- already-connected untrusted peers are disconnected via `PeerMap::close_peer()`
**Determining whether a peer holds the network_secret**: use the existing `secret_digest` field. If a management node's `RoutePeerInfo.secret_digest` matches this node's, both sides hold the same network_secret.
### Step 6: GlobalCtx / Config Integration
**File: `easytier/src/common/global_ctx.rs`**
New field on `GlobalCtx`:
```rust
credential_manager: Arc<CredentialManager>, // held by every node; management nodes use it to generate/revoke
```
**File: `easytier/src/common/global_ctx.rs` - `GlobalCtxEvent`**
New variant:
```rust
CredentialChanged, // triggers an immediate OSPF sync
```
**File: `easytier/src/common/config.rs`**
Temporary node configuration: use the credential private key directly as `SecureModeConfig.local_private_key`.
Convenience fields or CLI flags can be added to `TomlConfigLoader`:
- `--credential <private key base64>`: join the network as a temporary node using the credential private key
- `--credential-file <path>`: path to the management node's credential storage JSON file
### Step 7: RPC Service + CLI
**File: `easytier/src/peers/rpc_service.rs`**
Implement `CredentialManageRpc`, following the `PeerManagerRpcService` pattern.
**CLI** (`easytier-cli`):
```
easytier-cli credential generate
  output: credential_id=<pubkey base64> credential_secret=<private key base64>
easytier-cli credential revoke <credential_id>
easytier-cli credential list
```
**Starting a temporary node**:
```bash
# option 1: pass the credential private key directly
easytier-core --network-name test \
  --secure-mode \
  --credential <private key base64> \
  --peers tcp://<management node>:11010
# internally: the credential private key becomes SecureModeConfig.local_private_key
```
### Step 8: Connection-Time Validation (Post-Handshake Fast Rejection, Mandatory)
After `do_noise_handshake_as_server()` completes, a fast check **must** run:
- if the peer's `secret_proof` verification failed (it is not a management node), and its `noise_static_pubkey` is not in this node's known `trusted_credential_pubkeys`:
- disconnect immediately
This is a **mandatory security measure**, not an optional optimization. Because Step 3 relaxes the handling of secret_proof failure, without fast rejection any random node could establish and hold an encrypted connection to a management node, wasting resources.
```rust
// after the handshake completes
if !secret_proof_verified {
    let remote_pubkey = handshake_result.remote_static_pubkey;
    if !self.global_ctx.credential_manager.is_pubkey_trusted(&remote_pubkey) {
        return Err(Error::AuthError("unknown credential".to_string()));
    }
    // pubkey is in the trusted list → allow the connection, mark as CredentialAuthenticated
    handshake_result.secure_auth_level = SecureAuthLevel::CredentialAuthenticated;
}
```
## Key File List
| File | Change |
|------|----------|
| `easytier/src/proto/peer_rpc.proto` | add `trusted_credential_pubkeys` to `RoutePeerInfo`; add `CredentialAuthenticated` to `SecureAuthLevel` |
| `easytier/src/proto/api_instance.proto` | new `CredentialManageRpc` service and message definitions |
| `easytier/src/peers/credential_manager.rs` | **new file**: credential manager (key-pair generation/revocation/listing) |
| `easytier/src/peers/mod.rs` | export credential_manager |
| `easytier/src/peers/peer_ospf_route.rs` | fill trusted_pubkeys in `new_updated_self()`; new `verify_credential_peers()`; route-table filtering |
| `easytier/src/peers/peer_conn.rs` | make secret_proof failure non-fatal in `do_noise_handshake_as_server()`; mandatory fast rejection in the handshake phase |
| `easytier/src/peers/peer_manager.rs` | integrate CredentialManager; disconnect logic for untrusted peers |
| `easytier/src/common/global_ctx.rs` | hold CredentialManager; new CredentialChanged event |
| `easytier/src/common/config.rs` | handle the new `--credential` flag |
| `easytier/src/peers/rpc_service.rs` | implement CredentialManageRpc |
| `easytier/src/proto/common.rs` | SecureModeConfig, optional: credential-mode detection |
## Reused Existing Mechanisms
| Existing mechanism | Location | Reuse |
|----------|------|----------|
| Noise XX handshake | `peer_conn.rs:680,934` | temporary nodes run the full Noise flow with the credential key pair |
| `SecureModeConfig` | `proto/common.rs:367` | the credential private key goes directly into local_private_key |
| `noise_static_pubkey` | `RoutePeerInfo` field 18 | the credential public key already propagates via OSPF |
| `verify_and_update_group_trusts()` | `peer_ospf_route.rs:743` | credential validation follows this pattern |
| `PeerMap::close_peer()` | `peer_map.rs:317` | disconnect untrusted peers |
| OSPF route sync | `SyncRouteInfoRequest` | the trusted pubkey list rides along with RoutePeerInfo |
| `PeerManagerRpcService` | `rpc_service.rs:24` | RPC service implementation pattern |
| `GlobalCtxEvent` | `global_ctx.rs:32` | new event triggers a sync |
## Verification Plan
1. **Unit tests**:
   - `credential_manager.rs`: key-pair generation, revocation, listing
   - `peer_conn.rs`: a credential node's Noise handshake succeeds (without network_secret)
2. **Integration tests** (see `tests/three_node.rs`):
   - 3 nodes: A + B (management nodes, network_secret) + C (temporary node, credential)
   - A generates a credential (groups=["guest"]) → C connects with it → verify C joins the route table and is reachable
   - verify C's ACL group is "guest" and group ACL rules take effect once configured
   - A revokes the credential → wait for OSPF sync (~1-3s) → verify A and B disconnect C
   - C tries to reconnect → verify it is rejected during the handshake
3. **Manual testing**:
```bash
# A: management node
easytier-core -n test -s secret --secure-mode --listeners tcp://0.0.0.0:11010
easytier-cli credential generate  # → credential_id + credential_secret
# C: temporary node
easytier-core -n test --secure-mode --credential <private key base64> --peers tcp://A:11010
# revoke after verifying connectivity
easytier-cli credential revoke <credential_id>
# C is kicked out within seconds
```
### Step 9: OSPF Routing Restrictions for Temporary Nodes
**Constraint**: route information propagated by temporary nodes is untrusted and must be strictly limited.
#### 9a. Management nodes do not initiate OSPF sessions to temporary nodes
**Core principle**: when OSPF `maintain_sessions()` builds its minimum spanning tree, initiators are chosen only among management nodes; temporary nodes are excluded from `dst_peer_id_to_initiate`. Management nodes still **passively accept** sessions initiated by temporary nodes.
**File: `easytier/src/peers/peer_ospf_route.rs`**
Modify `maintain_sessions()` (line 2485):
- filter temporary nodes out of the `dst_peer_id_to_initiate` candidate list
- the MST between management nodes is unaffected
```rust
// in maintain_sessions(), filter temporary nodes out of the initiator candidates
let peers: Vec<PeerId> = peers.into_iter().filter(|peer_id| {
    // only initiate sessions to management nodes, never to temporary nodes
    !self.is_credential_peer(*peer_id)
}).collect();
```
- **on the temporary node itself**: `maintain_sessions()` keeps only management nodes as initiator candidates, skipping other temporary nodes
```rust
// temporary-node side: only initiate sessions to management nodes
if self.is_credential_node() {
    let peers: Vec<PeerId> = peers.into_iter().filter(|peer_id| {
        !self.is_credential_peer(*peer_id) // management nodes only
    }).collect();
}
```
**Session establishment matrix**:
- **management → management**: normal MST initiator selection (unchanged)
- **temporary → management**: the temporary node initiates; the management node passively accepts
- **temporary → temporary**: never established (each side filters out the other)
- **management → temporary**: never initiated (not among the initiator candidates)
**Route propagation**: a temporary node pushes its own RoutePeerInfo via `sync_route_info` over the session it initiated. Management nodes relay it to other management nodes during normal OSPF sync, and push the full route table back to the temporary node over the same session.
#### 9b. Management nodes accept route info from temporary nodes selectively
**File: `easytier/src/peers/peer_ospf_route.rs`**
When a temporary node calls `sync_route_info` over the session it initiated, the management node filters what it accepts:
- accept only the temporary node's **own** `RoutePeerInfo` (`route_info.peer_id == dst_peer_id`); drop route info it claims for other peers
- for the node's own RoutePeerInfo, filter its `proxy_cidrs`: keep only subnets within `TrustedCredentialPubkey.allowed_proxy_cidrs`, dropping anything outside that range
- ignore the temporary node's `foreign_network_infos`
- accept the temporary node's `conn_info` (connection topology) **based on the `allow_relay` flag** (see below)
Modify `update_peer_infos()` (line 461):
```rust
fn update_peer_infos(
    &self, my_peer_id, my_peer_route_id, dst_peer_id,
    peer_infos, raw_peer_infos,
) -> Result<(), Error> {
    let dst_is_credential_peer = self.is_credential_peer(dst_peer_id);
    for route_info in peer_infos.iter_mut() {
        // a temporary node may only propagate its own route info
        if dst_is_credential_peer && route_info.peer_id != dst_peer_id {
            tracing::debug!(
                ?dst_peer_id, peer_id=?route_info.peer_id,
                "ignoring route info from credential peer for other peer"
            );
            continue;
        }
        // filter a temporary node's proxy_cidrs down to the ranges its credential allows
        if dst_is_credential_peer {
            let allowed = self.get_credential_allowed_proxy_cidrs(dst_peer_id);
            if let Some(allowed_cidrs) = allowed {
                route_info.proxy_cidrs.retain(|cidr| {
                    allowed_cidrs.iter().any(|a| cidr_is_subset(cidr, a))
                });
            }
        }
        // ... existing logic ...
    }
}
```
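The `cidr_is_subset` helper referenced above is left undefined in the plan; an IPv4-only sketch using just the standard library might look like this (function names are illustrative; a real implementation would also handle IPv6 and reuse the project's existing CIDR types):

```rust
use std::net::Ipv4Addr;

/// Parse "a.b.c.d/len" into (network as u32, prefix length); None on malformed input.
fn parse_cidr(s: &str) -> Option<(u32, u32)> {
    let (addr, len) = s.split_once('/')?;
    let addr: Ipv4Addr = addr.parse().ok()?;
    let len: u32 = len.parse().ok()?;
    if len > 32 {
        return None;
    }
    Some((u32::from(addr), len))
}

/// True if every address in `inner` is also covered by `outer`.
fn cidr_is_subset(inner: &str, outer: &str) -> bool {
    let (Some((in_net, in_len)), Some((out_net, out_len))) =
        (parse_cidr(inner), parse_cidr(outer))
    else {
        return false; // malformed CIDRs are never accepted
    };
    if in_len < out_len {
        return false; // inner covers a larger range than outer
    }
    // compare both networks under the outer prefix mask
    let mask = if out_len == 0 { 0 } else { u32::MAX << (32 - out_len) };
    (in_net & mask) == (out_net & mask)
}
```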
Modify `do_sync_route_info()` (line 2614):
```rust
// in do_sync_route_info
let from_is_credential = self.is_credential_peer(from_peer_id);
let credential_allows_relay = from_is_credential
    && self.is_credential_relay_allowed(from_peer_id);
if let Some(peer_infos) = &peer_infos {
    // update_peer_infos filters out a temporary node's non-self info internally
    service_impl.synced_route_info.update_peer_infos(...);
}
// a temporary node's conn_info is accepted only when allow_relay=true
if let Some(conn_info) = &conn_info {
    if !from_is_credential || credential_allows_relay {
        service_impl.synced_route_info.update_conn_info(conn_info);
    }
}
// a temporary node's foreign_network_infos is never accepted
if let Some(foreign_network) = &foreign_network {
    if !from_is_credential {
        service_impl.synced_route_info.update_foreign_network(foreign_network);
    }
}
```
**conn_info handling**:
- a temporary node's `conn_info` is accepted based on its credential's `allow_relay` flag:
  - `allow_relay = true`: management nodes accept and propagate the conn_info, so the node joins the routing graph and may relay data
  - `allow_relay = false` (default): conn_info is ignored; the node does not relay and exists only as a leaf in the routing graph
- a temporary node's `foreign_network_infos` is always ignored
**`is_credential_relay_allowed()` implementation**:
```rust
fn is_credential_relay_allowed(&self, peer_id: PeerId) -> bool {
    // look up this peer's credential among the network-wide trusted_credential_pubkeys
    // and check the corresponding TrustedCredentialPubkey.allow_relay flag
    let peer_info = self.peer_infos.read();
    if let Some(info) = peer_info.get(&peer_id) {
        for tc in &self.all_trusted_credentials {
            if tc.pubkey == info.noise_static_pubkey {
                return tc.allow_relay;
            }
        }
    }
    false
}
```
**Note**: even with `allow_relay=true`, a temporary node still cannot forward handshake packets (the Step 10b restriction stands), so no new node can join the network by relaying through it. The relay capability covers only data forwarding between already-connected peers.
#### 9c. `trusted_credential_pubkeys` in a temporary node's `RoutePeerInfo` is ignored
Covered in Step 5: only trusted lists published by management nodes with a matching `secret_digest` are honored.
#### How to tell whether a peer is a temporary node
New method on `SyncedRouteInfo` / `PeerRouteServiceImpl`:
```rust
fn is_credential_peer(&self, peer_id: PeerId) -> bool {
    // method: inspect the peer's RoutePeerInfo
    // 1. if its noise_static_pubkey is in trusted_credential_pubkeys → temporary node
    // 2. if it passed network_secret confirmation (secret_digest matches) → management node
    // 3. after the peer_conn handshake, secure_auth_level can be recorded on the connection
    let peer_info = self.synced_route_info.peer_infos.read();
    if let Some(info) = peer_info.get(&peer_id) {
        if !info.noise_static_pubkey.is_empty()
            && self.trusted_credential_pubkeys.contains(&info.noise_static_pubkey) {
            return true;
        }
    }
    false
}
```
For directly connected peers, `secure_auth_level` can also be recorded at handshake time for a fast check.
### Step 10: Preventing Network Access Through Temporary Nodes
**Constraint**: no new node (whether or not it holds the network_secret) may join the network through a temporary node's listener. P2P connections brokered via management-node relay remain allowed.
#### 10a. Temporary nodes inherently cannot admit new nodes (no extra code needed)
When a temporary node acts as a listener, a new node's connection **fails naturally**, because:
1. the temporary node has no `network_secret`, so it cannot verify the peer's `secret_proof` and cannot confirm the peer is a management node
2. the temporary node publishes no `trusted_credential_pubkeys`, so the peer's pubkey is never in a trusted list
3. the peer likewise cannot verify the temporary node's `secret_proof` (the temporary node has no network_secret)
Therefore **no explicit interception is needed in `add_tunnel_as_server()`**. The existing Noise handshake plus credential validation already prevents new nodes from joining through a temporary node.
**Exception**: known management nodes may connect to a temporary node (e.g. in P2P hole-punch scenarios), because the management node's pubkey is already known to the temporary node via OSPF sync, so the handshake succeeds.
#### 10b. Temporary nodes do not forward connection requests from unknown peers
**File: `easytier/src/peers/peer_manager.rs`**
In the packet-forwarding path (lines 718-766):
- temporary nodes must not forward `HandShake` / `NoiseHandshakeMsg*` packets
- this prevents new nodes from joining the network via a temporary node's relay
```rust
// in the forward branch of the peer_recv loop
if to_peer_id != my_peer_id {
    // temporary nodes drop forwarded handshake packets
    // (blocks new nodes from joining through a temporary node)
    if is_credential_node && (
        hdr.packet_type == PacketType::HandShake as u8
        || hdr.packet_type == PacketType::NoiseHandshakeMsg1 as u8
        || hdr.packet_type == PacketType::NoiseHandshakeMsg2 as u8
        || hdr.packet_type == PacketType::NoiseHandshakeMsg3 as u8
    ) {
        tracing::debug!("credential node dropping forwarded handshake packet");
        continue;
    }
    // ... existing forward logic ...
}
```
#### 10c. P2P connections brokered via management-node relay remain allowed
The P2P hole-punch flow:
1. the two nodes exchange hole-punch info through a management node (RPC)
2. a direct P2P tunnel is established
3. the handshake runs over the P2P tunnel
This flow is unaffected, because:
- hole-punch info is exchanged via management-node relay (RPC), not through temporary nodes
- the handshake after the P2P tunnel is established is direct and does not go through a temporary node's listener
- connections with `is_directly_connected=false` (hole-punch results) may be accepted by temporary nodes
### ACL Group Integration
**Design approach**: map each credential to an ACL group, reusing the existing group-based ACL rule system.
The existing ACL system already supports group-based rule matching:
- `Rule.source_groups` / `Rule.destination_groups` (acl.proto:72-73)
- `PeerGroupInfo` verifies a peer's group membership via an HMAC proof (peer_rpc.rs:8-38)
- `verify_and_update_group_trusts()` updates the group trust map during OSPF sync (peer_ospf_route.rs:743)
- `get_peer_groups()` returns a peer's groups for ACL matching (peer_ospf_route.rs:2287)
**Approach**: when generating a credential, create an implicit ACL group for it.
1. **At credential generation**: the management node creates a group associated with the credential:
   - group_name = `"credential:<credential_id>"` or a user-defined name
   - group_secret = a key derived from credential_secret
   - optional: assign the credential to a named group for batch management (e.g. `"guest"`, `"contractor"`)
2. **When the temporary node joins**: it connects with the credential private key. Its group membership is declared by the management node in `TrustedCredentialPubkey.groups` (the temporary node supplies no group proof of its own). After matching the pubkey in `verify_credential_peers()`, validating nodes add the declared groups directly to `group_trust_map`.
3. **ACL rule configuration**: administrators can configure group-based ACL rules:
```toml
# example: restrict the "guest" group to a specific subnet
[[acl.acl_v1.chains]]
name = "inbound"
chain_type = "Inbound"
default_action = "Allow"
[[acl.acl_v1.chains.rules]]
name = "restrict_guest"
source_groups = ["guest"]
destination_ips = ["10.0.0.0/24"]
action = "Drop"
```
4. **Management nodes publish group info**:
   - when propagating trusted pubkeys in `RoutePeerInfo.trusted_credential_pubkeys`, the associated group info is included
   - proto extension:
     (uses the `TrustedCredentialPubkey` defined in Step 1; group membership is declared by the management node, with no proof verification)
   - replace `repeated bytes trusted_credential_pubkeys` with `repeated TrustedCredentialPubkey trusted_credential_pubkeys`
5. **Validator handling**: in `verify_credential_peers()`:
   - after confirming the credential pubkey is in the trusted list
   - add the groups declared in `TrustedCredentialPubkey.groups` directly to `group_trust_map` / `group_trust_map_cache` (no group-proof verification is needed: the management node's declaration is already trusted)
   - the ACL filter then matches rules by group automatically when processing packets
**API extension**:
Groups can be specified when generating a credential:
```protobuf
message GenerateCredentialRequest {
  repeated string groups = 1;              // optional: group names associated with this credential
  bool allow_relay = 2;                    // optional: whether relay is allowed
  repeated string allowed_proxy_cidrs = 3; // optional: limit the proxy_cidrs it may announce
  int64 ttl_seconds = 4;                   // required: credential lifetime (seconds)
}
```
CLI:
```bash
# generate a credential with groups, valid for 24 hours
easytier-cli credential generate --groups guest,restricted --ttl 86400
# generate a relay-capable credential, valid for 7 days
easytier-cli credential generate --groups relay-node --allow-relay --ttl 604800
# minimal usage (default group name "credential")
easytier-cli credential generate --ttl 3600
```
## Security Review
### Covered properties
- **End-to-end encryption**: packets are encrypted at the source and decrypted at the destination; relay nodes (including `allow_relay` temporary nodes) never see plaintext
- **Self-authorization protection**: only `trusted_credential_pubkeys` published by management nodes with a matching `secret_digest` are trusted
- **Route-tampering protection**: only a temporary node's own RoutePeerInfo is accepted; routes it forwards for others are ignored
- **Network-access protection**: temporary nodes inherently cannot admit new nodes (no network_secret, no published trusted list)
### Open issues to watch
**1. The Step 8 post-handshake fast rejection must be mandatory (not optional)**
An earlier draft marked Step 8 as an "optional optimization", but it is a **required security measure**. Without fast rejection:
- any random node (no credential, no network_secret) could complete the Noise handshake (since Step 3 relaxes secret_proof failure)
- while waiting for OSPF-sync validation, that node holds a valid encrypted connection, wasting resources
- **change**: make Step 8 mandatory. Immediately after the handshake: if the peer's secret_proof failed and its pubkey is not in this node's known trusted list → disconnect at once
**2. The group-proof verification mechanism needs clarification**
Earlier approach: the temporary node carries a `PeerGroupInfo` (HMAC proof) in `RoutePeerInfo.groups`, and management nodes propagate a `group_secret_hash` in `TrustedCredentialPubkey`.
Problem: HMAC verification requires the **original secret**, not a hash. How would a validating node know the credential's group secret?
**Resolution**: rename `TrustedCredentialPubkey.group_secret_hash` to `group_secret_digest`, using the same digest algorithm as the existing `NetworkIdentity.network_secret_digest`. During verification:
- the management node includes `group_secret_digest` in `TrustedCredentialPubkey`
- the temporary node sends a `group_proof` (HMAC) in its `PeerGroupInfo`
- a validating node cannot verify the HMAC directly (it lacks the original secret), but it can trust the management node's declaration: if the management node lists a group in `TrustedCredentialPubkey.groups` and the temporary node's pubkey matches, the group membership is trusted directly
- in short: **group membership is declared by the management node in `TrustedCredentialPubkey`, and the temporary node provides no proof**
- this simplifies the implementation without weakening security (the management node is already a trusted source)
**3. Credential persistence**
`CredentialManager` is currently designed as in-memory storage. If a management node restarts, all credentials are lost and the temporary nodes using them get kicked out.
**Resolution**:
- the management node can configure a JSON file path for credential storage (e.g. `--credential-file /path/to/credentials.json`)
- `CredentialManager` loads existing credentials from that file at startup
- generating/revoking a credential rewrites the file automatically
- without a configured path, credentials live only in memory (lost on restart)
**4. Reuse of one credential by multiple nodes**
The same credential private key can be used by several nodes at once. They get distinct `peer_id`s but share one `noise_static_pubkey`. Consequences:
- multiple RoutePeerInfo entries in the route table share the same `noise_static_pubkey`
- revocation kicks all nodes using that credential at once (as intended)
- **this is expected behavior**, but it should be documented
**5. Limiting a temporary node's proxy_cidrs**
A temporary node could announce bogus `proxy_cidrs` (subnet proxies) and black-hole traffic.
**Resolution** (already part of the design):
- the `allowed_proxy_cidrs` field set at credential generation limits which subnets the credential may announce
- management nodes filter in Step 9b's `update_peer_infos()`: only announced proxy_cidrs that are subsets of `allowed_proxy_cidrs` are kept
- if `allowed_proxy_cidrs` is empty, the temporary node may not announce any proxy_cidrs
**6. Credential expiry (TTL)**
Credentials must carry an expiry time. An expired credential automatically becomes invalid, equivalent to being revoked.
- `--ttl` or `--expiry` must be specified at generation time
- `verify_credential_peers()` checks `expiry_unix` and removes expired credentials from the trusted list
- the expiry check runs on every route sync; no extra timer is needed
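The expiry check can be sketched as a simple filter over the trusted list; the struct and function here are illustrative stand-ins (only the `expiry_unix` field comes from the Step 1 proto), not actual EasyTier types:

```rust
/// Hypothetical in-memory mirror of a trusted-credential entry; only
/// `expiry_unix` corresponds to a field from the design (Step 1).
struct TrustedEntry {
    pubkey: Vec<u8>,
    expiry_unix: i64,
}

/// Drop expired credentials from the trusted list. `now_unix` is injected
/// so the check is deterministic; a real implementation would derive it
/// from SystemTime::now() on each route sync.
fn prune_expired(trusted: &mut Vec<TrustedEntry>, now_unix: i64) {
    trusted.retain(|tc| tc.expiry_unix > now_unix);
}
```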
## Advantages
- **Minimal changes**: the Noise handshake message format is untouched; the existing flow is fully reused
- **Security**: X25519 key pairs provide strong identity authentication, no weaker than network_secret; end-to-end encryption covers relay scenarios
- **Natural propagation**: rides on existing OSPF infrastructure; no new distribution RPC is needed
- **Decentralized revocation**: any management node can revoke; the whole network learns via route sync
- **ACL reuse**: credentials map onto ACL groups, fully reusing the existing group-based ACL rule system with no new ACL mechanism
@@ -0,0 +1,509 @@
# PeerConn Secure Mode (Out-of-Order-Tunnel Friendly)
This document is a complete draft specification for the next phase of the "PeerConn secure mode" protocol. Given an underlying `Tunnel` that **does not guarantee ordered delivery** (packets may be reordered or lost), the goals are to:
- keep using Noise for the handshake (encryption, authentication, channel binding)
- avoid `snow::TransportState` for per-packet data-plane crypto (it implicitly increments the nonce and therefore requires ordered delivery)
- carry a **12-byte plaintext nonce** at the packet tail (matching the project's current "tail nonce" encryption format), with the **epoch encoded into the nonce**
- implement anti-replay with as little memory as possible (default window 256)
- share one peer-level security session (PeerSession) across multiple PeerConns
This document describes only the protocol and data structures; the implementation should iterate against it.
---
## Background
### Node roles
The system commonly has two roles (determined by configuration and trust anchors, not by a hard-coded "node type"):
- **User node (same-network node)**: usually holds the `network_secret` and expects strongly authenticated connections with other nodes of the same `network_name`.
- **Shared node (infrastructure node)**: usually does not hold a user's `network_secret`; it provides forwarding/relay for multiple user networks. Clients can pin a shared node's long-term static public key to obtain "server authentication".
The `network_name` exchanged during the handshake yields a **role hint**:
- `a_network_name == b_network_name`: same-network hint
- otherwise: shared-node/foreign-network hint
`network_name` is not an authentication anchor; security decisions must rest on pinning or `network_secret_confirmed` (see section 8).
### Connection types and PeerConn
In the implementation, a "connection between peers" is represented by `PeerConn`, which binds one underlying `Tunnel` (tcp/udp/quic/wg/ring, etc.) and carries upper-layer messages in `PeerManagerHeader`:
- `PacketType::HandShake`: the legacy PeerConn handshake
- `PacketType::NoiseHandshake`: the Noise handshake in secure mode
Reference: [packet_def.rs](file:///data/project/EasyTier/easytier/src/tunnel/packet_def.rs#L59-L77).
PeerManager uses different entry points when a connection is established:
- initiator: `add_client_tunnel` -> `PeerConn::do_handshake_as_client`
- responder: `add_tunnel_as_server` -> `PeerConn::do_handshake_as_server`
Reference: [peer_manager.rs](file:///data/project/EasyTier/easytier/src/peers/peer_manager.rs#L361-L379).
### Multiple connections and foreign networks
Multiple PeerConns may exist between the same pair of peers (multi-path, multi-protocol, reconnects, etc.), so a peer-level "security session" is needed to share the authentication result and data-plane keys (see PeerSession in 7.3).
Additionally, when the peer's `network_name` from the handshake differs from the local one, PeerManager routes the connection into the foreign-network logic (e.g. foreign network client/manager) to support "shared node" mode and cross-network forwarding:
Reference: [peer_manager.rs](file:///data/project/EasyTier/easytier/src/peers/peer_manager.rs#L361-L377).
### Why this design is needed
If the data plane used `snow::TransportState` per packet, the implicit nonce increment would require ordered delivery. Since the project's data encryption format already uses a "plaintext tail nonce" (e.g. `AesGcmTail.nonce[12]`, and the isomorphic ring ChaCha20-Poly1305 tail), this document keeps the tail-nonce style and structures the nonce as `epoch||seq` to provide:
- out-of-order decryption
- low-memory anti-replay
- epoch/key rotation
- PeerSession reuse across multiple PeerConns
Reference: [packet_def.rs:AesGcmTail](file:///data/project/EasyTier/easytier/src/tunnel/packet_def.rs#L266-L273), [ring_chacha20.rs](file:///data/project/EasyTier/easytier/src/peers/encrypt/ring_chacha20.rs#L69-L93).
---
## 0. Constraints and Assumptions
- The underlying tunnel may reorder/drop packets, so the data plane must support out-of-order decryption.
- The outer `PeerManagerHeader` already carries `from_peer_id` / `to_peer_id`, which index the peer identity, so the data plane needs no extra `session_id`.
- Preserve the existing security goals:
  - shared-node pinning (based on the peer's Noise static pubkey)
  - network_secret channel-binding confirmation (`handshake_hash`)
  - "exchange network_name early" for role determination (same network vs. shared node)
---
## 1. Terminology
- **PeerConn**: one concrete underlying connection/path (a peer pair may have several).
- **PeerSession**: peer-level security session state (keys, epoch, nonce, anti-replay, auth level, etc.).
- **epoch**: data-plane key version (key id), encoded in the high 4 bytes of the 12-byte nonce.
- **seq**: send sequence number (per-direction monotonically increasing u64), encoded in the low 8 bytes of the 12-byte nonce.
- **nonce12**: the plaintext 12-byte nonce, encoded as `epoch||seq` and appended after the ciphertext.
- **AAD**: the AEAD's additional authenticated data. This document recommends an empty AAD (matching the project's current ring/openssl encryptors); it may later be extended to cover parts of the header.
---
## 2. Key Wire Structures (Reference)
### 2.1 PeerManagerHeader (16B)
From [packet_def.rs](file:///data/project/EasyTier/easytier/src/tunnel/packet_def.rs#L93-L105):
| Field | Type | Size |
| --------------- | -------: | ------: |
| from_peer_id | u32 (LE) | 4 |
| to_peer_id | u32 (LE) | 4 |
| packet_type | u8 | 1 |
| flags | u8 | 1 |
| forward_counter | u8 | 1 |
| reserved | u8 | 1 |
| len | u32 (LE) | 4 |
| **Total** | | **16B** |
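The 16-byte layout can be cross-checked with a packed `#[repr(C)]` sketch; the struct below mirrors the table, not the project's actual definition:

```rust
/// Mirror of the 16-byte PeerManagerHeader wire layout from the table above.
/// `packed` removes padding so the in-memory size equals the wire size.
#[repr(C, packed)]
struct PeerManagerHeader {
    from_peer_id: u32,   // little-endian on the wire
    to_peer_id: u32,     // little-endian on the wire
    packet_type: u8,
    flags: u8,
    forward_counter: u8,
    reserved: u8,
    len: u32,            // little-endian on the wire
}
```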
### 2.2 AES-GCM Packet Tail (28B)
From [packet_def.rs](file:///data/project/EasyTier/easytier/src/tunnel/packet_def.rs#L266-L273):
```text
AesGcmTail {
    tag: [u8; 16]   // 16B
    nonce: [u8; 12] // 12B
}                   // total 28B
```
The ring ChaCha20-Poly1305 tail has the same shape:
[ring_chacha20.rs](file:///data/project/EasyTier/easytier/src/peers/encrypt/ring_chacha20.rs#L8-L16).
---
## 3. Data Plane: nonce/epoch/seq Specification
### 3.1 Nonce12 (12B, plaintext at the packet tail)
Definition:
| Field | Encoding | Size |
| -------- | -------------- | ------: |
| epoch | u32 big-endian | 4 |
| seq | u64 big-endian | 8 |
| **Total** | | **12B** |
Written as:
```text
nonce12 = epoch_be_u32 || seq_be_u64
```
### 3.2 Sender Rules (per direction)
- `seq`: u64, monotonically increasing from 0; `seq += 1` for every packet sent.
- `epoch`: u32, initially 0; on rotation, `epoch += 1` and switch to the new key.
- `nonce12`: built as `epoch||seq`, used as the AEAD nonce, and written in plaintext to the packet tail.
**Security requirement**: under a single data key, `nonce12` must never repeat. This is guaranteed by "an epoch change always implies a key change" plus "seq is monotonic within an epoch".
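The `epoch||seq` encoding can be sketched as two std-only helpers (function names are illustrative, assuming the big-endian layout from 3.1):

```rust
/// Build the 12-byte plaintext nonce: 4-byte big-endian epoch || 8-byte big-endian seq.
fn encode_nonce12(epoch: u32, seq: u64) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    nonce[..4].copy_from_slice(&epoch.to_be_bytes());
    nonce[4..].copy_from_slice(&seq.to_be_bytes());
    nonce
}

/// Parse (epoch, seq) back out of a received tail nonce.
fn decode_nonce12(nonce: &[u8; 12]) -> (u32, u64) {
    let epoch = u32::from_be_bytes(nonce[..4].try_into().unwrap());
    let seq = u64::from_be_bytes(nonce[4..].try_into().unwrap());
    (epoch, seq)
}
```

Because the epoch occupies the high bytes, nonces from different epochs can never collide even if seq restarts at 0 after a rotation.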
### 3.3 Receiver Rules (per direction)
Communication is bidirectional: each side maintains a `PeerSession` per remote peer. The send-direction state generates `nonce12` (see 3.2); the receive-direction state handles out-of-order decryption and anti-replay (this section and section 5). This section describes only the receive path:
1. Read `nonce12` from the packet tail and parse `(epoch, seq)`.
2. Select the data key for that epoch (multiple epochs may be retained briefly; see 6.2).
3. Run the anti-replay check (see 5).
4. AEAD-decrypt the payload (a failed tag check is treated as packet loss).
---
## 4. Data Plane: AEAD Encapsulation
### 4.1 Algorithm choice
This document takes "tail tag(16) + nonce(12)" as the baseline, compatible with:
- AES-256-GCM (tag=16, nonce=12, key=32)
- ChaCha20-Poly1305 (tag=16, nonce=12, key=32)
### 4.2 Ciphertext layout (described as an AEAD tail)
```text
wire_payload = ciphertext || tag16 || nonce12
```
Where:
- `ciphertext`: the encrypted payload (same length as the plaintext)
- `tag16`: the AEAD tag (16B)
- `nonce12`: plaintext (12B), used for out-of-order decryption and anti-replay
### 4.3 AAD
默认:`AAD = empty`(与项目当前 ring encryptor 一致)。
扩展(可选):未来可把 `PeerManagerHeader` 的部分字段纳入 AAD(例如 from/to/packet_type/flags),以抵御“改 header 不改密文”的攻击面。该扩展不影响 nonce/epoch/seq 设计。
---
## 5. Anti-replay (minimal-memory configuration)
### 5.1 Default window parameters
- `window_size = 256`
- `keep_epochs = 2` (current + previous)
- `evict_idle_after = 30s` (if an epoch sees no packets for this long, reclaim its window and key)
### 5.2 ReplayWindow256 (conceptual structure and size)
Targeting minimal memory, a fixed-size bitmap window is recommended:
```text
ReplayWindow256 {
max_seq: u64 // 8B
bitmap: [u8; 32] // 256 bits = 32B
} // 40B total (by field size, excluding language-level alignment/metadata)
```
Notes:
- Bit 0 of `bitmap` records whether `max_seq` has been seen; bit i records whether `max_seq - i` has been seen.
- If `seq > max_seq`: shift the window forward and set the bit.
- If `seq <= max_seq`: compute `delta = max_seq - seq`; if `delta >= 256`, drop (too old); otherwise check the bitmap bit: if already set, drop (replay); if not, accept and set it.
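The check-and-update logic above can be sketched as follows (a minimal std-only sketch; the names and the bit-by-bit shifting strategy are illustrative, not the project's implementation):

```rust
/// Fixed 256-bit anti-replay window, bit i tracking max_seq - i.
struct ReplayWindow256 {
    max_seq: u64,
    bitmap: [u8; 32],
}

impl ReplayWindow256 {
    fn new() -> Self {
        Self { max_seq: 0, bitmap: [0; 32] }
    }

    fn bit(&self, i: u64) -> bool {
        self.bitmap[(i / 8) as usize] & (1u8 << (i % 8)) != 0
    }

    fn set_bit(&mut self, i: u64) {
        self.bitmap[(i / 8) as usize] |= 1u8 << (i % 8);
    }

    /// Move every tracked bit one position further into the past
    /// (bit i becomes bit i+1; the oldest bit falls off the end).
    fn shift_up(&mut self) {
        let mut carry = 0u8;
        for byte in self.bitmap.iter_mut() {
            let next = *byte >> 7;
            *byte = (*byte << 1) | carry;
            carry = next;
        }
    }

    /// Returns true if seq is fresh (accepted); false on replay or too-old.
    fn check_and_update(&mut self, seq: u64) -> bool {
        if seq > self.max_seq {
            // advance the window so that the new seq sits at bit 0
            let shift = seq - self.max_seq;
            if shift >= 256 {
                self.bitmap = [0; 32];
            } else {
                for _ in 0..shift {
                    self.shift_up();
                }
            }
            self.max_seq = seq;
            self.set_bit(0);
            return true;
        }
        let delta = self.max_seq - seq;
        if delta >= 256 {
            return false; // too old
        }
        if self.bit(delta) {
            return false; // replay
        }
        self.set_bit(delta);
        true
    }
}
```

A word-wise shift (u64 limbs) would be faster for large jumps; the per-bit loop keeps the sketch short.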
### 5.3 ReplayState (per peer, per direction, per epoch)
To reduce memory, use two fixed epoch slots instead of a HashMap:
```text
EpochRxSlot {
epoch: u32 // 4B
window: ReplayWindow256 // 40B
last_rx_ms: u64 // 8B (for the 30s eviction)
valid: bool // 1B (implementation detail)
}
```
Each peer keeps 2 `EpochRxSlot`s per direction:
- current_epoch_slot
- previous_epoch_slot
Memory footprint (rough, by field size):
- Per direction: about 2 * (4 + 40 + 8 + 1) = 106B
- Both directions: about 212B
Adding the epoch key cache (see 6.2), this stays in the "a few hundred bytes per peer" range.
---
## 6. Epoch and key derivation/rotation
### 6.1 Key hierarchy
It is recommended to use Noise only for the handshake and authentication binding, and to derive data-plane keys from a session root key `root_key`:
```text
root_key: [u8; 32] // session root key material
```
Traffic keys are then derived per epoch and direction:
```text
k(epoch, dir) = HKDF(root_key, "et-traffic" || epoch_u32_be || dir_byte)
```
- `dir_byte`: direction identifier (e.g. 0=tx, 1=rx)
- Output length: 32B (for AES-256-GCM or ChaCha20-Poly1305)
### 6.2 Key cache (keep_epochs = 2)
For each remote peer and each direction, cache keys for 2 epochs:
```text
EpochKeySlot {
epoch: u32
key: [u8; 32] // 32B
valid: bool
}
```
On receive, select the key by `(epoch)`: if the epoch is current or previous it can be decrypted, otherwise drop (optionally, try a few nearby epochs at the cost of trial decryption).
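The two-slot cache and the receive-side lookup can be sketched as follows (names and the rotation entry point are illustrative assumptions, not the project's API):

```rust
/// One cached epoch key (keep_epochs = 2 means two of these per direction).
#[derive(Clone, Copy)]
struct EpochKeySlot {
    epoch: u32,
    key: [u8; 32],
    valid: bool,
}

struct EpochKeyCache {
    current: EpochKeySlot,
    previous: EpochKeySlot,
}

impl EpochKeyCache {
    fn new(initial_epoch: u32, key: [u8; 32]) -> Self {
        Self {
            current: EpochKeySlot { epoch: initial_epoch, key, valid: true },
            previous: EpochKeySlot { epoch: 0, key: [0; 32], valid: false },
        }
    }

    /// On rotation the current slot becomes previous; the new key takes current.
    fn rotate(&mut self, new_epoch: u32, new_key: [u8; 32]) {
        self.previous = self.current;
        self.current = EpochKeySlot { epoch: new_epoch, key: new_key, valid: true };
    }

    /// Receive-side lookup: only current or previous epochs decrypt;
    /// anything else is dropped by the caller.
    fn key_for_epoch(&self, epoch: u32) -> Option<&[u8; 32]> {
        [&self.current, &self.previous]
            .into_iter()
            .find(|s| s.valid && s.epoch == epoch)
            .map(|s| &s.key)
    }
}
```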
### 6.3 Rotation policy (no extra control messages by default)
To keep the protocol simple, the default policy is:
- The sender bumps `epoch += 1` and starts using the new key once a packet-count or time threshold is met.
- The receiver does not need to know the rotation point in advance: the plaintext `nonce12.epoch` selects the correct key.
- The receiver keeps `keep_epochs=2`, so out-of-order old packets remain decryptable during rotation.
Optional enhancement (future):
- For stronger consistency, a control packet announcing `epoch_advance` could be defined, but it is not required by this design.
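The default sender-side triggers can be sketched as follows (the struct, its name, and the threshold values are illustrative placeholders, not specified by this document):

```rust
/// Rotate when either a packet-count or a time threshold is hit.
struct RotationPolicy {
    max_packets_per_epoch: u64,
    max_epoch_age_secs: u64,
}

impl RotationPolicy {
    /// Checked by the sender before each send (or periodically);
    /// a true result means: epoch += 1 and derive the next traffic key.
    fn should_rotate(&self, packets_sent_in_epoch: u64, epoch_age_secs: u64) -> bool {
        packets_sent_in_epoch >= self.max_packets_per_epoch
            || epoch_age_secs >= self.max_epoch_age_secs
    }
}
```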
---
## 7. Handshake layer: Noise_XX + roles/authentication/session root key
### 7.1 Goals
Run a Noise_XX handshake when each PeerConn is established, completing:
- Exchange of `network_name` (to determine same-network vs shared-node early)
- Shared-node pubkey pinning (if configured)
- `network_secret_confirmed` (if both sides have a secret)
- Negotiation of the PeerSession (join or create), synchronizing the `root_key` and the starting `epoch` when needed
### 7.2 prologue
The prologue is fixed to the protocol version string and does not include `network_name`, so connections across different network_names are not rejected:
```text
prologue = "easytier-peerconn-noise"
```
### 7.3 PeerSession join / create / sync rules
This document introduces a peer-level session `PeerSession` (one per remote peer) to reuse data-plane keys and anti-replay state across multiple PeerConns.
#### 7.3.1 PeerSession identity fields
The data plane does not carry a `session_id`, so the session's lookup key is the outer `PeerManagerHeader.from_peer_id` (the remote peer_id). To make the join/create/sync decision during the handshake, additional metadata is maintained:
```text
PeerSessionMeta {
session_generation: u32 // 4B, monotonically increasing; version number of the session root key root_key
auth_level: u8 // 1B, matching the secure_auth_level enum semantics
}
```
Semantics:
- A `session_generation` change means the `root_key` was rotated (create).
- An unchanged `session_generation` means the existing `root_key` is reused (join).
#### 7.3.2 Roles
- Initiator: the side that opens the connection (A)
- Responder: the side that accepts the connection (B)
In this design, **the Responder is authoritative over session selection**: which generation of `root_key` is ultimately used is whatever msg2 returns.
#### 7.3.3 Responder decision (core)
After receiving msg1, the Responder reads the Initiator's `a_session_generation` (optional), compares it with the local PeerSession, and decides with the following priority:
1. **No local PeerSession**: perform `CREATE` (generate a new `root_key`, `session_generation=1`).
2. **A local PeerSession exists and a_session_generation matches it**: perform `JOIN` (no root_key rotation).
3. **A local PeerSession exists but a_session_generation is missing or mismatched**: perform `SYNC` (no root_key rotation; msg2 carries the current `root_key` and `session_generation` so the peer syncs to the local session).
Security and DoS notes:
- By default a peer cannot trigger root_key rotation through the handshake (preventing session resets caused by a peer that keeps redialing).
- `CREATE` runs only when no local session exists, or when local policy explicitly requires rotation (e.g. manual trigger, key-compromise response).
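The three-way decision above can be sketched as follows (a minimal sketch; the enum mirrors the `PeerConnSessionActionPb` semantics, the function name and parameters are illustrative):

```rust
/// Handshake session action, mirroring PeerConnSessionActionPb.
#[derive(Debug, PartialEq)]
enum SessionAction {
    Join,
    Sync,
    Create,
}

/// Responder decision: the peer can never force a root_key rotation;
/// CREATE happens only when no local session exists.
fn responder_decide(
    local_generation: Option<u32>,     // None = no local PeerSession
    a_session_generation: Option<u32>, // generation carried in msg1, if any
) -> SessionAction {
    match (local_generation, a_session_generation) {
        (None, _) => SessionAction::Create,
        (Some(local), Some(remote)) if local == remote => SessionAction::Join,
        (Some(_), _) => SessionAction::Sync,
    }
}
```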
#### 7.3.4 Initiator behavior
Before the handshake, the Initiator checks whether a PeerSession for the peer already exists locally:
- If it exists: carry the local `a_session_generation` in msg1.
- If not: omit `a_session_generation` from msg1.
After receiving msg2, the Initiator:
- If msg2 is `JOIN`: keeps using the local `root_key` and `session_generation` (no epoch/seq reset).
- If msg2 is `SYNC` / `CREATE` and carries a `root_key`: overwrites the local session with the `root_key` from msg2, resets the data-plane counters to `initial_epoch` and `seq=0`, and clears the replay windows.
### 7.4 Handshake payload encoding: protobuf vs fixed layout
protobuf (pb) is recommended for encoding the payload of the Noise handshake messages, because:
- It evolves easily (optional fields, extensibility, backward compatibility)
- pb is already widely used in the project (e.g. `HandshakeRequest`)
- Overhead is manageable: apart from strings, the core fields are fixed-length bytes (16/32/12), and pb adds only a few tag/len varints
Alternative: a fixed layout. If maximum performance and a predictable size are required, the same fields can be encoded with a fixed layout. The rest of this document defines the fields in protobuf form; a fixed layout can simply flatten the same fields.
### 7.5 Handshake messages (the 3 Noise_XX messages)
Notation:
- msg1: A -> B (payload in plaintext)
- msg2: B -> A (payload encrypted)
- msg3: A -> B (payload encrypted)
#### 7.5.1 pb definitions (field types and semantic sizes)
The following is a protocol-level definition (conceptual proto) and need not be fed into code generation immediately; implementations may add these messages to the proto, or first define local messages with prost on the Rust side.
```proto
message PeerConnNoiseMsg1Pb {
uint32 version = 1; // varint
string a_network_name = 2; // len <= 64 bytes (suggested constraint)
optional uint32 a_session_generation = 3; // varint, optional
bytes a_conn_id = 4; // 16B (UUID)
}
enum PeerConnSessionActionPb {
JOIN = 0; // no root_key sent; "continue using the existing session"
SYNC = 1; // root_key sent, syncing the peer to the local session
CREATE = 2; // root_key sent; a new session was created locally
}
message PeerConnNoiseMsg2Pb {
string b_network_name = 1; // len <= 64 bytes
uint32 role_hint = 2; // 1 = same-network hint, 2 = shared-node/external hint
PeerConnSessionActionPb action = 3; // JOIN/SYNC/CREATE
uint32 b_session_generation = 4; // varint
optional bytes root_key_32 = 5; // 32B, required when action=SYNC/CREATE
uint32 initial_epoch = 6; // u32 (varint or fixed32 encoding both work); suggested semantics: BE u32 value
bytes b_conn_id = 7; // 16B (UUID)
bytes a_conn_id_echo = 8; // 16B (UUID)
}
message PeerConnNoiseMsg3Pb {
bytes a_conn_id_echo = 1; // 16B
bytes b_conn_id_echo = 2; // 16B
// optional: proof for network_secret_confirmed
optional bytes secret_proof_32 = 3; // 32B
}
```
Field semantic sizes (excluding pb tag/len):
- UUID: 16B
- root_key: 32B
- secret_proof: 32B
- initial_epoch: 4B (logical size; the pb encoding is varint/fixed32, so the wire size is variable or 4B)
#### 7.5.2 msg1 payload (A -> B, plaintext)
```text
payload_bytes = PeerConnNoiseMsg1Pb.encode_to_vec()
```
Notes:
- This payload is plaintext, so no sensitive material such as `root_key` goes into it.
- `a_network_name` serves as a role hint.
- `a_session_generation` lets the Responder make the join/sync/create decision.
- `a_conn_id` binds this connection (anti-splicing) and is echoed in msg2/msg3.
#### 7.5.3 msg2 payload (B -> A, encrypted)
```text
payload_bytes = PeerConnNoiseMsg2Pb.encode_to_vec()
```
Notes:
- `action` decides whether this handshake updates the session root key:
  - `JOIN`: no `root_key_32` is sent; the existing session continues to be used
  - `SYNC`: `root_key_32` is sent, syncing the peer to the local existing session
  - `CREATE`: `root_key_32` is sent; a new session was created locally
- `initial_epoch` defaults to 0; it may be randomized to a random u32, but then the receiver's key/window cache needs a more complex eviction policy.
- `a_conn_id_echo` and `b_conn_id` bind the connection; msg3 echoes both to confirm that the two sides saw the same values.
#### 7.5.4 msg3 payload (A -> B, encrypted)
```text
payload_bytes = PeerConnNoiseMsg3Pb.encode_to_vec()
```
`secret_proof_32` (optional) is used for `network_secret_confirmed`:
```text
secret_proof = HMAC-SHA256(
key = derive(network_secret),
data = role_byte || handshake_hash
)
```
where `handshake_hash` is provided by Noise and `role_byte` distinguishes the two roles (client/server).
### 7.6 Pinning (shared nodes)
- Config location: `PeerConfig.peer_public_key` (base64, 32B).
- Verification timing: after the Noise handshake completes, A reads `remote_static_pubkey`; if a pinned key is configured it must match, otherwise disconnect.
---
## 8. Role determination and security semantics
- Comparing `network_name` is sufficient only as a **role hint**:
  - `a_network_name == b_network_name`: same-network hint
  - otherwise: shared-node/external hint
- However, `network_name` is **not an authentication anchor**. Security decisions must rest only on:
  - successful shared-node pinning (`shared_node_pubkey_verified`)
  - or successful network-secret confirmation (`network_secret_confirmed`)
- Until one of these completes, the connection is `encrypted_unauthenticated`: confidentiality/integrity are guaranteed but the peer's identity is not, so MITM is possible.
---
## 9. Relation to the tail-nonce encryption format
The project's current ring chacha20 implementation uses a random nonce appended in plaintext to the packet tail:
[ring_chacha20.rs](file:///data/project/EasyTier/easytier/src/peers/encrypt/ring_chacha20.rs#L69-L93)
This document replaces the random nonce with a structured `epoch||seq`:
- still 12B
- still plaintext at the packet tail
- but the semantics change from "randomly unique" to "out-of-order decryptable + anti-replay + rotatable"
---
## 10. Default parameter summary
- nonce: 12B = epoch(u32 BE) + seq(u64 BE)
- tag: 16B
- key: 32B (AES-256-GCM or ChaCha20-Poly1305)
- replay window: 256 (bitmap 32B)
- keep_epochs: 2 (current + previous)
- evict_idle_after: 30s
+177
View File
@@ -0,0 +1,177 @@
# Relay Peer Management Module Design
## Background and current state
In the current outbound forwarding path, PeerManager picks the next hop directly from the route and sends; the core flow is "get next hop → send":
- Internal send path: [peer_manager.rs:L1053-L1082](file:///data/project/EasyTier/easytier/src/peers/peer_manager.rs#L1053-L1082)
- Data-plane send entry: [peer_manager.rs:L1187-L1238](file:///data/project/EasyTier/easytier/src/peers/peer_manager.rs#L1187-L1238)
There is no unified management module for non-directly-connected targets, so Relay Peers cannot be governed at the session, state, or policy level.
## Design goals
- Lifecycle management for non-directly-connected Relay Peers
- A unified entry point for sessions (e.g. PeerSession) and path selection
- Decoupled from the existing routing module, consuming only next-hop candidates and route-change notifications
- No change to the existing data-plane main path
## Architecture
### Module name
**RelayPeerMap**
### References
- **PeerManager**: the top-level coordinator, holding both `Arc<PeerMap>` and `Arc<RelayPeerMap>`.
- **RelayPeerMap**: holds an `Arc<PeerMap>` (or `Weak<PeerMap>`) to invoke the underlying send capability after a decision.
- **PeerMap**: focuses on direct-peer management and basic routing-table maintenance; it does not hold RelayPeerMap directly (avoiding a circular dependency).
### Responsibilities
- **PeerManager**:
  - Send entry point.
  - Decides whether the target is directly connected:
    - If the target is in PeerMap: send via `PeerMap` directly.
    - If not: hand off to `RelayPeerMap`.
- **RelayPeerMap**:
  - Maintains state for non-direct peers (sessions, health).
  - Decides the next hop.
  - Calls `PeerMap` to send packets to the next hop.
- **ForeignNetworkManager**:
  - Owns an independent RelayPeerMap instance for non-direct forwarding in foreign networks.
- **PeerMap**:
  - Maintains direct peer connections.
  - Provides basic routing-table queries.
  - Performs the physical send to direct neighbors.
## Data model
### RelayPeerKey
- **dst_peer_id** (PeerId)
  - Note: a RelayPeerMap instance belongs to a specific network context, so the key only needs the PeerId.
### RelayPeerState
- selected_next_hop: PeerId
- session: Option<PeerSessionHandle>
- last_active_at: Instant
- path_metrics: latency, loss, hop_count (optional)
### RelayPathCandidate
- next_hop_peer_id
- cost / latency / availability
## Simplified state management
No complex state machine (Establishing/Suspect, etc.); only these checks:
- **Session exists**: `session.is_some()`
- **Session valid**: check the session's expiry or generation
- **Route reachable**: check whether the routing table has a next hop
## Key flows
### Outbound send flow (non-direct)
1. **PeerManager** receives a send request (target `dst_peer_id`).
2. **PeerManager** checks whether `PeerMap` has a direct connection to `dst_peer_id`.
3. If not, **PeerManager** hands the request to **RelayPeerMap**.
4. **RelayPeerMap** then:
   - Looks up the `RelayPeerState`.
     - On first contact with this Relay Peer, creates a RelayPeerState and enters the handshake flow.
   - Ensures a session exists (triggering handshake and sync if missing).
   - Selects the next hop (decided by RelayPeerMap).
   - Calls **PeerMap**'s `send_msg_directly(next_hop, packet)`.
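The direct-vs-relay dispatch in steps 1-3 can be sketched as follows (`PeerMap` and the surrounding types here are simplified stand-ins for this sketch, not the real project structs):

```rust
use std::collections::HashMap;

type PeerId = u32;

/// Stand-in for PeerMap: only tracks which peers are directly connected.
struct PeerMap {
    direct: HashMap<PeerId, ()>,
}

impl PeerMap {
    fn has_peer(&self, id: PeerId) -> bool {
        self.direct.contains_key(&id)
    }
}

/// Which path a send request takes.
#[derive(Debug, PartialEq)]
enum SendPath {
    Direct, // handled by PeerMap
    Relay,  // handed off to RelayPeerMap
}

/// PeerManager's dispatch: direct targets go through PeerMap,
/// everything else is delegated to RelayPeerMap.
fn dispatch(peer_map: &PeerMap, dst_peer_id: PeerId) -> SendPath {
    if peer_map.has_peer(dst_peer_id) {
        SendPath::Direct
    } else {
        SendPath::Relay
    }
}
```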
### Relay data-plane handshake outbound flow (Relay Peer special case)
Note: before first communication, a Relay Peer must complete a Noise handshake carried over data-plane messages; otherwise encrypted data-plane packets cannot be sent safely. Handshake messages are forwarded along the ordinary data-plane path, but their purpose is session creation rather than carrying business data.
Flow (initiator's view):
1. After the send path finds that `dst_peer_id` is a non-direct target, the RelayPeerMap flow begins.
2. If the target session is missing or invalid, send a **RelayHandshake** message (carrying `m1`) to the peer via `send_msg_directly(next_hop, packet)`.
3. The peer replies with a **RelayHandshakeAck** (carrying `m2`) along the reverse path; both sides derive the session and persist it.
4. After the handshake, data-plane packets are encrypted/authenticated with the established session key and go through normal forwarding.
5. If the handshake fails or control-plane pubkey info is missing, no data is sent; a retryable error is returned (retry cadence decided by the caller).
### Relay session establishment (data plane + 1-RTT Noise)
Background: the direct-peer Noise handshake runs inside `PeerConn`; a Relay Peer has no `PeerConn`, so that handshake logic cannot be reused. Relay sessions complete the handshake and key derivation via **data-plane handshake messages**, storing the result in `PeerSessionStore` (or an equivalent session store) for data-plane reuse.
Key assumption: the peer's static public key is available before the handshake (propagated via OSPF or a similar control plane), so a **1-RTT Noise handshake pattern** (e.g. an IK/KK-style two-message handshake) can be used, mapping the two messages to the **RelayHandshake / RelayHandshakeAck** data-plane messages.
Suggested flow (local side as initiator):
1. `ensure_session(dst_peer_id)` finds no usable session and triggers a handshake (optional: dedupe concurrent in-flight requests).
2. Read the static public key of `dst_peer_id` from the control-plane cache (if absent, wait for control-plane convergence or fall back to a non-1-RTT handshake pattern).
3. Build the first Noise handshake message `m1` (including the necessary authentication info and anti-replay fields, e.g. session generation / nonce / time window).
4. Send `RelayHandshake(m1)`; the peer replies with `RelayHandshakeAck(m2)`.
5. The initiator processes `m2`; both sides derive the same session key and session identifier and write the session into `PeerSessionStore` for later sends.
6. Subsequent relay data-plane packets use that session key for encryption/authentication (the packet format is not defined at this layer; semantics stay consistent with direct sessions).
Implementation points:
- **Role determination**: to avoid races from concurrent bidirectional handshakes, pick the initiator deterministically (e.g. `min(peer_id)` initiates), or let the first sender initiate and merge idempotently on conflict.
- **Idempotency and retries**: the data-plane handshake must support retries (replays of the same generation/nonce can be safely rejected or reused) and stay decoupled from route convergence.
- **Session binding**: the handshake must bind `dst_peer_id` to its static pubkey fingerprint to avoid key mix-ups caused by transient control-plane inconsistency.
### Session management
- PeerSessionStore is used only in secure mode; session creation and key derivation apply only in that mode.
- If no session exists at send time, trigger the Create/Join/Sync logic.
- For Relay Peers, session creation carries the Noise handshake over **data-plane handshake messages** (see above), replacing the handshake inside a direct `PeerConn`.
### PacketType plan (new)
- New PacketTypes:
  - `RelayHandshake`: carries `m1` (initiator -> responder)
  - `RelayHandshakeAck`: carries `m2` (responder -> initiator)
- Suggested payloads:
  - `RelayHandshake`: `RelayNoiseMsg1Pb` (with a_session_generation/conn_id/algorithm fields)
  - `RelayHandshakeAck`: `RelayNoiseMsg2Pb` (with b_session_generation/root_key/initial_epoch/algorithm fields)
- Constraints:
  - Both packet types must be forwardable like ordinary Data packets, but must not be consumed as business data.
  - They need to be recognizable as "handshake control" messages in the forwarding path.
## Policy design
- Next-hop policy is decided by RelayPeerMap, choosing LeastHop or LatencyFirst based on latency_first.
- Handshake policy: prefer the **1-RTT Noise handshake** with a known peer static pubkey, triggered via the **RelayHandshake/RelayHandshakeAck** messages.
- Failure handling: rely on upper-layer retries or underlying route convergence; no complex failover state machine at this layer for now.
- Pubkey source: the peer's static pubkey comes from control-plane propagation; when control-plane info is missing or changed, block reuse of the old session or trigger a re-handshake.
## Interface draft
### RelayPeerMap interface
- `send_msg(packet, dst_peer_id)`: handle the non-direct send logic.
- `ensure_session(dst_peer_id)`: ensure a usable session.
- `handshake_session(dst_peer_id)`: complete the relay session handshake via handshake messages (transparent to callers; may be invoked internally by `ensure_session`).
- `remove_peer(dst_peer_id)`: remove a peer that is no longer valid.
## Monitoring and metrics suggestions
- Relay session count
- Relay send success/failure counters
## Incremental rollout plan
### Phase 1: basics
- Introduce the RelayPeerMap structure.
- Integrate RelayPeerMap into PeerManager.
- Implement the basic "non-direct forwarding" delegation logic.
## Compatibility notes
- New PacketTypes are needed for RelayHandshake/RelayHandshakeAck.
- In secure mode, compression is done by PeerManager; encryption is done by PeerConn (direct) or RelayPeer (non-direct).
- In secure mode, RelayPeer must expose session-level encrypt/decrypt entry points:
  - Send: after RelayPeerMap makes its decision and before `send_msg_directly`, encrypt with the relay session key.
  - Receive: before a data-plane packet enters business processing, locate the session by `from_peer_id/to_peer_id` and decrypt.
- PeerSessionStore is kept for secure-mode session compatibility; non-secure mode keeps the existing behavior.
- Routing-module computation results are unchanged.
+38 -11
View File
@@ -4,11 +4,11 @@ core_clap:
config_server:
en: |+
config server address, allow format:
full url: --config-server udp://127.0.0.1:22020/admin
full url: --config-server udp://127.0.0.1:22020/admin, 'udp' can be replaced with tcp, ws, wss (when config server ws is proxied to wss)
only user name: --config-server admin, will use official server
zh-CN: |+
配置服务器地址。允许格式:
完整URL--config-server udp://127.0.0.1:22020/admin
完整URL--config-server udp://127.0.0.1:22020/admin,udp可以根据配置服务器替换为 tcp,ws,wss(配置服务器ws被代理为wss时)
仅用户名:--config-server admin,将使用官方的服务器
machine_id:
en: |+
@@ -67,12 +67,12 @@ core_clap:
en: |+
listeners to accept connections, allow format:
port number: <11010>. means tcp/udp will listen on 11010, ws/wss will listen on 11010 and 11011, wg will listen on 11011
url: <tcp://0.0.0.0:11010>. tcp can be tcp, udp, ring, wg, ws, wss\n
url: <tcp://0.0.0.0:11010>. tcp can be tcp, udp, ring, wg, ws, wss, quic, faketcp\n
proto & port pair: <proto:port>. wg:11011, means listen on 11011 with wireguard protocol url and proto:port can occur multiple times.
zh-CN: |+
监听器用于接受连接,允许以下格式:
端口号:<11010>,意味着tcp/udp将在11010端口监听,ws/wss将在11010和11011端口监听,wg将在11011端口监听。
url<tcp://0.0.0.0:11010>,其中tcp可以是tcp、udp、ring、wg、ws、wss协议。
url<tcp://0.0.0.0:11010>,其中tcp可以是tcp、udp、ring、wg、ws、wss、quic、faketcp协议。
协议和端口对:<proto:port>,例如wg:11011,表示使用WireGuard协议在11011端口监听。URL 和 协议端口对 可以多次出现。
no_listener:
en: "do not listen on any port, only connect to peers"
@@ -152,11 +152,17 @@ core_clap:
如果该参数为空,则禁用转发。默认允许所有网络。
例如:'*'(所有网络),'def*'(以def为前缀的网络),'net1 net2'(只允许net1和net2"
disable_p2p:
en: "disable p2p communication, will only relay packets with peers specified by --peers"
zh-CN: "禁用P2P通信,只通过--peers指定的节点转发数据包"
en: "disable ordinary automatic p2p; still establish p2p with peers marked as need-p2p, and other peers should not proactively connect to this node"
zh-CN: "禁用普通自动P2P;仍会与标记为 need-p2p 的节点建立P2P连接,其他节点不应主动与当前节点建立P2P"
p2p_only:
en: "only communicate with peers that already establish p2p connection"
zh-CN: "仅与已经建立P2P连接的对等节点通信"
lazy_p2p:
en: "only try to establish p2p when traffic actually needs the peer; peers marked as need-p2p are still connected proactively"
zh-CN: "仅在实际流量需要某个对等节点时才尝试建立P2P;被标记为 need-p2p 的节点仍会主动建立连接"
need_p2p:
en: "announce that other peers should proactively establish p2p connections to this node even when they enable lazy-p2p"
zh-CN: "声明即使其他节点启用了 lazy-p2p,也应主动与当前节点建立P2P连接"
disable_tcp_hole_punching:
en: "disable tcp hole punching"
zh-CN: "禁用TCP打洞功能"
@@ -196,9 +202,6 @@ core_clap:
disable_quic_input:
en: "do not allow other nodes to use QUIC to proxy tcp streams to this node. when a node with QUIC proxy enabled accesses this node, the original tcp connection is preserved."
zh-CN: "不允许其他节点使用 QUIC 代理 TCP 流到此节点。开启 QUIC 代理的节点访问此节点时,依然使用原始 TCP 连接。"
quic_listen_port:
en: "the port to listen for quic connections, default is 0 (random port)"
zh-CN: "监听 QUIC 连接的端口,默认值为0(随机端口)。"
port_forward:
en: "forward local port to remote port in virtual network. e.g.: udp://0.0.0.0:12345/10.126.126.1:23456, means forward local udp port 12345 to 10.126.126.1:23456 in the virtual network. can specify multiple."
zh-CN: "将本地端口转发到虚拟网络中的远程端口。例如:udp://0.0.0.0:12345/10.126.126.1:23456,表示将本地UDP端口12345转发到虚拟网络中的10.126.126.1:23456。可以指定多个。"
@@ -209,11 +212,14 @@ core_clap:
en: "specify the top-level domain zone for magic DNS. if not provided, defaults to the value from dns_server module (et.net.). only used when accept_dns is true."
zh-CN: "指定魔法DNS的顶级域名区域。如果未提供,默认使用dns_server模块中的值(et.net.)。仅在accept_dns为true时使用。"
private_mode:
en: "if true, nodes with different network names or passwords from this network are not allowed to perform handshake or relay through this node."
zh-CN: "如果为true,则允许使用了与本网络不相同的网络名称和密码的节点通过本节点进行握手或中转"
en: "if true, foreign networks are only allowed when this node can verify they use the same network secret, or when a foreign credential node is already trusted via admin-issued credential propagation; different or missing secrets are otherwise rejected."
zh-CN: "如果为true,则允许两类 foreign network 接入:本节点能验证其使用相同 network secret 的节点,或已通过 foreign network 管理节点传播而被信任的 credential 节点;否则 secret 不同或缺失时会被拒绝。"
foreign_relay_bps_limit:
en: "the maximum bps limit for foreign network relay, default is no limit. unit: BPS (bytes per second)"
zh-CN: "作为共享节点时,限制非本地网络的流量转发速率,默认无限制,单位 BPS (字节每秒)"
instance_recv_bps_limit:
en: "the maximum total receive bps limit for this instance, default is no limit. unit: BPS (bytes per second)"
zh-CN: "限制当前网络实例整体入站流量的总接收速率,默认无限制,单位 BPS (字节每秒)"
tcp_whitelist:
en: "tcp port whitelist. Supports single ports (80) and ranges (8000-9000)"
zh-CN: "TCP 端口白名单。支持单个端口(80)和范围(8000-9000"
@@ -223,15 +229,36 @@ core_clap:
disable_relay_kcp:
en: "if true, disable relay kcp packets. avoid consuming too many bandwidth. default is false"
zh-CN: "如果为true,则禁止节点转发 KCP 数据包,防止过度消耗流量。默认值为false"
disable_relay_quic:
en: "if true, disable relay quic packets. avoid consuming too many bandwidth. default is false"
zh-CN: "如果为true,则禁止节点转发 QUIC 数据包,防止过度消耗流量。默认值为false"
enable_relay_foreign_network_kcp:
en: "if true, allow relay kcp packets from foreign network. default is false (not forward foreign network kcp packets)"
zh-CN: "如果为true,则作为共享节点时也可以转发其他网络的 KCP 数据包。默认值为false(不转发)"
enable_relay_foreign_network_quic:
en: "if true, allow relay quic packets from foreign network. default is false (not forward foreign network quic packets)"
zh-CN: "如果为true,则作为共享节点时也可以转发其他网络的 QUIC 数据包。默认值为false(不转发)"
stun_servers:
en: "Override default STUN servers; If configured but empty, STUN servers are not used"
zh-CN: "覆盖内置的默认 STUN server 列表;如果设置了但是为空,则不使用 STUN servers;如果没设置,则使用默认 STUN server 列表"
stun_servers_v6:
en: "Override default STUN servers, IPv6; If configured but empty, IPv6 STUN servers are not used"
zh-CN: "覆盖内置的默认 IPv6 STUN server 列表;如果设置了但是为空,则不使用 IPv6 STUN servers;如果没设置,则使用默认 IPv6 STUN server 列表"
secure_mode:
en: "if true, enable secure mode. default is false"
zh-CN: "如果为true,则启用安全模式。默认值为false"
local_private_key:
en: "local private key for secure mode. if not provided, a random key will be generated"
zh-CN: "安全模式下的本地私钥。如果未提供,则会随机生成一个密钥"
local_public_key:
en: "local public key for secure mode. if not provided, a random key will be generated, or use local private key to derive public key"
zh-CN: "安全模式下的本地公钥。如果未提供,则会随机生成一个密钥,或者使用本地私钥派生公钥"
credential:
en: "credential secret (base64-encoded private key) for joining network as a temporary node without network_secret"
zh-CN: "凭据密钥(base64编码的私钥),用于作为临时节点加入网络,无需 network_secret"
credential_file:
en: "path to credential storage file for persisting generated credentials across restarts (admin nodes)"
zh-CN: "凭据存储文件路径,用于在管理节点重启后保留已生成的凭据"
check_config:
en: Check config validity without starting the network
zh-CN: 检查配置文件的有效性并退出
+8 -1
View File
@@ -1,6 +1,10 @@
#[cfg(feature = "zstd")]
use anyhow::Context;
#[cfg(feature = "zstd")]
use dashmap::DashMap;
#[cfg(feature = "zstd")]
use std::cell::RefCell;
#[cfg(feature = "zstd")]
use zstd::bulk;
use zerocopy::{AsBytes as _, FromBytes as _};
@@ -38,6 +42,7 @@ impl DefaultCompressor {
compress_algo: CompressorAlgo,
) -> Result<Vec<u8>, Error> {
match compress_algo {
#[cfg(feature = "zstd")]
CompressorAlgo::ZstdDefault => CTX_MAP.with(|map_cell| {
let map = map_cell.borrow();
let mut ctx_entry = map.entry(compress_algo).or_default();
@@ -58,6 +63,7 @@ impl DefaultCompressor {
compress_algo: CompressorAlgo,
) -> Result<Vec<u8>, Error> {
match compress_algo {
#[cfg(feature = "zstd")]
CompressorAlgo::ZstdDefault => DCTX_MAP.with(|map_cell| {
let map = map_cell.borrow();
let mut ctx_entry = map.entry(compress_algo).or_default();
@@ -169,12 +175,13 @@ impl Compressor for DefaultCompressor {
}
}
#[cfg(feature = "zstd")]
thread_local! {
static CTX_MAP: RefCell<DashMap<CompressorAlgo, bulk::Compressor<'static>>> = RefCell::new(DashMap::new());
static DCTX_MAP: RefCell<DashMap<CompressorAlgo, bulk::Decompressor<'static>>> = RefCell::new(DashMap::new());
}
#[cfg(test)]
#[cfg(all(test, feature = "zstd"))]
pub mod tests {
use super::*;
+135 -76
View File
@@ -6,7 +6,12 @@ use std::{
};
use anyhow::Context;
use base64::{prelude::BASE64_STANDARD, Engine as _};
use cfg_if::cfg_if;
use clap::builder::PossibleValue;
use clap::ValueEnum;
use serde::{Deserialize, Serialize};
use strum::{Display, EnumString, VariantArray};
use tokio::io::AsyncReadExt as _;
use crate::{
@@ -14,7 +19,7 @@ use crate::{
instance::dns_server::DEFAULT_ET_DNS_ZONE,
proto::{
acl::Acl,
common::{CompressionAlgoPb, PortForwardConfigPb, SocketType},
common::{CompressionAlgoPb, PortForwardConfigPb, SecureModeConfig, SocketType},
},
tunnel::generate_digest_from_str,
};
@@ -24,6 +29,7 @@ use super::env_parser;
pub type Flags = crate::proto::common::FlagsInConfig;
pub fn gen_default_flags() -> Flags {
#[allow(deprecated)]
Flags {
default_protocol: "tcp".to_string(),
dev_name: "".to_string(),
@@ -38,6 +44,7 @@ pub fn gen_default_flags() -> Flags {
relay_network_whitelist: "*".to_string(),
disable_p2p: false,
p2p_only: false,
lazy_p2p: false,
relay_all_peer_rpc: false,
disable_tcp_hole_punching: false,
disable_udp_hole_punching: false,
@@ -52,84 +59,67 @@ pub fn gen_default_flags() -> Flags {
private_mode: false,
enable_quic_proxy: false,
disable_quic_input: false,
quic_listen_port: 0,
disable_relay_quic: false,
enable_relay_foreign_network_quic: false,
foreign_relay_bps_limit: u64::MAX,
multi_thread_count: 2,
encryption_algorithm: "aes-gcm".to_string(),
encryption_algorithm: EncryptionAlgorithm::default().to_string(),
disable_sym_hole_punching: false,
tld_dns_zone: DEFAULT_ET_DNS_ZONE.to_string(),
quic_listen_port: u32::MAX,
need_p2p: false,
instance_recv_bps_limit: u64::MAX,
}
}
#[derive(Debug, Clone, PartialEq, Eq, Display, EnumString, VariantArray)]
#[strum(ascii_case_insensitive)]
pub enum EncryptionAlgorithm {
AesGcm,
Aes256Gcm,
#[strum(serialize = "xor")]
Xor,
#[cfg(feature = "wireguard")]
#[cfg(any(feature = "aes-gcm", feature = "wireguard", feature = "openssl-crypto"))]
#[strum(serialize = "aes-gcm")]
AesGcm,
#[cfg(any(feature = "aes-gcm", feature = "wireguard", feature = "openssl-crypto"))]
#[strum(serialize = "aes-256-gcm")]
Aes256Gcm,
#[cfg(any(feature = "wireguard", feature = "openssl-crypto"))]
#[strum(serialize = "chacha20")]
ChaCha20,
#[cfg(feature = "openssl-crypto")]
OpensslAesGcm,
#[cfg(feature = "openssl-crypto")]
OpensslChacha20,
#[cfg(feature = "openssl-crypto")]
OpensslAes256Gcm,
}
impl std::fmt::Display for EncryptionAlgorithm {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::AesGcm => write!(f, "aes-gcm"),
Self::Aes256Gcm => write!(f, "aes-256-gcm"),
Self::Xor => write!(f, "xor"),
#[cfg(feature = "wireguard")]
Self::ChaCha20 => write!(f, "chacha20"),
#[cfg(feature = "openssl-crypto")]
Self::OpensslAesGcm => write!(f, "openssl-aes-gcm"),
#[cfg(feature = "openssl-crypto")]
Self::OpensslChacha20 => write!(f, "openssl-chacha20"),
#[cfg(feature = "openssl-crypto")]
Self::OpensslAes256Gcm => write!(f, "openssl-aes-256-gcm"),
impl ValueEnum for EncryptionAlgorithm {
fn value_variants<'a>() -> &'a [Self] {
Self::VARIANTS
}
fn from_str(input: &str, _ignore_case: bool) -> Result<Self, String> {
input
.parse()
.map_err(|_| format!("'{}' is not a valid encryption algorithm", input))
}
fn to_possible_value(&self) -> Option<PossibleValue> {
Some(PossibleValue::new(self.to_string()))
}
}
#[allow(clippy::derivable_impls)]
impl Default for EncryptionAlgorithm {
fn default() -> Self {
cfg_if! {
if #[cfg(any(feature = "aes-gcm", feature = "wireguard", feature = "openssl-crypto"))] {
EncryptionAlgorithm::AesGcm
} else {
crate::common::log::warn!("no AEAD encryption algorithm is available, using INSECURE XOR");
EncryptionAlgorithm::Xor
}
}
}
}
impl TryFrom<&str> for EncryptionAlgorithm {
type Error = anyhow::Error;
fn try_from(value: &str) -> Result<Self, Self::Error> {
match value {
"aes-gcm" => Ok(Self::AesGcm),
"aes-256-gcm" => Ok(Self::Aes256Gcm),
"xor" => Ok(Self::Xor),
#[cfg(feature = "wireguard")]
"chacha20" => Ok(Self::ChaCha20),
#[cfg(feature = "openssl-crypto")]
"openssl-aes-gcm" => Ok(Self::OpensslAesGcm),
#[cfg(feature = "openssl-crypto")]
"openssl-chacha20" => Ok(Self::OpensslChacha20),
#[cfg(feature = "openssl-crypto")]
"openssl-aes-256-gcm" => Ok(Self::OpensslAes256Gcm),
_ => Err(anyhow::anyhow!("invalid encryption algorithm")),
}
}
}
pub fn get_avaliable_encrypt_methods() -> Vec<&'static str> {
let mut r = vec!["aes-gcm", "aes-256-gcm", "xor"];
if cfg!(feature = "wireguard") {
r.push("chacha20");
}
if cfg!(feature = "openssl-crypto") {
r.extend(vec![
"openssl-aes-gcm",
"openssl-chacha20",
"openssl-aes-256-gcm",
]);
}
r
}
#[auto_impl::auto_impl(Box, &)]
pub trait ConfigLoader: Send + Sync {
fn get_id(&self) -> uuid::Uuid;
@@ -209,6 +199,14 @@ pub trait ConfigLoader: Send + Sync {
fn get_stun_servers_v6(&self) -> Option<Vec<String>>;
fn set_stun_servers_v6(&self, servers: Option<Vec<String>>);
fn get_secure_mode(&self) -> Option<SecureModeConfig>;
fn set_secure_mode(&self, secure_mode: Option<SecureModeConfig>);
fn get_credential_file(&self) -> Option<std::path::PathBuf> {
None
}
fn set_credential_file(&self, _path: Option<std::path::PathBuf>) {}
fn dump(&self) -> String;
}
@@ -289,6 +287,16 @@ impl NetworkIdentity {
network_secret_digest: Some(network_secret_digest),
}
}
/// Create a NetworkIdentity for a credential node (no network_secret).
/// The node identifies by network_name only and authenticates via credential keypair.
pub fn new_credential(network_name: String) -> Self {
NetworkIdentity {
network_name,
network_secret: None,
network_secret_digest: None,
}
}
}
impl Default for NetworkIdentity {
@@ -300,6 +308,7 @@ impl Default for NetworkIdentity {
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct PeerConfig {
pub uri: url::Url,
pub peer_public_key: Option<String>,
}
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
@@ -382,6 +391,42 @@ impl From<PortForwardConfig> for PortForwardConfigPb {
}
}
pub fn process_secure_mode_cfg(mut user_cfg: SecureModeConfig) -> anyhow::Result<SecureModeConfig> {
if !user_cfg.enabled {
return Ok(user_cfg);
}
let private_key = if user_cfg.local_private_key.is_none() {
// if no private key, generate random one
let private = x25519_dalek::StaticSecret::random_from_rng(rand::rngs::OsRng);
user_cfg.local_private_key = Some(BASE64_STANDARD.encode(private.clone().as_bytes()));
private
} else {
// check if private key is valid
user_cfg.private_key()?
};
let public = x25519_dalek::PublicKey::from(&private_key);
match user_cfg.local_public_key {
None => {
user_cfg.local_public_key = Some(BASE64_STANDARD.encode(public.as_bytes()));
}
Some(ref user_pub) => {
let public = user_cfg.public_key()?;
if *user_pub != BASE64_STANDARD.encode(public.as_bytes()) {
return Err(anyhow::anyhow!(
"local public key {} does not match generated public key {}",
user_pub,
BASE64_STANDARD.encode(public.as_bytes())
));
}
}
}
Ok(user_cfg)
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
struct Config {
netns: Option<String>,
@@ -407,6 +452,8 @@ struct Config {
port_forward: Option<Vec<PortForwardConfig>>,
secure_mode: Option<SecureModeConfig>,
flags: Option<serde_json::Map<String, serde_json::Value>>,
#[serde(skip)]
@@ -418,6 +465,8 @@ struct Config {
udp_whitelist: Option<Vec<String>>,
stun_servers: Option<Vec<String>>,
stun_servers_v6: Option<Vec<String>>,
credential_file: Option<PathBuf>,
}
#[derive(Debug, Clone)]
@@ -626,12 +675,13 @@ impl ConfigLoader for TomlConfigLoader {
fn get_id(&self) -> uuid::Uuid {
let mut locked_config = self.config.lock().unwrap();
if locked_config.instance_id.is_none() {
let id = uuid::Uuid::new_v4();
locked_config.instance_id = Some(id);
id
} else {
*locked_config.instance_id.as_ref().unwrap()
match locked_config.instance_id {
Some(id) => id,
None => {
let id = uuid::Uuid::new_v4();
locked_config.instance_id = Some(id);
id
}
}
}
@@ -802,6 +852,22 @@ impl ConfigLoader for TomlConfigLoader {
self.config.lock().unwrap().stun_servers_v6 = servers;
}
fn get_secure_mode(&self) -> Option<SecureModeConfig> {
self.config.lock().unwrap().secure_mode.clone()
}
fn set_secure_mode(&self, secure_mode: Option<SecureModeConfig>) {
self.config.lock().unwrap().secure_mode = secure_mode;
}
fn get_credential_file(&self) -> Option<PathBuf> {
self.config.lock().unwrap().credential_file.clone()
}
fn set_credential_file(&self, path: Option<PathBuf>) {
self.config.lock().unwrap().credential_file = path;
}
fn dump(&self) -> String {
let default_flags_json = serde_json::to_string(&gen_default_flags()).unwrap();
let default_flags_hashmap =
@@ -1570,7 +1636,6 @@ enable_ipv6 = ${ENABLE_IPV6}
async fn test_numeric_type_env_vars() {
// set numeric-type environment variables
std::env::set_var("MTU_VALUE", "1400");
std::env::set_var("QUIC_PORT", "8080");
std::env::set_var("THREAD_COUNT", "4");
let mut temp_file = NamedTempFile::new().unwrap();
@@ -1583,7 +1648,6 @@ network_secret = "secret"
[flags]
mtu = ${MTU_VALUE}
quic_listen_port = ${QUIC_PORT}
multi_thread_count = ${THREAD_COUNT}
"#;
temp_file.write_all(config_content.as_bytes()).unwrap();
@@ -1597,10 +1661,6 @@ multi_thread_count = ${THREAD_COUNT}
// verify that numeric values are parsed correctly
let flags = config.get_flags();
assert_eq!(flags.mtu, 1400, "mtu should be 1400");
assert_eq!(
flags.quic_listen_port, 8080,
"quic_listen_port should be 8080"
);
assert_eq!(
flags.multi_thread_count, 4,
"multi_thread_count should be 4"
@@ -1612,7 +1672,6 @@ multi_thread_count = ${THREAD_COUNT}
// cleanup
std::env::remove_var("MTU_VALUE");
std::env::remove_var("QUIC_PORT");
std::env::remove_var("THREAD_COUNT");
}
+3
View File
@@ -29,6 +29,9 @@ define_global_var!(MAX_DIRECT_CONNS_PER_PEER_IN_FOREIGN_NETWORK, u32, 3);
define_global_var!(DIRECT_CONNECT_TO_PUBLIC_SERVER, bool, true);
// must make it true in future.
define_global_var!(HMAC_SECRET_DIGEST, bool, false);
pub const UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID: u32 = 2;
pub const WIN_SERVICE_WORK_DIR_REG_KEY: &str = "SOFTWARE\\EasyTier\\Service\\WorkDir";
+3
View File
@@ -48,6 +48,9 @@ pub enum Error {
#[error("secret key error: {0}")]
SecretKeyError(String),
#[error("noise protocol error: {0}")]
NoiseError(#[from] snow::Error),
}
pub type Result<T> = result::Result<T, Error>;
+339 -62
View File
@@ -1,20 +1,13 @@
use std::collections::hash_map::DefaultHasher;
use std::net::IpAddr;
use std::{
collections::{hash_map::DefaultHasher, HashMap},
hash::Hasher,
net::{IpAddr, SocketAddr},
sync::{Arc, Mutex},
time::{SystemTime, UNIX_EPOCH},
};
use crate::common::config::ProxyNetworkConfig;
use crate::common::stats_manager::StatsManager;
use crate::common::token_bucket::TokenBucketManager;
use crate::peers::acl_filter::AclFilter;
use crate::proto::acl::GroupIdentity;
use crate::proto::api::config::InstanceConfigPatch;
use crate::proto::api::instance::PeerConnInfo;
use crate::proto::common::{PeerFeatureFlag, PortForwardConfigPb};
use crate::proto::peer_rpc::PeerGroupInfo;
use crossbeam::atomic::AtomicCell;
use arc_swap::ArcSwap;
use dashmap::DashMap;
use super::{
config::{ConfigLoader, Flags},
@@ -23,6 +16,24 @@ use super::{
stun::{StunInfoCollector, StunInfoCollectorTrait},
PeerId,
};
use crate::{
common::{
config::ProxyNetworkConfig, shrink_dashmap, stats_manager::StatsManager,
token_bucket::TokenBucketManager,
},
peers::{acl_filter::AclFilter, credential_manager::CredentialManager},
proto::{
acl::GroupIdentity,
api::{config::InstanceConfigPatch, instance::PeerConnInfo},
common::{PeerFeatureFlag, PortForwardConfigPb},
peer_rpc::PeerGroupInfo,
},
tunnel::matches_protocol,
};
use crossbeam::atomic::AtomicCell;
use hmac::{Hmac, Mac};
use sha2::Sha256;
use socket2::Protocol;
pub type NetworkIdentity = crate::common::config::NetworkIdentity;
@@ -57,11 +68,121 @@ pub enum GlobalCtxEvent {
ConfigPatched(InstanceConfigPatch),
ProxyCidrsUpdated(Vec<cidr::Ipv4Cidr>, Vec<cidr::Ipv4Cidr>), // (added, removed)
CredentialChanged,
}
pub type EventBus = tokio::sync::broadcast::Sender<GlobalCtxEvent>;
pub type EventBusSubscriber = tokio::sync::broadcast::Receiver<GlobalCtxEvent>;
/// Source of a trusted public key from OSPF route propagation
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TrustedKeySource {
/// Peer node's noise static pubkey
OspfNode,
/// Admin-declared trusted credential pubkey
OspfCredential,
}
/// Metadata for a trusted public key
#[derive(Debug, Clone)]
pub struct TrustedKeyMetadata {
pub source: TrustedKeySource,
/// Expiry time in Unix seconds. None means never expires.
pub expiry_unix: Option<i64>,
}
impl TrustedKeyMetadata {
pub fn is_expired(&self) -> bool {
if let Some(expiry) = self.expiry_unix {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as i64;
return now >= expiry;
}
false
}
}
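The expiry rule above (a key with `expiry_unix: None` never expires; otherwise it is expired once the current Unix time reaches the stored second, inclusively) can be sketched stand-alone. The struct below is a hypothetical std-only copy for illustration, not the crate's own type:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Std-only copy of the expiry rule: None = never expires,
/// Some(t) = expired once now >= t (Unix seconds, inclusive).
struct KeyMeta {
    expiry_unix: Option<i64>,
}

impl KeyMeta {
    fn is_expired_at(&self, now: i64) -> bool {
        match self.expiry_unix {
            Some(expiry) => now >= expiry,
            None => false,
        }
    }

    fn is_expired(&self) -> bool {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock before Unix epoch")
            .as_secs() as i64;
        self.is_expired_at(now)
    }
}

fn main() {
    // A key that never expires.
    assert!(!KeyMeta { expiry_unix: None }.is_expired());
    // The boundary is inclusive: the expiry second itself counts as expired.
    assert!(KeyMeta { expiry_unix: Some(100) }.is_expired_at(100));
    assert!(!KeyMeta { expiry_unix: Some(100) }.is_expired_at(99));
}
```

Note the inclusive boundary (`now >= expiry`): a key whose expiry equals the current second is already rejected.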
// keyed by pubkey; the network name is the outer DashMap key in
// TrustedKeyMapManager, so the logical lookup key is (pubkey, network-name)
pub type TrustedKeyMap = HashMap<Vec<u8>, TrustedKeyMetadata>;
struct TrustedKeyMapManager {
network_trusted_keys: DashMap<String, ArcSwap<TrustedKeyMap>>,
}
impl TrustedKeyMapManager {
pub fn new() -> Self {
Self {
network_trusted_keys: DashMap::new(),
}
}
pub fn update_trusted_keys(&self, network_name: &str, trusted_keys: TrustedKeyMap) {
match self.network_trusted_keys.entry(network_name.to_string()) {
dashmap::Entry::Vacant(entry) => {
entry.insert(ArcSwap::new(Arc::new(trusted_keys)));
}
dashmap::Entry::Occupied(entry) => {
entry.get().store(Arc::new(trusted_keys));
}
}
}
pub fn remove_trusted_keys(&self, network_name: &str) {
self.network_trusted_keys.remove(network_name);
shrink_dashmap(&self.network_trusted_keys, None);
}
pub fn verify_trusted_key(&self, pubkey: &[u8], network_name: &str) -> bool {
self.verify_trusted_key_with_source(pubkey, network_name, None)
}
pub fn verify_trusted_key_with_source(
&self,
pubkey: &[u8],
network_name: &str,
source: Option<TrustedKeySource>,
) -> bool {
let Some(trusted_keys) = self
.network_trusted_keys
.get(network_name)
.map(|v| v.load_full())
else {
return false;
};
let Some(metadata) = trusted_keys.get(&pubkey.to_vec()) else {
return false;
};
if let Some(source) = source {
metadata.source == source && !metadata.is_expired()
} else {
!metadata.is_expired()
}
}
pub fn list_trusted_keys(&self, network_name: &str) -> Vec<(Vec<u8>, TrustedKeyMetadata)> {
let Some(trusted_keys) = self
.network_trusted_keys
.get(network_name)
.map(|v| v.load_full())
else {
return Vec::new();
};
let mut items = trusted_keys
.iter()
.filter(|(_, metadata)| !metadata.is_expired())
.map(|(pubkey, metadata)| (pubkey.clone(), metadata.clone()))
.collect::<Vec<_>>();
items.sort_by(|left, right| left.0.cmp(&right.0));
items
}
}
pub struct GlobalCtx {
pub inst_name: String,
pub id: uuid::Uuid,
@@ -83,20 +204,21 @@ pub struct GlobalCtx {
running_listeners: Mutex<Vec<url::Url>>,
enable_exit_node: bool,
proxy_forward_by_system: bool,
no_tun: bool,
p2p_only: bool,
flags: ArcSwap<Flags>,
feature_flags: AtomicCell<PeerFeatureFlag>,
quic_proxy_port: AtomicCell<Option<u16>>,
token_bucket_manager: TokenBucketManager,
stats_manager: Arc<StatsManager>,
acl_filter: Arc<AclFilter>,
credential_manager: Arc<CredentialManager>,
/// OSPF propagated trusted keys (peer pubkeys and admin credentials)
/// Stored in ArcSwap for lock-free reads and atomic batch updates
trusted_keys: Arc<TrustedKeyMapManager>,
}
impl std::fmt::Debug for GlobalCtx {
@@ -114,13 +236,25 @@ impl std::fmt::Debug for GlobalCtx {
pub type ArcGlobalCtx = std::sync::Arc<GlobalCtx>;
impl GlobalCtx {
fn derive_feature_flags(flags: &Flags, current: Option<PeerFeatureFlag>) -> PeerFeatureFlag {
let mut feature_flags = current.unwrap_or_default();
feature_flags.kcp_input = !flags.disable_kcp_input;
feature_flags.no_relay_kcp = flags.disable_relay_kcp;
feature_flags.support_conn_list_sync = true;
feature_flags.quic_input = !flags.disable_quic_input;
feature_flags.no_relay_quic = flags.disable_relay_quic;
feature_flags.need_p2p = flags.need_p2p;
feature_flags.disable_p2p = flags.disable_p2p;
feature_flags
}
pub fn new(config_fs: impl ConfigLoader + 'static) -> Self {
let id = config_fs.get_id();
let network = config_fs.get_network_identity();
let net_ns = NetNS::new(config_fs.get_netns());
let hostname = config_fs.get_hostname();
let (event_bus, _) = tokio::sync::broadcast::channel(8);
let (event_bus, _) = tokio::sync::broadcast::channel(16);
let stun_info_collector = StunInfoCollector::new_with_default_servers();
@@ -138,17 +272,12 @@ impl GlobalCtx {
let stun_info_collector = Arc::new(stun_info_collector);
let enable_exit_node = config_fs.get_flags().enable_exit_node || cfg!(target_env = "ohos");
let proxy_forward_by_system = config_fs.get_flags().proxy_forward_by_system;
let no_tun = config_fs.get_flags().no_tun;
let p2p_only = config_fs.get_flags().p2p_only;
let flags = config_fs.get_flags();
let feature_flags = PeerFeatureFlag {
kcp_input: !config_fs.get_flags().disable_kcp_input,
no_relay_kcp: config_fs.get_flags().disable_relay_kcp,
support_conn_list_sync: true, // Enable selective peer list sync by default
..Default::default()
};
let feature_flags = Self::derive_feature_flags(&flags, None);
let credential_storage_path = config_fs.get_credential_file();
let credential_manager = Arc::new(CredentialManager::new(credential_storage_path));
GlobalCtx {
inst_name: config_fs.get_inst_name(),
@@ -173,19 +302,19 @@ impl GlobalCtx {
running_listeners: Mutex::new(Vec::new()),
enable_exit_node,
proxy_forward_by_system,
no_tun,
p2p_only,
flags: ArcSwap::new(Arc::new(flags)),
feature_flags: AtomicCell::new(feature_flags),
quic_proxy_port: AtomicCell::new(None),
token_bucket_manager: TokenBucketManager::new(),
stats_manager: Arc::new(StatsManager::new()),
acl_filter: Arc::new(AclFilter::new()),
credential_manager,
trusted_keys: Arc::new(TrustedKeyMapManager::new()),
}
}
@@ -257,10 +386,26 @@ impl GlobalCtx {
}
}
pub fn is_ip_local_virtual_ip(&self, ip: &IpAddr) -> bool {
match ip {
IpAddr::V4(v4) => self.get_ipv4().map(|x| x.address() == *v4).unwrap_or(false),
IpAddr::V6(v6) => self.get_ipv6().map(|x| x.address() == *v6).unwrap_or(false),
}
}
pub fn get_network_identity(&self) -> NetworkIdentity {
self.config.get_network_identity()
}
pub fn get_secret_proof(&self, challenge: &[u8]) -> Option<Hmac<Sha256>> {
let network_secret = self.get_network_identity().network_secret?;
let key = network_secret.as_bytes();
let mut mac = Hmac::<Sha256>::new_from_slice(key).unwrap();
mac.update(b"easytier secret proof");
mac.update(challenge);
Some(mac)
}
pub fn get_network_name(&self) -> String {
self.get_network_identity().network_name
}
@@ -303,28 +448,25 @@ impl GlobalCtx {
}
}
pub fn is_port_in_running_listeners(&self, port: u16, is_udp: bool) -> bool {
let check_proto = |listener_proto: &str| {
let listener_is_udp = matches!(listener_proto, "udp" | "wg");
listener_is_udp == is_udp
};
self.running_listeners
.lock()
.unwrap()
.iter()
.any(|x| x.port() == Some(port) && check_proto(x.scheme()))
}
pub fn get_vpn_portal_cidr(&self) -> Option<cidr::Ipv4Cidr> {
self.config.get_vpn_portal_config().map(|x| x.client_cidr)
}
pub fn get_flags(&self) -> Flags {
self.config.get_flags()
self.flags.load().as_ref().clone()
}
pub fn set_flags(&self, flags: Flags) {
self.config.set_flags(flags);
self.config.set_flags(flags.clone());
self.feature_flags.store(Self::derive_feature_flags(
&flags,
Some(self.feature_flags.load()),
));
self.flags.store(Arc::new(flags));
}
pub fn flags_arc(&self) -> Arc<Flags> {
self.flags.load_full()
}
pub fn get_128_key(&self) -> [u8; 16] {
@@ -368,15 +510,15 @@ impl GlobalCtx {
}
pub fn enable_exit_node(&self) -> bool {
self.enable_exit_node
self.flags.load().enable_exit_node || cfg!(target_env = "ohos")
}
pub fn proxy_forward_by_system(&self) -> bool {
self.proxy_forward_by_system
self.flags.load().proxy_forward_by_system
}
pub fn no_tun(&self) -> bool {
self.no_tun
self.flags.load().no_tun
}
pub fn get_feature_flags(&self) -> PeerFeatureFlag {
@@ -387,15 +529,6 @@ impl GlobalCtx {
self.feature_flags.store(flags);
}
pub fn get_quic_proxy_port(&self) -> Option<u16> {
self.quic_proxy_port.load()
}
pub fn set_quic_proxy_port(&self, port: Option<u16>) {
self.acl_filter.set_quic_udp_port(port.unwrap_or(0));
self.quic_proxy_port.store(port);
}
pub fn token_bucket_manager(&self) -> &TokenBucketManager {
&self.token_bucket_manager
}
@@ -408,6 +541,51 @@ impl GlobalCtx {
&self.acl_filter
}
pub fn get_credential_manager(&self) -> &Arc<CredentialManager> {
&self.credential_manager
}
/// Check if a public key is trusted using two-level lookup:
/// 1. OSPF propagated trusted_keys (lock-free)
/// 2. Local credential_manager
pub fn is_pubkey_trusted(&self, pubkey: &[u8], network_name: &str) -> bool {
// First level: check OSPF propagated keys (lock-free)
if self.trusted_keys.verify_trusted_key(pubkey, network_name) {
return true;
}
// Second level: check local credential_manager if in the same network
if network_name == self.get_network_name() {
return self.credential_manager.is_pubkey_trusted(pubkey);
}
false
}
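The two-level lookup described in the doc comment (OSPF-propagated keys first, then the local credential store, but only for the node's own network) can be sketched with plain `HashMap`s. The names below are hypothetical std-only stand-ins, not the crate's lock-free types:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical std-only sketch of the two-level trust check.
struct TrustStore {
    /// Level 1: per-network keys learned via route propagation.
    ospf_keys: HashMap<String, HashSet<Vec<u8>>>,
    /// Level 2: locally configured credentials for our own network.
    local_keys: HashSet<Vec<u8>>,
    local_network: String,
}

impl TrustStore {
    fn is_pubkey_trusted(&self, pubkey: &[u8], network: &str) -> bool {
        // First level: propagated keys for that network.
        if self
            .ospf_keys
            .get(network)
            .map_or(false, |keys| keys.contains(pubkey))
        {
            return true;
        }
        // Second level: local store, only when the network is our own.
        network == self.local_network && self.local_keys.contains(pubkey)
    }
}

fn main() {
    let store = TrustStore {
        ospf_keys: HashMap::from([("net1".to_string(), HashSet::from([vec![1u8; 32]]))]),
        local_keys: HashSet::from([vec![2u8; 32]]),
        local_network: "net1".to_string(),
    };
    assert!(store.is_pubkey_trusted(&[1u8; 32], "net1")); // level-1 hit
    assert!(store.is_pubkey_trusted(&[2u8; 32], "net1")); // level-2 hit
    assert!(!store.is_pubkey_trusted(&[2u8; 32], "net2")); // local keys don't apply to other networks
}
```

The ordering matters: propagated keys are checked first so foreign networks never fall through to the local credential store.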
pub fn is_pubkey_trusted_with_source(
&self,
pubkey: &[u8],
network_name: &str,
source: TrustedKeySource,
) -> bool {
self.trusted_keys
.verify_trusted_key_with_source(pubkey, network_name, Some(source))
}
/// Atomically replace all OSPF trusted keys with a new set
/// Called by OSPF route layer after each route update
pub fn update_trusted_keys(&self, keys: TrustedKeyMap, network_name: &str) {
self.trusted_keys.update_trusted_keys(network_name, keys);
}
pub fn remove_trusted_keys(&self, network_name: &str) {
self.trusted_keys.remove_trusted_keys(network_name);
}
pub fn list_trusted_keys(&self, network_name: &str) -> Vec<(Vec<u8>, TrustedKeyMetadata)> {
self.trusted_keys.list_trusted_keys(network_name)
}
pub fn get_acl_groups(&self, peer_id: PeerId) -> Vec<PeerGroupInfo> {
use std::collections::HashSet;
self.config
@@ -440,12 +618,49 @@ impl GlobalCtx {
}
pub fn p2p_only(&self) -> bool {
self.p2p_only
self.flags.load().p2p_only
}
pub fn latency_first(&self) -> bool {
// NOTICE: p2p_only conflicts with latency_first
self.config.get_flags().latency_first && !self.p2p_only
let flags = self.flags.load();
flags.latency_first && !flags.p2p_only
}
fn is_port_in_running_listeners(&self, port: u16, is_udp: bool) -> bool {
self.running_listeners
.lock()
.unwrap()
.iter()
.any(|x| x.port() == Some(port) && matches_protocol!(x, Protocol::UDP) == is_udp)
}
#[tracing::instrument(ret, skip(self))]
pub fn should_deny_proxy(&self, dst_addr: &SocketAddr, is_udp: bool) -> bool {
let _g = self.net_ns.guard();
let ip = dst_addr.ip();
// first check whether the ip is a virtual ip,
// then try to bind this ip; if that succeeds it is a local ip
let dst_is_local_virtual_ip = self.is_ip_local_virtual_ip(&ip);
// this is an expensive operation and should be called sparingly:
// 1. tcp/kcp/quic call this only after the proxy conn is established
// 2. udp caches the result in its nat entry
let dst_is_local_phy_ip = std::net::UdpSocket::bind(format!("{}:0", ip)).is_ok();
tracing::trace!(
"check should_deny_proxy: dst_addr={}, dst_is_local_virtual_ip={}, dst_is_local_phy_ip={}, is_udp={}",
dst_addr,
dst_is_local_virtual_ip,
dst_is_local_phy_ip,
is_udp
);
if dst_is_local_virtual_ip || dst_is_local_phy_ip {
// if is local ip, make sure the port is not one of the listening ports
self.is_port_in_running_listeners(dst_addr.port(), is_udp)
} else {
false
}
}
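The "try to bind" probe above relies on the OS rejecting binds to addresses not assigned to any local interface. A minimal std-only demonstration (assuming a loopback interface exists, as on virtually every host):

```rust
use std::net::UdpSocket;

/// Returns true when `ip` is assigned to a local interface:
/// binding port 0 on an address succeeds only for local addresses.
fn is_local_ip(ip: &str) -> bool {
    UdpSocket::bind(format!("{}:0", ip)).is_ok()
}

fn main() {
    // Loopback is always local.
    assert!(is_local_ip("127.0.0.1"));
    // A remote address normally fails to bind (EADDRNOTAVAIL).
    println!("8.8.8.8 local? {}", is_local_ip("8.8.8.8"));
}
```

Binding allocates an ephemeral port, which is why the comment above warns that the check is expensive and why udp callers cache the result.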
}
@@ -488,6 +703,68 @@ pub mod tests {
);
}
#[tokio::test]
async fn trusted_key_source_lookup_is_precise() {
let config = TomlConfigLoader::default();
let global_ctx = GlobalCtx::new(config);
let network_name = "net1";
let pubkey = vec![1; 32];
global_ctx.update_trusted_keys(
HashMap::from([(
pubkey.clone(),
TrustedKeyMetadata {
source: TrustedKeySource::OspfCredential,
expiry_unix: None,
},
)]),
network_name,
);
assert!(global_ctx.is_pubkey_trusted(&pubkey, network_name));
assert!(!global_ctx.is_pubkey_trusted_with_source(
&pubkey,
network_name,
TrustedKeySource::OspfNode,
));
assert!(global_ctx.is_pubkey_trusted_with_source(
&pubkey,
network_name,
TrustedKeySource::OspfCredential,
));
}
#[tokio::test]
async fn set_flags_keeps_derived_feature_flags_in_sync() {
let config = TomlConfigLoader::default();
let global_ctx = GlobalCtx::new(config);
let mut feature_flags = global_ctx.get_feature_flags();
feature_flags.avoid_relay_data = true;
feature_flags.is_public_server = true;
global_ctx.set_feature_flags(feature_flags);
let mut flags = global_ctx.get_flags().clone();
flags.disable_kcp_input = true;
flags.disable_relay_kcp = true;
flags.disable_quic_input = true;
flags.disable_relay_quic = true;
flags.need_p2p = true;
flags.disable_p2p = true;
global_ctx.set_flags(flags);
let feature_flags = global_ctx.get_feature_flags();
assert!(!feature_flags.kcp_input);
assert!(feature_flags.no_relay_kcp);
assert!(!feature_flags.quic_input);
assert!(feature_flags.no_relay_quic);
assert!(feature_flags.need_p2p);
assert!(feature_flags.disable_p2p);
assert!(feature_flags.support_conn_list_sync);
assert!(feature_flags.avoid_relay_data);
assert!(feature_flags.is_public_server);
}
pub fn get_mock_global_ctx_with_network(
network_identy: Option<NetworkIdentity>,
) -> ArcGlobalCtx {
+9 -3
@@ -1,4 +1,7 @@
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
#[cfg(any(
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "freebsd"
))]
mod darwin;
#[cfg(target_os = "linux")]
mod netlink;
@@ -144,14 +147,17 @@ impl IfConfiguerTrait for DummyIfConfiger {}
#[cfg(target_os = "linux")]
pub type IfConfiger = netlink::NetlinkIfConfiger;
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
#[cfg(any(
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "freebsd"
))]
pub type IfConfiger = darwin::MacIfConfiger;
#[cfg(target_os = "windows")]
pub type IfConfiger = windows::WindowsIfConfiger;
#[cfg(not(any(
target_os = "macos",
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "linux",
target_os = "windows",
target_os = "freebsd",
+241
@@ -0,0 +1,241 @@
use std::io::IsTerminal as _;
use crate::common::config::LoggingConfigLoader;
use crate::common::get_logger_timer_rfc3339;
use crate::common::tracing_rolling_appender::{FileAppenderWrapper, RollingFileAppenderBase};
use crate::rpc_service::logger::{CURRENT_LOG_LEVEL, LOGGER_LEVEL_SENDER};
use anyhow::Context;
use paste::paste;
use regex::Regex;
use tracing::level_filters::LevelFilter;
use tracing::{Level, Metadata};
use tracing_subscriber::filter::{filter_fn, FilterExt};
use tracing_subscriber::fmt::layer;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::Registry;
use tracing_subscriber::{EnvFilter, Layer};
macro_rules! __log__ {
(const $var:ident = $target:expr) => {
const $var: &'static str = $target;
__log__!(@impl $target, $);
};
(@impl $target:expr, $_:tt) => {
__log__!(@impl $_, $target, error, warn, info, debug, trace);
};
(@impl $_:tt, $target:expr, $($lvl:ident),+) => {
paste! {
$(
macro_rules! [< __ $lvl __ >] {
(category: $cat:expr, $_ ($arg:tt)+) => {
tracing::$lvl!(target: concat!($target, "::", $cat), $_ ($arg)+)
};
($_ ($arg:tt)+) => {
tracing::$lvl!(target: $target, $_ ($arg)+)
};
}
#[allow(unused_imports)]
pub(crate) use [< __ $lvl __ >] as $lvl;
)+
}
};
}
__log__!(const LOG_TARGET = "CORE");
fn parse_env_filter(default_level: LevelFilter) -> Result<EnvFilter, anyhow::Error> {
let mut filter = EnvFilter::builder()
.with_default_directive(default_level.into())
.from_env()
.with_context(|| "failed to create env filter")?;
let pattern = Regex::new(&format!(r"(^|,){}\s*=", regex::escape(LOG_TARGET)))?;
if !pattern.is_match(&filter.to_string()) {
filter = filter.add_directive(format!("{LOG_TARGET}=info").parse()?);
}
Ok(filter)
}
fn is_log(meta: &Metadata) -> bool {
meta.target() == LOG_TARGET || meta.target().starts_with(&format!("{LOG_TARGET}::"))
}
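`is_log` routes events by target prefix: anything under `CORE` or `CORE::<category>` is "log" output, everything else is ordinary tracing. A std-only sketch of that routing, plus a string-based analogue of the directive-presence check that `parse_env_filter` performs with the regex `(^|,)CORE\s*=` (the analogue is slightly looser about leading whitespace, and is shown only to illustrate the intent):

```rust
const LOG_TARGET: &str = "CORE";

/// True for events emitted under the log target or any of its categories.
fn is_log(target: &str) -> bool {
    target == LOG_TARGET || target.starts_with(&format!("{LOG_TARGET}::"))
}

/// Analogue of the regex check: does the comma-separated filter string
/// already contain a directive for LOG_TARGET (e.g. "CORE=info")?
fn has_target_directive(filter: &str) -> bool {
    filter.split(',').any(|d| {
        let d = d.trim_start();
        d.strip_prefix(LOG_TARGET)
            .map_or(false, |rest| rest.trim_start().starts_with('='))
    })
}

fn main() {
    assert!(is_log("CORE"));
    assert!(is_log("CORE::conn"));
    assert!(!is_log("COREX")); // prefix must be followed by "::"
    assert!(has_target_directive("debug,CORE=info"));
    assert!(has_target_directive("CORE = warn"));
    assert!(!has_target_directive("debug,other=info"));
}
```

The exact-match-or-`::`-prefix rule keeps an unrelated target like `COREX` out of the log pipeline, which is why `starts_with` is applied to `CORE::` rather than `CORE`.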
pub type NewFilterSender = std::sync::mpsc::Sender<String>;
macro_rules! tracing_layer {
($layer:expr) => {
$layer.with_filter(filter_fn(is_log).not()).boxed()
};
}
macro_rules! log_layer {
($layer:expr) => {
$layer
.with_file(false)
.with_line_number(false)
.with_ansi(true)
.with_filter(filter_fn(is_log))
.boxed()
};
}
pub fn init(
config: impl LoggingConfigLoader,
need_reload: bool,
) -> Result<Option<NewFilterSender>, anyhow::Error> {
let mut layers = Vec::new();
let file_config = config.get_file_logger_config();
let file_level = file_config
.level
.map(|s| s.parse().unwrap())
.unwrap_or(LevelFilter::OFF);
let mut ret_sender: Option<NewFilterSender> = None;
// logger to a rolling file
if file_level != LevelFilter::OFF || need_reload {
let dir = file_config.dir.as_deref().unwrap_or(".");
let file = file_config.file.as_deref().unwrap_or("easytier.log");
let path = std::path::Path::new(dir).join(file);
let path_str = path.to_string_lossy().into_owned();
let builder = RollingFileAppenderBase::builder();
let file_appender = builder
.filename(path_str)
.condition_daily()
.max_filecount(file_config.count.unwrap_or(10))
.condition_max_file_size(file_config.size_mb.unwrap_or(100) * 1024 * 1024)
.build()
.unwrap();
// Create a simple wrapper that implements MakeWriter
let wrapper = FileAppenderWrapper::new(file_appender);
let (file_filter, file_filter_reloader) =
tracing_subscriber::reload::Layer::<_, Registry>::new(parse_env_filter(file_level)?);
let layer = |wrapper| {
layer()
.with_ansi(false)
.with_writer(wrapper)
.with_timer(get_logger_timer_rfc3339())
};
layers.push(
vec![
tracing_layer!(layer(wrapper.clone())),
log_layer!(layer(wrapper.clone())),
]
.with_filter(file_filter)
.boxed(),
);
if need_reload {
let (sender, recver) = std::sync::mpsc::channel();
ret_sender = Some(sender.clone());
// Initialize global logger state
let _ = LOGGER_LEVEL_SENDER.set(std::sync::Mutex::new(sender));
let _ = CURRENT_LOG_LEVEL.set(std::sync::Mutex::new(file_level.to_string()));
std::thread::spawn(move || {
while let Ok(lf) = recver.recv() {
let parsed_level = match lf.parse::<LevelFilter>() {
Ok(level) => level,
Err(e) => {
error!("Failed to parse new log level {:?}: {}", lf, e);
continue;
}
};
let mut new_filter = match EnvFilter::builder()
.with_default_directive(parsed_level.into())
.from_env()
.with_context(|| "failed to create file filter")
{
Ok(filter) => Some(filter),
Err(e) => {
error!("Failed to build new log filter for {:?}: {:?}", lf, e);
continue;
}
};
match file_filter_reloader.modify(|f| {
*f = new_filter
.take()
.expect("log filter reloader only applies one filter per reload");
}) {
Ok(()) => {
info!("Log filter reload succeeded, new filter level: {:?}", lf);
}
Err(e) => {
error!("Failed to reload log filter: {:?}", e);
}
}
}
info!("Stop log filter reloader");
});
}
}
// logger to console
let console_config = config.get_console_logger_config();
let console_level = console_config
.level
.map(|s| s.parse().unwrap())
.unwrap_or(LevelFilter::OFF);
let (console_filter, _) =
tracing_subscriber::reload::Layer::new(parse_env_filter(console_level)?);
let layer = || {
layer()
.compact()
.with_ansi(std::io::stderr().is_terminal())
.with_timer(get_logger_timer_rfc3339())
.with_writer(std::io::stderr)
};
layers.push(
vec![
tracing_layer!(layer()),
log_layer!(layer()).with_filter(LevelFilter::WARN).boxed(),
log_layer!(layer().with_writer(std::io::stdout))
.with_filter(filter_fn(|metadata| *metadata.level() > Level::WARN))
.boxed(),
]
.with_filter(console_filter)
.boxed(),
);
#[cfg(feature = "tracing")]
{
layers.push(console_subscriber::ConsoleLayer::builder().spawn().boxed());
}
Registry::default().with(layers).init();
Ok(ret_sender)
}
#[cfg(test)]
mod tests {
use super::*;
use crate::common::config::{self};
async fn test_logger_reload() {
println!("current working dir: {:?}", std::env::current_dir());
let config = config::LoggingConfigBuilder::default().build().unwrap();
let s = init(&config, true).unwrap();
tracing::debug!("test not display debug");
s.unwrap().send(LevelFilter::DEBUG.to_string()).unwrap();
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
tracing::debug!("test display debug");
}
}
+15 -2
@@ -21,8 +21,10 @@ pub mod error;
pub mod global_ctx;
pub mod idn;
pub mod ifcfg;
pub mod log;
pub mod netns;
pub mod network;
pub mod os_info;
pub mod scoped_task;
pub mod stats_manager;
pub mod stun;
@@ -101,6 +103,9 @@ pub fn set_default_machine_id(mid: Option<String>) {
pub fn get_machine_id() -> uuid::Uuid {
if let Some(default_mid) = use_global_var!(MACHINE_UID) {
if let Ok(mid) = uuid::Uuid::parse_str(default_mid.trim()) {
return mid;
}
let mut b = [0u8; 16];
crate::tunnel::generate_digest_from_str("", &default_mid, &mut b);
return uuid::Uuid::from_bytes(b);
@@ -120,7 +125,7 @@ pub fn get_machine_id() -> uuid::Uuid {
#[cfg(any(
target_os = "linux",
target_os = "macos",
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "windows",
target_os = "freebsd"
))]
@@ -137,7 +142,7 @@ pub fn get_machine_id() -> uuid::Uuid {
#[cfg(not(any(
target_os = "linux",
target_os = "macos",
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "windows",
target_os = "freebsd"
)))]
@@ -206,4 +211,12 @@ mod tests {
assert_eq!(weak_js.weak_count(), 0);
assert_eq!(weak_js.strong_count(), 0);
}
#[test]
fn test_get_machine_id_uses_uuid_seed_verbatim() {
let raw = "33333333-3333-3333-3333-333333333333".to_string();
set_default_machine_id(Some(raw.clone()));
assert_eq!(get_machine_id(), uuid::Uuid::parse_str(&raw).unwrap());
set_default_machine_id(None);
}
}
+1 -1
@@ -74,7 +74,7 @@ impl NetNSGuard {
}
}
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct NetNS {
name: Option<String>,
}
+102 -5
@@ -1,6 +1,12 @@
use std::{net::IpAddr, ops::Deref, sync::Arc};
#[cfg(target_os = "windows")]
use network_interface::{
Addr as SystemAddr, NetworkInterface as SystemNetworkInterface, NetworkInterfaceConfig,
};
use pnet::datalink::NetworkInterface;
#[cfg(target_os = "windows")]
use pnet::{ipnetwork::IpNetwork, util::MacAddr};
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
@@ -16,7 +22,12 @@ struct InterfaceFilter {
iface: NetworkInterface,
}
#[cfg(any(target_os = "android", target_env = "ohos"))]
#[cfg(any(
target_os = "android",
target_os = "ios",
all(target_os = "macos", feature = "macos-ne"),
target_env = "ohos"
))]
impl InterfaceFilter {
async fn filter_iface(&self) -> bool {
true
@@ -60,13 +71,16 @@ impl InterfaceFilter {
}
// Cache for networksetup command output
#[cfg(target_os = "macos")]
#[cfg(all(target_os = "macos", not(feature = "macos-ne")))]
static NETWORKSETUP_CACHE: std::sync::OnceLock<Mutex<(String, std::time::Instant)>> =
std::sync::OnceLock::new();
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
#[cfg(any(
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "freebsd"
))]
impl InterfaceFilter {
#[cfg(target_os = "macos")]
#[cfg(all(target_os = "macos", not(feature = "macos-ne")))]
async fn get_networksetup_output() -> String {
use anyhow::Context;
use std::time::{Duration, Instant};
@@ -101,7 +115,7 @@ impl InterfaceFilter {
stdout
}
#[cfg(target_os = "macos")]
#[cfg(all(target_os = "macos", not(feature = "macos-ne")))]
async fn is_interface_physical(&self) -> bool {
let interface_name = &self.iface.name;
let stdout = Self::get_networksetup_output().await;
@@ -256,6 +270,9 @@ impl IPCollector {
pub async fn collect_interfaces(net_ns: NetNS, filter: bool) -> Vec<NetworkInterface> {
let _g = net_ns.guard();
#[cfg(target_os = "windows")]
let ifaces = Self::collect_interfaces_windows();
#[cfg(not(target_os = "windows"))]
let ifaces = pnet::datalink::interfaces();
let mut ret = vec![];
for iface in ifaces {
@@ -273,6 +290,86 @@ impl IPCollector {
ret
}
#[cfg(target_os = "windows")]
fn collect_interfaces_windows() -> Vec<NetworkInterface> {
match SystemNetworkInterface::show() {
Ok(ifaces) => ifaces
.into_iter()
.map(Self::convert_windows_interface)
.collect(),
Err(e) => {
tracing::warn!(
?e,
"failed to enumerate interfaces via network-interface, falling back to pnet"
);
match std::panic::catch_unwind(pnet::datalink::interfaces) {
Ok(ifaces) => ifaces,
Err(_) => {
tracing::error!(
"failed to enumerate interfaces via both network-interface and pnet"
);
Vec::new()
}
}
}
}
}
#[cfg(target_os = "windows")]
fn convert_windows_interface(iface: SystemNetworkInterface) -> NetworkInterface {
let mac = iface.mac_addr.as_deref().and_then(|mac| {
mac.parse::<MacAddr>()
.map_err(|e| {
tracing::debug!(iface = %iface.name, mac, ?e, "failed to parse interface mac")
})
.ok()
});
let ips = iface
.addr
.into_iter()
.filter_map(Self::convert_windows_interface_addr)
.collect();
NetworkInterface {
name: iface.name,
description: String::new(),
index: iface.index,
mac,
ips,
// pnet does not populate Windows flags either, so keep the existing semantics.
flags: 0,
}
}
#[cfg(target_os = "windows")]
fn convert_windows_interface_addr(addr: SystemAddr) -> Option<IpNetwork> {
match addr {
SystemAddr::V4(addr) => {
let netmask = addr
.netmask
.map(IpAddr::V4)
.unwrap_or(IpAddr::V4(std::net::Ipv4Addr::new(255, 255, 255, 255)));
IpNetwork::with_netmask(IpAddr::V4(addr.ip), netmask)
.map_err(|e| {
tracing::debug!(ip = %addr.ip, ?addr.netmask, ?e, "failed to convert ipv4")
})
.ok()
}
SystemAddr::V6(addr) => {
let netmask = addr
.netmask
.map(IpAddr::V6)
.unwrap_or(IpAddr::V6(std::net::Ipv6Addr::from(u128::MAX)));
IpNetwork::with_netmask(IpAddr::V6(addr.ip), netmask)
.map_err(|e| {
tracing::debug!(ip = %addr.ip, ?addr.netmask, ?e, "failed to convert ipv6")
})
.ok()
}
}
}
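When an address arrives without a netmask, the conversion above defaults to a host route (255.255.255.255 for v4, the all-ones mask for v6, i.e. /32 and /128). A std-only sketch of the netmask-to-prefix-length conversion that a netmask-taking constructor like `IpNetwork::with_netmask` has to perform internally (assumed behavior, shown only to illustrate the defaults; `prefix_len_v4` is a hypothetical helper):

```rust
use std::net::Ipv4Addr;

/// Prefix length of a contiguous IPv4 netmask; None for
/// non-contiguous masks like 255.0.255.0.
fn prefix_len_v4(mask: Ipv4Addr) -> Option<u8> {
    let bits = u32::from(mask);
    let ones = bits.count_ones();
    // Contiguous iff the mask is `ones` one-bits followed by zeros.
    let contiguous = bits == u32::MAX.checked_shl(32 - ones).unwrap_or(0);
    contiguous.then(|| ones as u8)
}

fn main() {
    // The missing-netmask default above maps to a /32 host route.
    assert_eq!(prefix_len_v4(Ipv4Addr::new(255, 255, 255, 255)), Some(32));
    assert_eq!(prefix_len_v4(Ipv4Addr::new(255, 255, 255, 0)), Some(24));
    // Non-contiguous masks are rejected, matching why the code
    // logs and drops addresses that fail conversion.
    assert_eq!(prefix_len_v4(Ipv4Addr::new(255, 0, 255, 0)), None);
}
```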
#[tracing::instrument(skip(net_ns))]
async fn do_collect_local_ip_addrs(net_ns: NetNS) -> GetIpListResponse {
let mut ret = GetIpListResponse::default();
+144
@@ -0,0 +1,144 @@
use std::{collections::HashMap, fs, process::Command};
use crate::proto::web::DeviceOsInfo;
pub fn collect_device_os_info() -> DeviceOsInfo {
let os_type = normalize_os_type(std::env::consts::OS);
let (version, distribution) = detect_os_version_and_distribution(&os_type);
DeviceOsInfo {
os_type,
version,
distribution,
}
}
fn normalize_os_type(raw: &str) -> String {
match raw {
"macos" => "macos".to_string(),
"windows" => "windows".to_string(),
"linux" => "linux".to_string(),
"android" => "android".to_string(),
"ios" => "ios".to_string(),
"freebsd" => "freebsd".to_string(),
other => other.to_string(),
}
}
fn detect_os_version_and_distribution(os_type: &str) -> (String, String) {
match os_type {
"linux" | "android" => linux_version_and_distribution(os_type),
"macos" => (
first_non_empty([
command_output("sw_vers", &["-productVersion"]),
unix_kernel_release(),
]),
"macOS".to_string(),
),
"windows" => (
first_non_empty([windows_version(), None]),
"Windows".to_string(),
),
"freebsd" => (
first_non_empty([
command_output("freebsd-version", &[]),
unix_kernel_release(),
]),
"FreeBSD".to_string(),
),
other => (
unix_kernel_release().unwrap_or_else(|| "unknown".to_string()),
other.to_string(),
),
}
}
fn linux_version_and_distribution(os_type: &str) -> (String, String) {
let os_release = parse_os_release().unwrap_or_default();
let version = first_non_empty([
os_release.get("VERSION_ID").cloned(),
os_release.get("VERSION").cloned(),
unix_kernel_release(),
]);
let distribution = first_non_empty([
os_release.get("NAME").cloned(),
os_release.get("ID").cloned().map(title_case),
Some(if os_type == "android" {
"Android".to_string()
} else {
"Linux".to_string()
}),
]);
(version, distribution)
}
fn parse_os_release() -> Option<HashMap<String, String>> {
["/etc/os-release", "/usr/lib/os-release"]
.into_iter()
.find_map(|path| fs::read_to_string(path).ok())
.map(|content| {
content
.lines()
.filter_map(|line| {
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
return None;
}
let (key, value) = line.split_once('=')?;
Some((key.to_string(), trim_os_release_value(value)))
})
.collect()
})
}
fn trim_os_release_value(value: &str) -> String {
value
.trim()
.trim_matches('"')
.trim_matches('\'')
.to_string()
}
fn unix_kernel_release() -> Option<String> {
command_output("uname", &["-r"])
}
fn windows_version() -> Option<String> {
let output = command_output("cmd", &["/C", "ver"])?;
output
.split("Version")
.nth(1)
.map(str::trim)
.map(|part| part.trim_matches(&['[', ']'][..]).to_string())
.filter(|value| !value.is_empty())
}
fn command_output(program: &str, args: &[&str]) -> Option<String> {
let output = Command::new(program).args(args).output().ok()?;
if !output.status.success() {
return None;
}
let value = String::from_utf8(output.stdout).ok()?;
let value = value.trim();
if value.is_empty() {
None
} else {
Some(value.to_string())
}
}
fn first_non_empty<const N: usize>(values: [Option<String>; N]) -> String {
values
.into_iter()
.flatten()
.find(|value| !value.trim().is_empty())
.unwrap_or_else(|| "unknown".to_string())
}
fn title_case(value: String) -> String {
let mut chars = value.chars();
let Some(first) = chars.next() else {
return value;
};
first.to_uppercase().collect::<String>() + chars.as_str()
}
+118 -7
@@ -24,10 +24,22 @@ pub enum MetricName {
/// RPC errors
PeerRpcErrors,
/// Traffic bytes sent
/// Data-plane traffic bytes sent
TrafficBytesTx,
/// Traffic bytes received
/// Data-plane traffic bytes sent, grouped by destination instance
TrafficBytesTxByInstance,
/// Data-plane traffic bytes received
TrafficBytesRx,
/// Data-plane traffic bytes received, grouped by source instance
TrafficBytesRxByInstance,
/// Control-plane traffic bytes sent
TrafficControlBytesTx,
/// Control-plane traffic bytes sent, grouped by destination instance
TrafficControlBytesTxByInstance,
/// Control-plane traffic bytes received
TrafficControlBytesRx,
/// Control-plane traffic bytes received, grouped by source instance
TrafficControlBytesRxByInstance,
/// Traffic bytes forwarded
TrafficBytesForwarded,
/// Traffic bytes sent to self
@@ -41,10 +53,22 @@ pub enum MetricName {
/// Traffic bytes forwarded for foreign network, forward
TrafficBytesForeignForwardForwarded,
/// Traffic packets sent
/// Data-plane traffic packets sent
TrafficPacketsTx,
/// Traffic packets received
/// Data-plane traffic packets sent, grouped by destination instance
TrafficPacketsTxByInstance,
/// Data-plane traffic packets received
TrafficPacketsRx,
/// Data-plane traffic packets received, grouped by source instance
TrafficPacketsRxByInstance,
/// Control-plane traffic packets sent
TrafficControlPacketsTx,
/// Control-plane traffic packets sent, grouped by destination instance
TrafficControlPacketsTxByInstance,
/// Control-plane traffic packets received
TrafficControlPacketsRx,
/// Control-plane traffic packets received, grouped by source instance
TrafficControlPacketsRxByInstance,
/// Traffic packets forwarded
TrafficPacketsForwarded,
/// Traffic packets sent to self
@@ -81,7 +105,17 @@ impl fmt::Display for MetricName {
MetricName::PeerRpcErrors => write!(f, "peer_rpc_errors"),
MetricName::TrafficBytesTx => write!(f, "traffic_bytes_tx"),
MetricName::TrafficBytesTxByInstance => write!(f, "traffic_bytes_tx_by_instance"),
MetricName::TrafficBytesRx => write!(f, "traffic_bytes_rx"),
MetricName::TrafficBytesRxByInstance => write!(f, "traffic_bytes_rx_by_instance"),
MetricName::TrafficControlBytesTx => write!(f, "traffic_control_bytes_tx"),
MetricName::TrafficControlBytesTxByInstance => {
write!(f, "traffic_control_bytes_tx_by_instance")
}
MetricName::TrafficControlBytesRx => write!(f, "traffic_control_bytes_rx"),
MetricName::TrafficControlBytesRxByInstance => {
write!(f, "traffic_control_bytes_rx_by_instance")
}
MetricName::TrafficBytesForwarded => write!(f, "traffic_bytes_forwarded"),
MetricName::TrafficBytesSelfTx => write!(f, "traffic_bytes_self_tx"),
MetricName::TrafficBytesSelfRx => write!(f, "traffic_bytes_self_rx"),
@@ -96,7 +130,21 @@ impl fmt::Display for MetricName {
}
MetricName::TrafficPacketsTx => write!(f, "traffic_packets_tx"),
MetricName::TrafficPacketsTxByInstance => {
write!(f, "traffic_packets_tx_by_instance")
}
MetricName::TrafficPacketsRx => write!(f, "traffic_packets_rx"),
MetricName::TrafficPacketsRxByInstance => {
write!(f, "traffic_packets_rx_by_instance")
}
MetricName::TrafficControlPacketsTx => write!(f, "traffic_control_packets_tx"),
MetricName::TrafficControlPacketsTxByInstance => {
write!(f, "traffic_control_packets_tx_by_instance")
}
MetricName::TrafficControlPacketsRx => write!(f, "traffic_control_packets_rx"),
MetricName::TrafficControlPacketsRxByInstance => {
write!(f, "traffic_control_packets_rx_by_instance")
}
MetricName::TrafficPacketsForwarded => write!(f, "traffic_packets_forwarded"),
MetricName::TrafficPacketsSelfTx => write!(f, "traffic_packets_self_tx"),
MetricName::TrafficPacketsSelfRx => write!(f, "traffic_packets_self_rx"),
@@ -125,6 +173,10 @@ impl fmt::Display for MetricName {
pub enum LabelType {
/// Network Name
NetworkName(String),
/// Destination instance ID
ToInstanceId(String),
/// Source instance ID
FromInstanceId(String),
/// Source peer ID
SrcPeerId(u32),
/// Destination peer ID
@@ -153,6 +205,8 @@ impl fmt::Display for LabelType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
LabelType::NetworkName(name) => write!(f, "network_name={}", name),
LabelType::ToInstanceId(id) => write!(f, "to_instance_id={}", id),
LabelType::FromInstanceId(id) => write!(f, "from_instance_id={}", id),
LabelType::SrcPeerId(id) => write!(f, "src_peer_id={}", id),
LabelType::DstPeerId(id) => write!(f, "dst_peer_id={}", id),
LabelType::ServiceName(name) => write!(f, "service_name={}", name),
@@ -172,6 +226,8 @@ impl LabelType {
pub fn key(&self) -> &'static str {
match self {
LabelType::NetworkName(_) => "network_name",
LabelType::ToInstanceId(_) => "to_instance_id",
LabelType::FromInstanceId(_) => "from_instance_id",
LabelType::SrcPeerId(_) => "src_peer_id",
LabelType::DstPeerId(_) => "dst_peer_id",
LabelType::ServiceName(_) => "service_name",
@@ -189,6 +245,8 @@ impl LabelType {
pub fn value(&self) -> String {
match self {
LabelType::NetworkName(name) => name.clone(),
LabelType::ToInstanceId(id) => id.clone(),
LabelType::FromInstanceId(id) => id.clone(),
LabelType::SrcPeerId(id) => id.to_string(),
LabelType::DstPeerId(id) => id.to_string(),
LabelType::ServiceName(name) => name.clone(),
@@ -523,9 +581,9 @@ impl StatsManager {
break;
};
// Remove entries that haven't been updated for 3 minutes
counters.retain(|_, metric_data: &mut Arc<MetricData>| unsafe {
metric_data.get_last_updated() > cutoff_time
counters.retain(|_, metric_data: &mut Arc<MetricData>| {
Arc::strong_count(metric_data) > 1
|| unsafe { metric_data.get_last_updated() > cutoff_time }
});
counters.shrink_to_fit();
}
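The retention change above keeps any metric whose `Arc` is still held by a caller (`strong_count > 1`), even past the idle cutoff. A minimal std-only sketch of that rule, with a plain `HashMap` and `u64` values standing in for the real counters map and `MetricData`:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Retention rule from the patch: an entry survives if some caller still
// holds a clone of its Arc (strong_count > 1), regardless of staleness.
fn prune(counters: &mut HashMap<&'static str, Arc<u64>>) {
    counters.retain(|_, data| Arc::strong_count(data) > 1);
    counters.shrink_to_fit();
}

fn main() {
    let mut counters: HashMap<&'static str, Arc<u64>> = HashMap::new();
    counters.insert("live", Arc::new(1));
    counters.insert("stale", Arc::new(2));

    // A caller still holds the "live" handle, so only "stale" is pruned.
    let handle = counters.get("live").cloned().unwrap();
    prune(&mut counters);
    assert_eq!(counters.len(), 1);
    assert!(counters.contains_key("live"));

    // Once the handle is dropped, the next sweep removes the entry too.
    drop(handle);
    prune(&mut counters);
    assert!(counters.is_empty());
}
```

The real code additionally checks `get_last_updated() > cutoff_time`; the `Arc::strong_count` test is what prevents an actively-held but momentarily idle counter from being evicted.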
@@ -677,6 +735,20 @@ mod tests {
.with_label("method", "ping");
assert_eq!(labels.to_key(), "method=ping,peer_id=peer1");
let instance_labels = LabelSet::new()
.with_label_type(LabelType::NetworkName("default".to_string()))
.with_label_type(LabelType::ToInstanceId(
"87ede5a2-9c3d-492d-9bbe-989b9d07e742".to_string(),
))
.with_label_type(LabelType::FromInstanceId(
"9b7d4368-b688-4897-a1f4-b6caaed9e8a6".to_string(),
));
assert_eq!(
instance_labels.to_key(),
"from_instance_id=9b7d4368-b688-4897-a1f4-b6caaed9e8a6,network_name=default,to_instance_id=87ede5a2-9c3d-492d-9bbe-989b9d07e742"
);
}
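The assertion above depends on `to_key()` emitting labels in a deterministic, alphabetically sorted order regardless of insertion order. A hypothetical sketch of how such a key can be built with a `BTreeMap` (the real `LabelSet` internals are not shown in this diff):

```rust
use std::collections::BTreeMap;

// BTreeMap iteration is ordered by key, so the emitted label key is
// deterministic no matter which order labels were added in.
fn to_key(labels: &BTreeMap<&str, &str>) -> String {
    labels
        .iter()
        .map(|(k, v)| format!("{}={}", k, v))
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    let mut labels = BTreeMap::new();
    // Inserted out of order on purpose; the key still comes out sorted.
    labels.insert("to_instance_id", "87ede5a2");
    labels.insert("network_name", "default");
    labels.insert("from_instance_id", "9b7d4368");
    assert_eq!(
        to_key(&labels),
        "from_instance_id=9b7d4368,network_name=default,to_instance_id=87ede5a2"
    );
}
```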
#[tokio::test]
@@ -745,12 +817,24 @@ mod tests {
let counter2 = stats.get_counter(MetricName::PeerRpcClientTx, labels);
counter2.set(50);
let traffic_labels = LabelSet::new()
.with_label_type(LabelType::NetworkName("default".to_string()))
.with_label_type(LabelType::ToInstanceId(
"87ede5a2-9c3d-492d-9bbe-989b9d07e742".to_string(),
));
let counter3 = stats.get_counter(MetricName::TrafficBytesTxByInstance, traffic_labels);
counter3.set(25);
let prometheus_output = stats.export_prometheus();
assert!(prometheus_output.contains("# TYPE peer_rpc_client_tx counter"));
assert!(prometheus_output.contains("peer_rpc_client_tx{status=\"success\"} 50"));
assert!(prometheus_output.contains("# TYPE traffic_bytes_tx counter"));
assert!(prometheus_output.contains("traffic_bytes_tx 100"));
assert!(prometheus_output.contains("# TYPE traffic_bytes_tx_by_instance counter"));
assert!(prometheus_output.contains(
"traffic_bytes_tx_by_instance{network_name=\"default\",to_instance_id=\"87ede5a2-9c3d-492d-9bbe-989b9d07e742\"} 25"
));
}
#[tokio::test]
@@ -816,6 +900,33 @@ mod tests {
assert_eq!(counter2.get(), 25);
}
#[tokio::test]
async fn test_cleanup_keeps_metrics_with_live_handles() {
let stats = StatsManager::new();
let counter = stats.get_simple_counter(MetricName::TrafficBytesForwarded);
counter.set(1);
let cutoff_time = Instant::now().checked_add(Duration::from_secs(1)).unwrap();
stats
.counters
.retain(|_, metric_data: &mut Arc<MetricData>| {
Arc::strong_count(metric_data) > 1
|| unsafe { metric_data.get_last_updated() > cutoff_time }
});
assert_eq!(stats.metric_count(), 1);
assert_eq!(stats.get_all_metrics().len(), 1);
drop(counter);
stats
.counters
.retain(|_, metric_data: &mut Arc<MetricData>| {
Arc::strong_count(metric_data) > 1
|| unsafe { metric_data.get_last_updated() > cutoff_time }
});
assert_eq!(stats.metric_count(), 0);
}
#[tokio::test]
async fn test_stats_rpc_data_structures() {
// Test GetStatsRequest
@@ -25,6 +25,25 @@ use crate::common::error::Error;
use super::dns::resolve_txt_record;
use super::stun_codec_ext::*;
const DEFAULT_UDP_STUN_SERVERS: &[&str] = &[
"txt:stun.easytier.cn",
"stun.miwifi.com",
"stun.chat.bilibili.com",
"stun.hitv.com",
];
const DEFAULT_TCP_STUN_SERVERS: &[&str] = &[
"stun.hot-chilli.net",
"stun.fitauto.ru",
"fwa.lifesizecloud.com",
"global.turn.twilio.com",
"turn.cloudflare.com",
"stun.voip.blackberry.com",
"stun.radiojar.com",
];
const DEFAULT_UDP_V6_STUN_SERVERS: &[&str] = &["txt:stun-v6.easytier.cn"];
struct HostResolverIter {
hostnames: Vec<String>,
ips: Vec<SocketAddr>,
@@ -484,18 +503,14 @@ impl StunNatTypeDetectResult {
if self.public_ips().len() != 1
|| self.usable_stun_resp_count() <= 1
|| self.max_port() - self.min_port() > 15
|| self.extra_bind_test.is_none()
|| self
.extra_bind_test
.as_ref()
.unwrap()
.mapped_socket_addr
.is_none()
{
NatType::Symmetric
} else {
let extra_bind_test = self.extra_bind_test.as_ref().unwrap();
let extra_port = extra_bind_test.mapped_socket_addr.unwrap().port();
} else if let Some(extra_bind_mapped) = self
.extra_bind_test
.as_ref()
.and_then(|extra| extra.mapped_socket_addr)
{
let extra_port = extra_bind_mapped.port();
let max_port_diff = extra_port.saturating_sub(self.max_port());
let min_port_diff = self.min_port().saturating_sub(extra_port);
@@ -506,6 +521,8 @@ impl StunNatTypeDetectResult {
} else {
NatType::Symmetric
}
} else {
NatType::Symmetric
}
} else {
NatType::Unknown
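The refactor above replaces the `is_none()`/`unwrap()` pair with a single `Option::and_then` chain and uses `saturating_sub` so the `u16` port arithmetic cannot underflow. A self-contained sketch of the same shape (the struct and the 15-port threshold are illustrative; the full classification condition is outside this hunk):

```rust
// Simplified stand-in for StunNatTypeDetectResult's extra bind test:
// only the Option-chaining and saturating port math are shown.
#[derive(Clone, Copy)]
struct ExtraBindTest {
    mapped_port: Option<u16>, // stand-in for mapped_socket_addr
}

fn classify(extra: Option<ExtraBindTest>, min_port: u16, max_port: u16) -> &'static str {
    // and_then flattens Option<Option<u16>> -> Option<u16>, so no unwrap
    // is needed and a missing result falls through to the else branch.
    if let Some(extra_port) = extra.and_then(|e| e.mapped_port) {
        // saturating_sub clamps at 0 instead of panicking/wrapping when
        // extra_port falls inside the [min_port, max_port] range.
        let max_diff = extra_port.saturating_sub(max_port);
        let min_diff = min_port.saturating_sub(extra_port);
        if max_diff <= 15 && min_diff <= 15 {
            "cone-like"
        } else {
            "symmetric"
        }
    } else {
        // No usable extra bind result: fall back to Symmetric.
        "symmetric"
    }
}

fn main() {
    assert_eq!(classify(None, 1000, 1010), "symmetric");
    assert_eq!(
        classify(Some(ExtraBindTest { mapped_port: Some(1012) }), 1000, 1010),
        "cone-like"
    );
    assert_eq!(
        classify(Some(ExtraBindTest { mapped_port: Some(2000) }), 1000, 1010),
        "symmetric"
    );
}
```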
@@ -1102,39 +1119,39 @@ impl StunInfoCollector {
}
pub fn get_default_servers() -> Vec<String> {
// NOTICE: we may need to choose stun server based on geolocation
// stun server cross nation may return an external ip address with high latency and loss rate
[
"txt:stun.easytier.cn",
"stun.miwifi.com",
"stun.chat.bilibili.com",
"stun.hitv.com",
]
.iter()
.map(|x| x.to_string())
.collect()
if cfg!(test) {
Vec::new()
} else {
// NOTICE: we may need to choose stun servers based on geolocation;
// a cross-nation stun server may return an external ip address with high latency and loss rate
DEFAULT_UDP_STUN_SERVERS
.iter()
.map(ToString::to_string)
.collect()
}
}
pub fn get_default_tcp_servers() -> Vec<String> {
[
"stun.hot-chilli.net",
"stun.fitauto.ru",
"fwa.lifesizecloud.com",
"global.turn.twilio.com",
"turn.cloudflare.com",
"stun.voip.blackberry.com",
"stun.radiojar.com",
]
.iter()
.map(|x| x.to_string())
.collect()
// if test, return empty vector
if cfg!(test) {
Vec::new()
} else {
DEFAULT_TCP_STUN_SERVERS
.iter()
.map(ToString::to_string)
.collect()
}
}
pub fn get_default_servers_v6() -> Vec<String> {
["txt:stun-v6.easytier.cn"]
.iter()
.map(|x| x.to_string())
.collect()
if cfg!(test) {
Vec::new()
} else {
DEFAULT_UDP_V6_STUN_SERVERS
.iter()
.map(ToString::to_string)
.collect()
}
}
async fn get_public_ipv6(servers: &[String]) -> Option<Ipv6Addr> {
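All three accessors above share the same gate: under `cfg!(test)` they return an empty list so unit tests never contact public STUN infrastructure, while normal builds map the `&'static str` table into owned `String`s. A minimal sketch of the pattern (server names copied from the UDP table above):

```rust
// Server names copied from the default UDP table; the cfg!(test) gate
// itself is the point of this sketch.
const DEFAULT_SERVERS: &[&str] = &["txt:stun.easytier.cn", "stun.miwifi.com"];

fn get_default_servers() -> Vec<String> {
    if cfg!(test) {
        // Unit tests get an empty list and never reach public servers.
        Vec::new()
    } else {
        DEFAULT_SERVERS.iter().map(ToString::to_string).collect()
    }
}

fn main() {
    let servers = get_default_servers();
    // Either the full table (normal build) or empty (test build).
    assert!(servers.len() == DEFAULT_SERVERS.len() || servers.is_empty());
    println!("{} default servers", servers.len());
}
```

`cfg!(test)` is evaluated at compile time, so the unused branch is still type-checked but compiles away; tests that do need servers construct the collector explicitly, as the reworked `test_udp_nat_type_detector` does.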
@@ -1321,13 +1338,24 @@ impl StunInfoCollectorTrait for MockStunInfoCollector {
#[cfg(test)]
mod tests {
use crate::tunnel::{udp::UdpTunnelListener, TunnelListener};
use crate::{
common::scoped_task::ScopedTask,
tunnel::{udp::UdpTunnelListener, TunnelListener},
};
use tokio::time::{sleep, timeout};
use super::*;
#[tokio::test]
async fn test_udp_nat_type_detector() {
let collector = StunInfoCollector::new_with_default_servers();
let collector = StunInfoCollector::new(
DEFAULT_UDP_STUN_SERVERS
.iter()
.map(ToString::to_string)
.collect(),
vec![],
vec![],
);
collector.update_stun_info();
loop {
let ret = collector.get_stun_info();
@@ -1372,18 +1400,20 @@ mod tests {
async fn test_txt_public_stun_server() {
let stun_servers = vec!["txt:stun.easytier.cn".to_string()];
let detector = UdpNatTypeDetector::new(stun_servers, 1);
for _ in 0..5 {
let ret = detector.detect_nat_type(0).await;
println!("{:#?}, {:?}", ret, ret.as_ref().map(|x| x.nat_type()));
if let Ok(resp) = ret {
assert!(!resp.stun_resps.is_empty());
return;
timeout(Duration::from_secs(30), async {
loop {
let ret = detector.detect_nat_type(0).await;
println!("{:#?}, {:?}", ret, ret.as_ref().map(|x| x.nat_type()));
if let Ok(resp) = ret {
if !resp.stun_resps.is_empty() {
return;
}
}
sleep(Duration::from_secs(1)).await;
}
}
debug_assert!(
false,
"should not reach here, stun server should be available"
);
})
.await
.expect("stun server should be available");
}
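The rework above replaces a fixed five-attempt loop ending in `debug_assert!(false, …)` with a `timeout`-bounded retry loop, so one slow STUN response no longer fails the test spuriously. The same shape in a std-only sketch, with an `Instant` deadline standing in for `tokio::time::timeout`:

```rust
use std::time::{Duration, Instant};

// Retry the attempt until it succeeds or the time budget runs out,
// instead of giving up after a fixed attempt count.
fn retry_until<F: FnMut() -> bool>(budget: Duration, mut attempt: F) -> bool {
    let deadline = Instant::now() + budget;
    while Instant::now() < deadline {
        if attempt() {
            return true;
        }
        std::thread::sleep(Duration::from_millis(10));
    }
    false
}

fn main() {
    // Succeeds on the third attempt, well inside the budget.
    let mut tries = 0;
    let ok = retry_until(Duration::from_secs(1), || {
        tries += 1;
        tries >= 3
    });
    assert!(ok);
    assert_eq!(tries, 3);
}
```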
#[tokio::test]
@@ -1406,11 +1436,11 @@ mod tests {
use stun_codec::rfc5389::attributes::XorMappedAddress;
use tokio::net::TcpListener;
async fn spawn_tcp_stun_server() -> SocketAddr {
async fn spawn_tcp_stun_server() -> (SocketAddr, ScopedTask<()>) {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let server_addr = listener.local_addr().unwrap();
tokio::spawn(async move {
let task = tokio::spawn(async move {
let (mut stream, peer_addr) = listener.accept().await.unwrap();
let req = TcpStunClient::tcp_read_stun_message(&mut stream, Duration::from_secs(2))
@@ -1430,11 +1460,11 @@ mod tests {
stream.write_all(rsp_buf.as_slice()).await.unwrap();
});
server_addr
(server_addr, task.into())
}
let server1 = spawn_tcp_stun_server().await;
let server2 = spawn_tcp_stun_server().await;
let (server1, _t1) = spawn_tcp_stun_server().await;
let (server2, _t2) = spawn_tcp_stun_server().await;
let stun_servers = vec![server1.to_string(), server2.to_string()];
let detector = TcpNatTypeDetector::new(stun_servers, 1);
@@ -1469,7 +1499,7 @@ mod tests {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let server_addr = listener.local_addr().unwrap();
tokio::spawn(async move {
let _t = ScopedTask::from(tokio::spawn(async move {
for _ in 0..8 {
let Ok((mut stream, peer_addr)) = listener.accept().await else {
break;
@@ -1491,7 +1521,7 @@ mod tests {
let rsp_buf = encoder.encode_into_bytes(resp_msg).unwrap();
stream.write_all(rsp_buf.as_slice()).await.unwrap();
}
});
}));
let collector = StunInfoCollector::new(vec![], vec![server_addr.to_string()], vec![]);
collector.set_tcp_stun_servers(vec![server_addr.to_string()]);
@@ -3,6 +3,7 @@ use dashmap::DashMap;
use std::sync::atomic::Ordering;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
use tokio::sync::Notify;
use tokio::time;
use crate::common::scoped_task::ScopedTask;
@@ -15,6 +16,8 @@ pub struct TokenBucket {
config: BucketConfig, // Immutable configuration
refill_task: Mutex<Option<ScopedTask<()>>>, // Background refill task
start_time: Instant, // Bucket creation time
refill_notifier: Arc<Notify>,
}
#[derive(Clone, Copy)]
@@ -64,11 +67,13 @@ impl TokenBucket {
config,
refill_task: Mutex::new(None),
start_time: std::time::Instant::now(),
refill_notifier: Arc::new(Notify::new()),
});
// Start background refill task
let weak_bucket = Arc::downgrade(&arc_self);
let refill_interval = arc_self.config.refill_interval;
let refill_notifier = arc_self.refill_notifier.clone();
let refill_task = tokio::spawn(async move {
let mut interval = time::interval(refill_interval);
loop {
@@ -77,6 +82,7 @@ impl TokenBucket {
break;
};
bucket.refill();
refill_notifier.notify_waiters();
}
});
@@ -154,6 +160,13 @@ impl TokenBucket {
}
}
}
/// Consume tokens, blocking if not available
pub async fn consume(&self, tokens: u64) {
while !self.try_consume(tokens) {
self.refill_notifier.notified().await;
}
}
}
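`consume()` above parks on the `Notify` until the background refill task calls `notify_waiters()`, then retries `try_consume`. A std-only analog of the same wait/retry loop, using `Condvar` in place of `tokio::sync::Notify` (names here are illustrative, not EasyTier's):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Bucket {
    tokens: Mutex<u64>,
    refilled: Condvar,
}

impl Bucket {
    fn try_consume(&self, n: u64) -> bool {
        let mut t = self.tokens.lock().unwrap();
        if *t >= n {
            *t -= n;
            true
        } else {
            false
        }
    }

    // Blocking counterpart of the async consume(): wait for a refill
    // notification, then re-check whether enough tokens are available.
    fn consume(&self, n: u64) {
        let mut t = self.tokens.lock().unwrap();
        while *t < n {
            t = self.refilled.wait(t).unwrap();
        }
        *t -= n;
    }

    fn refill(&self, n: u64) {
        *self.tokens.lock().unwrap() += n;
        // Wake every waiter so each can retry its own token demand.
        self.refilled.notify_all();
    }
}

fn main() {
    let bucket = Arc::new(Bucket { tokens: Mutex::new(0), refilled: Condvar::new() });
    let b = bucket.clone();
    let refiller = thread::spawn(move || b.refill(5));

    bucket.consume(3); // blocks until the refill lands
    assert!(bucket.try_consume(2));
    assert!(!bucket.try_consume(1));
    refiller.join().unwrap();
}
```

The loop-then-wait structure matters in both versions: a notification only signals that a refill happened, not that enough tokens exist for this caller, so `try_consume` must be re-run after every wakeup.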
pub struct TokenBucketManager {
@@ -177,10 +190,16 @@ impl TokenBucketManager {
let retain_task = tokio::spawn(async move {
loop {
// Retain only buckets that are still in use
let old_len = buckets_clone.len();
buckets_clone.retain(|_, bucket| Arc::<TokenBucket>::strong_count(bucket) > 1);
buckets_clone.shrink_to_fit();
// Sleep for a while before next retention check
tokio::time::sleep(Duration::from_secs(5)).await;
tracing::info!(
"Retained buckets: {} ({} dropped)",
buckets_clone.len(),
old_len.saturating_sub(buckets_clone.len())
);
}
});
@@ -184,6 +184,7 @@ where
}
}
#[derive(Debug, Clone)]
pub struct FileAppenderWrapper {
appender: std::sync::Arc<parking_lot::Mutex<RollingFileAppenderBase>>,
}
@@ -206,6 +207,7 @@ impl FileAppenderWrapper {
}
}
#[derive(Debug, Clone)]
pub struct FileAppenderWriter {
appender: std::sync::Arc<parking_lot::Mutex<RollingFileAppenderBase>>,
}
@@ -2,13 +2,13 @@
use std::{
collections::HashSet,
net::{Ipv6Addr, SocketAddr},
net::{IpAddr, Ipv6Addr, SocketAddr},
str::FromStr,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
time::Duration,
time::{Duration, Instant},
};
use crate::{
@@ -31,17 +31,21 @@ use crate::{
},
rpc_types::controller::BaseController,
},
tunnel::{udp::UdpTunnelConnector, IpVersion},
tunnel::{matches_protocol, udp::UdpTunnelConnector, IpVersion},
use_global_var,
};
use super::{
create_connector_by_url, should_background_p2p_with_peer, should_try_p2p_with_peer,
udp_hole_punch,
};
use crate::tunnel::{matches_scheme, FromUrl, IpScheme, TunnelScheme};
use anyhow::Context;
use rand::Rng;
use socket2::Protocol;
use tokio::{net::UdpSocket, task::JoinSet, time::timeout};
use url::Host;
use super::{create_connector_by_url, udp_hole_punch};
pub const DIRECT_CONNECTOR_SERVICE_ID: u32 = 1;
pub const DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC: u64 = 300;
@@ -58,14 +62,28 @@ impl PeerManagerForDirectConnector for PeerManager {
async fn list_peers(&self) -> Vec<PeerId> {
let mut ret = vec![];
let allow_public_server = use_global_var!(DIRECT_CONNECT_TO_PUBLIC_SERVER);
let flags = self.get_global_ctx().get_flags();
let lazy_p2p = flags.lazy_p2p;
let now = Instant::now();
let routes = self.list_routes().await;
for r in routes.iter().filter(|r| {
r.feature_flag
.map(|r| allow_public_server || !r.is_public_server)
.unwrap_or(true)
}) {
ret.push(r.peer_id);
for route in routes.iter() {
let static_allowed = should_background_p2p_with_peer(
route.feature_flag.as_ref(),
allow_public_server,
lazy_p2p,
flags.disable_p2p,
flags.need_p2p,
);
let dynamic_allowed = should_try_p2p_with_peer(
route.feature_flag.as_ref(),
allow_public_server,
flags.disable_p2p,
flags.need_p2p,
) && self.has_recent_traffic(route.peer_id, now);
if static_allowed || dynamic_allowed {
ret.push(route.peer_id);
}
}
ret
@@ -178,31 +196,33 @@ impl DirectConnectorManagerData {
.await;
let udp_connector = UdpTunnelConnector::new(remote_url.clone());
let remote_addr =
super::check_scheme_and_get_socket_addr::<SocketAddr>(remote_url, "udp", IpVersion::V6)
.await?;
let remote_addr = SocketAddr::from_url(remote_url.clone(), IpVersion::V6).await?;
let ret = udp_connector
.try_connect_with_socket(local_socket, remote_addr)
.await?;
// NOTICE: must add as directly connected tunnel
self.peer_manager.add_client_tunnel(ret, true).await
self.peer_manager
.add_client_tunnel_with_peer_id_hint(ret, true, Some(dst_peer_id))
.await
}
async fn do_try_connect_to_ip(&self, dst_peer_id: PeerId, addr: String) -> Result<(), Error> {
let connector = create_connector_by_url(&addr, &self.global_ctx, IpVersion::Both).await?;
let remote_url = connector.remote_url();
let (peer_id, conn_id) =
if remote_url.scheme() == "udp" && matches!(remote_url.host(), Some(Host::Ipv6(_))) {
self.connect_to_public_ipv6(dst_peer_id, &remote_url)
.await?
} else {
timeout(
std::time::Duration::from_secs(3),
self.peer_manager.try_direct_connect(connector),
)
.await??
};
let (peer_id, conn_id) = if matches_scheme!(remote_url, TunnelScheme::Ip(IpScheme::Udp))
&& matches!(remote_url.host(), Some(Host::Ipv6(_)))
{
self.connect_to_public_ipv6(dst_peer_id, &remote_url)
.await?
} else {
timeout(
std::time::Duration::from_secs(3),
self.peer_manager
.try_direct_connect_with_peer_id_hint(connector, Some(dst_peer_id)),
)
.await??
};
if peer_id != dst_peer_id && !TESTING.load(Ordering::Relaxed) {
tracing::info!(
@@ -291,14 +311,42 @@ impl DirectConnectorManagerData {
};
let listener_host = addrs.pop();
tracing::info!(?listener_host, ?listener, "try direct connect to peer");
let is_udp = matches_protocol!(listener, Protocol::UDP);
// Snapshot running listeners once; used for cheap port pre-checks before the
// expensive should_deny_proxy call (which binds a socket per IP) in the
// unspecified-address expansion loops below.
let local_listeners = self.global_ctx.get_running_listeners();
let port_has_local_listener = |port: u16| -> bool {
local_listeners
.iter()
.any(|l| l.port() == Some(port) && matches_protocol!(l, Protocol::UDP) == is_udp)
};
match listener_host {
Some(SocketAddr::V4(s_addr)) => {
if s_addr.ip().is_unspecified() {
// Only pay the should_deny_proxy cost (bind per IP) when a local
// listener actually uses this port+protocol; otherwise the check
// can never return true.
let check_self = port_has_local_listener(s_addr.port());
ip_list
.interface_ipv4s
.iter()
.chain(ip_list.public_ipv4.iter())
.for_each(|ip| {
let sock_addr = SocketAddr::new(
IpAddr::V4(std::net::Ipv4Addr::from(ip.addr)),
s_addr.port(),
);
if check_self && self.global_ctx.should_deny_proxy(&sock_addr, is_udp) {
tracing::debug!(
?ip,
?listener,
"skip self-connection (0.0.0.0 expansion)"
);
return;
}
let mut addr = (*listener).clone();
if addr.set_host(Some(ip.to_string().as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
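The `port_has_local_listener` closure above answers a cheap question — does any running listener use this port with the same transport? — from a one-time snapshot, so the per-IP `should_deny_proxy` socket bind is only attempted when a collision is actually possible. An illustrative std-only version, with listener URLs reduced to `(SocketAddr, bool)` pairs:

```rust
use std::net::SocketAddr;

fn main() {
    // Snapshot of running listeners, reduced to (address, is_udp) pairs;
    // the real code holds listener URLs and compares protocols.
    let local_listeners: Vec<(SocketAddr, bool)> = vec![
        ("0.0.0.0:11010".parse().unwrap(), true),  // udp listener
        ("0.0.0.0:11011".parse().unwrap(), false), // tcp listener
    ];
    let is_udp = true;

    // Cheap pre-check: only a port that collides with a same-protocol
    // local listener can ever trip should_deny_proxy, so every other
    // port skips the per-IP bind entirely.
    let port_has_local_listener = |port: u16| -> bool {
        local_listeners
            .iter()
            .any(|(addr, udp)| addr.port() == port && *udp == is_udp)
    };

    assert!(port_has_local_listener(11010));
    assert!(!port_has_local_listener(11011)); // protocol mismatch (tcp)
    assert!(!port_has_local_listener(12000)); // no listener on that port
}
```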
@@ -316,16 +364,26 @@ impl DirectConnectorManagerData {
}
});
} else if !s_addr.ip().is_loopback() || TESTING.load(Ordering::Relaxed) {
tasks.spawn(Self::try_connect_to_ip(
self.clone(),
dst_peer_id,
listener.to_string(),
));
if self
.global_ctx
.should_deny_proxy(&SocketAddr::from(s_addr), is_udp)
{
tracing::debug!(?listener, "skip self-connection (specific IPv4)");
} else {
tasks.spawn(Self::try_connect_to_ip(
self.clone(),
dst_peer_id,
listener.to_string(),
));
}
}
}
Some(SocketAddr::V6(s_addr)) => {
if s_addr.ip().is_unspecified() {
// for ipv6, only try public ip
// Same port pre-check as IPv4: avoid binding per IP when no local
// listener uses this port+protocol.
let check_self = port_has_local_listener(s_addr.port());
ip_list
.interface_ipv6s
.iter()
@@ -342,6 +400,15 @@ impl DirectConnectorManagerData {
.collect::<HashSet<_>>()
.iter()
.for_each(|ip| {
let sock_addr = SocketAddr::new(IpAddr::V6(*ip), s_addr.port());
if check_self && self.global_ctx.should_deny_proxy(&sock_addr, is_udp) {
tracing::debug!(
?ip,
?listener,
"skip self-connection (:: expansion)"
);
return;
}
let mut addr = (*listener).clone();
if addr.set_host(Some(format!("[{}]", ip).as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
@@ -359,11 +426,18 @@ impl DirectConnectorManagerData {
}
});
} else if !s_addr.ip().is_loopback() || TESTING.load(Ordering::Relaxed) {
tasks.spawn(Self::try_connect_to_ip(
self.clone(),
dst_peer_id,
listener.to_string(),
));
if self
.global_ctx
.should_deny_proxy(&SocketAddr::from(s_addr), is_udp)
{
tracing::debug!(?listener, "skip self-connection (specific IPv6)");
} else {
tasks.spawn(Self::try_connect_to_ip(
self.clone(),
dst_peer_id,
listener.to_string(),
));
}
}
}
p => {
@@ -568,7 +642,11 @@ impl DirectConnectorManager {
global_ctx.clone(),
peer_manager.clone(),
));
let client = PeerTaskManager::new(DirectConnectorLauncher(data.clone()), peer_manager);
let client = PeerTaskManager::new_with_external_signal(
DirectConnectorLauncher(data.clone()),
peer_manager.clone(),
Some(peer_manager.p2p_demand_notify()),
);
Self {
global_ctx,
data,
@@ -578,10 +656,6 @@ impl DirectConnectorManager {
}
pub fn run(&mut self) {
if self.global_ctx.get_flags().disable_p2p {
return;
}
self.run_as_server();
self.run_as_client();
}
@@ -639,7 +713,7 @@ mod tests {
let mut f = p_a.get_global_ctx().get_flags();
f.bind_device = false;
p_a.get_global_ctx().config.set_flags(f);
p_a.get_global_ctx().set_flags(f);
p_c.get_global_ctx()
.config
@@ -708,7 +782,7 @@ mod tests {
}
let mut f = p_c.get_global_ctx().config.get_flags();
f.enable_ipv6 = ipv6;
p_c.get_global_ctx().config.set_flags(f);
p_c.get_global_ctx().set_flags(f);
let mut lis_c = ListenerManager::new(p_c.get_global_ctx(), p_c.clone());
lis_c.prepare_listeners().await.unwrap();
