Compare commits


83 Commits

Author SHA1 Message Date
韩嘉乐 16eb3e5f78 Merge branch 'main' into ohos_new_api 2026-05-09 00:03:05 +08:00
FrankHan 1d652bac08 fix: adapt Cidr handling to ignore /32-format routes 2026-05-08 23:24:47 +08:00
fanyang 55f15bb6f0 fix(connector): classify manual reconnect timeouts by stage (#2062) 2026-05-08 22:08:51 +08:00
FrankHan bf427a5d6f fix: rustfmt 2026-05-08 16:35:08 +08:00
FrankHan f030e3ab36 fix: stop TUN from starting twice on route updates; adjust logging 2026-05-08 16:16:46 +08:00
Luna Yao 96fd39649a revert UPX version to 4.2.4 in core.yml (#2221) 2026-05-07 18:49:40 +08:00
KKRainbow 74fc8b300d chore: bump version to 2.6.4 (#2219) 2026-05-07 13:48:51 +08:00
KKRainbow baeee40b79 fix machine uid and easytier-web panic (#2215)
1. fix(web-client): persist and migrate machine id
2. fix panic when an easytier-web session receives a malformed packet
2026-05-07 00:57:42 +08:00
fanyang 4342c8d7a2 fix: add missing CLI help text (#2213) 2026-05-05 17:05:34 +08:00
KKRainbow 1178b312fa fix foreign network entry leak (#2211) 2026-05-05 11:01:44 +08:00
FrankHan 006783f0f9 fix: add missing files 2026-05-04 21:44:18 +08:00
韩嘉乐 09b4db5d3f [OHOS, with AI] Move config management, config sharing, route aggregation, and instance-state parsing into the Rust core to consolidate responsibilities and improve performance (#2209)
* feat: add ohrs config store and startup error logging

* feat: full ability core for ohos

* feat: full ability core for ohos

* feat: clean code

---------

Co-authored-by: FrankHan <frankhan@FrankHans-Mac-mini.local>
2026-05-04 21:13:17 +08:00
fanyang 362aa7a9cd fix: allow omitted ACL config fields (#2206) 2026-05-04 00:47:24 +08:00
KKRainbow 12a7b5a5c5 fix: scope peer center server data to instance (#2198)
Stop sharing PeerCenterServer state through a process-global map so local and foreign-network services cannot mix peer-center data when peer ids overlap.
2026-05-02 01:43:01 +08:00
fanyang 4eba9b07b6 fix(web-client): keep retrying unreachable config server (#2140)
Defer config-server connector creation into the web client retry loop so
service startup does not fail when network or DNS is unavailable.
2026-05-02 00:09:48 +08:00
KKRainbow 1b48029bdc fix: clean stale foreign network state (#2197)
- clear foreign-network traffic metric peer caches on peer removal and network cleanup
- release reserved foreign-network peer IDs on handshake/add-peer error paths
- avoid creating no-op foreign-network token buckets when limits are unlimited
- shrink relay/session maps after cleanup and remove unused peer-center global data entries
2026-05-01 23:30:51 +08:00
KKRainbow 3542e944cb fix(quic): prune stopped endpoints from pool (#2195)
* remove wss port 0 compatibility code
* fix(quic): prune stopped endpoints from pool
2026-05-01 18:51:39 +08:00
KKRainbow 852d1c9e14 feat(gui): add UPnP and public IPv6 advanced options (#2194)
Expose disable-upnp and ipv6_public_addr_auto in the shared web/GUI config editor and bump release metadata to 2.6.3.
2026-05-01 13:45:19 +08:00
KKRainbow 4958394469 fix: protect self peer during credential refresh and allow need-p2p peers through public server (#2192)
* fix: protect self peer during credential refresh

* fix: allow need-p2p peers through public server
2026-05-01 06:59:30 +08:00
KKRainbow 41b6d65604 fix faketcp filter on windows (#2190) 2026-04-30 23:55:56 +08:00
KKRainbow aae30894dd fix: keep file logger disabled by default (#2189) 2026-04-30 21:42:30 +08:00
fanyang 81d169abfc fix: fall back when CLI manage service is unavailable (#2185) 2026-04-30 19:50:50 +08:00
Luna Yao 9c6c210e89 fix: disable SO_EXCLUSIVEADDRUSE on Windows (#2180) 2026-04-30 19:48:54 +08:00
Mg Pig d1c6dcf754 fix: prevent URL input layout flicker with container queries (#2186) 2026-04-30 19:45:01 +08:00
KKRainbow 97c8c4f55a feat: support disabling relay data forwarding (#2188)
- add a disable_relay_data runtime/config patch option
- reuse the existing avoid_relay_data feature flag when relay data forwarding is disabled
2026-04-30 19:44:40 +08:00
KKRainbow ed8df2d58f prevent EasyTier-managed IPv6 from being used as underlay connections (#2181)
When a node has public IPv6 addresses allocated by EasyTier, those addresses
are installed on the host's network interfaces. The system would then pick
them up as candidate source/destination addresses for underlay connections
(direct peer, UDP hole punch, bind addresses), causing overlay traffic to
loop back into the overlay itself.

Add a central predicate is_ip_easytier_managed_ipv6() and apply it at every
point where IPv6 addresses are selected for underlay use:
- Filter managed IPv6 from DNS-resolved connector addresses, including a
  UDP socket getsockname check to detect whether the OS would route through
  the overlay to reach a destination
- Skip managed IPv6 in bind address selection and STUN candidate filtering
- Strip managed IPv6 from GetIpListResponse RPC so peers never learn them
- Pass pre-resolved addresses to tunnel connectors to avoid re-resolution

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 12:17:22 +08:00
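The predicate described above reduces to an IPv6 prefix-membership test against the prefixes EasyTier itself installed. A minimal sketch of that core check (the function name and signature here are illustrative, not the actual `is_ip_easytier_managed_ipv6()` API, which also has to consult the node's managed-prefix state):

```rust
use std::net::Ipv6Addr;

// Illustrative sketch: test whether `addr` falls inside a managed prefix.
// The real is_ip_easytier_managed_ipv6() additionally looks up the set of
// prefixes EasyTier installed on the host.
fn is_in_prefix(addr: Ipv6Addr, prefix: Ipv6Addr, prefix_len: u8) -> bool {
    let a = u128::from_be_bytes(addr.octets());
    let p = u128::from_be_bytes(prefix.octets());
    // Mask covering the top `prefix_len` bits; a shift by 128 would
    // panic, so the /0 case is handled explicitly.
    let mask = if prefix_len == 0 { 0 } else { u128::MAX << (128 - prefix_len) };
    a & mask == p & mask
}
```

Any candidate underlay address matching a managed prefix would then be filtered out of connector, bind, and STUN candidate selection.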
lurenjia f66010e6f9 fix: preserve URL type in matches_scheme (#2179)
Avoid resolving Url::as_ref() to the full URL string before TunnelScheme
conversion. Add regression coverage for owned/borrowed URLs and the UDP
IPv6 hole-punch branch condition.

Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-28 23:23:41 +08:00
Luna Yao d5c4700d32 utils: replace defer, ContextGuard, DetachableTask with guarden crate (#2163) 2026-04-27 18:29:46 +08:00
KKRainbow 969ecfc4ca fix(gui): refresh service after core version upgrade (#2172) 2026-04-27 15:54:52 +08:00
KKRainbow 8f862997eb feat: support allocating public IPv6 addresses from a provider (#2162)
* feat: support allocating public IPv6 addresses from a provider

Add a provider/leaser architecture for public IPv6 address allocation
between nodes in the same network:

- A node with `--ipv6-public-addr-provider` advertises a delegable
  public IPv6 prefix (auto-detected from kernel routes or manually
  configured via `--ipv6-public-addr-prefix`).
- Other nodes with `--ipv6-public-addr-auto` request a /128 lease from
  the selected provider via a new RPC service (PublicIpv6AddrRpc).
- Leases have a 30s TTL, renewed every 10s by the client routine.
- The provider allocates addresses deterministically from its prefix
  using instance-UUID-based hashing to prefer stable assignments.
- Routes to peer leases are installed on the TUN device, and each
  client's own /128 is assigned as its IPv6 address.

Also includes netlink IPv6 route table inspection, integration tests,
and event-driven route/address reconciliation.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-26 21:37:34 +08:00
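The deterministic, instance-UUID-based allocation above can be sketched as hashing the instance id into the host bits of the delegated prefix. This is an assumption-laden illustration, not the provider's actual code: it assumes a /64 prefix for brevity, and the real hash choice and prefix handling may differ.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::Ipv6Addr;

// Sketch: derive a stable /128 inside a delegated /64 prefix from an
// instance identifier (hash and prefix handling are illustrative).
fn derive_lease_addr(prefix: Ipv6Addr, instance_id: &[u8]) -> Ipv6Addr {
    let mut hasher = DefaultHasher::new();
    instance_id.hash(&mut hasher);
    let mut octets = prefix.octets();
    // Keep the top 64 prefix bits, fill the host bits from the hash.
    octets[8..].copy_from_slice(&hasher.finish().to_be_bytes());
    Ipv6Addr::from(octets)
}
```

Because the same instance id always hashes to the same host bits, a client re-requesting a lease tends to receive its previous address back, which keeps the 30s-TTL / 10s-renew cycle cheap.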
KKRainbow b20075e3dc fix: allow self virtual IP loopback (#2161) 2026-04-25 21:26:16 +08:00
Luna Yao eb3b5aae51 utils: add DetachableTask & ContextGuard (#2138) 2026-04-25 18:24:36 +08:00
datayurei af6b6ab6f1 fix: avoid panic when validating mapped listeners (#2153) 2026-04-25 17:45:57 +08:00
Luna Yao 5a1668c753 refactor: remove ScopedTask (#2125)
* replace ScopedTask with AbortOnDropHandle
2026-04-25 15:20:25 +08:00
Luna Yao 820d9095d3 replace AsyncRuntime with simpler CancellableTask (#2136) 2026-04-25 10:29:53 +08:00
KKRainbow 2fb41ccbba bump version to 2.6.2 (#2158) 2026-04-25 10:22:24 +08:00
Luna Yao b4666be696 fix: disable SO_REUSEADDR & enable SO_EXCLUSIVEADDRUSE on Windows (#2128) 2026-04-25 00:37:34 +08:00
KKRainbow 4688ad74ad Honor credential reusable flag (#2157)
- propagate reusable through credential storage, CLI, RPC, routing, and tests
- enforce reusable=false owner election with current topology
- preserve proof-backed groups when refreshing credential ACL groups
2026-04-25 00:22:40 +08:00
Luna Yao f7ea78d4f0 lower max_udp_payload_size to 1200 (#2156) 2026-04-24 21:20:37 +08:00
james.zhang ac112440c3 fix(UrlInput): update parseUrl and buildUrlValue to handle null ports correctly (#2146) 2026-04-23 13:45:09 +08:00
KKRainbow 958b246f05 improve webclient (#2151) 2026-04-23 13:44:18 +08:00
james.zhang 263f4c3bc9 fix(peer_route): exclude current peer ID from proxy CIDR lists (#2149) 2026-04-22 20:30:38 +08:00
Luna Yao ffddc517e1 fix: listener parsing (#2143)
Fixes a CLI listener parsing regression where url crate special-casing for ws/wss could misinterpret inputs like ws:11011, and adds coverage to prevent future regressions.

Changes:

Refactors listener parsing to avoid url::Url parsing for proto:port forms and to support additional shorthand inputs (port-only / IP-only / SocketAddr).
Centralizes “expand to all IpScheme variants” logic in a helper (gen_listeners) while preserving the “port=0 is dynamic” behavior.
Adds unit tests covering valid/invalid listener inputs and expansion behavior.
2026-04-21 23:45:22 +08:00
Debugger Chen 5cd0a3e846 feat: add upnp support (#1449) 2026-04-21 17:19:04 +08:00
Luna Yao f4319c4d4f ci(test): always check everything (#2142)
* ci(test): always check everything
* move Cargo.lock check to the last step
2026-04-21 10:08:27 +08:00
Luna Yao 0091a535d5 use mimalloc for FreeBSD (#2144) 2026-04-21 08:40:21 +08:00
Luna Yao d7a5fb8d66 remove --no-deps from lock check (#2134) 2026-04-20 00:46:26 +08:00
KKRainbow f63054e937 fix: resolve Android APK version fallback to 1.0 on CI (#2131) 2026-04-19 19:06:37 +08:00
KKRainbow efc043abbb bump version to v2.6.1 (#2129) 2026-04-19 16:49:45 +08:00
Mg Pig 40c6de8e31 fix(core): restrict implicit config merge to explicit config files (#2127) 2026-04-19 10:39:04 +08:00
KKRainbow 2db655bd6d fix: refresh ACL groups and enable TCP_NODELAY for WebSocket (#2118)
* fix: refresh ACL groups and enable TCP_NODELAY for WebSocket
* add remove_peers to remove list of peer id in ospf route
* fix secure tunnel for unreliable udp tunnel
* fix(web-client): timeout secure tunnel handshake
* fix(web-server): tolerate delayed secure hello
* fix quic endpoint panic
* fix replay check
2026-04-19 10:37:39 +08:00
Mg Pig c49c56612b feat(ui): add ACL graphical configuration interface (#1815) 2026-04-18 20:23:53 +08:00
Mg Pig 6ca074abae feat(nix): add rustfmt and clippy to the Rust toolchain extensions (#2126) 2026-04-18 20:23:26 +08:00
Luna Yao 84430055ab remove hashbrown (#2108) 2026-04-18 11:06:34 +08:00
Mg Pig 432fcb3fc3 build(nix): add mold to the flake dev shell (#2122) 2026-04-18 09:06:45 +08:00
Luna Yao fae32361f2 chore: update Rust to 1.95; replace cfg_if with cfg_select (#2121) 2026-04-17 23:41:31 +08:00
Luna Yao bcb2e512d4 utils: move code to a dedicated mod; add AsyncRuntime (#2072) 2026-04-16 23:32:07 +08:00
Luna Yao 82ca04a8a7 proto(utils): add MessageModel & RepeatedMessageModel (#2068)
* add FromIterator, Extend, AsRef, AsMut, TryFrom<[Message]>
2026-04-15 19:40:09 +08:00
Luna Yao 2ef3b72224 proto: add some conversion for Url (#2067) 2026-04-15 19:39:24 +08:00
Luna Yao 6d319cba1d tests(relay_peer_e2e_encryption): wait for the key of inst3 before ping test (#2069) 2026-04-15 19:39:00 +08:00
Luna Yao 3687519ef3 turn off ansi for file log (#2110)
Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-15 19:38:27 +08:00
Luna Yao 3a4ac59467 log: change default log level of tests to WARNING (#2113) 2026-04-14 18:10:38 +08:00
Luna Yao 1cfc135df3 ci: remove -D warnings from test (#2109)
Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-14 12:35:05 +08:00
KKRainbow 5b35c51da9 fix packet split on udp tunnel and avoid tcp proxy access rpc portal (#2107)
* distinguish control / data when forwarding packets
* fix rpc split for udp tunnel
* feat(easytier-web): pass public ip in validate token webhook
* protect rpc port from subnet proxy
2026-04-13 11:03:09 +08:00
Luna Yao ec7ddd3bad fix: filter overlapped proxy cidrs in ProxyCidrsMonitor (#2079)
* feat(route): add async methods to list proxy CIDRs for IPv4 and IPv6
* refactor(ProxyCidrsMonitor): get proxy cidrs from list_proxy_cidrs
2026-04-12 22:18:54 +08:00
Luna Yao 6f3e708679 tunnel(bind): gather all bind logic to a single function (#2070)
* extract a Bindable trait for binding TcpSocket, TcpListener, and UdpSocket
2026-04-12 22:16:58 +08:00
Luna Yao 869e1b89f5 fix: remove log (file) when level is explicitly set to OFF (#2083)
* fix level filter for OFF
* remove unwrap of file appender creation
2026-04-12 22:16:30 +08:00
Luna Yao 9e0a3b6936 ci: rewrite build workflows (#2089) 2026-04-12 22:14:41 +08:00
Luna Yao c6cb1a77d0 chore: clippy fix some code on Windows (#2106) 2026-04-12 22:13:58 +08:00
deddey 83010861ba Optimize network interface configuration for macOS and FreeBSD to avoid hard-coded IP addresses (#1853)
Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-12 21:00:59 +08:00
Luna Yao daa53e5168 log: auto-init log for tests (#2073) 2026-04-12 13:04:21 +08:00
fanyang 51befdbf87 fix(faketcp): harden packet parsing against malformed frames (#2103)
Discard malformed fake TCP frames instead of panicking so OpenWrt
nodes can survive unexpected or truncated packets.

Also emit the correct IPv6 ethertype and cover the parser with
round-trip and truncation regression tests.
2026-04-12 13:02:23 +08:00
Luna Yao 8311b11713 refactor: remove NoGroAsyncUdpSocket (#1867) 2026-04-10 23:22:08 +08:00
Luna Yao 19c80c7b9c cli: do not add offset when port = 0 (#2085) 2026-04-10 23:21:15 +08:00
Luna Yao a879dd1b14 chore: update Rust to 2024 edition (#2066) 2026-04-10 00:22:12 +08:00
Luna Yao a8feb9ac2b chore: use Debug to print errors (#2086) 2026-04-09 09:45:55 +08:00
Luna Yao c5fbd29c0e ci: fix skip condition for draft pull requests in CI workflows (#2088)
* ci: run xxx-result only when pre_job is run successfully
* fix get-result steps
2026-04-09 09:45:04 +08:00
Luna Yao 26b1794723 ci: accelerate pipeline (#2078)
* enable concurrency
* do not run build on draft PRs
* enable fail-fast for build workflows
2026-04-08 08:43:03 +08:00
Luna Yao 371b4b70a3 proto(utils): add TransientDigest trait (#2071) 2026-04-08 00:06:48 +08:00
Luna Yao b2cc38ee63 chore(clippy): disallow some methods from itertools (#2075) 2026-04-07 16:27:33 +08:00
Luna Yao 79b562cdc9 drop peer_mgr in time (#2064) 2026-04-06 11:31:05 +08:00
fanyang e3f089251c fix(ospf): mitigate route sync storm under connection flapping (#2063)
Addresses issue #2016 where nodes behind unstable networks
(e.g. campus firewalls) cause excessive traffic that can freeze
the remote node.

Two changes in peer_ospf_route.rs:

- Make do_sync_route_info only trigger reverse sync_now when
  incoming data actually changed the route table or foreign
  network state.  The previous unconditional sync_now created
  an A->B->A->B ping-pong cycle on every RPC exchange.

- Add exponential backoff (50ms..5s) to session_task retry loop.
  The previous fixed 50ms retry produced ~20 RPCs/s during
  sustained network instability.
2026-04-06 11:26:20 +08:00
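The backoff added to the session retry loop can be sketched as a doubling delay capped at 5s (names here are illustrative; the actual implementation lives in peer_ospf_route.rs and may differ in detail):

```rust
use std::time::Duration;

// Sketch of the 50ms..5s exponential backoff described above.
struct Backoff {
    cur: Duration,
    max: Duration,
}

impl Backoff {
    fn new() -> Self {
        Self { cur: Duration::from_millis(50), max: Duration::from_secs(5) }
    }

    // Return the current delay, then double it (capped at `max`).
    fn next_delay(&mut self) -> Duration {
        let d = self.cur;
        self.cur = (self.cur * 2).min(self.max);
        d
    }

    // Reset after a successful sync so recovery stays fast.
    fn reset(&mut self) {
        self.cur = Duration::from_millis(50);
    }
}
```

Instead of a fixed ~20 RPCs/s under sustained failure, the retry rate decays toward one attempt every 5s, then snaps back to 50ms after the first success.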
fanyang cf6dcbc054 Fix IPv6 TCP tunnel display formatting (#1980)
Normalize composite tunnel display values before rendering peer and
debug output so IPv6 tunnel types no longer append `6` to the port.

- Preserve prefixes like `txt-` while converting tunnel schemes to
  their IPv6 form.
- Recover malformed values such as `txt-tcp://...:110106` into
  `txt-tcp6://...:11010`.
- Reuse the normalized remote address display in CLI debug output.
2026-04-05 22:12:55 +08:00
286 changed files with 26938 additions and 7025 deletions
+35 -54
@@ -1,29 +1,40 @@
[target.x86_64-unknown-linux-musl]
linker = "rust-lld"
rustflags = ["-C", "linker-flavor=ld.lld"]
# region Native
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
[target.aarch64-unknown-linux-ohos]
ar = "/usr/local/ohos-sdk/linux/native/llvm/bin/llvm-ar"
linker = "/home/runner/sdk/native/llvm/aarch64-unknown-linux-ohos-clang.sh"
[target.'cfg(all(windows, target_env = "msvc"))']
rustflags = ["-C", "target-feature=+crt-static"]
[target.aarch64-unknown-linux-ohos.env]
PKG_CONFIG_PATH = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib/pkgconfig:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib/pkgconfig"
PKG_CONFIG_LIBDIR = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib"
PKG_CONFIG_SYSROOT_DIR = "/usr/local/ohos-sdk/linux/native/sysroot"
SYSROOT = "/usr/local/ohos-sdk/linux/native/sysroot"
# region
# region CI
[target.x86_64-unknown-linux-musl]
rustflags = ["-C", "target-feature=+crt-static"]
[target.aarch64-unknown-linux-musl]
linker = "aarch64-unknown-linux-musl-gcc"
rustflags = ["-C", "target-feature=+crt-static"]
[target.riscv64gc-unknown-linux-musl]
linker = "riscv64-unknown-linux-musl-gcc"
rustflags = ["-C", "target-feature=+crt-static"]
[target.'cfg(all(windows, target_env = "msvc"))']
[target.armv7-unknown-linux-musleabihf]
rustflags = ["-C", "target-feature=+crt-static"]
[target.armv7-unknown-linux-musleabi]
rustflags = ["-C", "target-feature=+crt-static"]
[target.arm-unknown-linux-musleabihf]
rustflags = ["-C", "target-feature=+crt-static"]
[target.arm-unknown-linux-musleabi]
rustflags = ["-C", "target-feature=+crt-static"]
[target.loongarch64-unknown-linux-musl]
rustflags = ["-C", "target-feature=+crt-static"]
[target.mipsel-unknown-linux-musl]
@@ -64,44 +75,14 @@ rustflags = [
"gcc",
]
[target.armv7-unknown-linux-musleabihf]
linker = "armv7-unknown-linux-musleabihf-gcc"
rustflags = ["-C", "target-feature=+crt-static"]
[target.aarch64-unknown-linux-ohos]
ar = "/usr/local/ohos-sdk/linux/native/llvm/bin/llvm-ar"
linker = "/home/runner/sdk/native/llvm/aarch64-unknown-linux-ohos-clang.sh"
[target.armv7-unknown-linux-musleabi]
linker = "armv7-unknown-linux-musleabi-gcc"
rustflags = ["-C", "target-feature=+crt-static"]
[target.aarch64-unknown-linux-ohos.env]
PKG_CONFIG_PATH = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib/pkgconfig:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib/pkgconfig"
PKG_CONFIG_LIBDIR = "/usr/local/ohos-sdk/linux/native/sysroot/usr/lib:/usr/local/ohos-sdk/linux/native/sysroot/usr/local/lib"
PKG_CONFIG_SYSROOT_DIR = "/usr/local/ohos-sdk/linux/native/sysroot"
SYSROOT = "/usr/local/ohos-sdk/linux/native/sysroot"
[target.loongarch64-unknown-linux-musl]
linker = "loongarch64-unknown-linux-musl-gcc"
rustflags = ["-C", "target-feature=+crt-static"]
[target.arm-unknown-linux-musleabihf]
linker = "arm-unknown-linux-musleabihf-gcc"
rustflags = [
"-C",
"target-feature=+crt-static",
"-L",
"./musl_gcc/arm-unknown-linux-musleabihf/arm-unknown-linux-musleabihf/lib",
"-L",
"./musl_gcc/arm-unknown-linux-musleabihf/lib/gcc/arm-unknown-linux-musleabihf/15.1.0",
"-l",
"atomic",
"-l",
"gcc",
]
[target.arm-unknown-linux-musleabi]
linker = "arm-unknown-linux-musleabi-gcc"
rustflags = [
"-C",
"target-feature=+crt-static",
"-L",
"./musl_gcc/arm-unknown-linux-musleabi/arm-unknown-linux-musleabi/lib",
"-L",
"./musl_gcc/arm-unknown-linux-musleabi/lib/gcc/arm-unknown-linux-musleabi/15.1.0",
"-l",
"atomic",
"-l",
"gcc",
]
# endregion
+60 -13
@@ -2,10 +2,17 @@ name: prepare-build
author: Luna
description: Prepare build environment
inputs:
web:
description: 'Whether to prepare the web build environment'
target:
description: 'The target to build for'
required: false
pnpm:
description: 'Whether to run pnpm build'
required: true
default: 'true'
pnpm-build-filter:
description: 'The filter argument for pnpm build (e.g. ./easytier-web/*)'
required: false
default: './easytier-web/*'
gui:
description: 'Whether to prepare the GUI build environment'
required: true
@@ -19,21 +26,61 @@ runs:
- run: mkdir -p easytier-gui/dist
shell: bash
- name: Setup Frontend Environment
if: ${{ inputs.web == 'true' }}
uses: ./.github/actions/prepare-pnpm
with:
build-filter: './easytier-web/*'
- name: Install GUI dependencies (Used by clippy)
if: ${{ inputs.gui == 'true' }}
- name: Install dependencies
if: ${{ runner.os == 'Linux' }}
run: |
bash ./.github/workflows/install_gui_dep.sh
sudo apt-get update
sudo apt-get install -qqy build-essential mold musl-tools
shell: bash
- name: Install Rust
- name: Setup Frontend Environment
if: ${{ inputs.pnpm == 'true' }}
uses: ./.github/actions/prepare-pnpm
with:
build-filter: ${{ inputs.pnpm-build-filter }}
- name: Install GUI dependencies (Linux)
if: ${{ inputs.gui == 'true' && runner.os == 'Linux' }}
run: |
bash ./.github/workflows/install_rust.sh
sudo apt-get install -qq xdg-utils \
libappindicator3-dev \
libgtk-3-dev \
librsvg2-dev \
libwebkit2gtk-4.1-dev \
libxdo-dev
shell: bash
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: 1.95
target: ${{ !contains(inputs.target, 'mips') && inputs.target || '' }}
components: ${{ contains(inputs.target, 'mips') && 'rust-src' || '' }}
cache: false
rustflags: ''
- name: Install Rust (MIPS)
if: ${{ contains(inputs.target, 'mips') }}
run: |
MUSL_TARGET=${{ inputs.target }}sf
mkdir -p ./musl_gcc
wget --inet4-only -c https://github.com/cross-tools/musl-cross/releases/download/20250520/${MUSL_TARGET}.tar.xz -P ./musl_gcc/
tar xf ./musl_gcc/${MUSL_TARGET}.tar.xz -C ./musl_gcc/
sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/bin/*gcc /usr/bin/
sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/include/ /usr/include/musl-cross
sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/${MUSL_TARGET}/sysroot/ ./musl_gcc/sysroot
sudo chmod -R a+rwx ./musl_gcc
if [[ -d "./musl_gcc/sysroot" ]]; then
echo "BINDGEN_EXTRA_CLANG_ARGS=--sysroot=$(readlink -f ./musl_gcc/sysroot)" >> $GITHUB_ENV
fi
cd "$PWD/musl_gcc/${MUSL_TARGET}/lib/gcc/${MUSL_TARGET}/15.1.0" || exit 255
# for panic-abort
cp libgcc_eh.a libunwind.a
# for mimalloc
ar x libgcc.a _ctzsi2.o _clz.o _bswapsi2.o
ar rcs libctz.a _ctzsi2.o _clz.o _bswapsi2.o
shell: bash
- name: Setup protoc
+124 -147
@@ -2,9 +2,14 @@ name: EasyTier Core
on:
push:
branches: ["develop", "main", "releases/**"]
branches: [ "develop", "main", "releases/**" ]
pull_request:
branches: ["develop", "main"]
branches: [ "develop", "main" ]
types: [ opened, synchronize, reopened, ready_for_review ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
@@ -18,6 +23,7 @@ jobs:
pre_job:
# continue-on-error: true # Uncomment once integration is finished
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' || !github.event.pull_request.draft
# Map a step output to a job output
outputs:
# do not skip push on branch starts with releases/
@@ -30,7 +36,7 @@ jobs:
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh", "easytier-web/**"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/actions/**", "easytier-web/**"]'
build_web:
runs-on: ubuntu-latest
needs: pre_job
@@ -51,41 +57,48 @@ jobs:
easytier-web/frontend/dist/*
build:
strategy:
fail-fast: false
fail-fast: true
matrix:
include:
- TARGET: aarch64-unknown-linux-musl
OS: ubuntu-22.04
ARTIFACT_NAME: linux-aarch64
- TARGET: x86_64-unknown-linux-musl
OS: ubuntu-22.04
OS: ubuntu-24.04
ARTIFACT_NAME: linux-x86_64
- TARGET: riscv64gc-unknown-linux-musl
OS: ubuntu-22.04
ARTIFACT_NAME: linux-riscv64
- TARGET: mips-unknown-linux-musl
OS: ubuntu-22.04
ARTIFACT_NAME: linux-mips
- TARGET: mipsel-unknown-linux-musl
OS: ubuntu-22.04
ARTIFACT_NAME: linux-mipsel
- TARGET: armv7-unknown-linux-musleabihf # raspberry pi 2-3-4, not tested
OS: ubuntu-22.04
ARTIFACT_NAME: linux-armv7hf
- TARGET: armv7-unknown-linux-musleabi # raspberry pi 2-3-4, not tested
OS: ubuntu-22.04
ARTIFACT_NAME: linux-armv7
- TARGET: arm-unknown-linux-musleabihf # raspberry pi 0-1, not tested
OS: ubuntu-22.04
ARTIFACT_NAME: linux-armhf
- TARGET: arm-unknown-linux-musleabi # raspberry pi 0-1, not tested
OS: ubuntu-22.04
ARTIFACT_NAME: linux-arm
- TARGET: aarch64-unknown-linux-musl
OS: ubuntu-24.04-arm
ARTIFACT_NAME: linux-aarch64
- TARGET: riscv64gc-unknown-linux-musl
OS: ubuntu-24.04
ARTIFACT_NAME: linux-riscv64
- TARGET: loongarch64-unknown-linux-musl
OS: ubuntu-24.04
ARTIFACT_NAME: linux-loongarch64
- TARGET: armv7-unknown-linux-musleabihf # raspberry pi 2-3-4, not tested
OS: ubuntu-24.04
ARTIFACT_NAME: linux-armv7hf
- TARGET: armv7-unknown-linux-musleabi # raspberry pi 2-3-4, not tested
OS: ubuntu-24.04
ARTIFACT_NAME: linux-armv7
- TARGET: arm-unknown-linux-musleabihf # raspberry pi 0-1, not tested
OS: ubuntu-24.04
ARTIFACT_NAME: linux-armhf
- TARGET: arm-unknown-linux-musleabi # raspberry pi 0-1, not tested
OS: ubuntu-24.04
ARTIFACT_NAME: linux-arm
- TARGET: mips-unknown-linux-musl
OS: ubuntu-24.04
ARTIFACT_NAME: linux-mips
- TARGET: mipsel-unknown-linux-musl
OS: ubuntu-24.04
ARTIFACT_NAME: linux-mipsel
- TARGET: x86_64-unknown-freebsd
OS: ubuntu-24.04
ARTIFACT_NAME: freebsd-13.2-x86_64
BSD_VERSION: 13.2
- TARGET: x86_64-apple-darwin
OS: macos-latest
ARTIFACT_NAME: macos-x86_64
@@ -96,17 +109,12 @@ jobs:
- TARGET: x86_64-pc-windows-msvc
OS: windows-latest
ARTIFACT_NAME: windows-x86_64
- TARGET: aarch64-pc-windows-msvc
OS: windows-latest
ARTIFACT_NAME: windows-arm64
- TARGET: i686-pc-windows-msvc
OS: windows-latest
ARTIFACT_NAME: windows-i686
- TARGET: x86_64-unknown-freebsd
OS: ubuntu-22.04
ARTIFACT_NAME: freebsd-13.2-x86_64
BSD_VERSION: 13.2
- TARGET: aarch64-pc-windows-msvc
OS: windows-11-arm
ARTIFACT_NAME: windows-arm64
runs-on: ${{ matrix.OS }}
env:
@@ -131,8 +139,15 @@ jobs:
name: easytier-web-dashboard
path: easytier-web/frontend/dist/
- name: Prepare build environment
uses: ./.github/actions/prepare-build
with:
target: ${{ matrix.TARGET }}
gui: true
pnpm: true
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
if: ${{ ! endsWith(matrix.TARGET, 'freebsd') }}
with:
# The prefix cache key, this can be changed to start a new cache manually.
# default: "v0-rust"
@@ -140,96 +155,54 @@ jobs:
shared-key: "core-registry"
cache-targets: "false"
- name: Setup protoc
uses: arduino/setup-protoc@v3
- uses: mlugg/setup-zig@v2
if: ${{ contains(matrix.OS, 'ubuntu') }}
with:
# GitHub repo token to use to avoid rate limiter
repo-token: ${{ secrets.GITHUB_TOKEN }}
version: 0.16.0
use-cache: true
- name: Build Core & Cli
if: ${{ ! endsWith(matrix.TARGET, 'freebsd') }}
run: |
bash ./.github/workflows/install_rust.sh
- uses: taiki-e/install-action@v2
if: ${{ contains(matrix.OS, 'ubuntu') }}
with:
tool: cargo-zigbuild
# loongarch need llvm-18
if [[ $TARGET =~ ^loongarch.*$ ]]; then
sudo apt-get install -qq llvm-18 clang-18
export LLVM_CONFIG_PATH=/usr/lib/llvm-18/bin/llvm-config
fi
# we set the sysroot when sysroot is a dir
# this dir is a soft link generated by install_rust.sh
# kcp-sys need this to gen ffi bindings. without this clang may fail to find some libc headers such as bits/libc-header-start.h
if [[ -d "./musl_gcc/sysroot" ]]; then
export BINDGEN_EXTRA_CLANG_ARGS=--sysroot=$(readlink -f ./musl_gcc/sysroot)
fi
if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
cargo +nightly-2026-02-02 build -r --target $TARGET -Z build-std=std,panic_abort --package=easytier --features=jemalloc
- name: Build
if: ${{ !contains(matrix.TARGET, 'mips') }}
run: |
if [[ "$TARGET" == *windows* ]]; then
SUFFIX=.exe
else
if [[ $OS =~ ^windows.*$ ]]; then
SUFFIX=.exe
CORE_FEATURES="--features=mimalloc"
elif [[ $TARGET =~ ^riscv64.*$ || $TARGET =~ ^loongarch64.*$ || $TARGET =~ ^aarch64.*$ ]]; then
CORE_FEATURES="--features=mimalloc"
else
CORE_FEATURES="--features=jemalloc"
fi
cargo build --release --target $TARGET --package=easytier-web --features=embed
mv ./target/$TARGET/release/easytier-web"$SUFFIX" ./target/$TARGET/release/easytier-web-embed"$SUFFIX"
cargo build --release --target $TARGET $CORE_FEATURES
SUFFIX=""
fi
# Copied and slightly modified from @lmq8267 (https://github.com/lmq8267)
- name: Build Core & Cli (X86_64 FreeBSD)
uses: vmactions/freebsd-vm@670398e4236735b8b65805c3da44b7a511fb8b27
if: ${{ endsWith(matrix.TARGET, 'freebsd') }}
if [[ "$TARGET" =~ (x86_64-unknown-linux-musl|aarch64-unknown-linux-musl|windows|darwin) ]]; then
BUILD=build
else
BUILD=zigbuild
fi
if [[ "$TARGET" =~ ^(riscv64|loongarch64|aarch64).*$ || "$TARGET" =~ (freebsd|windows) ]]; then
FEATURES="mimalloc"
else
FEATURES="jemalloc"
fi
cargo $BUILD --release --target $TARGET --package=easytier-web --features=embed
mv ./target/$TARGET/release/easytier-web"$SUFFIX" ./target/$TARGET/release/easytier-web-embed"$SUFFIX"
cargo $BUILD --release --target $TARGET --features=$FEATURES
- name: Build (MIPS)
if: ${{ contains(matrix.TARGET, 'mips') }}
env:
TARGET: ${{ matrix.TARGET }}
with:
envs: TARGET
release: ${{ matrix.BSD_VERSION }}
arch: x86_64
usesh: true
mem: 6144
cpu: 4
run: |
uname -a
echo $SHELL
pwd
ls -lah
whoami
env | sort
pkg install -y git protobuf llvm-devel sudo curl
curl --proto 'https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. $HOME/.cargo/env
rustup set auto-self-update disable
rustup install 1.93
rustup default 1.93
export CC=clang
export CXX=clang++
export CARGO_TERM_COLOR=always
cargo build --release --verbose --target $TARGET --package=easytier-web --features=embed
mv ./target/$TARGET/release/easytier-web ./target/$TARGET/release/easytier-web-embed
cargo build --release --verbose --target $TARGET --features=mimalloc
mkdir -p built-bins/$TARGET/release/
mv ./target/$TARGET/release/easytier-web-embed ./built-bins/$TARGET/release/easytier-web-embed
mv ./target/$TARGET/release/easytier-web ./built-bins/$TARGET/release/easytier-web
mv ./target/$TARGET/release/easytier-core ./built-bins/$TARGET/release/easytier-core
mv ./target/$TARGET/release/easytier-cli ./built-bins/$TARGET/release/easytier-cli
# remove dirs to avoid copy many files back
rm -rf ./target ~/.cargo
mv ./built-bins ./target
RUSTC_BOOTSTRAP: 1
run: |
cargo build -r --target $TARGET -Z build-std=std,panic_abort --package=easytier --features=jemalloc
- name: Compress
run: |
mkdir -p ./artifacts/objects/
# windows is the only OS using a different convention for executable file name
if [[ $OS =~ ^windows.*$ ]]; then
SUFFIX=.exe
@@ -242,26 +215,37 @@ jobs:
find "easytier/third_party/${ARCH_DIR}" -maxdepth 1 -type f \( -name "*.dll" -o -name "*.sys" \) -exec cp {} ./artifacts/objects/ \;
fi
fi
if [[ $GITHUB_REF_TYPE =~ ^tag$ ]]; then
TAG=$GITHUB_REF_NAME
else
TAG=$GITHUB_SHA
fi
if [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ (loongarch|freebsd) ]]; then
HOST_ARCH=$(uname -m)
case $HOST_ARCH in
x86_64) UPX_ARCH="amd64" ;;
aarch64) UPX_ARCH="arm64" ;;
*) UPX_ARCH="amd64" ;;
esac
if [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ ^.*freebsd$ && ! $TARGET =~ ^loongarch.*$ && ! $TARGET =~ ^riscv64.*$ ]]; then
UPX_VERSION=4.2.4
curl -L https://github.com/upx/upx/releases/download/v${UPX_VERSION}/upx-${UPX_VERSION}-amd64_linux.tar.xz -s | tar xJvf -
cp upx-${UPX_VERSION}-amd64_linux/upx .
./upx --lzma --best ./target/$TARGET/release/easytier-core"$SUFFIX"
./upx --lzma --best ./target/$TARGET/release/easytier-cli"$SUFFIX"
UPX_PKG="upx-${UPX_VERSION}-${UPX_ARCH}_linux"
curl -L "https://github.com/upx/upx/releases/download/v${UPX_VERSION}/${UPX_PKG}.tar.xz" -s | tar xJvf -
cp "${UPX_PKG}/upx" .
UPX_BIN=./upx
fi
mv ./target/$TARGET/release/easytier-core"$SUFFIX" ./artifacts/objects/
mv ./target/$TARGET/release/easytier-cli"$SUFFIX" ./artifacts/objects/
if [[ ! $TARGET =~ ^mips.*$ ]]; then
mv ./target/$TARGET/release/easytier-web"$SUFFIX" ./artifacts/objects/
mv ./target/$TARGET/release/easytier-web-embed"$SUFFIX" ./artifacts/objects/
fi
for BIN in ./target/$TARGET/release/easytier-{core,cli,web,web-embed}"$SUFFIX"; do
if [[ -f "$BIN" ]]; then
if [[ -n "$UPX_BIN" ]]; then
$UPX_BIN --lzma --best "$BIN" || true
fi
mv "$BIN" ./artifacts/objects/
fi
done
mv ./artifacts/objects/* ./artifacts/
rm -rf ./artifacts/objects/
@@ -273,25 +257,10 @@ jobs:
path: |
./artifacts/*
core-result:
if: needs.pre_job.outputs.should_skip != 'true' && always()
runs-on: ubuntu-latest
needs:
- pre_job
- build_web
- build
steps:
- name: Mark result as failed
if: needs.build.result != 'success'
run: exit 1
magisk_build:
needs:
- pre_job
- build_web
- build
if: needs.pre_job.outputs.should_skip != 'true' && always()
build_magisk:
runs-on: ubuntu-latest
needs: [ pre_job, build_web, build ]
if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
steps:
- name: Checkout Code
uses: actions/checkout@v5 # code must be checked out first to read the module config
@@ -311,7 +280,6 @@ jobs:
cp ./downloaded-binaries/easytier-cli ./easytier-contrib/easytier-magisk/
cp ./downloaded-binaries/easytier-web ./easytier-contrib/easytier-magisk/
# Upload the generated module
- name: Upload Magisk Module
uses: actions/upload-artifact@v5
@@ -322,3 +290,12 @@ jobs:
!./easytier-contrib/easytier-magisk/build.sh
!./easytier-contrib/easytier-magisk/magisk_update.json
if-no-files-found: error
core-result:
runs-on: ubuntu-latest
needs: [ pre_job, build_web, build, build_magisk ]
if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
steps:
- name: Mark result as failed
if: contains(needs.*.result, 'failure')
run: exit 1
+1 -1
@@ -11,7 +11,7 @@ on:
image_tag:
description: 'Tag for this image build'
type: string
default: 'v2.6.0'
default: 'v2.6.4'
required: true
mark_latest:
description: 'Mark this image as latest'
+39 -86
@@ -5,7 +5,12 @@ on:
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
types: [opened, synchronize, reopened, ready_for_review]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
@@ -18,6 +23,7 @@ jobs:
pre_job:
# continue-on-error: true # Uncomment once integration is finished
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' || !github.event.pull_request.draft
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
@@ -29,20 +35,20 @@ jobs:
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh", ".github/workflows/install_gui_dep.sh", "easytier-web/frontend-lib/**"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/actions/**", "easytier-web/frontend-lib/**"]'
build-gui:
strategy:
fail-fast: false
fail-fast: true
matrix:
include:
- TARGET: aarch64-unknown-linux-musl
OS: ubuntu-22.04
GUI_TARGET: aarch64-unknown-linux-gnu
ARTIFACT_NAME: linux-aarch64
- TARGET: x86_64-unknown-linux-musl
OS: ubuntu-22.04
OS: ubuntu-24.04
GUI_TARGET: x86_64-unknown-linux-gnu
ARTIFACT_NAME: linux-x86_64
- TARGET: aarch64-unknown-linux-musl
OS: ubuntu-24.04-arm
GUI_TARGET: aarch64-unknown-linux-gnu
ARTIFACT_NAME: linux-aarch64
- TARGET: x86_64-apple-darwin
OS: macos-latest
@@ -57,16 +63,14 @@ jobs:
OS: windows-latest
GUI_TARGET: x86_64-pc-windows-msvc
ARTIFACT_NAME: windows-x86_64
- TARGET: aarch64-pc-windows-msvc
OS: windows-latest
GUI_TARGET: aarch64-pc-windows-msvc
ARTIFACT_NAME: windows-arm64
- TARGET: i686-pc-windows-msvc
OS: windows-latest
GUI_TARGET: i686-pc-windows-msvc
ARTIFACT_NAME: windows-i686
- TARGET: aarch64-pc-windows-msvc
OS: windows-11-arm
GUI_TARGET: aarch64-pc-windows-msvc
ARTIFACT_NAME: windows-arm64
runs-on: ${{ matrix.OS }}
env:
@@ -80,75 +84,29 @@ jobs:
steps:
- uses: actions/checkout@v5
- name: Install GUI dependencies (x86 only)
if: ${{ matrix.TARGET == 'x86_64-unknown-linux-musl' }}
run: bash ./.github/workflows/install_gui_dep.sh
- name: Install GUI cross compile (aarch64 only)
if: ${{ matrix.TARGET == 'aarch64-unknown-linux-musl' }}
run: |
# see https://tauri.app/v1/guides/building/linux/
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
sudo dpkg --add-architecture arm64
sudo apt update
sudo apt install aptitude
sudo aptitude install -y libgstreamer1.0-0:arm64 gstreamer1.0-plugins-base:arm64 gstreamer1.0-plugins-good:arm64 \
libgstreamer-gl1.0-0:arm64 libgstreamer-plugins-base1.0-0:arm64 libgstreamer-plugins-good1.0-0:arm64 libwebkit2gtk-4.1-0:arm64 \
libwebkit2gtk-4.1-dev:arm64 libssl-dev:arm64 gcc-aarch64-linux-gnu libsoup-3.0-dev:arm64 libjavascriptcoregtk-4.1-dev:arm64
echo "PKG_CONFIG_SYSROOT_DIR=/usr/aarch64-linux-gnu/" >> "$GITHUB_ENV"
echo "PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig/" >> "$GITHUB_ENV"
- name: Install rpm package (Linux target only)
if: ${{ contains(matrix.TARGET, '-linux-') }}
run: |
sudo apt update
sudo apt install -y rpm
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- name: Setup Frontend Environment
uses: ./.github/actions/prepare-pnpm
- name: Prepare build environment
uses: ./.github/actions/prepare-build
with:
target: ${{ matrix.TARGET }}
gui: true
pnpm: true
pnpm-build-filter: ''
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
with:
# The prefix cache key, this can be changed to start a new cache manually.
# default: "v0-rust"
prefix-key: ""
- name: Install rust target
run: bash ./.github/workflows/install_rust.sh
- name: Setup protoc
uses: arduino/setup-protoc@v3
with:
# GitHub repo token to use to avoid rate limiter
repo-token: ${{ secrets.GITHUB_TOKEN }}
shared-key: "gui-registry"
cache-targets: "false"
- name: copy correct DLLs
if: ${{ matrix.OS == 'windows-latest' }}
if: ${{ contains(matrix.GUI_TARGET, 'windows') }}
run: |
case $TARGET in
x86_64*) ARCH_DIR=x86_64 ;;
@@ -164,10 +122,9 @@ jobs:
uses: tauri-apps/tauri-action@v0
with:
projectPath: ./easytier-gui
# https://tauri.app/v1/guides/building/linux/#cross-compiling-tauri-applications-for-arm-based-devices
args: --verbose --target ${{ matrix.GUI_TARGET }} ${{ contains(matrix.TARGET, '-linux-') && contains(matrix.TARGET, 'aarch64') && '--bundles deb,rpm' || '' }}
args: --verbose --target ${{ matrix.GUI_TARGET }}
- name: Compress
- name: Collect artifact
run: |
mkdir -p ./artifacts/objects/
@@ -176,18 +133,16 @@ jobs:
else
TAG=$GITHUB_SHA
fi
# copy gui bundle, gui is built without specific target
if [[ $OS =~ ^windows.*$ ]]; then
if [[ $GUI_TARGET =~ windows ]]; then
mv ./target/$GUI_TARGET/release/bundle/nsis/*.exe ./artifacts/objects/
elif [[ $OS =~ ^macos.*$ ]]; then
elif [[ $GUI_TARGET =~ darwin ]]; then
mv ./target/$GUI_TARGET/release/bundle/dmg/*.dmg ./artifacts/objects/
elif [[ $OS =~ ^ubuntu.*$ && ! $TARGET =~ ^mips.*$ ]]; then
elif [[ $GUI_TARGET =~ linux ]]; then
mv ./target/$GUI_TARGET/release/bundle/deb/*.deb ./artifacts/objects/
mv ./target/$GUI_TARGET/release/bundle/rpm/*.rpm ./artifacts/objects/
if [[ $GUI_TARGET =~ ^x86_64.*$ ]]; then
# currently only x86 appimage is supported
mv ./target/$GUI_TARGET/release/bundle/appimage/*.AppImage ./artifacts/objects/
fi
mv ./target/$GUI_TARGET/release/bundle/appimage/*.AppImage ./artifacts/objects/
fi
mv ./artifacts/objects/* ./artifacts/
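The hunk above switches the bundle-collection branches from `$OS` to `$GUI_TARGET`, so the bundle type is keyed off the target triple rather than the runner OS. A minimal sketch of that mapping (the `bundle_kind` helper is hypothetical):

```shell
#!/usr/bin/env sh
# Decide which Tauri bundle directories to collect from the target triple,
# matching the windows/darwin/linux branches in the workflow above.
bundle_kind() {
  case "$1" in
    *windows*) echo nsis ;;
    *darwin*)  echo dmg ;;
    *linux*)   echo "deb rpm appimage" ;;
    *)         echo unknown ;;
  esac
}
```

Keying off the triple keeps the logic correct when cross-compiling (e.g. building a Linux bundle on an arm64 runner), which the old `$OS` checks could not express.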
@@ -201,12 +156,10 @@ jobs:
./artifacts/*
gui-result:
if: needs.pre_job.outputs.should_skip != 'true' && always()
runs-on: ubuntu-latest
needs:
- pre_job
- build-gui
needs: [ pre_job, build-gui ]
if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
steps:
- name: Mark result as failed
if: needs.build-gui.result != 'success'
if: contains(needs.*.result, 'failure')
run: exit 1
-11
@@ -1,11 +0,0 @@
sudo apt update
sudo apt install -qq libwebkit2gtk-4.1-dev \
build-essential \
curl \
wget \
file \
libgtk-3-dev \
librsvg2-dev \
libxdo-dev \
libssl-dev \
patchelf
-61
@@ -1,61 +0,0 @@
#!/usr/bin/env bash
# env needed:
# - TARGET
# - GUI_TARGET
# - OS
# dependencies are only needed on ubuntu as that's the only place where
# we make cross-compilation
if [[ $OS =~ ^ubuntu.*$ ]]; then
sudo apt-get update && sudo apt-get install -qq musl-tools libappindicator3-dev llvm clang
# https://github.com/cross-tools/musl-cross/releases
# if "musl" is a substring of TARGET, we assume that we are using musl
MUSL_TARGET=$TARGET
# if target is mips or mipsel, we should use soft-float version of musl
if [[ $TARGET =~ ^mips.*$ || $TARGET =~ ^mipsel.*$ ]]; then
MUSL_TARGET=${TARGET}sf
elif [[ $TARGET =~ ^riscv64gc-.*$ ]]; then
MUSL_TARGET=${TARGET/#riscv64gc-/riscv64-}
fi
if [[ $MUSL_TARGET =~ musl ]]; then
mkdir -p ./musl_gcc
wget --inet4-only -c https://github.com/cross-tools/musl-cross/releases/download/20250520/${MUSL_TARGET}.tar.xz -P ./musl_gcc/
tar xf ./musl_gcc/${MUSL_TARGET}.tar.xz -C ./musl_gcc/
sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/bin/*gcc /usr/bin/
sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/include/ /usr/include/musl-cross
sudo ln -sf $(pwd)/musl_gcc/${MUSL_TARGET}/${MUSL_TARGET}/sysroot/ ./musl_gcc/sysroot
sudo chmod -R a+rwx ./musl_gcc
fi
fi
# see https://github.com/rust-lang/rustup/issues/3709
rustup set auto-self-update disable
rustup install 1.93
rustup default 1.93
# mips/mipsel cannot add target from rustup, need compile by ourselves
if [[ $OS =~ ^ubuntu.*$ && $TARGET =~ ^mips.*$ ]]; then
cd "$PWD/musl_gcc/${MUSL_TARGET}/lib/gcc/${MUSL_TARGET}/15.1.0" || exit 255
# for panic-abort
cp libgcc_eh.a libunwind.a
# for mimalloc
ar x libgcc.a _ctzsi2.o _clz.o _bswapsi2.o
ar rcs libctz.a _ctzsi2.o _clz.o _bswapsi2.o
rustup toolchain install nightly-2026-02-02-x86_64-unknown-linux-gnu
rustup component add rust-src --toolchain nightly-2026-02-02-x86_64-unknown-linux-gnu
# https://github.com/rust-lang/rust/issues/128808
# remove it after Cargo or rustc fix this.
RUST_LIB_SRC=$HOME/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/
if [[ -f $RUST_LIB_SRC/library/Cargo.lock && ! -f $RUST_LIB_SRC/Cargo.lock ]]; then
cp -f $RUST_LIB_SRC/library/Cargo.lock $RUST_LIB_SRC/Cargo.lock
fi
else
rustup target add $TARGET
if [[ $GUI_TARGET != '' ]]; then
rustup target add $GUI_TARGET
fi
fi
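The deleted `install_rust.sh` derives the musl-cross toolchain name from the Rust target triple: mips targets get a soft-float `sf` suffix, and `riscv64gc-` is renamed to `riscv64-` because musl-cross releases drop the `gc` extension letters. That logic can be condensed into one helper (the `musl_target_for` name is illustrative):

```shell
#!/usr/bin/env sh
# Translate a Rust target triple into the matching musl-cross toolchain name,
# mirroring the MUSL_TARGET rewrites in the deleted script above.
musl_target_for() {
  case "$1" in
    mips*)       echo "${1}sf" ;;                  # mips/mipsel need the soft-float variant
    riscv64gc-*) echo "riscv64-${1#riscv64gc-}" ;; # musl-cross names the triple riscv64-...
    *)           echo "$1" ;;
  esac
}
```

`${1#riscv64gc-}` is POSIX prefix removal, equivalent in effect to the script's bash-only `${TARGET/#riscv64gc-/riscv64-}` substitution.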
+41 -38
@@ -5,7 +5,12 @@ on:
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
types: [opened, synchronize, reopened, ready_for_review]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
@@ -18,6 +23,7 @@ jobs:
pre_job:
# continue-on-error: true # Uncomment once integration is finished
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' || !github.event.pull_request.draft
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
@@ -29,20 +35,25 @@ jobs:
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/workflows/install_rust.sh"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/actions/**"]'
build-mobile:
strategy:
fail-fast: false
fail-fast: true
matrix:
include:
- TARGET: android
OS: ubuntu-22.04
ARTIFACT_NAME: android
runs-on: ${{ matrix.OS }}
- TARGET: aarch64-linux-android
ARCH: aarch64
- TARGET: armv7-linux-androideabi
ARCH: armv7
- TARGET: i686-linux-android
ARCH: i686
- TARGET: x86_64-linux-android
ARCH: x86_64
runs-on: ubuntu-latest
env:
NAME: easytier
TARGET: ${{ matrix.TARGET }}
OS: ${{ matrix.OS }}
ARCH: ${{ matrix.ARCH }}
OSS_BUCKET: ${{ secrets.ALIYUN_OSS_BUCKET }}
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
@@ -61,47 +72,41 @@ jobs:
- name: Setup Android SDK
uses: android-actions/setup-android@v3
with:
cmdline-tools-version: 11076708
packages: 'build-tools;34.0.0 ndk;26.0.10792818 tools platform-tools platforms;android-34 '
cmdline-tools-version: 12.0
packages: 'build-tools;34.0.0 ndk;26.0.10792818 platform-tools platforms;android-34 '
- name: Setup Android Environment
run: |
echo "$ANDROID_HOME/platform-tools" >> $GITHUB_PATH
echo "$ANDROID_HOME/ndk/26.0.10792818/toolchains/llvm/prebuilt/linux-x86_64/bin" >> $GITHUB_PATH
echo "NDK_HOME=$ANDROID_HOME/ndk/26.0.10792818/" > $GITHUB_ENV
echo "NDK_HOME=$ANDROID_HOME/ndk/26.0.10792818/" >> $GITHUB_ENV
- name: Setup Frontend Environment
uses: ./.github/actions/prepare-pnpm
- name: Prepare build environment
uses: ./.github/actions/prepare-build
with:
target: ${{ matrix.TARGET }}
gui: false
pnpm: true
pnpm-build-filter: ''
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
with:
# The prefix cache key, this can be changed to start a new cache manually.
# default: "v0-rust"
prefix-key: ""
shared-key: "gui-registry"
cache-targets: "false"
- name: Install rust target
run: |
bash ./.github/workflows/install_rust.sh
rustup target add aarch64-linux-android
rustup target add armv7-linux-androideabi
rustup target add i686-linux-android
rustup target add x86_64-linux-android
- name: Setup protoc
uses: arduino/setup-protoc@v3
with:
# GitHub repo token to use to avoid rate limiter
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Build Android
- name: Build
run: |
cd easytier-gui
pnpm tauri android build
pnpm tauri android build --apk --target "$ARCH" --split-per-abi
- name: Compress
- name: Collect artifact
run: |
mkdir -p ./artifacts/objects/
mv easytier-gui/src-tauri/gen/android/app/build/outputs/apk/universal/release/app-universal-release.apk ./artifacts/objects/
mv easytier-gui/src-tauri/gen/android/app/build/outputs/apk/*/release/*.apk ./artifacts/objects/
if [[ $GITHUB_REF_TYPE =~ ^tag$ ]]; then
TAG=$GITHUB_REF_NAME
@@ -109,23 +114,21 @@ jobs:
TAG=$GITHUB_SHA
fi
mv ./artifacts/objects/* ./artifacts
mv ./artifacts/objects/* ./artifacts/
rm -rf ./artifacts/objects/
- name: Archive artifact
uses: actions/upload-artifact@v5
with:
name: easytier-gui-${{ matrix.ARTIFACT_NAME }}
name: easytier-mobile-android-${{ matrix.ARCH }}
path: |
./artifacts/*
mobile-result:
if: needs.pre_job.outputs.should_skip != 'true' && always()
runs-on: ubuntu-latest
needs:
- pre_job
- build-mobile
needs: [ pre_job, build-mobile ]
if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
steps:
- name: Mark result as failed
if: needs.build-mobile.result != 'success'
if: contains(needs.*.result, 'failure')
run: exit 1
+15 -1
@@ -6,14 +6,22 @@ on:
paths:
- "**/*.nix"
- "flake.lock"
- "rust-toolchain.toml"
pull_request:
branches: ["main", "develop"]
types: [opened, synchronize, reopened, ready_for_review]
paths:
- "**/*.nix"
- "flake.lock"
- "rust-toolchain.toml"
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
check-full-shell:
if: github.event_name != 'pull_request' || !github.event.pull_request.draft
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
@@ -26,5 +34,11 @@ jobs:
- name: Magic Nix Cache
uses: DeterminateSystems/magic-nix-cache-action@v6
- name: Check full devShell
- name: Warm up full devShell
run: nix develop .#full --command true
- name: Cargo check in flake environment
run: nix develop .#full --command cargo check
- name: Cargo build in flake environment
run: nix develop .#full --command cargo build
+33 -12
@@ -8,8 +8,13 @@ on:
- '!*-pre'
pull_request:
branches: ["develop", "main"]
types: [opened, synchronize, reopened, ready_for_review]
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
@@ -20,18 +25,29 @@ defaults:
jobs:
cargo_fmt_check:
if: github.event_name != 'pull_request' || !github.event.pull_request.draft
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: fmt check
- name: Prepare build environment
uses: ./.github/actions/prepare-build
with:
gui: false
pnpm: false
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: rustfmt
- name: Check formatting
working-directory: ./easytier-contrib/easytier-ohrs
run: |
bash ../../.github/workflows/install_rust.sh
rustup component add rustfmt
cargo fmt --all -- --check
run: cargo fmt --all -- --check
pre_job:
# continue-on-error: true # Uncomment once integration is finished
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' || !github.event.pull_request.draft
# Map a step output to a job output
outputs:
# do not skip push on branch starts with releases/
@@ -44,7 +60,8 @@ jobs:
concurrent_skipping: "same_content_newer"
skip_after_successful_duplicate: "true"
cancel_others: "true"
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-contrib/easytier-ohrs/**", ".github/workflows/ohos.yml", ".github/workflows/install_rust.sh"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-contrib/easytier-ohrs/**", ".github/workflows/ohos.yml", ".github/actions/**"]'
build-ohos:
runs-on: ubuntu-latest
needs: pre_job
@@ -56,13 +73,12 @@ jobs:
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install -y \
sudo apt-get install -qq \
build-essential \
wget \
unzip \
git \
pkg-config curl libgl1-mesa-dev expect
sudo apt-get clean
- name: Resolve easytier version
run: |
@@ -134,6 +150,15 @@ jobs:
run: |
echo "TARGET_ARCH=aarch64-linux-ohos" >> $GITHUB_ENV
rustup install stable
rustup default stable
rustup target add aarch64-unknown-linux-ohos
- uses: taiki-e/install-action@v2
with:
tool: ohrs
- name: Create clang wrapper script
run: |
sudo mkdir -p $OHOS_NDK_HOME/native/llvm
@@ -152,11 +177,7 @@ jobs:
run: |
sudo apt-get install -y llvm clang lldb lld
sudo apt-get install -y protobuf-compiler
bash ../../.github/workflows/install_rust.sh
source env.sh
cargo install ohrs
rustup target add aarch64-unknown-linux-ohos
cargo update easytier
ohrs doctor
ohrs build --release --arch aarch
ohrs artifact
+2 -2
@@ -18,7 +18,7 @@ on:
version:
description: 'Version for this release'
type: string
default: 'v2.6.0'
default: 'v2.6.4'
required: true
make_latest:
description: 'Mark this release as latest'
@@ -92,4 +92,4 @@ jobs:
files: |
./zipped_assets/*
token: ${{ secrets.GITHUB_TOKEN }}
tag_name: ${{ inputs.version }}
tag_name: ${{ inputs.version }}
+28 -19
@@ -6,6 +6,10 @@ on:
pull_request:
branches: [ "develop", "main" ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
# RUSTC_WRAPPER: "sccache"
@@ -30,7 +34,7 @@ jobs:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
skip_after_successful_duplicate: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/test.yml", ".github/workflows/install_gui_dep.sh", ".github/workflows/install_rust.sh"]'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/test.yml", ".github/actions/**"]'
check:
name: Run linters & check
@@ -44,35 +48,36 @@ jobs:
uses: ./.github/actions/prepare-build
with:
gui: true
web: true
pnpm: true
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
- name: Install rustfmt and clippy
run: |
rustup component add rustfmt
rustup component add clippy
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: rustfmt,clippy
rustflags: ''
- uses: taiki-e/install-action@cargo-hack
- name: Check Cargo.lock is up to date
run: |
if ! cargo metadata --format-version 1 --locked --no-deps > /dev/null; then
echo "::error::Cargo.lock is out of date. Run cargo generate-lockfile or cargo build locally, then commit Cargo.lock."
exit 1
fi
- name: Check formatting
if: ${{ !cancelled() }}
run: cargo fmt --all -- --check
- name: Check Clippy
if: ${{ !cancelled() }}
run: cargo clippy --all-targets --features full --all -- -D warnings
- name: Check features
if: ${{ !cancelled() }}
run: cargo hack check --package easytier --each-feature --exclude-features macos-ne --verbose
- name: Check Cargo.lock is up to date
if: ${{ !cancelled() }}
run: |
if ! cargo metadata --format-version 1 --locked > /dev/null; then
echo "::error::Cargo.lock is out of date. Run cargo generate-lockfile or cargo build locally, then commit Cargo.lock."
exit 1
fi
pre-test:
name: Build test
runs-on: ubuntu-latest
@@ -85,7 +90,7 @@ jobs:
uses: ./.github/actions/prepare-build
with:
gui: true
web: true
pnpm: true
token: ${{ secrets.GITHUB_TOKEN }}
- uses: Swatinem/rust-cache@v2
@@ -123,6 +128,10 @@ jobs:
- name: Setup tools for test
run: sudo apt install bridge-utils
- name: Setup upnpd for test
run: |
sudo apt-get update
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y miniupnpd miniupnpd-iptables iptables
- name: Setup system for test
run: |
@@ -146,9 +155,9 @@ jobs:
test:
runs-on: ubuntu-latest
needs: [ pre_job, test_matrix ]
if: needs.pre_job.outputs.should_skip != 'true' && always()
needs: [ pre_job, check, test_matrix ]
if: needs.pre_job.result == 'success' && needs.pre_job.outputs.should_skip != 'true' && !cancelled()
steps:
- name: Mark result as failed
if: needs.test_matrix.result != 'success'
if: contains(needs.*.result, 'failure')
run: exit 1
+3 -3
@@ -26,7 +26,7 @@ Thank you for your interest in contributing to EasyTier! This document provides
#### Required Tools
- Node.js v21 or higher
- pnpm v9 or higher
- Rust toolchain (version 1.93)
- Rust toolchain (version 1.95)
- LLVM and Clang
- Protoc (Protocol Buffers compiler)
@@ -79,8 +79,8 @@ sudo apt install -y bridge-utils
2. Install dependencies:
```bash
# Install Rust toolchain
rustup install 1.93
rustup default 1.93
rustup install 1.95
rustup default 1.95
# Install project dependencies
pnpm -r install
+3 -3
@@ -34,7 +34,7 @@
#### 必需工具
- Node.js v21 或更高版本
- pnpm v9 或更高版本
- Rust 工具链(版本 1.93)
- Rust 工具链(版本 1.95)
- LLVM 和 Clang
- Protoc(Protocol Buffers 编译器)
@@ -87,8 +87,8 @@ sudo apt install -y bridge-utils
2. 安装依赖:
```bash
# 安装 Rust 工具链
rustup install 1.93
rustup default 1.93
rustup install 1.95
rustup default 1.95
# 安装项目依赖
pnpm -r install
Generated
+1667 -1099
File diff suppressed because it is too large
+4
@@ -14,6 +14,10 @@ exclude = [
"easytier-contrib/easytier-ohrs", # it needs ohrs sdk
]
[workspace.package]
edition = "2024"
rust-version = "1.95"
[profile.dev]
panic = "unwind"
debug = 2
+3 -3
@@ -108,9 +108,9 @@ After successful execution, you can check the network status using `easytier-cli
```text
| ipv4 | hostname | cost | lat_ms | loss_rate | rx_bytes | tx_bytes | tunnel_proto | nat_type | id | version |
| ------------ | -------------- | ----- | ------ | --------- | -------- | -------- | ------------ | -------- | ---------- | --------------- |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.6.0-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.6.0-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.6.0-70e69a38~ |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.6.2-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.6.2-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.6.2-70e69a38~ |
```
You can test connectivity between nodes:
+3 -3
@@ -108,9 +108,9 @@ sudo easytier-core -d --network-name abc --network-secret abc -p tcp://<共享
```text
| ipv4 | hostname | cost | lat_ms | loss_rate | rx_bytes | tx_bytes | tunnel_proto | nat_type | id | version |
| ------------ | -------------- | ----- | ------ | --------- | -------- | -------- | ------------ | -------- | ---------- | --------------- |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.6.0-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.6.0-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.6.0-70e69a38~ |
| 10.126.126.1 | abc-1 | Local | * | * | * | * | udp | FullCone | 439804259 | 2.6.2-70e69a38~ |
| 10.126.126.2 | abc-2 | p2p | 3.452 | 0 | 17.33 kB | 20.42 kB | udp | FullCone | 390879727 | 2.6.2-70e69a38~ |
| | PublicServer_a | p2p | 27.796 | 0.000 | 50.01 kB | 67.46 kB | tcp | Unknown | 3771642457 | 2.6.2-70e69a38~ |
```
您可以测试节点之间的连通性:
@@ -1,7 +1,7 @@
[package]
name = "easytier-android-jni"
version = "0.1.0"
edition = "2021"
edition.workspace = true
[lib]
crate-type = ["cdylib"]
@@ -1,7 +1,7 @@
use easytier::proto::api::manage::{NetworkInstanceRunningInfo, NetworkInstanceRunningInfoMap};
use jni::JNIEnv;
use jni::objects::{JClass, JObjectArray, JString};
use jni::sys::{jint, jstring};
use jni::JNIEnv;
use once_cell::sync::Lazy;
use std::ffi::{CStr, CString};
use std::ptr;
@@ -15,7 +15,7 @@ pub struct KeyValuePair {
}
// Declare the external C functions
extern "C" {
unsafe extern "C" {
fn set_tun_fd(inst_name: *const std::ffi::c_char, fd: std::ffi::c_int) -> std::ffi::c_int;
fn get_error_msg(out: *mut *const std::ffi::c_char);
fn free_string(s: *const std::ffi::c_char);
@@ -68,7 +68,7 @@ fn throw_exception(env: &mut JNIEnv, message: &str) {
}
/// Set the TUN file descriptor
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_setTunFd(
mut env: JNIEnv,
_class: JClass,
@@ -87,17 +87,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_setTunFd(
unsafe {
let result = set_tun_fd(inst_name_cstr.as_ptr(), fd);
if result != 0 {
if let Some(error) = get_last_error() {
throw_exception(&mut env, &error);
}
if result != 0
&& let Some(error) = get_last_error()
{
throw_exception(&mut env, &error);
}
result
}
}
/// Parse the config
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_parseConfig(
mut env: JNIEnv,
_class: JClass,
@@ -115,17 +115,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_parseConfig(
unsafe {
let result = parse_config(config_cstr.as_ptr());
if result != 0 {
if let Some(error) = get_last_error() {
throw_exception(&mut env, &error);
}
if result != 0
&& let Some(error) = get_last_error()
{
throw_exception(&mut env, &error);
}
result
}
}
/// Run the network instance
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_runNetworkInstance(
mut env: JNIEnv,
_class: JClass,
@@ -143,17 +143,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_runNetworkInstance(
unsafe {
let result = run_network_instance(config_cstr.as_ptr());
if result != 0 {
if let Some(error) = get_last_error() {
throw_exception(&mut env, &error);
}
if result != 0
&& let Some(error) = get_last_error()
{
throw_exception(&mut env, &error);
}
result
}
}
/// Retain the network instance
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
mut env: JNIEnv,
_class: JClass,
@@ -165,10 +165,10 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
if instance_names.is_null() {
unsafe {
let result = retain_network_instance(ptr::null(), 0);
if result != 0 {
if let Some(error) = get_last_error() {
throw_exception(&mut env, &error);
}
if result != 0
&& let Some(error) = get_last_error()
{
throw_exception(&mut env, &error);
}
return result;
}
@@ -187,10 +187,10 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
if array_length == 0 {
unsafe {
let result = retain_network_instance(ptr::null(), 0);
if result != 0 {
if let Some(error) = get_last_error() {
throw_exception(&mut env, &error);
}
if result != 0
&& let Some(error) = get_last_error()
{
throw_exception(&mut env, &error);
}
return result;
}
@@ -234,17 +234,17 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_retainNetworkInstance(
unsafe {
let result = retain_network_instance(c_string_ptrs.as_ptr(), c_string_ptrs.len());
if result != 0 {
if let Some(error) = get_last_error() {
throw_exception(&mut env, &error);
}
if result != 0
&& let Some(error) = get_last_error()
{
throw_exception(&mut env, &error);
}
result
}
}
/// Collect network info
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_collectNetworkInfos(
mut env: JNIEnv,
_class: JClass,
@@ -304,7 +304,7 @@ pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_collectNetworkInfos(
}
/// Get the last error message
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "system" fn Java_com_easytier_jni_EasyTierJNI_getLastError(
env: JNIEnv,
_class: JClass,
+1 -1
@@ -1,7 +1,7 @@
[package]
name = "easytier-ffi"
version = "0.1.0"
edition = "2021"
edition.workspace = true
[lib]
crate-type = ["cdylib"]
+7 -7
@@ -30,7 +30,7 @@ fn set_error_msg(msg: &str) {
/// # Safety
/// Set the tun fd
#[no_mangle]
#[unsafe(no_mangle)]
pub unsafe extern "C" fn set_tun_fd(
inst_name: *const std::ffi::c_char,
fd: std::ffi::c_int,
@@ -59,7 +59,7 @@ pub unsafe extern "C" fn set_tun_fd(
/// # Safety
/// Get the last error message
#[no_mangle]
#[unsafe(no_mangle)]
pub unsafe extern "C" fn get_error_msg(out: *mut *const std::ffi::c_char) {
let msg_buf = ERROR_MSG.lock().unwrap();
if msg_buf.is_empty() {
@@ -74,7 +74,7 @@ pub unsafe extern "C" fn get_error_msg(out: *mut *const std::ffi::c_char) {
}
}
#[no_mangle]
#[unsafe(no_mangle)]
pub extern "C" fn free_string(s: *const std::ffi::c_char) {
if s.is_null() {
return;
@@ -86,7 +86,7 @@ pub extern "C" fn free_string(s: *const std::ffi::c_char) {
/// # Safety
/// Parse the config
#[no_mangle]
#[unsafe(no_mangle)]
pub unsafe extern "C" fn parse_config(cfg_str: *const std::ffi::c_char) -> std::ffi::c_int {
let cfg_str = unsafe {
assert!(!cfg_str.is_null());
@@ -105,7 +105,7 @@ pub unsafe extern "C" fn parse_config(cfg_str: *const std::ffi::c_char) -> std::
/// # Safety
/// Run the network instance
#[no_mangle]
#[unsafe(no_mangle)]
pub unsafe extern "C" fn run_network_instance(cfg_str: *const std::ffi::c_char) -> std::ffi::c_int {
let cfg_str = unsafe {
assert!(!cfg_str.is_null());
@@ -144,7 +144,7 @@ pub unsafe extern "C" fn run_network_instance(cfg_str: *const std::ffi::c_char)
/// # Safety
/// Retain the network instance
-#[no_mangle]
+#[unsafe(no_mangle)]
pub unsafe extern "C" fn retain_network_instance(
inst_names: *const *const std::ffi::c_char,
length: usize,
@@ -188,7 +188,7 @@ pub unsafe extern "C" fn retain_network_instance(
/// # Safety
/// Collect the network infos
-#[no_mangle]
+#[unsafe(no_mangle)]
pub unsafe extern "C" fn collect_network_infos(
infos: *mut KeyValuePair,
max_length: usize,
+1 -1
@@ -1,6 +1,6 @@
id=easytier_magisk
name=EasyTier_Magisk
-version=v2.6.0
+version=v2.6.4
versionCode=1
author=EasyTier
description=easytier magisk module @EasyTier(https://github.com/EasyTier/EasyTier)
+544 -132
File diff suppressed because it is too large
+10
@@ -7,6 +7,10 @@ edition = "2024"
crate-type=["cdylib"]
[dependencies]
async-trait = "0.1"
base64 = "0.22"
flate2 = "1.1"
gethostname = "1.1"
ohos-hilog-binding = {version = "*", features = ["redirect"]}
easytier = { path = "../../easytier" }
napi-derive-ohos = "1.1"
@@ -26,10 +30,16 @@ napi-ohos = { version = "1.1", default-features = false, features = [
"web_stream",
] }
once_cell = "1.21.3"
ipnet = "2.10"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.125"
prost-reflect = { version = "0.14.5", default-features = false, features = ["derive"] }
rusqlite = { version = "0.32", features = ["bundled"] }
tracing-subscriber = "0.3.19"
tracing-core = "0.1.33"
tracing = "0.1.41"
tokio = { version = "1", features = ["rt-multi-thread", "sync", "time"] }
url = "2.5"
uuid = { version = "1.5.0", features = [
"v4",
"fast-rng",
@@ -0,0 +1,4 @@
pub(crate) mod repository;
pub(crate) mod services;
pub(crate) mod storage;
pub(crate) mod types;
@@ -0,0 +1,13 @@
#[path = "../../config_repo/field_store.rs"]
mod field_store;
#[path = "../../config_repo/import_export.rs"]
mod import_export;
#[path = "../../config_repo/legacy_migration.rs"]
mod legacy_migration;
#[path = "../../config_repo/validation.rs"]
mod validation;
#[path = "../../config_repo.rs"]
mod repo;
pub use repo::*;
@@ -0,0 +1,2 @@
pub(crate) mod schema_service;
pub(crate) mod share_link_service;
@@ -0,0 +1,414 @@
use easytier::proto::ALL_DESCRIPTOR_BYTES;
use napi_derive_ohos::napi;
use once_cell::sync::Lazy;
use prost_reflect::{Cardinality, DescriptorPool, FieldDescriptor, Kind, MessageDescriptor};
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct FieldOption {
pub label: String,
pub value: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct ValidationRule {
pub rule_type: String,
pub arg: String,
pub message: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct NetworkConfigSchema {
pub node_kind: String,
pub name: String,
pub field_number: i32,
pub type_name: Option<String>,
pub semantic_type: Option<String>,
pub value_kind: String,
pub is_list: bool,
pub required: bool,
pub default_value_text: Option<String>,
pub enum_options: Vec<FieldOption>,
pub validations: Vec<ValidationRule>,
pub children: Vec<NetworkConfigSchema>,
pub definitions: Vec<NetworkConfigSchema>,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct ConfigFieldMapping {
pub field_name: String,
pub field_number: i32,
}
static DESCRIPTOR_POOL: Lazy<DescriptorPool> = Lazy::new(|| {
DescriptorPool::decode(ALL_DESCRIPTOR_BYTES)
.expect("easytier descriptor pool should decode from embedded protobuf descriptors")
});
const NETWORK_CONFIG_MESSAGE_NAME: &str = "api.manage.NetworkConfig";
fn descriptor_pool() -> &'static DescriptorPool {
&DESCRIPTOR_POOL
}
fn network_config_descriptor() -> MessageDescriptor {
descriptor_pool()
.get_message_by_name(NETWORK_CONFIG_MESSAGE_NAME)
.expect("api.manage.NetworkConfig descriptor should exist")
}
fn field_default_value_text(field: &FieldDescriptor) -> Option<String> {
if field.is_list() || field.is_map() {
return Some("[]".to_string());
}
match field.kind() {
Kind::Bool => Some("false".to_string()),
Kind::String => Some("\"\"".to_string()),
Kind::Bytes => Some("\"\"".to_string()),
Kind::Int32
| Kind::Sint32
| Kind::Sfixed32
| Kind::Int64
| Kind::Sint64
| Kind::Sfixed64
| Kind::Uint32
| Kind::Fixed32
| Kind::Uint64
| Kind::Fixed64
| Kind::Float
| Kind::Double => Some("0".to_string()),
Kind::Enum(enum_desc) => enum_desc
.get_value(0)
.map(|value| value.number().to_string()),
Kind::Message(_) => None,
}
}
fn field_type_name(field: &FieldDescriptor) -> Option<String> {
match field.kind() {
Kind::Enum(enum_desc) => Some(enum_desc.full_name().to_string()),
Kind::Message(message_desc) => Some(message_desc.full_name().to_string()),
_ => None,
}
}
fn field_semantic_type(field: &FieldDescriptor) -> Option<String> {
match field.name() {
"virtual_ipv4" => Some("cidr_ip".to_string()),
"network_length" => Some("cidr_mask".to_string()),
"peer_urls" => Some("peer[]".to_string()),
"proxy_cidrs" => Some("cidr[]".to_string()),
"listener_urls" => Some("listener[]".to_string()),
"routes" => Some("route[]".to_string()),
"exit_nodes" => Some("ip[]".to_string()),
"relay_network_whitelist" => Some("network_name[]".to_string()),
"mapped_listeners" => Some("mapped_listener[]".to_string()),
"port_forwards" => Some("port_forward[]".to_string()),
_ => None,
}
}
fn enum_options(kind: Kind) -> Vec<FieldOption> {
match kind {
Kind::Enum(enum_desc) => enum_desc
.values()
.map(|value| FieldOption {
label: value.name().to_string(),
value: value.number().to_string(),
})
.collect(),
_ => Vec::new(),
}
}
fn should_expose_field(field: &FieldDescriptor) -> bool {
match field.containing_oneof() {
Some(_) => field
.field_descriptor_proto()
.proto3_optional
.unwrap_or(false),
None => true,
}
}
fn build_validations(field: &FieldDescriptor) -> Vec<ValidationRule> {
if field.cardinality() == Cardinality::Required {
return vec![ValidationRule {
rule_type: "required".to_string(),
arg: String::new(),
message: format!("{} is required", field.name()),
}];
}
Vec::new()
}
fn kind_to_value_kind(field: &FieldDescriptor) -> String {
if field.is_map() {
return "object".to_string();
}
match field.kind() {
Kind::Bool => "boolean".to_string(),
Kind::String | Kind::Bytes => "string".to_string(),
Kind::Int32
| Kind::Sint32
| Kind::Sfixed32
| Kind::Int64
| Kind::Sint64
| Kind::Sfixed64
| Kind::Uint32
| Kind::Fixed32
| Kind::Uint64
| Kind::Fixed64
| Kind::Float
| Kind::Double => "number".to_string(),
Kind::Enum(_) => "enum".to_string(),
Kind::Message(_) => "object".to_string(),
}
}
fn build_node(
node_kind: &str,
name: String,
field_number: i32,
type_name: Option<String>,
semantic_type: Option<String>,
value_kind: String,
is_list: bool,
required: bool,
default_value_text: Option<String>,
enum_options: Vec<FieldOption>,
validations: Vec<ValidationRule>,
children: Vec<NetworkConfigSchema>,
definitions: Vec<NetworkConfigSchema>,
) -> NetworkConfigSchema {
NetworkConfigSchema {
node_kind: node_kind.to_string(),
name,
field_number,
type_name,
semantic_type,
value_kind,
is_list,
required,
default_value_text,
enum_options,
validations,
children,
definitions,
}
}
fn build_map_entry_node(message_desc: &MessageDescriptor) -> NetworkConfigSchema {
let key_field = message_desc.map_entry_key_field();
let value_field = message_desc.map_entry_value_field();
build_node(
"object",
message_desc.name().to_string(),
0,
Some(message_desc.full_name().to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
vec![
build_schema_field_node(&key_field),
build_schema_field_node(&value_field),
],
Vec::new(),
)
}
fn field_children(field: &FieldDescriptor) -> Vec<NetworkConfigSchema> {
if field.is_map() {
if let Kind::Message(message_desc) = field.kind() {
return vec![build_map_entry_node(&message_desc)];
}
}
match field.kind() {
Kind::Message(message_desc) => build_message_children(&message_desc),
_ => Vec::new(),
}
}
fn build_message_children(message_desc: &MessageDescriptor) -> Vec<NetworkConfigSchema> {
message_desc
.fields()
.filter(should_expose_field)
.map(|field| build_schema_field_node(&field))
.collect()
}
fn build_schema_field_node(field: &FieldDescriptor) -> NetworkConfigSchema {
build_node(
"field",
field.name().to_string(),
field.number() as i32,
field_type_name(field),
field_semantic_type(field),
kind_to_value_kind(field),
field.is_list() || field.is_map(),
field.cardinality() == Cardinality::Required,
field_default_value_text(field),
enum_options(field.kind()),
build_validations(field),
field_children(field),
Vec::new(),
)
}
fn collect_definitions() -> Vec<NetworkConfigSchema> {
let mut definitions = Vec::new();
for message_desc in descriptor_pool().all_messages() {
let full_name = message_desc.full_name();
if full_name == NETWORK_CONFIG_MESSAGE_NAME || message_desc.is_map_entry() {
continue;
}
definitions.push(build_node(
"object",
full_name.to_string(),
0,
Some(full_name.to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
build_message_children(&message_desc),
Vec::new(),
));
}
for enum_desc in descriptor_pool().all_enums() {
definitions.push(build_node(
"enum",
enum_desc.full_name().to_string(),
0,
Some(enum_desc.full_name().to_string()),
None,
"enum".to_string(),
false,
false,
None,
enum_options(Kind::Enum(enum_desc.clone())),
Vec::new(),
Vec::new(),
Vec::new(),
));
}
definitions.sort_by(|a, b| a.name.cmp(&b.name));
definitions
}
fn build_network_config_schema() -> NetworkConfigSchema {
let network_config = network_config_descriptor();
build_node(
"schema",
network_config.name().to_string(),
0,
Some(network_config.full_name().to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
build_message_children(&network_config),
collect_definitions(),
)
}
fn build_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
network_config_descriptor()
.fields()
.filter(should_expose_field)
.map(|field| ConfigFieldMapping {
field_name: field.name().to_string(),
field_number: field.number() as i32,
})
.collect()
}
pub fn get_network_config_schema() -> NetworkConfigSchema {
build_network_config_schema()
}
pub fn get_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
build_network_config_field_mappings()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn schema_is_exposed_as_single_tree_type() {
let schema = get_network_config_schema();
assert_eq!(schema.node_kind, "schema");
assert_eq!(schema.name, "NetworkConfig");
assert_eq!(
schema.type_name.as_deref(),
Some("api.manage.NetworkConfig")
);
let virtual_ipv4 = schema
.children
.iter()
.find(|field| field.name == "virtual_ipv4")
.expect("virtual_ipv4 field");
assert_eq!(virtual_ipv4.semantic_type.as_deref(), Some("cidr_ip"));
let secure_mode = schema
.children
.iter()
.find(|field| field.name == "secure_mode")
.expect("secure_mode field");
assert!(
secure_mode
.children
.iter()
.any(|field| field.name == "enabled")
);
let secure_mode_definition = schema
.definitions
.iter()
.find(|definition| definition.name == "common.SecureModeConfig")
.expect("secure mode definition");
assert!(
secure_mode_definition
.children
.iter()
.any(|field| field.name == "local_private_key")
);
let networking_method_definition = schema
.definitions
.iter()
.find(|definition| definition.name == "api.manage.NetworkingMethod")
.expect("networking method enum definition");
assert!(
networking_method_definition
.enum_options
.iter()
.any(|option| option.label == "PublicServer")
);
}
}
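The schema service above exposes the whole protobuf schema as one recursive `NetworkConfigSchema` type, with a `node_kind` discriminator standing in for separate schema/field/enum types so a single object shape can cross the NAPI boundary. A minimal std-only sketch of that single-tree-type idea (the `Node` struct and node names here are illustrative, not the real API):

```rust
// Hypothetical miniature of the single-tree-type design: one recursive
// struct whose node_kind tag distinguishes schema roots, fields, and enums.
#[derive(Debug)]
struct Node {
    node_kind: &'static str,
    name: String,
    children: Vec<Node>,
}

// Walk the tree once; any consumer (UI renderer, validator) can do the
// same because every node has the same shape.
fn count(node: &Node) -> usize {
    1 + node.children.iter().map(count).sum::<usize>()
}

fn main() {
    let schema = Node {
        node_kind: "schema",
        name: "NetworkConfig".into(),
        children: vec![
            Node { node_kind: "field", name: "hostname".into(), children: vec![] },
            Node { node_kind: "field", name: "virtual_ipv4".into(), children: vec![] },
        ],
    };
    assert_eq!(count(&schema), 3);
    println!("schema tree has {} nodes", count(&schema));
}
```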
@@ -0,0 +1,197 @@
use crate::config::repository::{get_config_record, save_config_record};
use crate::config::services::schema_service::get_network_config_field_mappings;
use crate::config::types::stored_config::SharedConfigLinkPayload;
use base64::{Engine as _, engine::general_purpose::URL_SAFE_NO_PAD};
use easytier::proto::api::manage::NetworkConfig;
use flate2::{Compression, read::ZlibDecoder, write::ZlibEncoder};
use gethostname::gethostname;
use std::collections::HashMap;
use std::io::{Read, Write};
use url::Url;
use uuid::Uuid;
const SHARE_LINK_HOST: &str = "easytier.cn";
const SHARE_LINK_PATH: &str = "/comp_cfg";
fn field_name_to_id_map() -> HashMap<String, String> {
get_network_config_field_mappings()
.into_iter()
.map(|mapping| (mapping.field_name, mapping.field_number.to_string()))
.collect()
}
fn field_id_to_name_map() -> HashMap<String, String> {
get_network_config_field_mappings()
.into_iter()
.map(|mapping| (mapping.field_number.to_string(), mapping.field_name))
.collect()
}
fn prune_empty(value: &serde_json::Value) -> Option<serde_json::Value> {
match value {
serde_json::Value::Null => None,
serde_json::Value::Array(values) if values.is_empty() => None,
_ => Some(value.clone()),
}
}
fn map_config_json(config: &NetworkConfig) -> Result<String, String> {
let field_name_to_id = field_name_to_id_map();
let raw = serde_json::to_value(config).map_err(|err| err.to_string())?;
let mut mapped = serde_json::Map::new();
for (key, value) in raw.as_object().cloned().unwrap_or_default() {
let Some(value) = prune_empty(&value) else {
continue;
};
let mapped_key = field_name_to_id.get(&key).cloned().unwrap_or(key);
mapped.insert(mapped_key, value);
}
serde_json::to_string(&mapped).map_err(|err| err.to_string())
}
fn unmap_config_json(raw: &str) -> Result<NetworkConfig, String> {
let field_id_to_name = field_id_to_name_map();
let value = serde_json::from_str::<serde_json::Value>(raw).map_err(|err| err.to_string())?;
let mut mapped = serde_json::Map::new();
for (key, value) in value.as_object().cloned().unwrap_or_default() {
let field_name = field_id_to_name.get(&key).cloned().unwrap_or(key);
mapped.insert(field_name, value);
}
serde_json::from_value(serde_json::Value::Object(mapped)).map_err(|err| err.to_string())
}
fn compress_to_base64url(raw: &str) -> Result<String, String> {
let mut encoder = ZlibEncoder::new(Vec::new(), Compression::best());
encoder
.write_all(raw.as_bytes())
.map_err(|err| err.to_string())?;
let compressed = encoder.finish().map_err(|err| err.to_string())?;
Ok(URL_SAFE_NO_PAD.encode(compressed))
}
fn decompress_from_base64url(raw: &str) -> Result<String, String> {
let compressed = URL_SAFE_NO_PAD.decode(raw).map_err(|err| err.to_string())?;
let mut decoder = ZlibDecoder::new(compressed.as_slice());
let mut out = String::new();
decoder
.read_to_string(&mut out)
.map_err(|err| err.to_string())?;
Ok(out)
}
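The helpers above pipe the mapped JSON through zlib and then URL-safe base64 without padding (`URL_SAFE_NO_PAD`), so the result can sit in a query parameter. The zlib step needs the `flate2` crate, but the base64url-no-pad encoding itself can be sketched in std-only Rust to show what the alphabet and padding rules mean:

```rust
// URL-safe base64 alphabet per RFC 4648: '+' and '/' become '-' and '_'.
const ALPHABET: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

// Encode without '=' padding, matching the URL_SAFE_NO_PAD engine's output.
fn b64url_no_pad(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        // 1 input byte -> 2 output chars, 2 -> 3, 3 -> 4; no '=' filler.
        let keep = match chunk.len() { 1 => 2, 2 => 3, _ => 4 };
        for i in 0..keep {
            out.push(ALPHABET[idx[i] as usize] as char);
        }
    }
    out
}

fn main() {
    assert_eq!(b64url_no_pad(b"hello"), "aGVsbG8");
    println!("{}", b64url_no_pad(b"hello"));
}
```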
pub fn build_config_share_link(
config_id: &str,
display_name: Option<String>,
only_start: bool,
) -> Option<String> {
let record = get_config_record(config_id)?;
let config = serde_json::from_str::<NetworkConfig>(&record.config_json).ok()?;
let mapped_json = map_config_json(&config).ok()?;
let compressed = compress_to_base64url(&mapped_json).ok()?;
let final_name = display_name
.or(Some(record.meta.display_name))
.filter(|name| !name.is_empty());
let mut url = Url::parse(&format!("https://{SHARE_LINK_HOST}{SHARE_LINK_PATH}")).ok()?;
url.query_pairs_mut().append_pair("cfg", &compressed);
if let Some(name) = final_name {
url.query_pairs_mut().append_pair("name", &name);
}
if only_start {
url.query_pairs_mut().append_pair("only_start", "true");
}
Some(url.to_string())
}
pub fn parse_config_share_link(share_link: &str) -> Option<SharedConfigLinkPayload> {
let url = Url::parse(share_link).ok()?;
if url.host_str()? != SHARE_LINK_HOST || url.path() != SHARE_LINK_PATH {
return None;
}
let cfg = url
.query_pairs()
.find(|(key, _)| key == "cfg")?
.1
.to_string();
let mapped_json = decompress_from_base64url(&cfg).ok()?;
let mut config = unmap_config_json(&mapped_json).ok()?;
config.instance_id = Some(Uuid::new_v4().to_string());
let hostname = gethostname().to_string_lossy().to_string();
if !hostname.is_empty() {
config.hostname = Some(hostname);
}
let config_json = serde_json::to_string(&config).ok()?;
let display_name = url
.query_pairs()
.find(|(key, _)| key == "name")
.map(|(_, value)| value.to_string())
.filter(|name| !name.is_empty());
let only_start = url
.query_pairs()
.find(|(key, _)| key == "only_start")
.map(|(_, value)| value == "true")
.unwrap_or(false);
Some(SharedConfigLinkPayload {
config_json,
display_name,
only_start,
})
}
pub fn import_config_share_link(
share_link: &str,
display_name_override: Option<String>,
) -> Option<String> {
let payload = parse_config_share_link(share_link)?;
let config = serde_json::from_str::<NetworkConfig>(&payload.config_json).ok()?;
let config_id = config.instance_id.clone()?;
let display_name = display_name_override
.filter(|name| !name.is_empty())
.or(payload.display_name)
.unwrap_or_else(|| config_id.clone());
save_config_record(config_id.clone(), display_name, payload.config_json)?;
Some(config_id)
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config_repo::{create_config_record, init_config_store};
use std::time::{SystemTime, UNIX_EPOCH};
fn test_root() -> String {
let unique = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
std::env::temp_dir()
.join(format!("easytier_ohrs_share_test_{unique}"))
.to_string_lossy()
.into_owned()
}
#[test]
fn share_link_roundtrip_works() {
assert!(init_config_store(test_root()));
create_config_record("cfg-share".to_string(), "share-demo".to_string())
.expect("create config");
let link = build_config_share_link("cfg-share", None, true).expect("share link");
let payload = parse_config_share_link(&link).expect("parse link");
let config =
serde_json::from_str::<NetworkConfig>(&payload.config_json).expect("config json");
assert!(payload.only_start);
assert_eq!(payload.display_name.as_deref(), Some("share-demo"));
assert_ne!(config.instance_id.as_deref(), Some("cfg-share"));
let imported_id = import_config_share_link(&link, None).expect("import link");
assert_ne!(imported_id, "cfg-share");
}
}
@@ -0,0 +1,333 @@
use crate::config::types::stored_config::{StoredConfigList, StoredConfigMeta};
use ohos_hilog_binding::{hilog_debug, hilog_error};
use rusqlite::{Connection, OptionalExtension, params};
use std::path::PathBuf;
use std::sync::Mutex;
use std::time::{SystemTime, UNIX_EPOCH};
static CONFIG_DB_PATH: Mutex<Option<PathBuf>> = Mutex::new(None);
const CONFIG_DB_FILE_NAME: &str = "easytier-config-store.db";
#[derive(Debug, Clone)]
struct StoredConfigMetaRecord {
config_id: String,
display_name: String,
created_at: String,
updated_at: String,
favorite: bool,
temporary: bool,
}
pub(crate) fn now_ts_string() -> String {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.map(|d| d.as_secs().to_string())
.unwrap_or_else(|_| "0".to_string())
}
fn db_file_path() -> Option<PathBuf> {
CONFIG_DB_PATH
.lock()
.ok()
.and_then(|guard| guard.as_ref().cloned())
}
fn init_schema(conn: &Connection) -> rusqlite::Result<()> {
conn.execute_batch(
"PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS stored_configs (
config_id TEXT PRIMARY KEY,
display_name TEXT NOT NULL,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
favorite INTEGER NOT NULL DEFAULT 0,
temporary INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS stored_config_fields (
config_id TEXT NOT NULL,
field_name TEXT NOT NULL,
field_json TEXT NOT NULL,
updated_at TEXT NOT NULL,
PRIMARY KEY (config_id, field_name),
FOREIGN KEY (config_id) REFERENCES stored_configs(config_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_stored_config_fields_config_id
ON stored_config_fields(config_id);",
)
}
pub(crate) fn open_db() -> Option<Connection> {
let path = db_file_path()?;
let conn = match Connection::open(&path) {
Ok(conn) => conn,
Err(e) => {
hilog_error!("[Rust] failed to open config db {}: {}", path.display(), e);
return None;
}
};
if let Err(e) = init_schema(&conn) {
hilog_error!(
"[Rust] failed to initialize config db {}: {}",
path.display(),
e
);
return None;
}
Some(conn)
}
fn row_to_meta(row: &rusqlite::Row<'_>) -> rusqlite::Result<StoredConfigMetaRecord> {
Ok(StoredConfigMetaRecord {
config_id: row.get(0)?,
display_name: row.get(1)?,
created_at: row.get(2)?,
updated_at: row.get(3)?,
favorite: row.get::<_, i64>(4)? != 0,
temporary: row.get::<_, i64>(5)? != 0,
})
}
fn load_meta_record(conn: &Connection, config_id: &str) -> Option<StoredConfigMetaRecord> {
conn.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
}
fn to_meta(record: StoredConfigMetaRecord) -> StoredConfigMeta {
StoredConfigMeta {
config_id: record.config_id,
display_name: record.display_name,
created_at: record.created_at,
updated_at: record.updated_at,
favorite: record.favorite,
temporary: record.temporary,
}
}
pub fn init_config_meta_store(root_dir: String) -> bool {
let root = PathBuf::from(root_dir);
if let Err(e) = std::fs::create_dir_all(&root) {
hilog_error!(
"[Rust] failed to create config db dir {}: {}",
root.display(),
e
);
return false;
}
let db_path = root.join(CONFIG_DB_FILE_NAME);
match CONFIG_DB_PATH.lock() {
Ok(mut guard) => {
*guard = Some(db_path.clone());
}
Err(e) => {
hilog_error!("[Rust] failed to lock config db path: {}", e);
return false;
}
}
if open_db().is_none() {
return false;
}
hilog_debug!("[Rust] initialized config db at {}", db_path.display());
true
}
pub fn list_config_meta_entries() -> StoredConfigList {
let Some(conn) = open_db() else {
return StoredConfigList { configs: vec![] };
};
let mut stmt = match conn.prepare(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs
ORDER BY updated_at DESC, display_name ASC",
) {
Ok(stmt) => stmt,
Err(e) => {
hilog_error!("[Rust] failed to prepare list meta query: {}", e);
return StoredConfigList { configs: vec![] };
}
};
let rows = match stmt.query_map([], row_to_meta) {
Ok(rows) => rows,
Err(e) => {
hilog_error!("[Rust] failed to list config meta rows: {}", e);
return StoredConfigList { configs: vec![] };
}
};
let configs = rows.filter_map(Result::ok).map(to_meta).collect();
StoredConfigList { configs }
}
pub fn get_config_display_name(config_id: &str) -> Option<String> {
let conn = open_db()?;
load_meta_record(&conn, config_id).map(|record| record.display_name)
}
pub fn get_config_meta(config_id: &str) -> Option<StoredConfigMeta> {
let conn = open_db()?;
load_meta_record(&conn, config_id).map(to_meta)
}
pub fn upsert_config_meta(
config_id: String,
display_name: String,
favorite: bool,
temporary: bool,
) -> StoredConfigMeta {
let now = now_ts_string();
let Some(conn) = open_db() else {
return StoredConfigMeta {
config_id,
display_name,
created_at: now.clone(),
updated_at: now,
favorite,
temporary,
};
};
let created_at = load_meta_record(&conn, &config_id)
.map(|record| record.created_at)
.unwrap_or_else(|| now.clone());
if let Err(e) = conn.execute(
"INSERT INTO stored_configs (
config_id, display_name, created_at, updated_at, favorite, temporary
) VALUES (?1, ?2, ?3, ?4, ?5, ?6)
ON CONFLICT(config_id) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at,
favorite = excluded.favorite,
temporary = excluded.temporary",
params![
config_id,
display_name,
created_at,
now,
if favorite { 1 } else { 0 },
if temporary { 1 } else { 0 }
],
) {
hilog_error!("[Rust] failed to upsert config meta: {}", e);
}
get_config_meta(&config_id).unwrap_or(StoredConfigMeta {
config_id,
display_name,
created_at,
updated_at: now,
favorite,
temporary,
})
}
pub(crate) fn upsert_config_meta_in_tx(
tx: &rusqlite::Transaction<'_>,
config_id: String,
display_name: String,
favorite: bool,
temporary: bool,
) -> Option<StoredConfigMeta> {
let now = now_ts_string();
let created_at = tx
.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
.map(|record| record.created_at)
.unwrap_or_else(|| now.clone());
tx.execute(
"INSERT INTO stored_configs (
config_id, display_name, created_at, updated_at, favorite, temporary
) VALUES (?1, ?2, ?3, ?4, ?5, ?6)
ON CONFLICT(config_id) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at,
favorite = excluded.favorite,
temporary = excluded.temporary",
params![
config_id,
display_name,
created_at,
now,
if favorite { 1 } else { 0 },
if temporary { 1 } else { 0 }
],
)
.ok()?;
tx.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
.map(to_meta)
.or(Some(StoredConfigMeta {
config_id,
display_name,
created_at,
updated_at: now,
favorite,
temporary,
}))
}
pub fn set_config_display_name(
config_id: String,
display_name: String,
) -> Option<StoredConfigMeta> {
let conn = open_db()?;
let mut record = load_meta_record(&conn, &config_id)?;
record.display_name = display_name;
record.updated_at = now_ts_string();
conn.execute(
"UPDATE stored_configs
SET display_name = ?2, updated_at = ?3
WHERE config_id = ?1",
params![config_id, record.display_name, record.updated_at],
)
.ok()?;
Some(to_meta(record))
}
pub fn delete_config_meta(config_id: &str) -> bool {
let Some(conn) = open_db() else {
return false;
};
match conn.execute(
"DELETE FROM stored_configs WHERE config_id = ?1",
params![config_id],
) {
Ok(rows) => rows > 0,
Err(e) => {
hilog_error!("[Rust] failed to delete config meta {}: {}", config_id, e);
false
}
}
}
@@ -0,0 +1 @@
pub(crate) mod config_meta;
@@ -0,0 +1 @@
pub(crate) mod stored_config;
@@ -0,0 +1,68 @@
use napi_derive_ohos::napi;
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigMeta {
pub config_id: String,
pub display_name: String,
pub created_at: String,
pub updated_at: String,
pub favorite: bool,
pub temporary: bool,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigRecord {
pub meta: StoredConfigMeta,
pub config_json: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigList {
pub configs: Vec<StoredConfigMeta>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct ExportTomlResult {
pub toml_text: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigSummary {
pub config_id: String,
pub display_name: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct SharedConfigLinkPayload {
pub config_json: String,
pub display_name: Option<String>,
pub only_start: bool,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct LocalSocketSyncMessage {
pub message_type: String,
pub payload_json: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct KeyValuePair {
pub key: String,
pub value: String,
}
@@ -0,0 +1,349 @@
use super::{field_store, import_export, legacy_migration, validation};
use crate::config::storage::config_meta::{
delete_config_meta, get_config_meta, init_config_meta_store, list_config_meta_entries, open_db,
upsert_config_meta_in_tx,
};
use crate::config::types::stored_config::{ExportTomlResult, StoredConfigRecord};
use easytier::common::config::ConfigLoader;
use easytier::proto::api::manage::NetworkConfig;
use ohos_hilog_binding::{hilog_debug, hilog_error};
use rusqlite::params;
use serde_json::Value;
use std::path::PathBuf;
use std::sync::Mutex;
static CONFIG_ROOT_DIR: Mutex<Option<PathBuf>> = Mutex::new(None);
pub(crate) const CONFIG_DIR_NAME: &str = "easytier-configs";
pub(crate) const KERNEL_SOCKET_FILE_NAME: &str = "easytier-kernel.sock";
pub(crate) fn config_root_dir() -> Option<PathBuf> {
CONFIG_ROOT_DIR
.lock()
.ok()
.and_then(|guard| guard.as_ref().cloned())
}
pub(crate) fn kernel_socket_path() -> Option<PathBuf> {
config_root_dir().map(|root| root.join(KERNEL_SOCKET_FILE_NAME))
}
pub(crate) fn legacy_config_file_path(config_id: &str) -> Option<PathBuf> {
legacy_migration::legacy_config_file_path(&config_root_dir(), CONFIG_DIR_NAME, config_id)
}
pub fn init_config_store(root_dir: String) -> bool {
let root = PathBuf::from(root_dir);
let configs_dir = root.join(CONFIG_DIR_NAME);
if let Err(e) = std::fs::create_dir_all(&configs_dir) {
hilog_error!(
"[Rust] failed to create config dir {}: {}",
configs_dir.display(),
e
);
return false;
}
match CONFIG_ROOT_DIR.lock() {
Ok(mut guard) => {
*guard = Some(root.clone());
}
Err(e) => {
hilog_error!("[Rust] failed to lock config root dir: {}", e);
return false;
}
}
if !init_config_meta_store(root.to_string_lossy().into_owned()) {
return false;
}
hilog_debug!(
"[Rust] initialized config repo at {}",
configs_dir.display()
);
true
}
fn migrate_legacy_file_if_needed(config_id: &str) -> Option<()> {
legacy_migration::migrate_legacy_file_if_needed(
&config_root_dir(),
CONFIG_DIR_NAME,
config_id,
save_config_record,
)
}
pub fn save_config_record(
config_id: String,
display_name: String,
config_json: String,
) -> Option<StoredConfigRecord> {
let config = match validation::validate_config_json(&config_json, config_id.clone()) {
Ok(config) => config,
Err(e) => {
hilog_error!("[Rust] save_config_record failed {}", e);
return None;
}
};
let normalized_json = match serde_json::to_string(&config) {
Ok(raw) => raw,
Err(e) => {
hilog_error!(
"[Rust] failed to serialize normalized config {}: {}",
config_id,
e
);
return None;
}
};
let fields = match validation::config_to_top_level_map(&config) {
Some(fields) => fields,
None => return None,
};
let conn = open_db()?;
let tx = conn.unchecked_transaction().ok()?;
let existing_meta = get_config_meta(&config_id);
let favorite = existing_meta
.as_ref()
.map(|meta| meta.favorite)
.unwrap_or(false);
let temporary = existing_meta
.as_ref()
.map(|meta| meta.temporary)
.unwrap_or(false);
let meta = upsert_config_meta_in_tx(&tx, config_id.clone(), display_name, favorite, temporary)?;
field_store::replace_config_fields(&tx, &config_id, fields)?;
tx.commit().ok()?;
if let Some(legacy_path) = legacy_config_file_path(&config_id) {
if legacy_path.exists() {
let _ = std::fs::remove_file(legacy_path);
}
}
Some(StoredConfigRecord {
meta,
config_json: normalized_json,
})
}
pub fn load_config_json(config_id: &str) -> Option<String> {
migrate_legacy_file_if_needed(config_id)?;
let object = field_store::load_config_map_from_db(config_id)?;
serde_json::to_string(&Value::Object(object)).ok()
}
pub fn get_config_record(config_id: &str) -> Option<StoredConfigRecord> {
let config_json = load_config_json(config_id)?;
let meta = get_config_meta(config_id)?;
Some(StoredConfigRecord { meta, config_json })
}
pub fn get_config_field_value(config_id: &str, field: &str) -> Option<String> {
migrate_legacy_file_if_needed(config_id)?;
let conn = open_db()?;
conn.query_row(
"SELECT field_json FROM stored_config_fields
WHERE config_id = ?1 AND field_name = ?2",
params![config_id, field],
|row| row.get::<_, String>(0),
)
.ok()
}
pub fn set_config_field_value(config_id: &str, field: &str, json_value: &str) -> bool {
if field.contains('.') {
return false;
}
let raw = match load_config_json(config_id) {
Some(raw) => raw,
None => return false,
};
let mut value = match serde_json::from_str::<Value>(&raw) {
Ok(value) => value,
Err(_) => return false,
};
let new_field_value = match serde_json::from_str::<Value>(json_value) {
Ok(value) => value,
Err(_) => return false,
};
let object = match value.as_object_mut() {
Some(object) => object,
None => return false,
};
object.insert(field.to_string(), new_field_value);
let normalized = match serde_json::to_string(&value) {
Ok(raw) => raw,
Err(_) => return false,
};
let display_name = get_config_meta(config_id)
.map(|meta| meta.display_name)
.unwrap_or_else(|| config_id.to_string());
save_config_record(config_id.to_string(), display_name, normalized).is_some()
}
pub fn get_display_name(config_id: &str) -> Option<String> {
get_config_meta(config_id).map(|meta| meta.display_name)
}
pub fn get_default_config_json() -> Option<String> {
crate::build_default_network_config_json().ok()
}
pub fn create_config_record(config_id: String, display_name: String) -> Option<StoredConfigRecord> {
let raw = get_default_config_json()?;
let mut config = serde_json::from_str::<NetworkConfig>(&raw).ok()?;
config.instance_id = Some(config_id.clone());
let normalized_json = serde_json::to_string(&config).ok()?;
save_config_record(config_id, display_name, normalized_json)
}
pub fn start_kernel_with_config_id(config_id: &str) -> bool {
let raw = match load_config_json(config_id) {
Some(raw) => raw,
None => return false,
};
crate::run_network_instance_from_json(&raw)
}
pub fn list_config_meta_json() -> String {
serde_json::to_string(&list_config_meta_entries().configs).unwrap_or_else(|_| "[]".to_string())
}
pub fn delete_config_record(config_id: &str) -> bool {
if let Some(path) = legacy_config_file_path(config_id) {
if path.exists() {
let _ = std::fs::remove_file(path);
}
}
let conn = match open_db() {
Some(conn) => conn,
None => return false,
};
if let Err(e) = conn.execute(
"DELETE FROM stored_config_fields WHERE config_id = ?1",
params![config_id],
) {
hilog_error!("[Rust] failed to delete config fields {}: {}", config_id, e);
return false;
}
delete_config_meta(config_id)
}
pub fn export_config_toml(config_id: &str) -> Option<ExportTomlResult> {
let record = get_config_record(config_id)?;
import_export::export_config_toml_from_record(&record)
}
pub fn import_toml_config(
toml_text: String,
display_name: Option<String>,
) -> Option<StoredConfigRecord> {
import_export::import_toml_to_record(toml_text, display_name, save_config_record)
}
#[cfg(test)]
mod tests {
use super::*;
use rusqlite::params;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
fn test_root() -> String {
let unique = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
let dir = std::env::temp_dir().join(format!("easytier_ohrs_test_{}", unique));
dir.to_string_lossy().into_owned()
}
#[test]
fn save_get_export_delete_roundtrip() {
let root = test_root();
assert!(init_config_store(root.clone()));
let config_json = crate::build_default_network_config_json().expect("default config");
let saved = save_config_record("cfg-1".to_string(), "test-config".to_string(), config_json)
.expect("save config");
assert_eq!(saved.meta.config_id, "cfg-1");
assert_eq!(saved.meta.display_name, "test-config");
let loaded = get_config_record("cfg-1").expect("load config");
assert_eq!(loaded.meta.display_name, "test-config");
assert!(loaded.config_json.contains("cfg-1"));
let legacy_json_path = PathBuf::from(&root)
.join(CONFIG_DIR_NAME)
.join("cfg-1.json");
assert!(
!legacy_json_path.exists(),
"config should no longer be persisted as a per-config json file"
);
let conn = open_db().expect("db should be open");
let field_count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM stored_config_fields WHERE config_id = ?1",
params!["cfg-1"],
|row| row.get(0),
)
.expect("count config fields");
assert!(field_count > 0, "config fields should be stored in sqlite");
let exported = export_config_toml("cfg-1").expect("export toml");
assert!(exported.toml_text.contains("instance_id"));
assert!(delete_config_record("cfg-1"));
assert!(get_config_record("cfg-1").is_none());
}
#[test]
fn set_config_field_updates_only_requested_top_level_field() {
let root = test_root();
assert!(init_config_store(root));
let config_json = crate::build_default_network_config_json().expect("default config");
save_config_record(
"cfg-field".to_string(),
"field-config".to_string(),
config_json,
)
.expect("save config");
let before_network_name = get_config_field_value("cfg-field", "network_name");
let before_instance_id = get_config_field_value("cfg-field", "instance_id")
.expect("instance id field should exist");
assert!(set_config_field_value(
"cfg-field",
"network_name",
"\"changed-network\""
));
assert_eq!(
get_config_field_value("cfg-field", "network_name"),
Some("\"changed-network\"".to_string())
);
assert_eq!(
get_config_field_value("cfg-field", "instance_id"),
Some(before_instance_id)
);
assert_ne!(
get_config_field_value("cfg-field", "network_name"),
before_network_name
);
}
}
@@ -0,0 +1,67 @@
use crate::config::storage::config_meta::{now_ts_string, open_db};
use ohos_hilog_binding::hilog_error;
use rusqlite::{Connection, params};
use serde_json::{Map, Value};
pub(super) fn load_config_map_from_db(config_id: &str) -> Option<Map<String, Value>> {
let conn = open_db()?;
let mut stmt = conn
.prepare(
"SELECT field_name, field_json
FROM stored_config_fields
WHERE config_id = ?1",
)
.ok()?;
let rows = stmt
.query_map(params![config_id], |row| {
let field_name: String = row.get(0)?;
let field_json: String = row.get(1)?;
Ok((field_name, field_json))
})
.ok()?;
let mut object = Map::new();
for row in rows {
let (field_name, field_json) = row.ok()?;
let value = serde_json::from_str::<Value>(&field_json).ok()?;
object.insert(field_name, value);
}
if object.is_empty() {
None
} else {
Some(object)
}
}
pub(super) fn replace_config_fields(
tx: &Connection,
config_id: &str,
fields: Map<String, Value>,
) -> Option<()> {
if let Err(e) = tx.execute(
"DELETE FROM stored_config_fields WHERE config_id = ?1",
params![config_id],
) {
hilog_error!(
"[Rust] failed to clear existing config fields {}: {}",
config_id,
e
);
return None;
}
for (field_name, value) in fields {
let field_json = serde_json::to_string(&value).ok()?;
if let Err(e) = tx.execute(
"INSERT INTO stored_config_fields (config_id, field_name, field_json, updated_at)
VALUES (?1, ?2, ?3, ?4)",
params![config_id, field_name, field_json, now_ts_string()],
) {
hilog_error!("[Rust] failed to persist config field {}: {}", config_id, e);
return None;
}
}
Some(())
}
@@ -0,0 +1,48 @@
use crate::config::types::stored_config::{ExportTomlResult, StoredConfigRecord};
use easytier::common::config::{ConfigLoader, TomlConfigLoader};
use easytier::proto::api::manage::NetworkConfig;
pub(super) fn export_config_toml_from_record(
record: &StoredConfigRecord,
) -> Option<ExportTomlResult> {
let config = serde_json::from_str::<NetworkConfig>(&record.config_json).ok()?;
let toml = config.gen_config().ok()?;
Some(ExportTomlResult {
toml_text: toml.dump(),
})
}
pub(super) fn import_toml_to_record(
toml_text: String,
display_name: Option<String>,
save_config_record: impl Fn(String, String, String) -> Option<StoredConfigRecord>,
) -> Option<StoredConfigRecord> {
let config =
NetworkConfig::new_from_config(TomlConfigLoader::new_from_str(&toml_text).ok()?).ok()?;
let config_id = config.instance_id.clone()?;
let name_from_toml = toml_text
.lines()
.find_map(|line| {
let trimmed = line.trim();
if !trimmed.starts_with("instance_name") {
return None;
}
trimmed.split_once('=').map(|(_, value)| {
value
.trim()
.trim_matches('"')
.trim_matches('\'')
.to_string()
})
})
.filter(|name| !name.is_empty());
let final_name = display_name
.filter(|name| !name.is_empty())
.or(name_from_toml)
.unwrap_or_else(|| config_id.clone());
let config_json = serde_json::to_string(&config).ok()?;
save_config_record(config_id, final_name, config_json)
}
@@ -0,0 +1,45 @@
use crate::config::storage::config_meta::get_config_meta;
use ohos_hilog_binding::hilog_error;
use std::path::PathBuf;
pub(super) fn legacy_config_file_path(
root_dir: &Option<PathBuf>,
config_dir_name: &str,
config_id: &str,
) -> Option<PathBuf> {
root_dir.as_ref().map(|root| {
root.join(config_dir_name)
.join(format!("{}.json", config_id))
})
}
pub(super) fn migrate_legacy_file_if_needed(
root_dir: &Option<PathBuf>,
config_dir_name: &str,
config_id: &str,
save_config_record: impl Fn(
String,
String,
String,
) -> Option<crate::config::types::stored_config::StoredConfigRecord>,
) -> Option<()> {
let legacy_path = legacy_config_file_path(root_dir, config_dir_name, config_id)?;
if !legacy_path.exists() {
return Some(());
}
let raw = std::fs::read_to_string(&legacy_path).ok()?;
let display_name = get_config_meta(config_id)
.map(|meta| meta.display_name)
.unwrap_or_else(|| config_id.to_string());
save_config_record(config_id.to_string(), display_name, raw)?;
if let Err(e) = std::fs::remove_file(&legacy_path) {
hilog_error!(
"[Rust] failed to remove legacy config file {}: {}",
legacy_path.display(),
e
);
}
Some(())
}
@@ -0,0 +1,30 @@
use easytier::proto::api::manage::NetworkConfig;
use serde_json::{Map, Value};
pub(super) fn normalize_config_id(
mut config: NetworkConfig,
requested_id: String,
) -> Result<NetworkConfig, String> {
if requested_id.is_empty() {
return Err("config_id is required".to_string());
}
config.instance_id = Some(requested_id);
Ok(config)
}
pub(super) fn validate_config_json(
config_json: &str,
config_id: String,
) -> Result<NetworkConfig, String> {
let config = serde_json::from_str::<NetworkConfig>(config_json)
.map_err(|e| format!("parse config json failed: {}", e))?;
let config = normalize_config_id(config, config_id)?;
config
.gen_config()
.map_err(|e| format!("generate toml failed: {}", e))?;
Ok(config)
}
pub(super) fn config_to_top_level_map(config: &NetworkConfig) -> Option<Map<String, Value>> {
serde_json::to_value(config).ok()?.as_object().cloned()
}
@@ -0,0 +1,2 @@
pub(crate) mod config_api;
pub(crate) mod runtime_api;
@@ -0,0 +1,46 @@
use crate::config;
pub(crate) fn init_config_store(root_dir: String) -> bool {
config::repository::init_config_store(root_dir)
}
pub(crate) fn list_configs() -> String {
config::repository::list_config_meta_json()
}
pub(crate) fn save_config(config_id: String, display_name: String, config_json: String) -> bool {
config::repository::save_config_record(config_id, display_name, config_json).is_some()
}
pub(crate) fn create_config(config_id: String, display_name: String) -> bool {
config::repository::create_config_record(config_id, display_name).is_some()
}
pub(crate) fn delete_stored_config_meta(config_id: String) -> bool {
config::repository::delete_config_record(&config_id)
}
pub(crate) fn get_config(config_id: String) -> Option<String> {
config::repository::load_config_json(&config_id)
}
pub(crate) fn get_default_config() -> Option<String> {
config::repository::get_default_config_json()
}
pub(crate) fn get_config_field(config_id: String, field: String) -> Option<String> {
config::repository::get_config_field_value(&config_id, &field)
}
pub(crate) fn set_config_field(config_id: String, field: String, json_value: String) -> bool {
config::repository::set_config_field_value(&config_id, &field, &json_value)
}
pub(crate) fn import_toml(toml_text: String, display_name: Option<String>) -> Option<String> {
config::repository::import_toml_config(toml_text, display_name)
.map(|record| record.meta.config_id)
}
pub(crate) fn export_toml(config_id: String) -> Option<String> {
config::repository::export_config_toml(&config_id).map(|ret| ret.toml_text)
}
@@ -0,0 +1,184 @@
use crate::config::repository::load_config_json;
use crate::config::storage::config_meta::get_config_display_name;
use crate::config::types::stored_config::KeyValuePair;
use crate::kernel_bridge::{
aggregate_requested_tun_routes, start_local_socket_server as start_local_socket_server_inner,
stop_local_socket_server as stop_local_socket_server_inner,
};
use crate::runtime::state::runtime_state::{
RuntimeAggregateState, TunAggregateState, clear_tun_attached, mark_tun_attached,
runtime_instance_from_running_info,
};
use crate::{ASYNC_RUNTIME, EASYTIER_VERSION, INSTANCE_MANAGER, WEB_CLIENTS};
use easytier::proto::api::manage::NetworkConfig;
use ohos_hilog_binding::{hilog_error, hilog_info};
use std::sync::Arc;
pub(crate) fn start_kernel(
config_id: String,
start_kernel_with_config_id: impl Fn(&str) -> bool,
) -> bool {
start_kernel_with_config_id(&config_id)
}
pub(crate) fn stop_kernel(
config_id: String,
stop_web_client: impl Fn(&str) -> bool,
parse_instance_uuid: impl Fn(&str) -> Option<uuid::Uuid>,
maybe_stop_local_socket_server: impl Fn(),
) -> bool {
clear_tun_attached(&config_id);
if stop_web_client(&config_id) {
return true;
}
let Some(instance_id) = parse_instance_uuid(&config_id) else {
return false;
};
let ret = INSTANCE_MANAGER
.delete_network_instance(vec![instance_id])
.map(|_| true)
.unwrap_or_else(|err| {
hilog_error!("[Rust] stop_kernel failed {}: {}", config_id, err);
false
});
maybe_stop_local_socket_server();
ret
}
pub(crate) fn stop_network_instance(
config_ids: Vec<String>,
stop_kernel: impl Fn(String) -> bool,
) -> bool {
let mut ok = true;
for config_id in config_ids {
ok = stop_kernel(config_id) && ok;
}
ok
}
pub(crate) fn collect_network_infos() -> Vec<KeyValuePair> {
let infos = match INSTANCE_MANAGER.collect_network_infos_sync() {
Ok(infos) => infos,
Err(err) => {
hilog_error!("[Rust] collect network infos failed {}", err);
return vec![];
}
};
infos
.into_iter()
.filter_map(|(key, value)| {
serde_json::to_string(&value)
.ok()
.map(|value_json| KeyValuePair {
key: key.to_string(),
value: value_json,
})
})
.collect()
}
pub(crate) fn set_tun_fd(
config_id: String,
fd: i32,
parse_instance_uuid: impl Fn(&str) -> Option<uuid::Uuid>,
) -> bool {
let Some(instance_id) = parse_instance_uuid(&config_id) else {
hilog_error!("[Rust] set_tun_fd invalid instance id: {}", config_id);
return false;
};
INSTANCE_MANAGER
.set_tun_fd(&instance_id, fd)
.map(|_| {
mark_tun_attached(&config_id);
hilog_info!(
"[Rust] set_tun_fd success instance={} fd={} marked_attached=true",
config_id,
fd
);
true
})
.unwrap_or_else(|err| {
hilog_error!("[Rust] set_tun_fd failed {}: {}", config_id, err);
false
})
}
pub(crate) fn get_runtime_snapshot() -> RuntimeAggregateState {
get_runtime_snapshot_inner()
}
pub(crate) fn get_runtime_snapshot_inner() -> RuntimeAggregateState {
let infos = match INSTANCE_MANAGER.collect_network_infos_sync() {
Ok(infos) => infos,
Err(err) => {
hilog_error!("[Rust] collect network infos failed {}", err);
return RuntimeAggregateState {
instances: vec![],
tun: TunAggregateState {
active: false,
attached_instance_ids: vec![],
aggregated_routes: vec![],
dns_servers: vec![],
need_rebuild: false,
},
running_instance_count: 0,
};
}
};
let mut instances = Vec::with_capacity(infos.len());
for (instance_uuid, info) in infos {
let config_id = instance_uuid.to_string();
let display_name = get_config_display_name(&config_id).unwrap_or_else(|| config_id.clone());
let config_json = load_config_json(&config_id);
let stored_config = config_json
.as_deref()
.and_then(|raw| serde_json::from_str::<NetworkConfig>(raw).ok());
let magic_dns_enabled = stored_config
.as_ref()
.and_then(|cfg| cfg.enable_magic_dns)
.unwrap_or(false);
let need_exit_node = stored_config
.as_ref()
.map(|cfg| !cfg.exit_nodes.is_empty())
.unwrap_or(false);
instances.push(runtime_instance_from_running_info(
config_id,
display_name,
magic_dns_enabled,
need_exit_node,
info,
));
}
instances.sort_by(|a, b| {
a.display_name
.cmp(&b.display_name)
.then_with(|| a.instance_id.cmp(&b.instance_id))
});
let attached_instance_ids = instances
.iter()
.filter(|instance| instance.tun_required)
.map(|instance| instance.instance_id.clone())
.collect::<Vec<_>>();
let aggregated_routes = aggregate_requested_tun_routes(&instances);
let running_instance_count =
instances.iter().filter(|instance| instance.running).count() as i32;
let tun_active = !attached_instance_ids.is_empty();
RuntimeAggregateState {
instances,
tun: TunAggregateState {
active: tun_active,
attached_instance_ids,
aggregated_routes,
dns_servers: vec![],
need_rebuild: false,
},
running_instance_count,
}
}
@@ -0,0 +1,6 @@
mod protocol;
mod routing;
mod socket_server;
pub(crate) use routing::aggregate_requested_tun_routes;
pub use socket_server::{start_local_socket_server, stop_local_socket_server};
@@ -0,0 +1,50 @@
use crate::config::types::stored_config::LocalSocketSyncMessage;
use serde::Serialize;
use std::io::{Error, ErrorKind, Write};
use std::os::unix::net::UnixStream;
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct TunRequestPayload {
pub config_id: String,
pub instance_id: String,
pub display_name: String,
pub virtual_ipv4: Option<String>,
pub virtual_ipv4_cidr: Option<String>,
pub aggregated_routes: Vec<String>,
pub magic_dns_enabled: bool,
pub need_exit_node: bool,
}
pub(crate) fn send_local_socket_message(
stream: &mut UnixStream,
message_type: &str,
payload_json: String,
) -> std::io::Result<()> {
let message = LocalSocketSyncMessage {
message_type: message_type.to_string(),
payload_json,
};
let mut raw = serde_json::to_vec(&message)
.map_err(|err| Error::new(ErrorKind::InvalidData, err.to_string()))?;
raw.push(b'\n');
stream.write_all(&raw)?;
Ok(())
}
pub(crate) fn broadcast_local_socket_message(
clients: &mut Vec<UnixStream>,
message_type: &str,
payload_json: &str,
) -> bool {
let mut active_clients = Vec::with_capacity(clients.len());
let mut delivered = false;
for mut client in clients.drain(..) {
if send_local_socket_message(&mut client, message_type, payload_json.to_string()).is_ok() {
delivered = true;
active_clients.push(client);
}
}
*clients = active_clients;
delivered
}
@@ -0,0 +1,105 @@
use crate::config::repository::load_config_json;
use crate::runtime::state::runtime_state::RuntimeInstanceState;
use easytier::proto::api::manage::NetworkConfig;
use ipnet::IpNet;
use ohos_hilog_binding::hilog_debug;
use std::collections::HashSet;
use std::net::IpAddr;
pub(crate) fn load_manual_routes(config_id: &str) -> Vec<String> {
load_config_json(config_id)
.and_then(|raw| serde_json::from_str::<NetworkConfig>(&raw).ok())
.map(|config| config.routes)
.unwrap_or_default()
}
fn normalize_route_cidr(route: &str) -> Option<String> {
route
.parse::<IpNet>()
.ok()
.map(|network| match network {
IpNet::V4(net) => net.trunc().to_string(),
IpNet::V6(net) => net.trunc().to_string(),
})
.or_else(|| {
route.parse::<IpAddr>().ok().map(|addr| match addr {
IpAddr::V4(ip) => format!("{}/32", ip),
IpAddr::V6(ip) => format!("{}/128", ip),
})
})
}
fn simplify_routes(routes: Vec<String>) -> Vec<String> {
let mut parsed = routes
.into_iter()
.filter_map(|route| normalize_route_cidr(&route))
.filter_map(|route| route.parse::<IpNet>().ok())
.collect::<Vec<_>>();
parsed.sort_by(|left, right| {
left.prefix_len()
.cmp(&right.prefix_len())
.then_with(|| left.network().to_string().cmp(&right.network().to_string()))
});
let mut simplified = Vec::<IpNet>::new();
'outer: for route in parsed {
for existing in &simplified {
if existing.contains(&route.network()) && existing.prefix_len() <= route.prefix_len() {
continue 'outer;
}
}
simplified.retain(|existing| {
!(route.contains(&existing.network()) && route.prefix_len() <= existing.prefix_len())
});
simplified.push(route);
}
let mut seen = HashSet::new();
simplified
.into_iter()
.map(|route| route.to_string())
.filter(|route| seen.insert(route.clone()))
.collect()
}
pub(crate) fn aggregate_tun_routes(instance: &RuntimeInstanceState) -> Vec<String> {
let virtual_ipv4_cidr = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4_cidr.clone());
let manual_routes = load_manual_routes(&instance.config_id);
let proxy_cidrs = instance
.routes
.iter()
.flat_map(|route| route.proxy_cidrs.iter().cloned())
.collect::<Vec<_>>();
let mut raw_routes = Vec::new();
if let Some(cidr) = virtual_ipv4_cidr.clone() {
raw_routes.push(cidr);
}
raw_routes.extend(manual_routes.iter().cloned());
raw_routes.extend(proxy_cidrs.iter().cloned());
let aggregated_routes = simplify_routes(raw_routes);
hilog_debug!(
"[Rust] aggregate_tun_routes instance={} proxy_cidrs={:?} aggregated_routes={:?}",
instance.instance_id,
proxy_cidrs,
aggregated_routes
);
aggregated_routes
}
pub(crate) fn aggregate_requested_tun_routes(instances: &[RuntimeInstanceState]) -> Vec<String> {
let mut aggregated_routes = Vec::new();
let mut seen_routes = HashSet::new();
for instance in instances.iter().filter(|instance| instance.tun_required) {
for route in aggregate_tun_routes(instance) {
if seen_routes.insert(route.clone()) {
aggregated_routes.push(route);
}
}
}
aggregated_routes
}
@@ -0,0 +1,196 @@
use super::protocol::{TunRequestPayload, broadcast_local_socket_message};
use crate::config::repository::kernel_socket_path;
use crate::get_runtime_snapshot_inner;
use crate::kernel_bridge::routing::aggregate_tun_routes;
use ohos_hilog_binding::{hilog_error, hilog_info};
use once_cell::sync::Lazy;
use std::collections::{HashMap, HashSet};
use std::io::ErrorKind;
use std::os::unix::net::{UnixListener, UnixStream};
use std::path::PathBuf;
use std::sync::Mutex;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread::{self, JoinHandle};
use std::time::Duration;
struct LocalSocketState {
stop_flag: std::sync::Arc<AtomicBool>,
socket_path: PathBuf,
worker: JoinHandle<()>,
}
static LOCAL_SOCKET_STATE: Lazy<Mutex<Option<LocalSocketState>>> = Lazy::new(|| Mutex::new(None));
pub fn start_local_socket_server() -> bool {
let socket_path = match kernel_socket_path() {
Some(path) => path,
None => {
hilog_error!("[Rust] kernel socket path unavailable");
return false;
}
};
match LOCAL_SOCKET_STATE.lock() {
Ok(guard) if guard.is_some() => return true,
Ok(_) => {}
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
return false;
}
}
if socket_path.exists() {
let _ = std::fs::remove_file(&socket_path);
}
let listener = match UnixListener::bind(&socket_path) {
Ok(listener) => listener,
Err(err) => {
hilog_error!(
"[Rust] bind localsocket failed {}: {}",
socket_path.display(),
err
);
return false;
}
};
if let Err(err) = listener.set_nonblocking(true) {
hilog_error!("[Rust] set localsocket nonblocking failed: {}", err);
let _ = std::fs::remove_file(&socket_path);
return false;
}
let stop_flag = std::sync::Arc::new(AtomicBool::new(false));
let worker_stop_flag = stop_flag.clone();
let worker = thread::spawn(move || {
let mut last_snapshot_json = String::new();
let mut delivered_tun_requests = HashSet::new();
let mut last_tun_route_signatures = HashMap::<String, String>::new();
let mut clients = Vec::<UnixStream>::new();
while !worker_stop_flag.load(Ordering::Relaxed) {
let mut accepted_client = false;
loop {
match listener.accept() {
Ok((stream, _addr)) => {
accepted_client = true;
clients.push(stream);
}
Err(err) if err.kind() == ErrorKind::WouldBlock => break,
Err(err) => {
hilog_error!("[Rust] accept localsocket failed: {}", err);
break;
}
}
}
let snapshot = get_runtime_snapshot_inner();
let snapshot_json = match serde_json::to_string(&snapshot) {
Ok(json) => json,
Err(err) => {
hilog_error!("[Rust] serialize runtime snapshot failed: {}", err);
thread::sleep(Duration::from_millis(250));
continue;
}
};
if accepted_client || snapshot_json != last_snapshot_json {
let _ = broadcast_local_socket_message(
&mut clients,
"runtime_snapshot",
&snapshot_json,
);
last_snapshot_json = snapshot_json;
}
for instance in snapshot.instances.iter() {
if instance.running && instance.tun_required {
let virtual_ipv4 = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4.clone());
let virtual_ipv4_cidr = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4_cidr.clone());
if clients.is_empty() {
continue;
}
if virtual_ipv4.is_none() || virtual_ipv4_cidr.is_none() {
continue;
}
let aggregated_routes = aggregate_tun_routes(instance);
let route_signature = serde_json::to_string(&aggregated_routes)
.unwrap_or_else(|_| "[]".to_string());
let should_send = !delivered_tun_requests.contains(&instance.instance_id)
|| last_tun_route_signatures
.get(&instance.instance_id)
.map(|value| value != &route_signature)
.unwrap_or(true);
if !should_send {
continue;
}
let payload = TunRequestPayload {
config_id: instance.config_id.clone(),
instance_id: instance.instance_id.clone(),
display_name: instance.display_name.clone(),
virtual_ipv4,
virtual_ipv4_cidr,
aggregated_routes,
magic_dns_enabled: instance.magic_dns_enabled,
need_exit_node: instance.need_exit_node,
};
let payload_json = match serde_json::to_string(&payload) {
Ok(json) => json,
Err(err) => {
hilog_error!("[Rust] serialize tun request failed: {}", err);
continue;
}
};
if broadcast_local_socket_message(&mut clients, "tun_request", &payload_json) {
delivered_tun_requests.insert(instance.instance_id.clone());
last_tun_route_signatures
.insert(instance.instance_id.clone(), route_signature);
}
} else {
delivered_tun_requests.remove(&instance.instance_id);
last_tun_route_signatures.remove(&instance.instance_id);
}
}
thread::sleep(Duration::from_millis(250));
}
});
match LOCAL_SOCKET_STATE.lock() {
Ok(mut guard) => {
*guard = Some(LocalSocketState {
stop_flag,
socket_path,
worker,
});
true
}
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
false
}
}
}
pub fn stop_local_socket_server() -> bool {
let state = match LOCAL_SOCKET_STATE.lock() {
Ok(mut guard) => guard.take(),
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
return false;
}
};
if let Some(state) = state {
state.stop_flag.store(true, Ordering::Relaxed);
let _ = state.worker.join();
let _ = std::fs::remove_file(state.socket_path);
}
true
}
@@ -1,185 +1,482 @@
mod native_log;
mod config;
mod exports;
mod kernel_bridge;
mod platform;
mod runtime;
use config::repository::{
create_config_record, delete_config_record, export_config_toml, get_config_field_value,
get_default_config_json, import_toml_config, init_config_store as init_repo_store,
list_config_meta_json, save_config_record, set_config_field_value, start_kernel_with_config_id,
};
use config::services::schema_service::{
ConfigFieldMapping, NetworkConfigSchema,
get_network_config_field_mappings as build_network_config_field_mappings,
get_network_config_schema as build_network_config_schema,
};
use config::services::share_link_service::{
build_config_share_link as build_config_share_link_inner,
import_config_share_link as import_config_share_link_inner,
parse_config_share_link as parse_config_share_link_inner,
};
use config::storage::config_meta::get_config_display_name;
use config::types::stored_config::{KeyValuePair, SharedConfigLinkPayload};
use easytier::common::config::{ConfigFileControl, ConfigLoader, TomlConfigLoader};
use easytier::common::constants::EASYTIER_VERSION;
use easytier::instance_manager::NetworkInstanceManager;
use easytier::proto::api::manage::NetworkConfig;
use easytier::proto::api::manage::NetworkingMethod;
use easytier::web_client::{WebClient, WebClientHooks, run_web_client};
use kernel_bridge::{
aggregate_requested_tun_routes, start_local_socket_server as start_local_socket_server_inner,
stop_local_socket_server as stop_local_socket_server_inner,
};
use napi_derive_ohos::napi;
use ohos_hilog_binding::{hilog_debug, hilog_error};
use ohos_hilog_binding::{hilog_error, hilog_info};
use runtime::state::runtime_state::{
RuntimeAggregateState, TunAggregateState, clear_tun_attached, mark_tun_attached,
runtime_instance_from_running_info,
};
use std::collections::{HashMap, HashSet};
use std::format;
use std::sync::{Arc, Mutex};
use tokio::runtime::{Builder, Runtime};
use uuid::Uuid;
static INSTANCE_MANAGER: once_cell::sync::Lazy<NetworkInstanceManager> =
once_cell::sync::Lazy::new(NetworkInstanceManager::new);
pub(crate) static INSTANCE_MANAGER: once_cell::sync::Lazy<Arc<NetworkInstanceManager>> =
once_cell::sync::Lazy::new(|| Arc::new(NetworkInstanceManager::new()));
static ASYNC_RUNTIME: once_cell::sync::Lazy<Runtime> = once_cell::sync::Lazy::new(|| {
Builder::new_multi_thread()
.enable_all()
.build()
.expect("tokio runtime for easytier-ohrs")
});
static WEB_CLIENTS: once_cell::sync::Lazy<Mutex<HashMap<String, ManagedWebClient>>> =
once_cell::sync::Lazy::new(|| Mutex::new(HashMap::new()));
#[napi(object)]
pub struct KeyValuePair {
pub key: String,
pub value: String,
#[derive(Default)]
struct TrackedWebClientHooks {
instance_ids: Mutex<HashSet<Uuid>>,
}
#[napi]
pub fn easytier_version() -> String {
EASYTIER_VERSION.to_string()
struct ManagedWebClient {
_client: WebClient,
hooks: Arc<TrackedWebClientHooks>,
}
#[napi]
pub fn set_tun_fd(inst_id: String, fd: i32) -> bool {
match Uuid::try_parse(&inst_id) {
Ok(uuid) => match INSTANCE_MANAGER.set_tun_fd(&uuid, fd) {
Ok(_) => {
hilog_debug!("[Rust] set tun fd {} to {}.", fd, inst_id);
true
}
Err(e) => {
hilog_error!("[Rust] can't set tun fd {} to {}. {}", fd, inst_id, e);
false
}
},
Err(e) => {
hilog_error!("[Rust] can't convert {} to uuid. {}", inst_id, e);
#[async_trait::async_trait]
impl WebClientHooks for TrackedWebClientHooks {
async fn post_run_network_instance(&self, id: &Uuid) -> Result<(), String> {
self.instance_ids
.lock()
.map_err(|err| err.to_string())?
.insert(*id);
Ok(())
}
async fn post_remove_network_instances(&self, ids: &[Uuid]) -> Result<(), String> {
let mut guard = self.instance_ids.lock().map_err(|err| err.to_string())?;
for id in ids {
guard.remove(id);
}
Ok(())
}
}
fn is_config_server_config(config: &NetworkConfig) -> bool {
matches!(
NetworkingMethod::try_from(config.networking_method.unwrap_or_default())
.unwrap_or_default(),
NetworkingMethod::PublicServer
) && config
.public_server_url
.as_ref()
.is_some_and(|url| !url.trim().is_empty())
}
fn stop_web_client(config_id: &str) -> bool {
let managed = match WEB_CLIENTS.lock() {
Ok(mut guard) => guard.remove(config_id),
Err(err) => {
hilog_error!("[Rust] stop_web_client lock failed {}", err);
return false;
}
};
let Some(managed) = managed else {
return false;
};
let tracked_ids = managed
.hooks
.instance_ids
.lock()
.map(|guard| guard.iter().copied().collect::<Vec<_>>())
.unwrap_or_default();
drop(managed);
if tracked_ids.is_empty() {
maybe_stop_local_socket_server();
return true;
}
let ret = INSTANCE_MANAGER
.delete_network_instance(tracked_ids)
.map(|_| true)
.unwrap_or_else(|err| {
hilog_error!(
"[Rust] stop config server instances failed {}: {}",
config_id,
err
);
false
});
maybe_stop_local_socket_server();
ret
}
fn ensure_local_socket_server_started() -> bool {
start_local_socket_server_inner()
}
fn maybe_stop_local_socket_server() {
let no_local_instances = INSTANCE_MANAGER.list_network_instance_ids().is_empty();
let no_web_clients = WEB_CLIENTS
.lock()
.map(|guard| guard.is_empty())
.unwrap_or(false);
if no_local_instances && no_web_clients {
let _ = stop_local_socket_server_inner();
}
}
fn run_config_server_instance(config_id: &str, config: &NetworkConfig) -> bool {
if !INSTANCE_MANAGER.list_network_instance_ids().is_empty() {
hilog_error!("[Rust] there is a running instance!");
return false;
}
let Some(config_server_url) = config.public_server_url.clone() else {
hilog_error!("[Rust] public_server_url missing for config server mode");
return false;
};
let hooks = Arc::new(TrackedWebClientHooks::default());
let secure_mode = config
.secure_mode
.as_ref()
.map(|mode| mode.enabled)
.unwrap_or(false);
let hostname = config.hostname.clone();
if !ensure_local_socket_server_started() {
return false;
}
let client = ASYNC_RUNTIME.block_on(run_web_client(
&config_server_url,
None,
hostname,
secure_mode,
INSTANCE_MANAGER.clone(),
Some(hooks.clone()),
));
let client = match client {
Ok(client) => client,
Err(err) => {
hilog_error!("[Rust] start config server failed {}", err);
return false;
}
};
match WEB_CLIENTS.lock() {
Ok(mut guard) => {
guard.insert(
config_id.to_string(),
ManagedWebClient {
_client: client,
hooks,
},
);
true
}
Err(err) => {
hilog_error!("[Rust] store config server client failed {}", err);
false
}
}
}
#[napi]
pub fn default_network_config() -> String {
match NetworkConfig::new_from_config(TomlConfigLoader::default()) {
Ok(result) => serde_json::to_string(&result).unwrap_or_else(|e| format!("ERROR {}", e)),
Err(e) => {
hilog_error!("[Rust] default_network_config failed {}", e);
format!("ERROR {}", e)
}
}
pub(crate) fn build_default_network_config_json() -> Result<String, String> {
let config = NetworkConfig::new_from_config(TomlConfigLoader::default())
.map_err(|e| format!("default_network_config failed {}", e))?;
serde_json::to_string(&config).map_err(|e| format!("default_network_config failed {}", e))
}
fn convert_toml_to_network_config_inner(toml_text: &str) -> Result<String, String> {
let config = NetworkConfig::new_from_config(
TomlConfigLoader::new_from_str(toml_text).map_err(|e| e.to_string())?,
)
.map_err(|e| e.to_string())?;
serde_json::to_string(&config).map_err(|e| e.to_string())
}
fn parse_network_config_inner(cfg_json: &str) -> bool {
serde_json::from_str::<NetworkConfig>(cfg_json)
.ok()
.and_then(|cfg| cfg.gen_config().ok())
.is_some()
}
pub(crate) fn run_network_instance_from_json(cfg_json: &str) -> bool {
let config = match serde_json::from_str::<NetworkConfig>(cfg_json) {
Ok(cfg) => cfg,
Err(e) => {
hilog_error!("[Rust] parse config failed {}", e);
return false;
}
};
if is_config_server_config(&config) {
let Some(config_id) = config.instance_id.as_deref() else {
hilog_error!("[Rust] config server config missing instance id");
return false;
};
return run_config_server_instance(config_id, &config);
}
let cfg = match config.gen_config() {
Ok(toml) => toml,
Err(e) => {
hilog_error!("[Rust] parse config failed {}", e);
return false;
}
};
if !INSTANCE_MANAGER.list_network_instance_ids().is_empty() {
hilog_error!("[Rust] there is a running instance!");
return false;
}
if !ensure_local_socket_server_started() {
return false;
}
let inst_id = cfg.get_id();
if INSTANCE_MANAGER
.list_network_instance_ids()
.contains(&inst_id)
{
hilog_error!("[Rust] instance {} already exists", inst_id);
return false;
}
match INSTANCE_MANAGER.run_network_instance(cfg, false, ConfigFileControl::STATIC_CONFIG) {
Ok(_) => true,
Err(err) => {
hilog_error!("[Rust] run_network_instance failed for {}: {}", inst_id, err);
false
}
}
}
#[napi]
pub fn collect_running_network() -> Vec<String> {
INSTANCE_MANAGER
.list_network_instance_ids()
.into_iter()
.map(|id| id.to_string())
.collect()
}
#[napi]
pub fn is_running_network(inst_id: String) -> bool {
match Uuid::try_parse(&inst_id) {
Ok(uuid) => INSTANCE_MANAGER.list_network_instance_ids().contains(&uuid),
Err(e) => {
hilog_error!("[Rust] can't convert {} to uuid. {}", inst_id, e);
false
}
}
}
fn parse_instance_uuid(config_id: &str) -> Option<Uuid> {
match Uuid::parse_str(config_id) {
Ok(uuid) => Some(uuid),
Err(err) => {
hilog_error!("[Rust] invalid config_id {}: {}", config_id, err);
None
}
}
}
#[napi]
pub fn init_config_store(root_dir: String) -> bool {
exports::config_api::init_config_store(root_dir)
}
#[napi]
pub fn list_configs() -> String {
exports::config_api::list_configs()
}
#[napi]
pub fn get_config_display_name_by_id(config_id: String) -> Option<String> {
get_config_display_name(&config_id)
}
#[napi]
pub fn save_config(config_id: String, display_name: String, config_json: String) -> bool {
exports::config_api::save_config(config_id, display_name, config_json)
}
#[napi]
pub fn create_config(config_id: String, display_name: String) -> bool {
exports::config_api::create_config(config_id, display_name)
}
#[napi]
pub fn rename_stored_config(config_id: String, display_name: String) -> bool {
config::storage::config_meta::set_config_display_name(config_id, display_name).is_some()
}
#[napi]
pub fn delete_stored_config_meta(config_id: String) -> bool {
exports::config_api::delete_stored_config_meta(config_id)
}
#[napi]
pub fn get_config(config_id: String) -> Option<String> {
exports::config_api::get_config(config_id)
}
#[napi]
pub fn get_default_config() -> Option<String> {
exports::config_api::get_default_config()
}
#[napi]
pub fn get_config_field(config_id: String, field: String) -> Option<String> {
exports::config_api::get_config_field(config_id, field)
}
#[napi]
pub fn set_config_field(config_id: String, field: String, json_value: String) -> bool {
exports::config_api::set_config_field(config_id, field, json_value)
}
#[napi]
pub fn import_toml(toml_text: String, display_name: Option<String>) -> Option<String> {
exports::config_api::import_toml(toml_text, display_name)
}
#[napi]
pub fn export_toml(config_id: String) -> Option<String> {
exports::config_api::export_toml(config_id)
}
#[napi]
pub fn start_kernel(config_id: String) -> bool {
exports::runtime_api::start_kernel(config_id, start_kernel_with_config_id)
}
#[napi]
pub fn stop_kernel(config_id: String) -> bool {
exports::runtime_api::stop_kernel(
config_id,
stop_web_client,
parse_instance_uuid,
maybe_stop_local_socket_server,
)
}
#[napi]
pub fn stop_network_instance(config_ids: Vec<String>) -> bool {
exports::runtime_api::stop_network_instance(config_ids, stop_kernel)
}
#[napi]
pub fn easytier_version() -> String {
EASYTIER_VERSION.to_string()
}
#[napi]
pub fn default_network_config() -> String {
get_default_config().unwrap_or_else(|| "{}".to_string())
}
#[napi]
pub fn convert_toml_to_network_config(toml_text: String) -> String {
convert_toml_to_network_config_inner(&toml_text).unwrap_or_else(|err| format!("ERROR: {err}"))
}
#[napi]
pub fn parse_network_config(cfg_json: String) -> bool {
parse_network_config_inner(&cfg_json)
}
#[napi]
pub fn run_network_instance(cfg_json: String) -> bool {
run_network_instance_from_json(&cfg_json)
}
#[napi]
pub fn collect_network_infos() -> Vec<KeyValuePair> {
exports::runtime_api::collect_network_infos()
}
#[napi]
pub fn set_tun_fd(config_id: String, fd: i32) -> bool {
exports::runtime_api::set_tun_fd(config_id, fd, parse_instance_uuid)
}
#[napi]
pub fn get_network_config_schema() -> NetworkConfigSchema {
build_network_config_schema()
}
#[napi]
pub fn get_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
build_network_config_field_mappings()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn exported_plain_object_schema_contains_core_networkconfig_metadata() {
let schema = get_network_config_schema();
assert_eq!(schema.name, "NetworkConfig");
assert_eq!(schema.node_kind, "schema");
assert!(
schema
.children
.iter()
.any(|field| field.name == "network_name")
);
let secure_mode = schema
.children
.iter()
.find(|field| field.name == "secure_mode")
.expect("secure_mode field");
assert!(
secure_mode
.children
.iter()
.any(|field| field.name == "enabled")
);
}
}
#[napi]
pub fn get_runtime_snapshot() -> RuntimeAggregateState {
exports::runtime_api::get_runtime_snapshot()
}
pub(crate) fn get_runtime_snapshot_inner() -> RuntimeAggregateState {
exports::runtime_api::get_runtime_snapshot_inner()
}
#[napi]
pub fn build_config_share_link(config_id: String, only_start: Option<bool>) -> Option<String> {
build_config_share_link_inner(&config_id, None, only_start.unwrap_or(false))
}
#[napi]
pub fn parse_config_share_link(share_link: String) -> Option<SharedConfigLinkPayload> {
parse_config_share_link_inner(&share_link)
}
#[napi]
pub fn import_config_share_link(
share_link: String,
display_name_override: Option<String>,
) -> Option<String> {
import_config_share_link_inner(&share_link, display_name_override)
}
@@ -0,0 +1 @@
pub(crate) mod logging;
@@ -0,0 +1 @@
pub(crate) mod native_log;
@@ -0,0 +1 @@
pub(crate) mod state;
@@ -0,0 +1 @@
pub(crate) mod runtime_state;
@@ -0,0 +1,293 @@
use easytier::proto::{api, common};
use napi_derive_ohos::napi;
use serde::Serialize;
use std::collections::HashSet;
use std::sync::Mutex;
static ATTACHED_TUN_INSTANCE_IDS: once_cell::sync::Lazy<Mutex<HashSet<String>>> =
once_cell::sync::Lazy::new(|| Mutex::new(HashSet::new()));
pub fn mark_tun_attached(instance_id: &str) {
if let Ok(mut guard) = ATTACHED_TUN_INSTANCE_IDS.lock() {
guard.insert(instance_id.to_string());
}
}
pub fn clear_tun_attached(instance_id: &str) {
if let Ok(mut guard) = ATTACHED_TUN_INSTANCE_IDS.lock() {
guard.remove(instance_id);
}
}
pub fn is_tun_attached(instance_id: &str) -> bool {
ATTACHED_TUN_INSTANCE_IDS
.lock()
.map(|guard| guard.contains(instance_id))
.unwrap_or(false)
}
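The registry above leans on `once_cell::sync::Lazy`; the same attach/detach bookkeeping can be sketched with only the standard library, swapping `Lazy` for `std::sync::OnceLock` — a minimal sketch, not the module's actual implementation:

```rust
use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};

// Std-only variant of the attached-TUN registry: OnceLock replaces once_cell::Lazy.
static ATTACHED: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();

fn registry() -> &'static Mutex<HashSet<String>> {
    ATTACHED.get_or_init(|| Mutex::new(HashSet::new()))
}

fn mark_tun_attached(id: &str) {
    if let Ok(mut guard) = registry().lock() {
        guard.insert(id.to_string());
    }
}

fn clear_tun_attached(id: &str) {
    if let Ok(mut guard) = registry().lock() {
        guard.remove(id);
    }
}

fn is_tun_attached(id: &str) -> bool {
    // A poisoned lock degrades to "not attached" rather than panicking.
    registry().lock().map(|guard| guard.contains(id)).unwrap_or(false)
}

fn main() {
    mark_tun_attached("inst-1");
    assert!(is_tun_attached("inst-1"));
    clear_tun_attached("inst-1");
    assert!(!is_tun_attached("inst-1"));
}
```

As in the original, lock failures are swallowed so status queries never panic across the FFI boundary.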
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerConnStats {
pub rx_bytes: i64,
pub tx_bytes: i64,
pub rx_packets: i64,
pub tx_packets: i64,
pub latency_us: i64,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerConnInfo {
pub conn_id: String,
pub my_peer_id: i64,
pub peer_id: i64,
pub features: Vec<String>,
pub tunnel_type: Option<String>,
pub local_addr: Option<String>,
pub remote_addr: Option<String>,
pub resolved_remote_addr: Option<String>,
pub stats: Option<PeerConnStats>,
pub loss_rate: Option<f64>,
pub is_client: bool,
pub network_name: Option<String>,
pub is_closed: bool,
pub secure_auth_level: Option<i32>,
pub peer_identity_type: Option<i32>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerInfo {
pub peer_id: i64,
pub default_conn_id: Option<String>,
pub directly_connected_conns: Vec<String>,
pub conns: Vec<PeerConnInfo>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RouteView {
pub peer_id: i64,
pub hostname: Option<String>,
pub ipv4: Option<String>,
pub ipv4_cidr: Option<String>,
pub ipv6_cidr: Option<String>,
pub proxy_cidrs: Vec<String>,
pub next_hop_peer_id: Option<i64>,
pub cost: Option<i32>,
pub path_latency: Option<i64>,
pub udp_nat_type: Option<i32>,
pub tcp_nat_type: Option<i32>,
pub inst_id: Option<String>,
pub version: Option<String>,
pub is_public_server: Option<bool>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct MyNodeInfo {
pub virtual_ipv4: Option<String>,
pub virtual_ipv4_cidr: Option<String>,
pub hostname: Option<String>,
pub version: Option<String>,
pub peer_id: Option<i64>,
pub listeners: Vec<String>,
pub vpn_portal_cfg: Option<String>,
pub udp_nat_type: Option<i32>,
pub tcp_nat_type: Option<i32>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RuntimeInstanceState {
pub config_id: String,
pub instance_id: String,
pub display_name: String,
pub running: bool,
pub tun_required: bool,
pub tun_attached: bool,
pub magic_dns_enabled: bool,
pub need_exit_node: bool,
pub error_message: Option<String>,
pub my_node_info: Option<MyNodeInfo>,
pub events: Vec<String>,
pub routes: Vec<RouteView>,
pub peers: Vec<PeerInfo>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct TunAggregateState {
pub active: bool,
pub attached_instance_ids: Vec<String>,
pub aggregated_routes: Vec<String>,
pub dns_servers: Vec<String>,
pub need_rebuild: bool,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RuntimeAggregateState {
pub instances: Vec<RuntimeInstanceState>,
pub tun: TunAggregateState,
pub running_instance_count: i32,
}
fn stringify_ipv4_inet(value: Option<common::Ipv4Inet>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_ipv6_inet(value: Option<common::Ipv6Inet>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_url(value: Option<common::Url>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_uuid(value: Option<common::Uuid>) -> Option<String> {
value.map(|v| v.to_string())
}
fn optional_u32_to_i64(value: Option<u32>) -> Option<i64> {
value.map(|v| v as i64)
}
fn optional_i32_to_i64(value: Option<i32>) -> Option<i64> {
value.map(|v| v as i64)
}
fn route_to_view(route: api::instance::Route) -> RouteView {
let stun = route.stun_info;
let feature_flag = route.feature_flag;
RouteView {
peer_id: route.peer_id as i64,
hostname: (!route.hostname.is_empty()).then_some(route.hostname),
ipv4: route
.ipv4_addr
.as_ref()
.and_then(|inet| inet.address.as_ref())
.map(|addr| addr.to_string()),
ipv4_cidr: stringify_ipv4_inet(route.ipv4_addr),
ipv6_cidr: stringify_ipv6_inet(route.ipv6_addr),
proxy_cidrs: route.proxy_cidrs,
next_hop_peer_id: optional_u32_to_i64(route.next_hop_peer_id_latency_first)
.or_else(|| Some(route.next_hop_peer_id as i64)),
cost: Some(route.cost),
path_latency: optional_i32_to_i64(route.path_latency_latency_first)
.or_else(|| Some(route.path_latency as i64)),
udp_nat_type: stun.as_ref().map(|info| info.udp_nat_type),
tcp_nat_type: stun.as_ref().map(|info| info.tcp_nat_type),
inst_id: (!route.inst_id.is_empty()).then_some(route.inst_id),
version: (!route.version.is_empty()).then_some(route.version),
is_public_server: feature_flag.map(|flag| flag.is_public_server),
}
}
fn peer_conn_to_view(conn: api::instance::PeerConnInfo) -> PeerConnInfo {
let stats = conn.stats.map(|stats| PeerConnStats {
rx_bytes: stats.rx_bytes as i64,
tx_bytes: stats.tx_bytes as i64,
rx_packets: stats.rx_packets as i64,
tx_packets: stats.tx_packets as i64,
latency_us: stats.latency_us as i64,
});
PeerConnInfo {
conn_id: conn.conn_id,
my_peer_id: conn.my_peer_id as i64,
peer_id: conn.peer_id as i64,
features: conn.features,
tunnel_type: conn.tunnel.as_ref().map(|t| t.tunnel_type.clone()),
local_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.local_addr.clone())),
remote_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.remote_addr.clone())),
resolved_remote_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.resolved_remote_addr.clone())),
stats,
loss_rate: Some(conn.loss_rate as f64),
is_client: conn.is_client,
network_name: (!conn.network_name.is_empty()).then_some(conn.network_name),
is_closed: conn.is_closed,
secure_auth_level: Some(conn.secure_auth_level),
peer_identity_type: Some(conn.peer_identity_type),
}
}
fn peer_to_view(peer: api::instance::PeerInfo) -> PeerInfo {
PeerInfo {
peer_id: peer.peer_id as i64,
default_conn_id: stringify_uuid(peer.default_conn_id),
directly_connected_conns: peer
.directly_connected_conns
.into_iter()
.map(|id| id.to_string())
.collect(),
conns: peer.conns.into_iter().map(peer_conn_to_view).collect(),
}
}
fn my_node_info_to_view(info: api::manage::MyNodeInfo) -> MyNodeInfo {
MyNodeInfo {
virtual_ipv4: info
.virtual_ipv4
.as_ref()
.and_then(|inet| inet.address.as_ref())
.map(|addr| addr.to_string()),
virtual_ipv4_cidr: stringify_ipv4_inet(info.virtual_ipv4),
hostname: (!info.hostname.is_empty()).then_some(info.hostname),
version: (!info.version.is_empty()).then_some(info.version),
peer_id: Some(info.peer_id as i64),
listeners: info
.listeners
.into_iter()
.map(|url| url.to_string())
.collect(),
vpn_portal_cfg: info.vpn_portal_cfg,
udp_nat_type: info.stun_info.as_ref().map(|stun| stun.udp_nat_type),
tcp_nat_type: info.stun_info.as_ref().map(|stun| stun.tcp_nat_type),
}
}
pub fn runtime_instance_from_running_info(
config_id: String,
display_name: String,
magic_dns_enabled: bool,
need_exit_node: bool,
info: api::manage::NetworkInstanceRunningInfo,
) -> RuntimeInstanceState {
let tun_attached = info.running && is_tun_attached(&config_id);
let tun_required = info.running && (info.dev_name != "no_tun" || tun_attached);
RuntimeInstanceState {
config_id: config_id.clone(),
instance_id: config_id,
display_name,
running: info.running,
tun_required,
tun_attached,
magic_dns_enabled,
need_exit_node,
error_message: info.error_msg,
my_node_info: info.my_node_info.map(my_node_info_to_view),
events: info.events,
routes: info.routes.into_iter().map(route_to_view).collect(),
peers: info.peers.into_iter().map(peer_to_view).collect(),
}
}
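Several converters above collapse empty protobuf strings into `None` via the `(!s.is_empty()).then_some(s)` idiom, so the JS side sees `null` instead of `""`. A standalone illustration of that pattern (helper name `non_empty` is ours, not the module's):

```rust
// Map an empty string to None, a non-empty one to Some, as route_to_view and
// my_node_info_to_view do for hostname, version, inst_id, and network_name.
fn non_empty(s: String) -> Option<String> {
    (!s.is_empty()).then_some(s)
}

fn main() {
    assert_eq!(non_empty(String::new()), None);
    assert_eq!(non_empty("peer-a".into()), Some("peer-a".to_string()));
}
```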
@@ -1,7 +1,7 @@
[package]
name = "easytier-uptime"
version = "0.1.0"
edition = "2021"
edition.workspace = true
[dependencies]
tokio = { version = "1.0", features = ["full"] }
@@ -12,6 +12,7 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1.0", features = ["v4", "serde"] }
guarden = "0.1"
# Axum web framework
axum = { version = "0.8.4", features = ["macros"] }
@@ -1,7 +1,7 @@
use std::ops::{Div, Mul};
use axum::extract::{Path, State};
use axum::Json;
use axum::extract::{Path, State};
use sea_orm::{
ColumnTrait, Condition, EntityTrait, IntoActiveModel, ModelTrait, Order, PaginatorTrait,
QueryFilter, QueryOrder, QuerySelect, Set, TryIntoModel,
@@ -14,7 +14,7 @@ use crate::api::{
models::*,
};
use crate::db::entity::{self, health_records, shared_nodes};
use crate::db::{operations::*, Db};
use crate::db::{Db, operations::*};
use crate::health_checker_manager::HealthCheckerManager;
use axum_extra::extract::Query;
use std::sync::Arc;
@@ -273,7 +273,7 @@ pub struct InstanceFilterParams {
use crate::config::AppConfig;
use axum::http::{HeaderMap, StatusCode};
use chrono::{Duration, Utc};
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use jsonwebtoken::{DecodingKey, EncodingKey, Header, Validation, decode, encode};
use serde::Serialize;
#[derive(Debug, Serialize, Deserialize)]
@@ -370,19 +370,19 @@ pub async fn admin_get_nodes(
let ids = NodeOperations::filter_node_ids_by_tag(&app_state.db, &tag).await?;
filtered_ids = Some(ids);
}
if let Some(tags) = filters.tags {
if !tags.is_empty() {
let ids_any = NodeOperations::filter_node_ids_by_tags_any(&app_state.db, &tags).await?;
filtered_ids = match filtered_ids {
Some(mut existing) => {
existing.extend(ids_any);
existing.sort();
existing.dedup();
Some(existing)
}
None => Some(ids_any),
};
}
if let Some(tags) = filters.tags
&& !tags.is_empty()
{
let ids_any = NodeOperations::filter_node_ids_by_tags_any(&app_state.db, &tags).await?;
filtered_ids = match filtered_ids {
Some(mut existing) => {
existing.extend(ids_any);
existing.sort();
existing.dedup();
Some(existing)
}
None => Some(ids_any),
};
}
if let Some(ids) = filtered_ids {
if ids.is_empty() {
@@ -1,5 +1,5 @@
use axum::routing::{delete, get, post, put};
use axum::Router;
use axum::routing::{delete, get, post, put};
use tower_http::compression::CompressionLayer;
use tower_http::cors::CorsLayer;
@@ -1,7 +1,7 @@
use crate::db::entity::*;
use crate::db::Db;
use crate::db::entity::*;
use sea_orm::*;
use tokio::time::{sleep, Duration};
use tokio::time::{Duration, sleep};
use tracing::{error, info, warn};
/// Data cleanup policy configuration
@@ -5,12 +5,12 @@ pub mod operations;
use std::fmt;
use sea_orm::{
prelude::*, sea_query::OnConflict, ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait,
QueryFilter as _, Set, SqlxSqliteConnector, Statement, TransactionTrait as _,
ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait, QueryFilter as _, Set,
SqlxSqliteConnector, Statement, TransactionTrait as _, prelude::*, sea_query::OnConflict,
};
use sea_orm_migration::MigratorTrait as _;
use serde::{Deserialize, Serialize};
use sqlx::{migrate::MigrateDatabase as _, Sqlite, SqlitePool};
use sqlx::{Sqlite, SqlitePool, migrate::MigrateDatabase as _};
use crate::migrator;
@@ -1,8 +1,8 @@
use crate::api::CreateNodeRequest;
use crate::db::entity::*;
use crate::db::Db;
use crate::db::HealthStats;
use crate::db::HealthStatus;
use crate::db::entity::*;
use sea_orm::*;
use std::collections::{HashMap, HashSet};
@@ -7,21 +7,21 @@ use std::{
use anyhow::Context as _;
use dashmap::DashMap;
use easytier::{
common::{
config::{ConfigFileControl, ConfigLoader, NetworkIdentity, PeerConfig, TomlConfigLoader},
scoped_task::ScopedTask,
common::config::{
ConfigFileControl, ConfigLoader, NetworkIdentity, PeerConfig, TomlConfigLoader,
},
defer,
instance_manager::NetworkInstanceManager,
};
use guarden::defer;
use serde::{Deserialize, Serialize};
use sqlx::any;
use tokio_util::task::AbortOnDropHandle;
use tracing::{debug, error, info, instrument, warn};
use crate::db::{
Db, HealthStatus,
entity::shared_nodes,
operations::{HealthOperations, NodeOperations},
Db, HealthStatus,
};
pub struct HealthCheckOneNode {
@@ -240,7 +240,7 @@ pub struct HealthChecker {
db: Db,
instance_mgr: Arc<NetworkInstanceManager>,
inst_id_map: DashMap<i32, uuid::Uuid>,
node_tasks: DashMap<i32, ScopedTask<()>>,
node_tasks: DashMap<i32, AbortOnDropHandle<()>>,
node_records: Arc<DashMap<i32, HealthyMemRecord>>,
node_cfg: Arc<DashMap<i32, TomlConfigLoader>>,
}
@@ -465,7 +465,7 @@ impl HealthChecker {
}
// Start the health check task
let task = ScopedTask::from(tokio::spawn(Self::node_health_check_task(
let task = AbortOnDropHandle::new(tokio::spawn(Self::node_health_check_task(
node_id,
cfg.get_id(),
Arc::clone(&self.instance_mgr),
@@ -1,11 +1,11 @@
use std::{collections::HashSet, sync::Arc, time::Duration};
use anyhow::Context as _;
use tokio::time::{interval, Interval};
use tokio::time::{Interval, interval};
use tracing::{error, info};
use crate::{
db::{entity::shared_nodes, operations::NodeOperations, Db},
db::{Db, entity::shared_nodes, operations::NodeOperations},
health_checker::HealthChecker,
};
@@ -10,7 +10,7 @@ mod migrator;
use api::routes::create_routes;
use clap::Parser;
use config::AppConfig;
use db::{operations::NodeOperations, Db};
use db::{Db, operations::NodeOperations};
use easytier::common::log;
use health_checker::HealthChecker;
use health_checker_manager::HealthCheckerManager;
@@ -49,7 +49,9 @@ async fn main() -> anyhow::Result<()> {
// If an admin password was provided, set the environment variable
if let Some(password) = args.admin_password {
env::set_var("ADMIN_PASSWORD", password);
unsafe {
env::set_var("ADMIN_PASSWORD", password);
}
}
tracing::info!(
@@ -1,7 +1,7 @@
{
"name": "easytier-gui",
"type": "module",
"version": "2.6.0",
"version": "2.6.4",
"private": true,
"packageManager": "pnpm@9.12.1+sha512.e5a7e52a4183a02d5931057f7a0dbff9d5e9ce3161e33fa68ae392125b79282a8a8a470a51dfc8a0ed86221442eb2fb57019b0990ed24fab519bf0e1bc5ccfc4",
"scripts": {
@@ -59,4 +59,4 @@
"vue-i18n": "^10.0.0",
"vue-tsc": "^2.1.10"
}
}
}
@@ -1,9 +1,9 @@
[package]
name = "easytier-gui"
version = "2.6.0"
version = "2.6.4"
description = "EasyTier GUI"
authors = ["you"]
edition = "2021"
edition.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@@ -11,15 +11,6 @@ edition = "2021"
name = "app_lib"
crate-type = ["staticlib", "cdylib", "rlib"]
[build-dependencies]
tauri-build = { version = "2.0.0-rc", features = [] }
# enable thunk-rs when compiling for x86_64 or i686 windows
[target.x86_64-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
[target.i686-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
[dependencies]
# wry 0.47 may crash on android, see https://github.com/EasyTier/EasyTier/issues/527
@@ -66,6 +57,14 @@ libc = "0.2"
[target.'cfg(target_os = "macos")'.dependencies]
security-framework-sys = "2.9.0"
[build-dependencies]
tauri-build = { version = "2.0.0-rc", features = [] }
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = [
"win7",
] }
[features]
# This feature is used for production builds or when a dev server is not specified, DO NOT REMOVE!!
custom-protocol = ["tauri/custom-protocol"]
@@ -1,12 +1,12 @@
fn main() {
// enable thunk-rs when target os is windows and arch is x86_64 or i686
#[cfg(target_os = "windows")]
if !std::env::var("TARGET")
.unwrap_or_default()
.contains("aarch64")
{
thunk::thunk();
}
tauri_build::build();
}
use std::env;
fn main() {
let target_os = env::var("CARGO_CFG_TARGET_OS").unwrap_or_default();
let target_arch = env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default();
// enable thunk-rs when target os is windows and arch is x86_64 or i686
if target_os == "windows" && (target_arch == "x86" || target_arch == "x86_64") {
thunk::thunk();
}
tauri_build::build();
}
@@ -1,5 +1,6 @@
import java.util.Properties
import java.io.FileInputStream
import groovy.json.JsonSlurper
plugins {
id("com.android.application")
@@ -14,6 +15,35 @@ val tauriProperties = Properties().apply {
}
}
val versionPattern = Regex("""^(\d+)\.(\d+)\.(\d+)$""")
val tauriVersionName = tauriProperties.getProperty("tauri.android.versionName")?.ifBlank { null } ?: run {
val tauriConfFile = file("../../../tauri.conf.json")
check(tauriConfFile.exists()) { "Missing tauri.conf.json at ${tauriConfFile.path}" }
val tauriConf = tauriConfFile.reader(Charsets.UTF_8).use { JsonSlurper().parse(it) as? Map<*, *> }
?: error("Failed to parse ${tauriConfFile.path} as a JSON object")
tauriConf["version"] as? String
?: error("Missing string field \"version\" in ${tauriConfFile.path}")
}
val tauriVersionMatch = versionPattern.matchEntire(tauriVersionName)
?: error("Android version must use x.y.z format, but got \"$tauriVersionName\"")
val tauriVersionCode = if (tauriProperties.getProperty("tauri.android.versionName")?.ifBlank { null } != null) {
val versionCodeProp = tauriProperties.getProperty("tauri.android.versionCode")
if (versionCodeProp != null) {
versionCodeProp.toIntOrNull()
?: error("Property \"tauri.android.versionCode\" must be an integer, but got \"$versionCodeProp\"")
} else {
val (major, minor, patch) = tauriVersionMatch.destructured
major.toInt() * 1_000_000 + minor.toInt() * 1_000 + patch.toInt()
}
} else {
val (major, minor, patch) = tauriVersionMatch.destructured
major.toInt() * 1_000_000 + minor.toInt() * 1_000 + patch.toInt()
}
android {
compileSdk = 34
namespace = "com.kkrainbow.easytier"
@@ -22,8 +52,8 @@ android {
applicationId = "com.kkrainbow.easytier"
minSdk = 24
targetSdk = 34
versionCode = tauriProperties.getProperty("tauri.android.versionCode", "1").toInt()
versionName = tauriProperties.getProperty("tauri.android.versionName", "1.0")
versionCode = tauriVersionCode
versionName = tauriVersionName
}
signingConfigs {
create("release") {
@@ -82,4 +112,4 @@ dependencies {
androidTestImplementation("androidx.test.espresso:espresso-core:3.5.0")
}
apply(from = "tauri.build.gradle.kts")
apply(from = "tauri.build.gradle.kts")
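The Gradle change above derives `versionCode` from a `major.minor.patch` string as `major * 1_000_000 + minor * 1_000 + patch` when no explicit code is supplied. The same mapping, sketched in Rust with a hypothetical `version_code` helper:

```rust
// Derive an Android-style versionCode from "x.y.z", mirroring the Gradle
// fallback: x*1_000_000 + y*1_000 + z. Returns None for malformed input.
fn version_code(version: &str) -> Option<u32> {
    let mut parts = version.split('.');
    let major: u32 = parts.next()?.parse().ok()?;
    let minor: u32 = parts.next()?.parse().ok()?;
    let patch: u32 = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None; // more than three components
    }
    Some(major * 1_000_000 + minor * 1_000 + patch)
}

fn main() {
    assert_eq!(version_code("2.6.4"), Some(2_006_004));
    assert_eq!(version_code("1.0"), None);
}
```

This packing keeps version codes monotonically increasing across releases as long as minor and patch stay below 1000, matching the regex guard `^(\d+)\.(\d+)\.(\d+)$` in the build script.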
@@ -4,7 +4,7 @@
*--------------------------------------------------------------------------------------------*/
use super::Command;
use anyhow::{anyhow, Result};
use anyhow::{Result, anyhow};
use std::env;
use std::ffi::OsStr;
use std::process::{Command as StdCommand, Output};
@@ -30,10 +30,10 @@ use std::os::unix::process::ExitStatusExt;
use std::path::Path;
use std::ptr;
use libc::{fileno, wait, EINTR, SHUT_WR};
use libc::{EINTR, SHUT_WR, fileno, wait};
use security_framework_sys::authorization::{
errAuthorizationSuccess, kAuthorizationFlagDefaults, kAuthorizationFlagDestroyRights,
AuthorizationCreate, AuthorizationExecuteWithPrivileges, AuthorizationFree, AuthorizationRef,
errAuthorizationSuccess, kAuthorizationFlagDefaults, kAuthorizationFlagDestroyRights,
};
const ENV_PATH: &str = "PATH";
@@ -11,11 +11,11 @@ use std::process::{ExitStatus, Output};
use winapi::shared::minwindef::{DWORD, LPVOID};
use winapi::um::processthreadsapi::{GetCurrentProcess, OpenProcessToken};
use winapi::um::securitybaseapi::GetTokenInformation;
use winapi::um::winnt::{TokenElevation, HANDLE, TOKEN_ELEVATION, TOKEN_QUERY};
use windows::core::{w, HSTRING, PCWSTR};
use winapi::um::winnt::{HANDLE, TOKEN_ELEVATION, TOKEN_QUERY, TokenElevation};
use windows::Win32::Foundation::HWND;
use windows::Win32::UI::Shell::ShellExecuteW;
use windows::Win32::UI::WindowsAndMessaging::SW_HIDE;
use windows::core::{HSTRING, PCWSTR, w};
/// The implementation of state check and elevated executing varies on each platform
impl Command {
@@ -15,16 +15,18 @@ use easytier::rpc_service::remote_client::{
use easytier::web_client::{self, WebClient};
use easytier::{
common::{
config::{ConfigLoader, FileLoggerConfig, LoggingConfigBuilder, TomlConfigLoader},
config::{
ConfigLoader, ConfigSource, FileLoggerConfig, LoggingConfigBuilder, TomlConfigLoader,
},
log,
},
instance_manager::NetworkInstanceManager,
launcher::NetworkConfig,
rpc_service::ApiRpcServer,
tunnel::TunnelListener,
tunnel::ring::RingTunnelListener,
tunnel::tcp::TcpTunnelListener,
tunnel::TunnelListener,
utils::{self},
utils::panic::setup_panic_handler,
};
use std::ops::Deref;
use std::sync::Arc;
@@ -118,7 +120,7 @@ async fn run_network_instance(
let client_manager = get_client_manager!()?;
let toml_config = cfg.gen_config().map_err(|e| e.to_string())?;
client_manager
.pre_run_network_instance_hook(&app, &toml_config)
.pre_run_network_instance_hook(&app, &toml_config, manager::PersistedConfigSource::User)
.await?;
client_manager
.handle_run_network_instance(app.clone(), cfg, save)
@@ -207,13 +209,17 @@ async fn update_network_config_state(
.map_err(|e: uuid::Error| e.to_string())?;
let client_manager = get_client_manager!()?;
if !disabled {
let cfg = client_manager
.handle_get_network_config(app.clone(), instance_id)
let (cfg, source) = client_manager
.handle_get_network_config_with_source(app.clone(), instance_id)
.await
.map_err(|e| e.to_string())?;
let toml_config = cfg.gen_config().map_err(|e| e.to_string())?;
client_manager
.pre_run_network_instance_hook(&app, &toml_config)
.pre_run_network_instance_hook(
&app,
&toml_config,
manager::PersistedConfigSource::from_runtime_source(source),
)
.await?;
}
client_manager
@@ -272,7 +278,7 @@ async fn get_config(app: AppHandle, instance_id: String) -> Result<NetworkConfig
#[tauri::command]
async fn load_configs(
app: AppHandle,
configs: Vec<NetworkConfig>,
configs: Vec<manager::StoredGuiConfig>,
enabled_networks: Vec<String>,
) -> Result<(), String> {
get_client_manager!()?
@@ -484,10 +490,18 @@ async fn init_web_client(app: AppHandle, url: Option<String>) -> Result<(), Stri
.ok_or_else(|| "Instance manager is not available".to_string())?;
let hooks = Arc::new(manager::GuiHooks { app: app.clone() });
let machine_id_state_dir = app
.path()
.app_data_dir()
.with_context(|| "Failed to resolve machine id state directory")
.map_err(|e| format!("{:#}", e))?;
let web_client = web_client::run_web_client(
url.as_str(),
None,
easytier::common::MachineIdOptions {
explicit_machine_id: None,
state_dir: Some(machine_id_state_dir),
},
None,
false,
instance_manager,
@@ -559,10 +573,10 @@ fn toggle_window_visibility(app: &tauri::AppHandle) {
}
fn get_exe_path() -> String {
if let Ok(appimage_path) = std::env::var("APPIMAGE") {
if !appimage_path.is_empty() {
return appimage_path;
}
if let Ok(appimage_path) = std::env::var("APPIMAGE")
&& !appimage_path.is_empty()
{
return appimage_path;
}
std::env::current_exe()
.map(|p| p.to_string_lossy().to_string())
@@ -596,8 +610,8 @@ mod manager {
use easytier::proto::rpc_types::controller::BaseController;
use easytier::rpc_service::logger::LoggerRpcService;
use easytier::rpc_service::remote_client::PersistentConfig;
use easytier::tunnel::ring::RingTunnelConnector;
use easytier::tunnel::TunnelConnector;
use easytier::tunnel::ring::RingTunnelConnector;
use easytier::web_client::WebClientHooks;
pub(super) struct GuiHooks {
@@ -612,7 +626,11 @@ mod manager {
) -> Result<(), String> {
let client_manager = get_client_manager!()?;
client_manager
.pre_run_network_instance_hook(&self.app, cfg)
.pre_run_network_instance_hook(
&self.app,
cfg,
PersistedConfigSource::from_runtime_source(cfg.get_network_config_source()),
)
.await
}
@@ -631,14 +649,87 @@ mod manager {
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "snake_case")]
#[derive(Default)]
pub(super) enum PersistedConfigSource {
User,
Webhook,
#[serde(other)]
#[default]
Legacy,
}
impl PersistedConfigSource {
pub(super) fn from_runtime_source(source: ConfigSource) -> Self {
match source {
ConfigSource::User => Self::User,
ConfigSource::Webhook => Self::Webhook,
}
}
fn merge_persisted(self, incoming: Self) -> Self {
match (self, incoming) {
// Older runtimes report missing source as `user`. Keep the stronger persisted
// ownership until webhook sync or an explicit user save repairs it.
(Self::Webhook, Self::User) | (Self::Legacy, Self::User) => self,
(_, next) => next,
}
}
fn to_runtime_source(self) -> ConfigSource {
match self {
Self::User | Self::Legacy => ConfigSource::User,
Self::Webhook => ConfigSource::Webhook,
}
}
#[cfg(any(test, target_os = "android"))]
fn is_webhook_like(self) -> bool {
matches!(self, Self::Webhook)
}
}
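The source-merge rule in the Rust hunk above can be mirrored standalone; this TypeScript sketch (function names are mine, not from the codebase) shows the same semantics: an incoming `user` never demotes a persisted `webhook` or `legacy` entry, because older runtimes report every source as `user`.

```typescript
type ConfigSource = 'user' | 'webhook' | 'legacy'

// Mirror of merge_persisted: keep the stronger persisted ownership when the
// incoming source is the ambiguous 'user'; any other incoming value wins.
function mergePersisted(current: ConfigSource, incoming: ConfigSource): ConfigSource {
  if (incoming === 'user' && (current === 'webhook' || current === 'legacy')) {
    return current
  }
  return incoming
}

// Mirror of to_runtime_source: the runtime only knows 'user' and 'webhook',
// so 'legacy' collapses to 'user'.
function toRuntimeSource(source: ConfigSource): 'user' | 'webhook' {
  return source === 'webhook' ? 'webhook' : 'user'
}
```

This matches the unit tests in the diff: `Legacy.merge_persisted(User)` stays `Legacy`, while `Legacy.merge_persisted(Webhook)` becomes `Webhook`.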
#[derive(Clone)]
pub(super) struct GUIConfig(String, pub(crate) NetworkConfig);
pub(super) struct GUIConfig {
inst_id: String,
pub(crate) config: NetworkConfig,
source: PersistedConfigSource,
}
#[derive(Clone, serde::Serialize, serde::Deserialize)]
pub(super) struct StoredGuiConfig {
config: NetworkConfig,
#[serde(default)]
source: PersistedConfigSource,
}
impl GUIConfig {
fn new(inst_id: String, config: NetworkConfig, source: PersistedConfigSource) -> Self {
Self {
inst_id,
config,
source,
}
}
fn into_stored(self) -> StoredGuiConfig {
StoredGuiConfig {
config: self.config,
source: self.source,
}
}
}
impl PersistentConfig<anyhow::Error> for GUIConfig {
fn get_network_inst_id(&self) -> &str {
&self.0
&self.inst_id
}
fn get_network_config(&self) -> Result<NetworkConfig, anyhow::Error> {
Ok(self.1.clone())
Ok(self.config.clone())
}
fn get_network_config_source(&self) -> ConfigSource {
self.source.to_runtime_source()
}
}
@@ -655,13 +746,12 @@ mod manager {
}
fn save_configs(&self, app: &AppHandle) -> anyhow::Result<()> {
let configs: Result<Vec<String>, _> = self
let configs = self
.network_configs
.iter()
.map(|entry| serde_json::to_string(&entry.value().1))
.collect();
let payload = format!("[{}]", configs?.join(","));
app.emit_str("save_configs", payload)?;
.map(|entry| entry.value().clone().into_stored())
.collect::<Vec<_>>();
app.emit("save_configs", configs)?;
Ok(())
}
@@ -680,8 +770,14 @@ mod manager {
app: &AppHandle,
inst_id: Uuid,
cfg: NetworkConfig,
source: PersistedConfigSource,
) -> anyhow::Result<()> {
let config = GUIConfig(inst_id.to_string(), cfg);
let source = self
.network_configs
.get(&inst_id)
.map(|existing| existing.source.merge_persisted(source))
.unwrap_or(source);
let config = GUIConfig::new(inst_id.to_string(), cfg, source);
self.network_configs.insert(inst_id, config);
self.save_configs(app)
}
@@ -693,8 +789,14 @@ mod manager {
app: AppHandle,
network_inst_id: Uuid,
network_config: NetworkConfig,
source: ConfigSource,
) -> Result<(), anyhow::Error> {
self.save_config(&app, network_inst_id, network_config)?;
self.save_config(
&app,
network_inst_id,
network_config,
PersistedConfigSource::from_runtime_source(source),
)?;
self.enabled_networks.insert(network_inst_id);
self.save_enabled_networks(&app)?;
Ok(())
@@ -811,17 +913,36 @@ mod manager {
.network_configs
.iter()
.filter(|v| self.storage.enabled_networks.contains(v.key()))
.filter(|v| !v.1.no_tun())
.filter_map(|c| c.1.instance_id().parse::<uuid::Uuid>().ok())
.filter(|v| !v.config.no_tun())
.filter_map(|c| c.config.instance_id().parse::<uuid::Uuid>().ok())
}
#[cfg(target_os = "android")]
pub fn get_enabled_instances_with_webhook_like_tun_ids(
&self,
) -> impl Iterator<Item = uuid::Uuid> + '_ {
self.storage
.network_configs
.iter()
.filter(|v| self.storage.enabled_networks.contains(v.key()))
.filter(|v| !v.config.no_tun())
.filter(|v| v.source.is_webhook_like())
.filter_map(|c| c.config.instance_id().parse::<uuid::Uuid>().ok())
}
#[cfg(target_os = "android")]
pub(super) async fn disable_instances_with_tun(
&self,
app: &AppHandle,
webhook_only: bool,
) -> Result<(), easytier::rpc_service::remote_client::RemoteClientError<anyhow::Error>>
{
let inst_ids: Vec<uuid::Uuid> = self.get_enabled_instances_with_tun_ids().collect();
let inst_ids: Vec<uuid::Uuid> = if webhook_only {
self.get_enabled_instances_with_webhook_like_tun_ids()
.collect()
} else {
self.get_enabled_instances_with_tun_ids().collect()
};
for inst_id in inst_ids {
self.handle_update_network_state(app.clone(), inst_id, true)
.await?;
@@ -842,6 +963,7 @@ mod manager {
&self,
app: &AppHandle,
cfg: &easytier::common::config::TomlConfigLoader,
source: PersistedConfigSource,
) -> Result<(), String> {
let instance_id = cfg.get_id();
app.emit("pre_run_network_instance", instance_id.to_string())
@@ -849,9 +971,24 @@ mod manager {
#[cfg(target_os = "android")]
if !cfg.get_flags().no_tun {
self.disable_instances_with_tun(app)
.await
.map_err(|e| e.to_string())?;
match source {
PersistedConfigSource::User | PersistedConfigSource::Legacy => {
self.disable_instances_with_tun(app, false)
.await
.map_err(|e| e.to_string())?;
}
PersistedConfigSource::Webhook => {
self.disable_instances_with_tun(app, true)
.await
.map_err(|e| e.to_string())?;
if self.get_enabled_instances_with_tun_ids().next().is_some() {
return Err(
"Android only supports one active TUN network; user-managed VPN remains active"
.to_string(),
);
}
}
}
}
self.storage
@@ -859,6 +996,7 @@ mod manager {
app,
instance_id,
NetworkConfig::new_from_config(cfg).map_err(|e| e.to_string())?,
source,
)
.map_err(|e| e.to_string())?;
@@ -962,15 +1100,15 @@ mod manager {
pub(super) async fn load_configs(
&self,
app: AppHandle,
configs: Vec<NetworkConfig>,
configs: Vec<StoredGuiConfig>,
enabled_networks: Vec<String>,
) -> anyhow::Result<()> {
self.storage.network_configs.clear();
for cfg in configs {
let instance_id = cfg.instance_id();
for stored in configs {
let instance_id = stored.config.instance_id();
self.storage.network_configs.insert(
instance_id.parse()?,
GUIConfig(instance_id.to_string(), cfg),
GUIConfig::new(instance_id.to_string(), stored.config, stored.source),
);
}
@@ -979,34 +1117,35 @@ mod manager {
.get_rpc_client(app.clone())
.ok_or_else(|| anyhow::anyhow!("RPC client not found"))?;
for id in enabled_networks {
if let Ok(uuid) = id.parse() {
if !self.storage.enabled_networks.contains(&uuid) {
let config = self
.storage
.network_configs
.get(&uuid)
.map(|i| i.value().1.clone());
let Some(config) = config else {
continue;
};
let toml_config = config.gen_config()?;
self.pre_run_network_instance_hook(&app, &toml_config)
.await
.map_err(|e| anyhow::anyhow!(e))?;
client
.run_network_instance(
BaseController::default(),
RunNetworkInstanceRequest {
inst_id: None,
config: Some(config),
overwrite: false,
},
)
.await?;
self.post_run_network_instance_hook(&app, &uuid)
.await
.map_err(|e| anyhow::anyhow!(e))?;
}
if let Ok(uuid) = id.parse()
&& !self.storage.enabled_networks.contains(&uuid)
{
let config = self
.storage
.network_configs
.get(&uuid)
.map(|i| (i.value().config.clone(), i.value().source));
let Some((config, source)) = config else {
continue;
};
let toml_config = config.gen_config()?;
self.pre_run_network_instance_hook(&app, &toml_config, source)
.await
.map_err(|e| anyhow::anyhow!(e))?;
client
.run_network_instance(
BaseController::default(),
RunNetworkInstanceRequest {
inst_id: None,
config: Some(config),
overwrite: false,
source: source.to_runtime_source().to_rpc(),
},
)
.await?;
self.post_run_network_instance_hook(&app, &uuid)
.await
.map_err(|e| anyhow::anyhow!(e))?;
}
}
Ok(())
@@ -1032,6 +1171,44 @@ mod manager {
&self.storage
}
}
#[cfg(test)]
mod tests {
use super::{PersistedConfigSource, StoredGuiConfig};
use easytier::proto::api::manage::NetworkConfig;
#[test]
fn stored_gui_config_defaults_missing_source_to_legacy() {
let stored: StoredGuiConfig = serde_json::from_value(serde_json::json!({
"config": NetworkConfig::default(),
}))
.unwrap();
assert_eq!(stored.source, PersistedConfigSource::Legacy);
}
#[test]
fn persisted_source_merge_keeps_legacy_and_webhook_over_ambiguous_user() {
assert_eq!(
PersistedConfigSource::Legacy.merge_persisted(PersistedConfigSource::User),
PersistedConfigSource::Legacy
);
assert_eq!(
PersistedConfigSource::Webhook.merge_persisted(PersistedConfigSource::User),
PersistedConfigSource::Webhook
);
assert_eq!(
PersistedConfigSource::Legacy.merge_persisted(PersistedConfigSource::Webhook),
PersistedConfigSource::Webhook
);
}
#[test]
fn only_webhook_configs_are_webhook_like() {
assert!(!PersistedConfigSource::Legacy.is_webhook_like());
assert!(!PersistedConfigSource::User.is_webhook_like());
assert!(PersistedConfigSource::Webhook.is_webhook_like());
}
}
}
#[cfg(not(target_os = "android"))]
@@ -1120,7 +1297,7 @@ pub fn run_gui() -> std::process::ExitCode {
process::exit(0);
}
utils::setup_panic_handler();
setup_panic_handler();
let mut builder = tauri::Builder::default();
+2 -2
@@ -17,7 +17,7 @@
"createUpdaterArtifacts": false
},
"productName": "easytier-gui",
"version": "2.6.0",
"version": "2.6.4",
"identifier": "com.kkrainbow.easytier",
"plugins": {
"shell": {
@@ -36,4 +36,4 @@
"csp": null
}
}
}
}
+39 -2
@@ -6,6 +6,7 @@ import { GetNetworkMetasResponse } from 'node_modules/easytier-frontend-lib/dist
type NetworkConfig = NetworkTypes.NetworkConfig
type ValidateConfigResponse = Api.ValidateConfigResponse
type ListNetworkInstanceIdResponse = Api.ListNetworkInstanceIdResponse
type ConfigSource = 'user' | 'webhook' | 'legacy'
interface ServiceOptions {
config_dir: string
rpc_portal: string
@@ -16,6 +17,39 @@ interface ServiceOptions {
export type ServiceStatus = "Running" | "Stopped" | "NotInstalled"
interface StoredGuiConfig {
config: NetworkConfig
source: ConfigSource
}
function parseStoredConfigs(raw: string | null): StoredGuiConfig[] {
const parsed: unknown = JSON.parse(raw || '[]')
if (!Array.isArray(parsed)) {
return []
}
return parsed.flatMap((entry): StoredGuiConfig[] => {
if (entry && typeof entry === 'object' && 'config' in entry) {
const { config, source } = entry as {
config?: NetworkConfig
source?: ConfigSource
}
if (!config) {
return []
}
return [{
config: NetworkTypes.normalizeNetworkConfig(config),
source: source === 'user' || source === 'webhook' ? source : 'legacy',
}]
}
return [{
config: NetworkTypes.normalizeNetworkConfig(entry as NetworkConfig),
source: 'legacy',
}]
})
}
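The migration rule in `parseStoredConfigs` can be exercised in isolation; this simplified sketch treats `normalizeNetworkConfig` as the identity (an assumption, to stay self-contained) and keeps only the shape logic: new-style entries carry an explicit source with unknown values falling back to `'legacy'`, and bare configs from older releases are wrapped and tagged `'legacy'`.

```typescript
type ConfigSource = 'user' | 'webhook' | 'legacy'

interface StoredEntry {
  config: unknown
  source: ConfigSource
}

// Simplified standalone version of the migration above (normalization elided).
function migrateStoredConfigs(raw: string | null): StoredEntry[] {
  const parsed: unknown = JSON.parse(raw || '[]')
  if (!Array.isArray(parsed)) {
    return []
  }
  return parsed.flatMap((entry): StoredEntry[] => {
    if (entry && typeof entry === 'object' && 'config' in entry) {
      // New-style wrapped entry: keep a recognized source, else 'legacy'.
      const { config, source } = entry as { config?: unknown, source?: string }
      if (!config) {
        return []
      }
      return [{ config, source: source === 'user' || source === 'webhook' ? source : 'legacy' }]
    }
    // Old-style bare NetworkConfig: wrap it and tag as 'legacy'.
    return [{ config: entry, source: 'legacy' }]
  })
}
```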
export async function parseNetworkConfig(cfg: NetworkConfig) {
return invoke<string>('parse_network_config', { cfg: NetworkTypes.toBackendNetworkConfig(cfg) })
}
@@ -71,9 +105,12 @@ export async function getConfig(instanceId: string) {
}
export async function sendConfigs(enabledNetworks: string[]) {
const networkList: NetworkConfig[] = JSON.parse(localStorage.getItem('networkList') || '[]');
const networkList = parseStoredConfigs(localStorage.getItem('networkList'))
return await invoke('load_configs', {
configs: networkList.map((config) => NetworkTypes.toBackendNetworkConfig(NetworkTypes.normalizeNetworkConfig(config))),
configs: networkList.map(({ config, source }) => ({
config: NetworkTypes.toBackendNetworkConfig(config),
source,
})),
enabledNetworks
})
}
+13 -2
@@ -3,6 +3,11 @@ import { type } from "@tauri-apps/plugin-os";
import { NetworkTypes } from "easytier-frontend-lib"
import { Utils } from "easytier-frontend-lib";
interface StoredGuiConfig {
config: NetworkTypes.NetworkConfig
source?: 'user' | 'webhook' | 'legacy'
}
const EVENTS = Object.freeze({
SAVE_CONFIGS: 'save_configs',
PRE_RUN_NETWORK_INSTANCE: 'pre_run_network_instance',
@@ -13,9 +18,15 @@ const EVENTS = Object.freeze({
EVENT_LAGGED: 'event_lagged',
});
function onSaveConfigs(event: Event<NetworkTypes.NetworkConfig[]>) {
function onSaveConfigs(event: Event<StoredGuiConfig[]>) {
console.log(`Received event '${EVENTS.SAVE_CONFIGS}': ${event.payload}`);
localStorage.setItem('networkList', JSON.stringify(event.payload.map((config) => NetworkTypes.normalizeNetworkConfig(config))));
localStorage.setItem(
'networkList',
JSON.stringify(event.payload.map(({ config, source }) => ({
config: NetworkTypes.normalizeNetworkConfig(config),
source: source ?? 'legacy',
}))),
);
}
function normalizeInstanceIdPayload(payload: unknown): string {
+1
@@ -18,6 +18,7 @@ export interface ServiceMode extends WebClientConfig {
rpc_portal: string
file_log_level: 'off' | 'warn' | 'info' | 'debug' | 'trace'
file_log_dir: string
installed_core_version?: string
}
export interface RemoteMode {
+20 -3
@@ -16,7 +16,7 @@ import { useToast, useConfirm } from 'primevue'
import { loadMode, saveMode, WebClientConfig, type Mode } from '~/composables/mode'
import { saveLastNetworkInstanceId, loadLastNetworkInstanceId } from '~/composables/config'
import ModeSwitcher from '~/components/ModeSwitcher.vue'
import { getServiceStatus } from '~/composables/backend'
import { getEasytierVersion, getServiceStatus } from '~/composables/backend'
const { t, locale } = useI18n()
const confirm = useConfirm()
@@ -85,6 +85,20 @@ async function onUninstallService() {
});
}
function stripModeMetadata(mode: Mode) {
if (mode.mode !== 'service') {
return mode
}
const serviceConfig = { ...mode }
delete serviceConfig.installed_core_version
return serviceConfig
}
function modeConfigChanged(next: Mode) {
return JSON.stringify(stripModeMetadata(next)) !== JSON.stringify(stripModeMetadata(currentMode.value))
}
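The comparison above treats `installed_core_version` as bookkeeping rather than configuration, so it is stripped before deciding whether the service needs to be reinstalled. A minimal sketch of that idea, with the mode shape reduced to the fields relevant here (the real `Mode` type carries more):

```typescript
interface ServiceMode {
  mode: 'service'
  rpc_portal: string
  installed_core_version?: string
}
type Mode = ServiceMode | { mode: 'normal', rpc_portal: string }

// Drop metadata that must not trigger a reinstall.
function stripMeta(mode: Mode): Mode {
  if (mode.mode !== 'service') {
    return mode
  }
  const { installed_core_version, ...rest } = mode
  return rest
}

function configChanged(next: Mode, current: Mode): boolean {
  return JSON.stringify(stripMeta(next)) !== JSON.stringify(stripMeta(current))
}
```

With this split, a core-version bump alone goes through the separate `installed_core_version !== coreVersion` check rather than masquerading as a config change.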
async function onStopService() {
isModeSaving.value = true
manualDisconnect.value = true
@@ -134,13 +148,14 @@ async function initWithMode(mode: Mode) {
}
url = mode.remote_rpc_address
break;
case 'service':
case 'service': {
if (!mode.config_dir || !mode.file_log_dir || !mode.file_log_level || !mode.rpc_portal) {
toast.add({ severity: 'error', summary: t('error'), detail: t('mode.service_config_empty'), life: 10000 })
return initWithMode({ ...mode, mode: 'normal' });
}
let serviceStatus = await getServiceStatus()
if (serviceStatus === "NotInstalled" || JSON.stringify(mode) !== JSON.stringify(currentMode.value)) {
const coreVersion = await getEasytierVersion()
if (serviceStatus === "NotInstalled" || modeConfigChanged(mode) || mode.installed_core_version !== coreVersion) {
mode.config_server_url = mode.config_server_url || undefined
await initService({
config_dir: mode.config_dir,
@@ -149,6 +164,7 @@ async function initWithMode(mode: Mode) {
rpc_portal: mode.rpc_portal,
config_server: mode.config_server_url,
})
mode.installed_core_version = coreVersion
serviceStatus = await getServiceStatus()
}
if (serviceStatus === "Stopped") {
@@ -157,6 +173,7 @@ async function initWithMode(mode: Mode) {
url = "tcp://" + mode.rpc_portal.replace("0.0.0.0", "127.0.0.1")
retrys = 5
break;
}
case 'normal':
url = mode.rpc_portal;
break;
+1 -2
@@ -2,13 +2,12 @@
name = "easytier-rpc-build"
description = "Protobuf RPC Service Generator for EasyTier"
version = "0.1.0"
edition = "2021"
edition.workspace = true
homepage = "https://github.com/EasyTier/EasyTier"
repository = "https://github.com/EasyTier/EasyTier"
authors = ["kkrainbow"]
keywords = ["vpn", "p2p", "network", "easytier"]
categories = ["network-programming", "command-line-utilities"]
rust-version = "1.93.0"
license-file = "LICENSE"
readme = "README.md"
+8 -9
@@ -1,7 +1,7 @@
[package]
name = "easytier-web"
version = "2.6.0"
edition = "2021"
version = "2.6.4"
edition.workspace = true
description = "Config server for easytier. easytier-core gets its config from this, and the web frontend uses it as a RESTful API server."
[dependencies]
@@ -10,6 +10,7 @@ tracing = { version = "0.1", features = ["log"] }
anyhow = { version = "1.0" }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-util = { version = "0.7", features = ["rt"] }
dashmap = "6.1"
url = "2.2"
async-trait = "0.1"
@@ -69,13 +70,11 @@ subtle = "2.6"
mimalloc = { version = "*" }
[build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = [
"win7",
] }
[features]
default = []
embed = ["dep:axum-embed"]
# enable thunk-rs when compiling for x86_64 or i686 windows
[target.x86_64-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
[target.i686-pc-windows-msvc.build-dependencies]
thunk-rs = { git = "https://github.com/easytier/thunk.git", default-features = false, features = ["win7"] }
+5 -5
@@ -1,10 +1,10 @@
use std::env;
fn main() {
let target_os = env::var("CARGO_CFG_TARGET_OS").unwrap_or_default();
let target_arch = env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default();
// enable thunk-rs when target os is windows and arch is x86_64 or i686
#[cfg(target_os = "windows")]
if !std::env::var("TARGET")
.unwrap_or_default()
.contains("aarch64")
{
if target_os == "windows" && (target_arch == "x86" || target_arch == "x86_64") {
thunk::thunk();
}
}
@@ -1,7 +1,7 @@
<script setup lang="ts">
import { AutoComplete, Button, Checkbox, Dialog, Divider, InputNumber, InputText, Panel, Password, SelectButton, ToggleButton } from 'primevue'
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { Checkbox, InputText, InputNumber, AutoComplete, Panel, Divider, ToggleButton, Button, Password, Dialog } from 'primevue'
import {
addRow,
DEFAULT_NETWORK_CONFIG,
@@ -11,6 +11,7 @@ import {
} from '../types/network'
import { ref, onMounted, onUnmounted, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import AclManager from './acl/AclManager.vue'
import UrlListInput from './UrlListInput.vue'
const props = defineProps<{
@@ -80,6 +81,7 @@ const bool_flags: BoolFlag[] = [
{ field: 'latency_first', help: 'latency_first_help' },
{ field: 'use_smoltcp', help: 'use_smoltcp_help' },
{ field: 'disable_ipv6', help: 'disable_ipv6_help' },
{ field: 'ipv6_public_addr_auto', help: 'ipv6_public_addr_auto_help' },
{ field: 'enable_kcp_proxy', help: 'enable_kcp_proxy_help' },
{ field: 'disable_kcp_input', help: 'disable_kcp_input_help' },
{ field: 'enable_quic_proxy', help: 'enable_quic_proxy_help' },
@@ -97,6 +99,7 @@ const bool_flags: BoolFlag[] = [
{ field: 'disable_encryption', help: 'disable_encryption_help' },
{ field: 'disable_tcp_hole_punching', help: 'disable_tcp_hole_punching_help' },
{ field: 'disable_udp_hole_punching', help: 'disable_udp_hole_punching_help' },
{ field: 'disable_upnp', help: 'disable_upnp_help' },
{ field: 'disable_sym_hole_punching', help: 'disable_sym_hole_punching_help' },
{ field: 'enable_magic_dns', help: 'enable_magic_dns_help' },
{ field: 'enable_private_mode', help: 'enable_private_mode_help' },
@@ -488,6 +491,18 @@ watch(() => curNetwork.value, syncNormalizedNetwork, { immediate: true, deep: fa
</div>
</Panel>
<Divider />
<Panel :header="t('acl.title')" toggleable collapsed>
<div v-if="curNetwork.acl" class="flex flex-col gap-y-2">
<AclManager v-model="curNetwork.acl" />
</div>
<div v-else class="flex justify-center p-4">
<Button :label="t('acl.enabled')"
@click="curNetwork.acl = { acl_v1: { chains: [], group: { declares: [], members: [] } } }" />
</div>
</Panel>
<div class="flex pt-6 justify-center">
<Button :label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)" />
@@ -2,7 +2,7 @@
import { AutoComplete, Button, Dialog, InputNumber, InputText } from 'primevue'
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { computed, onMounted, onUnmounted, ref, watch } from 'vue'
import { computed, ref, watch } from 'vue'
import { useI18n } from 'vue-i18n'
const props = defineProps<{
@@ -13,26 +13,9 @@ const props = defineProps<{
const { t } = useI18n()
const url = defineModel<string>({ required: true })
const editing = ref(false)
const container = ref<HTMLElement | null>(null)
const internalCompact = ref(false)
const hostFocused = ref(false)
onMounted(() => {
if (container.value) {
const observer = new ResizeObserver(entries => {
for (const entry of entries) {
internalCompact.value = entry.contentRect.width < 400
}
})
observer.observe(container.value)
onUnmounted(() => {
observer.disconnect()
})
}
})
const parseUrl = (val: string | null | undefined) => {
const parseUrl = (val: string | null | undefined): { proto: string; host: string; port: number | null } => {
const getValidPort = (portStr: string, proto: string) => {
const p = parseInt(portStr)
return isNaN(p) ? (props.protos[proto] ?? 11010) : p
@@ -55,13 +38,16 @@ const parseUrl = (val: string | null | undefined) => {
if (ipv6End > 0) {
const host = hostAndMaybePort.slice(0, ipv6End + 1)
const remain = hostAndMaybePort.slice(ipv6End + 1)
const port = remain.startsWith(':') ? getValidPort(remain.slice(1), proto) : (props.protos[proto] ?? 11010)
// null = no explicit port in URL; do not fabricate a default
const port: number | null = remain.startsWith(':') ? getValidPort(remain.slice(1), proto) : null
return { proto, host, port }
}
}
const portMatch = hostAndMaybePort.match(/^(.*):(\d+)$/)
const host = portMatch ? portMatch[1] : hostAndMaybePort
const port = portMatch ? parseInt(portMatch[2]) : (props.protos[proto] ?? 11010)
// null = no explicit port in URL; buildUrlValue will omit the port entirely,
// preserving the protocol's implied standard port (e.g. 443 for wss://).
const port: number | null = portMatch ? parseInt(portMatch[2]) : null
return { proto, host, port }
}
@@ -72,28 +58,26 @@ const parseUrl = (val: string | null | undefined) => {
if (parsedByPattern) {
return parsedByPattern
}
return { proto: 'tcp', host: '', port: 11010 }
return { proto: 'tcp', host: '', port: null }
}
const internalValue = ref(parseUrl(url.value))
const defaultHost = '0.0.0.0'
const buildUrlValue = (value: { proto: string, host: string, port: number }, forceDefaultHost = false) => {
const buildUrlValue = (value: { proto: string, host: string, port: number | null }, forceDefaultHost = false) => {
const proto = value.proto || 'tcp'
const rawHost = (value.host ?? '').trim()
const host = rawHost || (forceDefaultHost ? defaultHost : '')
if (!host) {
return null
}
let port = value.port
if (isNaN(parseInt(port as any))) {
port = props.protos[proto] ?? 11010
}
if (props.protos[proto] === 0) {
// Omit the port when the protocol uses no port (protos value = 0), or when the
// original URL had no explicit port (port === null); this avoids overwriting an
// implicit standard port (e.g. 443 for wss) with an EasyTier default (11012).
if (props.protos[proto] === 0 || value.port === null) {
return `${proto}://${host}`
}
return `${proto}://${host}:${port}`
return `${proto}://${host}:${value.port}`
}
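The port-handling change above can be reduced to one rule: a URL parsed without an explicit port keeps `port = null`, and rebuilding such a URL omits the port entirely so the protocol's implied standard port survives. A condensed sketch (the `protos` map values are illustrative, matching the defaults mentioned in the diff):

```typescript
// protos maps protocol -> default port; 0 marks a portless protocol.
const protos: Record<string, number> = { tcp: 11010, wss: 11012, ring: 0 }

function buildUrl(proto: string, host: string, port: number | null): string | null {
  if (!host) {
    return null
  }
  // Omit the port for portless protocols, or when the original URL had none:
  // wss://example.com keeps its implied 443 instead of gaining :11012.
  if (protos[proto] === 0 || port === null) {
    return `${proto}://${host}`
  }
  return `${proto}://${host}:${port}`
}
```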
const syncUrlFromInternal = (forceDefaultHost = false) => {
@@ -168,27 +152,30 @@ const onProtoChange = (newProto: string) => {
</script>
<template>
<div ref="container" class="w-full">
<InputGroup v-if="!internalCompact" class="w-full">
<div class="url-input-container w-full min-w-0 overflow-hidden">
<InputGroup class="url-input-full w-full min-w-0">
<AutoComplete :model-value="internalValue.proto" :suggestions="filteredProtos" dropdown
class="max-w-32 proto-autocomplete-in-group" @complete="searchProtos"
@update:model-value="onProtoChange" />
<InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="grow"
<InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="grow min-w-0"
@focus="onHostFocus" @blur="onHostBlur" />
<template v-if="!isNoPortProto">
<InputGroupAddon>
<span style="font-weight: bold">:</span>
</InputGroupAddon>
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="max-w-24"
fluid />
:placeholder="String(protos[internalValue.proto] ?? 11010)" fluid />
</template>
<!-- Rendered in both responsive branches; keep action slot content free of side effects and duplicate IDs. -->
<slot name="actions"></slot>
</InputGroup>
<div v-else class="flex justify-between items-center p-2 border rounded w-full">
<span class="truncate mr-2">{{ url }}</span>
<div class="flex items-center">
<Button icon="pi pi-pencil" class="p-button-sm p-button-text" @click="editing = true" />
<div
class="url-input-compact flex justify-between items-center p-2 border rounded w-full min-w-0 overflow-hidden">
<span class="truncate mr-2 min-w-0 flex-1 overflow-hidden">{{ url }}</span>
<div class="flex items-center shrink-0">
<Button icon="pi pi-pencil" class="p-button-sm p-button-text" :aria-label="t('web.common.edit')"
@click="editing = true" />
<slot name="actions"></slot>
</div>
</div>
@@ -207,7 +194,8 @@ const onProtoChange = (newProto: string) => {
</div>
<div v-if="!isNoPortProto" class="flex flex-col gap-2">
<label>{{ t('port') }}</label>
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="w-full" />
<InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="w-full"
:placeholder="String(protos[internalValue.proto] ?? 11010)" />
</div>
</div>
<template #footer>
@@ -219,6 +207,28 @@ const onProtoChange = (newProto: string) => {
</template>
<style scoped>
.url-input-container {
container-type: inline-size;
}
.url-input-full {
display: none;
}
.url-input-compact {
display: flex;
}
@container (min-width: 400px) {
.url-input-full {
display: flex;
}
.url-input-compact {
display: none;
}
}
.proto-autocomplete-in-group,
.proto-autocomplete-in-group :deep(.p-autocomplete-input),
.proto-autocomplete-in-group :deep(.p-autocomplete-dropdown) {
@@ -0,0 +1,218 @@
<script setup lang="ts">
import { Button, Column, DataTable, Divider, InputText, Select, SelectButton, ToggleButton } from 'primevue'
import { ref, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import { AclAction, AclChain, AclChainType, AclProtocol, AclRule } from '../../types/network'
import AclRuleDialog from './AclRuleDialog.vue'
const props = defineProps<{
groupNames?: string[]
}>()
const chain = defineModel<AclChain>({ required: true })
const { t } = useI18n()
watch(() => chain.value.rules, (newRules) => {
if (!newRules) return
const isSorted = newRules.every((rule, i) => i === 0 || (rule.priority || 0) <= (newRules[i - 1].priority || 0))
if (!isSorted) {
chain.value.rules.sort((a, b) => (b.priority || 0) - (a.priority || 0))
}
}, { deep: true, immediate: true })
const actionOptions = [
{ label: () => t('acl.allow'), value: AclAction.Allow },
{ label: () => t('acl.drop'), value: AclAction.Drop },
]
const chainTypeOptions = [
{ label: () => t('acl.inbound'), value: AclChainType.Inbound },
{ label: () => t('acl.outbound'), value: AclChainType.Outbound },
{ label: () => t('acl.forward'), value: AclChainType.Forward },
]
const editingRule = ref<AclRule | null>(null)
const editingRuleIndex = ref(-1)
const showRuleDialog = ref(false)
function getProtocolLabel(proto: AclProtocol) {
switch (proto) {
case AclProtocol.Any: return t('acl.any')
case AclProtocol.TCP: return 'TCP'
case AclProtocol.UDP: return 'UDP'
case AclProtocol.ICMP: return 'ICMP'
case AclProtocol.ICMPv6: return 'ICMPv6'
default: return t('event.Unknown')
}
}
function getActionLabel(action: AclAction) {
switch (action) {
case AclAction.Allow: return t('acl.allow')
case AclAction.Drop: return t('acl.drop')
default: return t('event.Unknown')
}
}
function addRule() {
editingRuleIndex.value = -1
editingRule.value = {
name: '',
description: '',
priority: chain.value.rules.length,
enabled: true,
protocol: AclProtocol.Any,
ports: [],
source_ips: [],
destination_ips: [],
source_ports: [],
action: AclAction.Allow,
rate_limit: 0,
burst_limit: 0,
stateful: false,
source_groups: [],
destination_groups: [],
}
showRuleDialog.value = true
}
function editRule(index: number) {
editingRuleIndex.value = index
editingRule.value = JSON.parse(JSON.stringify(chain.value.rules[index]))
showRuleDialog.value = true
}
function deleteRule(index: number) {
chain.value.rules.splice(index, 1)
}
function saveRule(rule: AclRule) {
if (editingRuleIndex.value === -1) {
chain.value.rules.push(rule)
} else {
chain.value.rules[editingRuleIndex.value] = rule
}
chain.value.rules.sort((a, b) => (b.priority || 0) - (a.priority || 0))
}
function onRowReorder(event: any) {
chain.value.rules = event.value
// Update priorities based on new order (higher priority at top)
chain.value.rules.forEach((rule, index) => {
rule.priority = chain.value.rules.length - index - 1
})
}
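The reorder handler above reassigns priorities so the top row ends up with the highest value and the bottom row with 0, which keeps the list consistent with the descending sort the watcher enforces. The core of that rule, extracted into a standalone sketch:

```typescript
interface Rule {
  name: string
  priority: number
}

// After a drag-and-drop, priorities descend from (length - 1) at the top to 0
// at the bottom, regardless of what they were before.
function reassignPriorities(rules: Rule[]): Rule[] {
  rules.forEach((rule, index) => {
    rule.priority = rules.length - index - 1
  })
  return rules
}
```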
</script>
<template>
<div class="flex flex-col gap-6">
<!-- Chain Metadata Section -->
<div
class="grid grid-cols-1 md:grid-cols-2 gap-4 p-4 bg-gray-50 rounded-lg border border-gray-200 dark:bg-gray-900 dark:border-gray-700">
<div class="flex flex-col gap-2">
<label class="font-bold text-sm">{{ t('acl.chain.name') }}</label>
<InputText v-model="chain.name" size="small" />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold text-sm">{{ t('acl.rule.description') }}</label>
<InputText v-model="chain.description" size="small" />
</div>
<div class="flex items-center gap-6 col-span-full border-t pt-2 mt-2 dark:border-gray-700">
<div class="flex items-center gap-2">
<label class="font-bold text-sm">{{ t('acl.rule.enabled') }}</label>
<ToggleButton v-model="chain.enabled" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('web.common.enable')" :off-label="t('web.common.disable')" class="w-24" />
</div>
<div class="flex items-center gap-2">
<label class="font-bold text-sm">{{ t('acl.chain.type') }}</label>
<Select v-model="chain.chain_type" :options="chainTypeOptions" :option-label="opt => opt.label()"
option-value="value" size="small" class="w-40" />
</div>
<div class="flex items-center gap-2 ml-auto">
<label class="font-bold text-sm">{{ t('acl.default_action') }}</label>
<SelectButton v-model="chain.default_action" :options="actionOptions" :option-label="opt => opt.label()"
option-value="value" :allow-empty="false" />
</div>
</div>
</div>
<div class="flex flex-row items-center gap-4 justify-between">
<h4 class="text-md font-bold">{{ t('acl.rules') }}</h4>
<Button icon="pi pi-plus" :label="t('acl.add_rule')" severity="success" size="small" @click="addRule" />
</div>
<DataTable :value="chain.rules" @row-reorder="onRowReorder" responsiveLayout="scroll">
<Column rowReorder headerStyle="width: 3rem" />
<Column field="enabled" :header="t('acl.rule.enabled')">
<template #body="{ data }">
<i class="pi" :class="data.enabled ? 'pi-check-circle text-green-500' : 'pi-times-circle text-red-500'"></i>
</template>
</Column>
<Column field="name" :header="t('acl.rule.name')" />
<Column :header="t('acl.match')">
<template #body="{ data }">
<div class="flex flex-col gap-2 py-1">
<div class="flex items-center gap-2">
<span
class="px-2 py-0.5 bg-blue-100 text-blue-700 dark:bg-blue-900/30 dark:text-blue-400 rounded-md text-[10px] font-bold uppercase tracking-wider">
{{ getProtocolLabel(data.protocol) }}
</span>
</div>
<div class="flex flex-col sm:flex-row sm:items-center gap-1 sm:gap-3">
<div class="flex items-center gap-1.5 min-w-0">
<span class="text-[10px] font-bold text-gray-400 uppercase w-7">Src</span>
<div class="flex flex-wrap gap-1 items-center overflow-hidden">
<span v-for="ip in data.source_ips" :key="ip"
class="font-mono text-xs bg-surface-100 dark:bg-surface-800 px-1.5 py-0.5 rounded">{{ ip }}</span>
<span v-for="grp in data.source_groups" :key="grp"
class="text-xs font-bold text-purple-600 dark:text-purple-400">@{{ grp }}</span>
<span v-if="data.source_ports.length" class="text-xs text-blue-600 dark:text-blue-400 font-mono">:{{
data.source_ports.join(',') }}</span>
<span v-if="!data.source_ips.length && !data.source_groups.length" class="text-gray-400">*</span>
</div>
</div>
<i class="pi pi-arrow-right hidden sm:block text-gray-300 text-xs"></i>
<Divider layout="horizontal" class="sm:hidden my-1" />
<div class="flex items-center gap-1.5 min-w-0">
<span class="text-[10px] font-bold text-gray-400 uppercase w-7">Dst</span>
<div class="flex flex-wrap gap-1 items-center overflow-hidden">
<span v-for="ip in data.destination_ips" :key="ip"
class="font-mono text-xs bg-surface-100 dark:bg-surface-800 px-1.5 py-0.5 rounded">{{ ip }}</span>
<span v-for="grp in data.destination_groups" :key="grp"
class="text-xs font-bold text-purple-600 dark:text-purple-400">@{{ grp }}</span>
<span v-if="data.ports.length" class="text-xs text-blue-600 dark:text-blue-400 font-mono">:{{
data.ports.join(',') }}</span>
<span v-if="!data.destination_ips.length && !data.destination_groups.length"
class="text-gray-400">*</span>
</div>
</div>
</div>
</div>
</template>
</Column>
<Column field="action" :header="t('acl.rule.action')">
<template #body="{ data }">
<span :class="data.action === AclAction.Allow ? 'text-green-600' : 'text-red-600 font-bold'">
{{ getActionLabel(data.action) }}
</span>
</template>
</Column>
<Column :header="t('web.common.edit')">
<template #body="{ index }">
<div class="flex gap-2">
<Button icon="pi pi-pencil" text rounded @click="editRule(index)" />
<Button icon="pi pi-trash" severity="danger" text rounded @click="deleteRule(index)" />
</div>
</template>
</Column>
</DataTable>
<AclRuleDialog v-if="showRuleDialog && editingRule" v-model:visible="showRuleDialog" v-model:rule="editingRule"
:group-names="props.groupNames" @save="saveRule" />
</div>
</template>
@@ -0,0 +1,115 @@
<script setup lang="ts">
import { Button, Column, DataTable, Dialog, InputText, MultiSelect, Password } from 'primevue';
import { ref } from 'vue';
import { useI18n } from 'vue-i18n';
import { GroupIdentity, GroupInfo } from '../../types/network';
const props = defineProps<{
groupNames?: string[]
}>()
const group = defineModel<GroupInfo>({ required: true })
const emit = defineEmits(['rename-group'])
const { t } = useI18n()
const editingGroup = ref<GroupIdentity | null>(null)
const editingGroupIndex = ref(-1)
const showGroupDialog = ref(false)
const oldGroupName = ref('')
function addGroup() {
editingGroupIndex.value = -1
editingGroup.value = {
group_name: '',
group_secret: '',
}
oldGroupName.value = ''
showGroupDialog.value = true
}
function editGroup(index: number) {
editingGroupIndex.value = index
editingGroup.value = JSON.parse(JSON.stringify(group.value.declares[index]))
oldGroupName.value = editingGroup.value?.group_name || ''
showGroupDialog.value = true
}
function deleteGroup(index: number) {
group.value.declares.splice(index, 1)
}
function saveGroup() {
if (!editingGroup.value) return
const newName = editingGroup.value.group_name
if (editingGroupIndex.value === -1) {
group.value.declares.push(editingGroup.value)
} else {
if (oldGroupName.value && oldGroupName.value !== newName) {
// Sync in members
group.value.members = group.value.members.map(m => m === oldGroupName.value ? newName : m)
// Notify parent to sync in rules
emit('rename-group', { oldName: oldGroupName.value, newName })
}
group.value.declares[editingGroupIndex.value] = editingGroup.value
}
showGroupDialog.value = false
}
</script>
<template>
<div class="flex flex-col gap-6">
<div class="flex flex-col gap-2">
<div class="flex justify-between items-center">
<div class="flex flex-col">
<label class="font-bold text-lg">{{ t('acl.group.declares') }}</label>
<small class="text-gray-500">{{ t('acl.group.help') }}</small>
</div>
<Button icon="pi pi-plus" :label="t('web.common.add')" severity="success" @click="addGroup" />
</div>
<DataTable :value="group.declares" responsiveLayout="scroll">
<Column field="group_name" :header="t('acl.group.name')" />
<Column field="group_secret" :header="t('acl.group.secret')">
<template #body="{ data }">
<Password v-model="data.group_secret" :feedback="false" toggleMask readonly plain class="w-full" />
</template>
</Column>
<Column :header="t('web.common.edit')" headerStyle="width: 8rem">
<template #body="{ index }">
<div class="flex gap-2">
<Button icon="pi pi-pencil" text rounded @click="editGroup(index)" />
<Button icon="pi pi-trash" severity="danger" text rounded @click="deleteGroup(index)" />
</div>
</template>
</Column>
</DataTable>
</div>
<div class="flex flex-col gap-2">
<label class="font-bold text-lg">{{ t('acl.group.members') }}</label>
<MultiSelect v-model="group.members" :options="props.groupNames" multiple fluid filter
:placeholder="t('acl.group.members')" />
</div>
<!-- Group Identity Dialog -->
<Dialog v-model:visible="showGroupDialog" modal :header="t('acl.groups')" :style="{ width: '400px' }">
<div v-if="editingGroup" class="flex flex-col gap-4 pt-2">
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.group.name') }}</label>
<InputText v-model="editingGroup.group_name" fluid />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.group.secret') }}</label>
<Password v-model="editingGroup.group_secret" :feedback="false" toggleMask fluid />
</div>
</div>
<template #footer>
<Button :label="t('web.common.cancel')" icon="pi pi-times" @click="showGroupDialog = false" text />
<Button :label="t('web.common.save')" icon="pi pi-save" @click="saveGroup" />
</template>
</Dialog>
</div>
</template>
@@ -0,0 +1,150 @@
<script setup lang="ts">
import { Button, Menu, Tab, TabList, TabPanel, TabPanels, Tabs } from 'primevue'
import { computed, ref } from 'vue'
import { useI18n } from 'vue-i18n'
import { Acl, AclAction, AclChainType } from '../../types/network'
import AclChainEditor from './AclChainEditor.vue'
import AclGroupEditor from './AclGroupEditor.vue'
const acl = defineModel<Acl>({ required: true })
const { t } = useI18n()
const activeTab = ref(0)
const menu = ref()
const addMenuModel = ref([
{ label: () => t('acl.inbound'), command: () => addChain(AclChainType.Inbound) },
{ label: () => t('acl.outbound'), command: () => addChain(AclChainType.Outbound) },
{ label: () => t('acl.forward'), command: () => addChain(AclChainType.Forward) },
])
function addChain(type: AclChainType) {
if (!acl.value.acl_v1) {
acl.value.acl_v1 = { chains: [], group: { declares: [], members: [] } }
}
let defaultName = ''
switch (type) {
case AclChainType.Inbound: defaultName = 'Inbound'; break;
case AclChainType.Outbound: defaultName = 'Outbound'; break;
case AclChainType.Forward: defaultName = 'Forward'; break;
}
acl.value.acl_v1.chains.push({
name: defaultName,
chain_type: type,
description: '',
enabled: true,
rules: [],
default_action: AclAction.Allow
})
activeTab.value = acl.value.acl_v1.chains.length - 1
}
function removeChain(index: number) {
if (confirm(t('acl.delete_chain_confirm'))) {
acl.value.acl_v1?.chains.splice(index, 1)
if (activeTab.value >= (acl.value.acl_v1?.chains.length || 0)) {
activeTab.value = Math.max(0, (acl.value.acl_v1?.chains.length || 0))
}
}
}
function handleRenameGroup({ oldName, newName }: { oldName: string, newName: string }) {
if (!acl.value.acl_v1) return
acl.value.acl_v1.chains.forEach(chain => {
chain.rules.forEach(rule => {
rule.source_groups = rule.source_groups.map(g => g === oldName ? newName : g)
rule.destination_groups = rule.destination_groups.map(g => g === oldName ? newName : g)
})
})
}
const groupNames = computed(() => {
return acl.value.acl_v1?.group?.declares.map(g => g.group_name) || []
})
const tabs = computed(() => {
const chains = acl.value.acl_v1?.chains || []
const result: { type: string, label: string, index: number }[] = []
if (chains.length === 0) {
result.push({ type: 'empty', label: t('acl.chains'), index: 0 })
}
else {
chains.forEach((c, index) => {
result.push({
type: 'chain',
label: c.name || `Chain ${index}`,
index
})
})
}
result.push({ type: 'groups', label: t('acl.groups'), index: result.length })
return result
})
</script>
<template>
<div class="flex flex-col gap-4">
<Tabs v-model:value="activeTab">
<div class="flex items-center border-b border-surface-200 dark:border-surface-700">
<TabList class="flex-grow min-w-0 overflow-x-auto" style="border-bottom: none;">
<Tab v-for="tab in tabs" :key="tab.type + tab.index" :value="tab.index">
<div class="flex items-center gap-2 whitespace-nowrap">
{{ tab.label }}
<Button v-if="tab.type === 'chain'" icon="pi pi-times" severity="danger" text rounded size="small"
class="w-6 h-6 p-0" @click.stop="removeChain(tab.index)" />
</div>
</Tab>
</TabList>
<div
class="flex-shrink-0 flex items-center px-2 bg-white dark:bg-gray-900 border-l border-surface-100 dark:border-surface-800">
<Button icon="pi pi-plus" text rounded size="small" class="w-8 h-8 p-0"
@click="(event) => menu.toggle(event)" />
<Menu ref="menu" :model="addMenuModel" :popup="true" />
</div>
</div>
<TabPanels>
<TabPanel v-for="tab in tabs" :key="'panel' + tab.type + tab.index" :value="tab.index">
<!-- Empty State within TabPanel -->
<div v-if="tab.type === 'empty'"
class="py-8 flex flex-col items-center justify-center border-2 border-dashed border-surface-200 rounded-lg bg-surface-50 dark:bg-surface-900 dark:border-surface-700">
<i class="pi pi-shield text-5xl mb-4 text-primary" />
<div class="text-xl font-bold mb-2">{{ t('acl.chains') }}</div>
<p class="text-surface-500 mb-8 text-center max-w-sm px-4">{{ t('acl.help') }}</p>
<div class="flex flex-wrap gap-3 justify-center">
<Button :label="t('acl.inbound')" icon="pi pi-arrow-down-left" @click="addChain(AclChainType.Inbound)" />
<Button :label="t('acl.outbound')" icon="pi pi-arrow-up-right" @click="addChain(AclChainType.Outbound)" />
<Button :label="t('acl.forward')" icon="pi pi-directions" @click="addChain(AclChainType.Forward)" />
</div>
</div>
<!-- Rule Chains -->
<div v-if="tab.type === 'chain' && acl.acl_v1 && acl.acl_v1.chains[tab.index]" class="py-4">
<AclChainEditor v-model="acl.acl_v1.chains[tab.index]" :group-names="groupNames" />
</div>
<!-- Group Management -->
<div v-if="tab.type === 'groups'" class="py-4">
<template v-if="acl.acl_v1">
<AclGroupEditor v-if="acl.acl_v1.group" v-model="acl.acl_v1.group" :group-names="groupNames"
@rename-group="handleRenameGroup" />
<div v-else class="flex justify-center p-4">
<Button :label="t('web.common.add') + ' ' + t('acl.groups')"
@click="acl.acl_v1.group = { declares: [], members: [] }" />
</div>
</template>
<div v-else class="flex justify-center p-4">
<Button :label="t('acl.enabled')"
@click="acl.acl_v1 = { chains: [], group: { declares: [], members: [] } }" />
</div>
</div>
</TabPanel>
</TabPanels>
</Tabs>
</div>
</template>
@@ -0,0 +1,150 @@
<script setup lang="ts">
import { AutoComplete, Button, Checkbox, Dialog, InputNumber, InputText, MultiSelect, Panel, SelectButton, ToggleButton } from 'primevue';
import { computed, ref } from 'vue';
import { useI18n } from 'vue-i18n';
import { AclAction, AclProtocol, AclRule } from '../../types/network';
const props = defineProps<{
visible: boolean
groupNames?: string[]
}>()
const emit = defineEmits(['update:visible', 'save'])
const rule = defineModel<AclRule>('rule', { required: true })
const { t } = useI18n()
const protocolOptions = [
{ label: () => t('acl.any'), value: AclProtocol.Any },
{ label: 'TCP', value: AclProtocol.TCP },
{ label: 'UDP', value: AclProtocol.UDP },
{ label: 'ICMP', value: AclProtocol.ICMP },
{ label: 'ICMPv6', value: AclProtocol.ICMPv6 },
]
const actionOptions = [
{ label: () => t('acl.allow'), value: AclAction.Allow },
{ label: () => t('acl.drop'), value: AclAction.Drop },
]
const showPorts = computed(() => {
return rule.value.protocol === AclProtocol.TCP || rule.value.protocol === AclProtocol.UDP || rule.value.protocol === AclProtocol.Any
})
function close() {
emit('update:visible', false)
}
function save() {
emit('save', rule.value)
close()
}
// Suggestions for IP/Port AutoComplete
const genericSuggestions = ref<string[]>([])
</script>
<template>
<Dialog :visible="visible" @update:visible="emit('update:visible', $event)" modal :header="t('acl.edit_rule')"
:style="{ width: '90vw', maxWidth: '600px' }">
<div class="flex flex-col gap-4">
<div class="flex flex-row gap-4 items-center">
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.name') }}</label>
<InputText v-model="rule.name" fluid />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.rule.enabled') }}</label>
<ToggleButton v-model="rule.enabled" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('web.common.enable')" :off-label="t('web.common.disable')" class="w-24" />
</div>
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.rule.description') }}</label>
<InputText v-model="rule.description" fluid />
</div>
<div class="flex flex-row gap-4 flex-wrap">
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.action') }}</label>
<SelectButton v-model="rule.action" :options="actionOptions" :option-label="opt => opt.label()"
option-value="value" :allow-empty="false" />
</div>
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.protocol') }}</label>
<SelectButton v-model="rule.protocol" :options="protocolOptions"
:option-label="opt => typeof opt.label === 'function' ? opt.label() : opt.label" option-value="value"
:allow-empty="false" />
</div>
</div>
<Panel :header="t('acl.rules')" toggleable>
<div class="flex flex-col gap-4">
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.rule.src_ips') }}</label>
<AutoComplete v-model="rule.source_ips" multiple fluid :suggestions="genericSuggestions"
@complete="genericSuggestions = [$event.query]"
:placeholder="t('chips_placeholder', ['10.126.126.0/24'])" />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.rule.dst_ips') }}</label>
<AutoComplete v-model="rule.destination_ips" multiple fluid :suggestions="genericSuggestions"
@complete="genericSuggestions = [$event.query]"
:placeholder="t('chips_placeholder', ['10.126.126.2/32'])" />
</div>
<div v-if="showPorts" class="flex flex-row gap-4 flex-wrap">
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.src_ports') }}</label>
<AutoComplete v-model="rule.source_ports" multiple fluid :suggestions="genericSuggestions"
@complete="genericSuggestions = [$event.query]" placeholder="e.g. 80, 1000-2000" />
</div>
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.dst_ports') }}</label>
<AutoComplete v-model="rule.ports" multiple fluid :suggestions="genericSuggestions"
@complete="genericSuggestions = [$event.query]" placeholder="e.g. 80, 1000-2000" />
</div>
</div>
</div>
</Panel>
<Panel :header="t('advanced_settings')" toggleable collapsed>
<div class="flex flex-col gap-4">
<div class="flex items-center gap-2">
<Checkbox v-model="rule.stateful" :binary="true" inputId="rule-stateful" />
<label for="rule-stateful" class="font-bold">{{ t('acl.rule.stateful') }}</label>
</div>
<div class="flex flex-row gap-4 flex-wrap">
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.rate_limit') }}</label>
<InputNumber v-model="rule.rate_limit" :min="0" placeholder="0 = no limit" fluid />
</div>
<div class="flex flex-col gap-2 grow">
<label class="font-bold">{{ t('acl.rule.burst_limit') }}</label>
<InputNumber v-model="rule.burst_limit" :min="0" placeholder="0 = no limit" fluid />
</div>
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.rule.src_groups') }}</label>
<MultiSelect v-model="rule.source_groups" :options="props.groupNames" multiple fluid filter
:placeholder="t('acl.rule.src_groups')" />
</div>
<div class="flex flex-col gap-2">
<label class="font-bold">{{ t('acl.rule.dst_groups') }}</label>
<MultiSelect v-model="rule.destination_groups" :options="props.groupNames" multiple fluid filter
:placeholder="t('acl.rule.dst_groups')" />
</div>
</div>
</Panel>
</div>
<template #footer>
<Button :label="t('web.common.cancel')" icon="pi pi-times" @click="close" text />
<Button :label="t('web.common.save')" icon="pi pi-save" @click="save" />
</template>
</Dialog>
</template>
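The port fields in the rule dialog above accept comma-separated entries such as `80` or `1000-2000` (per the placeholders). A minimal validator for that entry format — a hypothetical helper, not part of this changeset — could look like:

```typescript
// Hypothetical helper (not in this PR): validate one ACL port entry of the
// form "80" or "1000-2000", with both bounds in the 1..65535 range.
function isValidPortEntry(entry: string): boolean {
  const m = entry.trim().match(/^(\d{1,5})(?:-(\d{1,5}))?$/)
  if (!m) return false
  const lo = Number(m[1])
  const hi = m[2] !== undefined ? Number(m[2]) : lo
  return lo >= 1 && hi <= 65535 && lo <= hi
}

// Validate the whole chips array bound to the AutoComplete.
function isValidPortList(list: string[]): boolean {
  return list.every(isValidPortEntry)
}
```

Such a check could run in `save()` before emitting, so malformed entries never reach the backend.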
@@ -104,6 +104,9 @@ use_smoltcp_help: 使用用户态 TCP/IP 协议栈,避免操作系统防火墙
disable_ipv6: 禁用IPv6
disable_ipv6_help: 禁用此节点的IPv6功能,仅使用IPv4进行网络通信。
ipv6_public_addr_auto: 自动获取公网 IPv6
ipv6_public_addr_auto_help: 自动从共享了 IPv6 子网的对等节点获取一个公网 IPv6 地址。
enable_kcp_proxy: 启用 KCP 代理
enable_kcp_proxy_help: 将 TCP 流量转为 KCP 流量,降低传输延迟,提升传输速度。
@@ -157,6 +160,9 @@ disable_tcp_hole_punching_help: 禁用TCP打洞功能
disable_udp_hole_punching: 禁用UDP打洞
disable_udp_hole_punching_help: 禁用UDP打洞功能
disable_upnp: 禁用 UPnP
disable_upnp_help: 禁用符合条件监听器的运行时 UPnP/NAT-PMP 端口映射;自动端口映射默认开启。
disable_sym_hole_punching: 禁用对称NAT打洞
disable_sym_hole_punching_help: 禁用对称NAT的打洞(生日攻击),将对称NAT视为锥形NAT处理
@@ -286,9 +292,6 @@ web:
logout: 退出登录
language: 语言
change_password: 修改密码
change_password_now: 立即修改密码
default_password_warning: 当前账号仍在使用系统默认密码。为保障安全,请部署完成后立即修改密码。
password_changed_relogin: 密码已修改,请重新登录。
device:
list: 设备列表
@@ -358,16 +361,12 @@ web:
delete: 删除
edit: 编辑
refresh: 刷新
add: 添加
loading: 加载中...
error: 错误
success: 成功
warning: 警告
info: 提示
password_empty: 密码不能为空
password_min_length: 密码至少需要 8 位
password_too_weak: 密码强度不足
password_mismatch: 两次输入的密码不一致
password_strength_hint: 密码至少 8 位,且需包含大小写字母、数字、特殊字符中的至少 2 类
enable: 开启
disable: 关闭
address: 地址
@@ -430,3 +429,46 @@ config-server:
client:
not_running: 无法连接至远程客户端
retry: 重试
acl:
title: 访问控制
help: 访问控制列表,用于限制节点间的通信。
enabled: 启用 ACL
default_action: 默认动作
chains: 规则链
inbound: 入站
outbound: 出站
forward: 转发
rules: 规则
add_rule: 添加规则
edit_rule: 编辑规则
rule:
name: 规则名称
description: 描述
enabled: 启用
protocol: 协议
action: 动作
src_ips: 来源 IP
dst_ips: 目的 IP
src_ports: 来源端口
dst_ports: 目的端口
rate_limit: 速率限制 (pps)
burst_limit: 爆发限制
stateful: 状态追踪
src_groups: 来源组
dst_groups: 目的组
groups: 组管理
group:
declares: 声明组
members: 加入组
name: 组名
secret: 密钥
help: 在此处定义网络中的组身份,以便在规则中使用。
any: 任意
allow: 允许
drop: 丢弃
delete_chain_confirm: 确定要删除此规则链及其所有规则吗?
chain:
name: 名称
type: 类型
match: 匹配
@@ -103,6 +103,9 @@ use_smoltcp_help: Use a user-space TCP/IP stack to avoid issues with operating s
disable_ipv6: Disable IPv6
disable_ipv6_help: Disable IPv6 functionality for this node, only use IPv4 for network communication.
ipv6_public_addr_auto: Auto Public IPv6
ipv6_public_addr_auto_help: Auto-obtain a public IPv6 address from a peer that shares its IPv6 subnet.
enable_kcp_proxy: Enable KCP Proxy
enable_kcp_proxy_help: Convert TCP traffic to KCP traffic to reduce latency and boost transmission speed.
@@ -156,6 +159,9 @@ disable_tcp_hole_punching_help: Disable tcp hole punching
disable_udp_hole_punching: Disable UDP Hole Punching
disable_udp_hole_punching_help: Disable udp hole punching
disable_upnp: Disable UPnP
disable_upnp_help: Disable runtime UPnP/NAT-PMP port mapping for eligible listeners; automatic port mapping is enabled by default.
disable_sym_hole_punching: Disable Symmetric NAT Hole Punching
disable_sym_hole_punching_help: Disable special hole punching handling for symmetric NAT (based on birthday attack), treat symmetric NAT as cone NAT
@@ -286,9 +292,6 @@ web:
logout: Logout
language: Language
change_password: Change Password
change_password_now: Change Password Now
default_password_warning: This account is still using the default password. Change it immediately after deployment to keep your instance secure.
password_changed_relogin: Password changed. Please log in again.
device:
list: Device List
@@ -358,16 +361,12 @@ web:
delete: Delete
edit: Edit
refresh: Refresh
add: Add
loading: Loading...
error: Error
success: Success
warning: Warning
info: Info
password_empty: Password cannot be empty
password_min_length: Password must be at least 8 characters long
password_too_weak: Password is too weak
password_mismatch: Passwords do not match
password_strength_hint: Password must be at least 8 characters and include at least 2 of uppercase letters, lowercase letters, numbers, or special characters
enable: Enable
disable: Disable
address: Address
@@ -430,3 +429,46 @@ config-server:
client:
not_running: Unable to connect to remote client.
retry: Retry
acl:
title: Access Control (ACL)
help: Access control list to restrict communication between nodes.
enabled: Enable ACL
default_action: Default Action
chains: Rule Chains
inbound: Inbound
outbound: Outbound
forward: Forward
rules: Rules
add_rule: Add Rule
edit_rule: Edit Rule
rule:
name: Rule Name
description: Description
enabled: Enabled
protocol: Protocol
action: Action
src_ips: Source IPs
dst_ips: Destination IPs
src_ports: Source Ports
dst_ports: Destination Ports
rate_limit: Rate Limit (pps)
burst_limit: Burst Limit
stateful: Stateful
src_groups: Source Groups
dst_groups: Destination Groups
groups: Groups
group:
declares: Declared Groups
members: Node Memberships
name: Group Name
secret: Group Secret
help: Define group identities in the network to use them in rules.
any: Any
allow: Allow
drop: Drop
delete_chain_confirm: Are you sure you want to delete this rule chain and all its rules?
chain:
name: Name
type: Type
match: Match
@@ -14,6 +14,74 @@ export interface SecureModeConfig {
local_public_key?: string
}
export enum AclProtocol {
Unspecified = 0,
TCP = 1,
UDP = 2,
ICMP = 3,
ICMPv6 = 4,
Any = 5,
}
export enum AclAction {
Noop = 0,
Allow = 1,
Drop = 2,
}
export enum AclChainType {
UnspecifiedChain = 0,
Inbound = 1,
Outbound = 2,
Forward = 3,
}
export interface AclRule {
name: string
description: string
priority: number
enabled: boolean
protocol: AclProtocol
ports: string[]
source_ips: string[]
destination_ips: string[]
source_ports: string[]
action: AclAction
rate_limit: number
burst_limit: number
stateful: boolean
source_groups: string[]
destination_groups: string[]
}
export interface AclChain {
name: string
chain_type: AclChainType
description: string
enabled: boolean
rules: AclRule[]
default_action: AclAction
}
export interface GroupIdentity {
group_name: string
group_secret: string
}
export interface GroupInfo {
declares: GroupIdentity[]
members: string[]
}
export interface AclV1 {
chains: AclChain[]
group?: GroupInfo
}
export interface Acl {
acl_v1?: AclV1
}
export interface NetworkConfig {
instance_id: string
@@ -47,6 +115,7 @@ export interface NetworkConfig {
use_smoltcp?: boolean
disable_ipv6?: boolean
ipv6_public_addr_auto?: boolean
enable_kcp_proxy?: boolean
disable_kcp_input?: boolean
enable_quic_proxy?: boolean
@@ -64,6 +133,7 @@ export interface NetworkConfig {
disable_encryption?: boolean
disable_tcp_hole_punching?: boolean
disable_udp_hole_punching?: boolean
disable_upnp?: boolean
disable_sym_hole_punching?: boolean
enable_relay_network_whitelist?: boolean
@@ -85,6 +155,7 @@ export interface NetworkConfig {
enable_private_mode?: boolean
port_forwards: PortForwardConfig[]
acl?: Acl
}
export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
@@ -121,6 +192,7 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
use_smoltcp: false,
disable_ipv6: false,
ipv6_public_addr_auto: false,
enable_kcp_proxy: false,
disable_kcp_input: false,
enable_quic_proxy: false,
@@ -138,6 +210,7 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
disable_encryption: false,
disable_tcp_hole_punching: false,
disable_udp_hole_punching: false,
disable_upnp: false,
disable_sym_hole_punching: false,
enable_relay_network_whitelist: false,
relay_network_whitelist: [],
@@ -152,6 +225,15 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
enable_magic_dns: false,
enable_private_mode: false,
port_forwards: [],
acl: {
acl_v1: {
group: {
declares: [],
members: [],
},
chains: [],
},
},
}
}
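The `Acl` interfaces added above can be exercised as follows. This is an illustrative value only (enum declarations are copied from this changeset; the rule contents are invented for the example, not taken from the PR):

```typescript
// Enums copied from the types added in this changeset.
enum AclProtocol { Unspecified = 0, TCP = 1, UDP = 2, ICMP = 3, ICMPv6 = 4, Any = 5 }
enum AclAction { Noop = 0, Allow = 1, Drop = 2 }
enum AclChainType { UnspecifiedChain = 0, Inbound = 1, Outbound = 2, Forward = 3 }

// Illustrative Acl value: one inbound chain that allows TCP/22 from a subnet
// and drops everything else. Field values are examples, not PR content.
const acl = {
  acl_v1: {
    group: {
      declares: [{ group_name: 'ops', group_secret: 's3cret' }],
      members: ['ops'],
    },
    chains: [{
      name: 'Inbound',
      chain_type: AclChainType.Inbound,
      description: 'allow ssh from the ops subnet only',
      enabled: true,
      default_action: AclAction.Drop,
      rules: [{
        name: 'allow-ssh', description: '', priority: 0, enabled: true,
        protocol: AclProtocol.TCP, ports: ['22'],
        source_ips: ['10.126.126.0/24'], destination_ips: [], source_ports: [],
        action: AclAction.Allow, rate_limit: 0, burst_limit: 0, stateful: true,
        source_groups: [], destination_groups: [],
      }],
    }],
  },
}
```

This mirrors the shape `DEFAULT_NETWORK_CONFIG()` now seeds into `acl`, with the optional `group` present and one populated chain.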
@@ -1,80 +1,17 @@
<script lang="ts" setup>
import { computed, inject, ref } from 'vue';
import { Card, Password, Button } from 'primevue';
import { useToast } from 'primevue/usetoast';
import { useRouter } from 'vue-router';
import { useI18n } from 'vue-i18n';
import ApiClient from '../modules/api';
import { clearMustChangePasswordFlag } from '../modules/auth-status';
import { validatePasswordStrength } from '../modules/password-policy';
const dialogRef = inject<any>('dialogRef');
const api = computed<ApiClient>(() => dialogRef.value.data.api);
const password = ref('');
const confirmPassword = ref('');
const toast = useToast();
const router = useRouter();
const { t } = useI18n();
const passwordValidation = computed(() => validatePasswordStrength(password.value));
const passwordMatches = computed(() => password.value === confirmPassword.value);
const passwordErrorMessage = computed(() => {
if (password.value.length === 0 || passwordValidation.value.valid) {
return '';
}
return t(passwordValidation.value.reasonKey!);
});
const confirmPasswordErrorMessage = computed(() => {
if (confirmPassword.value.length === 0 || passwordMatches.value) {
return '';
}
return t('web.common.password_mismatch');
});
const canSubmit = computed(() => passwordValidation.value.valid && passwordMatches.value);
const changePassword = async () => {
if (!passwordValidation.value.valid) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t(passwordValidation.value.reasonKey!),
life: 3000,
});
return;
}
if (!passwordMatches.value) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t('web.common.password_mismatch'),
life: 3000,
});
return;
}
try {
await api.value.change_password(password.value);
toast.add({
severity: 'success',
summary: t('web.common.success'),
detail: t('web.main.password_changed_relogin'),
life: 3000,
});
clearMustChangePasswordFlag();
dialogRef.value.close();
router.push({ name: 'login' });
} catch (error) {
toast.add({
severity: 'error',
summary: t('web.common.error'),
detail: error instanceof Error ? error.message : String(error),
life: 3000,
});
}
await api.value.change_password(password.value);
dialogRef.value.close();
}
</script>
@@ -82,28 +19,15 @@ const changePassword = async () => {
<div class="flex items-center justify-center">
<Card class="w-full max-w-md p-6">
<template #header>
<h2 class="text-2xl font-semibold text-center">{{ t('web.main.change_password') }}
<h2 class="text-2xl font-semibold text-center">Change Password
</h2>
</template>
<template #content>
<div class="flex flex-col space-y-4">
<Password v-model="password" :placeholder="t('web.settings.new_password')" :feedback="false"
toggleMask />
<Password v-model="confirmPassword" :placeholder="t('web.settings.confirm_password')"
:feedback="false" toggleMask />
<small class="text-surface-500 dark:text-surface-400">
{{ t('web.common.password_strength_hint') }}
</small>
<small v-if="passwordErrorMessage" class="text-red-500 dark:text-red-400">
{{ passwordErrorMessage }}
</small>
<small v-if="confirmPasswordErrorMessage" class="text-red-500 dark:text-red-400">
{{ confirmPasswordErrorMessage }}
</small>
<Button @click="changePassword" :label="t('web.common.confirm')"
:disabled="!canSubmit" />
<Password v-model="password" placeholder="New Password" :feedback="false" toggleMask />
<Button @click="changePassword" label="Ok" />
</div>
</template>
</Card>
</div>
</template>
@@ -7,8 +7,6 @@ import { I18nUtils } from 'easytier-frontend-lib';
import { getInitialApiHost, cleanAndLoadApiHosts, saveApiHost } from "../modules/api-host"
import { useI18n } from 'vue-i18n'
import ApiClient, { Credential, RegisterData } from '../modules/api';
import { setMustChangePasswordFlag } from '../modules/auth-status';
import { validatePasswordStrength } from '../modules/password-policy';
const { t } = useI18n()
@@ -24,26 +22,8 @@ const username = ref('');
const password = ref('');
const registerUsername = ref('');
const registerPassword = ref('');
const registerConfirmPassword = ref('');
const captcha = ref('');
const captchaSrc = computed(() => api.value.captcha_url());
const registerPasswordValidation = computed(() => validatePasswordStrength(registerPassword.value));
const registerPasswordsMatch = computed(() => registerPassword.value === registerConfirmPassword.value);
const registerPasswordErrorMessage = computed(() => {
if (registerPassword.value.length === 0 || registerPasswordValidation.value.valid) {
return '';
}
return t(registerPasswordValidation.value.reasonKey!);
});
const registerConfirmPasswordErrorMessage = computed(() => {
if (registerConfirmPassword.value.length === 0 || registerPasswordsMatch.value) {
return '';
}
return t('web.common.password_mismatch');
});
const canRegister = computed(() => registerPasswordValidation.value.valid && registerPasswordsMatch.value);
const onSubmit = async () => {
@@ -53,7 +33,6 @@ const onSubmit = async () => {
let ret = await api.value?.login(credential);
if (ret.success) {
localStorage.setItem('apiHost', btoa(apiHost.value));
setMustChangePasswordFlag(Boolean(ret.mustChangePassword));
router.push({
name: 'dashboard',
params: { apiHost: btoa(apiHost.value) },
@@ -64,26 +43,6 @@ const onSubmit = async () => {
};
const onRegister = async () => {
if (!registerPasswordValidation.value.valid) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t(registerPasswordValidation.value.reasonKey!),
life: 3000,
});
return;
}
if (!registerPasswordsMatch.value) {
toast.add({
severity: 'warn',
summary: t('web.common.warning'),
detail: t('web.common.password_mismatch'),
life: 3000,
});
return;
}
saveApiHost(apiHost.value);
const credential: Credential = { username: registerUsername.value, password: registerPassword.value };
const registerReq: RegisterData = { credentials: credential, captcha: captcha.value };
@@ -197,23 +156,6 @@ onBeforeUnmount(() => {
}}</label>
<Password id="register-password" v-model="registerPassword" required toggleMask
:feedback="false" class="w-full" />
<small class="text-surface-500 dark:text-surface-400">
{{ t('web.common.password_strength_hint') }}
</small>
<small v-if="registerPasswordErrorMessage" class="block text-red-500 dark:text-red-400">
{{ registerPasswordErrorMessage }}
</small>
</div>
<div class="p-field">
<label for="register-confirm-password" class="block text-sm font-medium">
{{ t('web.settings.confirm_password') }}
</label>
<Password id="register-confirm-password" v-model="registerConfirmPassword" required toggleMask
:feedback="false" class="w-full" />
<small v-if="registerConfirmPasswordErrorMessage"
class="block text-red-500 dark:text-red-400">
{{ registerConfirmPasswordErrorMessage }}
</small>
</div>
<div class="p-field">
<label for="captcha" class="block text-sm font-medium">{{ t('web.login.captcha') }}</label>
@@ -221,8 +163,7 @@ onBeforeUnmount(() => {
<img :src="captchaSrc" alt="Captcha" class="mt-2 mb-2" />
</div>
<div class="flex items-center justify-between">
<Button :label="t('web.login.register')" type="submit" class="w-full"
:disabled="!canRegister" />
<Button :label="t('web.login.register')" type="submit" class="w-full" />
</div>
<div class="flex items-center justify-between">
<Button :label="t('web.login.back_to_login')" type="button" class="w-full"
@@ -1,18 +1,13 @@
<script setup lang="ts">
import { I18nUtils } from 'easytier-frontend-lib'
import { computed, onMounted, ref, onUnmounted, nextTick } from 'vue';
import { Button, Message, TieredMenu } from 'primevue';
import { Button, TieredMenu } from 'primevue';
import { useRoute, useRouter } from 'vue-router';
import { useDialog } from 'primevue/usedialog';
import ChangePassword from './ChangePassword.vue';
import Icon from '../assets/easytier.png'
import { useI18n } from 'vue-i18n'
import ApiClient from '../modules/api';
import {
clearMustChangePasswordFlag,
getMustChangePasswordFlag,
setMustChangePasswordFlag,
} from '../modules/auth-status';
const { t } = useI18n()
const route = useRoute();
@@ -20,7 +15,6 @@ const router = useRouter();
const api = computed<ApiClient | undefined>(() => {
try {
return new ApiClient(atob(route.params.apiHost as string), () => {
clearMustChangePasswordFlag();
router.push({ name: 'login' });
})
} catch (e) {
@@ -29,42 +23,25 @@ const api = computed<ApiClient | undefined>(() => {
});
const dialog = useDialog();
const mustChangePassword = ref(false);
const openChangePasswordDialog = () => {
dialog.open(ChangePassword, {
props: {
modal: true,
},
data: {
api: api.value,
}
});
};
const loadAuthStatus = async () => {
const cachedStatus = getMustChangePasswordFlag();
if (cachedStatus !== null) {
mustChangePassword.value = cachedStatus;
}
try {
const status = await api.value?.check_login_status();
mustChangePassword.value = Boolean(
status?.loggedIn && status?.mustChangePassword,
);
setMustChangePasswordFlag(mustChangePassword.value);
} catch (e) {
console.error('Failed to load auth status', e);
}
};
const userMenu = ref();
const userMenuItems = ref([
{
label: t('web.main.change_password'),
icon: 'pi pi-key',
command: openChangePasswordDialog,
command: () => {
console.log('File');
let ret = dialog.open(ChangePassword, {
props: {
modal: true,
},
data: {
api: api.value,
}
});
console.log("return", ret)
},
},
{
label: t('web.main.logout'),
@@ -75,7 +52,6 @@ const userMenuItems = ref([
} catch (e) {
console.error("logout failed", e);
}
clearMustChangePasswordFlag();
router.push({ name: 'login' });
},
},
@@ -116,7 +92,6 @@ onMounted(async () => {
// DOM
await nextTick();
document.addEventListener('click', handleClickOutside);
await loadAuthStatus();
});
onUnmounted(() => {
@@ -196,13 +171,6 @@ onUnmounted(() => {
<div class="p-4 sm:ml-64">
<div class="p-4 border-2 border-gray-200 border-dashed rounded-lg dark:border-gray-700">
<div class="grid grid-cols-1 gap-4">
<Message v-if="mustChangePassword" severity="warn" :closable="false">
<div class="flex flex-col gap-3 sm:flex-row sm:items-center sm:justify-between">
<span>{{ t('web.main.default_password_warning') }}</span>
<Button size="small" icon="pi pi-key" :label="t('web.main.change_password_now')"
@click="openChangePasswordDialog" />
</div>
</Message>
<RouterView v-slot="{ Component }">
<component :is="Component" :api="api" />
</RouterView>
@@ -2,8 +2,6 @@ import axios, { AxiosError, AxiosInstance, AxiosResponse, InternalAxiosRequestCo
import { type Api, NetworkTypes, Utils } from 'easytier-frontend-lib';
import { Md5 } from 'ts-md5';
const hashAuthPassword = (password: string) => Md5.hashStr(password);
export interface ValidateConfigResponse {
toml_config: string;
}
@@ -16,16 +14,6 @@ export interface OidcConfigResponse {
export interface LoginResponse {
success: boolean;
message: string;
mustChangePassword?: boolean;
}
export interface AuthStatusResponse {
must_change_password: boolean;
}
export interface CheckLoginStatusResponse {
loggedIn: boolean;
mustChangePassword: boolean;
}
export interface RegisterResponse {
@@ -94,6 +82,7 @@ export class ApiClient {
// Add a response interceptor
this.client.interceptors.response.use((response: AxiosResponse) => {
console.debug('Axios Response:', response);
return response.data; // Assume the server payload always lives in the data property
}, (error: any) => {
if (error.response) {
@@ -119,8 +108,9 @@ export class ApiClient {
// Register
public async register(data: RegisterData): Promise<RegisterResponse> {
try {
data.credentials.password = hashAuthPassword(data.credentials.password);
await this.client.post<RegisterResponse>('/auth/register', data);
data.credentials.password = Md5.hashStr(data.credentials.password);
const response = await this.client.post<RegisterResponse>('/auth/register', data);
console.log("register response:", response);
return { success: true, message: 'Register success', };
} catch (error) {
if (error instanceof AxiosError) {
@@ -133,13 +123,10 @@ export class ApiClient {
// Login
public async login(data: Credential): Promise<LoginResponse> {
try {
data.password = hashAuthPassword(data.password);
const response = await this.client.post<any, AuthStatusResponse>('/auth/login', data);
return {
success: true,
message: 'Login success',
mustChangePassword: response.must_change_password,
};
data.password = Md5.hashStr(data.password);
const response = await this.client.post<any>('/auth/login', data);
console.log("login response:", response);
return { success: true, message: 'Login success', };
} catch (error) {
if (error instanceof AxiosError) {
if (error.response?.status === 401) {
@@ -160,26 +147,16 @@ export class ApiClient {
}
public async change_password(new_password: string) {
await this.client.put('/auth/password', { new_password: hashAuthPassword(new_password) });
await this.client.put('/auth/password', { new_password: Md5.hashStr(new_password) });
}
public async check_login_status(): Promise<CheckLoginStatusResponse> {
public async check_login_status() {
try {
const response = await this.client.get<any, AuthStatusResponse>('/auth/check_login_status');
return {
loggedIn: true,
mustChangePassword: response.must_change_password,
};
await this.client.get('/auth/check_login_status');
return true;
} catch (error) {
if (error instanceof AxiosError && error.response?.status === 401) {
return {
loggedIn: false,
mustChangePassword: false,
};
}
throw error;
};
return false;
}
}
public async list_session() {
@@ -1,18 +0,0 @@
const MUST_CHANGE_PASSWORD_STORAGE_KEY = 'auth.mustChangePassword';
export const getMustChangePasswordFlag = (): boolean | null => {
const value = sessionStorage.getItem(MUST_CHANGE_PASSWORD_STORAGE_KEY);
if (value === null) {
return null;
}
return value === 'true';
};
export const setMustChangePasswordFlag = (value: boolean) => {
sessionStorage.setItem(MUST_CHANGE_PASSWORD_STORAGE_KEY, value ? 'true' : 'false');
};
export const clearMustChangePasswordFlag = () => {
sessionStorage.removeItem(MUST_CHANGE_PASSWORD_STORAGE_KEY);
};
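The deleted auth-status helper above is a thin tri-state wrapper over `sessionStorage`: a missing key means "unknown" (`null`), otherwise the stored `'true'`/`'false'` string maps back to a boolean. A self-contained sketch of the same behavior, using an in-memory stand-in for `sessionStorage` so it runs outside the browser (the stand-in is an assumption for illustration; the real module uses the browser global):

```typescript
// In-memory stand-in for window.sessionStorage (assumption: non-browser environment).
const store = new Map<string, string>();
const sessionStorage = {
  getItem: (k: string): string | null => (store.has(k) ? store.get(k)! : null),
  setItem: (k: string, v: string): void => { store.set(k, v); },
  removeItem: (k: string): void => { store.delete(k); },
};

const KEY = 'auth.mustChangePassword';

// Tri-state read: null when the flag was never cached, boolean otherwise.
const getFlag = (): boolean | null => {
  const value = sessionStorage.getItem(KEY);
  return value === null ? null : value === 'true';
};
const setFlag = (value: boolean): void =>
  sessionStorage.setItem(KEY, value ? 'true' : 'false');
const clearFlag = (): void => sessionStorage.removeItem(KEY);

console.log(getFlag()); // null — nothing cached yet
setFlag(true);
console.log(getFlag()); // true
clearFlag();
console.log(getFlag()); // null again
```

The `null` case is what lets the UI distinguish "we have not asked the server yet" from a cached negative answer.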
@@ -1,55 +0,0 @@
export type PasswordValidationReasonKey =
| 'web.common.password_empty'
| 'web.common.password_min_length'
| 'web.common.password_too_weak';
export interface PasswordValidationResult {
valid: boolean;
reasonKey?: PasswordValidationReasonKey;
}
const PASSWORD_MIN_LENGTH = 8;
export const countPasswordClasses = (password: string) => {
let count = 0;
if (/[a-z]/.test(password)) {
count += 1;
}
if (/[A-Z]/.test(password)) {
count += 1;
}
if (/\d/.test(password)) {
count += 1;
}
if (/[^A-Za-z0-9\s]/.test(password)) {
count += 1;
}
return count;
};
export const validatePasswordStrength = (password: string): PasswordValidationResult => {
if (password.trim().length === 0) {
return {
valid: false,
reasonKey: 'web.common.password_empty',
};
}
if (password.length < PASSWORD_MIN_LENGTH) {
return {
valid: false,
reasonKey: 'web.common.password_min_length',
};
}
if (countPasswordClasses(password) < 2) {
return {
valid: false,
reasonKey: 'web.common.password_too_weak',
};
}
return { valid: true };
};
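The deleted validator encodes three gates: non-empty, at least 8 characters, and at least two of four character classes. Reproducing the class counter and collapsing the gates into one predicate shows where typical passwords land:

```typescript
// Count which of four character classes a password uses
// (lowercase, uppercase, digit, non-alphanumeric) — logic as in the deleted module.
const countPasswordClasses = (password: string): number => {
  let count = 0;
  if (/[a-z]/.test(password)) count += 1;
  if (/[A-Z]/.test(password)) count += 1;
  if (/\d/.test(password)) count += 1;
  if (/[^A-Za-z0-9\s]/.test(password)) count += 1;
  return count;
};

// A password passes when it is non-blank, >= 8 chars, and uses >= 2 classes.
const isStrongEnough = (password: string): boolean =>
  password.trim().length > 0 &&
  password.length >= 8 &&
  countPasswordClasses(password) >= 2;

console.log(countPasswordClasses('password')); // 1 — lowercase only
console.log(isStrongEnough('password'));       // false — one class
console.log(isStrongEnough('Passw0rd'));       // true — 8 chars, three classes
```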
@@ -2,8 +2,8 @@ pub mod session;
pub mod storage;
use std::sync::{
atomic::{AtomicU32, Ordering},
Arc,
atomic::{AtomicU32, Ordering},
};
use dashmap::DashMap;
@@ -19,11 +19,11 @@ use maxminddb::geoip2;
use session::{Location, Session};
use storage::{Storage, StorageToken};
use crate::webhook::SharedWebhookConfig;
use crate::FeatureFlags;
use crate::webhook::SharedWebhookConfig;
use tokio::task::JoinSet;
use crate::db::{entity::user_running_network_configs, Db, UserIdInDb};
use crate::db::{Db, UserIdInDb, entity::user_running_network_configs};
#[derive(rust_embed::Embed)]
#[folder = "resources/"]
@@ -340,7 +340,7 @@ mod tests {
};
use sqlx::Executor;
use crate::{client_manager::ClientManager, db::Db, FeatureFlags};
use crate::{FeatureFlags, client_manager::ClientManager, db::Db};
#[tokio::test]
async fn test_client() {
@@ -365,6 +365,7 @@ mod tests {
let _c = WebClient::new(
connector,
"test",
uuid::Uuid::new_v4(),
"test",
false,
Arc::new(NetworkInstanceManager::new()),
@@ -379,19 +380,26 @@ mod tests {
let req = tokio::time::timeout(Duration::from_secs(12), async {
loop {
let session = mgr
let sessions = mgr
.client_sessions
.iter()
.next()
.map(|item| item.value().clone());
let Some(session) = session else {
.map(|item| item.value().clone())
.collect::<Vec<_>>();
if sessions.is_empty() {
tokio::time::sleep(Duration::from_millis(100)).await;
continue;
};
let mut waiter = session.data().read().await.heartbeat_waiter();
if let Ok(req) = waiter.recv().await {
}
let mut found_req = None;
for session in sessions {
if let Some(req) = session.data().read().await.req() {
found_req = Some(req);
break;
}
}
if let Some(req) = found_req {
break req;
}
tokio::time::sleep(Duration::from_millis(100)).await;
}
})
.await
(File diff suppressed because it is too large.)
@@ -1,6 +1,9 @@
//! `SeaORM` Entity, @generated by sea-orm-codegen 1.1.0
use easytier::{launcher::NetworkConfig, rpc_service::remote_client::PersistentConfig};
use easytier::{
common::config::ConfigSource, launcher::NetworkConfig,
rpc_service::remote_client::PersistentConfig,
};
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
@@ -12,10 +15,12 @@ pub struct Model {
pub user_id: i32,
#[sea_orm(column_type = "Text")]
pub device_id: String,
#[sea_orm(column_type = "Text", unique)]
#[sea_orm(column_type = "Text")]
pub network_instance_id: String,
#[sea_orm(column_type = "Text")]
pub network_config: String,
#[sea_orm(column_type = "Text")]
pub source: String,
pub disabled: bool,
pub create_time: DateTimeWithTimeZone,
pub update_time: DateTimeWithTimeZone,
@@ -48,4 +53,7 @@ impl PersistentConfig<DbErr> for Model {
fn get_network_config(&self) -> Result<NetworkConfig, DbErr> {
serde_json::from_str(&self.network_config).map_err(|e| DbErr::Json(e.to_string()))
}
fn get_network_config_source(&self) -> ConfigSource {
self.source.parse().unwrap_or(ConfigSource::User)
}
}
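`get_network_config_source` parses the stored `source` string and falls back to `ConfigSource::User` when it does not parse, so pre-migration rows with unknown values stay usable. The same lenient-parse shape, sketched in TypeScript with a hypothetical `ConfigSource` union (the `'User'`/`'Webhook'` string forms are assumptions; the diff does not show which strings the Rust `FromStr` impl accepts):

```typescript
// Hypothetical analogue of Rust's `self.source.parse().unwrap_or(ConfigSource::User)`:
// unknown or legacy source strings degrade to 'User' instead of failing.
type ConfigSource = 'User' | 'Webhook';

const parseConfigSource = (raw: string): ConfigSource | undefined =>
  raw === 'User' || raw === 'Webhook' ? raw : undefined;

const getNetworkConfigSource = (stored: string): ConfigSource =>
  parseConfigSource(stored) ?? 'User';

console.log(getNetworkConfigSource('Webhook')); // Webhook
console.log(getNetworkConfigSource('legacy'));  // User — unknown strings fall back
```

The fallback keeps reads infallible: a bad row changes behavior (default source) rather than surfacing a parse error on every load.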
@@ -11,7 +11,6 @@ pub struct Model {
#[sea_orm(unique)]
pub username: String,
pub password: String,
pub must_change_password: bool,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
@@ -3,16 +3,17 @@
pub mod entity;
use easytier::{
common::config::ConfigSource,
launcher::NetworkConfig,
rpc_service::remote_client::{ListNetworkProps, Storage},
};
use entity::user_running_network_configs;
use sea_orm::{
prelude::Expr, sea_query::OnConflict, ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait,
QueryFilter as _, Set, SqlxSqliteConnector, TransactionTrait as _,
ColumnTrait as _, DatabaseConnection, DbErr, EntityTrait, QueryFilter as _, Set,
SqlxSqliteConnector, TransactionTrait as _, prelude::Expr, sea_query::OnConflict,
};
use sea_orm_migration::MigratorTrait as _;
use sqlx::{migrate::MigrateDatabase as _, types::chrono, Sqlite, SqlitePool};
use sqlx::{Sqlite, SqlitePool, migrate::MigrateDatabase as _, types::chrono};
use uuid::Uuid;
use crate::migrator;
@@ -96,7 +97,6 @@ impl Db {
let user_active = users::ActiveModel {
username: Set(username.to_string()),
password: Set(password_hash),
must_change_password: Set(false),
..Default::default()
};
let insert_result = users::Entity::insert(user_active).exec(&txn).await?;
@@ -150,6 +150,7 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
(user_id, device_id): (UserIdInDb, Uuid),
network_inst_id: Uuid,
network_config: NetworkConfig,
source: ConfigSource,
) -> Result<(), DbErr> {
let txn = self.orm_db().begin().await?;
@@ -162,6 +163,7 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
])
.update_columns([
urnc::Column::NetworkConfig,
urnc::Column::Source,
urnc::Column::Disabled,
urnc::Column::UpdateTime,
])
@@ -173,6 +175,7 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
network_config: sea_orm::Set(
serde_json::to_string(&network_config).map_err(|e| DbErr::Json(e.to_string()))?,
),
source: sea_orm::Set(source.as_str().to_string()),
disabled: sea_orm::Set(false),
create_time: sea_orm::Set(chrono::Local::now().fixed_offset()),
update_time: sea_orm::Set(chrono::Local::now().fixed_offset()),
@@ -278,31 +281,14 @@ impl Storage<(UserIdInDb, Uuid), user_running_network_configs::Model, DbErr> for
#[cfg(test)]
mod tests {
use easytier::{proto::api::manage::NetworkConfig, rpc_service::remote_client::Storage};
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter as _};
use crate::db::{
entity::{user_running_network_configs, users},
Db, ListNetworkProps,
use easytier::{
common::config::ConfigSource,
proto::api::manage::NetworkConfig,
rpc_service::remote_client::{PersistentConfig, Storage},
};
use sea_orm::{ActiveModelTrait, ColumnTrait, EntityTrait, QueryFilter as _, Set};
#[tokio::test]
async fn created_users_default_to_not_requiring_password_change() {
let db = Db::memory_db().await;
let user = db
.create_user_and_join_users_group("created-user", "pre-hashed-password".to_string())
.await
.unwrap();
let stored = users::Entity::find_by_id(user.id)
.one(db.orm_db())
.await
.unwrap()
.unwrap();
assert!(!stored.must_change_password);
}
use crate::db::{Db, ListNetworkProps, entity::user_running_network_configs};
#[tokio::test]
async fn test_user_network_config_management() {
@@ -316,9 +302,14 @@ mod tests {
let inst_id = uuid::Uuid::new_v4();
let device_id = uuid::Uuid::new_v4();
db.insert_or_update_user_network_config((user_id, device_id), inst_id, network_config)
.await
.unwrap();
db.insert_or_update_user_network_config(
(user_id, device_id),
inst_id,
network_config,
ConfigSource::User,
)
.await
.unwrap();
let result = user_running_network_configs::Entity::find()
.filter(user_running_network_configs::Column::UserId.eq(user_id))
@@ -328,6 +319,7 @@ mod tests {
.unwrap();
println!("{:?}", result);
assert_eq!(result.network_config, network_config_json);
assert_eq!(result.get_network_config_source(), ConfigSource::User);
// overwrite the config
let network_config = NetworkConfig {
@@ -335,9 +327,14 @@ mod tests {
..Default::default()
};
let network_config_json = serde_json::to_string(&network_config).unwrap();
db.insert_or_update_user_network_config((user_id, device_id), inst_id, network_config)
.await
.unwrap();
db.insert_or_update_user_network_config(
(user_id, device_id),
inst_id,
network_config,
ConfigSource::Webhook,
)
.await
.unwrap();
let result2 = user_running_network_configs::Entity::find()
.filter(user_running_network_configs::Column::UserId.eq(user_id))
@@ -347,6 +344,11 @@ mod tests {
.unwrap();
println!("device: {}, {:?}", device_id, result2);
assert_eq!(result2.network_config, network_config_json);
assert_eq!(result2.get_network_config_source(), ConfigSource::Webhook);
assert_eq!(
result2.get_runtime_network_config_source(),
ConfigSource::Webhook
);
assert_eq!(result.create_time, result2.create_time);
assert_ne!(result.update_time, result2.update_time);
@@ -370,6 +372,45 @@ mod tests {
assert!(result3.is_none());
}
#[tokio::test]
async fn test_legacy_network_config_defaults_to_user_runtime_source() {
let db = Db::memory_db().await;
let user_id = 1;
let inst_id = uuid::Uuid::new_v4();
let device_id = uuid::Uuid::new_v4();
user_running_network_configs::ActiveModel {
user_id: Set(user_id),
device_id: Set(device_id.to_string()),
network_instance_id: Set(inst_id.to_string()),
network_config: Set(serde_json::to_string(&NetworkConfig {
network_name: Some("legacy".to_string()),
..Default::default()
})
.unwrap()),
source: Set("legacy".to_string()),
disabled: Set(false),
create_time: Set(sqlx::types::chrono::Local::now().fixed_offset()),
update_time: Set(sqlx::types::chrono::Local::now().fixed_offset()),
..Default::default()
}
.insert(db.orm_db())
.await
.unwrap();
let result = user_running_network_configs::Entity::find()
.filter(user_running_network_configs::Column::UserId.eq(user_id))
.one(db.orm_db())
.await
.unwrap()
.unwrap();
assert_eq!(result.get_network_config_source(), ConfigSource::User);
assert_eq!(
result.get_runtime_network_config_source(),
ConfigSource::User
);
}
#[tokio::test]
async fn test_user_network_config_same_instance_id_is_scoped_by_device() {
let db = Db::memory_db().await;
@@ -385,6 +426,7 @@ mod tests {
network_name: Some("cfg-1".to_string()),
..Default::default()
},
ConfigSource::User,
)
.await
.unwrap();
@@ -395,6 +437,7 @@ mod tests {
network_name: Some("cfg-2".to_string()),
..Default::default()
},
ConfigSource::User,
)
.await
.unwrap();
@@ -16,8 +16,8 @@ use easytier::{
log,
network::{local_ipv4, local_ipv6},
},
tunnel::{tcp::TcpTunnelListener, udp::UdpTunnelListener, TunnelListener},
utils::setup_panic_handler,
tunnel::{TunnelListener, tcp::TcpTunnelListener, udp::UdpTunnelListener},
utils::panic::setup_panic_handler,
};
use easytier::tunnel::IpScheme;
@@ -1,129 +0,0 @@
use sea_orm_migration::prelude::*;
pub struct Migration;
const DEFAULT_USER_PASSWORD_HASH: &str =
"$argon2i$v=19$m=16,t=2,p=1$aGVyRDBrcnRycnlaMDhkbw$449SEcv/qXf+0fnI9+fYVQ";
const DEFAULT_ADMIN_PASSWORD_HASH: &str =
"$argon2i$v=19$m=16,t=2,p=1$bW5idXl0cmY$61n+JxL4r3dwLPAEDlDdtg";
#[derive(DeriveIden)]
enum Users {
Table,
Username,
Password,
MustChangePassword,
}
impl MigrationName for Migration {
fn name(&self) -> &str {
"m20260405_000003_add_must_change_password"
}
}
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Users::Table)
.add_column(
ColumnDef::new(Users::MustChangePassword)
.boolean()
.not_null()
.default(false),
)
.to_owned(),
)
.await?;
manager
.exec_stmt(
Query::update()
.table(Users::Table)
.value(Users::MustChangePassword, true)
.cond_where(any![
Expr::col(Users::Username)
.eq("admin")
.and(Expr::col(Users::Password).eq(DEFAULT_ADMIN_PASSWORD_HASH)),
Expr::col(Users::Username)
.eq("user")
.and(Expr::col(Users::Password).eq(DEFAULT_USER_PASSWORD_HASH)),
])
.to_owned(),
)
.await?;
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Users::Table)
.drop_column(Users::MustChangePassword)
.to_owned(),
)
.await?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter as _, SqlxSqliteConnector};
use sea_orm_migration::prelude::SchemaManager;
use sqlx::sqlite::SqlitePoolOptions;
use super::{Migration, MigrationTrait, DEFAULT_USER_PASSWORD_HASH};
use crate::db::entity::users;
async fn find_user(db: &sea_orm::DatabaseConnection, username: &str) -> users::Model {
users::Entity::find()
.filter(users::Column::Username.eq(username))
.one(db)
.await
.unwrap()
.unwrap()
}
#[tokio::test]
async fn migration_only_marks_seeded_accounts_still_using_default_passwords() {
let pool = SqlitePoolOptions::new()
.max_connections(1)
.connect("sqlite::memory:")
.await
.unwrap();
sqlx::query(
"CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT NOT NULL UNIQUE,
password TEXT NOT NULL
)",
)
.execute(&pool)
.await
.unwrap();
let changed_admin_password = password_auth::generate_hash("already-changed");
sqlx::query("INSERT INTO users (username, password) VALUES (?, ?), (?, ?)")
.bind("admin")
.bind(changed_admin_password)
.bind("user")
.bind(DEFAULT_USER_PASSWORD_HASH)
.execute(&pool)
.await
.unwrap();
let db = SqlxSqliteConnector::from_sqlx_sqlite_pool(pool);
Migration.up(&SchemaManager::new(&db)).await.unwrap();
assert!(!find_user(&db, "admin").await.must_change_password);
assert!(find_user(&db, "user").await.must_change_password);
}
}

(Some files were not shown because too many files have changed in this diff.)