Compare commits


27 Commits

Author SHA1 Message Date
Luna Yao 8428a89d2d refactor: introduce HedgeExt for task hedging; rewrite NatDstQuicConnector (#2229) 2026-05-12 20:26:16 +08:00
韩嘉乐 513695297c [OHOS] feat: Enhance Rust kernel with config management and routing improvements (#2227)
* [OHOS.with ai] Move config management / config sharing / route aggregation / instance-state parsing down into the Rust core, consolidating responsibilities and improving performance (#2209)

* feat: add ohrs config store and startup error logging

* feat: full ability core for ohos

* feat: full ability core for ohos

* feat: clean code

---------

Co-authored-by: FrankHan <frankhan@FrankHans-Mac-mini.local>

* fix: add missing files

* fix: avoid starting the TUN device twice when updating routes, and adjust logging

* fix: rustfmt

* fix: adapt CIDR matching to ignore the /32 suffix on routes

* fix: correct an Option adaptation error

* fix: rustfmt

* fix: rustfmt

---------

Co-authored-by: FrankHan <frankhan@FrankHans-Mac-mini.local>
2026-05-10 14:15:31 +08:00
21paradox bfbfa2ef8d fix: reuse conn by dst_peer_id so every peer uses only one QUIC conn, to fix the NAT connection-loss problem (#2216) 2026-05-09 22:33:44 +08:00
KKRainbow 8e1d079142 feat: add Windows UDP broadcast relay (#2222)
This may help games find rooms in the virtual network.

- add opt-in Windows UDP broadcast relay config flag and CLI/env plumbing
- capture local UDP broadcasts with Windows raw sockets, normalize packets, and inject them via PeerManager
2026-05-09 09:56:31 +08:00
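The relay described above must first decide which captured packets are actually UDP broadcasts before injecting them through PeerManager. A minimal, self-contained sketch of that destination check (function and names are illustrative, not EasyTier's actual relay code):

```rust
use std::net::Ipv4Addr;

/// Returns true if `dst` is the limited broadcast address (255.255.255.255)
/// or the directed broadcast address of the subnet `network`/`prefix_len`.
fn is_udp_broadcast_dst(dst: Ipv4Addr, network: Ipv4Addr, prefix_len: u8) -> bool {
    if dst == Ipv4Addr::BROADCAST {
        return true; // limited broadcast
    }
    // host mask: all bits below the prefix set to 1 (prefix_len == 32 -> 0)
    let host_mask = u32::MAX.checked_shr(prefix_len as u32).unwrap_or(0);
    let directed = u32::from(network) | host_mask;
    u32::from(dst) == directed
}

fn main() {
    let net = Ipv4Addr::new(10, 126, 126, 0);
    assert!(is_udp_broadcast_dst(Ipv4Addr::new(255, 255, 255, 255), net, 24));
    assert!(is_udp_broadcast_dst(Ipv4Addr::new(10, 126, 126, 255), net, 24));
    assert!(!is_udp_broadcast_dst(Ipv4Addr::new(10, 126, 126, 1), net, 24));
}
```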
fanyang 55f15bb6f0 fix(connector): classify manual reconnect timeouts by stage (#2062) 2026-05-08 22:08:51 +08:00
Luna Yao 96fd39649a revert UPX version to 4.2.4 in core.yml (#2221) 2026-05-07 18:49:40 +08:00
KKRainbow 74fc8b300d chore: bump version to 2.6.4 (#2219) 2026-05-07 13:48:51 +08:00
KKRainbow baeee40b79 fix machine uid and easytier-web panic (#2215)
1. fix(web-client): persist and migrate machine id
2. fix panic when an easytier-web session receives a malformed packet
2026-05-07 00:57:42 +08:00
fanyang 4342c8d7a2 fix: add missing CLI help text (#2213) 2026-05-05 17:05:34 +08:00
KKRainbow 1178b312fa fix foreign network entry leak (#2211) 2026-05-05 11:01:44 +08:00
fanyang 362aa7a9cd fix: allow omitted ACL config fields (#2206) 2026-05-04 00:47:24 +08:00
KKRainbow 12a7b5a5c5 fix: scope peer center server data to instance (#2198)
Stop sharing PeerCenterServer state through a process-global map so local and foreign-network services cannot mix peer-center data when peer ids overlap.
2026-05-02 01:43:01 +08:00
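The fix above amounts to widening the map key from a bare peer id to an (instance, peer id) pair. A toy illustration of the keying change (types and names here are hypothetical, not the real PeerCenterServer structures):

```rust
use std::collections::HashMap;

type PeerId = u32;
type InstanceId = u64;

/// Peer-center data keyed by (instance, peer) instead of a process-global
/// map keyed by peer id alone, so overlapping peer ids from the local and
/// foreign-network services can no longer collide.
#[derive(Default)]
struct PeerCenterStore {
    data: HashMap<(InstanceId, PeerId), String>,
}

impl PeerCenterStore {
    fn insert(&mut self, inst: InstanceId, peer: PeerId, v: &str) {
        self.data.insert((inst, peer), v.to_string());
    }
    fn get(&self, inst: InstanceId, peer: PeerId) -> Option<&String> {
        self.data.get(&(inst, peer))
    }
}

fn main() {
    let mut store = PeerCenterStore::default();
    // the same peer id 42 in two instances no longer mixes
    store.insert(1, 42, "local-network");
    store.insert(2, 42, "foreign-network");
    assert_eq!(store.get(1, 42).map(String::as_str), Some("local-network"));
    assert_eq!(store.get(2, 42).map(String::as_str), Some("foreign-network"));
}
```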
fanyang 4eba9b07b6 fix(web-client): keep retrying unreachable config server (#2140)
Defer config-server connector creation into the web client retry loop so
service startup does not fail when network or DNS is unavailable.
2026-05-02 00:09:48 +08:00
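The shape of this fix — building the connector inside the retry loop so a transient DNS or network failure is retried instead of aborting service startup — can be sketched as follows (names are illustrative, not EasyTier's API):

```rust
use std::{thread, time::Duration};

/// Call `make_connector` inside the retry loop; each failed attempt backs
/// off briefly and tries again instead of failing startup outright.
fn connect_with_retry<T>(
    mut make_connector: impl FnMut() -> Result<T, String>,
    max_attempts: usize,
) -> Result<T, String> {
    let mut last_err = String::new();
    for attempt in 0..max_attempts {
        match make_connector() {
            Ok(conn) => return Ok(conn),
            Err(e) => {
                last_err = e;
                thread::sleep(Duration::from_millis(10 * attempt as u64));
            }
        }
    }
    Err(last_err)
}

fn main() {
    let mut calls = 0;
    // fails twice (e.g. DNS unavailable), then succeeds on the third try
    let conn = connect_with_retry(
        || {
            calls += 1;
            if calls < 3 { Err("dns unavailable".into()) } else { Ok("connected") }
        },
        5,
    );
    assert_eq!(conn, Ok("connected"));
    assert_eq!(calls, 3);
}
```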
KKRainbow 1b48029bdc fix: clean stale foreign network state (#2197)
- clear foreign-network traffic metric peer caches on peer removal and network cleanup
- release reserved foreign-network peer IDs on handshake/add-peer error paths
- avoid creating no-op foreign-network token buckets when limits are unlimited
- shrink relay/session maps after cleanup and remove unused peer-center global data entries
2026-05-01 23:30:51 +08:00
KKRainbow 3542e944cb fix(quic): prune stopped endpoints from pool (#2195)
* remove wss port 0 compatibility code
* fix(quic): prune stopped endpoints from pool
2026-05-01 18:51:39 +08:00
KKRainbow 852d1c9e14 feat(gui): add UPnP and public IPv6 advanced options (#2194)
Expose disable-upnp and ipv6_public_addr_auto in the shared web/GUI config editor;
bump release metadata to 2.6.3.
2026-05-01 13:45:19 +08:00
KKRainbow 4958394469 fix: protect self peer during credential refresh and allow need-p2p peers through public server (#2192)
* fix: protect self peer during credential refresh

* fix: allow need-p2p peers through public server
2026-05-01 06:59:30 +08:00
KKRainbow 41b6d65604 fix faketcp filter on windows (#2190) 2026-04-30 23:55:56 +08:00
KKRainbow aae30894dd fix: keep file logger disabled by default (#2189) 2026-04-30 21:42:30 +08:00
fanyang 81d169abfc fix: fall back when CLI manage service is unavailable (#2185) 2026-04-30 19:50:50 +08:00
Luna Yao 9c6c210e89 fix: disable SO_EXCLUSIVEADDRUSE on Windows (#2180) 2026-04-30 19:48:54 +08:00
Mg Pig d1c6dcf754 fix: prevent URL input layout flicker with container queries (#2186) 2026-04-30 19:45:01 +08:00
KKRainbow 97c8c4f55a feat: support disabling relay data forwarding (#2188)
- add a disable_relay_data runtime/config patch option
- reuse the existing avoid_relay_data feature flag when relay data forwarding is disabled
2026-04-30 19:44:40 +08:00
KKRainbow ed8df2d58f prevent EasyTier-managed IPv6 from being used as underlay connections (#2181)
When a node has public IPv6 addresses allocated by EasyTier, those addresses
are installed on the host's network interfaces. The system would then pick
them up as candidate source/destination addresses for underlay connections
(direct peer, UDP hole punch, bind addresses), causing overlay traffic to
loop back into the overlay itself.

Add a central predicate is_ip_easytier_managed_ipv6() and apply it at every
point where IPv6 addresses are selected for underlay use:
- Filter managed IPv6 from DNS-resolved connector addresses, including a
  UDP socket getsockname check to detect whether the OS would route through
  the overlay to reach a destination
- Skip managed IPv6 in bind address selection and STUN candidate filtering
- Strip managed IPv6 from GetIpListResponse RPC so peers never learn them
- Pass pre-resolved addresses to tunnel connectors to avoid re-resolution

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 12:17:22 +08:00
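A minimal sketch of what a predicate like is_ip_easytier_managed_ipv6() does, assuming managed addresses are identified by overlay-allocated prefixes (the prefix below is an illustrative placeholder, not EasyTier's real allocation):

```rust
use std::net::Ipv6Addr;

/// An address is "managed" when it falls inside one of the overlay-allocated
/// prefixes; such addresses must be excluded from underlay candidate lists.
fn is_managed_ipv6(addr: Ipv6Addr, managed: &[(Ipv6Addr, u8)]) -> bool {
    managed.iter().any(|&(net, plen)| {
        let mask = if plen == 0 { 0 } else { u128::MAX << (128 - plen) };
        (u128::from(addr) & mask) == (u128::from(net) & mask)
    })
}

fn main() {
    let managed = [("fd00:abcd::".parse::<Ipv6Addr>().unwrap(), 64u8)];
    let candidates: Vec<Ipv6Addr> = vec![
        "fd00:abcd::1".parse().unwrap(), // overlay-managed: must be dropped
        "2001:db8::1".parse().unwrap(),  // regular address: kept
    ];
    // filter managed IPv6 out of the underlay candidate list
    let underlay: Vec<_> = candidates
        .into_iter()
        .filter(|a| !is_managed_ipv6(*a, &managed))
        .collect();
    assert_eq!(underlay, vec!["2001:db8::1".parse::<Ipv6Addr>().unwrap()]);
}
```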
lurenjia f66010e6f9 fix: preserve URL type in matches_scheme (#2179)
Avoid resolving Url::as_ref() to the full URL string before TunnelScheme
conversion. Add regression coverage for owned/borrowed URLs and the UDP
IPv6 hole-punch branch condition.

Co-authored-by: KKRainbow <443152178@qq.com>
2026-04-28 23:23:41 +08:00
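The underlying pitfall: deriving the scheme from the full URL string rather than from the prefix before "://". A simplified stand-in (not the real Url/TunnelScheme types) showing scheme matching that behaves the same for owned and borrowed URLs:

```rust
/// Extract the scheme from the prefix before "://", regardless of whether
/// the URL is an owned String or a borrowed &str.
fn scheme_of(url: impl AsRef<str>) -> Option<String> {
    url.as_ref().split_once("://").map(|(s, _)| s.to_ascii_lowercase())
}

fn matches_scheme(url: impl AsRef<str>, want: &str) -> bool {
    scheme_of(url).as_deref() == Some(want)
}

fn main() {
    let owned = String::from("udp://[fd00::1]:11010");
    let borrowed: &str = "tcp://1.2.3.4:11010";
    assert!(matches_scheme(&owned, "udp"));   // owned URL
    assert!(matches_scheme(borrowed, "tcp")); // borrowed URL
    // comparing the whole URL string to a scheme never matches:
    assert_ne!(owned, "udp");
}
```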
Luna Yao d5c4700d32 utils: replace defer, ContextGuard, DetachableTask with guarden crate (#2163) 2026-04-27 18:29:46 +08:00
KKRainbow 969ecfc4ca fix(gui): refresh service after core version upgrade (#2172) 2026-04-27 15:54:52 +08:00
125 changed files with 8724 additions and 1867 deletions
+4 -1
@@ -157,6 +157,9 @@ jobs:
       - uses: mlugg/setup-zig@v2
         if: ${{ contains(matrix.OS, 'ubuntu') }}
+        with:
+          version: 0.16.0
+          use-cache: true
       - uses: taiki-e/install-action@v2
         if: ${{ contains(matrix.OS, 'ubuntu') }}
@@ -227,7 +230,7 @@ jobs:
             *) UPX_ARCH="amd64" ;;
           esac
-          UPX_VERSION=5.1.1
+          UPX_VERSION=4.2.4
           UPX_PKG="upx-${UPX_VERSION}-${UPX_ARCH}_linux"
           curl -L "https://github.com/upx/upx/releases/download/v${UPX_VERSION}/${UPX_PKG}.tar.xz" -s | tar xJvf -
           cp "${UPX_PKG}/upx" .
+1 -1
@@ -11,7 +11,7 @@ on:
       image_tag:
         description: 'Tag for this image build'
         type: string
-        default: 'v2.6.2'
+        default: 'v2.6.4'
         required: true
       mark_latest:
         description: 'Mark this image as latest'
+1 -1
@@ -18,7 +18,7 @@ on:
       version:
         description: 'Version for this release'
         type: string
-        default: 'v2.6.2'
+        default: 'v2.6.4'
         required: true
       make_latest:
         description: 'Mark this release as latest'
Generated
+46 -25
@@ -2229,7 +2229,7 @@ checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555"
 
 [[package]]
 name = "easytier"
-version = "2.6.2"
+version = "2.6.4"
 dependencies = [
  "aes-gcm",
  "anyhow",
@@ -2273,6 +2273,7 @@ dependencies = [
  "gethostname 0.5.0",
  "git-version",
  "globwalk",
+ "guarden",
  "hickory-client",
  "hickory-proto",
  "hickory-resolver",
@@ -2290,6 +2291,7 @@ dependencies = [
  "machine-uid",
  "maplit",
  "mimalloc",
+ "moka",
  "multimap",
  "natpmp",
  "netlink-packet-core",
@@ -2404,7 +2406,7 @@ dependencies = [
 
 [[package]]
 name = "easytier-gui"
-version = "2.6.2"
+version = "2.6.4"
 dependencies = [
  "anyhow",
  "async-trait",
@@ -2456,6 +2458,7 @@ dependencies = [
  "dashmap",
  "easytier",
  "futures",
+ "guarden",
  "jsonwebtoken",
  "mimalloc",
  "mockall",
@@ -2484,7 +2487,7 @@ dependencies = [
 
 [[package]]
 name = "easytier-web"
-version = "2.6.2"
+version = "2.6.4"
 dependencies = [
  "anyhow",
  "async-trait",
@@ -3590,6 +3593,28 @@ dependencies = [
  "syn 2.0.117",
 ]
 
+[[package]]
+name = "guarden"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ca87812d87fa82896df1adfb5c111cdeaae3edb6da028f5df002dcbd7df71454"
+dependencies = [
+ "futures",
+ "guarden-macros",
+ "tokio",
+]
+
+[[package]]
+name = "guarden-macros"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8b42f4b8de91cbd793ce8e6cf8d4821ef02d2d5b4468e0a55a36c65c5581de53"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.117",
+]
+
 [[package]]
 name = "h2"
 version = "0.4.7"
@@ -3705,12 +3730,6 @@ version = "0.5.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
-
-[[package]]
-name = "hermit-abi"
-version = "0.3.9"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024"
 
 [[package]]
 name = "hermit-abi"
 version = "0.5.2"
@@ -4026,7 +4045,7 @@ dependencies = [
  "libc",
  "percent-encoding",
  "pin-project-lite",
- "socket2 0.6.1",
+ "socket2 0.5.10",
  "tokio",
  "tower-service",
  "tracing",
@@ -4695,9 +4714,9 @@ dependencies = [
 
 [[package]]
 name = "libc"
-version = "0.2.172"
+version = "0.2.186"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
+checksum = "68ab91017fe16c622486840e4c83c9a37afeff978bd239b5293d61ece587de66"
 
 [[package]]
 name = "libdbus-sys"
@@ -5043,14 +5062,13 @@ dependencies = [
 
 [[package]]
 name = "mio"
-version = "1.0.2"
+version = "1.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "80e04d1dcff3aae0704555fe5fee3bcfaf3d1fdf8a7e521d5b9d2b42acb52cec"
+checksum = "50b7e5b27aa02a74bac8c3f23f448f8d87ff11f92d3aac1a6ed369ee08cc56c1"
 dependencies = [
- "hermit-abi 0.3.9",
  "libc",
  "wasi 0.11.0+wasi-snapshot-preview1",
- "windows-sys 0.52.0",
+ "windows-sys 0.61.2",
 ]
 
 [[package]]
@@ -5086,9 +5104,12 @@ version = "0.12.10"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a9321642ca94a4282428e6ea4af8cc2ca4eac48ac7a6a4ea8f33f76d0ce70926"
 dependencies = [
+ "async-lock",
  "crossbeam-channel",
  "crossbeam-epoch",
  "crossbeam-utils",
+ "event-listener",
+ "futures-util",
  "loom",
  "parking_lot",
  "portable-atomic",
@@ -6551,7 +6572,7 @@ checksum = "5d0e4f59085d47d8241c88ead0f274e8a0cb551f3625263c05eb8dd897c34218"
 dependencies = [
  "cfg-if",
  "concurrent-queue",
- "hermit-abi 0.5.2",
+ "hermit-abi",
  "pin-project-lite",
  "rustix 1.0.7",
  "windows-sys 0.61.2",
@@ -8650,12 +8671,12 @@ dependencies = [
 
 [[package]]
 name = "socket2"
-version = "0.6.1"
+version = "0.6.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "17129e116933cf371d018bb80ae557e889637989d8638274fb25622827b03881"
+checksum = "3a766e1110788c36f4fa1c2b71b387a7815aa65f88ce0229841826633d93723e"
 dependencies = [
  "libc",
- "windows-sys 0.60.2",
+ "windows-sys 0.61.2",
 ]
 
 [[package]]
@@ -9774,9 +9795,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
 
 [[package]]
 name = "tokio"
-version = "1.48.0"
+version = "1.52.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ff360e02eab121e0bc37a2d3b4d4dc622e6eda3a8e5253d5435ecf5bd4c68408"
+checksum = "b67dee974fe86fd92cc45b7a95fdd2f99a36a6d7b0d431a231178d3d670bbcc6"
 dependencies = [
  "bytes",
  "libc",
@@ -9784,7 +9805,7 @@ dependencies = [
  "parking_lot",
  "pin-project-lite",
  "signal-hook-registry",
- "socket2 0.6.1",
+ "socket2 0.6.3",
  "tokio-macros",
  "tracing",
  "windows-sys 0.61.2",
@@ -9792,9 +9813,9 @@ dependencies = [
 
 [[package]]
 name = "tokio-macros"
-version = "2.6.0"
+version = "2.7.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "af407857209536a95c8e56f8231ef2c2e2aff839b22e07a1ffcbc617e9db9fa5"
+checksum = "385a6cb71ab9ab790c5fe8d67f1645e6c450a7ce006a33de03daa956cf70a496"
 dependencies = [
  "proc-macro2",
  "quote",
+1 -1
@@ -1,6 +1,6 @@
 id=easytier_magisk
 name=EasyTier_Magisk
-version=v2.6.2
+version=v2.6.4
 versionCode=1
 author=EasyTier
 description=easytier magisk module @EasyTier(https://github.com/EasyTier/EasyTier)
+544 -132
File diff suppressed because it is too large
+10
@@ -7,6 +7,10 @@ edition = "2024"
 crate-type=["cdylib"]
 
 [dependencies]
+async-trait = "0.1"
+base64 = "0.22"
+flate2 = "1.1"
+gethostname = "1.1"
 ohos-hilog-binding = {version = "*", features = ["redirect"]}
 easytier = { path = "../../easytier" }
 napi-derive-ohos = "1.1"
@@ -26,10 +30,16 @@ napi-ohos = { version = "1.1", default-features = false, features = [
     "web_stream",
 ] }
 once_cell = "1.21.3"
+ipnet = "2.10"
+serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0.125"
+prost-reflect = { version = "0.14.5", default-features = false, features = ["derive"] }
+rusqlite = { version = "0.32", features = ["bundled"] }
 tracing-subscriber = "0.3.19"
 tracing-core = "0.1.33"
 tracing = "0.1.41"
+tokio = { version = "1", features = ["rt-multi-thread", "sync", "time"] }
+url = "2.5"
 uuid = { version = "1.5.0", features = [
     "v4",
     "fast-rng",
@@ -0,0 +1,4 @@
pub(crate) mod repository;
pub(crate) mod services;
pub(crate) mod storage;
pub(crate) mod types;
@@ -0,0 +1,13 @@
#[path = "../../config_repo/field_store.rs"]
mod field_store;
#[path = "../../config_repo/import_export.rs"]
mod import_export;
#[path = "../../config_repo/legacy_migration.rs"]
mod legacy_migration;
#[path = "../../config_repo/validation.rs"]
mod validation;
#[path = "../../config_repo.rs"]
mod repo;
pub use repo::*;
@@ -0,0 +1,2 @@
pub(crate) mod schema_service;
pub(crate) mod share_link_service;
@@ -0,0 +1,414 @@
use easytier::proto::ALL_DESCRIPTOR_BYTES;
use napi_derive_ohos::napi;
use once_cell::sync::Lazy;
use prost_reflect::{Cardinality, DescriptorPool, FieldDescriptor, Kind, MessageDescriptor};
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct FieldOption {
pub label: String,
pub value: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct ValidationRule {
pub rule_type: String,
pub arg: String,
pub message: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct NetworkConfigSchema {
pub node_kind: String,
pub name: String,
pub field_number: i32,
pub type_name: Option<String>,
pub semantic_type: Option<String>,
pub value_kind: String,
pub is_list: bool,
pub required: bool,
pub default_value_text: Option<String>,
pub enum_options: Vec<FieldOption>,
pub validations: Vec<ValidationRule>,
pub children: Vec<NetworkConfigSchema>,
pub definitions: Vec<NetworkConfigSchema>,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct ConfigFieldMapping {
pub field_name: String,
pub field_number: i32,
}
static DESCRIPTOR_POOL: Lazy<DescriptorPool> = Lazy::new(|| {
DescriptorPool::decode(ALL_DESCRIPTOR_BYTES)
.expect("easytier descriptor pool should decode from embedded protobuf descriptors")
});
const NETWORK_CONFIG_MESSAGE_NAME: &str = "api.manage.NetworkConfig";
fn descriptor_pool() -> &'static DescriptorPool {
&DESCRIPTOR_POOL
}
fn network_config_descriptor() -> MessageDescriptor {
descriptor_pool()
.get_message_by_name(NETWORK_CONFIG_MESSAGE_NAME)
.expect("api.manage.NetworkConfig descriptor should exist")
}
fn field_default_value_text(field: &FieldDescriptor) -> Option<String> {
if field.is_list() || field.is_map() {
return Some("[]".to_string());
}
match field.kind() {
Kind::Bool => Some("false".to_string()),
Kind::String => Some("\"\"".to_string()),
Kind::Bytes => Some("\"\"".to_string()),
Kind::Int32
| Kind::Sint32
| Kind::Sfixed32
| Kind::Int64
| Kind::Sint64
| Kind::Sfixed64
| Kind::Uint32
| Kind::Fixed32
| Kind::Uint64
| Kind::Fixed64
| Kind::Float
| Kind::Double => Some("0".to_string()),
Kind::Enum(enum_desc) => enum_desc
.get_value(0)
.map(|value| value.number().to_string()),
Kind::Message(_) => None,
}
}
fn field_type_name(field: &FieldDescriptor) -> Option<String> {
match field.kind() {
Kind::Enum(enum_desc) => Some(enum_desc.full_name().to_string()),
Kind::Message(message_desc) => Some(message_desc.full_name().to_string()),
_ => None,
}
}
fn field_semantic_type(field: &FieldDescriptor) -> Option<String> {
match field.name() {
"virtual_ipv4" => Some("cidr_ip".to_string()),
"network_length" => Some("cidr_mask".to_string()),
"peer_urls" => Some("peer[]".to_string()),
"proxy_cidrs" => Some("cidr[]".to_string()),
"listener_urls" => Some("listener[]".to_string()),
"routes" => Some("route[]".to_string()),
"exit_nodes" => Some("ip[]".to_string()),
"relay_network_whitelist" => Some("network_name[]".to_string()),
"mapped_listeners" => Some("mapped_listener[]".to_string()),
"port_forwards" => Some("port_forward[]".to_string()),
_ => None,
}
}
fn enum_options(kind: Kind) -> Vec<FieldOption> {
match kind {
Kind::Enum(enum_desc) => enum_desc
.values()
.map(|value| FieldOption {
label: value.name().to_string(),
value: value.number().to_string(),
})
.collect(),
_ => Vec::new(),
}
}
fn should_expose_field(field: &FieldDescriptor) -> bool {
match field.containing_oneof() {
Some(_) => field
.field_descriptor_proto()
.proto3_optional
.unwrap_or(false),
None => true,
}
}
fn build_validations(field: &FieldDescriptor) -> Vec<ValidationRule> {
if field.cardinality() == Cardinality::Required {
return vec![ValidationRule {
rule_type: "required".to_string(),
arg: String::new(),
message: format!("{} is required", field.name()),
}];
}
Vec::new()
}
fn kind_to_value_kind(field: &FieldDescriptor) -> String {
if field.is_map() {
return "object".to_string();
}
match field.kind() {
Kind::Bool => "boolean".to_string(),
Kind::String | Kind::Bytes => "string".to_string(),
Kind::Int32
| Kind::Sint32
| Kind::Sfixed32
| Kind::Int64
| Kind::Sint64
| Kind::Sfixed64
| Kind::Uint32
| Kind::Fixed32
| Kind::Uint64
| Kind::Fixed64
| Kind::Float
| Kind::Double => "number".to_string(),
Kind::Enum(_) => "enum".to_string(),
Kind::Message(_) => "object".to_string(),
}
}
fn build_node(
node_kind: &str,
name: String,
field_number: i32,
type_name: Option<String>,
semantic_type: Option<String>,
value_kind: String,
is_list: bool,
required: bool,
default_value_text: Option<String>,
enum_options: Vec<FieldOption>,
validations: Vec<ValidationRule>,
children: Vec<NetworkConfigSchema>,
definitions: Vec<NetworkConfigSchema>,
) -> NetworkConfigSchema {
NetworkConfigSchema {
node_kind: node_kind.to_string(),
name,
field_number,
type_name,
semantic_type,
value_kind,
is_list,
required,
default_value_text,
enum_options,
validations,
children,
definitions,
}
}
fn build_map_entry_node(message_desc: &MessageDescriptor) -> NetworkConfigSchema {
let key_field = message_desc.map_entry_key_field();
let value_field = message_desc.map_entry_value_field();
build_node(
"object",
message_desc.name().to_string(),
0,
Some(message_desc.full_name().to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
vec![
build_schema_field_node(&key_field),
build_schema_field_node(&value_field),
],
Vec::new(),
)
}
fn field_children(field: &FieldDescriptor) -> Vec<NetworkConfigSchema> {
if field.is_map() {
if let Kind::Message(message_desc) = field.kind() {
return vec![build_map_entry_node(&message_desc)];
}
}
match field.kind() {
Kind::Message(message_desc) => build_message_children(&message_desc),
_ => Vec::new(),
}
}
fn build_message_children(message_desc: &MessageDescriptor) -> Vec<NetworkConfigSchema> {
message_desc
.fields()
.filter(should_expose_field)
.map(|field| build_schema_field_node(&field))
.collect()
}
fn build_schema_field_node(field: &FieldDescriptor) -> NetworkConfigSchema {
build_node(
"field",
field.name().to_string(),
field.number() as i32,
field_type_name(field),
field_semantic_type(field),
kind_to_value_kind(field),
field.is_list() || field.is_map(),
field.cardinality() == Cardinality::Required,
field_default_value_text(field),
enum_options(field.kind()),
build_validations(field),
field_children(field),
Vec::new(),
)
}
fn collect_definitions() -> Vec<NetworkConfigSchema> {
let mut definitions = Vec::new();
for message_desc in descriptor_pool().all_messages() {
let full_name = message_desc.full_name();
if full_name == NETWORK_CONFIG_MESSAGE_NAME || message_desc.is_map_entry() {
continue;
}
definitions.push(build_node(
"object",
full_name.to_string(),
0,
Some(full_name.to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
build_message_children(&message_desc),
Vec::new(),
));
}
for enum_desc in descriptor_pool().all_enums() {
definitions.push(build_node(
"enum",
enum_desc.full_name().to_string(),
0,
Some(enum_desc.full_name().to_string()),
None,
"enum".to_string(),
false,
false,
None,
enum_options(Kind::Enum(enum_desc.clone())),
Vec::new(),
Vec::new(),
Vec::new(),
));
}
definitions.sort_by(|a, b| a.name.cmp(&b.name));
definitions
}
fn build_network_config_schema() -> NetworkConfigSchema {
let network_config = network_config_descriptor();
build_node(
"schema",
network_config.name().to_string(),
0,
Some(network_config.full_name().to_string()),
None,
"object".to_string(),
false,
true,
None,
Vec::new(),
Vec::new(),
build_message_children(&network_config),
collect_definitions(),
)
}
fn build_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
network_config_descriptor()
.fields()
.filter(should_expose_field)
.map(|field| ConfigFieldMapping {
field_name: field.name().to_string(),
field_number: field.number() as i32,
})
.collect()
}
pub fn get_network_config_schema() -> NetworkConfigSchema {
build_network_config_schema()
}
pub fn get_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
build_network_config_field_mappings()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn schema_is_exposed_as_single_tree_type() {
let schema = get_network_config_schema();
assert_eq!(schema.node_kind, "schema");
assert_eq!(schema.name, "NetworkConfig");
assert_eq!(
schema.type_name.as_deref(),
Some("api.manage.NetworkConfig")
);
let virtual_ipv4 = schema
.children
.iter()
.find(|field| field.name == "virtual_ipv4")
.expect("virtual_ipv4 field");
assert_eq!(virtual_ipv4.semantic_type.as_deref(), Some("cidr_ip"));
let secure_mode = schema
.children
.iter()
.find(|field| field.name == "secure_mode")
.expect("secure_mode field");
assert!(
secure_mode
.children
.iter()
.any(|field| field.name == "enabled")
);
let secure_mode_definition = schema
.definitions
.iter()
.find(|definition| definition.name == "common.SecureModeConfig")
.expect("secure mode definition");
assert!(
secure_mode_definition
.children
.iter()
.any(|field| field.name == "local_private_key")
);
let networking_method_definition = schema
.definitions
.iter()
.find(|definition| definition.name == "api.manage.NetworkingMethod")
.expect("networking method enum definition");
assert!(
networking_method_definition
.enum_options
.iter()
.any(|option| option.label == "PublicServer")
);
}
}
@@ -0,0 +1,197 @@
use crate::config::repository::{get_config_record, save_config_record};
use crate::config::services::schema_service::get_network_config_field_mappings;
use crate::config::types::stored_config::SharedConfigLinkPayload;
use base64::{Engine as _, engine::general_purpose::URL_SAFE_NO_PAD};
use easytier::proto::api::manage::NetworkConfig;
use flate2::{Compression, read::ZlibDecoder, write::ZlibEncoder};
use gethostname::gethostname;
use std::collections::HashMap;
use std::io::{Read, Write};
use url::Url;
use uuid::Uuid;
const SHARE_LINK_HOST: &str = "easytier.cn";
const SHARE_LINK_PATH: &str = "/comp_cfg";
fn field_name_to_id_map() -> HashMap<String, String> {
get_network_config_field_mappings()
.into_iter()
.map(|mapping| (mapping.field_name, mapping.field_number.to_string()))
.collect()
}
fn field_id_to_name_map() -> HashMap<String, String> {
get_network_config_field_mappings()
.into_iter()
.map(|mapping| (mapping.field_number.to_string(), mapping.field_name))
.collect()
}
fn prune_empty(value: &serde_json::Value) -> Option<serde_json::Value> {
match value {
serde_json::Value::Null => None,
serde_json::Value::Array(values) if values.is_empty() => None,
_ => Some(value.clone()),
}
}
fn map_config_json(config: &NetworkConfig) -> Result<String, String> {
let field_name_to_id = field_name_to_id_map();
let raw = serde_json::to_value(config).map_err(|err| err.to_string())?;
let mut mapped = serde_json::Map::new();
for (key, value) in raw.as_object().cloned().unwrap_or_default() {
let Some(value) = prune_empty(&value) else {
continue;
};
let mapped_key = field_name_to_id.get(&key).cloned().unwrap_or(key);
mapped.insert(mapped_key, value);
}
serde_json::to_string(&mapped).map_err(|err| err.to_string())
}
fn unmap_config_json(raw: &str) -> Result<NetworkConfig, String> {
let field_id_to_name = field_id_to_name_map();
let value = serde_json::from_str::<serde_json::Value>(raw).map_err(|err| err.to_string())?;
let mut mapped = serde_json::Map::new();
for (key, value) in value.as_object().cloned().unwrap_or_default() {
let field_name = field_id_to_name.get(&key).cloned().unwrap_or(key);
mapped.insert(field_name, value);
}
serde_json::from_value(serde_json::Value::Object(mapped)).map_err(|err| err.to_string())
}
fn compress_to_base64url(raw: &str) -> Result<String, String> {
let mut encoder = ZlibEncoder::new(Vec::new(), Compression::best());
encoder
.write_all(raw.as_bytes())
.map_err(|err| err.to_string())?;
let compressed = encoder.finish().map_err(|err| err.to_string())?;
Ok(URL_SAFE_NO_PAD.encode(compressed))
}
fn decompress_from_base64url(raw: &str) -> Result<String, String> {
let compressed = URL_SAFE_NO_PAD.decode(raw).map_err(|err| err.to_string())?;
let mut decoder = ZlibDecoder::new(compressed.as_slice());
let mut out = String::new();
decoder
.read_to_string(&mut out)
.map_err(|err| err.to_string())?;
Ok(out)
}
pub fn build_config_share_link(
config_id: &str,
display_name: Option<String>,
only_start: bool,
) -> Option<String> {
let record = get_config_record(config_id)?;
let config = serde_json::from_str::<NetworkConfig>(&record.config_json).ok()?;
let mapped_json = map_config_json(&config).ok()?;
let compressed = compress_to_base64url(&mapped_json).ok()?;
let final_name = display_name
.or(Some(record.meta.display_name))
.filter(|name| !name.is_empty());
let mut url = Url::parse(&format!("https://{SHARE_LINK_HOST}{SHARE_LINK_PATH}")).ok()?;
url.query_pairs_mut().append_pair("cfg", &compressed);
if let Some(name) = final_name {
url.query_pairs_mut().append_pair("name", &name);
}
if only_start {
url.query_pairs_mut().append_pair("only_start", "true");
}
Some(url.to_string())
}
pub fn parse_config_share_link(share_link: &str) -> Option<SharedConfigLinkPayload> {
let url = Url::parse(share_link).ok()?;
if url.host_str()? != SHARE_LINK_HOST || url.path() != SHARE_LINK_PATH {
return None;
}
let cfg = url
.query_pairs()
.find(|(key, _)| key == "cfg")?
.1
.to_string();
let mapped_json = decompress_from_base64url(&cfg).ok()?;
let mut config = unmap_config_json(&mapped_json).ok()?;
config.instance_id = Some(Uuid::new_v4().to_string());
let hostname = gethostname().to_string_lossy().to_string();
if !hostname.is_empty() {
config.hostname = Some(hostname);
}
let config_json = serde_json::to_string(&config).ok()?;
let display_name = url
.query_pairs()
.find(|(key, _)| key == "name")
.map(|(_, value)| value.to_string())
.filter(|name| !name.is_empty());
let only_start = url
.query_pairs()
.find(|(key, _)| key == "only_start")
.map(|(_, value)| value == "true")
.unwrap_or(false);
Some(SharedConfigLinkPayload {
config_json,
display_name,
only_start,
})
}
pub fn import_config_share_link(
share_link: &str,
display_name_override: Option<String>,
) -> Option<String> {
let payload = parse_config_share_link(share_link)?;
let config = serde_json::from_str::<NetworkConfig>(&payload.config_json).ok()?;
let config_id = config.instance_id.clone()?;
let display_name = display_name_override
.filter(|name| !name.is_empty())
.or(payload.display_name)
.unwrap_or_else(|| config_id.clone());
save_config_record(config_id.clone(), display_name, payload.config_json)?;
Some(config_id)
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config_repo::{create_config_record, init_config_store};
use std::time::{SystemTime, UNIX_EPOCH};
fn test_root() -> String {
let unique = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
std::env::temp_dir()
.join(format!("easytier_ohrs_share_test_{unique}"))
.to_string_lossy()
.into_owned()
}
#[test]
fn share_link_roundtrip_works() {
assert!(init_config_store(test_root()));
create_config_record("cfg-share".to_string(), "share-demo".to_string())
.expect("create config");
let link = build_config_share_link("cfg-share", None, true).expect("share link");
let payload = parse_config_share_link(&link).expect("parse link");
let config =
serde_json::from_str::<NetworkConfig>(&payload.config_json).expect("config json");
assert!(payload.only_start);
assert_eq!(payload.display_name.as_deref(), Some("share-demo"));
assert_ne!(config.instance_id.as_deref(), Some("cfg-share"));
let imported_id = import_config_share_link(&link, None).expect("import link");
assert_ne!(imported_id, "cfg-share");
}
}
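For reference, the query handling in parse_config_share_link above (keep `name` only when non-empty, treat `only_start` as true only for the literal string "true") can be sketched with std alone. This is an illustrative stand-in, not the real implementation: the production code uses url::Url::query_pairs, which also percent-decodes values, and parse_share_query here is a hypothetical helper.

```rust
// Hedged sketch of the share-link query semantics: `name` is kept only when
// non-empty, and `only_start` defaults to false unless it equals "true".
// No percent-decoding; the real code relies on url::Url for that.
fn parse_share_query(query: &str) -> (Option<String>, bool) {
    let mut display_name = None;
    let mut only_start = false;
    for pair in query.split('&') {
        let (key, value) = pair.split_once('=').unwrap_or((pair, ""));
        match key {
            "name" if !value.is_empty() => display_name = Some(value.to_string()),
            "only_start" => only_start = value == "true",
            _ => {}
        }
    }
    (display_name, only_start)
}
```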
@@ -0,0 +1,333 @@
use crate::config::types::stored_config::{StoredConfigList, StoredConfigMeta};
use ohos_hilog_binding::{hilog_debug, hilog_error};
use rusqlite::{Connection, OptionalExtension, params};
use std::path::PathBuf;
use std::sync::Mutex;
use std::time::{SystemTime, UNIX_EPOCH};
static CONFIG_DB_PATH: Mutex<Option<PathBuf>> = Mutex::new(None);
const CONFIG_DB_FILE_NAME: &str = "easytier-config-store.db";
#[derive(Debug, Clone)]
struct StoredConfigMetaRecord {
config_id: String,
display_name: String,
created_at: String,
updated_at: String,
favorite: bool,
temporary: bool,
}
/// Current Unix time in whole seconds as a string ("0" if the clock reads before the epoch).
pub(crate) fn now_ts_string() -> String {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.map(|d| d.as_secs().to_string())
.unwrap_or_else(|_| "0".to_string())
}
fn db_file_path() -> Option<PathBuf> {
CONFIG_DB_PATH
.lock()
.ok()
.and_then(|guard| guard.as_ref().cloned())
}
fn init_schema(conn: &Connection) -> rusqlite::Result<()> {
conn.execute_batch(
"PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS stored_configs (
config_id TEXT PRIMARY KEY,
display_name TEXT NOT NULL,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
favorite INTEGER NOT NULL DEFAULT 0,
temporary INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS stored_config_fields (
config_id TEXT NOT NULL,
field_name TEXT NOT NULL,
field_json TEXT NOT NULL,
updated_at TEXT NOT NULL,
PRIMARY KEY (config_id, field_name),
FOREIGN KEY (config_id) REFERENCES stored_configs(config_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_stored_config_fields_config_id
ON stored_config_fields(config_id);",
)
}
/// Open the config database at the registered path, ensuring the schema exists; logs and returns None on failure.
pub(crate) fn open_db() -> Option<Connection> {
let path = db_file_path()?;
let conn = match Connection::open(&path) {
Ok(conn) => conn,
Err(e) => {
hilog_error!("[Rust] failed to open config db {}: {}", path.display(), e);
return None;
}
};
if let Err(e) = init_schema(&conn) {
hilog_error!(
"[Rust] failed to initialize config db {}: {}",
path.display(),
e
);
return None;
}
Some(conn)
}
fn row_to_meta(row: &rusqlite::Row<'_>) -> rusqlite::Result<StoredConfigMetaRecord> {
Ok(StoredConfigMetaRecord {
config_id: row.get(0)?,
display_name: row.get(1)?,
created_at: row.get(2)?,
updated_at: row.get(3)?,
favorite: row.get::<_, i64>(4)? != 0,
temporary: row.get::<_, i64>(5)? != 0,
})
}
fn load_meta_record(conn: &Connection, config_id: &str) -> Option<StoredConfigMetaRecord> {
conn.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
}
fn to_meta(record: StoredConfigMetaRecord) -> StoredConfigMeta {
StoredConfigMeta {
config_id: record.config_id,
display_name: record.display_name,
created_at: record.created_at,
updated_at: record.updated_at,
favorite: record.favorite,
temporary: record.temporary,
}
}
/// Create the database directory, record the db path, and verify the database opens; returns false on any failure.
pub fn init_config_meta_store(root_dir: String) -> bool {
let root = PathBuf::from(root_dir);
if let Err(e) = std::fs::create_dir_all(&root) {
hilog_error!(
"[Rust] failed to create config db dir {}: {}",
root.display(),
e
);
return false;
}
let db_path = root.join(CONFIG_DB_FILE_NAME);
match CONFIG_DB_PATH.lock() {
Ok(mut guard) => {
*guard = Some(db_path.clone());
}
Err(e) => {
hilog_error!("[Rust] failed to lock config db path: {}", e);
return false;
}
}
if open_db().is_none() {
return false;
}
hilog_debug!("[Rust] initialized config db at {}", db_path.display());
true
}
pub fn list_config_meta_entries() -> StoredConfigList {
let Some(conn) = open_db() else {
return StoredConfigList { configs: vec![] };
};
let mut stmt = match conn.prepare(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs
ORDER BY updated_at DESC, display_name ASC",
) {
Ok(stmt) => stmt,
Err(e) => {
hilog_error!("[Rust] failed to prepare list meta query: {}", e);
return StoredConfigList { configs: vec![] };
}
};
let rows = match stmt.query_map([], row_to_meta) {
Ok(rows) => rows,
Err(e) => {
hilog_error!("[Rust] failed to list config meta rows: {}", e);
return StoredConfigList { configs: vec![] };
}
};
let configs = rows.filter_map(Result::ok).map(to_meta).collect();
StoredConfigList { configs }
}
pub fn get_config_display_name(config_id: &str) -> Option<String> {
let conn = open_db()?;
load_meta_record(&conn, config_id).map(|record| record.display_name)
}
pub fn get_config_meta(config_id: &str) -> Option<StoredConfigMeta> {
let conn = open_db()?;
load_meta_record(&conn, config_id).map(to_meta)
}
pub fn upsert_config_meta(
config_id: String,
display_name: String,
favorite: bool,
temporary: bool,
) -> StoredConfigMeta {
let now = now_ts_string();
let Some(conn) = open_db() else {
return StoredConfigMeta {
config_id,
display_name,
created_at: now.clone(),
updated_at: now,
favorite,
temporary,
};
};
let created_at = load_meta_record(&conn, &config_id)
.map(|record| record.created_at)
.unwrap_or_else(|| now.clone());
if let Err(e) = conn.execute(
"INSERT INTO stored_configs (
config_id, display_name, created_at, updated_at, favorite, temporary
) VALUES (?1, ?2, ?3, ?4, ?5, ?6)
ON CONFLICT(config_id) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at,
favorite = excluded.favorite,
temporary = excluded.temporary",
params![
config_id,
display_name,
created_at,
now,
if favorite { 1 } else { 0 },
if temporary { 1 } else { 0 }
],
) {
hilog_error!("[Rust] failed to upsert config meta: {}", e);
}
get_config_meta(&config_id).unwrap_or(StoredConfigMeta {
config_id,
display_name,
created_at,
updated_at: now,
favorite,
temporary,
})
}
/// Transaction-scoped variant of upsert_config_meta: preserves created_at on conflict
/// and reads the row back, falling back to the in-memory values if the read fails.
pub(crate) fn upsert_config_meta_in_tx(
tx: &rusqlite::Transaction<'_>,
config_id: String,
display_name: String,
favorite: bool,
temporary: bool,
) -> Option<StoredConfigMeta> {
let now = now_ts_string();
let created_at = tx
.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
.map(|record| record.created_at)
.unwrap_or_else(|| now.clone());
tx.execute(
"INSERT INTO stored_configs (
config_id, display_name, created_at, updated_at, favorite, temporary
) VALUES (?1, ?2, ?3, ?4, ?5, ?6)
ON CONFLICT(config_id) DO UPDATE SET
display_name = excluded.display_name,
updated_at = excluded.updated_at,
favorite = excluded.favorite,
temporary = excluded.temporary",
params![
config_id,
display_name,
created_at,
now,
if favorite { 1 } else { 0 },
if temporary { 1 } else { 0 }
],
)
.ok()?;
tx.query_row(
"SELECT config_id, display_name, created_at, updated_at, favorite, temporary
FROM stored_configs WHERE config_id = ?1",
params![config_id],
row_to_meta,
)
.optional()
.ok()
.flatten()
.map(to_meta)
.or(Some(StoredConfigMeta {
config_id,
display_name,
created_at,
updated_at: now,
favorite,
temporary,
}))
}
pub fn set_config_display_name(
config_id: String,
display_name: String,
) -> Option<StoredConfigMeta> {
let conn = open_db()?;
let mut record = load_meta_record(&conn, &config_id)?;
record.display_name = display_name;
record.updated_at = now_ts_string();
conn.execute(
"UPDATE stored_configs
SET display_name = ?2, updated_at = ?3
WHERE config_id = ?1",
params![config_id, record.display_name, record.updated_at],
)
.ok()?;
Some(to_meta(record))
}
pub fn delete_config_meta(config_id: &str) -> bool {
let Some(conn) = open_db() else {
return false;
};
match conn.execute(
"DELETE FROM stored_configs WHERE config_id = ?1",
params![config_id],
) {
Ok(rows) => rows > 0,
Err(e) => {
hilog_error!("[Rust] failed to delete config meta {}: {}", config_id, e);
false
}
}
}
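The ON CONFLICT upsert above has one subtlety worth illustrating: created_at is read back (or defaulted to now) before the write, so it survives later updates while display_name, updated_at, and the flags are refreshed. A std-only simulation of that rule, with a HashMap standing in for the stored_configs table (MetaRow and upsert are illustrative names, not part of the codebase):

```rust
use std::collections::HashMap;

// Std-only simulation of the ON CONFLICT upsert: created_at survives
// updates, everything else is refreshed. Field names mirror stored_configs.
#[derive(Debug)]
struct MetaRow {
    display_name: String,
    created_at: u64,
    updated_at: u64,
}

fn upsert(table: &mut HashMap<String, MetaRow>, id: &str, name: &str, now: u64) {
    // Look up the existing row first, as the real code does via load_meta_record.
    let created_at = table.get(id).map(|r| r.created_at).unwrap_or(now);
    table.insert(
        id.to_string(),
        MetaRow {
            display_name: name.to_string(),
            created_at,      // preserved on conflict
            updated_at: now, // always refreshed
        },
    );
}
```

The same two-step shape (select created_at, then write) is why the real code issues a read inside the transaction before the INSERT ... ON CONFLICT.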
@@ -0,0 +1 @@
pub(crate) mod config_meta;
@@ -0,0 +1 @@
pub(crate) mod stored_config;
@@ -0,0 +1,68 @@
use napi_derive_ohos::napi;
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigMeta {
pub config_id: String,
pub display_name: String,
pub created_at: String,
pub updated_at: String,
pub favorite: bool,
pub temporary: bool,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigRecord {
pub meta: StoredConfigMeta,
pub config_json: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigList {
pub configs: Vec<StoredConfigMeta>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct ExportTomlResult {
pub toml_text: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct StoredConfigSummary {
pub config_id: String,
pub display_name: String,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct SharedConfigLinkPayload {
pub config_json: String,
pub display_name: Option<String>,
pub only_start: bool,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct LocalSocketSyncMessage {
pub message_type: String,
pub payload_json: String,
}
#[derive(Debug, Clone, Serialize)]
#[napi(object)]
pub struct KeyValuePair {
pub key: String,
pub value: String,
}
@@ -0,0 +1,349 @@
use super::{field_store, import_export, legacy_migration, validation};
use crate::config::storage::config_meta::{
delete_config_meta, get_config_meta, init_config_meta_store, list_config_meta_entries, open_db,
upsert_config_meta_in_tx,
};
use crate::config::types::stored_config::{ExportTomlResult, StoredConfigRecord};
use easytier::common::config::ConfigLoader;
use easytier::proto::api::manage::NetworkConfig;
use ohos_hilog_binding::{hilog_debug, hilog_error};
use rusqlite::params;
use serde_json::Value;
use std::path::PathBuf;
use std::sync::Mutex;
static CONFIG_ROOT_DIR: Mutex<Option<PathBuf>> = Mutex::new(None);
pub(crate) const CONFIG_DIR_NAME: &str = "easytier-configs";
pub(crate) const KERNEL_SOCKET_FILE_NAME: &str = "easytier-kernel.sock";
pub(crate) fn config_root_dir() -> Option<PathBuf> {
CONFIG_ROOT_DIR
.lock()
.ok()
.and_then(|guard| guard.as_ref().cloned())
}
pub(crate) fn kernel_socket_path() -> Option<PathBuf> {
config_root_dir().map(|root| root.join(KERNEL_SOCKET_FILE_NAME))
}
pub(crate) fn legacy_config_file_path(config_id: &str) -> Option<PathBuf> {
legacy_migration::legacy_config_file_path(&config_root_dir(), CONFIG_DIR_NAME, config_id)
}
/// Create the config directory tree, record the root path, and initialize the sqlite-backed meta store.
pub fn init_config_store(root_dir: String) -> bool {
let root = PathBuf::from(root_dir);
let configs_dir = root.join(CONFIG_DIR_NAME);
if let Err(e) = std::fs::create_dir_all(&configs_dir) {
hilog_error!(
"[Rust] failed to create config dir {}: {}",
configs_dir.display(),
e
);
return false;
}
match CONFIG_ROOT_DIR.lock() {
Ok(mut guard) => {
*guard = Some(root.clone());
}
Err(e) => {
hilog_error!("[Rust] failed to lock config root dir: {}", e);
return false;
}
}
if !init_config_meta_store(root.to_string_lossy().into_owned()) {
return false;
}
hilog_debug!(
"[Rust] initialized config repo at {}",
configs_dir.display()
);
true
}
fn migrate_legacy_file_if_needed(config_id: &str) -> Option<()> {
legacy_migration::migrate_legacy_file_if_needed(
&config_root_dir(),
CONFIG_DIR_NAME,
config_id,
save_config_record,
)
}
/// Validate and normalize the config, persist it as per-field rows plus metadata in one
/// transaction, and delete any legacy per-config JSON file on success.
pub fn save_config_record(
config_id: String,
display_name: String,
config_json: String,
) -> Option<StoredConfigRecord> {
let config = match validation::validate_config_json(&config_json, config_id.clone()) {
Ok(config) => config,
Err(e) => {
hilog_error!("[Rust] save_config_record failed {}", e);
return None;
}
};
let normalized_json = match serde_json::to_string(&config) {
Ok(raw) => raw,
Err(e) => {
hilog_error!(
"[Rust] failed to serialize normalized config {}: {}",
config_id,
e
);
return None;
}
};
let fields = match validation::config_to_top_level_map(&config) {
Some(fields) => fields,
None => return None,
};
let conn = open_db()?;
let tx = conn.unchecked_transaction().ok()?;
let existing_meta = get_config_meta(&config_id);
let favorite = existing_meta
.as_ref()
.map(|meta| meta.favorite)
.unwrap_or(false);
let temporary = existing_meta
.as_ref()
.map(|meta| meta.temporary)
.unwrap_or(false);
let meta = upsert_config_meta_in_tx(&tx, config_id.clone(), display_name, favorite, temporary)?;
field_store::replace_config_fields(&tx, &config_id, fields)?;
tx.commit().ok()?;
if let Some(legacy_path) = legacy_config_file_path(&config_id) {
if legacy_path.exists() {
let _ = std::fs::remove_file(legacy_path);
}
}
Some(StoredConfigRecord {
meta,
config_json: normalized_json,
})
}
pub fn load_config_json(config_id: &str) -> Option<String> {
migrate_legacy_file_if_needed(config_id)?;
let object = field_store::load_config_map_from_db(config_id)?;
serde_json::to_string(&Value::Object(object)).ok()
}
pub fn get_config_record(config_id: &str) -> Option<StoredConfigRecord> {
let config_json = load_config_json(config_id)?;
let meta = get_config_meta(config_id)?;
Some(StoredConfigRecord { meta, config_json })
}
pub fn get_config_field_value(config_id: &str, field: &str) -> Option<String> {
migrate_legacy_file_if_needed(config_id)?;
let conn = open_db()?;
conn.query_row(
"SELECT field_json FROM stored_config_fields
WHERE config_id = ?1 AND field_name = ?2",
params![config_id, field],
|row| row.get::<_, String>(0),
)
.ok()
}
pub fn set_config_field_value(config_id: &str, field: &str, json_value: &str) -> bool {
// Only top-level fields are supported; reject dotted paths outright.
if field.contains('.') {
return false;
}
let raw = match load_config_json(config_id) {
Some(raw) => raw,
None => return false,
};
let mut value = match serde_json::from_str::<Value>(&raw) {
Ok(value) => value,
Err(_) => return false,
};
let new_field_value = match serde_json::from_str::<Value>(json_value) {
Ok(value) => value,
Err(_) => return false,
};
let object = match value.as_object_mut() {
Some(object) => object,
None => return false,
};
object.insert(field.to_string(), new_field_value);
let normalized = match serde_json::to_string(&value) {
Ok(raw) => raw,
Err(_) => return false,
};
let display_name = get_config_meta(config_id)
.map(|meta| meta.display_name)
.unwrap_or_else(|| config_id.to_string());
save_config_record(config_id.to_string(), display_name, normalized).is_some()
}
pub fn get_display_name(config_id: &str) -> Option<String> {
get_config_meta(config_id).map(|meta| meta.display_name)
}
pub fn get_default_config_json() -> Option<String> {
crate::build_default_network_config_json().ok()
}
pub fn create_config_record(config_id: String, display_name: String) -> Option<StoredConfigRecord> {
let raw = get_default_config_json()?;
let mut config = serde_json::from_str::<NetworkConfig>(&raw).ok()?;
config.instance_id = Some(config_id.clone());
let normalized_json = serde_json::to_string(&config).ok()?;
save_config_record(config_id, display_name, normalized_json)
}
pub fn start_kernel_with_config_id(config_id: &str) -> bool {
let raw = match load_config_json(config_id) {
Some(raw) => raw,
None => return false,
};
crate::run_network_instance_from_json(&raw)
}
pub fn list_config_meta_json() -> String {
serde_json::to_string(&list_config_meta_entries().configs).unwrap_or_else(|_| "[]".to_string())
}
pub fn delete_config_record(config_id: &str) -> bool {
if let Some(path) = legacy_config_file_path(config_id) {
if path.exists() {
let _ = std::fs::remove_file(path);
}
}
let conn = match open_db() {
Some(conn) => conn,
None => return false,
};
if let Err(e) = conn.execute(
"DELETE FROM stored_config_fields WHERE config_id = ?1",
params![config_id],
) {
hilog_error!("[Rust] failed to delete config fields {}: {}", config_id, e);
return false;
}
delete_config_meta(config_id)
}
pub fn export_config_toml(config_id: &str) -> Option<ExportTomlResult> {
let record = get_config_record(config_id)?;
import_export::export_config_toml_from_record(&record)
}
pub fn import_toml_config(
toml_text: String,
display_name: Option<String>,
) -> Option<StoredConfigRecord> {
import_export::import_toml_to_record(toml_text, display_name, save_config_record)
}
#[cfg(test)]
mod tests {
use super::*;
use rusqlite::params;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
fn test_root() -> String {
let unique = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
let dir = std::env::temp_dir().join(format!("easytier_ohrs_test_{}", unique));
dir.to_string_lossy().into_owned()
}
#[test]
fn save_get_export_delete_roundtrip() {
let root = test_root();
assert!(init_config_store(root.clone()));
let config_json = crate::build_default_network_config_json().expect("default config");
let saved = save_config_record("cfg-1".to_string(), "test-config".to_string(), config_json)
.expect("save config");
assert_eq!(saved.meta.config_id, "cfg-1");
assert_eq!(saved.meta.display_name, "test-config");
let loaded = get_config_record("cfg-1").expect("load config");
assert_eq!(loaded.meta.display_name, "test-config");
assert!(loaded.config_json.contains("cfg-1"));
let legacy_json_path = PathBuf::from(&root)
.join(CONFIG_DIR_NAME)
.join("cfg-1.json");
assert!(
!legacy_json_path.exists(),
"config should no longer be persisted as a per-config json file"
);
let conn = open_db().expect("db should be open");
let field_count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM stored_config_fields WHERE config_id = ?1",
params!["cfg-1"],
|row| row.get(0),
)
.expect("count config fields");
assert!(field_count > 0, "config fields should be stored in sqlite");
let exported = export_config_toml("cfg-1").expect("export toml");
assert!(exported.toml_text.contains("instance_id"));
assert!(delete_config_record("cfg-1"));
assert!(get_config_record("cfg-1").is_none());
}
#[test]
fn set_config_field_updates_only_requested_top_level_field() {
let root = test_root();
assert!(init_config_store(root));
let config_json = crate::build_default_network_config_json().expect("default config");
save_config_record(
"cfg-field".to_string(),
"field-config".to_string(),
config_json,
)
.expect("save config");
let before_network_name = get_config_field_value("cfg-field", "network_name");
let before_instance_id = get_config_field_value("cfg-field", "instance_id")
.expect("instance id field should exist");
assert!(set_config_field_value(
"cfg-field",
"network_name",
"\"changed-network\""
));
assert_eq!(
get_config_field_value("cfg-field", "network_name"),
Some("\"changed-network\"".to_string())
);
assert_eq!(
get_config_field_value("cfg-field", "instance_id"),
Some(before_instance_id)
);
assert_ne!(
get_config_field_value("cfg-field", "network_name"),
before_network_name
);
}
}
@@ -0,0 +1,67 @@
use crate::config::storage::config_meta::{now_ts_string, open_db};
use ohos_hilog_binding::hilog_error;
use rusqlite::{Connection, params};
use serde_json::{Map, Value};
pub(super) fn load_config_map_from_db(config_id: &str) -> Option<Map<String, Value>> {
let conn = open_db()?;
let mut stmt = conn
.prepare(
"SELECT field_name, field_json
FROM stored_config_fields
WHERE config_id = ?1",
)
.ok()?;
let rows = stmt
.query_map(params![config_id], |row| {
let field_name: String = row.get(0)?;
let field_json: String = row.get(1)?;
Ok((field_name, field_json))
})
.ok()?;
let mut object = Map::new();
for row in rows {
let (field_name, field_json) = row.ok()?;
let value = serde_json::from_str::<Value>(&field_json).ok()?;
object.insert(field_name, value);
}
if object.is_empty() {
None
} else {
Some(object)
}
}
pub(super) fn replace_config_fields(
tx: &Connection,
config_id: &str,
fields: Map<String, Value>,
) -> Option<()> {
if let Err(e) = tx.execute(
"DELETE FROM stored_config_fields WHERE config_id = ?1",
params![config_id],
) {
hilog_error!(
"[Rust] failed to clear existing config fields {}: {}",
config_id,
e
);
return None;
}
for (field_name, value) in fields {
let field_json = serde_json::to_string(&value).ok()?;
if let Err(e) = tx.execute(
"INSERT INTO stored_config_fields (config_id, field_name, field_json, updated_at)
VALUES (?1, ?2, ?3, ?4)",
params![config_id, field_name, field_json, now_ts_string()],
) {
hilog_error!("[Rust] failed to persist config field {}: {}", config_id, e);
return None;
}
}
Some(())
}
@@ -0,0 +1,48 @@
use crate::config::types::stored_config::{ExportTomlResult, StoredConfigRecord};
use easytier::common::config::{ConfigLoader, TomlConfigLoader};
use easytier::proto::api::manage::NetworkConfig;
pub(super) fn export_config_toml_from_record(
record: &StoredConfigRecord,
) -> Option<ExportTomlResult> {
let config = serde_json::from_str::<NetworkConfig>(&record.config_json).ok()?;
let toml = config.gen_config().ok()?;
Some(ExportTomlResult {
toml_text: toml.dump(),
})
}
pub(super) fn import_toml_to_record(
toml_text: String,
display_name: Option<String>,
save_config_record: impl Fn(String, String, String) -> Option<StoredConfigRecord>,
) -> Option<StoredConfigRecord> {
let config =
NetworkConfig::new_from_config(TomlConfigLoader::new_from_str(&toml_text).ok()?).ok()?;
let config_id = config.instance_id.clone()?;
let name_from_toml = toml_text
.lines()
.find_map(|line| {
let trimmed = line.trim();
if !trimmed.starts_with("instance_name") {
return None;
}
trimmed.split_once('=').map(|(_, value)| {
value
.trim()
.trim_matches('"')
.trim_matches('\'')
.to_string()
})
})
.filter(|name| !name.is_empty());
let final_name = display_name
.filter(|name| !name.is_empty())
.or(name_from_toml)
.unwrap_or_else(|| config_id.clone());
let config_json = serde_json::to_string(&config).ok()?;
save_config_record(config_id, final_name, config_json)
}
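The instance_name scan above is a deliberate line-level scrape, not TOML parsing. A std-only sketch of the same idea (scrape_instance_name is a hypothetical helper; unlike the code above it matches the key exactly after splitting on '=', and it does not handle inline comments or escapes):

```rust
// Hedged sketch: find the first `instance_name = "..."` line and strip the
// surrounding quotes, returning None when the value is empty or absent.
fn scrape_instance_name(toml_text: &str) -> Option<String> {
    toml_text.lines().find_map(|line| {
        let (key, value) = line.trim().split_once('=')?;
        if key.trim() != "instance_name" {
            return None;
        }
        let name = value.trim().trim_matches('"').trim_matches('\'');
        (!name.is_empty()).then(|| name.to_string())
    })
}
```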
@@ -0,0 +1,45 @@
use crate::config::storage::config_meta::get_config_meta;
use ohos_hilog_binding::hilog_error;
use std::path::PathBuf;
pub(super) fn legacy_config_file_path(
root_dir: &Option<PathBuf>,
config_dir_name: &str,
config_id: &str,
) -> Option<PathBuf> {
root_dir.as_ref().map(|root| {
root.join(config_dir_name)
.join(format!("{}.json", config_id))
})
}
/// If a pre-sqlite per-config JSON file exists, import it through save_config_record
/// and remove the file; a leftover file is only logged, not fatal.
pub(super) fn migrate_legacy_file_if_needed(
root_dir: &Option<PathBuf>,
config_dir_name: &str,
config_id: &str,
save_config_record: impl Fn(
String,
String,
String,
) -> Option<crate::config::types::stored_config::StoredConfigRecord>,
) -> Option<()> {
let legacy_path = legacy_config_file_path(root_dir, config_dir_name, config_id)?;
if !legacy_path.exists() {
return Some(());
}
let raw = std::fs::read_to_string(&legacy_path).ok()?;
let display_name = get_config_meta(config_id)
.map(|meta| meta.display_name)
.unwrap_or_else(|| config_id.to_string());
save_config_record(config_id.to_string(), display_name, raw)?;
if let Err(e) = std::fs::remove_file(&legacy_path) {
hilog_error!(
"[Rust] failed to remove legacy config file {}: {}",
legacy_path.display(),
e
);
}
Some(())
}
@@ -0,0 +1,30 @@
use easytier::proto::api::manage::NetworkConfig;
use serde_json::{Map, Value};
pub(super) fn normalize_config_id(
mut config: NetworkConfig,
requested_id: String,
) -> Result<NetworkConfig, String> {
if requested_id.is_empty() {
return Err("config_id is required".to_string());
}
config.instance_id = Some(requested_id);
Ok(config)
}
pub(super) fn validate_config_json(
config_json: &str,
config_id: String,
) -> Result<NetworkConfig, String> {
let config = serde_json::from_str::<NetworkConfig>(config_json)
.map_err(|e| format!("parse config json failed: {}", e))?;
let config = normalize_config_id(config, config_id)?;
config
.gen_config()
.map_err(|e| format!("generate toml failed: {}", e))?;
Ok(config)
}
pub(super) fn config_to_top_level_map(config: &NetworkConfig) -> Option<Map<String, Value>> {
serde_json::to_value(config).ok()?.as_object().cloned()
}
@@ -0,0 +1,2 @@
pub(crate) mod config_api;
pub(crate) mod runtime_api;
@@ -0,0 +1,46 @@
use crate::config;
pub(crate) fn init_config_store(root_dir: String) -> bool {
config::repository::init_config_store(root_dir)
}
pub(crate) fn list_configs() -> String {
config::repository::list_config_meta_json()
}
pub(crate) fn save_config(config_id: String, display_name: String, config_json: String) -> bool {
config::repository::save_config_record(config_id, display_name, config_json).is_some()
}
pub(crate) fn create_config(config_id: String, display_name: String) -> bool {
config::repository::create_config_record(config_id, display_name).is_some()
}
pub(crate) fn delete_stored_config_meta(config_id: String) -> bool {
config::repository::delete_config_record(&config_id)
}
pub(crate) fn get_config(config_id: String) -> Option<String> {
config::repository::load_config_json(&config_id)
}
pub(crate) fn get_default_config() -> Option<String> {
config::repository::get_default_config_json()
}
pub(crate) fn get_config_field(config_id: String, field: String) -> Option<String> {
config::repository::get_config_field_value(&config_id, &field)
}
pub(crate) fn set_config_field(config_id: String, field: String, json_value: String) -> bool {
config::repository::set_config_field_value(&config_id, &field, &json_value)
}
pub(crate) fn import_toml(toml_text: String, display_name: Option<String>) -> Option<String> {
config::repository::import_toml_config(toml_text, display_name)
.map(|record| record.meta.config_id)
}
pub(crate) fn export_toml(config_id: String) -> Option<String> {
config::repository::export_config_toml(&config_id).map(|ret| ret.toml_text)
}
@@ -0,0 +1,184 @@
use crate::config::repository::load_config_json;
use crate::config::storage::config_meta::get_config_display_name;
use crate::config::types::stored_config::KeyValuePair;
use crate::kernel_bridge::{
aggregate_requested_tun_routes, start_local_socket_server as start_local_socket_server_inner,
stop_local_socket_server as stop_local_socket_server_inner,
};
use crate::runtime::state::runtime_state::{
RuntimeAggregateState, TunAggregateState, clear_tun_attached, mark_tun_attached,
runtime_instance_from_running_info,
};
use crate::{ASYNC_RUNTIME, EASYTIER_VERSION, INSTANCE_MANAGER, WEB_CLIENTS};
use easytier::proto::api::manage::NetworkConfig;
use ohos_hilog_binding::{hilog_error, hilog_info};
use std::sync::Arc;
pub(crate) fn start_kernel(
config_id: String,
start_kernel_with_config_id: impl Fn(&str) -> bool,
) -> bool {
start_kernel_with_config_id(&config_id)
}
pub(crate) fn stop_kernel(
config_id: String,
stop_web_client: impl Fn(&str) -> bool,
parse_instance_uuid: impl Fn(&str) -> Option<uuid::Uuid>,
maybe_stop_local_socket_server: impl Fn(),
) -> bool {
clear_tun_attached(&config_id);
if stop_web_client(&config_id) {
return true;
}
let Some(instance_id) = parse_instance_uuid(&config_id) else {
return false;
};
let ret = INSTANCE_MANAGER
.delete_network_instance(vec![instance_id])
.map(|_| true)
.unwrap_or_else(|err| {
hilog_error!("[Rust] stop_kernel failed {}: {}", config_id, err);
false
});
maybe_stop_local_socket_server();
ret
}
pub(crate) fn stop_network_instance(
config_ids: Vec<String>,
stop_kernel: impl Fn(String) -> bool,
) -> bool {
let mut ok = true;
for config_id in config_ids {
// Call stop_kernel before the && so every instance is attempted even after a failure.
ok = stop_kernel(config_id) && ok;
}
ok
}
pub(crate) fn collect_network_infos() -> Vec<KeyValuePair> {
let infos = match INSTANCE_MANAGER.collect_network_infos_sync() {
Ok(infos) => infos,
Err(err) => {
hilog_error!("[Rust] collect network infos failed {}", err);
return vec![];
}
};
infos
.into_iter()
.filter_map(|(key, value)| {
serde_json::to_string(&value)
.ok()
.map(|value_json| KeyValuePair {
key: key.to_string(),
value: value_json,
})
})
.collect()
}
pub(crate) fn set_tun_fd(
config_id: String,
fd: i32,
parse_instance_uuid: impl Fn(&str) -> Option<uuid::Uuid>,
) -> bool {
let Some(instance_id) = parse_instance_uuid(&config_id) else {
hilog_error!("[Rust] set_tun_fd invalid instance id: {}", config_id);
return false;
};
INSTANCE_MANAGER
.set_tun_fd(&instance_id, fd)
.map(|_| {
mark_tun_attached(&config_id);
hilog_info!(
"[Rust] set_tun_fd success instance={} fd={} marked_attached=true",
config_id,
fd
);
true
})
.unwrap_or_else(|err| {
hilog_error!("[Rust] set_tun_fd failed {}: {}", config_id, err);
false
})
}
pub(crate) fn get_runtime_snapshot() -> RuntimeAggregateState {
get_runtime_snapshot_inner()
}
pub(crate) fn get_runtime_snapshot_inner() -> RuntimeAggregateState {
let infos = match INSTANCE_MANAGER.collect_network_infos_sync() {
Ok(infos) => infos,
Err(err) => {
hilog_error!("[Rust] collect network infos failed {}", err);
return RuntimeAggregateState {
instances: vec![],
tun: TunAggregateState {
active: false,
attached_instance_ids: vec![],
aggregated_routes: vec![],
dns_servers: vec![],
need_rebuild: false,
},
running_instance_count: 0,
};
}
};
let mut instances = Vec::with_capacity(infos.len());
for (instance_uuid, info) in infos {
let config_id = instance_uuid.to_string();
let display_name = get_config_display_name(&config_id).unwrap_or_else(|| config_id.clone());
let config_json = load_config_json(&config_id);
let stored_config = config_json
.as_deref()
.and_then(|raw| serde_json::from_str::<NetworkConfig>(raw).ok());
let magic_dns_enabled = stored_config
.as_ref()
.and_then(|cfg| cfg.enable_magic_dns)
.unwrap_or(false);
let need_exit_node = stored_config
.as_ref()
.map(|cfg| !cfg.exit_nodes.is_empty())
.unwrap_or(false);
instances.push(runtime_instance_from_running_info(
config_id,
display_name,
magic_dns_enabled,
need_exit_node,
info,
));
}
instances.sort_by(|a, b| {
a.display_name
.cmp(&b.display_name)
.then_with(|| a.instance_id.cmp(&b.instance_id))
});
let attached_instance_ids = instances
.iter()
.filter(|instance| instance.tun_required)
.map(|instance| instance.instance_id.clone())
.collect::<Vec<_>>();
let aggregated_routes = aggregate_requested_tun_routes(&instances);
let running_instance_count =
instances.iter().filter(|instance| instance.running).count() as i32;
let tun_active = !attached_instance_ids.is_empty();
RuntimeAggregateState {
instances,
tun: TunAggregateState {
active: tun_active,
attached_instance_ids,
aggregated_routes,
dns_servers: vec![],
need_rebuild: false,
},
running_instance_count,
}
}
@@ -0,0 +1,6 @@
mod protocol;
mod routing;
mod socket_server;
pub(crate) use routing::aggregate_requested_tun_routes;
pub use socket_server::{start_local_socket_server, stop_local_socket_server};
@@ -0,0 +1,50 @@
use crate::config::types::stored_config::LocalSocketSyncMessage;
use serde::Serialize;
use std::io::{Error, ErrorKind, Write};
use std::os::unix::net::UnixStream;
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct TunRequestPayload {
pub config_id: String,
pub instance_id: String,
pub display_name: String,
pub virtual_ipv4: Option<String>,
pub virtual_ipv4_cidr: Option<String>,
pub aggregated_routes: Vec<String>,
pub magic_dns_enabled: bool,
pub need_exit_node: bool,
}
/// Serialize one message as a single newline-terminated JSON frame and write it to the stream.
pub(crate) fn send_local_socket_message(
stream: &mut UnixStream,
message_type: &str,
payload_json: String,
) -> std::io::Result<()> {
let message = LocalSocketSyncMessage {
message_type: message_type.to_string(),
payload_json,
};
let mut raw = serde_json::to_vec(&message)
.map_err(|err| Error::new(ErrorKind::InvalidData, err.to_string()))?;
raw.push(b'\n');
stream.write_all(&raw)?;
Ok(())
}
/// Send one frame to every connected client, pruning streams whose write failed;
/// returns true if at least one client received the message.
pub(crate) fn broadcast_local_socket_message(
clients: &mut Vec<UnixStream>,
message_type: &str,
payload_json: &str,
) -> bool {
let mut active_clients = Vec::with_capacity(clients.len());
let mut delivered = false;
for mut client in clients.drain(..) {
if send_local_socket_message(&mut client, message_type, payload_json.to_string()).is_ok() {
delivered = true;
active_clients.push(client);
}
}
*clients = active_clients;
delivered
}
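Framing-wise, send_local_socket_message emits one JSON object per line, so the peer can recover message boundaries with a buffered line reader. A minimal std-only sketch of both directions, using an in-memory buffer in place of the UnixStream (read_frames and write_frame are illustrative helpers, not part of this module):

```rust
use std::io::{BufRead, BufReader, Cursor, Write};

// Writer side: one JSON payload per frame, terminated by b'\n',
// mirroring the raw.push(b'\n') in send_local_socket_message.
fn write_frame(out: &mut Vec<u8>, json: &str) {
    out.write_all(json.as_bytes()).unwrap();
    out.push(b'\n');
}

// Reader side: a buffered line reader recovers the frame boundaries.
fn read_frames(raw: &[u8]) -> Vec<String> {
    BufReader::new(Cursor::new(raw))
        .lines()
        .filter_map(Result::ok)
        .filter(|line| !line.is_empty())
        .collect()
}
```

Newline-delimited JSON keeps the protocol stateless: a reader that joins mid-stream only loses at most one partial line.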
@@ -0,0 +1,105 @@
use crate::config::repository::load_config_json;
use crate::runtime::state::runtime_state::RuntimeInstanceState;
use easytier::proto::api::manage::NetworkConfig;
use ipnet::IpNet;
use ohos_hilog_binding::hilog_debug;
use std::collections::HashSet;
use std::net::IpAddr;
pub(crate) fn load_manual_routes(config_id: &str) -> Vec<String> {
load_config_json(config_id)
.and_then(|raw| serde_json::from_str::<NetworkConfig>(&raw).ok())
.map(|config| config.routes)
.unwrap_or_default()
}
fn normalize_route_cidr(route: &str) -> Option<String> {
// Accept CIDR notation (truncated to its network address) or a bare IP,
// which is widened to a host route (/32 for v4, /128 for v6).
route
.parse::<IpNet>()
.ok()
.map(|network| network.trunc().to_string())
.or_else(|| {
route.parse::<IpAddr>().ok().map(|addr| match addr {
IpAddr::V4(ip) => format!("{}/32", ip),
IpAddr::V6(ip) => format!("{}/128", ip),
})
})
}
fn simplify_routes(routes: Vec<String>) -> Vec<String> {
let mut parsed = routes
.into_iter()
.filter_map(|route| normalize_route_cidr(&route))
.filter_map(|route| route.parse::<IpNet>().ok())
.collect::<Vec<_>>();
parsed.sort_by(|left, right| {
left.prefix_len()
.cmp(&right.prefix_len())
.then_with(|| left.network().to_string().cmp(&right.network().to_string()))
});
let mut simplified = Vec::<IpNet>::new();
'outer: for route in parsed {
for existing in &simplified {
if existing.contains(&route.network()) && existing.prefix_len() <= route.prefix_len() {
continue 'outer;
}
}
simplified.retain(|existing| {
!(route.contains(&existing.network()) && route.prefix_len() <= existing.prefix_len())
});
simplified.push(route);
}
let mut seen = HashSet::new();
simplified
.into_iter()
.map(|route| route.to_string())
.filter(|route| seen.insert(route.clone()))
.collect()
}
pub(crate) fn aggregate_tun_routes(instance: &RuntimeInstanceState) -> Vec<String> {
let virtual_ipv4_cidr = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4_cidr.clone());
let manual_routes = load_manual_routes(&instance.config_id);
let proxy_cidrs = instance
.routes
.iter()
.flat_map(|route| route.proxy_cidrs.iter().cloned())
.collect::<Vec<_>>();
let mut raw_routes = Vec::new();
if let Some(cidr) = virtual_ipv4_cidr.clone() {
raw_routes.push(cidr);
}
raw_routes.extend(manual_routes.iter().cloned());
raw_routes.extend(proxy_cidrs.iter().cloned());
let aggregated_routes = simplify_routes(raw_routes);
hilog_debug!(
"[Rust] aggregate_tun_routes instance={} proxy_cidrs={:?} aggregated_routes={:?}",
instance.instance_id,
proxy_cidrs,
aggregated_routes
);
aggregated_routes
}
pub(crate) fn aggregate_requested_tun_routes(instances: &[RuntimeInstanceState]) -> Vec<String> {
let mut aggregated_routes = Vec::new();
let mut seen_routes = HashSet::new();
for instance in instances.iter().filter(|instance| instance.tun_required) {
for route in aggregate_tun_routes(instance) {
if seen_routes.insert(route.clone()) {
aggregated_routes.push(route);
}
}
}
aggregated_routes
}
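`simplify_routes` above normalizes bare addresses to host routes, truncates host bits, sorts by prefix length, and drops any route already covered by a wider prefix. The same idea can be sketched with plain `u32` arithmetic for IPv4 (a standalone illustration, not the `ipnet`-backed implementation; IPv6 and octet range checks are omitted):

```rust
// Parse "a.b.c.d/len" (or a bare address as a /32 host route, mirroring
// normalize_route_cidr) into (network, prefix_len), dropping host bits.
fn parse_v4(cidr: &str) -> Option<(u32, u8)> {
    let (addr, len) = match cidr.split_once('/') {
        Some((a, l)) => (a, l.parse::<u8>().ok()?),
        None => (cidr, 32u8),
    };
    let octets = addr
        .split('.')
        .map(|o| o.parse::<u32>().ok())
        .collect::<Option<Vec<u32>>>()?;
    if octets.len() != 4 || len > 32 {
        return None;
    }
    let ip = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3];
    let mask = if len == 0 { 0 } else { u32::MAX << (32 - len) };
    Some((ip & mask, len)) // drop host bits, like IpNet::trunc()
}

// True if `outer` is at least as wide as `inner` and contains its network.
fn covers(outer: (u32, u8), inner: (u32, u8)) -> bool {
    let mask = if outer.1 == 0 { 0 } else { u32::MAX << (32 - outer.1) };
    outer.1 <= inner.1 && (inner.0 & mask) == outer.0
}

fn simplify(routes: &[&str]) -> Vec<String> {
    let mut nets: Vec<(u32, u8)> = routes.iter().filter_map(|r| parse_v4(r)).collect();
    nets.sort_by_key(|n| n.1); // widest prefixes first, as simplify_routes does
    let mut kept: Vec<(u32, u8)> = Vec::new();
    for net in nets {
        if kept.iter().any(|k| covers(*k, net)) {
            continue; // already covered by a shorter prefix (or a duplicate)
        }
        kept.push(net);
    }
    kept.into_iter()
        .map(|(ip, len)| {
            format!("{}.{}.{}.{}/{}", ip >> 24, (ip >> 16) & 255, (ip >> 8) & 255, ip & 255, len)
        })
        .collect()
}

fn main() {
    let out = simplify(&["10.1.0.0/16", "10.0.0.0/8", "192.168.1.5"]);
    assert_eq!(out, vec!["10.0.0.0/8".to_string(), "192.168.1.5/32".to_string()]);
    println!("{:?}", out);
}
```

Sorting by prefix length before scanning means each kept route only needs to be checked against wider ones already kept, which is why the crate-backed version can also skip its `retain` pass in the common case.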
@@ -0,0 +1,196 @@
use super::protocol::{TunRequestPayload, broadcast_local_socket_message};
use crate::config::repository::kernel_socket_path;
use crate::get_runtime_snapshot_inner;
use crate::kernel_bridge::routing::aggregate_tun_routes;
use ohos_hilog_binding::{hilog_error, hilog_info};
use once_cell::sync::Lazy;
use std::collections::{HashMap, HashSet};
use std::io::ErrorKind;
use std::os::unix::net::{UnixListener, UnixStream};
use std::path::PathBuf;
use std::sync::Mutex;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread::{self, JoinHandle};
use std::time::Duration;
struct LocalSocketState {
stop_flag: std::sync::Arc<AtomicBool>,
socket_path: PathBuf,
worker: JoinHandle<()>,
}
static LOCAL_SOCKET_STATE: Lazy<Mutex<Option<LocalSocketState>>> = Lazy::new(|| Mutex::new(None));
pub fn start_local_socket_server() -> bool {
let socket_path = match kernel_socket_path() {
Some(path) => path,
None => {
hilog_error!("[Rust] kernel socket path unavailable");
return false;
}
};
match LOCAL_SOCKET_STATE.lock() {
Ok(guard) if guard.is_some() => return true,
Ok(_) => {}
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
return false;
}
}
if socket_path.exists() {
let _ = std::fs::remove_file(&socket_path);
}
let listener = match UnixListener::bind(&socket_path) {
Ok(listener) => listener,
Err(err) => {
hilog_error!(
"[Rust] bind localsocket failed {}: {}",
socket_path.display(),
err
);
return false;
}
};
if let Err(err) = listener.set_nonblocking(true) {
hilog_error!("[Rust] set localsocket nonblocking failed: {}", err);
let _ = std::fs::remove_file(&socket_path);
return false;
}
let stop_flag = std::sync::Arc::new(AtomicBool::new(false));
let worker_stop_flag = stop_flag.clone();
let worker = thread::spawn(move || {
let mut last_snapshot_json = String::new();
let mut delivered_tun_requests = HashSet::new();
let mut last_tun_route_signatures = HashMap::<String, String>::new();
let mut clients = Vec::<UnixStream>::new();
while !worker_stop_flag.load(Ordering::Relaxed) {
let mut accepted_client = false;
loop {
match listener.accept() {
Ok((stream, _addr)) => {
accepted_client = true;
clients.push(stream);
}
Err(err) if err.kind() == ErrorKind::WouldBlock => break,
Err(err) => {
hilog_error!("[Rust] accept localsocket failed: {}", err);
break;
}
}
}
let snapshot = get_runtime_snapshot_inner();
let snapshot_json = match serde_json::to_string(&snapshot) {
Ok(json) => json,
Err(err) => {
hilog_error!("[Rust] serialize runtime snapshot failed: {}", err);
thread::sleep(Duration::from_millis(250));
continue;
}
};
if accepted_client || snapshot_json != last_snapshot_json {
let _ = broadcast_local_socket_message(
&mut clients,
"runtime_snapshot",
&snapshot_json,
);
last_snapshot_json = snapshot_json;
}
for instance in snapshot.instances.iter() {
if instance.running && instance.tun_required {
let virtual_ipv4 = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4.clone());
let virtual_ipv4_cidr = instance
.my_node_info
.as_ref()
.and_then(|info| info.virtual_ipv4_cidr.clone());
if clients.is_empty() {
continue;
}
if virtual_ipv4.is_none() || virtual_ipv4_cidr.is_none() {
continue;
}
let aggregated_routes = aggregate_tun_routes(instance);
let route_signature = serde_json::to_string(&aggregated_routes)
.unwrap_or_else(|_| "[]".to_string());
let should_send = !delivered_tun_requests.contains(&instance.instance_id)
|| last_tun_route_signatures
.get(&instance.instance_id)
.map(|value| value != &route_signature)
.unwrap_or(true);
if !should_send {
continue;
}
let payload = TunRequestPayload {
config_id: instance.config_id.clone(),
instance_id: instance.instance_id.clone(),
display_name: instance.display_name.clone(),
virtual_ipv4,
virtual_ipv4_cidr,
aggregated_routes,
magic_dns_enabled: instance.magic_dns_enabled,
need_exit_node: instance.need_exit_node,
};
let payload_json = match serde_json::to_string(&payload) {
Ok(json) => json,
Err(err) => {
hilog_error!("[Rust] serialize tun request failed: {}", err);
continue;
}
};
if broadcast_local_socket_message(&mut clients, "tun_request", &payload_json) {
delivered_tun_requests.insert(instance.instance_id.clone());
last_tun_route_signatures
.insert(instance.instance_id.clone(), route_signature);
}
} else {
delivered_tun_requests.remove(&instance.instance_id);
last_tun_route_signatures.remove(&instance.instance_id);
}
}
thread::sleep(Duration::from_millis(250));
}
});
match LOCAL_SOCKET_STATE.lock() {
Ok(mut guard) => {
*guard = Some(LocalSocketState {
stop_flag,
socket_path,
worker,
});
true
}
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
false
}
}
}
pub fn stop_local_socket_server() -> bool {
let state = match LOCAL_SOCKET_STATE.lock() {
Ok(mut guard) => guard.take(),
Err(err) => {
hilog_error!("[Rust] lock localsocket state failed: {}", err);
return false;
}
};
if let Some(state) = state {
state.stop_flag.store(true, Ordering::Relaxed);
let _ = state.worker.join();
let _ = std::fs::remove_file(state.socket_path);
}
true
}
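The worker loop above re-broadcasts a `tun_request` only on first delivery or when the JSON signature of the aggregated routes changes. That change-detection can be isolated as a small predicate (a sketch mirroring the `delivered_tun_requests` / `last_tun_route_signatures` logic, not code from the file itself):

```rust
use std::collections::{HashMap, HashSet};

// Send when the instance was never delivered, or when its recorded
// route signature differs from (or is missing for) the current one.
fn should_send(
    delivered: &HashSet<String>,
    signatures: &HashMap<String, String>,
    instance_id: &str,
    signature: &str,
) -> bool {
    !delivered.contains(instance_id)
        || signatures
            .get(instance_id)
            .map(|s| s != signature)
            .unwrap_or(true)
}

fn main() {
    let mut delivered = HashSet::new();
    let mut signatures = HashMap::new();
    // First pass: nothing delivered yet, so the request must go out.
    assert!(should_send(&delivered, &signatures, "inst", "[\"10.0.0.0/8\"]"));
    delivered.insert("inst".to_string());
    signatures.insert("inst".to_string(), "[\"10.0.0.0/8\"]".to_string());
    // Same signature: suppressed.
    assert!(!should_send(&delivered, &signatures, "inst", "[\"10.0.0.0/8\"]"));
    // Routes changed: send again.
    assert!(should_send(&delivered, &signatures, "inst", "[\"10.0.0.0/8\",\"192.168.0.0/16\"]"));
    println!("ok");
}
```

Note that the server only records a signature after a successful broadcast, so a failed delivery naturally retries on the next 250 ms tick.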
@@ -1,185 +1,485 @@
mod config;
mod exports;
mod kernel_bridge;
mod platform;
mod runtime;
use config::repository::{
create_config_record, delete_config_record, export_config_toml, get_config_field_value,
get_default_config_json, import_toml_config, init_config_store as init_repo_store,
list_config_meta_json, save_config_record, set_config_field_value, start_kernel_with_config_id,
};
use config::services::schema_service::{
ConfigFieldMapping, NetworkConfigSchema,
get_network_config_field_mappings as build_network_config_field_mappings,
get_network_config_schema as build_network_config_schema,
};
use config::services::share_link_service::{
build_config_share_link as build_config_share_link_inner,
import_config_share_link as import_config_share_link_inner,
parse_config_share_link as parse_config_share_link_inner,
};
use config::storage::config_meta::get_config_display_name;
use config::types::stored_config::{KeyValuePair, SharedConfigLinkPayload};
use easytier::common::constants::EASYTIER_VERSION;
use easytier::common::{
MachineIdOptions,
config::{ConfigFileControl, ConfigLoader, TomlConfigLoader},
};
use easytier::instance_manager::NetworkInstanceManager;
use easytier::proto::api::manage::NetworkConfig;
use easytier::proto::api::manage::NetworkingMethod;
use easytier::web_client::{WebClient, WebClientHooks, run_web_client};
use kernel_bridge::{
aggregate_requested_tun_routes, start_local_socket_server as start_local_socket_server_inner,
stop_local_socket_server as stop_local_socket_server_inner,
};
use napi_derive_ohos::napi;
use ohos_hilog_binding::{hilog_error, hilog_info};
use runtime::state::runtime_state::{
RuntimeAggregateState, TunAggregateState, clear_tun_attached, mark_tun_attached,
runtime_instance_from_running_info,
};
use std::collections::{HashMap, HashSet};
use std::format;
use std::sync::{Arc, Mutex};
use tokio::runtime::{Builder, Runtime};
use uuid::Uuid;
pub(crate) static INSTANCE_MANAGER: once_cell::sync::Lazy<Arc<NetworkInstanceManager>> =
once_cell::sync::Lazy::new(|| Arc::new(NetworkInstanceManager::new()));
static ASYNC_RUNTIME: once_cell::sync::Lazy<Runtime> = once_cell::sync::Lazy::new(|| {
Builder::new_multi_thread()
.enable_all()
.build()
.expect("tokio runtime for easytier-ohrs")
});
static WEB_CLIENTS: once_cell::sync::Lazy<Mutex<HashMap<String, ManagedWebClient>>> =
once_cell::sync::Lazy::new(|| Mutex::new(HashMap::new()));
#[derive(Default)]
struct TrackedWebClientHooks {
instance_ids: Mutex<HashSet<Uuid>>,
}
struct ManagedWebClient {
_client: WebClient,
hooks: Arc<TrackedWebClientHooks>,
}
#[async_trait::async_trait]
impl WebClientHooks for TrackedWebClientHooks {
async fn post_run_network_instance(&self, id: &Uuid) -> Result<(), String> {
self.instance_ids
.lock()
.map_err(|err| err.to_string())?
.insert(*id);
Ok(())
}
async fn post_remove_network_instances(&self, ids: &[Uuid]) -> Result<(), String> {
let mut guard = self.instance_ids.lock().map_err(|err| err.to_string())?;
for id in ids {
guard.remove(id);
}
Ok(())
}
}
fn is_config_server_config(config: &NetworkConfig) -> bool {
matches!(
NetworkingMethod::try_from(config.networking_method.unwrap_or_default())
.unwrap_or_default(),
NetworkingMethod::PublicServer
) && config
.public_server_url
.as_ref()
.is_some_and(|url| !url.trim().is_empty())
}
fn stop_web_client(config_id: &str) -> bool {
let managed = match WEB_CLIENTS.lock() {
Ok(mut guard) => guard.remove(config_id),
Err(err) => {
hilog_error!("[Rust] stop_web_client lock failed {}", err);
return false;
}
};
let Some(managed) = managed else {
return false;
};
let tracked_ids = managed
.hooks
.instance_ids
.lock()
.map(|guard| guard.iter().copied().collect::<Vec<_>>())
.unwrap_or_default();
drop(managed);
if tracked_ids.is_empty() {
maybe_stop_local_socket_server();
return true;
}
let ret = INSTANCE_MANAGER
.delete_network_instance(tracked_ids)
.map(|_| true)
.unwrap_or_else(|err| {
hilog_error!(
"[Rust] stop config server instances failed {}: {}",
config_id,
err
);
false
});
maybe_stop_local_socket_server();
ret
}
fn ensure_local_socket_server_started() -> bool {
start_local_socket_server_inner()
}
fn maybe_stop_local_socket_server() {
let no_local_instances = INSTANCE_MANAGER.list_network_instance_ids().is_empty();
let no_web_clients = WEB_CLIENTS
.lock()
.map(|guard| guard.is_empty())
.unwrap_or(false);
if no_local_instances && no_web_clients {
let _ = stop_local_socket_server_inner();
}
}
fn run_config_server_instance(config_id: &str, config: &NetworkConfig) -> bool {
if INSTANCE_MANAGER
.list_network_instance_ids()
.iter()
.next()
.is_some()
{
hilog_error!("[Rust] there is a running instance!");
return false;
}
let Some(config_server_url) = config.public_server_url.clone() else {
hilog_error!("[Rust] public_server_url missing for config server mode");
return false;
};
let hooks = Arc::new(TrackedWebClientHooks::default());
let secure_mode = config
.secure_mode
.as_ref()
.map(|mode| mode.enabled)
.unwrap_or(false);
let hostname = config.hostname.clone();
if !ensure_local_socket_server_started() {
return false;
}
let client = ASYNC_RUNTIME.block_on(run_web_client(
&config_server_url,
MachineIdOptions::default(),
hostname,
secure_mode,
INSTANCE_MANAGER.clone(),
Some(hooks.clone()),
));
let client = match client {
Ok(client) => client,
Err(err) => {
hilog_error!("[Rust] start config server failed {}", err);
return false;
}
};
match WEB_CLIENTS.lock() {
Ok(mut guard) => {
guard.insert(
config_id.to_string(),
ManagedWebClient {
_client: client,
hooks,
},
);
true
}
Err(err) => {
hilog_error!("[Rust] store config server client failed {}", err);
false
}
}
}
pub(crate) fn build_default_network_config_json() -> Result<String, String> {
let config = NetworkConfig::new_from_config(TomlConfigLoader::default())
.map_err(|e| format!("default_network_config failed {}", e))?;
serde_json::to_string(&config).map_err(|e| format!("default_network_config failed {}", e))
}
fn convert_toml_to_network_config_inner(toml_text: &str) -> Result<String, String> {
let config = NetworkConfig::new_from_config(
TomlConfigLoader::new_from_str(toml_text).map_err(|e| e.to_string())?,
)
.map_err(|e| e.to_string())?;
serde_json::to_string(&config).map_err(|e| e.to_string())
}
fn parse_network_config_inner(cfg_json: &str) -> bool {
serde_json::from_str::<NetworkConfig>(cfg_json)
.ok()
.and_then(|cfg| cfg.gen_config().ok())
.is_some()
}
pub(crate) fn run_network_instance_from_json(cfg_json: &str) -> bool {
let config = match serde_json::from_str::<NetworkConfig>(cfg_json) {
Ok(cfg) => cfg,
Err(e) => {
hilog_error!("[Rust] parse config failed {}", e);
return false;
}
};
if is_config_server_config(&config) {
let Some(config_id) = config.instance_id.as_deref() else {
hilog_error!("[Rust] config server config missing instance id");
return false;
};
return run_config_server_instance(config_id, &config);
}
let cfg = match config.gen_config() {
Ok(toml) => toml,
Err(e) => {
hilog_error!("[Rust] parse config failed {}", e);
return false;
}
};
if !INSTANCE_MANAGER.list_network_instance_ids().is_empty() {
hilog_error!("[Rust] there is a running instance!");
return false;
}
if !ensure_local_socket_server_started() {
return false;
}
let inst_id = cfg.get_id();
if INSTANCE_MANAGER
.list_network_instance_ids()
.contains(&inst_id)
{
hilog_error!("[Rust] instance {} already exists", inst_id);
return false;
}
match INSTANCE_MANAGER.run_network_instance(cfg, false, ConfigFileControl::STATIC_CONFIG) {
Ok(_) => true,
Err(err) => {
hilog_error!("[Rust] start_kernel failed for {}: {}", inst_id, err);
false
}
}
}
fn parse_instance_uuid(config_id: &str) -> Option<Uuid> {
match Uuid::parse_str(config_id) {
Ok(uuid) => Some(uuid),
Err(err) => {
hilog_error!("[Rust] invalid config_id {}: {}", config_id, err);
None
}
}
}
#[napi]
pub fn init_config_store(root_dir: String) -> bool {
exports::config_api::init_config_store(root_dir)
}
#[napi]
pub fn list_configs() -> String {
exports::config_api::list_configs()
}
#[napi]
pub fn get_config_display_name_by_id(config_id: String) -> Option<String> {
get_config_display_name(&config_id)
}
#[napi]
pub fn save_config(config_id: String, display_name: String, config_json: String) -> bool {
exports::config_api::save_config(config_id, display_name, config_json)
}
#[napi]
pub fn create_config(config_id: String, display_name: String) -> bool {
exports::config_api::create_config(config_id, display_name)
}
#[napi]
pub fn rename_stored_config(config_id: String, display_name: String) -> bool {
config::storage::config_meta::set_config_display_name(config_id, display_name).is_some()
}
#[napi]
pub fn delete_stored_config_meta(config_id: String) -> bool {
exports::config_api::delete_stored_config_meta(config_id)
}
#[napi]
pub fn get_config(config_id: String) -> Option<String> {
exports::config_api::get_config(config_id)
}
#[napi]
pub fn get_default_config() -> Option<String> {
exports::config_api::get_default_config()
}
#[napi]
pub fn get_config_field(config_id: String, field: String) -> Option<String> {
exports::config_api::get_config_field(config_id, field)
}
#[napi]
pub fn set_config_field(config_id: String, field: String, json_value: String) -> bool {
exports::config_api::set_config_field(config_id, field, json_value)
}
#[napi]
pub fn import_toml(toml_text: String, display_name: Option<String>) -> Option<String> {
exports::config_api::import_toml(toml_text, display_name)
}
#[napi]
pub fn export_toml(config_id: String) -> Option<String> {
exports::config_api::export_toml(config_id)
}
#[napi]
pub fn start_kernel(config_id: String) -> bool {
exports::runtime_api::start_kernel(config_id, start_kernel_with_config_id)
}
#[napi]
pub fn stop_kernel(config_id: String) -> bool {
exports::runtime_api::stop_kernel(
config_id,
stop_web_client,
parse_instance_uuid,
maybe_stop_local_socket_server,
)
}
#[napi]
pub fn stop_network_instance(config_ids: Vec<String>) -> bool {
exports::runtime_api::stop_network_instance(config_ids, stop_kernel)
}
#[napi]
pub fn easytier_version() -> String {
EASYTIER_VERSION.to_string()
}
#[napi]
pub fn default_network_config() -> String {
get_default_config().unwrap_or_else(|| "{}".to_string())
}
#[napi]
pub fn convert_toml_to_network_config(toml_text: String) -> String {
convert_toml_to_network_config_inner(&toml_text).unwrap_or_else(|err| format!("ERROR: {err}"))
}
#[napi]
pub fn parse_network_config(cfg_json: String) -> bool {
parse_network_config_inner(&cfg_json)
}
#[napi]
pub fn run_network_instance(cfg_json: String) -> bool {
run_network_instance_from_json(&cfg_json)
}
#[napi]
pub fn collect_network_infos() -> Vec<KeyValuePair> {
exports::runtime_api::collect_network_infos()
}
#[napi]
pub fn set_tun_fd(config_id: String, fd: i32) -> bool {
exports::runtime_api::set_tun_fd(config_id, fd, parse_instance_uuid)
}
#[napi]
pub fn get_network_config_schema() -> NetworkConfigSchema {
build_network_config_schema()
}
#[napi]
pub fn get_network_config_field_mappings() -> Vec<ConfigFieldMapping> {
build_network_config_field_mappings()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn exported_plain_object_schema_contains_core_networkconfig_metadata() {
let schema = get_network_config_schema();
assert_eq!(schema.name, "NetworkConfig");
assert_eq!(schema.node_kind, "schema");
assert!(
schema
.children
.iter()
.any(|field| field.name == "network_name")
);
let secure_mode = schema
.children
.iter()
.find(|field| field.name == "secure_mode")
.expect("secure_mode field");
assert!(
secure_mode
.children
.iter()
.any(|field| field.name == "enabled")
);
}
}
#[napi]
pub fn get_runtime_snapshot() -> RuntimeAggregateState {
exports::runtime_api::get_runtime_snapshot()
}
pub(crate) fn get_runtime_snapshot_inner() -> RuntimeAggregateState {
exports::runtime_api::get_runtime_snapshot_inner()
}
#[napi]
pub fn build_config_share_link(config_id: String, only_start: Option<bool>) -> Option<String> {
build_config_share_link_inner(&config_id, None, only_start.unwrap_or(false))
}
#[napi]
pub fn parse_config_share_link(share_link: String) -> Option<SharedConfigLinkPayload> {
parse_config_share_link_inner(&share_link)
}
#[napi]
pub fn import_config_share_link(
share_link: String,
display_name_override: Option<String>,
) -> Option<String> {
import_config_share_link_inner(&share_link, display_name_override)
}
@@ -0,0 +1 @@
pub(crate) mod logging;
@@ -0,0 +1 @@
pub(crate) mod native_log;
@@ -0,0 +1 @@
pub(crate) mod state;
@@ -0,0 +1 @@
pub(crate) mod runtime_state;
@@ -0,0 +1,293 @@
use easytier::proto::{api, common};
use napi_derive_ohos::napi;
use serde::Serialize;
use std::collections::HashSet;
use std::sync::Mutex;
static ATTACHED_TUN_INSTANCE_IDS: once_cell::sync::Lazy<Mutex<HashSet<String>>> =
once_cell::sync::Lazy::new(|| Mutex::new(HashSet::new()));
pub fn mark_tun_attached(instance_id: &str) {
if let Ok(mut guard) = ATTACHED_TUN_INSTANCE_IDS.lock() {
guard.insert(instance_id.to_string());
}
}
pub fn clear_tun_attached(instance_id: &str) {
if let Ok(mut guard) = ATTACHED_TUN_INSTANCE_IDS.lock() {
guard.remove(instance_id);
}
}
pub fn is_tun_attached(instance_id: &str) -> bool {
ATTACHED_TUN_INSTANCE_IDS
.lock()
.map(|guard| guard.contains(instance_id))
.unwrap_or(false)
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerConnStats {
pub rx_bytes: i64,
pub tx_bytes: i64,
pub rx_packets: i64,
pub tx_packets: i64,
pub latency_us: i64,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerConnInfo {
pub conn_id: String,
pub my_peer_id: i64,
pub peer_id: i64,
pub features: Vec<String>,
pub tunnel_type: Option<String>,
pub local_addr: Option<String>,
pub remote_addr: Option<String>,
pub resolved_remote_addr: Option<String>,
pub stats: Option<PeerConnStats>,
pub loss_rate: Option<f64>,
pub is_client: bool,
pub network_name: Option<String>,
pub is_closed: bool,
pub secure_auth_level: Option<i32>,
pub peer_identity_type: Option<i32>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct PeerInfo {
pub peer_id: i64,
pub default_conn_id: Option<String>,
pub directly_connected_conns: Vec<String>,
pub conns: Vec<PeerConnInfo>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RouteView {
pub peer_id: i64,
pub hostname: Option<String>,
pub ipv4: Option<String>,
pub ipv4_cidr: Option<String>,
pub ipv6_cidr: Option<String>,
pub proxy_cidrs: Vec<String>,
pub next_hop_peer_id: Option<i64>,
pub cost: Option<i32>,
pub path_latency: Option<i64>,
pub udp_nat_type: Option<i32>,
pub tcp_nat_type: Option<i32>,
pub inst_id: Option<String>,
pub version: Option<String>,
pub is_public_server: Option<bool>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct MyNodeInfo {
pub virtual_ipv4: Option<String>,
pub virtual_ipv4_cidr: Option<String>,
pub hostname: Option<String>,
pub version: Option<String>,
pub peer_id: Option<i64>,
pub listeners: Vec<String>,
pub vpn_portal_cfg: Option<String>,
pub udp_nat_type: Option<i32>,
pub tcp_nat_type: Option<i32>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RuntimeInstanceState {
pub config_id: String,
pub instance_id: String,
pub display_name: String,
pub running: bool,
pub tun_required: bool,
pub tun_attached: bool,
pub magic_dns_enabled: bool,
pub need_exit_node: bool,
pub error_message: Option<String>,
pub my_node_info: Option<MyNodeInfo>,
pub events: Vec<String>,
pub routes: Vec<RouteView>,
pub peers: Vec<PeerInfo>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct TunAggregateState {
pub active: bool,
pub attached_instance_ids: Vec<String>,
pub aggregated_routes: Vec<String>,
pub dns_servers: Vec<String>,
pub need_rebuild: bool,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
#[napi(object)]
pub struct RuntimeAggregateState {
pub instances: Vec<RuntimeInstanceState>,
pub tun: TunAggregateState,
pub running_instance_count: i32,
}
fn stringify_ipv4_inet(value: Option<common::Ipv4Inet>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_ipv6_inet(value: Option<common::Ipv6Inet>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_url(value: Option<common::Url>) -> Option<String> {
value.map(|v| v.to_string())
}
fn stringify_uuid(value: Option<common::Uuid>) -> Option<String> {
value.map(|v| v.to_string())
}
fn optional_u32_to_i64(value: Option<u32>) -> Option<i64> {
value.map(|v| v as i64)
}
fn optional_i32_to_i64(value: Option<i32>) -> Option<i64> {
value.map(|v| v as i64)
}
fn route_to_view(route: api::instance::Route) -> RouteView {
let stun = route.stun_info;
let feature_flag = route.feature_flag;
RouteView {
peer_id: route.peer_id as i64,
hostname: (!route.hostname.is_empty()).then_some(route.hostname),
ipv4: route
.ipv4_addr
.as_ref()
.and_then(|inet| inet.address.as_ref())
.map(|addr| addr.to_string()),
ipv4_cidr: stringify_ipv4_inet(route.ipv4_addr),
ipv6_cidr: stringify_ipv6_inet(route.ipv6_addr),
proxy_cidrs: route.proxy_cidrs,
next_hop_peer_id: optional_u32_to_i64(route.next_hop_peer_id_latency_first)
.or_else(|| Some(route.next_hop_peer_id as i64)),
cost: Some(route.cost),
path_latency: optional_i32_to_i64(route.path_latency_latency_first)
.or_else(|| Some(route.path_latency as i64)),
udp_nat_type: stun.as_ref().map(|info| info.udp_nat_type),
tcp_nat_type: stun.as_ref().map(|info| info.tcp_nat_type),
inst_id: (!route.inst_id.is_empty()).then_some(route.inst_id),
version: (!route.version.is_empty()).then_some(route.version),
is_public_server: feature_flag.map(|flag| flag.is_public_server),
}
}
fn peer_conn_to_view(conn: api::instance::PeerConnInfo) -> PeerConnInfo {
let stats = conn.stats.map(|stats| PeerConnStats {
rx_bytes: stats.rx_bytes as i64,
tx_bytes: stats.tx_bytes as i64,
rx_packets: stats.rx_packets as i64,
tx_packets: stats.tx_packets as i64,
latency_us: stats.latency_us as i64,
});
PeerConnInfo {
conn_id: conn.conn_id,
my_peer_id: conn.my_peer_id as i64,
peer_id: conn.peer_id as i64,
features: conn.features,
tunnel_type: conn.tunnel.as_ref().map(|t| t.tunnel_type.clone()),
local_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.local_addr.clone())),
remote_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.remote_addr.clone())),
resolved_remote_addr: conn
.tunnel
.as_ref()
.and_then(|t| stringify_url(t.resolved_remote_addr.clone())),
stats,
loss_rate: Some(conn.loss_rate as f64),
is_client: conn.is_client,
network_name: (!conn.network_name.is_empty()).then_some(conn.network_name),
is_closed: conn.is_closed,
secure_auth_level: Some(conn.secure_auth_level),
peer_identity_type: Some(conn.peer_identity_type),
}
}
fn peer_to_view(peer: api::instance::PeerInfo) -> PeerInfo {
PeerInfo {
peer_id: peer.peer_id as i64,
default_conn_id: stringify_uuid(peer.default_conn_id),
directly_connected_conns: peer
.directly_connected_conns
.into_iter()
.map(|id| id.to_string())
.collect(),
conns: peer.conns.into_iter().map(peer_conn_to_view).collect(),
}
}
fn my_node_info_to_view(info: api::manage::MyNodeInfo) -> MyNodeInfo {
MyNodeInfo {
virtual_ipv4: info
.virtual_ipv4
.as_ref()
.and_then(|inet| inet.address.as_ref())
.map(|addr| addr.to_string()),
virtual_ipv4_cidr: stringify_ipv4_inet(info.virtual_ipv4),
hostname: (!info.hostname.is_empty()).then_some(info.hostname),
version: (!info.version.is_empty()).then_some(info.version),
peer_id: Some(info.peer_id as i64),
listeners: info
.listeners
.into_iter()
.map(|url| url.to_string())
.collect(),
vpn_portal_cfg: info.vpn_portal_cfg,
udp_nat_type: info.stun_info.as_ref().map(|stun| stun.udp_nat_type),
tcp_nat_type: info.stun_info.as_ref().map(|stun| stun.tcp_nat_type),
}
}
pub fn runtime_instance_from_running_info(
config_id: String,
display_name: String,
magic_dns_enabled: bool,
need_exit_node: bool,
info: api::manage::NetworkInstanceRunningInfo,
) -> RuntimeInstanceState {
let tun_attached = info.running && is_tun_attached(&config_id);
let tun_required = info.running && (info.dev_name != "no_tun" || tun_attached);
RuntimeInstanceState {
config_id: config_id.clone(),
instance_id: config_id,
display_name,
running: info.running,
tun_required,
tun_attached,
magic_dns_enabled,
need_exit_node,
error_message: info.error_msg,
my_node_info: info.my_node_info.map(my_node_info_to_view),
events: info.events,
routes: info.routes.into_iter().map(route_to_view).collect(),
peers: info.peers.into_iter().map(peer_to_view).collect(),
}
}
@@ -12,6 +12,7 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1.0", features = ["v4", "serde"] }
guarden = "0.1"

# Axum web framework
axum = { version = "0.8.4", features = ["macros"] }
@@ -10,9 +10,9 @@ use easytier::{
     common::config::{
         ConfigFileControl, ConfigLoader, NetworkIdentity, PeerConfig, TomlConfigLoader,
     },
-    defer,
     instance_manager::NetworkInstanceManager,
 };
+use guarden::defer;
 use serde::{Deserialize, Serialize};
 use sqlx::any;
 use tokio_util::task::AbortOnDropHandle;
@@ -1,7 +1,7 @@
 {
   "name": "easytier-gui",
   "type": "module",
-  "version": "2.6.2",
+  "version": "2.6.4",
   "private": true,
   "packageManager": "pnpm@9.12.1+sha512.e5a7e52a4183a02d5931057f7a0dbff9d5e9ce3161e33fa68ae392125b79282a8a8a470a51dfc8a0ed86221442eb2fb57019b0990ed24fab519bf0e1bc5ccfc4",
   "scripts": {
@@ -1,6 +1,6 @@
 [package]
 name = "easytier-gui"
-version = "2.6.2"
+version = "2.6.4"
 description = "EasyTier GUI"
 authors = ["you"]
 edition.workspace = true
@@ -490,10 +490,18 @@ async fn init_web_client(app: AppHandle, url: Option<String>) -> Result<(), Stri
         .ok_or_else(|| "Instance manager is not available".to_string())?;

     let hooks = Arc::new(manager::GuiHooks { app: app.clone() });
+    let machine_id_state_dir = app
+        .path()
+        .app_data_dir()
+        .with_context(|| "Failed to resolve machine id state directory")
+        .map_err(|e| format!("{:#}", e))?;
     let web_client = web_client::run_web_client(
         url.as_str(),
-        None,
+        easytier::common::MachineIdOptions {
+            explicit_machine_id: None,
+            state_dir: Some(machine_id_state_dir),
+        },
         None,
         false,
         instance_manager,
@@ -17,7 +17,7 @@
     "createUpdaterArtifacts": false
   },
   "productName": "easytier-gui",
-  "version": "2.6.2",
+  "version": "2.6.4",
   "identifier": "com.kkrainbow.easytier",
   "plugins": {
     "shell": {
@@ -18,6 +18,7 @@ export interface ServiceMode extends WebClientConfig {
   rpc_portal: string
   file_log_level: 'off' | 'warn' | 'info' | 'debug' | 'trace'
   file_log_dir: string
+  installed_core_version?: string
 }

 export interface RemoteMode {
@@ -16,7 +16,7 @@ import { useToast, useConfirm } from 'primevue'
 import { loadMode, saveMode, WebClientConfig, type Mode } from '~/composables/mode'
 import { saveLastNetworkInstanceId, loadLastNetworkInstanceId } from '~/composables/config'
 import ModeSwitcher from '~/components/ModeSwitcher.vue'
-import { getServiceStatus } from '~/composables/backend'
+import { getEasytierVersion, getServiceStatus } from '~/composables/backend'

 const { t, locale } = useI18n()
 const confirm = useConfirm()
@@ -85,6 +85,20 @@ async function onUninstallService() {
   });
 }

+function stripModeMetadata(mode: Mode) {
+  if (mode.mode !== 'service') {
+    return mode
+  }
+  const serviceConfig = { ...mode }
+  delete serviceConfig.installed_core_version
+  return serviceConfig
+}
+
+function modeConfigChanged(next: Mode) {
+  return JSON.stringify(stripModeMetadata(next)) !== JSON.stringify(stripModeMetadata(currentMode.value))
+}
+
 async function onStopService() {
   isModeSaving.value = true
   manualDisconnect.value = true
@@ -134,13 +148,14 @@ async function initWithMode(mode: Mode) {
       }
       url = mode.remote_rpc_address
       break;
-    case 'service':
+    case 'service': {
       if (!mode.config_dir || !mode.file_log_dir || !mode.file_log_level || !mode.rpc_portal) {
         toast.add({ severity: 'error', summary: t('error'), detail: t('mode.service_config_empty'), life: 10000 })
         return initWithMode({ ...mode, mode: 'normal' });
       }
       let serviceStatus = await getServiceStatus()
-      if (serviceStatus === "NotInstalled" || JSON.stringify(mode) !== JSON.stringify(currentMode.value)) {
+      const coreVersion = await getEasytierVersion()
+      if (serviceStatus === "NotInstalled" || modeConfigChanged(mode) || mode.installed_core_version !== coreVersion) {
         mode.config_server_url = mode.config_server_url || undefined
         await initService({
           config_dir: mode.config_dir,
@@ -149,6 +164,7 @@ async function initWithMode(mode: Mode) {
           rpc_portal: mode.rpc_portal,
           config_server: mode.config_server_url,
         })
+        mode.installed_core_version = coreVersion
         serviceStatus = await getServiceStatus()
       }
       if (serviceStatus === "Stopped") {
@@ -157,6 +173,7 @@ async function initWithMode(mode: Mode) {
       url = "tcp://" + mode.rpc_portal.replace("0.0.0.0", "127.0.0.1")
       retrys = 5
       break;
+    }
     case 'normal':
       url = mode.rpc_portal;
       break;
@@ -1,6 +1,6 @@
 [package]
 name = "easytier-web"
-version = "2.6.2"
+version = "2.6.4"
 edition.workspace = true
 description = "Config server for easytier. easytier-core gets config from this and web frontend use it as restful api server."
@@ -81,6 +81,7 @@ const bool_flags: BoolFlag[] = [
   { field: 'latency_first', help: 'latency_first_help' },
   { field: 'use_smoltcp', help: 'use_smoltcp_help' },
   { field: 'disable_ipv6', help: 'disable_ipv6_help' },
+  { field: 'ipv6_public_addr_auto', help: 'ipv6_public_addr_auto_help' },
   { field: 'enable_kcp_proxy', help: 'enable_kcp_proxy_help' },
   { field: 'disable_kcp_input', help: 'disable_kcp_input_help' },
   { field: 'enable_quic_proxy', help: 'enable_quic_proxy_help' },
@@ -98,6 +99,8 @@ const bool_flags: BoolFlag[] = [
   { field: 'disable_encryption', help: 'disable_encryption_help' },
   { field: 'disable_tcp_hole_punching', help: 'disable_tcp_hole_punching_help' },
   { field: 'disable_udp_hole_punching', help: 'disable_udp_hole_punching_help' },
+  { field: 'enable_udp_broadcast_relay', help: 'enable_udp_broadcast_relay_help' },
+  { field: 'disable_upnp', help: 'disable_upnp_help' },
   { field: 'disable_sym_hole_punching', help: 'disable_sym_hole_punching_help' },
   { field: 'enable_magic_dns', help: 'enable_magic_dns_help' },
   { field: 'enable_private_mode', help: 'enable_private_mode_help' },
@@ -2,7 +2,7 @@
 import { AutoComplete, Button, Dialog, InputNumber, InputText } from 'primevue'
 import InputGroup from 'primevue/inputgroup'
 import InputGroupAddon from 'primevue/inputgroupaddon'
-import { computed, onMounted, onUnmounted, ref, watch } from 'vue'
+import { computed, ref, watch } from 'vue'
 import { useI18n } from 'vue-i18n'

 const props = defineProps<{
@@ -13,25 +13,8 @@ const props = defineProps<{
 const { t } = useI18n()
 const url = defineModel<string>({ required: true })
 const editing = ref(false)
-const container = ref<HTMLElement | null>(null)
-const internalCompact = ref(false)
 const hostFocused = ref(false)

-onMounted(() => {
-  if (container.value) {
-    const observer = new ResizeObserver(entries => {
-      for (const entry of entries) {
-        internalCompact.value = entry.contentRect.width < 400
-      }
-    })
-    observer.observe(container.value)
-    onUnmounted(() => {
-      observer.disconnect()
-    })
-  }
-})
-
 const parseUrl = (val: string | null | undefined): { proto: string; host: string; port: number | null } => {
   const getValidPort = (portStr: string, proto: string) => {
     const p = parseInt(portStr)
@@ -169,28 +152,30 @@ const onProtoChange = (newProto: string) => {
 </script>

 <template>
-  <div ref="container" class="w-full">
-    <InputGroup v-if="!internalCompact" class="w-full">
+  <div class="url-input-container w-full min-w-0 overflow-hidden">
+    <InputGroup class="url-input-full w-full min-w-0">
       <AutoComplete :model-value="internalValue.proto" :suggestions="filteredProtos" dropdown
         class="max-w-32 proto-autocomplete-in-group" @complete="searchProtos"
         @update:model-value="onProtoChange" />
-      <InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="grow"
+      <InputText v-model="internalValue.host" :placeholder="placeholder || '0.0.0.0'" class="grow min-w-0"
         @focus="onHostFocus" @blur="onHostBlur" />
       <template v-if="!isNoPortProto">
         <InputGroupAddon>
           <span style="font-weight: bold">:</span>
         </InputGroupAddon>
         <InputNumber v-model="internalValue.port" :format="false" :min="1" :max="65535" class="max-w-24"
-          :placeholder="String(protos[internalValue.proto] ?? 11010)"
-          fluid />
+          :placeholder="String(protos[internalValue.proto] ?? 11010)" fluid />
       </template>
+      <!-- Rendered in both responsive branches; keep action slot content free of side effects and duplicate IDs. -->
       <slot name="actions"></slot>
     </InputGroup>
-    <div v-else class="flex justify-between items-center p-2 border rounded w-full">
-      <span class="truncate mr-2">{{ url }}</span>
-      <div class="flex items-center">
-        <Button icon="pi pi-pencil" class="p-button-sm p-button-text" @click="editing = true" />
+    <div
+      class="url-input-compact flex justify-between items-center p-2 border rounded w-full min-w-0 overflow-hidden">
+      <span class="truncate mr-2 min-w-0 flex-1 overflow-hidden">{{ url }}</span>
+      <div class="flex items-center shrink-0">
+        <Button icon="pi pi-pencil" class="p-button-sm p-button-text" :aria-label="t('web.common.edit')"
+          @click="editing = true" />
         <slot name="actions"></slot>
       </div>
     </div>
@@ -222,6 +207,28 @@ const onProtoChange = (newProto: string) => {
 </template>

 <style scoped>
+.url-input-container {
+  container-type: inline-size;
+}
+
+.url-input-full {
+  display: none;
+}
+
+.url-input-compact {
+  display: flex;
+}
+
+@container (min-width: 400px) {
+  .url-input-full {
+    display: flex;
+  }
+
+  .url-input-compact {
+    display: none;
+  }
+}
+
 .proto-autocomplete-in-group,
 .proto-autocomplete-in-group :deep(.p-autocomplete-input),
 .proto-autocomplete-in-group :deep(.p-autocomplete-dropdown) {
@@ -104,6 +104,9 @@ use_smoltcp_help: 使用用户态 TCP/IP 协议栈,避免操作系统防火墙
 disable_ipv6: 禁用IPv6
 disable_ipv6_help: 禁用此节点的IPv6功能,仅使用IPv4进行网络通信。
+ipv6_public_addr_auto: 自动获取公网 IPv6
+ipv6_public_addr_auto_help: 自动从共享了 IPv6 子网的对等节点获取一个公网 IPv6 地址。

 enable_kcp_proxy: 启用 KCP 代理
 enable_kcp_proxy_help: 将 TCP 流量转为 KCP 流量,降低传输延迟,提升传输速度。
@@ -157,6 +160,12 @@ disable_tcp_hole_punching_help: 禁用TCP打洞功能
 disable_udp_hole_punching: 禁用UDP打洞
 disable_udp_hole_punching_help: 禁用UDP打洞功能
+enable_udp_broadcast_relay: UDP 广播中继
+enable_udp_broadcast_relay_help: "仅 Windows:捕获物理网卡上的本机 UDP 广播包并转发给 EasyTier 对等节点,帮助局域网游戏发现房间。需要管理员权限。"
+disable_upnp: 禁用 UPnP
+disable_upnp_help: 禁用符合条件监听器的运行时 UPnP/NAT-PMP 端口映射;自动端口映射默认开启。

 disable_sym_hole_punching: 禁用对称NAT打洞
 disable_sym_hole_punching_help: 禁用对称NAT的打洞(生日攻击),将对称NAT视为锥形NAT处理
@@ -254,6 +263,7 @@ event:
   DhcpIpv4Conflicted: DHCP IPv4地址冲突
   PortForwardAdded: 端口转发添加
   ProxyCidrsUpdated: 子网代理CIDR更新
+  UdpBroadcastRelayStartResult: UDP广播中继启动结果

 web:
   login:
@@ -103,6 +103,9 @@ use_smoltcp_help: Use a user-space TCP/IP stack to avoid issues with operating s
 disable_ipv6: Disable IPv6
 disable_ipv6_help: Disable IPv6 functionality for this node, only use IPv4 for network communication.
+ipv6_public_addr_auto: Auto Public IPv6
+ipv6_public_addr_auto_help: Auto-obtain a public IPv6 address from a peer that shares its IPv6 subnet.

 enable_kcp_proxy: Enable KCP Proxy
 enable_kcp_proxy_help: Convert TCP traffic to KCP traffic to reduce latency and boost transmission speed.
@@ -156,6 +159,12 @@ disable_tcp_hole_punching_help: Disable tcp hole punching
 disable_udp_hole_punching: Disable UDP Hole Punching
 disable_udp_hole_punching_help: Disable udp hole punching
+enable_udp_broadcast_relay: UDP Broadcast Relay
+enable_udp_broadcast_relay_help: "Windows only: capture local UDP broadcast packets from physical interfaces and forward them to EasyTier peers. Helps games to find rooms in local network. Requires administrator privileges."
+disable_upnp: Disable UPnP
+disable_upnp_help: Disable runtime UPnP/NAT-PMP port mapping for eligible listeners; automatic port mapping is enabled by default.

 disable_sym_hole_punching: Disable Symmetric NAT Hole Punching
 disable_sym_hole_punching_help: Disable special hole punching handling for symmetric NAT (based on birthday attack), treat symmetric NAT as cone NAT
@@ -254,6 +263,7 @@ event:
   DhcpIpv4Conflicted: DhcpIpv4Conflicted
   PortForwardAdded: PortForwardAdded
   ProxyCidrsUpdated: ProxyCidrsUpdated
+  UdpBroadcastRelayStartResult: UDP Broadcast Relay Start Result

 web:
   login:
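The relay described by the help text above captures local UDP broadcasts and forwards them to peers. As a hedged illustration only (the real capture path uses Windows raw sockets, and `is_relay_candidate` is a hypothetical helper, not EasyTier's API), the destination-address classification could look like this in stdlib Rust:

```rust
use std::net::Ipv4Addr;

// Hypothetical sketch: decide whether a captured IPv4 destination looks like a
// broadcast worth relaying. Covers the limited broadcast address and a
// subnet-directed broadcast within the local prefix.
fn is_relay_candidate(dst: Ipv4Addr, local_net: Ipv4Addr, prefix_len: u8) -> bool {
    if dst.is_broadcast() {
        return true; // limited broadcast 255.255.255.255
    }
    // subnet-directed broadcast: network bits match, all host bits set
    let mask = if prefix_len == 0 { 0 } else { u32::MAX << (32 - prefix_len) };
    let net = u32::from(local_net) & mask;
    u32::from(dst) & mask == net && u32::from(dst) | mask == u32::MAX
}

fn main() {
    let net = Ipv4Addr::new(192, 168, 1, 0);
    assert!(is_relay_candidate(Ipv4Addr::new(255, 255, 255, 255), net, 24));
    assert!(is_relay_candidate(Ipv4Addr::new(192, 168, 1, 255), net, 24));
    assert!(!is_relay_candidate(Ipv4Addr::new(192, 168, 1, 10), net, 24));
    println!("ok");
}
```

Unicast traffic is deliberately excluded; relaying it would duplicate what the normal peer data path already does.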
@@ -115,6 +115,7 @@ export interface NetworkConfig {
   use_smoltcp?: boolean
   disable_ipv6?: boolean
+  ipv6_public_addr_auto?: boolean
   enable_kcp_proxy?: boolean
   disable_kcp_input?: boolean
   enable_quic_proxy?: boolean
@@ -132,6 +133,8 @@ export interface NetworkConfig {
   disable_encryption?: boolean
   disable_tcp_hole_punching?: boolean
   disable_udp_hole_punching?: boolean
+  disable_upnp?: boolean
+  enable_udp_broadcast_relay?: boolean
   disable_sym_hole_punching?: boolean
   enable_relay_network_whitelist?: boolean
@@ -190,6 +193,7 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
     use_smoltcp: false,
     disable_ipv6: false,
+    ipv6_public_addr_auto: false,
     enable_kcp_proxy: false,
     disable_kcp_input: false,
     enable_quic_proxy: false,
@@ -207,6 +211,8 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
     disable_encryption: false,
     disable_tcp_hole_punching: false,
     disable_udp_hole_punching: false,
+    disable_upnp: false,
+    enable_udp_broadcast_relay: false,
     disable_sym_hole_punching: false,
     enable_relay_network_whitelist: false,
     relay_network_whitelist: [],
@@ -443,4 +449,6 @@ export enum EventType {
   PortForwardAdded = 'PortForwardAdded', // PortForwardConfigPb
   ProxyCidrsUpdated = 'ProxyCidrsUpdated', // string[], string[]
+  UdpBroadcastRelayStartResult = 'UdpBroadcastRelayStartResult', // { capture_backend?: string, error?: string }
 }
@@ -365,6 +365,7 @@ mod tests {
         let _c = WebClient::new(
             connector,
             "test",
+            uuid::Uuid::new_v4(),
             "test",
             false,
             Arc::new(NetworkInstanceManager::new()),
@@ -3,7 +3,7 @@ name = "easytier"
 description = "A full meshed p2p VPN, connecting all your devices in one network with one command."
 homepage = "https://github.com/EasyTier/EasyTier"
 repository = "https://github.com/EasyTier/EasyTier"
-version = "2.6.2"
+version = "2.6.4"
 edition.workspace = true
 rust-version.workspace = true
 authors = ["kkrainbow"]
@@ -50,6 +50,8 @@ time = "0.3"
 toml = "0.8.12"
 chrono = { version = "0.4.37", features = ["serde"] }
+guarden = "0.1"
 delegate = "0.13.5"
 itertools = "0.14.0"
@@ -68,6 +70,7 @@ async-stream = "0.3.5"
 async-trait = "0.1.74"
 dashmap = "6.0"
+moka = { version = "0.12", features = ["future"] }
 timedmap = "=1.0.1"

 # for full-path zero-copy
@@ -191,6 +191,11 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
     )
     .type_attribute("peer_rpc.RouteForeignNetworkSummary", "#[derive(Hash, Eq)]")
     .type_attribute("common.RpcDescriptor", "#[derive(Hash, Eq)]")
+    .type_attribute("acl.Acl", "#[serde(default)]")
+    .type_attribute("acl.AclV1", "#[serde(default)]")
+    .type_attribute("acl.Chain", "#[serde(default)]")
+    .type_attribute("acl.Rule", "#[serde(default)]")
+    .type_attribute("acl.GroupInfo", "#[serde(default)]")
     .field_attribute(".api.manage.NetworkConfig", "#[serde(default)]")
     .service_generator(Box::new(easytier_rpc_build::ServiceGenerator::default()))
     .btree_map(["."])
@@ -12,9 +12,9 @@ core_clap:
     仅用户名:--config-server admin,将使用官方的服务器
   machine_id:
     en: |+
-      the machine id to identify this machine, used for config recovery after disconnection, must be unique and fixed. default is from system.
+      the machine id to identify this machine, used for config recovery after disconnection, must be unique and fixed. by default it is loaded from persisted local state; on first start it may be migrated from system information or generated, then remains fixed.
     zh-CN: |+
-      Web 配置服务器通过 machine id 来识别机器,用于断线重连后的配置恢复,需要保证唯一且固定不变。默认从系统获得
+      Web 配置服务器通过 machine id 来识别机器,用于断线重连后的配置恢复,需要保证唯一且固定不变。默认从本地持久化状态读取;首次启动时可能基于系统信息迁移或生成,之后保持固定不变
   config_file:
     en: "path to the config file, NOTE: the options set by cmdline args will override options in config file"
     zh-CN: "配置文件路径,注意:命令行中的配置的选项会覆盖配置文件中的选项"
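The updated machine-id help text describes a load-or-generate-then-persist pattern. As a minimal stdlib sketch only (function and file names here are illustrative, not EasyTier's actual implementation, and `gen_id` stands in for the real derive/migrate step):

```rust
use std::fs;
use std::path::Path;

// Sketch of the pattern: reuse a persisted id if present so it stays fixed
// across restarts; otherwise generate one once and write it to local state.
fn load_or_create_machine_id(
    state_file: &Path,
    gen_id: impl FnOnce() -> String,
) -> std::io::Result<String> {
    if let Ok(existing) = fs::read_to_string(state_file) {
        let trimmed = existing.trim();
        if !trimmed.is_empty() {
            return Ok(trimmed.to_string()); // already persisted: stays fixed
        }
    }
    let id = gen_id(); // first start: migrate from system info or generate
    if let Some(dir) = state_file.parent() {
        fs::create_dir_all(dir)?;
    }
    fs::write(state_file, &id)?;
    Ok(id)
}

fn main() -> std::io::Result<()> {
    let file = std::env::temp_dir().join("et_machine_id_demo").join("machine_id");
    let _ = fs::remove_file(&file);
    let first = load_or_create_machine_id(&file, || "generated-id-1".into())?;
    // A second load must return the persisted value, not regenerate.
    let second = load_or_create_machine_id(&file, || "generated-id-2".into())?;
    assert_eq!(first, "generated-id-1");
    assert_eq!(first, second);
    println!("ok");
    Ok(())
}
```

Persisting under an app-data directory (rather than deriving from hardware each start) is what lets the config server recognize the machine after reinstalls of the OS-level identifier sources.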
@@ -184,6 +184,9 @@ core_clap:
   disable_upnp:
     en: "disable runtime UPnP/NAT-PMP port mapping for eligible listeners; automatic port mapping is enabled by default"
     zh-CN: "禁用符合条件监听器的运行时 UPnP/NAT-PMP 端口映射;自动端口映射默认开启"
+  enable_udp_broadcast_relay:
+    en: "Windows only: capture local UDP broadcast packets from physical interfaces and forward them to EasyTier peers. Helps games to find rooms in local network. Requires administrator privileges."
+    zh-CN: "仅 Windows:捕获物理网卡上的本机 UDP 广播包并转发给 EasyTier 对等节点,帮助局域网游戏发现房间。需要管理员权限。"
   relay_all_peer_rpc:
     en: "relay all peer rpc packets, even if the peer is not in the relay network whitelist. this can help peers not in relay network whitelist to establish p2p connection."
     zh-CN: "转发所有对等节点的RPC数据包,即使对等节点不在转发网络白名单中。这可以帮助白名单外网络中的对等节点建立P2P连接。"
@@ -274,6 +277,9 @@ core_clap:
   check_config:
     en: Check config validity without starting the network
     zh-CN: 检查配置文件的有效性并退出
+  daemon:
+    en: Run in daemon mode
+    zh-CN: 以守护进程模式运行
   file_log_size_mb:
     en: "per file log size in MB, default is 100MB"
     zh-CN: "单个文件日志大小,单位 MB,默认值为 100MB"
@@ -11,9 +11,8 @@ use windows::{
         NET_FW_RULE_DIR_OUT,
     },
     Networking::WinSock::{
-        IP_UNICAST_IF, IPPROTO_IP, IPPROTO_IPV6, IPV6_UNICAST_IF, SIO_UDP_CONNRESET,
-        SO_EXCLUSIVEADDRUSE, SOCKET, SOCKET_ERROR, SOL_SOCKET, WSAGetLastError, WSAIoctl,
-        htonl, setsockopt,
+        IP_UNICAST_IF, IPPROTO_IP, IPPROTO_IPV6, IPV6_UNICAST_IF, SIO_UDP_CONNRESET, SOCKET,
+        SOCKET_ERROR, WSAGetLastError, WSAIoctl, htonl, setsockopt,
     },
     System::Com::{
         CLSCTX_ALL, COINIT_MULTITHREADED, CoCreateInstance, CoInitializeEx, CoUninitialize,
@@ -137,12 +136,13 @@ pub fn setup_socket_for_win<S: AsRawSocket>(
     }

     let socket = SOCKET(socket.as_raw_socket() as usize);

-    let optval = 1_i32.to_ne_bytes();
-    unsafe {
-        if setsockopt(socket, SOL_SOCKET, SO_EXCLUSIVEADDRUSE, Some(&optval)) == SOCKET_ERROR {
-            return Err(io::Error::last_os_error());
-        }
-    }
+    // let optval = 1_i32.to_ne_bytes();
+    // unsafe {
+    //     if setsockopt(socket, SOL_SOCKET, SO_EXCLUSIVEADDRUSE, Some(&optval)) == SOCKET_ERROR {
+    //         return Err(io::Error::last_os_error());
+    //     }
+    // }

     if let Some(iface) = bind_dev {
         set_ip_unicast_if(socket, bind_addr, &iface)?;
@@ -1339,6 +1339,45 @@ mod tests {
         assert_eq!(result.matched_rule, Some(RuleId::Priority(70)));
     }

+    #[tokio::test]
+    async fn test_forward_acl_source_ip_whitelist() {
+        let mut acl_config = Acl::default();
+        let mut acl_v1 = AclV1::default();
+
+        let mut chain = Chain {
+            name: "subnet_proxy_protect".to_string(),
+            chain_type: ChainType::Forward as i32,
+            enabled: true,
+            default_action: Action::Drop as i32,
+            ..Default::default()
+        };
+        chain.rules.push(Rule {
+            name: "allow_my_devices".to_string(),
+            priority: 1000,
+            enabled: true,
+            action: Action::Allow as i32,
+            protocol: Protocol::Any as i32,
+            source_ips: vec!["10.172.192.2/32".to_string()],
+            ..Default::default()
+        });
+        acl_v1.chains.push(chain);
+        acl_config.acl_v1 = Some(acl_v1);
+
+        let processor = AclProcessor::new(acl_config);
+        let mut packet_info = create_test_packet_info();
+        packet_info.dst_ip = "192.168.1.10".parse().unwrap();
+
+        packet_info.src_ip = "10.172.192.2".parse().unwrap();
+        let result = processor.process_packet(&packet_info, ChainType::Forward);
+        assert_eq!(result.action, Action::Allow);
+        assert_eq!(result.matched_rule, Some(RuleId::Priority(1000)));
+
+        packet_info.src_ip = "10.172.192.3".parse().unwrap();
+        let result = processor.process_packet(&packet_info, ChainType::Forward);
+        assert_eq!(result.action, Action::Drop);
+        assert_eq!(result.matched_rule, Some(RuleId::Default));
+    }
+
     fn create_test_acl_config() -> Acl {
         let mut acl_config = Acl::default();
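The test above exercises first-match ACL evaluation: rules are tried in priority order, and the chain's default action applies when nothing matches. A minimal stdlib sketch of that evaluation model (hypothetical types, not the real `AclProcessor`; IP matching is reduced to exact string comparison for brevity):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Action {
    Allow,
    Drop,
}

struct Rule {
    priority: u32,
    source_ip: &'static str,
    action: Action,
}

// Hypothetical first-match evaluation: rules are checked in descending
// priority; the chain's default action applies when no rule matches.
fn evaluate(rules: &mut Vec<Rule>, default_action: Action, src_ip: &str) -> (Action, Option<u32>) {
    rules.sort_by(|a, b| b.priority.cmp(&a.priority));
    for r in rules.iter() {
        if r.source_ip == src_ip {
            return (r.action, Some(r.priority));
        }
    }
    (default_action, None) // nothing matched: fall back to the default
}

fn main() {
    let mut rules = vec![Rule {
        priority: 1000,
        source_ip: "10.172.192.2",
        action: Action::Allow,
    }];
    // Whitelisted source matches the priority-1000 rule.
    assert_eq!(
        evaluate(&mut rules, Action::Drop, "10.172.192.2"),
        (Action::Allow, Some(1000))
    );
    // Any other source falls through to the Drop default, as in the test above.
    assert_eq!(
        evaluate(&mut rules, Action::Drop, "10.172.192.3"),
        (Action::Drop, None)
    );
    println!("ok");
}
```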
@@ -71,6 +71,8 @@ pub fn gen_default_flags() -> Flags {
         need_p2p: false,
         instance_recv_bps_limit: u64::MAX,
         disable_upnp: false,
+        disable_relay_data: false,
+        enable_udp_broadcast_relay: false,
     }
 }
@@ -1336,6 +1338,71 @@ stun_servers = [
         assert!(err.to_string().contains("mapped listener port is missing"));
     }

+    #[test]
+    fn test_acl_toml_rule_uses_defaults_for_omitted_fields() {
+        use crate::proto::acl::{Action, ChainType, Protocol};
+
+        let config_str = r#"
+[[acl.acl_v1.chains]]
+name = "subnet_proxy_protect"
+chain_type = 3
+enabled = true
+default_action = 2
+
+[[acl.acl_v1.chains.rules]]
+name = "allow_my_devices"
+priority = 1000
+action = 1
+source_ips = ["10.172.192.2/32"]
+protocol = 5
+enabled = true
+"#;
+
+        let config = TomlConfigLoader::new_from_str(config_str).unwrap();
+        let acl = config.get_acl().unwrap();
+        let acl_v1 = acl.acl_v1.unwrap();
+        let chain = &acl_v1.chains[0];
+        let rule = &chain.rules[0];
+
+        assert_eq!(chain.chain_type, ChainType::Forward as i32);
+        assert_eq!(chain.default_action, Action::Drop as i32);
+        assert_eq!(rule.action, Action::Allow as i32);
+        assert_eq!(rule.protocol, Protocol::Any as i32);
+        assert_eq!(rule.source_ips, vec!["10.172.192.2/32"]);
+        assert!(rule.ports.is_empty());
+        assert!(rule.source_ports.is_empty());
+        assert!(rule.destination_ips.is_empty());
+        assert!(rule.source_groups.is_empty());
+        assert!(rule.destination_groups.is_empty());
+        assert_eq!(rule.rate_limit, 0);
+        assert_eq!(rule.burst_limit, 0);
+        assert!(!rule.stateful);
+    }
+
+    #[test]
+    fn test_acl_toml_group_can_omit_declares_or_members() {
+        let declares_only = r#"
+[acl.acl_v1.group]
+
+[[acl.acl_v1.group.declares]]
+group_name = "admin"
+group_secret = "admin-pw"
+"#;
+        let config = TomlConfigLoader::new_from_str(declares_only).unwrap();
+        let group = config.get_acl().unwrap().acl_v1.unwrap().group.unwrap();
+        assert_eq!(group.declares.len(), 1);
+        assert!(group.members.is_empty());
+
+        let members_only = r#"
+[acl.acl_v1.group]
+members = ["admin"]
+"#;
+        let config = TomlConfigLoader::new_from_str(members_only).unwrap();
+        let group = config.get_acl().unwrap().acl_v1.unwrap().group.unwrap();
+        assert!(group.declares.is_empty());
+        assert_eq!(group.members, vec!["admin"]);
+    }
+
     #[test]
     fn test_network_config_source_user_is_implicit() {
         let config = TomlConfigLoader::default();
@@ -23,8 +23,6 @@ define_global_var!(MANUAL_CONNECTOR_RECONNECT_INTERVAL_MS, u64, 1000);
 define_global_var!(OSPF_UPDATE_MY_GLOBAL_FOREIGN_NETWORK_INTERVAL_SEC, u64, 10);

-define_global_var!(MACHINE_UID, Option<String>, None);
-
 define_global_var!(MAX_DIRECT_CONNS_PER_PEER_IN_FOREIGN_NETWORK, u32, 3);
 define_global_var!(DIRECT_CONNECT_TO_PUBLIC_SERVER, bool, true);
@@ -73,16 +73,6 @@ pub async fn socket_addrs(
         .port()
         .or_else(default_port_number)
         .ok_or(Error::InvalidUrl(url.to_string()))?;

-    // See https://github.com/EasyTier/EasyTier/pull/947
-    // here is for compatibility with old version
-    let port = match port {
-        0 => match url.scheme() {
-            "ws" => 80,
-            "wss" => 443,
-            _ => port,
-        },
-        _ => port,
-    };
-
     // if host is an ip address, return it directly
     match host {
@@ -121,9 +111,8 @@ pub async fn socket_addrs(
 #[cfg(test)]
 mod tests {
-    use crate::defer;
     use super::*;
+    use guarden::defer;

     #[tokio::test]
     async fn test_socket_addrs() {
@@ -140,4 +129,23 @@ mod tests {
         assert_eq!(2, addrs.len(), "addrs: {:?}", addrs);
         println!("addrs2: {:?}", addrs);
     }

+    #[tokio::test]
+    async fn socket_addrs_preserves_explicit_zero_port() {
+        let cases = [
+            ("ws://127.0.0.1:0", 80, 0),
+            ("wss://127.0.0.1:0", 443, 0),
+            ("ws://127.0.0.1", 80, 80),
+            ("wss://127.0.0.1", 443, 443),
+        ];
+        for (raw_url, default_port, expected_port) in cases {
+            let url = url::Url::parse(raw_url).unwrap();
+            let addrs = socket_addrs(&url, || Some(default_port)).await.unwrap();
+            assert_eq!(
+                addrs,
+                vec![SocketAddr::from(([127, 0, 0, 1], expected_port))]
+            );
+        }
+    }
 }
@@ -1,5 +1,4 @@
 use std::{io, result};

 use thiserror::Error;

 use crate::tunnel;
@@ -55,4 +54,6 @@ pub enum Error {
 pub type Result<T> = result::Result<T, Error>;
+pub type ErrorCollection = crate::utils::error::ErrorCollection<Error>;

 // impl From for std::
+165 -9
View File
@@ -1,5 +1,5 @@
use std::{
-collections::{HashMap, hash_map::DefaultHasher},
collections::{BTreeSet, HashMap, hash_map::DefaultHasher},
hash::Hasher,
net::{IpAddr, SocketAddr},
sync::{Arc, Mutex},
@@ -77,6 +77,11 @@ pub enum GlobalCtxEvent {
ProxyCidrsUpdated(Vec<cidr::Ipv4Cidr>, Vec<cidr::Ipv4Cidr>), // (added, removed)
UdpBroadcastRelayStartResult {
capture_backend: Option<String>,
error: Option<String>,
},
CredentialChanged,
}
@@ -203,6 +208,7 @@ pub struct GlobalCtx {
cached_ipv4: AtomicCell<Option<cidr::Ipv4Inet>>,
cached_ipv6: AtomicCell<Option<cidr::Ipv6Inet>>,
public_ipv6_lease: AtomicCell<Option<cidr::Ipv6Inet>>,
public_ipv6_routes: Mutex<BTreeSet<std::net::Ipv6Addr>>,
cached_proxy_cidrs: AtomicCell<Option<Vec<ProxyNetworkConfig>>>,
ip_collector: Mutex<Option<Arc<IPCollector>>>,
@@ -216,6 +222,12 @@ pub struct GlobalCtx {
flags: ArcSwap<Flags>,
// Runtime/base advertised feature flags before config-owned fields are
// overlaid by set_flags. Keep this separate so config patches do not erase
// runtime state such as public-server role, IPv6 provider status, or the
// non-whitelist avoid-relay preference.
base_feature_flags: AtomicCell<PeerFeatureFlag>,
feature_flags: AtomicCell<PeerFeatureFlag>,
token_bucket_manager: TokenBucketManager,
@@ -246,8 +258,17 @@ impl std::fmt::Debug for GlobalCtx {
pub type ArcGlobalCtx = std::sync::Arc<GlobalCtx>;
impl GlobalCtx {
-fn derive_feature_flags(flags: &Flags, current: Option<PeerFeatureFlag>) -> PeerFeatureFlag {
-let mut feature_flags = current.unwrap_or_default();
fn apply_disable_relay_data_flag(
flags: &Flags,
mut feature_flags: PeerFeatureFlag,
) -> PeerFeatureFlag {
if flags.disable_relay_data {
feature_flags.avoid_relay_data = true;
}
feature_flags
}
fn derive_feature_flags(flags: &Flags, mut feature_flags: PeerFeatureFlag) -> PeerFeatureFlag {
feature_flags.kcp_input = !flags.disable_kcp_input;
feature_flags.no_relay_kcp = flags.disable_relay_kcp;
feature_flags.support_conn_list_sync = true;
@@ -255,7 +276,7 @@ impl GlobalCtx {
feature_flags.no_relay_quic = flags.disable_relay_quic;
feature_flags.need_p2p = flags.need_p2p;
feature_flags.disable_p2p = flags.disable_p2p;
-feature_flags
Self::apply_disable_relay_data_flag(flags, feature_flags)
}
pub fn new(config_fs: impl ConfigLoader + 'static) -> Self {
@@ -284,7 +305,8 @@ impl GlobalCtx {
let flags = config_fs.get_flags();
-let feature_flags = Self::derive_feature_flags(&flags, None);
let base_feature_flags = PeerFeatureFlag::default();
let feature_flags = Self::derive_feature_flags(&flags, base_feature_flags);
let credential_storage_path = config_fs.get_credential_file();
let credential_manager = Arc::new(CredentialManager::new(credential_storage_path));
@@ -300,6 +322,7 @@ impl GlobalCtx {
cached_ipv4: AtomicCell::new(None),
cached_ipv6: AtomicCell::new(None),
public_ipv6_lease: AtomicCell::new(None),
public_ipv6_routes: Mutex::new(BTreeSet::new()),
cached_proxy_cidrs: AtomicCell::new(None),
ip_collector: Mutex::new(Some(Arc::new(IPCollector::new(
@@ -316,6 +339,8 @@ impl GlobalCtx {
flags: ArcSwap::new(Arc::new(flags)),
base_feature_flags: AtomicCell::new(base_feature_flags),
feature_flags: AtomicCell::new(feature_flags),
token_bucket_manager: TokenBucketManager::new(),
@@ -395,6 +420,11 @@ impl GlobalCtx {
self.public_ipv6_lease.store(addr);
}
pub fn set_public_ipv6_routes(&self, routes: BTreeSet<cidr::Ipv6Inet>) {
*self.public_ipv6_routes.lock().unwrap() =
routes.into_iter().map(|route| route.address()).collect();
}
pub fn is_ip_local_ipv6(&self, ip: &std::net::Ipv6Addr) -> bool {
self.get_ipv6().map(|x| x.address() == *ip).unwrap_or(false)
|| self
@@ -403,6 +433,10 @@ impl GlobalCtx {
.unwrap_or(false)
}
pub fn is_ip_easytier_managed_ipv6(&self, ip: &std::net::Ipv6Addr) -> bool {
self.is_ip_local_ipv6(ip) || self.public_ipv6_routes.lock().unwrap().contains(ip)
}
pub fn get_advertised_ipv6_public_addr_prefix(&self) -> Option<cidr::Ipv6Cidr> {
*self.advertised_ipv6_public_addr_prefix.lock().unwrap()
}
@@ -502,7 +536,7 @@ impl GlobalCtx {
self.config.set_flags(flags.clone());
self.feature_flags.store(Self::derive_feature_flags(
&flags,
-Some(self.feature_flags.load()),
self.base_feature_flags.load(),
));
self.flags.store(Arc::new(flags));
}
@@ -567,8 +601,53 @@ impl GlobalCtx {
self.feature_flags.load()
}
-pub fn set_feature_flags(&self, flags: PeerFeatureFlag) {
-self.feature_flags.store(flags);
-}
/// Replace the runtime/base advertised flags as a complete snapshot.
///
/// This is intended for foreign scoped contexts that inherit an already
/// computed feature-flag snapshot from their parent. Most callers should use
/// a narrower setter so they do not accidentally overwrite unrelated runtime
/// state.
pub fn set_base_advertised_feature_flags(&self, feature_flags: PeerFeatureFlag) {
self.base_feature_flags.store(feature_flags);
let flags = self.flags.load();
self.feature_flags
.store(Self::apply_disable_relay_data_flag(
flags.as_ref(),
feature_flags,
));
}
/// Set the avoid-relay preference that is independent of disable_relay_data.
///
/// disable_relay_data still forces the effective advertised flag to true,
/// but this base preference is preserved when that config flag is toggled.
pub fn set_avoid_relay_data_preference(&self, avoid_relay_data: bool) -> bool {
let mut base_feature_flags = self.base_feature_flags.load();
base_feature_flags.avoid_relay_data = avoid_relay_data;
self.base_feature_flags.store(base_feature_flags);
let mut feature_flags = self.feature_flags.load();
let previous = feature_flags.avoid_relay_data;
feature_flags.avoid_relay_data = avoid_relay_data || self.flags.load().disable_relay_data;
self.feature_flags.store(feature_flags);
previous != feature_flags.avoid_relay_data
}
/// Set the runtime IPv6-provider advertised bit without touching
/// config-derived feature flags.
pub fn set_ipv6_public_addr_provider_feature_flag(&self, enabled: bool) -> bool {
let mut base_feature_flags = self.base_feature_flags.load();
base_feature_flags.ipv6_public_addr_provider = enabled;
self.base_feature_flags.store(base_feature_flags);
let mut feature_flags = self.feature_flags.load();
if feature_flags.ipv6_public_addr_provider == enabled {
return false;
}
feature_flags.ipv6_public_addr_provider = enabled;
self.feature_flags.store(feature_flags);
true
}
pub fn token_bucket_manager(&self) -> &TokenBucketManager {
@@ -785,7 +864,7 @@ pub mod tests {
let mut feature_flags = global_ctx.get_feature_flags();
feature_flags.avoid_relay_data = true;
feature_flags.is_public_server = true;
-global_ctx.set_feature_flags(feature_flags);
global_ctx.set_base_advertised_feature_flags(feature_flags);
let mut flags = global_ctx.get_flags().clone();
flags.disable_kcp_input = true;
@@ -809,6 +888,83 @@ pub mod tests {
assert!(!feature_flags.ipv6_public_addr_provider);
}
#[tokio::test]
async fn set_base_advertised_feature_flags_applies_current_values() {
let config = TomlConfigLoader::default();
let global_ctx = GlobalCtx::new(config);
let feature_flags = PeerFeatureFlag {
kcp_input: false,
no_relay_kcp: true,
quic_input: false,
no_relay_quic: true,
is_public_server: true,
..Default::default()
};
global_ctx.set_base_advertised_feature_flags(feature_flags);
assert_eq!(global_ctx.get_feature_flags(), feature_flags);
}
#[tokio::test]
async fn set_base_advertised_feature_flags_keeps_disable_relay_data_effective() {
let config = TomlConfigLoader::default();
let global_ctx = GlobalCtx::new(config);
let mut flags = global_ctx.get_flags().clone();
flags.disable_relay_data = true;
global_ctx.set_flags(flags);
let mut feature_flags = global_ctx.get_feature_flags();
feature_flags.avoid_relay_data = false;
feature_flags.is_public_server = true;
global_ctx.set_base_advertised_feature_flags(feature_flags);
let advertised_feature_flags = global_ctx.get_feature_flags();
assert!(advertised_feature_flags.avoid_relay_data);
assert!(advertised_feature_flags.is_public_server);
let mut flags = global_ctx.get_flags().clone();
flags.disable_relay_data = false;
global_ctx.set_flags(flags);
let advertised_feature_flags = global_ctx.get_feature_flags();
assert!(!advertised_feature_flags.avoid_relay_data);
assert!(advertised_feature_flags.is_public_server);
}
#[tokio::test]
async fn disable_relay_data_sets_avoid_relay_feature_flag() {
let config = TomlConfigLoader::default();
let global_ctx = GlobalCtx::new(config);
let mut flags = global_ctx.get_flags().clone();
flags.disable_relay_data = true;
global_ctx.set_flags(flags);
assert!(global_ctx.get_feature_flags().avoid_relay_data);
let mut flags = global_ctx.get_flags().clone();
flags.disable_relay_data = false;
global_ctx.set_flags(flags);
assert!(!global_ctx.get_feature_flags().avoid_relay_data);
global_ctx.set_avoid_relay_data_preference(true);
let mut flags = global_ctx.get_flags().clone();
flags.disable_relay_data = true;
global_ctx.set_flags(flags);
assert!(global_ctx.get_feature_flags().avoid_relay_data);
let mut flags = global_ctx.get_flags().clone();
flags.disable_relay_data = false;
global_ctx.set_flags(flags);
assert!(global_ctx.get_feature_flags().avoid_relay_data);
}
#[tokio::test]
async fn should_deny_proxy_for_process_wide_rpc_port() {
protected_port::clear_protected_tcp_ports_for_test();
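The base/effective flag split above can be condensed into a few lines. This is a minimal std-only sketch with stand-in types (`FeatureFlags` and `ConfigFlags` are simplified placeholders for EasyTier's `PeerFeatureFlag` and `Flags`, not the real definitions): the advertised flags are always re-derived from the runtime "base" snapshot plus the current config, so toggling a config flag can never erase runtime-only state.

```rust
// Stand-in types; the real PeerFeatureFlag/Flags have many more fields.
#[derive(Clone, Copy, Debug, PartialEq)]
struct FeatureFlags {
    avoid_relay_data: bool,
    is_public_server: bool, // runtime-only state, never set from config
}

#[derive(Clone, Copy)]
struct ConfigFlags {
    disable_relay_data: bool,
}

// Effective advertised flags = base snapshot overlaid with config-forced bits.
fn effective(base: FeatureFlags, cfg: ConfigFlags) -> FeatureFlags {
    let mut f = base;
    if cfg.disable_relay_data {
        f.avoid_relay_data = true;
    }
    f
}

fn main() {
    let base = FeatureFlags { avoid_relay_data: false, is_public_server: true };

    // disable_relay_data forces the advertised bit on...
    let on = effective(base, ConfigFlags { disable_relay_data: true });
    assert!(on.avoid_relay_data && on.is_public_server);

    // ...but the base preference survives toggling it back off, and the
    // runtime-only is_public_server bit is untouched either way.
    let off = effective(base, ConfigFlags { disable_relay_data: false });
    assert!(!off.avoid_relay_data && off.is_public_server);
}
```

The old `set_feature_flags` mutated the single effective snapshot in place, which is why a later `set_flags` (recomputing from config) could clobber runtime bits; keeping the base separate makes the overlay idempotent.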
+193 -11
View File
@@ -58,6 +58,21 @@ fn parse_env_filter(default_level: Option<LevelFilter>) -> Result<EnvFilter, any
.with_context(|| "failed to create env filter")
}
fn parse_static_filter(level: LevelFilter) -> Result<EnvFilter, anyhow::Error> {
EnvFilter::builder()
.with_default_directive(level.into())
.parse("")
.with_context(|| "failed to create static filter")
}
fn parse_file_filter(level: LevelFilter) -> Result<EnvFilter, anyhow::Error> {
if matches!(level, LevelFilter::OFF) {
parse_static_filter(level)
} else {
parse_env_filter(Some(level))
}
}
fn is_log(meta: &Metadata) -> bool {
meta.target() == LOG_TARGET || meta.target().starts_with(&format!("{LOG_TARGET}::"))
}
@@ -165,14 +180,17 @@ fn file_layers(
) -> anyhow::Result<(Vec<BoxLayer>, Option<NewFilterSender>)> {
let mut layers = Vec::new();
-let level = config.level.map(|s| s.parse().unwrap());
let level = config
.level
.map(|s| s.parse().unwrap())
.unwrap_or(LevelFilter::OFF);
-if matches!(level, Some(LevelFilter::OFF)) && !reload {
if matches!(level, LevelFilter::OFF) && !reload {
return Ok((layers, None));
}
let (file_filter, file_filter_reloader) =
-tracing_subscriber::reload::Layer::<_, Registry>::new(parse_env_filter(level)?);
tracing_subscriber::reload::Layer::<_, Registry>::new(parse_file_filter(level)?);
let layer = |wrapper| {
layer()
@@ -218,9 +236,7 @@ fn file_layers(
// initialize global state
let _ = LOGGER_LEVEL_SENDER.set(std::sync::Mutex::new(tx.clone()));
-if let Some(level) = level {
-let _ = CURRENT_LOG_LEVEL.set(std::sync::Mutex::new(level.to_string()));
-}
let _ = CURRENT_LOG_LEVEL.set(std::sync::Mutex::new(level.to_string()));
std::thread::spawn(move || {
while let Ok(lf) = rx.recv() {
@@ -232,11 +248,7 @@ fn file_layers(
}
};
-let mut new_filter = match EnvFilter::builder()
-.with_default_directive(parsed_level.into())
-.from_env()
-.with_context(|| "failed to create file filter")
-{
let mut new_filter = match parse_file_filter(parsed_level) {
Ok(filter) => Some(filter),
Err(e) => {
error!("Failed to build new log filter for {:?}: {:?}", lf, e);
@@ -268,6 +280,36 @@ mod tests {
use super::*;
use crate::common::config::FileLoggerConfig;
const RUST_LOG: &str = "RUST_LOG";
struct EnvVarGuard {
key: &'static str,
previous: Option<std::ffi::OsString>,
}
impl EnvVarGuard {
fn set(key: &'static str, value: &str) -> Self {
let previous = std::env::var_os(key);
unsafe { std::env::set_var(key, value) };
Self { key, previous }
}
fn unset(key: &'static str) -> Self {
let previous = std::env::var_os(key);
unsafe { std::env::remove_var(key) };
Self { key, previous }
}
}
impl Drop for EnvVarGuard {
fn drop(&mut self) {
match &self.previous {
Some(value) => unsafe { std::env::set_var(self.key, value) },
None => unsafe { std::env::remove_var(self.key) },
}
}
}
#[ctor::ctor]
fn init() {
let _ = Registry::default()
@@ -276,7 +318,147 @@ mod tests {
}
#[test]
fn default_file_logger_level_is_off_without_reload() {
let (layers, sender) = file_layers(FileLoggerConfig::default(), false).unwrap();
assert!(layers.is_empty());
assert!(sender.is_none());
}
#[test]
#[serial_test::serial]
fn default_file_logger_level_filters_info_with_reload() {
let _guard = EnvVarGuard::set(RUST_LOG, "info");
let temp_dir = tempfile::tempdir().unwrap();
let log_file_name = "default-off-test.log".to_string();
let log_path = temp_dir.path().join(&log_file_name);
let cfg = FileLoggerConfig {
file: Some(log_file_name),
dir: Some(temp_dir.path().to_string_lossy().to_string()),
..Default::default()
};
let (layers, _sender) = file_layers(cfg, true).unwrap();
let marker = "default-file-logger-off-marker";
let subscriber = Registry::default().with(layers);
tracing::subscriber::with_default(subscriber, || {
tracing::info!(target: LOG_TARGET, "{}", marker);
std::thread::sleep(std::time::Duration::from_millis(300));
});
let content = std::fs::read_to_string(&log_path).unwrap_or_default();
assert!(
!content.contains(marker),
"default file logger level should filter info logs"
);
}
#[test]
#[serial_test::serial]
fn file_logger_level_uses_env_filter_when_enabled() {
let _guard = EnvVarGuard::set(RUST_LOG, "debug");
let temp_dir = tempfile::tempdir().unwrap();
let log_file_name = "env-filter-test.log".to_string();
let log_path = temp_dir.path().join(&log_file_name);
let cfg = FileLoggerConfig {
level: Some(LevelFilter::INFO.to_string()),
file: Some(log_file_name),
dir: Some(temp_dir.path().to_string_lossy().to_string()),
..Default::default()
};
let (layers, _sender) = file_layers(cfg, true).unwrap();
let marker = "file-logger-env-filter-marker";
let subscriber = Registry::default().with(layers);
tracing::subscriber::with_default(subscriber, || {
tracing::debug!(target: LOG_TARGET, "{}", marker);
std::thread::sleep(std::time::Duration::from_millis(300));
});
let content = std::fs::read_to_string(&log_path).unwrap_or_default();
assert!(
content.contains(marker),
"enabled file logger should use RUST_LOG directives"
);
}
#[test]
#[serial_test::serial]
fn file_logger_reload_uses_env_filter_when_enabled() {
let _guard = EnvVarGuard::set(RUST_LOG, "debug");
let temp_dir = tempfile::tempdir().unwrap();
let log_file_name = "reload-env-filter-test.log".to_string();
let log_path = temp_dir.path().join(&log_file_name);
let cfg = FileLoggerConfig {
file: Some(log_file_name),
dir: Some(temp_dir.path().to_string_lossy().to_string()),
..Default::default()
};
let (layers, sender) = file_layers(cfg, true).unwrap();
let sender = sender.expect("reload=true should return a sender");
let marker = "file-logger-reload-env-filter-marker";
let subscriber = Registry::default().with(layers);
tracing::subscriber::with_default(subscriber, || {
sender.send(LevelFilter::INFO.to_string()).unwrap();
std::thread::sleep(std::time::Duration::from_millis(300));
tracing::debug!(target: LOG_TARGET, "{}", marker);
std::thread::sleep(std::time::Duration::from_millis(300));
});
let content = std::fs::read_to_string(&log_path).unwrap_or_default();
assert!(
content.contains(marker),
"file logger enabled by reload should use RUST_LOG directives"
);
}
#[test]
#[serial_test::serial]
fn file_logger_reload_off_ignores_env_filter() {
let _guard = EnvVarGuard::set(RUST_LOG, "info");
let temp_dir = tempfile::tempdir().unwrap();
let log_file_name = "reload-off-test.log".to_string();
let log_path = temp_dir.path().join(&log_file_name);
let cfg = FileLoggerConfig {
level: Some(LevelFilter::INFO.to_string()),
file: Some(log_file_name),
dir: Some(temp_dir.path().to_string_lossy().to_string()),
..Default::default()
};
let (layers, sender) = file_layers(cfg, true).unwrap();
let sender = sender.expect("reload=true should return a sender");
let marker = "file-logger-reload-off-marker";
let subscriber = Registry::default().with(layers);
tracing::subscriber::with_default(subscriber, || {
sender.send(LevelFilter::OFF.to_string()).unwrap();
std::thread::sleep(std::time::Duration::from_millis(300));
tracing::info!(target: LOG_TARGET, "{}", marker);
std::thread::sleep(std::time::Duration::from_millis(300));
});
let content = std::fs::read_to_string(&log_path).unwrap_or_default();
assert!(
!content.contains(marker),
"disabled file logger should ignore RUST_LOG directives"
);
}
#[test]
#[serial_test::serial]
fn test_logger_reload() {
let _guard = EnvVarGuard::unset(RUST_LOG);
let temp_dir = tempfile::tempdir().unwrap();
let log_file_name = "reload-test.log".to_string();
let log_path = temp_dir.path().join(&log_file_name);
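The core of the logging change is the branch in `parse_file_filter`: an `OFF` level must build a static filter that ignores `RUST_LOG`, while any other level keeps honoring env directives. A minimal sketch of that decision with the filter types replaced by a plain enum (`FilterChoice` and `choose_filter` are illustrative stand-ins; the real code builds `tracing_subscriber::EnvFilter` values):

```rust
// Stand-in for the two ways the real code constructs an EnvFilter.
#[derive(Debug, PartialEq)]
enum FilterChoice {
    // level == OFF: a static filter; RUST_LOG cannot re-enable the logger
    StaticOff,
    // any other level: default directive + RUST_LOG env directives
    EnvWithDefault(&'static str),
}

fn choose_filter(level: &str) -> FilterChoice {
    if level.eq_ignore_ascii_case("off") {
        FilterChoice::StaticOff
    } else {
        FilterChoice::EnvWithDefault("RUST_LOG")
    }
}

fn main() {
    // A disabled file logger must stay disabled even if RUST_LOG=debug is set.
    assert_eq!(choose_filter("off"), FilterChoice::StaticOff);
    // An enabled one keeps honoring env directives on top of its default.
    assert_eq!(choose_filter("info"), FilterChoice::EnvWithDefault("RUST_LOG"));
}
```

This mirrors why the tests above wrap every case in an `EnvVarGuard`: the outcome now depends on both the configured level and the ambient `RUST_LOG` value.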
+596
View File
@@ -0,0 +1,596 @@
use std::{
env,
ffi::OsString,
io::Write as _,
path::{Path, PathBuf},
time::{Duration, Instant},
};
use anyhow::Context as _;
#[cfg(unix)]
use nix::{
errno::Errno,
fcntl::{Flock, FlockArg},
};
#[derive(Debug, Clone, Default)]
pub struct MachineIdOptions {
pub explicit_machine_id: Option<String>,
pub state_dir: Option<PathBuf>,
}
pub fn resolve_machine_id(opts: &MachineIdOptions) -> anyhow::Result<uuid::Uuid> {
if let Some(explicit_machine_id) = opts.explicit_machine_id.as_deref() {
return Ok(parse_or_hash_machine_id(explicit_machine_id));
}
let state_file = resolve_machine_id_state_file(opts.state_dir.as_deref())?;
let allow_legacy_machine_uid_migration =
should_attempt_legacy_machine_uid_migration(&state_file);
if let Some(machine_id) = read_state_machine_id(&state_file)? {
return Ok(machine_id);
}
if let Some(machine_id) = read_legacy_machine_id_file() {
return persist_machine_id(&state_file, machine_id);
}
if allow_legacy_machine_uid_migration
&& let Some(machine_id) = resolve_legacy_machine_uid_hash()
{
return persist_machine_id(&state_file, machine_id);
}
let machine_id = resolve_new_machine_id().unwrap_or_else(uuid::Uuid::new_v4);
persist_machine_id(&state_file, machine_id)
}
fn parse_or_hash_machine_id(raw: &str) -> uuid::Uuid {
if let Ok(mid) = uuid::Uuid::parse_str(raw.trim()) {
return mid;
}
digest_uuid_from_str(raw)
}
fn digest_uuid_from_str(raw: &str) -> uuid::Uuid {
let mut b = [0u8; 16];
crate::tunnel::generate_digest_from_str("", raw, &mut b);
uuid::Uuid::from_bytes(b)
}
fn resolve_machine_id_state_file(state_dir: Option<&Path>) -> anyhow::Result<PathBuf> {
let state_dir = match state_dir {
Some(dir) => dir.to_path_buf(),
None => default_machine_id_state_dir()?,
};
Ok(state_dir.join("machine_id"))
}
fn non_empty_os_string(value: Option<OsString>) -> Option<OsString> {
value.filter(|value| !value.is_empty())
}
#[cfg(target_os = "linux")]
fn default_linux_machine_id_state_dir(
xdg_data_home: Option<OsString>,
home: Option<OsString>,
) -> PathBuf {
if let Some(path) = non_empty_os_string(xdg_data_home) {
return PathBuf::from(path).join("easytier");
}
if let Some(home) = non_empty_os_string(home) {
return PathBuf::from(home)
.join(".local")
.join("share")
.join("easytier");
}
PathBuf::from("/var/lib/easytier")
}
fn default_machine_id_state_dir() -> anyhow::Result<PathBuf> {
cfg_select! {
target_os = "linux" => Ok(default_linux_machine_id_state_dir(
env::var_os("XDG_DATA_HOME"),
env::var_os("HOME"),
)),
all(target_os = "macos", not(feature = "macos-ne")) => {
let home = non_empty_os_string(env::var_os("HOME"))
.ok_or_else(|| anyhow::anyhow!("HOME is not set, cannot resolve machine id state directory"))?;
Ok(PathBuf::from(home)
.join("Library")
.join("Application Support")
.join("com.easytier"))
},
target_os = "windows" => {
let local_app_data = non_empty_os_string(env::var_os("LOCALAPPDATA")).ok_or_else(|| {
anyhow::anyhow!("LOCALAPPDATA is not set, cannot resolve machine id state directory")
})?;
Ok(PathBuf::from(local_app_data).join("easytier"))
},
target_os = "freebsd" => {
let home = non_empty_os_string(env::var_os("HOME"))
.ok_or_else(|| anyhow::anyhow!("HOME is not set, cannot resolve machine id state directory"))?;
Ok(PathBuf::from(home).join(".local").join("share").join("easytier"))
},
target_os = "android" => {
anyhow::bail!("machine id state directory must be provided explicitly on Android");
},
_ => anyhow::bail!("machine id state directory is unsupported on this platform"),
}
}
fn read_state_machine_id(path: &Path) -> anyhow::Result<Option<uuid::Uuid>> {
let Some(contents) = read_optional_file(path)? else {
return Ok(None);
};
let machine_id = uuid::Uuid::parse_str(contents.trim())
.with_context(|| format!("invalid machine id in state file {}", path.display()))?;
Ok(Some(machine_id))
}
fn read_legacy_machine_id_file() -> Option<uuid::Uuid> {
let path = legacy_machine_id_file_path()?;
read_legacy_machine_id_file_at(&path)
}
fn read_legacy_machine_id_file_at(path: &Path) -> Option<uuid::Uuid> {
let contents = match std::fs::read_to_string(path) {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => return None,
Err(err) => {
tracing::warn!(
path = %path.display(),
%err,
"ignoring unreadable legacy machine id file"
);
return None;
}
};
match uuid::Uuid::parse_str(contents.trim()) {
Ok(machine_id) => Some(machine_id),
Err(err) => {
tracing::warn!(
path = %path.display(),
%err,
"ignoring invalid legacy machine id file"
);
None
}
}
}
fn legacy_machine_id_file_path() -> Option<PathBuf> {
std::env::current_exe()
.ok()
.map(|path| path.with_file_name("et_machine_id"))
}
fn read_optional_file(path: &Path) -> anyhow::Result<Option<String>> {
match std::fs::read_to_string(path) {
Ok(contents) => Ok(Some(contents)),
Err(err) if err.kind() == std::io::ErrorKind::NotFound => Ok(None),
Err(err) => Err(err).with_context(|| format!("failed to read {}", path.display())),
}
}
fn should_attempt_legacy_machine_uid_migration(state_file: &Path) -> bool {
let Some(state_dir) = state_file.parent() else {
return false;
};
let Ok(mut entries) = std::fs::read_dir(state_dir) else {
return false;
};
entries.any(|entry| entry.is_ok())
}
fn resolve_legacy_machine_uid_hash() -> Option<uuid::Uuid> {
machine_uid_seed().map(|seed| digest_uuid_from_str(seed.as_str()))
}
fn resolve_new_machine_id() -> Option<uuid::Uuid> {
let seed = machine_uid_seed()?;
#[cfg(target_os = "linux")]
{
let seed = linux_machine_id_seed(&seed);
Some(digest_uuid_from_str(&seed))
}
#[cfg(not(target_os = "linux"))]
{
Some(digest_uuid_from_str(&seed))
}
}
#[cfg(any(
target_os = "linux",
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "windows",
target_os = "freebsd"
))]
fn machine_uid_seed() -> Option<String> {
machine_uid::get()
.ok()
.filter(|value| !value.trim().is_empty())
}
#[cfg(not(any(
target_os = "linux",
all(target_os = "macos", not(feature = "macos-ne")),
target_os = "windows",
target_os = "freebsd"
)))]
fn machine_uid_seed() -> Option<String> {
None
}
#[cfg(target_os = "linux")]
fn linux_machine_id_seed(machine_uid: &str) -> String {
let mut seed = format!("machine_uid={machine_uid}");
let hostname = gethostname::gethostname()
.to_string_lossy()
.trim()
.to_string();
if !hostname.is_empty() {
seed.push_str("\nhostname=");
seed.push_str(&hostname);
}
let mac_addresses = collect_linux_mac_addresses();
if !mac_addresses.is_empty() {
seed.push_str("\nmacs=");
seed.push_str(&mac_addresses.join(","));
}
seed
}
#[cfg(target_os = "linux")]
fn collect_linux_mac_addresses() -> Vec<String> {
let mut macs = Vec::new();
let Ok(entries) = std::fs::read_dir("/sys/class/net") else {
return macs;
};
for entry in entries.flatten() {
let Ok(name) = entry.file_name().into_string() else {
continue;
};
if name == "lo" {
continue;
}
let address_path = entry.path().join("address");
let Ok(address) = std::fs::read_to_string(address_path) else {
continue;
};
let address = address.trim().to_ascii_lowercase();
if address.is_empty() || address == "00:00:00:00:00:00" {
continue;
}
macs.push(address);
}
macs.sort();
macs.dedup();
macs.truncate(3);
macs
}
fn persist_machine_id(path: &Path, machine_id: uuid::Uuid) -> anyhow::Result<uuid::Uuid> {
if let Some(existing) = read_state_machine_id(path)? {
return Ok(existing);
}
let _lock = MachineIdWriteLock::acquire(path)?;
if let Some(existing) = read_state_machine_id(path)? {
return Ok(existing);
}
write_uuid_file_atomically(path, machine_id)?;
Ok(machine_id)
}
fn write_uuid_file_atomically(path: &Path, machine_id: uuid::Uuid) -> anyhow::Result<()> {
let parent = path.parent().ok_or_else(|| {
anyhow::anyhow!(
"machine id state file {} has no parent directory",
path.display()
)
})?;
std::fs::create_dir_all(parent).with_context(|| {
format!(
"failed to create machine id state directory {}",
parent.display()
)
})?;
let tmp_path = parent.join(format!(
".machine_id.tmp-{}-{}",
std::process::id(),
uuid::Uuid::new_v4()
));
{
let mut file = std::fs::OpenOptions::new()
.write(true)
.create_new(true)
.open(&tmp_path)
.with_context(|| format!("failed to create {}", tmp_path.display()))?;
file.write_all(machine_id.to_string().as_bytes())
.with_context(|| format!("failed to write {}", tmp_path.display()))?;
file.sync_all()
.with_context(|| format!("failed to flush {}", tmp_path.display()))?;
}
if let Err(err) = std::fs::rename(&tmp_path, path) {
let _ = std::fs::remove_file(&tmp_path);
return Err(err).with_context(|| {
format!(
"failed to move machine id state file into place at {}",
path.display()
)
});
}
Ok(())
}
struct MachineIdWriteLock {
#[cfg(unix)]
_lock: Flock<std::fs::File>,
#[cfg(not(unix))]
path: PathBuf,
}
impl MachineIdWriteLock {
fn acquire(path: &Path) -> anyhow::Result<Self> {
let parent = path.parent().ok_or_else(|| {
anyhow::anyhow!(
"machine id state file {} has no parent directory",
path.display()
)
})?;
std::fs::create_dir_all(parent).with_context(|| {
format!(
"failed to create machine id state directory {}",
parent.display()
)
})?;
#[cfg(unix)]
{
Self::acquire_unix(path)
}
#[cfg(not(unix))]
{
Self::acquire_fallback(path)
}
}
#[cfg(unix)]
fn acquire_unix(path: &Path) -> anyhow::Result<Self> {
let lock_path = path.with_extension("lock");
let deadline = Instant::now() + Duration::from_secs(5);
let mut lock_file = std::fs::OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(false)
.open(&lock_path)
.with_context(|| format!("failed to open machine id lock {}", lock_path.display()))?;
loop {
match Flock::lock(lock_file, FlockArg::LockExclusiveNonblock) {
Ok(lock) => return Ok(Self { _lock: lock }),
Err((file, Errno::EAGAIN)) => {
if Instant::now() >= deadline {
anyhow::bail!(
"timed out waiting for machine id lock {}",
lock_path.display()
);
}
lock_file = file;
std::thread::sleep(Duration::from_millis(50));
}
Err((_file, err)) => {
anyhow::bail!(
"failed to acquire machine id lock {}: {}",
lock_path.display(),
err
);
}
}
}
}
#[cfg(not(unix))]
fn acquire_fallback(path: &Path) -> anyhow::Result<Self> {
let lock_path = path.with_extension("lock");
let deadline = Instant::now() + Duration::from_secs(5);
loop {
match std::fs::OpenOptions::new()
.write(true)
.create_new(true)
.open(&lock_path)
{
Ok(mut file) => {
writeln!(file, "pid={}", std::process::id()).ok();
return Ok(Self { path: lock_path });
}
Err(err) if err.kind() == std::io::ErrorKind::AlreadyExists => {
if should_reap_stale_lock_file(&lock_path) {
let _ = std::fs::remove_file(&lock_path);
continue;
}
if Instant::now() >= deadline {
anyhow::bail!(
"timed out waiting for machine id lock {}",
lock_path.display()
);
}
std::thread::sleep(Duration::from_millis(50));
}
Err(err) => {
return Err(err).with_context(|| {
format!("failed to acquire machine id lock {}", lock_path.display())
});
}
}
}
}
}
#[cfg(not(unix))]
fn should_reap_stale_lock_file(lock_path: &Path) -> bool {
const STALE_LOCK_AGE: Duration = Duration::from_secs(30);
let Ok(metadata) = std::fs::metadata(lock_path) else {
return false;
};
let Ok(modified) = metadata.modified() else {
return false;
};
modified
.elapsed()
.is_ok_and(|elapsed| elapsed >= STALE_LOCK_AGE)
}
impl Drop for MachineIdWriteLock {
fn drop(&mut self) {
#[cfg(not(unix))]
let _ = std::fs::remove_file(&self.path);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_resolve_machine_id_uses_uuid_seed_verbatim() {
let raw = "33333333-3333-3333-3333-333333333333".to_string();
let opts = MachineIdOptions {
explicit_machine_id: Some(raw.clone()),
state_dir: None,
};
assert_eq!(
resolve_machine_id(&opts).unwrap(),
uuid::Uuid::parse_str(&raw).unwrap()
);
}
#[test]
fn test_resolve_machine_id_reads_state_file() {
let temp_dir = tempfile::tempdir().unwrap();
let expected = uuid::Uuid::new_v4();
std::fs::write(temp_dir.path().join("machine_id"), expected.to_string()).unwrap();
let opts = MachineIdOptions {
explicit_machine_id: None,
state_dir: Some(temp_dir.path().to_path_buf()),
};
assert_eq!(resolve_machine_id(&opts).unwrap(), expected);
}
#[test]
fn test_read_legacy_machine_id_file_ignores_read_errors() {
let temp_dir = tempfile::tempdir().unwrap();
assert_eq!(read_legacy_machine_id_file_at(temp_dir.path()), None);
}
#[test]
fn test_write_uuid_file_atomically_writes_expected_contents() {
let temp_dir = tempfile::tempdir().unwrap();
let machine_id = uuid::Uuid::new_v4();
let state_file = temp_dir.path().join("machine_id");
write_uuid_file_atomically(&state_file, machine_id).unwrap();
assert_eq!(
std::fs::read_to_string(state_file).unwrap(),
machine_id.to_string()
);
}
#[test]
fn test_non_empty_os_string_filters_empty_values() {
assert_eq!(non_empty_os_string(Some(OsString::new())), None);
assert_eq!(
non_empty_os_string(Some(OsString::from("foo"))),
Some(OsString::from("foo"))
);
}
#[cfg(target_os = "linux")]
#[test]
fn test_default_linux_machine_id_state_dir_falls_back_in_order() {
assert_eq!(
default_linux_machine_id_state_dir(
Some(OsString::from("/tmp/xdg")),
Some(OsString::from("/tmp/home"))
),
PathBuf::from("/tmp/xdg").join("easytier")
);
assert_eq!(
default_linux_machine_id_state_dir(
Some(OsString::new()),
Some(OsString::from("/tmp/home"))
),
PathBuf::from("/tmp/home")
.join(".local")
.join("share")
.join("easytier")
);
assert_eq!(
default_linux_machine_id_state_dir(Some(OsString::new()), Some(OsString::new())),
PathBuf::from("/var/lib/easytier")
);
}
#[test]
fn test_persist_machine_id_creates_missing_state_dir() {
let temp_dir = tempfile::tempdir().unwrap();
let state_file = temp_dir.path().join("nested").join("machine_id");
let machine_id = uuid::Uuid::new_v4();
assert_eq!(
persist_machine_id(&state_file, machine_id).unwrap(),
machine_id
);
assert_eq!(
std::fs::read_to_string(state_file).unwrap(),
machine_id.to_string()
);
}
#[test]
fn test_legacy_machine_uid_migration_requires_existing_state_dir_content() {
let temp_dir = tempfile::tempdir().unwrap();
let missing_state_file = temp_dir.path().join("missing").join("machine_id");
assert!(!should_attempt_legacy_machine_uid_migration(
&missing_state_file
));
let empty_dir = temp_dir.path().join("empty");
std::fs::create_dir_all(&empty_dir).unwrap();
assert!(!should_attempt_legacy_machine_uid_migration(
&empty_dir.join("machine_id")
));
std::fs::write(empty_dir.join("config.toml"), "x=1").unwrap();
assert!(should_attempt_legacy_machine_uid_migration(
&empty_dir.join("machine_id")
));
}
}
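The fallback lock path above relies on `create_new` (O_EXCL-style) file creation for atomicity and on file age for stale-lock reaping. A std-only sketch of that pattern — `try_take_lock` and `is_stale` are hypothetical names for illustration, not the crate's API:

```rust
use std::fs::OpenOptions;
use std::io::ErrorKind;
use std::path::Path;
use std::time::{Duration, SystemTime};

// Try to take the lock once: create_new fails with AlreadyExists when the
// file is present, which is the atomicity the fallback path relies on.
fn try_take_lock(lock_path: &Path) -> std::io::Result<bool> {
    match OpenOptions::new().write(true).create_new(true).open(lock_path) {
        Ok(_) => Ok(true),
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(false),
        Err(e) => Err(e),
    }
}

// A lock file older than `stale_age` is assumed abandoned and removable,
// mirroring should_reap_stale_lock_file above.
fn is_stale(lock_path: &Path, stale_age: Duration) -> bool {
    std::fs::metadata(lock_path)
        .and_then(|m| m.modified())
        .ok()
        .and_then(|t| SystemTime::now().duration_since(t).ok())
        .is_some_and(|elapsed| elapsed >= stale_age)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("et_lock_demo");
    std::fs::create_dir_all(&dir)?;
    let lock = dir.join("machine_id.lock");
    let _ = std::fs::remove_file(&lock);

    assert!(try_take_lock(&lock)?); // first taker wins
    assert!(!try_take_lock(&lock)?); // second taker sees AlreadyExists
    assert!(!is_stale(&lock, Duration::from_secs(30))); // freshly created
    std::fs::remove_file(&lock)?; // equivalent of the Drop impl
    assert!(try_take_lock(&lock)?); // reacquirable after release
    std::fs::remove_file(&lock)
}
```

The key property is that only one process can win `create_new` on a given path; every other contender sees `AlreadyExists` and either reaps a stale file or retries until the deadline.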
+3 -76
@@ -1,15 +1,12 @@
 use std::{
     fmt::Debug,
     future,
-    io::Write as _,
     sync::{Arc, Mutex},
 };

 use time::util::refresh_tz;
 use tokio::{task::JoinSet, time::timeout};
 use tracing::Instrument;

-use crate::{set_global_var, use_global_var};
-
 pub mod acl_processor;
 pub mod compressor;
 pub mod config;
@@ -21,6 +18,7 @@ pub mod global_ctx;
 pub mod idn;
 pub mod ifcfg;
 pub mod log;
+pub mod machine_id;
 pub mod netns;
 pub mod network;
 pub mod os_info;
@@ -31,6 +29,8 @@ pub mod token_bucket;
 pub mod tracing_rolling_appender;
 pub mod upnp;

+pub use machine_id::{MachineIdOptions, resolve_machine_id};
+
 pub fn get_logger_timer<F: time::formatting::Formattable>(
     format: F,
 ) -> tracing_subscriber::fmt::time::OffsetTime<F> {
@@ -96,71 +96,6 @@ pub fn join_joinset_background<T: Debug + Send + Sync + 'static>(
     );
 }

-pub fn set_default_machine_id(mid: Option<String>) {
-    set_global_var!(MACHINE_UID, mid);
-}
-
-pub fn get_machine_id() -> uuid::Uuid {
-    if let Some(default_mid) = use_global_var!(MACHINE_UID) {
-        if let Ok(mid) = uuid::Uuid::parse_str(default_mid.trim()) {
-            return mid;
-        }
-        let mut b = [0u8; 16];
-        crate::tunnel::generate_digest_from_str("", &default_mid, &mut b);
-        return uuid::Uuid::from_bytes(b);
-    }
-
-    // a path same as the binary
-    let machine_id_file = std::env::current_exe()
-        .map(|x| x.with_file_name("et_machine_id"))
-        .unwrap_or_else(|_| std::path::PathBuf::from("et_machine_id"));
-
-    // try load from local file
-    if let Ok(mid) = std::fs::read_to_string(&machine_id_file)
-        && let Ok(mid) = uuid::Uuid::parse_str(mid.trim())
-    {
-        return mid;
-    }
-
-    #[cfg(any(
-        target_os = "linux",
-        all(target_os = "macos", not(feature = "macos-ne")),
-        target_os = "windows",
-        target_os = "freebsd"
-    ))]
-    let gen_mid = machine_uid::get()
-        .map(|x| {
-            if x.is_empty() {
-                return uuid::Uuid::new_v4();
-            }
-            let mut b = [0u8; 16];
-            crate::tunnel::generate_digest_from_str("", x.as_str(), &mut b);
-            uuid::Uuid::from_bytes(b)
-        })
-        .ok();
-    #[cfg(not(any(
-        target_os = "linux",
-        all(target_os = "macos", not(feature = "macos-ne")),
-        target_os = "windows",
-        target_os = "freebsd"
-    )))]
-    let gen_mid = None;
-
-    if let Some(mid) = gen_mid {
-        return mid;
-    }
-
-    let gen_mid = uuid::Uuid::new_v4();
-    // try save to local file
-    if let Ok(mut file) = std::fs::File::create(machine_id_file) {
-        let _ = file.write_all(gen_mid.to_string().as_bytes());
-    }
-    gen_mid
-}
-
 pub fn shrink_dashmap<K: Eq + std::hash::Hash, V>(
     map: &dashmap::DashMap<K, V>,
     threshold: Option<usize>,
@@ -210,12 +145,4 @@ mod tests {
         assert_eq!(weak_js.weak_count(), 0);
         assert_eq!(weak_js.strong_count(), 0);
     }
-
-    #[test]
-    fn test_get_machine_id_uses_uuid_seed_verbatim() {
-        let raw = "33333333-3333-3333-3333-333333333333".to_string();
-        set_default_machine_id(Some(raw.clone()));
-        assert_eq!(get_machine_id(), uuid::Uuid::parse_str(&raw).unwrap());
-        set_default_machine_id(None);
-    }
 }
+22
@@ -85,6 +85,15 @@ pub enum MetricName {
     /// Traffic packets forwarded for foreign network, forward
     TrafficPacketsForeignForwardForwarded,

+    /// UDP broadcast relay packets captured from the raw socket
+    UdpBroadcastRelayPacketsCaptured,
+    /// UDP broadcast relay packets ignored before forwarding
+    UdpBroadcastRelayPacketsIgnored,
+    /// UDP broadcast relay packets forwarded
+    UdpBroadcastRelayPacketsForwarded,
+    /// UDP broadcast relay packets that failed to forward
+    UdpBroadcastRelayPacketsForwardFailed,
+
     /// Compression bytes before compression
     CompressionBytesRxBefore,
     /// Compression bytes after compression
@@ -167,6 +176,19 @@ impl fmt::Display for MetricName {
                 write!(f, "traffic_packets_foreign_forward_forwarded")
             }
+            MetricName::UdpBroadcastRelayPacketsCaptured => {
+                write!(f, "udp_broadcast_relay_packets_captured")
+            }
+            MetricName::UdpBroadcastRelayPacketsIgnored => {
+                write!(f, "udp_broadcast_relay_packets_ignored")
+            }
+            MetricName::UdpBroadcastRelayPacketsForwarded => {
+                write!(f, "udp_broadcast_relay_packets_forwarded")
+            }
+            MetricName::UdpBroadcastRelayPacketsForwardFailed => {
+                write!(f, "udp_broadcast_relay_packets_forward_failed")
+            }
             MetricName::CompressionBytesRxBefore => write!(f, "compression_bytes_rx_before"),
             MetricName::CompressionBytesRxAfter => write!(f, "compression_bytes_rx_after"),
             MetricName::CompressionBytesTxBefore => write!(f, "compression_bytes_tx_before"),
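The new relay counters follow the crate's variant-to-snake-case convention for metric names. A reduced, self-contained stand-in for that enum/Display pairing (`RelayMetric` is illustrative, not the crate's actual type):

```rust
use std::fmt;

// Reduced stand-in for MetricName: each variant maps to a stable
// snake_case metric string via Display.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RelayMetric {
    PacketsCaptured,
    PacketsIgnored,
    PacketsForwarded,
    PacketsForwardFailed,
}

impl fmt::Display for RelayMetric {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let name = match self {
            RelayMetric::PacketsCaptured => "udp_broadcast_relay_packets_captured",
            RelayMetric::PacketsIgnored => "udp_broadcast_relay_packets_ignored",
            RelayMetric::PacketsForwarded => "udp_broadcast_relay_packets_forwarded",
            RelayMetric::PacketsForwardFailed => "udp_broadcast_relay_packets_forward_failed",
        };
        f.write_str(name)
    }
}

fn main() {
    // to_string goes through Display, so the metric name is the single
    // source of truth for both logging and export.
    assert_eq!(
        RelayMetric::PacketsCaptured.to_string(),
        "udp_broadcast_relay_packets_captured"
    );
}
```

Keeping the string mapping inside `Display` means every exporter and log line agrees on one spelling per variant.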
+70 -30
@@ -64,6 +64,24 @@ async fn resolve_mapped_listener_addrs(listener: &url::Url) -> Result<Vec<Socket
     socket_addrs(listener, || mapped_listener_port(listener)).await
 }

+fn is_usable_public_ipv6_candidate(ip: &Ipv6Addr, global_ctx: &ArcGlobalCtx) -> bool {
+    is_usable_public_ipv6_candidate_with_mode(ip, global_ctx, TESTING.load(Ordering::Relaxed))
+}
+
+fn is_usable_public_ipv6_candidate_with_mode(
+    ip: &Ipv6Addr,
+    global_ctx: &ArcGlobalCtx,
+    testing: bool,
+) -> bool {
+    !global_ctx.is_ip_easytier_managed_ipv6(ip)
+        && (testing
+            || (!ip.is_loopback()
+                && !ip.is_unspecified()
+                && !ip.is_unique_local()
+                && !ip.is_unicast_link_local()
+                && !ip.is_multicast()))
+}
+
 #[async_trait::async_trait]
 pub trait PeerManagerForDirectConnector {
     async fn list_peers(&self) -> Vec<PeerId>;
@@ -190,34 +208,28 @@ impl DirectConnectorManagerData {
                 .with_context(|| format!("failed to bind local socket for {}", remote_url))?,
         );
         let connector_ip = self
-            .peer_manager
-            .get_global_ctx()
+            .global_ctx
             .get_stun_info_collector()
             .get_stun_info()
             .public_ip
             .iter()
-            .find(|x| x.contains(':'))
-            .ok_or(anyhow::anyhow!(
-                "failed to get public ipv6 address from stun info"
-            ))?
-            .parse::<Ipv6Addr>()
-            .with_context(|| {
-                format!(
-                    "failed to parse public ipv6 address from stun info: {:?}",
-                    self.peer_manager
-                        .get_global_ctx()
-                        .get_stun_info_collector()
-                        .get_stun_info()
-                )
-            })?;
-        let connector_addr =
-            SocketAddr::new(IpAddr::V6(connector_ip), local_socket.local_addr()?.port());
+            .filter_map(|ip| ip.parse::<Ipv6Addr>().ok())
+            .find(|ip| !self.global_ctx.is_ip_easytier_managed_ipv6(ip));

         // ask remote to send v6 hole punch packet
         // and no matter what the result is, continue to connect
-        let _ = self
-            .remote_send_udp_hole_punch_packet(dst_peer_id, connector_addr, remote_url)
-            .await;
+        if let Some(connector_ip) = connector_ip {
+            let connector_addr =
+                SocketAddr::new(IpAddr::V6(connector_ip), local_socket.local_addr()?.port());
+            let _ = self
+                .remote_send_udp_hole_punch_packet(dst_peer_id, connector_addr, remote_url)
+                .await;
+        } else {
+            tracing::debug!(
+                ?remote_url,
+                "skip remote IPv6 hole-punch packet; no non-EasyTier public IPv6 in STUN info"
+            );
+        }

         let udp_connector = UdpTunnelConnector::new(remote_url.clone());
         let remote_addr = SocketAddr::from_url(remote_url.clone(), IpVersion::V6).await?;
@@ -479,14 +491,7 @@ impl DirectConnectorManagerData {
             .iter()
             .chain(ip_list.public_ipv6.iter())
             .filter_map(|x| Ipv6Addr::from_str(&x.to_string()).ok())
-            .filter(|x| {
-                TESTING.load(Ordering::Relaxed)
-                    || (!x.is_loopback()
-                        && !x.is_unspecified()
-                        && !x.is_unique_local()
-                        && !x.is_unicast_link_local()
-                        && !x.is_multicast())
-            })
+            .filter(|x| is_usable_public_ipv6_candidate(x, &self.global_ctx))
             .collect::<HashSet<_>>()
             .iter()
             .for_each(|ip| {
@@ -515,6 +520,11 @@ impl DirectConnectorManagerData {
                     );
                 }
             });
+        } else if self.global_ctx.is_ip_easytier_managed_ipv6(s_addr.ip()) {
+            tracing::debug!(
+                ?listener,
+                "skip EasyTier-managed IPv6 as direct-connect target"
+            );
         } else if !s_addr.ip().is_loopback() || TESTING.load(Ordering::Relaxed) {
             if self
                 .global_ctx
@@ -790,9 +800,10 @@ impl DirectConnectorManager {
 #[cfg(test)]
 mod tests {
-    use std::sync::Arc;
+    use std::{collections::BTreeSet, sync::Arc};

     use crate::{
+        common::global_ctx::tests::get_mock_global_ctx,
         connector::direct::{
             DirectConnectorManager, DirectConnectorManagerData, DstListenerUrlBlackListItem,
         },
@@ -802,12 +813,41 @@ mod tests {
             wait_route_appear_with_cost,
         },
         proto::peer_rpc::GetIpListResponse,
+        tunnel::{IpScheme, TunnelScheme, matches_scheme},
     };
     use std::net::{IpAddr, Ipv4Addr, SocketAddr};

     use super::{TESTING, mapped_listener_port, resolve_mapped_listener_addrs};

+    #[tokio::test]
+    async fn public_ipv6_candidate_rejects_easytier_managed_addr_even_in_tests() {
+        let global_ctx = get_mock_global_ctx();
+        let managed_ipv6: cidr::Ipv6Inet = "2001:db8::2/128".parse().unwrap();
+        global_ctx.set_public_ipv6_routes(BTreeSet::from([managed_ipv6]));
+
+        assert!(!super::is_usable_public_ipv6_candidate_with_mode(
+            &"2001:db8::2".parse().unwrap(),
+            &global_ctx,
+            true,
+        ));
+        assert!(super::is_usable_public_ipv6_candidate_with_mode(
+            &"::1".parse().unwrap(),
+            &global_ctx,
+            true,
+        ));
+    }
+
+    #[test]
+    fn udp_ipv6_url_matches_hole_punch_branch_condition() {
+        let remote_url: url::Url = "udp://[2001:db8::1]:11010".parse().unwrap();
+        let takes_udp_ipv6_hole_punch_branch =
+            matches_scheme!(remote_url, TunnelScheme::Ip(IpScheme::Udp))
+                && matches!(remote_url.host(), Some(url::Host::Ipv6(_)));
+        assert!(takes_udp_ipv6_hole_punch_branch);
+    }
+
     #[test]
     fn mapped_listener_port_uses_ip_scheme_defaults() {
         assert_eq!(
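The candidate filter above rejects loopback, unspecified, unique-local (fc00::/7), unicast link-local (fe80::/10), and multicast addresses unless tests opt in, while the EasyTier-managed check is never bypassed. A self-contained sketch that re-derives the two non-trivial checks from raw address segments — names are illustrative, and the managed check is abstracted to a plain bool:

```rust
use std::net::Ipv6Addr;

// Unique-local addresses are fc00::/7: the top 7 bits are 1111110.
fn is_unique_local(ip: &Ipv6Addr) -> bool {
    (ip.segments()[0] & 0xfe00) == 0xfc00
}

// Unicast link-local addresses are fe80::/10: the top 10 bits are 1111111010.
fn is_unicast_link_local(ip: &Ipv6Addr) -> bool {
    (ip.segments()[0] & 0xffc0) == 0xfe80
}

// Stand-in for is_usable_public_ipv6_candidate_with_mode, with the
// EasyTier-managed lookup abstracted into `managed`.
fn usable_candidate(ip: &Ipv6Addr, managed: bool, testing: bool) -> bool {
    !managed
        && (testing
            || (!ip.is_loopback()
                && !ip.is_unspecified()
                && !is_unique_local(ip)
                && !is_unicast_link_local(ip)
                && !ip.is_multicast()))
}

fn main() {
    let global: Ipv6Addr = "2001:db8::1".parse().unwrap();
    assert!(usable_candidate(&global, false, false));
    // A managed address is rejected even when testing mode relaxes the rest.
    assert!(!usable_candidate(&global, true, true));
    assert!(!usable_candidate(&"fe80::1".parse().unwrap(), false, false));
    assert!(!usable_candidate(&"fd00::1".parse().unwrap(), false, false));
}
```

Short-circuiting the managed check first is what makes the test's "even_in_tests" property hold: `testing` only relaxes the scope checks, never the overlay check.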
+162 -52
@@ -1,6 +1,8 @@
 use std::{
     collections::BTreeSet,
+    future::Future,
     sync::{Arc, Weak},
+    time::{Duration, Instant},
 };

 use dashmap::DashSet;
@@ -16,7 +18,7 @@ use crate::{
         },
         rpc_types::{self, controller::BaseController},
     },
-    tunnel::{IpVersion, TunnelConnector},
+    tunnel::{IpVersion, TunnelConnector, TunnelScheme, matches_scheme},
     utils::weak_upgrade,
 };
@@ -83,6 +85,55 @@ impl ManualConnectorManager {
         ret
     }

+    fn reconnect_timeout(dead_url: &url::Url) -> Duration {
+        let use_long_timeout = matches_scheme!(
+            dead_url,
+            TunnelScheme::Http | TunnelScheme::Https | TunnelScheme::Txt | TunnelScheme::Srv
+        ) || matches!(dead_url.scheme(), "ws" | "wss");
+        Duration::from_secs(if use_long_timeout { 20 } else { 2 })
+    }
+
+    fn remaining_budget(started_at: Instant, total_timeout: Duration) -> Option<Duration> {
+        let remaining = total_timeout.checked_sub(started_at.elapsed())?;
+        (!remaining.is_zero()).then_some(remaining)
+    }
+
+    fn emit_connect_error(
+        data: &ConnectorManagerData,
+        dead_url: &url::Url,
+        ip_version: IpVersion,
+        error: &Error,
+    ) {
+        data.global_ctx.issue_event(GlobalCtxEvent::ConnectError(
+            dead_url.to_string(),
+            format!("{:?}", ip_version),
+            format!("{:#?}", error),
+        ));
+    }
+
+    fn reconnect_timeout_error(stage: &str, duration: Duration) -> Error {
+        Error::AnyhowError(anyhow::anyhow!("{} timeout after {:?}", stage, duration))
+    }
+
+    async fn with_reconnect_timeout<T, F>(
+        stage: &'static str,
+        started_at: Instant,
+        total_timeout: Duration,
+        fut: F,
+    ) -> Result<T, Error>
+    where
+        F: Future<Output = Result<T, Error>>,
+    {
+        let remaining = Self::remaining_budget(started_at, total_timeout)
+            .ok_or_else(|| Self::reconnect_timeout_error(stage, started_at.elapsed()))?;
+        timeout(remaining, fut)
+            .await
+            .map_err(|_| Self::reconnect_timeout_error(stage, remaining))?
+    }
+}
+
+impl ManualConnectorManager {
     pub fn add_connector<T>(&self, connector: T)
     where
         T: TunnelConnector + 'static,
@@ -242,11 +293,18 @@ impl ManualConnectorManager {
     async fn conn_reconnect_with_ip_version(
         data: Arc<ConnectorManagerData>,
-        dead_url: String,
+        dead_url: url::Url,
         ip_version: IpVersion,
+        started_at: Instant,
+        total_timeout: Duration,
     ) -> Result<ReconnResult, Error> {
-        let connector =
-            create_connector_by_url(&dead_url, &data.global_ctx.clone(), ip_version).await?;
+        let connector = Self::with_reconnect_timeout(
+            "resolve",
+            started_at,
+            total_timeout,
+            create_connector_by_url(dead_url.as_str(), &data.global_ctx, ip_version),
+        )
+        .await?;
         data.global_ctx
             .issue_event(GlobalCtxEvent::Connecting(connector.remote_url()));
@@ -257,10 +315,25 @@ impl ManualConnectorManager {
             )));
         };

-        let (peer_id, conn_id) = pm.try_direct_connect(connector).await?;
+        let tunnel = Self::with_reconnect_timeout(
+            "connect",
+            started_at,
+            total_timeout,
+            pm.connect_tunnel(connector),
+        )
+        .await?;
+
+        let (peer_id, conn_id) = Self::with_reconnect_timeout(
+            "handshake",
+            started_at,
+            total_timeout,
+            pm.add_client_tunnel_with_peer_id_hint(tunnel, true, None),
+        )
+        .await?;

         tracing::info!("reconnect succ: {} {} {}", peer_id, conn_id, dead_url);
         Ok(ReconnResult {
-            dead_url,
+            dead_url: dead_url.to_string(),
             peer_id,
             conn_id,
         })
@@ -273,22 +346,33 @@ impl ManualConnectorManager {
         tracing::info!("reconnect: {}", dead_url);
         let mut ip_versions = vec![];

-        if dead_url.scheme() == "ring" || dead_url.scheme() == "txt" || dead_url.scheme() == "srv" {
+        if matches_scheme!(
+            dead_url,
+            TunnelScheme::Ring | TunnelScheme::Txt | TunnelScheme::Srv
+        ) {
             ip_versions.push(IpVersion::Both);
         } else {
-            let converted_dead_url = crate::common::idn::convert_idn_to_ascii(dead_url.clone())?;
-            let addrs = match socket_addrs(&converted_dead_url, || Some(1000)).await {
+            let converted_dead_url =
+                match crate::common::idn::convert_idn_to_ascii(dead_url.clone()) {
+                    Ok(url) => url,
+                    Err(error) => {
+                        let error: Error = error.into();
+                        Self::emit_connect_error(&data, &dead_url, IpVersion::Both, &error);
+                        return Err(error);
+                    }
+                };
+            let addrs = match Self::with_reconnect_timeout(
+                "resolve",
+                Instant::now(),
+                Self::reconnect_timeout(&dead_url),
+                socket_addrs(&converted_dead_url, || Some(1000)),
+            )
+            .await
+            {
                 Ok(addrs) => addrs,
-                Err(e) => {
-                    data.global_ctx.issue_event(GlobalCtxEvent::ConnectError(
-                        dead_url.to_string(),
-                        format!("{:?}", IpVersion::Both),
-                        format!("{:?}", e),
-                    ));
-                    return Err(Error::AnyhowError(anyhow::anyhow!(
-                        "get ip from url failed: {:?}",
-                        e
-                    )));
+                Err(error) => {
+                    Self::emit_connect_error(&data, &dead_url, IpVersion::Both, &error);
+                    return Err(error);
                 }
             };
             tracing::info!(?addrs, ?dead_url, "get ip from url done");
@@ -313,46 +397,24 @@ impl ManualConnectorManager {
             "cannot get ip from url"
         )));

         for ip_version in ip_versions {
-            let use_long_timeout = dead_url.scheme() == "http"
-                || dead_url.scheme() == "https"
-                || dead_url.scheme() == "ws"
-                || dead_url.scheme() == "wss"
-                || dead_url.scheme() == "txt"
-                || dead_url.scheme() == "srv";
-            let ret = timeout(
-                // allow http/websocket connector to wait longer
-                std::time::Duration::from_secs(if use_long_timeout { 20 } else { 2 }),
-                Self::conn_reconnect_with_ip_version(
-                    data.clone(),
-                    dead_url.to_string(),
-                    ip_version,
-                ),
+            let started_at = Instant::now();
+            let ret = Self::conn_reconnect_with_ip_version(
+                data.clone(),
+                dead_url.clone(),
+                ip_version,
+                started_at,
+                Self::reconnect_timeout(&dead_url),
             )
             .await;
             tracing::info!("reconnect: {} done, ret: {:?}", dead_url, ret);

             match ret {
-                Ok(Ok(_)) => {
-                    // both outer and inner succeeded: unwrap and break
-                    reconn_ret = ret.unwrap();
-                    break;
-                }
-                Ok(Err(e)) => {
-                    // outer succeeded, inner failed
-                    reconn_ret = Err(e);
-                }
-                Err(e) => {
-                    // outer timed out
-                    reconn_ret = Err(e.into());
+                Ok(result) => return Ok(result),
+                Err(error) => {
+                    Self::emit_connect_error(&data, &dead_url, ip_version, &error);
+                    reconn_ret = Err(error);
                 }
             }
-
-            // emit the event (only runs when we did not break)
-            data.global_ctx.issue_event(GlobalCtxEvent::ConnectError(
-                dead_url.to_string(),
-                format!("{:?}", ip_version),
-                format!("{:?}", reconn_ret),
-            ));
         }

         reconn_ret
@@ -388,6 +450,54 @@ mod tests {
     use super::*;

+    #[tokio::test]
+    async fn reconnect_timeout_reports_exhausted_budget_for_stage() {
+        let started_at = Instant::now() - Duration::from_millis(50);
+        let err = ManualConnectorManager::with_reconnect_timeout(
+            "resolve",
+            started_at,
+            Duration::from_millis(1),
+            async { Ok::<(), Error>(()) },
+        )
+        .await
+        .unwrap_err();
+        let message = err.to_string();
+        assert!(message.contains("resolve timeout after"));
+    }
+
+    #[tokio::test]
+    async fn reconnect_timeout_reports_stage_timeout_with_remaining_budget() {
+        let err = ManualConnectorManager::with_reconnect_timeout(
+            "handshake",
+            Instant::now(),
+            Duration::from_millis(10),
+            async {
+                tokio::time::sleep(Duration::from_millis(50)).await;
+                Ok::<(), Error>(())
+            },
+        )
+        .await
+        .unwrap_err();
+        let message = err.to_string();
+        assert!(message.contains("handshake timeout after"));
+    }
+
+    #[tokio::test]
+    async fn reconnect_timeout_preserves_success_within_budget() {
+        let result = ManualConnectorManager::with_reconnect_timeout(
+            "connect",
+            Instant::now(),
+            Duration::from_millis(50),
+            async { Ok::<_, Error>(123_u32) },
+        )
+        .await
+        .unwrap();
+        assert_eq!(result, 123);
+    }
+
     #[tokio::test]
     async fn test_reconnect_with_connecting_addr() {
         set_global_var!(MANUAL_CONNECTOR_RECONNECT_INTERVAL_MS, 1);
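The staged-timeout machinery above hinges on one piece of arithmetic: a single budget measured from `started_at` that every stage draws down. A std-only, synchronous sketch of that helper — it mirrors `remaining_budget` by name, while the tokio `timeout` wrapping is omitted:

```rust
use std::time::{Duration, Instant};

// How much of the total reconnect budget is left for the next stage,
// or None once it is exhausted (the caller then reports a per-stage
// "timeout after" error instead of starting the stage at all).
fn remaining_budget(started_at: Instant, total: Duration) -> Option<Duration> {
    let remaining = total.checked_sub(started_at.elapsed())?;
    (!remaining.is_zero()).then_some(remaining)
}

fn main() {
    let start = Instant::now();
    // Plenty of budget left right after starting.
    assert!(remaining_budget(start, Duration::from_secs(20)).is_some());

    // After the budget elapses, the helper returns None.
    std::thread::sleep(Duration::from_millis(5));
    assert!(remaining_budget(start, Duration::from_millis(1)).is_none());
}
```

Because every stage is clamped to the same shrinking budget, a slow DNS resolve cannot starve the connect and handshake stages past the overall deadline, and each stage failure names itself ("resolve", "connect", "handshake") in the error.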
+180 -15
@@ -1,19 +1,17 @@
-use std::{
-    net::{SocketAddr, SocketAddrV4, SocketAddrV6},
-    sync::Arc,
-};
+use std::net::{IpAddr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6};

 use crate::{
-    common::{error::Error, global_ctx::ArcGlobalCtx, idn, network::IPCollector},
+    common::{dns::socket_addrs, error::Error, global_ctx::ArcGlobalCtx, idn},
     connector::dns_connector::DnsTunnelConnector,
     proto::common::PeerFeatureFlag,
     tunnel::{
-        self, FromUrl, IpScheme, IpVersion, TunnelConnector, TunnelError, TunnelScheme,
+        self, IpScheme, IpVersion, TunnelConnector, TunnelError, TunnelScheme,
         ring::RingTunnelConnector, tcp::TcpTunnelConnector, udp::UdpTunnelConnector,
     },
     utils::BoxExt,
 };
 use http_connector::HttpTunnelConnector;
+use rand::seq::SliceRandom;

 pub mod direct;
 pub mod manual;
@@ -56,7 +54,7 @@
 async fn set_bind_addr_for_peer_connector(
     connector: &mut (impl TunnelConnector + ?Sized),
     is_ipv4: bool,
-    ip_collector: &Arc<IPCollector>,
+    global_ctx: &ArcGlobalCtx,
 ) {
     if cfg!(any(
         target_os = "android",
@@ -69,7 +67,7 @@
         return;
     }

-    let ips = ip_collector.collect_ip_addrs().await;
+    let ips = global_ctx.get_ip_collector().collect_ip_addrs().await;
     if is_ipv4 {
         let mut bind_addrs = vec![];
         for ipv4 in ips.interface_ipv4s {
@@ -80,7 +78,11 @@
     } else {
         let mut bind_addrs = vec![];
         for ipv6 in ips.interface_ipv6s.iter().chain(ips.public_ipv6.iter()) {
-            let socket_addr = SocketAddrV6::new(std::net::Ipv6Addr::from(*ipv6), 0, 0, 0).into();
+            let ipv6 = std::net::Ipv6Addr::from(*ipv6);
+            if global_ctx.is_ip_easytier_managed_ipv6(&ipv6) {
+                continue;
+            }
+            let socket_addr = SocketAddrV6::new(ipv6, 0, 0, 0).into();
             bind_addrs.push(socket_addr);
         }
         connector.set_bind_addrs(bind_addrs);
@@ -88,6 +90,144 @@
     let _ = connector;
 }

+struct ResolvedConnectorAddr {
+    addr: SocketAddr,
+    ip_version: IpVersion,
+}
+
+fn connector_default_port(url: &url::Url) -> Option<u16> {
+    url.try_into()
+        .ok()
+        .and_then(|s: TunnelScheme| s.try_into().ok())
+        .map(IpScheme::default_port)
+}
+
+fn addr_matches_ip_version(addr: &SocketAddr, ip_version: IpVersion) -> bool {
+    match ip_version {
+        IpVersion::V4 => addr.is_ipv4(),
+        IpVersion::V6 => addr.is_ipv6(),
+        IpVersion::Both => true,
+    }
+}
+
+fn infer_effective_ip_version(addrs: &[SocketAddr], requested_ip_version: IpVersion) -> IpVersion {
+    match requested_ip_version {
+        IpVersion::Both if addrs.iter().all(SocketAddr::is_ipv4) => IpVersion::V4,
+        IpVersion::Both if addrs.iter().all(SocketAddr::is_ipv6) => IpVersion::V6,
+        _ => requested_ip_version,
+    }
+}
+
+async fn easytier_managed_ipv6_source_for_dst(
+    global_ctx: &ArcGlobalCtx,
+    dst_addr: SocketAddrV6,
+) -> Result<Option<Ipv6Addr>, Error> {
+    let socket = {
+        let _g = global_ctx.net_ns.guard();
+        tokio::net::UdpSocket::bind("[::]:0").await?
+    };
+    socket.connect(SocketAddr::V6(dst_addr)).await?;
+    let IpAddr::V6(local_ip) = socket.local_addr()?.ip() else {
+        return Ok(None);
+    };
+    Ok(global_ctx
+        .is_ip_easytier_managed_ipv6(&local_ip)
+        .then_some(local_ip))
+}
+
+async fn ipv6_connector_reject_reason(
+    url: &url::Url,
+    global_ctx: &ArcGlobalCtx,
+    v6_addr: SocketAddrV6,
+    skip_source_validation_errors: bool,
+) -> Result<Option<String>, Error> {
+    if global_ctx.is_ip_easytier_managed_ipv6(v6_addr.ip()) {
+        return Ok(Some(format!(
+            "{} resolves to EasyTier-managed IPv6 {}",
+            url,
+            v6_addr.ip()
+        )));
+    }
+    match easytier_managed_ipv6_source_for_dst(global_ctx, v6_addr).await {
+        Ok(Some(local_ip)) => Ok(Some(format!(
+            "{} would use EasyTier-managed IPv6 {} as local source for {}",
+            url, local_ip, v6_addr
+        ))),
+        Ok(None) => Ok(None),
+        Err(err) if skip_source_validation_errors => Ok(Some(format!(
+            "{} IPv6 candidate {} could not be validated: {}",
+            url, v6_addr, err
+        ))),
+        Err(err) => Err(err),
+    }
+}
+
+async fn resolve_connector_socket_addr(
+    url: &url::Url,
+    global_ctx: &ArcGlobalCtx,
+    ip_version: IpVersion,
+) -> Result<ResolvedConnectorAddr, Error> {
+    let addrs = socket_addrs(url, || connector_default_port(url))
+        .await
+        .map_err(|e| {
+            TunnelError::InvalidAddr(format!(
+                "failed to resolve socket addr, url: {}, error: {}",
+                url, e
+            ))
+        })?;
+
+    let mut usable_addrs = Vec::new();
+    let mut rejected_ipv6_reason = None;
+    let skip_source_validation_errors = ip_version == IpVersion::Both;
+    for addr in addrs
+        .into_iter()
+        .filter(|addr| addr_matches_ip_version(addr, ip_version))
+    {
+        if let SocketAddr::V6(v6_addr) = addr
+            && let Some(reason) = ipv6_connector_reject_reason(
+                url,
+                global_ctx,
+                v6_addr,
+                skip_source_validation_errors,
+            )
+            .await?
+        {
+            rejected_ipv6_reason = Some(reason);
+            continue;
+        }
+        usable_addrs.push(addr);
+    }
+
+    if usable_addrs.is_empty() {
+        if let Some(reason) = rejected_ipv6_reason {
+            return Err(Error::InvalidUrl(format!(
+                "{}, refusing overlay-backed underlay connection",
+                reason
+            )));
+        }
+        return Err(Error::TunnelError(TunnelError::NoDnsRecordFound(
+            ip_version,
+        )));
+    }
+
+    let effective_ip_version = infer_effective_ip_version(&usable_addrs, ip_version);
+    let addr = usable_addrs
+        .choose(&mut rand::thread_rng())
+        .copied()
+        .ok_or_else(|| Error::TunnelError(TunnelError::NoDnsRecordFound(ip_version)))?;
+    Ok(ResolvedConnectorAddr {
+        addr,
+        ip_version: effective_ip_version,
+    })
+}
+
 pub async fn create_connector_by_url(
     url: &str,
     global_ctx: &ArcGlobalCtx,
@@ -98,9 +238,11 @@
     let scheme = (&url)
         .try_into()
         .map_err(|_| TunnelError::InvalidProtocol(url.scheme().to_owned()))?;
+    let mut effective_connector_ip_version = ip_version;
     let mut connector: Box<dyn TunnelConnector + 'static> = match scheme {
         TunnelScheme::Ip(scheme) => {
-            let dst_addr = SocketAddr::from_url(url.clone(), ip_version).await?;
+            let resolved_addr = resolve_connector_socket_addr(&url, global_ctx, ip_version).await?;
+            effective_connector_ip_version = resolved_addr.ip_version;
             let mut connector: Box<dyn TunnelConnector> = match scheme {
                 IpScheme::Tcp => TcpTunnelConnector::new(url).boxed(),
                 IpScheme::Udp => UdpTunnelConnector::new(url).boxed(),
@@ -125,11 +267,12 @@
                 #[cfg(feature = "faketcp")]
                 IpScheme::FakeTcp => tunnel::fake_tcp::FakeTcpTunnelConnector::new(url).boxed(),
             };
+            connector.set_resolved_addr(resolved_addr.addr);
             if global_ctx.config.get_flags().bind_device {
                 set_bind_addr_for_peer_connector(
                     &mut connector,
-                    dst_addr.is_ipv4(),
-                    &global_ctx.get_ip_collector(),
+                    resolved_addr.addr.is_ipv4(),
+                    global_ctx,
                 )
                 .await;
             }
@@ -151,16 +294,38 @@
             DnsTunnelConnector::new(url, global_ctx.clone()).boxed()
         }
     };
-    connector.set_ip_version(ip_version);
+    connector.set_ip_version(effective_connector_ip_version);
     Ok(connector)
 }

 #[cfg(test)]
 mod tests {
-    use crate::proto::common::PeerFeatureFlag;
+    use std::collections::BTreeSet;

-    use super::{should_background_p2p_with_peer, should_try_p2p_with_peer};
+    use crate::{
+        common::global_ctx::tests::get_mock_global_ctx, proto::common::PeerFeatureFlag,
+        tunnel::IpVersion,
+    };
+
+    use super::{
+        create_connector_by_url, should_background_p2p_with_peer, should_try_p2p_with_peer,
+    };
+
+    #[tokio::test]
+    async fn connector_rejects_easytier_managed_ipv6_destination() {
+        let global_ctx = get_mock_global_ctx();
+        let public_route: cidr::Ipv6Inet = "2001:db8::2/128".parse().unwrap();
+        global_ctx.set_public_ipv6_routes(BTreeSet::from([public_route]));
+
+        let ret =
+            create_connector_by_url("tcp://[2001:db8::2]:11010", &global_ctx, IpVersion::V6).await;
+        assert!(matches!(
+            ret,
+            Err(crate::common::error::Error::InvalidUrl(_))
+        ));
+    }
+
     #[test]
     fn lazy_background_p2p_requires_need_p2p() {
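`resolve_connector_socket_addr` narrows a requested `IpVersion::Both` down to one family when every surviving address agrees. A minimal sketch of that inference, with a local `IpVersion` stand-in rather than the crate's enum:

```rust
use std::net::SocketAddr;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum IpVersion {
    V4,
    V6,
    Both,
}

// Mirrors infer_effective_ip_version: narrow `Both` when all resolved
// addresses share one family, otherwise keep the request as-is.
fn infer_effective(addrs: &[SocketAddr], requested: IpVersion) -> IpVersion {
    match requested {
        IpVersion::Both if addrs.iter().all(SocketAddr::is_ipv4) => IpVersion::V4,
        IpVersion::Both if addrs.iter().all(SocketAddr::is_ipv6) => IpVersion::V6,
        _ => requested,
    }
}

fn main() {
    let v4: SocketAddr = "127.0.0.1:11010".parse().unwrap();
    let v6: SocketAddr = "[::1]:11010".parse().unwrap();
    assert_eq!(infer_effective(&[v4], IpVersion::Both), IpVersion::V4);
    assert_eq!(infer_effective(&[v6], IpVersion::Both), IpVersion::V6);
    // Mixed results leave the request untouched.
    assert_eq!(infer_effective(&[v4, v6], IpVersion::Both), IpVersion::Both);
    // An explicit request is never widened.
    assert_eq!(infer_effective(&[v6], IpVersion::V4), IpVersion::V4);
}
```

This is why the connector later calls `set_ip_version(effective_connector_ip_version)`: a `Both` request that resolved to only IPv4 records should not leave the tunnel attempting IPv6.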
+41 -17
@@ -6,6 +6,7 @@ use std::{
use crossbeam::atomic::AtomicCell; use crossbeam::atomic::AtomicCell;
use dashmap::{DashMap, DashSet}; use dashmap::{DashMap, DashSet};
use guarden::defer;
use rand::seq::SliceRandom as _; use rand::seq::SliceRandom as _;
use tokio::{net::UdpSocket, sync::Mutex, task::JoinSet}; use tokio::{net::UdpSocket, sync::Mutex, task::JoinSet};
use tracing::{Instrument, Level, instrument}; use tracing::{Instrument, Level, instrument};
@@ -15,7 +16,6 @@ use crate::{
common::{ common::{
PeerId, error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS, upnp, PeerId, error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS, upnp,
}, },
defer,
peers::peer_manager::PeerManager, peers::peer_manager::PeerManager,
proto::common::NatType, proto::common::NatType,
tunnel::{ tunnel::{
@@ -719,25 +719,31 @@ async fn check_udp_socket_local_addr(
 ) -> Result<(), Error> {
     let socket = UdpSocket::bind("0.0.0.0:0").await?;
     socket.connect(remote_mapped_addr).await?;
-    if let Ok(local_addr) = socket.local_addr() {
-        // local_addr should not be equal to an EasyTier-managed virtual/public address.
-        match local_addr.ip() {
-            IpAddr::V4(ip) => {
-                if global_ctx.get_ipv4().map(|ip| ip.address()) == Some(ip) {
-                    return Err(anyhow::anyhow!("local address is virtual ipv4").into());
-                }
-            }
-            IpAddr::V6(ip) => {
-                if global_ctx.is_ip_local_ipv6(&ip) {
-                    return Err(anyhow::anyhow!("local address is easytier-managed ipv6").into());
-                }
-            }
-        }
+    if let Ok(local_addr) = socket.local_addr()
+        && let Some(err) = easytier_managed_local_addr_error(&global_ctx, local_addr)
+    {
+        return Err(anyhow::anyhow!(err).into());
     }
     Ok(())
 }
+
+fn easytier_managed_local_addr_error(
+    global_ctx: &ArcGlobalCtx,
+    local_addr: SocketAddr,
+) -> Option<&'static str> {
+    // local_addr should not be equal to an EasyTier-managed virtual/public address.
+    match local_addr.ip() {
+        IpAddr::V4(ip) if global_ctx.get_ipv4().map(|ip| ip.address()) == Some(ip) => {
+            Some("local address is virtual ipv4")
+        }
+        IpAddr::V6(ip) if global_ctx.is_ip_easytier_managed_ipv6(&ip) => {
+            Some("local address is easytier-managed ipv6")
+        }
+        _ => None,
+    }
+}
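The hunk above extracts the address check into a pure helper returning `Option<&'static str>`, which is what makes it unit-testable. A standalone sketch of the same guard-based `match` shape, with plain `HashSet`s standing in for the `ArcGlobalCtx` lookups (the set-based context is an assumption for illustration only):

```rust
use std::collections::HashSet;
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr};

// Standalone analogue of `easytier_managed_local_addr_error`: the real
// helper consults ArcGlobalCtx; here the managed addresses are plain sets.
fn managed_local_addr_error(
    virtual_v4: &HashSet<Ipv4Addr>,
    managed_v6: &HashSet<Ipv6Addr>,
    local_addr: SocketAddr,
) -> Option<&'static str> {
    match local_addr.ip() {
        IpAddr::V4(ip) if virtual_v4.contains(&ip) => Some("local address is virtual ipv4"),
        IpAddr::V6(ip) if managed_v6.contains(&ip) => {
            Some("local address is easytier-managed ipv6")
        }
        _ => None,
    }
}

fn main() {
    let v4: HashSet<Ipv4Addr> = ["10.126.126.1".parse().unwrap()].into();
    let v6: HashSet<Ipv6Addr> = ["2001:db8::4".parse().unwrap()].into();
    // A socket bound to an EasyTier-managed address is rejected...
    assert_eq!(
        managed_local_addr_error(&v4, &v6, "10.126.126.1:9000".parse().unwrap()),
        Some("local address is virtual ipv4")
    );
    // ...while an unrelated local address passes the check.
    assert_eq!(
        managed_local_addr_error(&v4, &v6, "8.8.8.8:9000".parse().unwrap()),
        None
    );
}
```

Returning the error string instead of constructing the `Err` inside keeps the helper free of the `anyhow` error type and lets the test below assert on plain values.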
 pub(crate) async fn try_connect_with_socket(
     global_ctx: ArcGlobalCtx,
     socket: Arc<UdpSocket>,
@@ -763,11 +769,29 @@ pub(crate) async fn try_connect_with_socket(
 #[cfg(test)]
 mod tests {
+    use std::{collections::BTreeSet, net::SocketAddr};
+
+    use crate::common::global_ctx::tests::get_mock_global_ctx;
+
     use super::{
-        MAX_PUBLIC_UDP_HOLE_PUNCH_LISTENERS, should_create_public_listener,
-        should_retry_public_listener_selection,
+        MAX_PUBLIC_UDP_HOLE_PUNCH_LISTENERS, easytier_managed_local_addr_error,
+        should_create_public_listener, should_retry_public_listener_selection,
     };
+
+    #[tokio::test]
+    async fn local_addr_check_rejects_easytier_public_ipv6_route() {
+        let global_ctx = get_mock_global_ctx();
+        let public_route: cidr::Ipv6Inet = "2001:db8::4/128".parse().unwrap();
+        global_ctx.set_public_ipv6_routes(BTreeSet::from([public_route]));
+        let local_addr: SocketAddr = "[2001:db8::4]:1234".parse().unwrap();
+        assert_eq!(
+            easytier_managed_local_addr_error(&global_ctx, local_addr),
+            Some("local address is easytier-managed ipv6")
+        );
+    }
+
     #[test]
     fn listener_selection_prefers_reuse_before_cap() {
         assert!(!should_create_public_listener(1, true, true, false, false));
@@ -9,6 +9,7 @@ use std::{
 };
 use anyhow::Context;
+use guarden::defer;
 use rand::{Rng, seq::SliceRandom};
 use tokio::{net::UdpSocket, sync::RwLock};
 use tokio_util::task::AbortOnDropHandle;
@@ -22,7 +23,6 @@ use crate::{
         },
         handle_rpc_result,
     },
-    defer,
     peers::peer_manager::PeerManager,
     proto::{
         peer_rpc::{
+17 -2
@@ -12,7 +12,6 @@ use crate::{
         constants::EASYTIER_VERSION,
         log,
     },
-    defer,
     instance_manager::NetworkInstanceManager,
     launcher::add_proxy_network_to_config,
     proto::common::{CompressionAlgoPb, SecureModeConfig},
@@ -23,6 +22,7 @@ use crate::{
 use anyhow::Context;
 use cidr::IpCidr;
 use clap::{CommandFactory, Parser};
+use guarden::defer;
 use rust_i18n::t;
 use std::{
     net::{IpAddr, SocketAddr},
@@ -484,6 +484,15 @@ struct NetworkOptions {
     )]
     disable_upnp: Option<bool>,
+    #[arg(
+        long,
+        env = "ET_ENABLE_UDP_BROADCAST_RELAY",
+        help = t!("core_clap.enable_udp_broadcast_relay").to_string(),
+        num_args = 0..=1,
+        default_missing_value = "true"
+    )]
+    enable_udp_broadcast_relay: Option<bool>,
+
     #[arg(
         long,
         env = "ET_RELAY_ALL_PEER_RPC",
@@ -1142,6 +1151,9 @@ impl NetworkOptions {
             .disable_sym_hole_punching
             .unwrap_or(f.disable_sym_hole_punching);
         f.disable_upnp = self.disable_upnp.unwrap_or(f.disable_upnp);
+        f.enable_udp_broadcast_relay = self
+            .enable_udp_broadcast_relay
+            .unwrap_or(f.enable_udp_broadcast_relay);
         // Configure tld_dns_zone: use provided value if set
         if let Some(tld_dns_zone) = &self.tld_dns_zone {
             f.tld_dns_zone = tld_dns_zone.clone();
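Every flag in this hunk follows the same precedence rule: the `Option<bool>` captured from the CLI/env (`None` when the flag is absent) overrides the config-file value only when explicitly given. A minimal sketch of that rule (the `merge_flag` name is illustrative, not from the codebase):

```rust
// CLI-over-file override: an explicitly given flag (Some) wins,
// otherwise the config-file default stands.
fn merge_flag(cli: Option<bool>, file_default: bool) -> bool {
    cli.unwrap_or(file_default)
}

fn main() {
    // `--enable-udp-broadcast-relay` given on the CLI: CLI wins.
    assert!(merge_flag(Some(true), false));
    // Flag absent: the config file value stands.
    assert!(!merge_flag(None, false));
}
```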
@@ -1336,7 +1348,10 @@ async fn run_main(cli: Cli) -> anyhow::Result<()> {
     let _web_client = if let Some(config_server_url_s) = cli.config_server.as_ref() {
         let wc = web_client::run_web_client(
             config_server_url_s,
-            cli.machine_id.clone(),
+            crate::common::MachineIdOptions {
+                explicit_machine_id: cli.machine_id.clone(),
+                state_dir: None,
+            },
             cli.network_options.hostname.clone(),
             cli.network_options.secure_mode.unwrap_or(false),
             manager.clone(),
+63 -9
@@ -74,7 +74,7 @@ use easytier::{
         common::{NatType, PortForwardConfigPb, SocketType},
         peer_rpc::{GetGlobalPeerMapRequest, PeerCenterRpc, PeerCenterRpcClientFactory},
         rpc_impl::standalone::StandAloneClient,
-        rpc_types::controller::BaseController,
+        rpc_types::{controller::BaseController, error::Error as RpcError},
     },
     tunnel::{TunnelScheme, tcp::TcpTunnelConnector},
     utils::{PeerRoutePair, string::cost_to_str},
@@ -193,8 +193,11 @@ struct PeerArgs {
 #[derive(Subcommand, Debug)]
 enum PeerSubCommand {
+    /// List connected peers
     List,
+    /// Show public IPv6 address information
     Ipv6,
+    /// List foreign networks discovered by this instance
     ListForeign {
         #[arg(
             long,
@@ -203,6 +206,7 @@ enum PeerSubCommand {
         )]
         trusted_keys: bool,
     },
+    /// List global foreign networks from the peer center
     ListGlobalForeign,
 }
@@ -214,16 +218,18 @@ struct RouteArgs {
 #[derive(Subcommand, Debug)]
 enum RouteSubCommand {
+    /// List routes propagated by peers
     List,
+    /// Dump routes in CIDR format
     Dump,
 }

 #[derive(Args, Debug)]
 struct ConnectorArgs {
-    #[arg(short, long)]
+    #[arg(short, long, help = "filter connectors by virtual IPv4 address")]
     ipv4: Option<String>,
-    #[arg(short, long)]
+    #[arg(short, long, help = "filter connectors by peer URL")]
     peers: Vec<String>,
     #[command(subcommand)]
@@ -242,6 +248,7 @@ enum ConnectorSubCommand {
         #[arg(help = "connector url, e.g., tcp://1.2.3.4:11010")]
         url: String,
     },
+    /// List connectors
     List,
 }
@@ -283,6 +290,7 @@ struct AclArgs {
 #[derive(Subcommand, Debug)]
 enum AclSubCommand {
+    /// Show ACL rule hit statistics
     Stats,
 }
@@ -450,19 +458,25 @@ struct InstallArgs {
     #[arg(long, default_value = env!("CARGO_PKG_DESCRIPTION"), help = "service description")]
     description: String,
-    #[arg(long)]
+    #[arg(long, help = "display name shown by the service manager")]
     display_name: Option<String>,
-    #[arg(long)]
+    #[arg(
+        long,
+        help = "whether to disable starting the service automatically on boot (true/false)"
+    )]
     disable_autostart: Option<bool>,
-    #[arg(long)]
+    #[arg(
+        long,
+        help = "whether to disable automatic restart when the service fails (true/false)"
+    )]
     disable_restart_on_failure: Option<bool>,
     #[arg(long, help = "path to easytier-core binary")]
     core_path: Option<PathBuf>,
-    #[arg(long)]
+    #[arg(long, help = "working directory for the easytier-core service")]
     service_work_dir: Option<PathBuf>,
     #[arg(
@@ -526,6 +540,40 @@ type LocalBoxFuture<'a, T> = Pin<Box<dyn Future<Output = Result<T, Error>> + 'a>
 type ForeignNetworkMap = BTreeMap<String, ForeignNetworkEntryPb>;
 type GlobalForeignNetworkMap = BTreeMap<u32, list_global_foreign_network_response::ForeignNetworks>;
fn is_missing_web_client_service(error: &RpcError) -> bool {
matches!(
error,
RpcError::InvalidServiceKey(service_name, _)
if service_name.trim_matches('"') == "WebClientService"
)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn missing_web_client_service_matches_raw_service_name() {
let error = RpcError::InvalidServiceKey("WebClientService".to_string(), "".to_string());
assert!(is_missing_web_client_service(&error));
}
#[test]
fn missing_web_client_service_matches_serialized_service_name() {
let error = RpcError::InvalidServiceKey("\"WebClientService\"".to_string(), "".to_string());
assert!(is_missing_web_client_service(&error));
}
#[test]
fn missing_web_client_service_rejects_other_services() {
let error = RpcError::InvalidServiceKey("PeerManageRpc".to_string(), "".to_string());
assert!(!is_missing_web_client_service(&error));
}
}
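The predicate added above normalizes the service key before comparing, because the key may arrive either raw or JSON-serialized (wrapped in `"`). The string handling in isolation, with `matches_service` as an illustrative stand-in for the real predicate:

```rust
// Mirror of the string comparison inside `is_missing_web_client_service`:
// strip any surrounding '"' before comparing, so both the raw and the
// serialized form of the service key match.
fn matches_service(service_name: &str) -> bool {
    service_name.trim_matches('"') == "WebClientService"
}

fn main() {
    assert!(matches_service("WebClientService"));
    assert!(matches_service("\"WebClientService\""));
    assert!(!matches_service("PeerManageRpc"));
}
```

`str::trim_matches` removes the character from both ends only, so a quote embedded in the middle of a name still causes a mismatch, which is the desired behavior here.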
 #[derive(serde::Serialize)]
 struct PeerListData {
     node_info: NodeInfo,
@@ -599,9 +647,15 @@ impl<'a> CommandHandler<'a> {
         }
         let client = self.get_manage_client().await?;
-        let inst_ids = client
+        let list_response = match client
             .list_network_instance(BaseController::default(), ListNetworkInstanceRequest {})
-            .await?
+            .await
+        {
+            Ok(response) => response,
+            Err(error) if is_missing_web_client_service(&error) => return Ok(None),
+            Err(error) => return Err(error.into()),
+        };
+        let inst_ids = list_response
             .inst_ids
             .into_iter()
             .map(uuid::Uuid::from)
+47 -60
@@ -4,9 +4,10 @@ use std::{
     time::Duration,
 };
-use anyhow::Context;
+use anyhow::{Context, anyhow, bail};
 use bytes::Bytes;
 use dashmap::DashMap;
+use guarden::defer;
 use kcp_sys::{
     endpoint::{ConnId, KcpEndpoint, KcpPacketReceiver},
     ffi_safe::KcpConfig,
@@ -14,12 +15,13 @@ use kcp_sys::{
     stream::KcpStream,
 };
 use prost::Message;
-use tokio::{select, task::JoinSet};
+use tokio::task::JoinSet;

 use super::{
     CidrSet,
     tcp_proxy::{NatDstConnector, NatDstTcpConnector, TcpProxy},
 };
+use crate::utils::task::HedgeExt;
 use crate::{
     common::{
         acl_processor::PacketInfo,
@@ -113,72 +115,57 @@ pub struct NatDstKcpConnector {
 impl NatDstConnector for NatDstKcpConnector {
     type DstStream = KcpStream;

-    async fn connect(&self, src: SocketAddr, nat_dst: SocketAddr) -> Result<Self::DstStream> {
+    async fn connect(
+        &self,
+        src: SocketAddr,
+        nat_dst: SocketAddr,
+    ) -> anyhow::Result<Self::DstStream> {
+        let peer_mgr = self
+            .peer_mgr
+            .upgrade()
+            .ok_or_else(|| anyhow!("peer manager is not available"))?;
+        let dst_peer = {
+            let SocketAddr::V4(addr) = nat_dst else {
+                bail!("ipv6 is not supported");
+            };
+            peer_mgr
+                .get_peer_map()
+                .get_peer_id_by_ipv4(addr.ip())
+                .await
+                .ok_or_else(|| anyhow!("no peer found for nat dst: {}", nat_dst))?
+        };
+        tracing::trace!(?nat_dst, ?dst_peer, "kcp nat");
+
         let conn_data = KcpConnData {
             src: Some(src.into()),
             dst: Some(nat_dst.into()),
         };
-        let Some(peer_mgr) = self.peer_mgr.upgrade() else {
-            return Err(anyhow::anyhow!("peer manager is not available").into());
-        };
-        let dst_peer_id = match nat_dst {
-            SocketAddr::V4(addr) => peer_mgr.get_peer_map().get_peer_id_by_ipv4(addr.ip()).await,
-            SocketAddr::V6(_) => return Err(anyhow::anyhow!("ipv6 is not supported").into()),
-        };
-        let Some(dst_peer) = dst_peer_id else {
-            return Err(anyhow::anyhow!("no peer found for nat dst: {}", nat_dst).into());
-        };
-        tracing::trace!("kcp nat dst: {:?}, dst peers: {:?}", nat_dst, dst_peer);
-        let mut connect_tasks: JoinSet<std::result::Result<ConnId, anyhow::Error>> = JoinSet::new();
-        let mut retry_remain = 5;
-        loop {
-            select! {
-                Some(Ok(Ok(ret))) = connect_tasks.join_next() => {
-                    // just wait for the previous connection to finish
-                    let stream = KcpStream::new(&self.kcp_endpoint, ret)
-                        .ok_or(anyhow::anyhow!("failed to create kcp stream"))?;
-                    return Ok(stream);
-                }
-                _ = tokio::time::sleep(Duration::from_millis(200)), if !connect_tasks.is_empty() && retry_remain > 0 => {
-                    // no successful connection yet, trigger another connection attempt
-                }
-                else => {
-                    // got error in connect_tasks, continue to retry
-                    if retry_remain == 0 && connect_tasks.is_empty() {
-                        break;
-                    }
-                }
-            }
-            // create a new connection task
-            if retry_remain == 0 {
-                continue;
-            }
-            retry_remain -= 1;
-            let kcp_endpoint = self.kcp_endpoint.clone();
-            let my_peer_id = peer_mgr.my_peer_id();
-            let conn_data_clone = conn_data;
-            connect_tasks.spawn(async move {
-                kcp_endpoint
-                    .connect(
-                        Duration::from_secs(10),
-                        my_peer_id,
-                        dst_peer,
-                        Bytes::from(conn_data_clone.encode_to_vec()),
-                    )
-                    .await
-                    .with_context(|| format!("failed to connect to nat dst: {}", nat_dst))
-            });
-        }
-        Err(anyhow::anyhow!("failed to connect to nat dst: {}", nat_dst).into())
+        let my_peer_id = peer_mgr.my_peer_id();
+
+        let stream = (0..5)
+            .map(|_| {
+                let kcp_endpoint = self.kcp_endpoint.clone();
+                async move {
+                    let conn_id = kcp_endpoint
+                        .connect(
+                            Duration::from_secs(10),
+                            my_peer_id,
+                            dst_peer,
+                            Bytes::from(conn_data.encode_to_vec()),
+                        )
+                        .await?;
+                    KcpStream::new(&kcp_endpoint, conn_id).context("failed to create kcp stream")
+                }
+            })
+            .hedge(Duration::from_millis(200))
+            .await
+            .context("failed to connect to peer")?;
+
+        Ok(stream)
     }
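The rewrite above replaces the hand-rolled `select!`/`JoinSet` loop with the new `HedgeExt::hedge` combinator: start an attempt, and if it has not succeeded within the stagger delay, fire the next one; the first success wins. A thread-based analogue of that semantics (an assumption about `hedge`, since the trait is async and not shown in this diff; `hedge` here is a local function, not the crate's):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Thread-based sketch of hedged attempts: launch one attempt, wait
// `delay` for a result, and hedge with the next attempt if none arrived.
// The first success wins; later results are discarded.
fn hedge<T, F>(attempts: Vec<F>, delay: Duration) -> Result<T, String>
where
    T: Send + 'static,
    F: FnOnce() -> Result<T, String> + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    let total = attempts.len();
    let mut failures = 0;
    for attempt in attempts {
        let tx = tx.clone();
        thread::spawn(move || {
            let _ = tx.send(attempt());
        });
        // Give the in-flight attempts `delay` before hedging with another.
        match rx.recv_timeout(delay) {
            Ok(Ok(value)) => return Ok(value),
            Ok(Err(_)) => failures += 1,
            Err(_) => {} // still pending; fall through and launch the next
        }
    }
    drop(tx);
    // All attempts launched; wait for the stragglers.
    while failures < total {
        match rx.recv() {
            Ok(Ok(value)) => return Ok(value),
            Ok(Err(_)) => failures += 1,
            Err(_) => break, // all senders gone
        }
    }
    Err("all hedged attempts failed".to_string())
}

fn main() {
    // Attempt 0 fails fast; the hedge fires attempt 1, which succeeds.
    let attempts: Vec<Box<dyn FnOnce() -> Result<u32, String> + Send>> = vec![
        Box::new(|| Err("first attempt failed".into())),
        Box::new(|| Ok(42)),
    ];
    assert_eq!(hedge(attempts, Duration::from_millis(50)), Ok(42));
}
```

Compared with plain sequential retry, hedging bounds tail latency: a slow-but-alive first attempt is not cancelled, it merely races against the newer ones.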
     fn check_packet_from_peer_fast(&self, _cidr_set: &CidrSet, _global_ctx: &GlobalCtx) -> bool {
@@ -359,7 +346,7 @@ impl KcpProxyDst {
                 transport_type: TcpProxyEntryTransportType::Kcp.into(),
             },
         );
-        crate::defer! {
+        defer! {
             proxy_entries.remove(&conn_id);
             if proxy_entries.capacity() - proxy_entries.len() > 16 {
                 proxy_entries.shrink_to_fit();
+109 -60
View File
@@ -18,16 +18,20 @@ use crate::tunnel::packet_def::{
     PacketType, PeerManagerHeader, TAIL_RESERVED_SIZE, ZCPacket, ZCPacketType,
 };
 use crate::tunnel::quic::{client_config, endpoint_config, server_config};
-use anyhow::{Context, Error, anyhow};
+use crate::utils::task::HedgeExt;
+use anyhow::{Context, Error, anyhow, bail, ensure};
 use atomic_refcell::AtomicRefCell;
 use bytes::{BufMut, Bytes, BytesMut};
 use dashmap::DashMap;
 use derivative::Derivative;
 use derive_more::{Constructor, Deref, DerefMut, From, Into};
+use guarden::defer;
+use moka::future::Cache;
 use prost::Message;
 use quinn::udp::{EcnCodepoint, RecvMeta, Transmit};
 use quinn::{
-    AsyncUdpSocket, Endpoint, RecvStream, SendStream, StreamId, UdpPoller, default_runtime,
+    AsyncUdpSocket, Connection, ConnectionError, Endpoint, RecvStream, SendStream, StreamId,
+    UdpPoller, WriteError, default_runtime,
 };
 use std::cmp::min;
 use std::future::Future;
@@ -42,8 +46,8 @@ use tokio::io::{AsyncReadExt, Join, join};
 use tokio::sync::mpsc::error::TrySendError;
 use tokio::sync::mpsc::{Receiver, Sender, channel};
 use tokio::task::JoinSet;
-use tokio::time::{Instant, timeout};
-use tokio::{join, pin, select};
+use tokio::time::timeout;
+use tokio::{join, select};
 use tokio_util::sync::PollSender;
 use tracing::{debug, error, info, instrument, trace, warn};
@@ -278,6 +282,7 @@ impl From<(SendStream, RecvStream)> for QuicStream {
 pub struct NatDstQuicConnector {
     pub(crate) endpoint: Endpoint,
     pub(crate) peer_mgr: Weak<PeerManager>,
+    pub(crate) conn_map: Cache<PeerId, Connection>,
 }

 #[async_trait::async_trait]
@@ -288,21 +293,25 @@ impl NatDstConnector for NatDstQuicConnector {
     async fn connect(
         &self,
         src: SocketAddr,
         nat_dst: SocketAddr,
-    ) -> crate::common::error::Result<Self::DstStream> {
-        let Some(peer_mgr) = self.peer_mgr.upgrade() else {
-            return Err(anyhow::anyhow!("peer manager is not available").into());
-        };
-        let Some(dst_peer_id) = (match nat_dst {
-            SocketAddr::V4(addr) => peer_mgr.get_peer_map().get_peer_id_by_ipv4(addr.ip()).await,
-            SocketAddr::V6(_) => return Err(anyhow::anyhow!("ipv6 is not supported").into()),
-        }) else {
-            return Err(anyhow::anyhow!("no peer found for nat dst: {}", nat_dst).into());
-        };
-        trace!("quic nat dst: {:?}, dst peers: {:?}", nat_dst, dst_peer_id);
-        let addr = QuicAddr::new(dst_peer_id, PacketType::QuicSrc).into();
+    ) -> anyhow::Result<Self::DstStream> {
+        let peer_mgr = self
+            .peer_mgr
+            .upgrade()
+            .ok_or_else(|| anyhow!("peer manager is not available"))?;
+        let dst_peer = {
+            let SocketAddr::V4(addr) = nat_dst else {
+                bail!("ipv6 is not supported");
+            };
+            peer_mgr
+                .get_peer_map()
+                .get_peer_id_by_ipv4(addr.ip())
+                .await
+                .ok_or_else(|| anyhow!("no peer found for nat dst: {}", nat_dst))?
+        };
+        tracing::trace!(?nat_dst, ?dst_peer, "quic nat");

         let header = {
             let conn_data = QuicConnData {
                 src: Some(src.into()),
@@ -310,61 +319,90 @@ impl NatDstConnector for NatDstQuicConnector {
             };
             let len = conn_data.encoded_len();
-            if len > (u16::MAX as usize) {
-                return Err(anyhow!("conn data too large: {:?}", len).into());
-            }
+            ensure!(len <= u16::MAX as usize, "conn data too large: {len}");
             let mut buf = BytesMut::with_capacity(2 + len);
             buf.put_u16(len as u16);
-            conn_data.encode(&mut buf).unwrap();
+            conn_data.encode(&mut buf)?;
             buf.freeze()
         };

-        let mut connect_tasks = JoinSet::<Result<QuicStream, Error>>::new();
-        let connect = |tasks: &mut JoinSet<_>| {
-            let endpoint = self.endpoint.clone();
-            let header = header.clone();
-            tasks.spawn(async move {
-                let connection = endpoint.connect(addr, "")?.await?;
-                let mut stream: QuicStream = connection.open_bi().await?.into();
-                stream.writer_mut().write_chunk(header).await?;
-                Ok(stream)
-            });
-        };
-        connect(&mut connect_tasks);
-        let timer = tokio::time::sleep(Duration::from_millis(200));
-        pin!(timer);
-        let mut retry_remain = 5;
-        loop {
-            select! {
-                Some(result) = connect_tasks.join_next() => {
-                    match result {
-                        Ok(Ok(stream)) => return Ok(stream.into()),
-                        _ => {
-                            if connect_tasks.is_empty() {
-                                if retry_remain == 0 {
-                                    return Err(anyhow!("failed to connect to nat dst: {:?}", nat_dst).into())
-                                }
-                                retry_remain -= 1;
-                                connect(&mut connect_tasks);
-                                timer.as_mut().reset(Instant::now() + Duration::from_millis(200))
-                            }
-                        }
-                    }
-                }
-                _ = &mut timer, if retry_remain > 0 => {
-                    retry_remain -= 1;
-                    connect(&mut connect_tasks);
-                    timer.as_mut().reset(Instant::now() + Duration::from_millis(200));
-                }
-            }
-        }
+        let reconnect = || async move {
+            self.conn_map.invalidate(&dst_peer).await;
+            let connect = (0..5)
+                .map(|_| {
+                    let endpoint = self.endpoint.clone();
+                    async move {
+                        endpoint
+                            .connect(QuicAddr::new(dst_peer, PacketType::QuicSrc).into(), "")
+                            .context("failed to create connection")?
+                            .await
+                            .context("connection failed")
+                    }
+                })
+                .hedge(Duration::from_millis(200));
+            self.conn_map
+                .try_get_with(dst_peer, connect)
+                .await
+                .context("failed to connect to peer")
+        };

+        let mut reconnected = false;
+        let mut connection = if let Some(connection) = self.conn_map.get(&dst_peer).await
+            && connection.close_reason().is_none()
+        {
+            connection
+        } else {
+            reconnected = true;
+            reconnect().await?
+        };

+        loop {
+            let is_retryable = |error: &ConnectionError| {
+                matches!(
+                    error,
+                    ConnectionError::ConnectionClosed(_)
+                        | ConnectionError::ApplicationClosed(_)
+                        | ConnectionError::Reset
+                        | ConnectionError::TimedOut
+                )
+            };
+            let mut retry = !reconnected;
+            let header = header.clone();
+            let result = async {
+                let mut stream: QuicStream = connection
+                    .open_bi()
+                    .await
+                    .inspect_err(|error| retry &= is_retryable(error))?
+                    .into();
+                stream
+                    .writer_mut()
+                    .write_chunk(header)
+                    .await
+                    .inspect_err(|error| {
+                        retry &= matches!(error, WriteError::ConnectionLost(error) if is_retryable(error))
+                    })?;
+                Ok(stream.into())
+            }
+            .await;

+            if let Err(error) = &result {
+                if retry {
+                    debug!(?error, "failed to open quic stream, retrying...");
+                    reconnected = true;
+                    connection = reconnect().await?;
+                    continue;
+                } else {
+                    self.conn_map.invalidate(&dst_peer).await;
+                }
+            }
+            break result;
+        }
     }
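The retry decision above hinges on classifying `quinn::ConnectionError`: only errors that mean the cached connection itself died warrant a transparent reconnect, while anything else surfaces to the caller. The classification in isolation, with a local enum standing in for quinn's type (the enum is a simplification; quinn's variants carry payloads):

```rust
// Standalone version of the `is_retryable` check used by the QUIC
// connector: closed/reset/timed-out connections justify a reconnect.
#[derive(Debug)]
enum ConnError {
    ConnectionClosed,
    ApplicationClosed,
    Reset,
    TimedOut,
    VersionMismatch, // e.g. a protocol-level failure that retry won't fix
}

fn is_retryable(error: &ConnError) -> bool {
    matches!(
        error,
        ConnError::ConnectionClosed
            | ConnError::ApplicationClosed
            | ConnError::Reset
            | ConnError::TimedOut
    )
}

fn main() {
    assert!(is_retryable(&ConnError::TimedOut));
    assert!(!is_retryable(&ConnError::VersionMismatch));
}
```

Note the `retry = !reconnected` guard in the diff: a stream failure on a connection that was *just* rebuilt is not retried again, which bounds the loop to one reconnect per call.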
@@ -594,10 +632,17 @@ impl QuicStreamReceiver {
                     }
                 };
-                match Self::establish_stream(stream, ctx.clone()).await {
-                    Ok(stream) => drop(tasks.spawn(stream)),
-                    Err(e) => warn!("failed to establish quic stream from {:?}: {:?}", connection.remote_address(), e),
-                }
+                let ctx = ctx.clone();
+                tasks.spawn(async move {
+                    match Self::establish_stream(stream, ctx).await {
+                        Ok(transfer_fut) => {
+                            if let Err(e) = transfer_fut.await {
+                                warn!("quic stream transfer error: {:?}", e);
+                            }
+                        }
+                        Err(e) => warn!("failed to establish quic stream: {:?}", e),
+                    }
+                });
             }
             res = tasks.join_next(), if !tasks.is_empty() => {
@@ -662,7 +707,7 @@ impl QuicStreamReceiver {
                 transport_type: TcpProxyEntryTransportType::Quic.into(),
             },
         );
-        crate::defer! {
+        defer! {
             proxy_entries.remove(&handle);
             if proxy_entries.capacity() - proxy_entries.len() > 16 {
                 proxy_entries.shrink_to_fit();
@@ -815,7 +860,7 @@ impl QuicProxy {
             Arc::new(socket),
             default_runtime().unwrap(),
         )
-        .unwrap();
+        .unwrap(); // TODO: maybe a different transport config
         endpoint.set_default_client_config(client_config());
         self.endpoint = Some(endpoint.clone());
@@ -844,6 +889,10 @@ impl QuicProxy {
             NatDstQuicConnector {
                 endpoint: endpoint.clone(),
                 peer_mgr: Arc::downgrade(&peer_mgr),
+                conn_map: Cache::builder()
+                    .max_capacity(u8::MAX.into()) // cf. quinn transport config (max_concurrent_bidi_streams)
+                    .time_to_idle(Duration::from_secs(600)) // cf. quinn transport config (max_idle_timeout)
+                    .build(),
             },
         ));
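The `conn_map` introduced here is the core of the "one QUIC connection per peer" fix from #2216: each destination peer id maps to at most one live connection, and a dead entry is invalidated and rebuilt rather than dialing per proxied stream. A simplified, synchronous model of that reuse policy (`ConnCache`/`Conn` are illustrative types, not the moka-backed cache above):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Clone)]
struct Conn {
    id: u64,
    closed: bool,
}

// Per-peer connection cache: reuse a cached connection while it is
// still open; otherwise connect once and cache the result.
struct ConnCache {
    map: Mutex<HashMap<u32, Conn>>,
}

impl ConnCache {
    fn get_or_connect(&self, peer: u32, connect: impl FnOnce() -> Conn) -> Conn {
        let mut map = self.map.lock().unwrap();
        match map.get(&peer) {
            Some(conn) if !conn.closed => conn.clone(),
            _ => {
                let conn = connect();
                map.insert(peer, conn.clone());
                conn
            }
        }
    }
}

fn main() {
    let cache = ConnCache { map: Mutex::new(HashMap::new()) };
    let a = cache.get_or_connect(1, || Conn { id: 100, closed: false });
    // The second lookup reuses the cached connection instead of dialing.
    let b = cache.get_or_connect(1, || Conn { id: 200, closed: false });
    assert_eq!((a.id, b.id), (100, 100));
}
```

The real code uses `moka`'s `try_get_with`, which additionally coalesces concurrent dials to the same peer so that only one connect future runs per key.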
+1 -1
@@ -240,7 +240,7 @@ impl AsyncTcpConnector for Socks5KcpConnector {
         let ret = c
             .connect(self.src_addr, addr)
             .await
-            .map_err(|e| super::fast_socks5::SocksError::Other(e.into()))?;
+            .map_err(super::fast_socks5::SocksError::Other)?;
         Ok(SocksTcpStream::Kcp(ret))
     }
 }
+8 -9
@@ -44,7 +44,7 @@ use super::tokio_smoltcp::{self, Net, NetConfig, channel_device};
 pub(crate) trait NatDstConnector: Send + Sync + Clone + 'static {
     type DstStream: AsyncRead + AsyncWrite + Unpin + Send;
-    async fn connect(&self, src: SocketAddr, dst: SocketAddr) -> Result<Self::DstStream>;
+    async fn connect(&self, src: SocketAddr, dst: SocketAddr) -> anyhow::Result<Self::DstStream>;
     fn check_packet_from_peer_fast(&self, cidr_set: &CidrSet, global_ctx: &GlobalCtx) -> bool;
     fn check_packet_from_peer(
         &self,
@@ -63,14 +63,13 @@ pub struct NatDstTcpConnector;
 #[async_trait::async_trait]
 impl NatDstConnector for NatDstTcpConnector {
     type DstStream = TcpStream;
-    async fn connect(&self, _src: SocketAddr, nat_dst: SocketAddr) -> Result<Self::DstStream> {
-        let socket = match TcpSocket::new_v4() {
-            Ok(s) => s,
-            Err(error) => {
-                log::error!(?error, "create v4 socket failed");
-                return Err(error.into());
-            }
-        };
+    async fn connect(
+        &self,
+        _src: SocketAddr,
+        nat_dst: SocketAddr,
+    ) -> anyhow::Result<Self::DstStream> {
+        let socket = TcpSocket::new_v4()
+            .inspect_err(|error| log::error!(?error, "create v4 socket failed"))?;
         let stream = timeout(Duration::from_secs(10), socket.connect(nat_dst))
             .await?
@@ -1,6 +1,5 @@
 // translated from tailscale #32ce1bdb48078ec4cedaeeb5b1b2ff9c0ef61a49
-use crate::defer;
 use anyhow::{Context, Result};
 use dbus::blocking::stdintf::org_freedesktop_dbus::Properties as _;
 use std::fs;
@@ -167,6 +166,7 @@ fn new_os_configurator(_interface_name: String) -> Result<()> {
     Ok(())
 }
+use guarden::defer;
 use std::io::{self, BufRead, Cursor};

 /// Returns the owner of the `resolv.conf` contents ("systemd-resolved", "NetworkManager", "resolvconf", or an empty string)
+5
@@ -340,6 +340,11 @@ impl InstanceConfigPatcher {
             global_ctx.set_ipv6(Some(ipv6.into()));
             global_ctx.config.set_ipv6(Some(ipv6.into()));
         }
+        if let Some(disable_relay_data) = patch.disable_relay_data {
+            let mut flags = global_ctx.get_flags();
+            flags.disable_relay_data = disable_relay_data;
+            global_ctx.set_flags(flags);
+        }
         if let Some(enabled) = patch.ipv6_public_addr_provider {
             global_ctx.config.set_ipv6_public_addr_provider(enabled);
             provider_config_changed = true;
+3
@@ -10,3 +10,6 @@ pub mod proxy_cidrs_monitor;
 #[cfg(feature = "tun")]
 pub mod virtual_nic;
#[cfg(any(windows, test))]
pub(crate) mod windows_udp_broadcast;
+8 -13
@@ -1,5 +1,8 @@
-use std::{path::Path, sync::Arc};
+#[cfg(target_os = "linux")]
+use std::path::Path;
+use std::sync::Arc;

+#[cfg(target_os = "linux")]
 use anyhow::Context;
 use cidr::{Ipv6Cidr, Ipv6Inet};
 #[cfg(target_os = "linux")]
@@ -321,7 +324,7 @@ async fn resolve_public_ipv6_provider_runtime_state_linux(
 }
 async fn resolve_public_ipv6_provider_runtime_state(
-    global_ctx: &ArcGlobalCtx,
+    _global_ctx: &ArcGlobalCtx,
     config: PublicIpv6ProviderConfigSnapshot,
 ) -> PublicIpv6ProviderRuntimeState {
     if !config.provider_enabled {
@@ -331,7 +334,7 @@ async fn resolve_public_ipv6_provider_runtime_state(
     #[cfg(target_os = "linux")]
     {
         return resolve_public_ipv6_provider_runtime_state_linux(
-            global_ctx,
+            _global_ctx,
             config.configured_prefix,
         )
         .await;
@@ -361,16 +364,8 @@ fn apply_public_ipv6_provider_runtime_state(
     let prefix_changed = global_ctx.set_advertised_ipv6_public_addr_prefix(next_prefix);
     let next_provider_enabled = matches!(state, PublicIpv6ProviderRuntimeState::Active(_));
-    let feature_changed = {
-        let mut feature_flags = global_ctx.get_feature_flags();
-        if feature_flags.ipv6_public_addr_provider == next_provider_enabled {
-            false
-        } else {
-            feature_flags.ipv6_public_addr_provider = next_provider_enabled;
-            global_ctx.set_feature_flags(feature_flags);
-            true
-        }
-    };
+    let feature_changed =
+        global_ctx.set_ipv6_public_addr_provider_feature_flag(next_provider_enabled);
     prefix_changed || feature_changed
 }
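The inlined read-compare-write block above is collapsed into `set_ipv6_public_addr_provider_feature_flag`, which (judging from the call site) writes the new value only when it differs and reports whether anything changed, so the caller can decide whether to re-advertise. The assumed shape of such a helper:

```rust
// Set-and-report-changed: write `next` only when it differs from the
// current value, and tell the caller whether a change happened.
fn set_flag(current: &mut bool, next: bool) -> bool {
    if *current == next {
        false
    } else {
        *current = next;
        true
    }
}

fn main() {
    let mut enabled = false;
    assert!(set_flag(&mut enabled, true)); // first set: changed
    assert!(!set_flag(&mut enabled, true)); // idempotent: no change reported
    assert!(enabled);
}
```

Returning the change bit from the setter keeps the "did anything change" logic in one place instead of duplicating the compare at every call site.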
+35
@@ -35,6 +35,8 @@ use tokio::{
     task::JoinSet,
 };
 use tokio_util::bytes::Bytes;
+#[cfg(target_os = "windows")]
+use tokio_util::task::AbortOnDropHandle;
 use tun::{AbstractDevice, AsyncDevice, Configuration, Layer};
 use zerocopy::{NativeEndian, NetworkEndian};
@@ -801,6 +803,9 @@ pub struct NicCtx {
     nic: Arc<Mutex<VirtualNic>>,
     tasks: JoinSet<()>,
+    #[cfg(target_os = "windows")]
+    windows_udp_broadcast_relay: Option<AbortOnDropHandle<()>>,
 }

 impl NicCtx {
@@ -819,6 +824,9 @@ impl NicCtx {
             nic: Arc::new(Mutex::new(VirtualNic::new(global_ctx))),
             tasks: JoinSet::new(),
+            #[cfg(target_os = "windows")]
+            windows_udp_broadcast_relay: None,
         }
     }
@@ -1005,6 +1013,31 @@ impl NicCtx {
         });
     }
#[cfg(target_os = "windows")]
fn start_windows_udp_broadcast_relay(&mut self, virtual_ipv4: Ipv4Inet) {
if !self.global_ctx.get_flags().enable_udp_broadcast_relay {
return;
}
let Some(peer_manager) = self.peer_mgr.upgrade() else {
tracing::warn!("peer manager is dropped, skip Windows UDP broadcast relay");
return;
};
match super::windows_udp_broadcast::start(peer_manager, virtual_ipv4) {
Ok(handle) => {
self.windows_udp_broadcast_relay = Some(handle);
tracing::info!("Windows UDP broadcast relay started");
}
Err(err) => {
tracing::warn!(
?err,
"failed to start Windows UDP broadcast relay; administrator privileges are required"
);
}
}
}
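`start_windows_udp_broadcast_relay` above gates on two things before doing any work: the opt-in flag and a still-alive `Weak<PeerManager>`. The `Weak::upgrade` gating pattern in isolation (`maybe_start_relay` and the empty `PeerManager` are illustrative stand-ins):

```rust
use std::sync::{Arc, Weak};

struct PeerManager;

// Start only when the opt-in flag is set and the weakly-held peer
// manager can still be upgraded; otherwise skip instead of failing hard.
fn maybe_start_relay(enabled: bool, peer_mgr: &Weak<PeerManager>) -> Option<Arc<PeerManager>> {
    if !enabled {
        return None; // flag not set: stay silent on the virtual network
    }
    // A dropped peer manager means the instance is shutting down.
    peer_mgr.upgrade()
}

fn main() {
    let mgr = Arc::new(PeerManager);
    let weak = Arc::downgrade(&mgr);
    assert!(maybe_start_relay(true, &weak).is_some());
    assert!(maybe_start_relay(false, &weak).is_none());
    drop(mgr);
    // Once the strong reference is gone, the relay silently declines to start.
    assert!(maybe_start_relay(true, &weak).is_none());
}
```

Treating both conditions as a soft skip (log and return) matches the relay's design: failure to start it must never take down the TUN device setup it is attached to.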
     async fn apply_route_changes(
         ifcfg: &impl IfConfiguerTrait,
         ifname: &str,
@@ -1347,6 +1380,8 @@ impl NicCtx {
         // Assign IPv4 address if provided
         if let Some(ipv4_addr) = ipv4_addr {
             self.assign_ipv4_to_tun_device(ipv4_addr).await?;
+            #[cfg(target_os = "windows")]
+            self.start_windows_udp_broadcast_relay(ipv4_addr);
         }
         // Assign IPv6 address if provided
File diff suppressed because it is too large
+22
@@ -474,6 +474,28 @@ fn handle_event(
         );
     }
GlobalCtxEvent::UdpBroadcastRelayStartResult {
capture_backend,
error,
} => {
if let Some(error) = error {
event!(
warn,
?capture_backend,
%error,
"[{}] UDP broadcast relay start failed",
instance_id
);
} else {
event!(
info,
?capture_backend,
"[{}] UDP broadcast relay started",
instance_id
);
}
}
GlobalCtxEvent::CredentialChanged => { GlobalCtxEvent::CredentialChanged => {
event!(info, "[{}] credential changed", instance_id); event!(info, "[{}] credential changed", instance_id);
} }
+11
@@ -816,6 +816,14 @@ impl NetworkConfig {
             flags.disable_upnp = disable_upnp;
         }
+        if let Some(disable_relay_data) = self.disable_relay_data {
+            flags.disable_relay_data = disable_relay_data;
+        }
+        if let Some(enable_udp_broadcast_relay) = self.enable_udp_broadcast_relay {
+            flags.enable_udp_broadcast_relay = enable_udp_broadcast_relay;
+        }
         if let Some(disable_sym_hole_punching) = self.disable_sym_hole_punching {
             flags.disable_sym_hole_punching = disable_sym_hole_punching;
         }
@@ -990,6 +998,8 @@ impl NetworkConfig {
         result.disable_tcp_hole_punching = Some(flags.disable_tcp_hole_punching);
         result.disable_udp_hole_punching = Some(flags.disable_udp_hole_punching);
         result.disable_upnp = Some(flags.disable_upnp);
+        result.disable_relay_data = Some(flags.disable_relay_data);
+        result.enable_udp_broadcast_relay = Some(flags.enable_udp_broadcast_relay);
         result.disable_sym_hole_punching = Some(flags.disable_sym_hole_punching);
         result.enable_magic_dns = Some(flags.accept_dns);
         result.mtu = Some(flags.mtu as i32);
@@ -1258,6 +1268,7 @@ mod tests {
         flags.disable_tcp_hole_punching = rng.gen_bool(0.2);
         flags.disable_udp_hole_punching = rng.gen_bool(0.2);
         flags.disable_upnp = rng.gen_bool(0.2);
+        flags.enable_udp_broadcast_relay = rng.gen_bool(0.2);
         flags.accept_dns = rng.gen_bool(0.6);
         flags.mtu = rng.gen_range(1200..1500);
         flags.private_mode = rng.gen_bool(0.3);
+1 -24
@@ -65,7 +65,7 @@ impl PeerCenterBase {
             return Err(Error::Shutdown);
         };
         rpc_mgr.rpc_server().registry().register(
-            PeerCenterRpcServer::new(PeerCenterServer::new(self.peer_mgr.my_peer_id())),
+            PeerCenterRpcServer::new(PeerCenterServer::new()),
             &self.peer_mgr.get_global_ctx().get_network_name(),
         );
         Ok(())
@@ -486,7 +486,6 @@ impl PeerCenterPeerManagerTrait for PeerMapWithPeerRpcManager {
 #[cfg(test)]
 mod tests {
     use crate::{
-        peer_center::server::get_global_data,
         peers::tests::{connect_peer_manager, create_mock_peer_manager, wait_route_appear},
         tunnel::common::tests::wait_for_condition,
     };
@@ -515,25 +514,6 @@ mod tests {
         .await
         .unwrap();

-        let center_peer = PeerCenterBase::select_center_peer(&peer_mgr_a)
-            .await
-            .unwrap();
-        let center_data = get_global_data(center_peer);
-        // wait center_data has 3 records for 10 seconds
-        wait_for_condition(
-            || async {
-                if center_data.global_peer_map.len() == 4 {
-                    println!("center data {:#?}", center_data.global_peer_map);
-                    true
-                } else {
-                    false
-                }
-            },
-            Duration::from_secs(20),
-        )
-        .await;
-
         let mut digest = None;
         for pc in peer_centers.iter() {
             let rpc_service = pc.get_rpc_service();
@@ -578,8 +558,5 @@ mod tests {
             route_cost.end_update();
             assert!(!route_cost.need_update());
         }
-
-        let global_digest = get_global_data(center_peer).digest.load();
-        assert_eq!(digest.as_ref().unwrap(), &global_digest);
     }
 }
+96 -30
@@ -6,7 +6,6 @@ use std::{

 use crossbeam::atomic::AtomicCell;
 use dashmap::DashMap;
-use once_cell::sync::Lazy;
 use tokio::task::JoinSet;

 use crate::{
@@ -35,50 +34,41 @@ pub(crate) struct PeerCenterInfoEntry {
     update_time: std::time::Instant,
 }

-#[derive(Default)]
-pub(crate) struct PeerCenterServerGlobalData {
-    pub(crate) global_peer_map: DashMap<SrcDstPeerPair, PeerCenterInfoEntry>,
-    pub(crate) peer_report_time: DashMap<PeerId, std::time::Instant>,
-    pub(crate) digest: AtomicCell<Digest>,
-}
-
-// a global unique instance for PeerCenterServer
-pub(crate) static GLOBAL_DATA: Lazy<DashMap<PeerId, Arc<PeerCenterServerGlobalData>>> =
-    Lazy::new(DashMap::new);
-
-pub(crate) fn get_global_data(node_id: PeerId) -> Arc<PeerCenterServerGlobalData> {
-    GLOBAL_DATA
-        .entry(node_id)
-        .or_insert_with(|| Arc::new(PeerCenterServerGlobalData::default()))
-        .value()
-        .clone()
-}
+#[derive(Debug, Default)]
+struct PeerCenterServerData {
+    global_peer_map: DashMap<SrcDstPeerPair, PeerCenterInfoEntry>,
+    peer_report_time: DashMap<PeerId, std::time::Instant>,
+    digest: AtomicCell<Digest>,
+}

 #[derive(Clone, Debug)]
 pub struct PeerCenterServer {
-    // every peer has its own server, so use per-struct dash map is ok.
-    my_node_id: PeerId,
+    data: Arc<PeerCenterServerData>,

     tasks: Arc<JoinSet<()>>,
 }

 impl PeerCenterServer {
-    pub fn new(my_node_id: PeerId) -> Self {
+    pub fn new() -> Self {
+        let data = Arc::new(PeerCenterServerData::default());
+        let weak_data = Arc::downgrade(&data);
         let mut tasks = JoinSet::new();
         tasks.spawn(async move {
             loop {
                 tokio::time::sleep(std::time::Duration::from_secs(10)).await;
-                PeerCenterServer::clean_outdated_peer(my_node_id).await;
+                let Some(data) = weak_data.upgrade() else {
+                    break;
+                };
+                PeerCenterServer::clean_outdated_peer_data(&data).await;
             }
         });

         PeerCenterServer {
-            my_node_id,
+            data,
             tasks: Arc::new(tasks),
         }
     }

-    async fn clean_outdated_peer(my_node_id: PeerId) {
-        let data = get_global_data(my_node_id);
+    async fn clean_outdated_peer_data(data: &PeerCenterServerData) {
         data.peer_report_time.retain(|_, v| {
             std::time::Instant::now().duration_since(*v) < std::time::Duration::from_secs(180)
         });
@@ -88,8 +78,7 @@ impl PeerCenterServer {
         });
     }

-    fn calc_global_digest(my_node_id: PeerId) -> Digest {
-        let data = get_global_data(my_node_id);
+    fn calc_global_digest_data(data: &PeerCenterServerData) -> Digest {
         let mut hasher = std::collections::hash_map::DefaultHasher::new();
         data.global_peer_map
             .iter()
@@ -117,7 +106,7 @@ impl PeerCenterRpc for PeerCenterServer {
         tracing::debug!("receive report_peers");

-        let data = get_global_data(self.my_node_id);
+        let data = &self.data;
         data.peer_report_time
             .insert(my_peer_id, std::time::Instant::now());
@@ -134,7 +123,7 @@ impl PeerCenterRpc for PeerCenterServer {
         }

         data.digest
-            .store(PeerCenterServer::calc_global_digest(self.my_node_id));
+            .store(PeerCenterServer::calc_global_digest_data(data));

         Ok(ReportPeersResponse::default())
     }
@@ -147,7 +136,7 @@ impl PeerCenterRpc for PeerCenterServer {
     ) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
         let digest = req.digest;
-        let data = get_global_data(self.my_node_id);
+        let data = &self.data;
         if digest == data.digest.load() && digest != 0 {
             return Ok(GetGlobalPeerMapResponse::default());
         }
@@ -171,3 +160,80 @@ impl PeerCenterRpc for PeerCenterServer {
         })
     }
 }
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn server_clones_share_instance_data() {
let server = PeerCenterServer::new();
let server_clone = server.clone();
let mut peers = PeerInfoForGlobalMap::default();
peers
.direct_peers
.insert(100, DirectConnectedPeerInfo { latency_ms: 3 });
server
.report_peers(
BaseController::default(),
ReportPeersRequest {
my_peer_id: 99,
peer_infos: Some(peers),
},
)
.await
.unwrap();
let resp = server_clone
.get_global_peer_map(
BaseController::default(),
GetGlobalPeerMapRequest { digest: 0 },
)
.await
.unwrap();
assert_eq!(1, resp.global_peer_map.len());
assert!(resp.global_peer_map[&99].direct_peers.contains_key(&100));
}
#[tokio::test]
async fn independent_server_instances_do_not_share_data() {
let server_a = PeerCenterServer::new();
let server_b = PeerCenterServer::new();
let mut peers = PeerInfoForGlobalMap::default();
peers
.direct_peers
.insert(101, DirectConnectedPeerInfo { latency_ms: 5 });
server_a
.report_peers(
BaseController::default(),
ReportPeersRequest {
my_peer_id: 100,
peer_infos: Some(peers),
},
)
.await
.unwrap();
let resp_a = server_a
.get_global_peer_map(
BaseController::default(),
GetGlobalPeerMapRequest { digest: 0 },
)
.await
.unwrap();
assert_eq!(1, resp_a.global_peer_map.len());
let resp_b = server_b
.get_global_peer_map(
BaseController::default(),
GetGlobalPeerMapRequest { digest: 0 },
)
.await
.unwrap();
assert!(resp_b.global_peer_map.is_empty());
}
}
+41 -2
@@ -94,6 +94,8 @@ impl AclFilter {
     /// Preserves connection tracking and rate limiting state across reloads
     /// Now lock-free and doesn't require &mut self!
     pub fn reload_rules(&self, acl_config: Option<&Acl>) {
+        self.outbound_allow_records.clear();
+
         let Some(acl_config) = acl_config else {
             self.acl_enabled.store(false, Ordering::Relaxed);
             return;
@@ -400,14 +402,15 @@ mod tests {
     use std::{
         net::{IpAddr, Ipv4Addr, Ipv6Addr},
         sync::Arc,
+        time::Instant,
     };

     use crate::{
         common::acl_processor::PacketInfo,
-        proto::acl::{ChainType, Protocol},
+        proto::acl::{Acl, ChainType, Protocol},
     };

-    use super::AclFilter;
+    use super::{AclFilter, OutboundAllowRecord};

     fn packet_info(dst_ip: IpAddr) -> PacketInfo {
         PacketInfo {
@@ -445,4 +448,40 @@ mod tests {
         assert_eq!(chain, ChainType::Forward);
     }
#[tokio::test]
async fn reload_rules_clears_outbound_allow_records() {
let filter = AclFilter::new();
filter.outbound_allow_records.insert(
OutboundAllowRecord {
src_ip: IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)),
dst_ip: IpAddr::V4(Ipv4Addr::new(10, 0, 0, 2)),
src_port: Some(1234),
dst_port: Some(80),
protocol: Protocol::Tcp,
},
Instant::now(),
);
assert_eq!(filter.outbound_allow_records.len(), 1);
filter.reload_rules(Some(&Acl::default()));
assert_eq!(filter.outbound_allow_records.len(), 0);
filter.outbound_allow_records.insert(
OutboundAllowRecord {
src_ip: IpAddr::V4(Ipv4Addr::new(10, 0, 0, 2)),
dst_ip: IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)),
src_port: Some(4321),
dst_port: Some(443),
protocol: Protocol::Tcp,
},
Instant::now(),
);
assert_eq!(filter.outbound_allow_records.len(), 1);
filter.reload_rules(None);
assert_eq!(filter.outbound_allow_records.len(), 0);
}
 }
+423 -32
@@ -6,11 +6,15 @@ in the future, with the help wo peer center we can forward packets of peers that
 connected to any node in the local network.
 */
 use std::{
-    sync::{Arc, Weak},
+    sync::{
+        Arc, Weak,
+        atomic::{AtomicBool, Ordering},
+    },
     time::SystemTime,
 };

 use dashmap::{DashMap, DashSet};
+use guarden::defer;
 use tokio::{
     sync::{
         Mutex,
@@ -56,7 +60,7 @@ use super::{
     route_trait::NextHopPolicy,
     traffic_metrics::{
         InstanceLabelKind, LogicalTrafficMetrics, TrafficKind, TrafficMetricRecorder,
-        route_peer_info_instance_id, traffic_kind,
+        is_relay_data_packet_type, route_peer_info_instance_id, traffic_kind,
     },
 };
@@ -69,11 +73,16 @@ pub trait GlobalForeignNetworkAccessor: Send + Sync + 'static {
 struct ForeignNetworkEntry {
     my_peer_id: PeerId,
+    // Node-global runtime flags, such as disable_relay_data, live on the parent
+    // context. The foreign context is scoped to the foreign network's OSPF view.
+    parent_global_ctx: ArcGlobalCtx,
     global_ctx: ArcGlobalCtx,
     network: NetworkIdentity,
     peer_map: Arc<PeerMap>,
     relay_peer_map: Arc<RelayPeerMap>,
     peer_session_store: Arc<PeerSessionStore>,
+    // Static per-network permission from the whitelist check. disable_relay_data
+    // is the node-wide runtime override layered on top of this value.
     relay_data: bool,
     pm_packet_sender: Mutex<Option<PacketRecvChan>>,
@@ -82,12 +91,13 @@ struct ForeignNetworkEntry {
     packet_recv: Mutex<Option<PacketRecvChanReceiver>>,
-    bps_limiter: Arc<TokenBucket>,
+    bps_limiter: Option<Arc<TokenBucket>>,
     peer_center: Arc<PeerCenterInstance>,
     stats_mgr: Arc<StatsManager>,
     traffic_metrics: Arc<TrafficMetricRecorder>,
+    event_handler_started: AtomicBool,
     tasks: Mutex<JoinSet<()>>,
@@ -155,10 +165,11 @@ impl ForeignNetworkEntry {
                 InstanceLabelKind::From,
             )),
             {
-                let peer_map = peer_map.clone();
+                let peer_map = Arc::downgrade(&peer_map);
                 move |peer_id| {
                     let peer_map = peer_map.clone();
                     async move {
+                        let peer_map = peer_map.upgrade()?;
                         peer_map
                             .get_route_peer_info(peer_id)
                             .await
@@ -186,14 +197,16 @@ impl ForeignNetworkEntry {
         );

         let relay_bps_limit = global_ctx.config.get_flags().foreign_relay_bps_limit;
-        let limiter_config = LimiterConfig {
-            burst_rate: None,
-            bps: Some(relay_bps_limit),
-            fill_duration_ms: None,
-        };
-        let bps_limiter = global_ctx
-            .token_bucket_manager()
-            .get_or_create(&network.network_name, limiter_config.into());
+        let bps_limiter = (relay_bps_limit != u64::MAX).then(|| {
+            let limiter_config = LimiterConfig {
+                burst_rate: None,
+                bps: Some(relay_bps_limit),
+                fill_duration_ms: None,
+            };
+            global_ctx
+                .token_bucket_manager()
+                .get_or_create(&network.network_name, limiter_config.into())
+        });

         let peer_center = Arc::new(PeerCenterInstance::new(Arc::new(
             PeerMapWithPeerRpcManager {
@@ -205,6 +218,7 @@ impl ForeignNetworkEntry {
         Self {
             my_peer_id,
+            parent_global_ctx: global_ctx.clone(),
             global_ctx: foreign_global_ctx,
             network,
             peer_map,
@@ -222,6 +236,7 @@ impl ForeignNetworkEntry {
             stats_mgr,
             traffic_metrics,
+            event_handler_started: AtomicBool::new(false),
             tasks: Mutex::new(JoinSet::new()),
@@ -231,6 +246,27 @@ impl ForeignNetworkEntry {
         }
     }
fn desired_avoid_relay_data_feature_flag(
parent_global_ctx: &ArcGlobalCtx,
relay_data: bool,
) -> bool {
!relay_data || parent_global_ctx.get_feature_flags().avoid_relay_data
}
fn sync_parent_relay_data_feature_flag(
parent_global_ctx: &ArcGlobalCtx,
global_ctx: &ArcGlobalCtx,
relay_data: bool,
) -> bool {
let avoid_relay_data =
Self::desired_avoid_relay_data_feature_flag(parent_global_ctx, relay_data);
if global_ctx.get_feature_flags().avoid_relay_data == avoid_relay_data {
return false;
}
global_ctx.set_avoid_relay_data_preference(avoid_relay_data)
}
     fn build_foreign_global_ctx(
         network: &NetworkIdentity,
         global_ctx: ArcGlobalCtx,
@@ -258,10 +294,9 @@ impl ForeignNetworkEntry {
         let mut feature_flag = global_ctx.get_feature_flags();
         feature_flag.is_public_server = true;
-        if !relay_data {
-            feature_flag.avoid_relay_data = true;
-        }
-        foreign_global_ctx.set_feature_flags(feature_flag);
+        feature_flag.avoid_relay_data =
+            Self::desired_avoid_relay_data_feature_flag(&global_ctx, relay_data);
+        foreign_global_ctx.set_base_advertised_feature_flags(feature_flag);

         for u in global_ctx.get_running_listeners().into_iter() {
             foreign_global_ctx.add_running_listener(u);
@@ -412,6 +447,7 @@ impl ForeignNetworkEntry {
         let peer_map = self.peer_map.clone();
         let relay_peer_map = self.relay_peer_map.clone();
         let traffic_metrics = self.traffic_metrics.clone();
+        let parent_global_ctx = self.parent_global_ctx.clone();
         let relay_data = self.relay_data;
         let pm_sender = self.pm_packet_sender.lock().await.take().unwrap();
         let network_name = self.network.network_name.clone();
@@ -497,14 +533,21 @@ impl ForeignNetworkEntry {
                             "ignore packet in foreign network"
                         );
                     } else {
-                        if packet_type == PacketType::Data as u8
-                            || packet_type == PacketType::KcpSrc as u8
-                            || packet_type == PacketType::KcpDst as u8
-                        {
-                            if !relay_data {
+                        if is_relay_data_packet_type(packet_type) {
+                            let disable_relay_data = parent_global_ctx.flags_arc().disable_relay_data;
+                            if !relay_data || disable_relay_data {
+                                tracing::debug!(
+                                    ?from_peer_id,
+                                    ?to_peer_id,
+                                    packet_type,
+                                    disable_relay_data,
+                                    "drop foreign network relay data"
+                                );
                                 continue;
                             }
-                            if !bps_limiter.try_consume(len.into()) {
+                            if let Some(bps_limiter) = bps_limiter.as_ref()
+                                && !bps_limiter.try_consume(len.into())
+                            {
                                 continue;
                             }
                         }
@@ -589,10 +632,31 @@ impl ForeignNetworkEntry {
         });
     }

+    async fn run_parent_feature_flag_sync_routine(&self) {
+        let parent_global_ctx = self.parent_global_ctx.clone();
+        let global_ctx = self.global_ctx.clone();
+        let relay_data = self.relay_data;
+        self.tasks.lock().await.spawn(async move {
+            let mut parent_events = parent_global_ctx.subscribe();
+            loop {
+                ForeignNetworkEntry::sync_parent_relay_data_feature_flag(
+                    &parent_global_ctx,
+                    &global_ctx,
+                    relay_data,
+                );
+                if parent_events.recv().await.is_err() {
+                    parent_events = parent_global_ctx.subscribe();
+                }
+            }
+        });
+    }
+
     async fn prepare(&self, accessor: Box<dyn GlobalForeignNetworkAccessor>) {
         self.prepare_route(accessor).await;
         self.start_packet_recv().await;
         self.run_relay_session_gc_routine().await;
+        self.run_parent_feature_flag_sync_routine().await;
         self.peer_rpc.run();
         self.peer_center.init().await;
     }
@@ -617,6 +681,8 @@ struct ForeignNetworkManagerData {
     network_peer_last_update: DashMap<String, SystemTime>,
     accessor: Arc<Box<dyn GlobalForeignNetworkAccessor>>,
     lock: std::sync::Mutex<()>,
+    #[cfg(test)]
+    fail_next_add_peer_conn_after_entry_insert: AtomicBool,
 }

 impl ForeignNetworkManagerData {
@@ -660,6 +726,7 @@ impl ForeignNetworkManagerData {
     fn remove_network(&self, network_name: &String) {
         let _l = self.lock.lock().unwrap();
         if let Some(old) = self.network_peer_maps.remove(network_name) {
+            old.1.traffic_metrics.clear_peer_cache();
             let to_remove_peers = old.1.peer_map.list_peers();
             for p in to_remove_peers {
                 self.peer_network_map.remove_if(&p, |_, v| {
@@ -669,6 +736,39 @@ impl ForeignNetworkManagerData {
             }
         }
         self.network_peer_last_update.remove(network_name);
+        shrink_dashmap(&self.peer_network_map, None);
+        shrink_dashmap(&self.network_peer_maps, None);
+        shrink_dashmap(&self.network_peer_last_update, None);
+    }
+
+    fn remove_network_if_current(
+        &self,
+        network_name: &String,
+        expected_entry: &Weak<ForeignNetworkEntry>,
+    ) {
+        let _l = self.lock.lock().unwrap();
+        let Some(expected_entry) = expected_entry.upgrade() else {
+            return;
+        };
+        let old = self
+            .network_peer_maps
+            .remove_if(network_name, |_, entry| Arc::ptr_eq(entry, &expected_entry));
+        let Some((_, old)) = old else {
+            return;
+        };
+        old.traffic_metrics.clear_peer_cache();
+        let to_remove_peers = old.peer_map.list_peers();
+        for p in to_remove_peers {
+            self.peer_network_map.remove_if(&p, |_, v| {
+                v.remove(network_name);
+                v.is_empty()
+            });
+        }
+        self.network_peer_last_update.remove(network_name);
+        shrink_dashmap(&self.peer_network_map, None);
+        shrink_dashmap(&self.network_peer_maps, None);
+        shrink_dashmap(&self.network_peer_last_update, None);
     }
     #[allow(clippy::too_many_arguments)]
@@ -813,6 +913,8 @@ impl ForeignNetworkManager {
             network_peer_last_update: DashMap::new(),
             accessor: Arc::new(accessor),
             lock: std::sync::Mutex::new(()),
+            #[cfg(test)]
+            fail_next_add_peer_conn_after_entry_insert: AtomicBool::new(false),
         });

         let tasks = Arc::new(std::sync::Mutex::new(JoinSet::new()));
@@ -830,6 +932,13 @@ impl ForeignNetworkManager {
         }
     }

+    #[cfg(test)]
+    fn fail_next_add_peer_conn_after_entry_insert(&self) {
+        self.data
+            .fail_next_add_peer_conn_after_entry_insert
+            .store(true, Ordering::Release);
+    }
+
     pub fn get_network_peer_id(&self, network_name: &str) -> Option<PeerId> {
         self.data
             .network_peer_maps
@@ -878,6 +987,35 @@ impl ForeignNetworkManager {
             )
             .await;

+        defer!(rollback_new_entry => sync [
+            data = self.data.clone(),
+            network_name = entry.network.network_name.clone(),
+            peer_id = peer_conn.get_peer_id(),
+            should_rollback = new_added
+        ] {
+            if should_rollback {
+                tracing::warn!(
+                    %network_name,
+                    "rollback newly added foreign network entry after add_peer_conn returned error"
+                );
+                data.remove_peer(peer_id, &network_name);
+            }
+        });
+
+        #[cfg(test)]
+        if self
+            .data
+            .fail_next_add_peer_conn_after_entry_insert
+            .swap(false, Ordering::AcqRel)
+        {
+            return Err(anyhow::anyhow!(
+                "injected add_peer_conn failure after foreign network entry insert"
+            )
+            .into());
+        }
+
+        self.ensure_event_handler_started(&entry);
+
         let same_identity = entry.network == peer_network;
         let peer_identity_type = peer_conn.get_peer_identity_type();
         let credential_peer_trusted = peer_digest_empty
@@ -891,10 +1029,6 @@ impl ForeignNetworkManager {
             || credential_identity_mismatch
             || entry.my_peer_id != peer_conn.get_my_peer_id()
         {
-            if new_added {
-                self.data
-                    .remove_peer(peer_conn.get_peer_id(), &entry.network.network_name.clone());
-            }
             let err = if entry.my_peer_id != peer_conn.get_my_peer_id() {
                 anyhow::anyhow!(
                     "my peer id not match. exp: {:?} real: {:?}, need retry connect",
@@ -919,9 +1053,7 @@ impl ForeignNetworkManager {
             return Err(err.into());
         }

-        if new_added {
-            self.start_event_handler(&entry).await;
-        } else if let Some(peer) = entry.peer_map.get_peer_by_id(peer_conn.get_peer_id()) {
+        if !new_added && let Some(peer) = entry.peer_map.get_peer_by_id(peer_conn.get_peer_id()) {
             let direct_conns_len = peer.get_directly_connections().len();
             let max_count = use_global_var!(MAX_DIRECT_CONNS_PER_PEER_IN_FOREIGN_NETWORK);
             if direct_conns_len >= max_count as usize {
@@ -935,21 +1067,31 @@ impl ForeignNetworkManager {
         }

         entry.peer_map.add_new_peer_conn(peer_conn).await?;
+        let _ = rollback_new_entry.defuse();
         Ok(())
     }

-    async fn start_event_handler(&self, entry: &ForeignNetworkEntry) {
+    fn ensure_event_handler_started(&self, entry: &Arc<ForeignNetworkEntry>) {
+        if entry.event_handler_started.swap(true, Ordering::AcqRel) {
+            return;
+        }
         let data = self.data.clone();
         let network_name = entry.network.network_name.clone();
+        let entry_for_cleanup = Arc::downgrade(entry);
+        let traffic_metrics = Arc::downgrade(&entry.traffic_metrics);
         let mut s = entry.global_ctx.subscribe();
         self.tasks.lock().unwrap().spawn(async move {
             while let Ok(e) = s.recv().await {
                 match &e {
                     GlobalCtxEvent::PeerRemoved(peer_id) => {
                         tracing::info!(?e, "remove peer from foreign network manager");
+                        if let Some(traffic_metrics) = traffic_metrics.upgrade() {
+                            traffic_metrics.remove_peer(*peer_id);
+                        }
                         data.network_peer_last_update
                             .insert(network_name.clone(), SystemTime::now());
+                        data.remove_peer(*peer_id, &network_name);
                     }
                     GlobalCtxEvent::PeerConnRemoved(..) => {
                         tracing::info!(?e, "clear no conn peer from foreign network manager");
@@ -965,7 +1107,10 @@ impl ForeignNetworkManager {
             }
             // if lagged or recv done just remove the network
             tracing::error!("global event handler at foreign network manager exit");
-            data.remove_network(&network_name);
+            if let Some(traffic_metrics) = traffic_metrics.upgrade() {
+                traffic_metrics.clear_peer_cache();
+            }
+            data.remove_network_if_current(&network_name, &entry_for_cleanup);
        });
    }
@@ -1397,6 +1542,92 @@ pub mod tests {
         );
     }
#[tokio::test]
async fn disable_relay_data_blocks_foreign_network_transit_data() {
let pm_center = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
let pma_net1 = create_mock_peer_manager_for_foreign_network("net1").await;
let pmb_net1 = create_mock_peer_manager_for_foreign_network("net1").await;
connect_peer_manager(pma_net1.clone(), pm_center.clone()).await;
connect_peer_manager(pmb_net1.clone(), pm_center.clone()).await;
wait_route_appear(pma_net1.clone(), pmb_net1.clone())
.await
.unwrap();
let mut flags = pm_center.get_global_ctx().get_flags();
flags.disable_relay_data = true;
pm_center.get_global_ctx().set_flags(flags);
pm_center
.get_global_ctx()
.issue_event(GlobalCtxEvent::ConfigPatched(Default::default()));
let center_peer_id = pm_center
.get_foreign_network_manager()
.get_network_peer_id("net1")
.unwrap();
wait_for_condition(
|| {
let pma_net1 = pma_net1.clone();
async move {
pma_net1.list_routes().await.iter().any(|route| {
route.peer_id == center_peer_id
&& route
.feature_flag
.as_ref()
.map(|flag| flag.avoid_relay_data)
.unwrap_or(false)
})
}
},
Duration::from_secs(5),
)
.await;
let network_labels =
LabelSet::new().with_label_type(LabelType::NetworkName("net1".to_string()));
let forwarded_bytes_before = metric_value(
&pm_center,
MetricName::TrafficBytesForwarded,
network_labels.clone(),
);
let forwarded_packets_before = metric_value(
&pm_center,
MetricName::TrafficPacketsForwarded,
network_labels.clone(),
);
let mut transit_pkt = ZCPacket::new_with_payload(b"foreign-transit-disabled");
transit_pkt.fill_peer_manager_hdr(
pma_net1.my_peer_id(),
pmb_net1.my_peer_id(),
PacketType::Data as u8,
);
pma_net1
.get_foreign_network_client()
.send_msg(transit_pkt, center_peer_id)
.await
.unwrap();
tokio::time::sleep(Duration::from_millis(300)).await;
assert_eq!(
metric_value(
&pm_center,
MetricName::TrafficBytesForwarded,
network_labels.clone()
),
forwarded_bytes_before
);
assert_eq!(
metric_value(
&pm_center,
MetricName::TrafficPacketsForwarded,
network_labels
),
forwarded_packets_before
);
}
     #[tokio::test]
     async fn foreign_network_transit_control_forwarding_records_control_forwarded_metrics() {
         let pm_center = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
@@ -1409,6 +1640,10 @@ pub mod tests {
             .await
             .unwrap();

+        let mut flags = pm_center.get_global_ctx().get_flags();
+        flags.disable_relay_data = true;
+        pm_center.get_global_ctx().set_flags(flags);
+
         let center_peer_id = pm_center
             .get_foreign_network_manager()
             .get_network_peer_id("net1")
@@ -1461,6 +1696,87 @@ pub mod tests {
             .await;
     }
#[tokio::test]
async fn failed_new_foreign_peer_conn_rolls_back_entry_maps() {
let pm_center = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
let pma_net1 = create_mock_peer_manager_for_foreign_network("net1").await;
let foreign_mgr = pm_center.get_foreign_network_manager();
foreign_mgr.fail_next_add_peer_conn_after_entry_insert();
let (a_ring, b_ring) = crate::tunnel::ring::create_ring_tunnel_pair();
let (client_ret, server_ret) = tokio::time::timeout(Duration::from_secs(5), async {
tokio::join!(
pma_net1.add_client_tunnel(a_ring, false),
pm_center.add_tunnel_as_server(b_ring, true)
)
})
.await
.unwrap();
assert!(client_ret.is_ok());
assert!(server_ret.is_err());
assert!(foreign_mgr.data.get_network_entry("net1").is_none());
assert!(
foreign_mgr
.data
.get_peer_network(pma_net1.my_peer_id())
.is_none()
);
}
#[tokio::test]
async fn foreign_network_peer_removed_clears_traffic_metric_peer_cache() {
let pm_center = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
let pma_net1 = create_mock_peer_manager_for_foreign_network("net1").await;
connect_peer_manager(pma_net1.clone(), pm_center.clone()).await;
wait_for_condition(
|| {
let pm_center = pm_center.clone();
async move {
pm_center
.get_foreign_network_manager()
.get_network_peer_id("net1")
.is_some()
}
},
Duration::from_secs(5),
)
.await;
let entry = pm_center
.get_foreign_network_manager()
.data
.get_network_entry("net1")
.unwrap();
entry
.traffic_metrics
.record_rx(pma_net1.my_peer_id(), PacketType::Data as u8, 128)
.await;
assert!(
entry
.traffic_metrics
.contains_peer_cache(pma_net1.my_peer_id())
);
entry
.global_ctx
.issue_event(GlobalCtxEvent::PeerRemoved(pma_net1.my_peer_id()));
wait_for_condition(
|| {
let entry = entry.clone();
let peer_id = pma_net1.my_peer_id();
async move { !entry.traffic_metrics.contains_peer_cache(peer_id) }
},
Duration::from_secs(5),
)
.await;
}
#[tokio::test]
async fn foreign_network_encapsulated_forwarding_records_tx_metrics() {
set_global_var!(OSPF_UPDATE_MY_GLOBAL_FOREIGN_NETWORK_INTERVAL_SEC, 1);
@@ -1657,6 +1973,81 @@ pub mod tests {
));
}
#[tokio::test]
async fn foreign_entry_feature_flag_tracks_parent_disable_relay_data_toggle() {
let global_ctx = get_mock_global_ctx_with_network(Some(NetworkIdentity::new(
"__access__".to_string(),
"access_secret".to_string(),
)));
let foreign_network = NetworkIdentity::new("net1".to_string(), "net1_secret".to_string());
let (pm_packet_sender, _pm_packet_recv) = create_packet_recv_chan();
let entry = ForeignNetworkEntry::new(
foreign_network,
1,
global_ctx.clone(),
true,
Arc::new(PeerSessionStore::new()),
pm_packet_sender,
);
assert!(!entry.global_ctx.get_feature_flags().avoid_relay_data);
entry.run_parent_feature_flag_sync_routine().await;
let mut flags = global_ctx.get_flags();
flags.disable_relay_data = true;
global_ctx.set_flags(flags);
global_ctx.issue_event(GlobalCtxEvent::ConfigPatched(Default::default()));
wait_for_condition(
|| async { entry.global_ctx.get_feature_flags().avoid_relay_data },
Duration::from_secs(2),
)
.await;
let mut flags = global_ctx.get_flags();
flags.disable_relay_data = false;
global_ctx.set_flags(flags);
global_ctx.issue_event(GlobalCtxEvent::ConfigPatched(Default::default()));
wait_for_condition(
|| async { !entry.global_ctx.get_feature_flags().avoid_relay_data },
Duration::from_secs(2),
)
.await;
}
#[tokio::test]
async fn foreign_entry_without_relay_data_keeps_avoid_feature_flag() {
let global_ctx = get_mock_global_ctx_with_network(Some(NetworkIdentity::new(
"__access__".to_string(),
"access_secret".to_string(),
)));
let foreign_network = NetworkIdentity::new("net1".to_string(), "net1_secret".to_string());
let (pm_packet_sender, _pm_packet_recv) = create_packet_recv_chan();
let entry = ForeignNetworkEntry::new(
foreign_network,
1,
global_ctx.clone(),
false,
Arc::new(PeerSessionStore::new()),
pm_packet_sender,
);
assert!(entry.global_ctx.get_feature_flags().avoid_relay_data);
let mut flags = global_ctx.get_flags();
flags.disable_relay_data = false;
global_ctx.set_flags(flags);
ForeignNetworkEntry::sync_parent_relay_data_feature_flag(
&global_ctx,
&entry.global_ctx,
entry.relay_data,
);
assert!(entry.global_ctx.get_feature_flags().avoid_relay_data);
}
#[test]
fn credential_trust_path_rejects_admin_identity() {
assert!(ForeignNetworkManager::should_reject_credential_trust_path(
+4 -2
@@ -12,6 +12,7 @@ use std::{
use base64::Engine as _;
use base64::engine::general_purpose::STANDARD as BASE64_STANDARD;
use guarden::guard;
use hmac::Mac;
use prost::Message;
@@ -40,7 +41,6 @@ use crate::{
error::Error,
global_ctx::ArcGlobalCtx,
},
guard,
peers::peer_session::{PeerSessionStore, SessionKey, UpsertResponderSessionReturn},
proto::{
api::instance::{PeerConnInfo, PeerConnStats},
@@ -1352,7 +1352,9 @@ impl PeerConn {
let is_foreign_network = conn_info_for_instrument.network_name
!= self.global_ctx.get_network_identity().network_name;
let recv_limiter = if is_foreign_network {
let recv_limiter = if is_foreign_network
&& self.global_ctx.get_flags().foreign_relay_bps_limit != u64::MAX
{
let relay_network_bps_limit = self.global_ctx.get_flags().foreign_relay_bps_limit;
let limiter_config = LimiterConfig {
burst_rate: None,
+216 -31
@@ -38,7 +38,7 @@ use crate::{
route_trait::{ForeignNetworkRouteInfoMap, MockRoute, NextHopPolicy, RouteInterface},
traffic_metrics::{
InstanceLabelKind, LogicalTrafficMetrics, TrafficKind, TrafficMetricRecorder,
route_peer_info_instance_id, traffic_kind, is_relay_data_packet_type,
route_peer_info_instance_id, traffic_kind,
},
},
proto::{
@@ -263,9 +263,7 @@ impl PeerManager {
.is_err()
{
// if local network is not in whitelist, avoid relay data when exist any other route path
let mut f = global_ctx.get_feature_flags();
f.avoid_relay_data = true;
global_ctx.set_feature_flags(f);
global_ctx.set_avoid_relay_data_preference(true);
}
let is_secure_mode_enabled = global_ctx
@@ -638,20 +636,27 @@ impl PeerManager {
#[tracing::instrument]
pub async fn try_direct_connect_with_peer_id_hint<C>(
&self,
mut connector: C,
connector: C,
peer_id_hint: Option<PeerId>,
) -> Result<(PeerId, PeerConnId), Error>
where
C: TunnelConnector + Debug,
{
let ns = self.global_ctx.net_ns.clone();
let t = ns
.run_async(|| async move { connector.connect().await })
.await?;
let t = self.connect_tunnel(connector).await?;
self.add_client_tunnel_with_peer_id_hint(t, true, peer_id_hint)
.await
}
pub(crate) async fn connect_tunnel<C>(&self, mut connector: C) -> Result<Box<dyn Tunnel>, Error>
where
C: TunnelConnector + Debug,
{
let ns = self.global_ctx.net_ns.clone();
Ok(ns
.run_async(|| async move { connector.connect().await })
.await?)
}
// avoid loop back to virtual network
fn check_remote_addr_not_from_virtual_network(
&self,
@@ -689,6 +694,11 @@ impl PeerManager {
Ok(())
}
fn release_reserved_peer_id(&self, network_name: &str) {
self.reserved_my_peer_id_map.remove(network_name);
shrink_dashmap(&self.reserved_my_peer_id_map, None);
}
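The remove-then-shrink pattern above (via the repo's `shrink_dashmap` helper) can be sketched with a plain `HashMap`. This is a minimal stand-alone analogue; `reclaim_after_remove` is a hypothetical name, and the half-capacity threshold is an assumed policy, not necessarily what `shrink_dashmap` does:

```rust
use std::collections::HashMap;

// Hypothetical stand-alone analogue of remove + shrink_dashmap: after a
// removal, hand excess capacity back to the allocator so a map that briefly
// grew large (e.g. many reserved peer ids) does not pin memory forever.
fn reclaim_after_remove<K: std::hash::Hash + Eq, V>(
    map: &mut HashMap<K, V>,
    key: &K,
) -> Option<V> {
    let removed = map.remove(key);
    // Assumed policy: only shrink once the map has drained well below its
    // capacity, to avoid reallocating on every single removal.
    if map.len() < map.capacity() / 2 {
        map.shrink_to_fit();
    }
    removed
}
```

The point of gating `shrink_to_fit` behind a threshold is that shrinking rehashes the table; doing it on every removal would turn O(1) deletes into repeated O(n) rebuilds.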
#[tracing::instrument(ret)]
pub async fn add_tunnel_as_server(
&self,
@@ -704,7 +714,8 @@ impl PeerManager {
tunnel,
self.peer_session_store.clone(),
);
conn.do_handshake_as_server_ext(|peer, network_name:&str| {
let mut reserved_peer_id_network_name = None;
let handshake_ret = conn.do_handshake_as_server_ext(|peer, network_name:&str| {
if network_name
== self.global_ctx.get_network_identity().network_name
{
@@ -715,6 +726,7 @@ impl PeerManager {
.foreign_network_manager
.get_network_peer_id(network_name);
if peer_id.is_none() {
reserved_peer_id_network_name = Some(network_name.to_string());
peer_id = Some(*self.reserved_my_peer_id_map.entry(network_name.to_string()).or_insert_with(|| {
rand::random::<PeerId>()
}).value());
@@ -730,7 +742,14 @@ impl PeerManager {
Ok(())
})
.await?;
.await;
if let Err(err) = handshake_ret {
if let Some(network_name) = reserved_peer_id_network_name {
self.release_reserved_peer_id(&network_name);
}
return Err(err);
}
let peer_identity = conn.get_network_identity();
let peer_network_name = peer_identity.network_name.clone();
@@ -749,6 +768,7 @@ impl PeerManager {
if !is_local_network && self.global_ctx.get_flags().private_mode && !foreign_network_allowed
{
self.release_reserved_peer_id(&peer_network_name);
return Err(Error::SecretKeyError(
"private mode is turned on, foreign network secret mismatch".to_string(),
));
@@ -756,14 +776,18 @@ impl PeerManager {
conn.set_is_hole_punched(!is_directly_connected);
if is_local_network {
self.add_new_peer_conn(conn).await?;
} else {
self.foreign_network_manager.add_peer_conn(conn).await?;
}
let add_peer_ret = if is_local_network {
self.add_new_peer_conn(conn).await
} else {
self.foreign_network_manager.add_peer_conn(conn).await
};
if let Err(err) = add_peer_ret {
self.release_reserved_peer_id(&peer_network_name);
return Err(err);
}
self.reserved_my_peer_id_map.remove(&peer_network_name);
shrink_dashmap(&self.reserved_my_peer_id_map, None);
self.release_reserved_peer_id(&peer_network_name);
tracing::info!("add tunnel as server done");
Ok(())
@@ -774,6 +798,7 @@ impl PeerManager {
my_peer_id: PeerId,
peer_map: &PeerMap,
foreign_network_mgr: &ForeignNetworkManager,
disable_relay_data: bool,
) -> Result<(), ZCPacket> {
let pm_header = packet.peer_manager_header().unwrap();
if pm_header.packet_type != PacketType::ForeignNetworkPacket as u8 {
@@ -783,6 +808,16 @@ impl PeerManager {
let from_peer_id = pm_header.from_peer_id.get();
let to_peer_id = pm_header.to_peer_id.get();
if disable_relay_data && Self::is_relay_data_zc_packet(&packet) {
tracing::debug!(
?from_peer_id,
?to_peer_id,
inner_packet_type = ?packet.foreign_network_inner_packet_type(),
"drop foreign network relay data while relay data is disabled"
);
return Ok(());
}
let foreign_hdr = packet.foreign_network_hdr().unwrap();
let foreign_network_name = foreign_hdr.get_network_name(packet.payload());
let foreign_peer_id = foreign_hdr.get_dst_peer_id();
@@ -872,6 +907,29 @@ impl PeerManager {
}
}
fn is_relay_data_packet(packet_type: u8) -> bool {
is_relay_data_packet_type(packet_type)
}
fn is_relay_data_zc_packet(packet: &ZCPacket) -> bool {
let Some(hdr) = packet.peer_manager_header() else {
return false;
};
if hdr.packet_type == PacketType::ForeignNetworkPacket as u8 {
let inner_packet_type = packet.foreign_network_inner_packet_type();
if inner_packet_type.is_none() {
tracing::warn!(
?hdr,
"foreign network packet has unparseable inner peer manager header"
);
}
return inner_packet_type.is_none_or(Self::is_relay_data_packet);
}
Self::is_relay_data_packet(hdr.packet_type)
}
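The split these helpers implement can be sketched in isolation: data-plane traffic is droppable when `disable_relay_data` is set, while control-plane packets must always be forwarded so routing and handshakes keep working. The enum below is a trimmed, hypothetical stand-in for the real `PacketType`, not its full variant set:

```rust
// Trimmed, hypothetical stand-in for the real PacketType enum.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Pkt {
    Data,
    KcpSrc,
    ForeignNetworkPacket,
    RpcReq,
    Ping,
    HandShake,
}

// Data-plane packets may be dropped by a relay that disabled data relaying;
// everything else is control plane and must still be forwarded.
fn is_relay_data(p: Pkt) -> bool {
    matches!(p, Pkt::Data | Pkt::KcpSrc | Pkt::ForeignNetworkPacket)
}

// Mirror of the forwarding decision above: drop only when relaying data is
// disabled AND the packet carries data-plane traffic.
fn should_drop(disable_relay_data: bool, p: Pkt) -> bool {
    disable_relay_data && is_relay_data(p)
}
```

Classifying an opaque `ForeignNetworkPacket` as data (unless its inner header says otherwise) is the conservative choice: an unparseable wrapper is treated as payload rather than silently relayed.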
async fn start_peer_recv(&self) {
let mut recv = self.packet_recv.lock().await.take().unwrap();
let my_peer_id = self.my_peer_id;
@@ -925,14 +983,21 @@ impl PeerManager {
self.tasks.lock().await.spawn(async move {
tracing::trace!("start_peer_recv");
while let Ok(ret) = recv_packet_from_chan(&mut recv).await {
let Err(mut ret) =
Self::try_handle_foreign_network_packet(ret, my_peer_id, &peers, &foreign_mgr)
.await
let disable_relay_data = global_ctx.flags_arc().disable_relay_data;
let Err(mut ret) = Self::try_handle_foreign_network_packet(
ret,
my_peer_id,
&peers,
&foreign_mgr,
disable_relay_data,
)
.await
else {
continue;
};
let buf_len = ret.buf_len();
let is_relay_data_packet = Self::is_relay_data_zc_packet(&ret);
let Some(hdr) = ret.mut_peer_manager_header() else {
tracing::warn!(?ret, "invalid packet, skip");
continue;
@@ -944,6 +1009,16 @@ impl PeerManager {
let packet_type = hdr.packet_type;
let is_encrypted = hdr.is_encrypted();
if to_peer_id != my_peer_id {
if disable_relay_data && is_relay_data_packet {
tracing::debug!(
?from_peer_id,
?to_peer_id,
packet_type,
"drop forwarded relay data while relay data is disabled"
);
continue;
}
if hdr.forward_counter > 7 {
tracing::warn!(?hdr, "forward counter exceed, drop packet");
continue;
@@ -1494,17 +1569,26 @@ impl PeerManager {
ipv6_addr.is_multicast() || *ipv6_addr == ipv6_inet.last_address()
}
fn select_ipv4_broadcast_peers<'a>(
routes: impl IntoIterator<Item = &'a instance::Route>,
my_peer_id: PeerId,
) -> Vec<PeerId> {
routes
.into_iter()
.filter_map(|route| {
(route.peer_id != my_peer_id && route.ipv4_addr.is_some()).then_some(route.peer_id)
})
.collect()
}
pub async fn get_msg_dst_peer_ipv4(&self, ipv4_addr: &Ipv4Addr) -> (Vec<PeerId>, bool) {
let mut is_exit_node = false;
let mut dst_peers = vec![];
if self.is_all_peers_broadcast_ipv4(ipv4_addr) {
dst_peers.extend(self.peers.list_routes().await.iter().filter_map(|x| {
if *x.key() != self.my_peer_id {
Some(*x.key())
} else {
None
}
}));
dst_peers.extend(Self::select_ipv4_broadcast_peers(
&self.peers.list_route_infos().await,
self.my_peer_id,
));
} else if let Some(peer_id) = self.peers.get_peer_id_by_ipv4(ipv4_addr).await {
dst_peers.push(peer_id);
} else if !self
@@ -2080,7 +2164,7 @@ mod tests {
},
},
proto::{
common::{CompressionAlgoPb, NatType, PeerFeatureFlag},
common::{CompressionAlgoPb, NatType},
peer_rpc::SecureAuthLevel,
},
tunnel::{
@@ -2124,6 +2208,32 @@ mod tests {
assert!(!PeerManager::should_mark_recent_traffic_for_fanout(2));
}
fn route_with_ipv4(
peer_id: u32,
ipv4_addr: Option<std::net::Ipv4Addr>,
) -> crate::proto::api::instance::Route {
crate::proto::api::instance::Route {
peer_id,
ipv4_addr: ipv4_addr.map(|addr| cidr::Ipv4Inet::new(addr, 24).unwrap().into()),
..Default::default()
}
}
#[test]
fn ipv4_broadcast_peer_selection_skips_peers_without_ipv4() {
let routes = vec![
route_with_ipv4(1, Some(std::net::Ipv4Addr::new(10, 126, 126, 1))),
route_with_ipv4(2, None),
route_with_ipv4(3, Some(std::net::Ipv4Addr::new(10, 126, 126, 3))),
route_with_ipv4(4, None),
];
assert_eq!(
PeerManager::select_ipv4_broadcast_peers(&routes, 3),
vec![1]
);
}
#[test]
fn gc_recent_traffic_removes_expired_and_connected_entries() {
let stale_peer = 1;
@@ -2224,6 +2334,84 @@ mod tests {
assert_eq!(signal.version(), initial_version + 2);
}
#[test]
fn disable_relay_data_classifies_data_plane_packets_only() {
for packet_type in [
PacketType::Data,
PacketType::KcpSrc,
PacketType::KcpDst,
PacketType::QuicSrc,
PacketType::QuicDst,
PacketType::DataWithKcpSrcModified,
PacketType::DataWithQuicSrcModified,
PacketType::ForeignNetworkPacket,
] {
assert!(PeerManager::is_relay_data_packet(packet_type as u8));
}
for packet_type in [
PacketType::RpcReq,
PacketType::RpcResp,
PacketType::Ping,
PacketType::Pong,
PacketType::HandShake,
PacketType::NoiseHandshakeMsg1,
PacketType::NoiseHandshakeMsg2,
PacketType::NoiseHandshakeMsg3,
PacketType::RelayHandshake,
PacketType::RelayHandshakeAck,
] {
assert!(!PeerManager::is_relay_data_packet(packet_type as u8));
}
}
#[test]
fn disable_relay_data_inspects_foreign_network_inner_packet_type() {
let network_name = "net1".to_string();
let mut rpc_packet = ZCPacket::new_with_payload(b"rpc");
rpc_packet.fill_peer_manager_hdr(1, 2, PacketType::RpcReq as u8);
let mut foreign_rpc_packet =
ZCPacket::new_for_foreign_network(&network_name, 2, &rpc_packet);
foreign_rpc_packet.fill_peer_manager_hdr(10, 20, PacketType::ForeignNetworkPacket as u8);
assert_eq!(
foreign_rpc_packet.foreign_network_inner_packet_type(),
Some(PacketType::RpcReq as u8)
);
assert!(!PeerManager::is_relay_data_zc_packet(&foreign_rpc_packet));
let mut data_packet = ZCPacket::new_with_payload(b"data");
data_packet.fill_peer_manager_hdr(1, 2, PacketType::Data as u8);
let mut foreign_data_packet =
ZCPacket::new_for_foreign_network(&network_name, 2, &data_packet);
foreign_data_packet.fill_peer_manager_hdr(10, 20, PacketType::ForeignNetworkPacket as u8);
assert_eq!(
foreign_data_packet.foreign_network_inner_packet_type(),
Some(PacketType::Data as u8)
);
assert!(PeerManager::is_relay_data_zc_packet(&foreign_data_packet));
}
#[tokio::test]
async fn non_whitelisted_network_avoid_relay_survives_disable_relay_data_toggle() {
let global_ctx = get_mock_global_ctx();
let mut flags = global_ctx.get_flags();
flags.disable_relay_data = true;
flags.relay_network_whitelist = "other-network".to_string();
global_ctx.set_flags(flags);
let (packet_send, _packet_recv) = create_packet_recv_chan();
let _peer_mgr = PeerManager::new(RouteAlgoType::Ospf, global_ctx.clone(), packet_send);
let mut flags = global_ctx.get_flags();
flags.disable_relay_data = false;
global_ctx.set_flags(flags);
assert!(global_ctx.get_feature_flags().avoid_relay_data);
}
#[tokio::test]
async fn send_msg_internal_does_not_record_tx_metrics_on_failed_delivery() {
let peer_mgr = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
@@ -3121,10 +3309,7 @@ mod tests {
// when b's avoid_relay_data is true, a->c should route through d and e, cost is 3
peer_mgr_b
.get_global_ctx()
.set_feature_flags(PeerFeatureFlag {
avoid_relay_data: true,
..Default::default()
});
.set_avoid_relay_data_preference(true);
tokio::time::sleep(Duration::from_secs(2)).await;
if wait_route_appear_with_cost(peer_mgr_a.clone(), peer_mgr_c.my_peer_id, Some(3))
.await
+81 -2
@@ -1228,6 +1228,25 @@ impl SyncedRouteInfo {
Vec<PeerId>,
HashMap<Vec<u8>, crate::common::global_ctx::TrustedKeyMetadata>,
)
where
F: FnMut(PeerId) -> bool,
{
self.verify_and_update_credential_trusts_with_active_peers_protecting(
network_secret,
is_peer_active,
None,
)
}
fn verify_and_update_credential_trusts_with_active_peers_protecting<F>(
&self,
network_secret: Option<&str>,
is_peer_active: F,
protected_peer_id: Option<PeerId>,
) -> (
Vec<PeerId>,
HashMap<Vec<u8>, crate::common::global_ctx::TrustedKeyMetadata>,
)
where
F: FnMut(PeerId) -> bool,
{
@@ -1248,6 +1267,9 @@ impl SyncedRouteInfo {
let mut untrusted_peers =
Self::collect_revoked_credential_peers(&peer_infos, &prev_trusted, &all_trusted);
untrusted_peers.extend(duplicate_untrusted_peers);
if let Some(protected_peer_id) = protected_peer_id {
untrusted_peers.remove(&protected_peer_id);
}
// Remove untrusted peers from peer_infos so they won't appear in route graph
if !untrusted_peers.is_empty() {
@@ -2735,7 +2757,11 @@ impl PeerRouteServiceImpl {
let network_identity = self.global_ctx.get_network_identity();
let (untrusted, global_trusted_keys) = self
.synced_route_info
.verify_and_update_credential_trusts(network_identity.network_secret.as_deref());
.verify_and_update_credential_trusts_with_active_peers_protecting(
network_identity.network_secret.as_deref(),
|_| true,
Some(self.my_peer_id),
);
self.global_ctx
.update_trusted_keys(global_trusted_keys, &network_identity.network_name);
@@ -2751,9 +2777,10 @@ impl PeerRouteServiceImpl {
let (untrusted, global_trusted_keys) = self
.synced_route_info
.verify_and_update_credential_trusts_with_active_peers(
.verify_and_update_credential_trusts_with_active_peers_protecting(
network_identity.network_secret.as_deref(),
|peer_id| self.is_active_non_reusable_credential_peer(peer_id),
Some(self.my_peer_id),
);
self.global_ctx
.update_trusted_keys(global_trusted_keys, &network_identity.network_name);
@@ -5047,6 +5074,58 @@ mod tests {
);
}
#[tokio::test]
async fn credential_trust_refresh_does_not_remove_self_peer() {
let my_peer_id = 11;
let remote_peer_id = 12;
let credential_key = vec![8; 32];
let service_impl = PeerRouteServiceImpl::new(my_peer_id, get_mock_global_ctx());
let self_info = make_credential_route_peer_info(my_peer_id, &credential_key);
let remote_info = make_credential_route_peer_info(remote_peer_id, &credential_key);
{
let mut guard = service_impl.synced_route_info.peer_infos.write();
guard.insert(self_info.peer_id, self_info);
guard.insert(remote_info.peer_id, remote_info);
}
service_impl
.synced_route_info
.trusted_credential_pubkeys
.insert(
credential_key.clone(),
TrustedCredentialPubkey {
pubkey: credential_key,
expiry_unix: i64::MAX,
..Default::default()
},
);
let (untrusted_peers, _) = service_impl
.synced_route_info
.verify_and_update_credential_trusts_with_active_peers_protecting(
None,
|_| true,
Some(my_peer_id),
);
assert_eq!(untrusted_peers, vec![remote_peer_id]);
assert!(
service_impl
.synced_route_info
.peer_infos
.read()
.contains_key(&my_peer_id)
);
assert!(
!service_impl
.synced_route_info
.peer_infos
.read()
.contains_key(&remote_peer_id)
);
}
#[tokio::test]
async fn credential_refresh_rebuilds_reachability_before_owner_election() {
const NETWORK_SECRET: &str = "sec1";
+55 -9
@@ -12,6 +12,22 @@ use crate::{
tunnel::udp,
};
fn remove_easytier_managed_ipv6s(ret: &mut GetIpListResponse, global_ctx: &ArcGlobalCtx) {
ret.interface_ipv6s.retain(|ip| {
let ip = std::net::Ipv6Addr::from(*ip);
!global_ctx.is_ip_easytier_managed_ipv6(&ip)
});
if ret
.public_ipv6
.as_ref()
.map(|ip| std::net::Ipv6Addr::from(*ip))
.is_some_and(|ip| global_ctx.is_ip_easytier_managed_ipv6(&ip))
{
ret.public_ipv6 = None;
}
}
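The sanitizer above follows a retain-plus-clear pattern that can be shown stand-alone. This is a hypothetical generic version under the assumption that "managed" can be expressed as a predicate over the address; the real helper instead asks `global_ctx.is_ip_easytier_managed_ipv6`:

```rust
use std::net::Ipv6Addr;

// Hypothetical generic version of the sanitizer: drop every address a
// predicate marks as EasyTier-managed, from both the interface list and the
// optional public address, leaving only physically assigned IPv6s.
fn sanitize_ipv6s(
    interface_ipv6s: &mut Vec<Ipv6Addr>,
    public_ipv6: &mut Option<Ipv6Addr>,
    is_managed: impl Fn(&Ipv6Addr) -> bool,
) {
    interface_ipv6s.retain(|ip| !is_managed(ip));
    if public_ipv6.as_ref().is_some_and(|ip| is_managed(ip)) {
        *public_ipv6 = None;
    }
}
```

Filtering with one predicate over every source (virtual address, public lease, routed prefixes) is what the rewrite buys over the old code, which only compared against two hard-coded addresses and so missed routed prefixes.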
#[derive(Clone)]
pub struct DirectConnectorManagerRpcServer {
// TODO: this only cache for one src peer, should make it global
@@ -36,15 +52,7 @@ impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
.chain(self.global_ctx.get_running_listeners())
.map(Into::into)
.collect();
// remove et ipv6 from the interface ipv6 list
if let Some(et_ipv6) = self.global_ctx.get_ipv6() {
let et_ipv6: crate::proto::common::Ipv6Addr = et_ipv6.address().into();
ret.interface_ipv6s.retain(|x| *x != et_ipv6);
}
if let Some(public_ipv6) = self.global_ctx.get_public_ipv6_lease() {
let public_ipv6: crate::proto::common::Ipv6Addr = public_ipv6.address().into();
ret.interface_ipv6s.retain(|x| *x != public_ipv6);
}
remove_easytier_managed_ipv6s(&mut ret, &self.global_ctx);
tracing::trace!(
"get_ip_list: public_ipv4: {:?}, public_ipv6: {:?}, listeners: {:?}",
ret.public_ipv4,
@@ -88,3 +96,41 @@ impl DirectConnectorManagerRpcServer {
Self { global_ctx }
}
}
#[cfg(test)]
mod tests {
use std::collections::BTreeSet;
use crate::{
common::global_ctx::tests::get_mock_global_ctx,
peers::peer_rpc_service::remove_easytier_managed_ipv6s, proto::peer_rpc::GetIpListResponse,
};
#[tokio::test]
async fn get_ip_list_sanitizer_removes_managed_ipv6_from_all_sources() {
let global_ctx = get_mock_global_ctx();
let virtual_ipv6 = "fd00::1/64".parse().unwrap();
let public_ipv6 = "2001:db8::2/128".parse().unwrap();
let physical_ipv6: std::net::Ipv6Addr = "2001:db8::3".parse().unwrap();
let routed_ipv6: cidr::Ipv6Inet = "2001:db8::4/128".parse().unwrap();
global_ctx.set_ipv6(Some(virtual_ipv6));
global_ctx.set_public_ipv6_lease(Some(public_ipv6));
global_ctx.set_public_ipv6_routes(BTreeSet::from([routed_ipv6]));
let mut ip_list = GetIpListResponse {
public_ipv6: Some(public_ipv6.address().into()),
interface_ipv6s: vec![
virtual_ipv6.address().into(),
public_ipv6.address().into(),
routed_ipv6.address().into(),
physical_ipv6.into(),
],
..Default::default()
};
remove_easytier_managed_ipv6s(&mut ip_list, &global_ctx);
assert_eq!(ip_list.public_ipv6, None);
assert_eq!(ip_list.interface_ipv6s, vec![physical_ipv6.into()]);
}
}
+5 -1
@@ -7,7 +7,10 @@ use anyhow::anyhow;
use dashmap::DashMap;
use super::secure_datagram::{SecureDatagramDirection, SecureDatagramSession};
use crate::{common::PeerId, tunnel::packet_def::ZCPacket};
use crate::{
common::{PeerId, shrink_dashmap},
tunnel::packet_def::ZCPacket,
};
pub struct UpsertResponderSessionReturn {
pub session: Arc<PeerSession>,
@@ -78,6 +81,7 @@ impl PeerSessionStore {
pub fn evict_unused_sessions(&self) {
self.sessions
.retain(|_key, session| Arc::strong_count(session) > 1);
shrink_dashmap(&self.sessions, None);
} }
#[tracing::instrument(skip(self))]
+2
@@ -243,6 +243,8 @@ impl PublicIpv6Service {
.copied()
.collect::<Vec<_>>();
*cached_routes = routes;
self.global_ctx
.set_public_ipv6_routes(cached_routes.clone());
self.global_ctx
.issue_event(GlobalCtxEvent::PublicIpv6RoutesUpdated(added, removed));
}
+9 -1
@@ -9,7 +9,7 @@ use tokio::time::{Duration, timeout};
use crate::peers::foreign_network_client::ForeignNetworkClient;
use crate::{
common::error::Error,
common::{PeerId, global_ctx::ArcGlobalCtx},
common::{PeerId, global_ctx::ArcGlobalCtx, shrink_dashmap},
peers::peer_map::PeerMap,
peers::peer_session::{PeerSession, PeerSessionAction, PeerSessionStore, SessionKey},
peers::route_trait::NextHopPolicy,
@@ -652,6 +652,10 @@ impl RelayPeerMap {
self.handshake_locks.remove(&peer_id);
self.pending_packets.remove(&peer_id);
}
shrink_dashmap(&self.states, None);
shrink_dashmap(&self.pending_handshakes, None);
shrink_dashmap(&self.handshake_locks, None);
shrink_dashmap(&self.pending_packets, None);
}
pub fn has_state(&self, peer_id: PeerId) -> bool {
@@ -679,6 +683,10 @@ impl RelayPeerMap {
self.pending_handshakes.remove(&peer_id);
self.handshake_locks.remove(&peer_id);
self.pending_packets.remove(&peer_id);
shrink_dashmap(&self.states, None);
shrink_dashmap(&self.pending_handshakes, None);
shrink_dashmap(&self.handshake_locks, None);
shrink_dashmap(&self.pending_packets, None);
tracing::debug!(?peer_id, "RelayPeerMap removed peer relay state");
}
+20
@@ -201,6 +201,11 @@ impl LogicalTrafficMetrics {
self.per_peer.len()
}
#[cfg(test)]
fn contains_peer_cache(&self, peer_id: PeerId) -> bool {
self.per_peer.contains_key(&peer_id)
}
fn build_peer_counters(&self, instance_id: &str) -> TrafficCounters {
let instance_label = match self.label_kind {
InstanceLabelKind::To => LabelType::ToInstanceId(instance_id.to_string()),
@@ -241,6 +246,13 @@ pub(crate) fn traffic_kind(packet_type: u8) -> TrafficKind {
}
}
pub(crate) fn is_relay_data_packet_type(packet_type: u8) -> bool {
// Relay handshakes are control-plane setup; payload data is blocked by its
// original packet type after the session exists.
traffic_kind(packet_type) == TrafficKind::Data
|| packet_type == PacketType::ForeignNetworkPacket as u8
}
#[derive(Clone)]
struct TrafficMetricGroup {
data: Arc<LogicalTrafficMetrics>,
@@ -326,6 +338,14 @@ impl TrafficMetricRecorder {
self.rx_metrics.control.clear_peer_cache();
}
#[cfg(test)]
pub(crate) fn contains_peer_cache(&self, peer_id: PeerId) -> bool {
self.tx_metrics.data.contains_peer_cache(peer_id)
|| self.tx_metrics.control.contains_peer_cache(peer_id)
|| self.rx_metrics.data.contains_peer_cache(peer_id)
|| self.rx_metrics.control.contains_peer_cache(peer_id)
}
fn resolve_instance_id(&self, peer_id: PeerId) -> BoxFuture<'static, Option<String>> {
(self.resolve_instance_id)(peer_id)
}
@@ -27,6 +27,7 @@ message InstanceConfigPatch {
optional bool ipv6_public_addr_provider = 11;
optional bool ipv6_public_addr_auto = 12;
optional string ipv6_public_addr_prefix = 13;
optional bool disable_relay_data = 14;
}
message PortForwardPatch {
@@ -99,6 +99,8 @@ message NetworkConfig {
optional bool ipv6_public_addr_provider = 62;
optional bool ipv6_public_addr_auto = 63;
optional string ipv6_public_addr_prefix = 64;
optional bool disable_relay_data = 65;
optional bool enable_udp_broadcast_relay = 66;
}
message PortForwardConfig {
@@ -75,6 +75,8 @@ message FlagsInConfig {
bool need_p2p = 38;
uint64 instance_recv_bps_limit = 39;
bool disable_upnp = 40;
bool disable_relay_data = 41;
bool enable_udp_broadcast_relay = 42;
}
message RpcDescriptor {
@@ -14,5 +14,8 @@ pub mod web;
pub mod tests;
pub mod utils;
pub const DESCRIPTOR_POOL_BYTES: &[u8] =
include_bytes!(concat!(env!("OUT_DIR"), "/file_descriptor_set.bin"));
pub const ALL_DESCRIPTOR_BYTES: &[u8] =
include_bytes!(concat!(env!("OUT_DIR"), "/descriptors.bin"));
@@ -1,10 +1,10 @@
use std::sync::{Arc, Mutex, atomic::AtomicBool};
use futures::{SinkExt as _, StreamExt};
use guarden::defer;
use tokio::{task::JoinSet, time::timeout};
use crate::{
    proto::rpc_types::error::Error,
    tunnel::{Tunnel, packet_def::PacketType, ring::create_ring_tunnel_pair},
};
Some files were not shown because too many files have changed in this diff.