Compare commits

..

59 Commits

Author SHA1 Message Date
sijie.sun 4dca25db86 bump version to 2.0.3 2024-10-13 12:49:01 +08:00
Sijie.Sun d87a440c04 fix 202 bugs (#418)
* fix peer rpc no longer working because the mpsc tunnel closes unexpectedly

* fix gui:

1. allow setting a network prefix for the virtual ipv4
2. fix android crash
3. fix subnet proxy not being settable on android
2024-10-13 11:59:16 +08:00
m1m1sha 55efd62798 Merge pull request #417 from EasyTier/perf/detail
🎈 perf: event log
2024-10-12 20:47:42 +08:00
m1m1sha 70a41275c1 feat: display
Display the server tag and whether the server supports relay
2024-10-12 20:17:45 +08:00
m1m1sha dd941681ce 🎈 perf: event log 2024-10-12 19:57:36 +08:00
m1m1sha 9824d0adaa Fix/UI detail (#414) 2024-10-12 00:36:57 +08:00
Sijie.Sun d2291628e0 mpsc tunnel may be stuck by a slow tcp stream; should not panic for this (#406)
* the mpsc tunnel may be stuck by a slow tcp stream; do not panic in this case
* disallow a node from connecting to itself
2024-10-11 00:12:14 +08:00
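The "should not panic" idea in the commit above can be sketched as follows (a hypothetical illustration, not EasyTier's actual code): when a bounded channel backs up behind a slow consumer, `try_send` surfaces the backpressure as an error the caller can handle, instead of blocking or panicking.

```rust
use std::sync::mpsc;

// Minimal sketch: forward a packet into a bounded queue without panicking
// when the queue is full (slow consumer) or closed (consumer gone).
fn forward_packet(tx: &mpsc::SyncSender<Vec<u8>>, pkt: Vec<u8>) -> Result<(), &'static str> {
    match tx.try_send(pkt) {
        Ok(()) => Ok(()),
        Err(mpsc::TrySendError::Full(_)) => Err("queue full, dropping packet"),
        Err(mpsc::TrySendError::Disconnected(_)) => Err("receiver gone"),
    }
}

fn main() {
    let (tx, rx) = mpsc::sync_channel::<Vec<u8>>(1);
    // First send fits in the queue.
    assert!(forward_packet(&tx, vec![1]).is_ok());
    // Second send hits a full queue: an error, not a panic.
    assert!(forward_packet(&tx, vec![2]).is_err());
    drop(rx);
}
```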
Sijie.Sun 7ab8cad1af allow using an ipv4 address in any cidr (#404) 2024-10-10 10:28:48 +08:00
Sijie.Sun 2c017e0fc5 improve hole punch (#403)
* fix duplicated peer id (again)

* improve udp hole punch

1. always try cone punch for any nat type, tolerating a faulty stun-detected nat type.
2. serialize all sym punch requests, including on the server side.
2024-10-10 00:07:42 +08:00
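The "serialize all sym punch requests" point above can be sketched like this (a hypothetical illustration, not the actual EasyTier implementation): a single lock ensures only one symmetric punch runs at a time, so concurrent tasks do not fight over the NAT's port-prediction window.

```rust
use std::sync::Mutex;

// One global lock serializes symmetric hole-punch attempts.
static SYM_PUNCH_LOCK: Mutex<()> = Mutex::new(());

fn sym_punch(peer_id: u32) -> u32 {
    let _guard = SYM_PUNCH_LOCK.lock().unwrap();
    // the actual punch attempts would run here while the lock is held
    peer_id
}

fn main() {
    assert_eq!(sym_punch(42), 42);
}
```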
Hs_Yeah d9453589ac Fix panic when DNS resolution for STUN server returns only IPv6 addrs. (#402) 2024-10-09 22:40:01 +08:00
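The IPv6-only DNS panic above suggests a defensive pattern (hypothetical helper, not the actual fix): pick the first IPv4 address from the resolved set and return `None` when only IPv6 addresses come back, instead of indexing into a filtered-empty list.

```rust
use std::net::{SocketAddr, ToSocketAddrs};

// Resolve a host:port string and return the first IPv4 address, if any.
fn first_ipv4(host: &str) -> Option<SocketAddr> {
    host.to_socket_addrs().ok()?.find(|a| a.is_ipv4())
}

fn main() {
    assert!(first_ipv4("127.0.0.1:3478").is_some());
    // An IPv6-only result yields None rather than a panic.
    assert!(first_ipv4("[::1]:3478").is_none());
}
```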
Sijie.Sun e344372616 fix cone-to-cone punch (#401) 2024-10-09 22:39:06 +08:00
Sijie.Sun 63821e56bc fix udp buffer size, avoid packet loss (#399)
also bump version to 2.0.2
2024-10-08 22:01:15 +08:00
Sijie.Sun 1be64223c8 ensure the dst has a session when we are the initiator (#398)
* ensure the dst has a session when we are the initiator

* bump version to 2.0.1
2024-10-08 21:05:46 +08:00
Sijie.Sun a08a8e7f4c update gui dep to resolve mem leak (#394) 2024-10-08 09:21:47 +08:00
Li Yang b31996230d add curl dependency check in the installation script (#391)
In many Linux containers, curl is not installed by default, just like
unzip.
2024-10-07 23:23:44 +08:00
Sijie.Sun 1e836501a8 serialize all sym hole punch (#390) 2024-10-07 23:04:49 +08:00
Sijie.Sun d4e59ffc40 fix listener may have no mapped addr (#389) 2024-10-07 12:15:20 +08:00
Sijie.Sun 37ceb77bf6 nat4-nat4 punch (#388)
this patch optimizes the udp hole punch logic:

1. allow starting hole punching before the stun test completes.
2. add a lock to symmetric punching to avoid conflicts between concurrent hole punching tasks.
3. support hole punching for predictable nat4-nat4 pairs.
4. make the retry backoff reasonable
2024-10-06 22:49:18 +08:00
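A "reasonable" retry backoff like the one point 4 above describes might look like this (a sketch with assumed base/cap values, not EasyTier's actual parameters): exponential growth capped at a maximum, so failed punch attempts back off quickly but never wait unboundedly long.

```rust
// Exponential backoff with a cap: 500ms, 1s, 2s, ... up to 30s.
fn backoff_ms(attempt: u32) -> u64 {
    const BASE_MS: u64 = 500;
    const MAX_MS: u64 = 30_000;
    BASE_MS.saturating_mul(1u64 << attempt.min(10)).min(MAX_MS)
}

fn main() {
    assert_eq!(backoff_ms(0), 500);
    assert_eq!(backoff_ms(1), 1_000);
    // Large attempt counts are clamped to the cap.
    assert_eq!(backoff_ms(20), 30_000);
}
```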
sijie.sun ba3da97ad4 fix ipv6 direct connector not working 2024-10-03 11:56:10 +08:00
sijie.sun 984ed8f6cf fix #367
introduce a my-peer-route-id: a peer id is treated as duplicated only when
the peer route ids differ.

this problem occurs because update_self may increase my peer info
version and propagate it to other nodes.
2024-09-29 23:58:33 +08:00
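The duplicate-detection rule described above can be illustrated like this (hypothetical types and field names, not the actual EasyTier code): a shared peer id is only a real conflict when the route ids differ; the same (peer id, route id) pair is just the same node seen twice.

```rust
struct PeerIdent {
    peer_id: u32,
    peer_route_id: u64,
}

// A duplicate peer id is a conflict only when the route ids differ.
fn is_duplicated(a: &PeerIdent, b: &PeerIdent) -> bool {
    a.peer_id == b.peer_id && a.peer_route_id != b.peer_route_id
}

fn main() {
    let a = PeerIdent { peer_id: 1, peer_route_id: 100 };
    let b = PeerIdent { peer_id: 1, peer_route_id: 200 };
    let c = PeerIdent { peer_id: 1, peer_route_id: 100 };
    assert!(is_duplicated(&a, &b));
    // Same route id: the same node seen twice, not a conflict.
    assert!(!is_duplicated(&a, &c));
}
```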
sijie.sun c7895963e4 rollback some parameters 2024-09-29 23:17:46 +08:00
sijie.sun a0ece6ad4d fix public server address in readme 2024-09-29 21:35:16 +08:00
sijie.sun d0a3a40a0f fix bugs
add timeout for wss try_accept

public server should show stats

use default values for flags

bump version to 2.0.0
2024-09-29 17:49:14 +08:00
sijie.sun ff5ee8a05e support forwarding foreign network packets between peers 2024-09-29 10:31:29 +08:00
Hs_Yeah a50bcf3087 Fix IP address display in the status page of GUI
Signed-off-by: Hs_Yeah <bYeahq@gmail.com>
2024-09-27 15:58:02 +08:00
sijie.sun e0b364d3e2 use ubuntu 24.04 apt source
github action upgraded the ubuntu-latest to 24.04

https://github.com/actions/runner-images/pull/10687
2024-09-27 11:05:52 +08:00
sijie.sun 2496cf51c3 fix connection loss when traffic is huge 2024-09-26 23:49:01 +08:00
sijie.sun 7b4a01e7fb fix ring buffer stuck when using multi thread runtime 2024-09-26 14:34:33 +08:00
Hs_Yeah 3f9a1d8f2e Get dev_name from the global_ctx of each instance 2024-09-24 16:52:38 +08:00
Hs_Yeah 0b927bcc91 Add TUN device name setting support to easytier-gui 2024-09-24 16:52:38 +08:00
Hs_Yeah 92397bf7b6 Set Category of the TUN device's network profile to 1 in Windows Registry 2024-09-24 14:23:42 +08:00
sijie.sun d1e2e1db2b fix ospf foreign network info version 2024-09-23 13:42:25 +08:00
sijie.sun 783ba50c9e add cli command for global foreign network info 2024-09-23 00:03:57 +08:00
sijie.sun aca9a0e35b use ospf route to propagate foreign network info 2024-09-22 22:12:18 +08:00
liyang fb8d262554 Fix spelling errors 2024-09-22 20:58:37 +08:00
sijie.sun bd60cfc2a0 add feature flag to ospf route 2024-09-21 20:54:19 +08:00
sijie.sun 06afd221d5 make ping more smart 2024-09-21 18:00:52 +08:00
sijie.sun 0171fb35a4 fix upload oss 2024-09-21 00:24:58 +08:00
Jiangqiu Shen 99c47813c3 add the options to enable latency first or not
in the old behavior the flag was not set, so it was generated with its default value on the first read; latency_first was therefore set to true, according to the Default settings for Flags.

so the Vue code initialized latency first to true.
2024-09-19 20:09:17 +08:00
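The pitfall the commit above describes can be sketched like this (hypothetical struct and field names): when a missing flag is materialized via its `Default` impl on first read, the UI must agree with that same default or the two sides diverge.

```rust
struct Flags {
    latency_first: bool,
}

// The Default impl decides what an unset flag becomes on first read.
impl Default for Flags {
    fn default() -> Self {
        Flags { latency_first: true }
    }
}

fn read_latency_first(stored: Option<Flags>) -> bool {
    // An unset flag falls back to Default, so it silently becomes true.
    stored.unwrap_or_default().latency_first
}

fn main() {
    assert!(read_latency_first(None));
    assert!(!read_latency_first(Some(Flags { latency_first: false })));
}
```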
sijie.sun 82f5dfd569 show nodes version correctly 2024-09-18 23:15:08 +08:00
sijie.sun 6d7edcd486 fix connection failure after setting up one of the sockets fails 2024-09-18 23:15:08 +08:00
M2kar 9f273dc887 modify compile command (#333)
* modify compile command

* fix(README.md): compile from git

* Update README_CN.md
2024-09-18 21:57:25 +08:00
Jiangqiu Shen ac9cfa5040 making cli parse code more ergonomic by removing some copies and unwraps (#347)
1. remove some unnecessary string copies in the cli parse code
2. turn some member functions into non-member functions to avoid taking the self reference.
3. use if let Some(..) instead of if xxx.is_some() to avoid a copy and an unwrap
2024-09-18 21:57:12 +08:00
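The third point above is a common Rust idiom; a minimal illustration:

```rust
fn greeting(name: Option<String>) -> String {
    // `if let` borrows the inner value directly, avoiding the
    // `if name.is_some() { name.clone().unwrap() }` clone-and-unwrap pattern.
    if let Some(n) = &name {
        format!("hello, {}", n)
    } else {
        "hello, anonymous".to_string()
    }
}

fn main() {
    assert_eq!(greeting(Some("easytier".to_string())), "hello, easytier");
    assert_eq!(greeting(None), "hello, anonymous");
}
```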
Sijie.Sun 1b03223537 use customized rpc implementation, remove Tarpc & Tonic (#348)
This patch removes Tarpc & Tonic GRPC and implements a customized rpc framework, which can be used by peer rpc and cli interface.

web config server can also use this rpc framework.

moreover, it rewrites the public server logic to use ospf routing for public-server-based networking, which makes a public server mesh possible.
2024-09-18 21:55:28 +08:00
m1m1sha 0467b0a3dc Merge pull request #342 from EasyTier/ci/issue-template
🐎 ci: Modify Text
2024-09-15 22:39:11 +08:00
m1m1sha ba75167238 🐎 ci: Modify Text 2024-09-15 22:38:06 +08:00
m1m1sha 51e7daa26f Merge pull request #341 from EasyTier/ci/github-issue-template
🐎 ci: github issue template
2024-09-15 22:30:49 +08:00
m1m1sha 2ff653cc6f 🐎 ci: github issue template 2024-09-15 22:28:55 +08:00
m1m1sha cfe4d080d5 🐞 fix: GUI relay display error (#335) 2024-09-14 11:41:38 +08:00
M2kar 9b28ecde8e fix compile error due to rust version format (#332) 2024-09-14 11:40:46 +08:00
Sijie.Sun 096ed39d23 fix udp proxy disconn unexpectedly (#321) 2024-09-11 23:46:26 +08:00
m1m1sha 6ea3adcef8 feat: show version & local node (#318)
*  feat: version

Display version information; incompatible with older versions

* 🎈 perf: unknown

Show "unknown" when no version number is available

*  feat: Display local nodes

Display local nodes; incompatible with older versions
2024-09-11 15:58:13 +08:00
m1m1sha 4342be29d7 Perf/front page (#316)
* 🐳 chore: dependencies

* 🐞 fix: minor style issues

fixed background white patches in dark mode
fixed the line height of the status label, which resulted in a bloated appearance

* 🌈 style: lint

*  feat: about
2024-09-11 09:13:00 +08:00
Sijie.Sun 1609c97574 fix panic when wireguard tunnel encounter udp recv error (#299) 2024-09-02 09:37:34 +08:00
Sijie.Sun f07b3ee9c6 fix punching task leak (#298)
the punching task creator didn't check whether a task was already
running, so it could create many punching tasks to the same peer node.

this patch also improves hole punching by checking hole punch packets
even if the punch rpc fails.
2024-08-31 14:37:34 +08:00
Sijie.Sun 2058dbc470 fix wg client hang after some time (#297)
the wg portal doesn't know when a client disconnects, so messages pile up in the queue and make
the entire peer packet processing pipeline hang.
2024-08-31 12:44:12 +08:00
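One common way to avoid the stuck-queue scenario above (a hypothetical sketch, not the actual fix) is to treat a client as gone when nothing has been heard from it within a timeout, so its queue can be torn down instead of blocking the pipeline.

```rust
use std::time::{Duration, Instant};

struct ClientState {
    last_seen: Instant,
}

// A client is stale once it has been silent for longer than the timeout.
fn is_stale(c: &ClientState, now: Instant, timeout: Duration) -> bool {
    now.duration_since(c.last_seen) > timeout
}

fn main() {
    let now = Instant::now();
    let fresh = ClientState { last_seen: now };
    assert!(!is_stale(&fresh, now, Duration::from_secs(5)));
    let old = ClientState { last_seen: now - Duration::from_secs(10) };
    assert!(is_stale(&old, now, Duration::from_secs(5)));
}
```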
3RDNature 6964fb71fc Add a setting "disable_udp_hole_punch" to disable UDP hole punch function (#291)
It can tentatively solve #289.

Co-authored-by: 3rdnature <root@natureblog.net>
2024-08-29 11:34:30 +08:00
Jiangqiu Shen a8bb4ee7e5 Update Cargo.toml (#290)
fix compile error mentioned in #286
2024-08-29 09:06:48 +08:00
严浩 3fcd74ce4e fix: Different network methods server URL display (#283)
Co-authored-by: 严浩 <i@oo1.dev>
2024-08-27 10:09:46 +08:00
137 changed files with 12504 additions and 7167 deletions
+53
@@ -0,0 +1,53 @@
# Copyright 2024-present Easytier Programme within The Commons Conservancy
# SPDX-License-Identifier: Apache-2.0
name: 🐞 问题报告 / Bug Report
title: '[bug] '
description: 报告一个问题 / Report a bug
labels: ['type: bug', 'status: needs triage']
body:
- type: markdown
attributes:
value: |
## 在提交问题之前 / First of all
1. 请先搜索有关此问题的 [现有问题](https://github.com/EasyTier/EasyTier/issues?q=is%3Aissue)。
1. Please search for [existing issues](https://github.com/EasyTier/EasyTier/issues?q=is%3Aissue) about this problem first.
2. 请确保所使用的 Easytier 版本都是最新的。
2. Make sure that all Easytier versions are up-to-date.
3. 请确保这是 EasyTier 的问题,而不是你正在使用的其他内容引起的问题。
3. Make sure it's an issue with EasyTier and not something else you are using.
4. 请记得遵守我们的社区准则并保持友好态度。
4. Remember to follow our community guidelines and be friendly.
- type: textarea
id: description
attributes:
label: 描述问题 / Describe the bug
description: 对 bug 的明确描述。如果条件允许,请包括屏幕截图。 / A clear description of what the bug is. Include screenshots if applicable.
placeholder: 问题描述 / Bug description
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: 重现步骤 / Reproduction
description: 能够重现行为的步骤或指向能够复现的存储库链接。 / A link to a reproduction repo or steps to reproduce the behaviour.
placeholder: |
请提供一个最小化的复现示例或复现步骤,请参考这个指南 https://stackoverflow.com/help/minimal-reproducible-example
Please provide a minimal reproduction or steps to reproduce, see this guide https://stackoverflow.com/help/minimal-reproducible-example
为什么需要重现(问题)?请参阅这篇文章 https://antfu.me/posts/why-reproductions-are-required
Why reproduction is required? see this article https://antfu.me/posts/why-reproductions-are-required
- type: textarea
id: expected-behavior
attributes:
label: 预期结果 / Expected behavior
description: 清楚地描述您期望发生的事情。 / A clear description of what you expected to happen.
- type: textarea
id: context
attributes:
label: 额外上下文 / Additional context
description: 在这里添加关于问题的任何其他上下文。 / Add any other context about the problem here.
@@ -0,0 +1,38 @@
# Copyright 2024-present Easytier Programme within The Commons Conservancy
# SPDX-License-Identifier: Apache-2.0
name: 💡 新功能请求 / Feature Request
title: '[feat] '
description: 提出一个想法 / Suggest an idea
labels: ['type: feature request']
body:
- type: textarea
id: problem
attributes:
label: 描述问题 / Describe the problem
description: 明确描述此功能将解决的问题 / A clear description of the problem this feature would solve
placeholder: "我总是在...感觉困惑 / I'm always frustrated when..."
validations:
required: true
- type: textarea
id: solution
attributes:
label: "描述您想要的解决方案 / Describe the solution you'd like"
description: 明确说明您希望做出的改变 / A clear description of what change you would like
placeholder: '我希望... / I would like to...'
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: 替代方案 / Alternatives considered
description: "您考虑过的任何替代解决方案 / Any alternative solutions you've considered"
- type: textarea
id: context
attributes:
label: 额外上下文 / Additional context
description: 在此处添加有关问题的任何其他上下文。 / Add any other context about the problem here.
+10 -4
@@ -2,7 +2,7 @@ name: EasyTier Core
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,16 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
# do not skip push on branch starts with releases/
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh"]'
build:
strategy:
@@ -86,6 +88,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- name: Cargo cache
uses: actions/cache@v4
with:
@@ -196,7 +202,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
core-result:
+30 -27
@@ -2,7 +2,7 @@ name: EasyTier GUI
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,15 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh"]'
build-gui:
strategy:
@@ -69,6 +70,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-node@v4
with:
node-version: 21
@@ -118,33 +123,31 @@ jobs:
if: ${{ matrix.TARGET == 'aarch64-unknown-linux-musl' }}
run: |
# see https://tauri.app/v1/guides/building/linux/
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security multiverse" | sudo tee -a /etc/apt/sources.list
sudo dpkg --add-architecture arm64
sudo apt-get update && sudo apt-get upgrade -y
sudo apt install gcc-aarch64-linux-gnu
sudo apt install libwebkit2gtk-4.1-dev:arm64
sudo apt install libssl-dev:arm64
sudo apt install -f -o Dpkg::Options::="--force-overwrite" libwebkit2gtk-4.1-dev:arm64 libssl-dev:arm64 gcc-aarch64-linux-gnu
echo "PKG_CONFIG_SYSROOT_DIR=/usr/aarch64-linux-gnu/" >> "$GITHUB_ENV"
echo "PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig/" >> "$GITHUB_ENV"
@@ -197,7 +200,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/gui
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-gui-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
gui-result:
+9 -4
@@ -2,7 +2,7 @@ name: EasyTier Mobile
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,15 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/workflows/install_rust.sh"]'
build-mobile:
strategy:
@@ -48,6 +49,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-java@v4
with:
distribution: 'oracle'
@@ -150,7 +155,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/mobile
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-gui-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
mobile-result:
+1 -1
@@ -21,7 +21,7 @@ on:
version:
description: 'Version for this release'
type: string
default: 'v1.2.3'
default: 'v2.0.3'
required: true
make_latest:
description: 'Mark this release as latest'
+69 -274
@@ -289,6 +289,16 @@ dependencies = [
"syn 2.0.74",
]
[[package]]
name = "async-ringbuf"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32690af15155711360e74119b99605416c9e4dfd45b0859bd9af795a50693bec"
dependencies = [
"futures",
"ringbuf",
]
[[package]]
name = "async-signal"
version = "0.2.10"
@@ -369,15 +379,6 @@ dependencies = [
"system-deps",
]
[[package]]
name = "atomic-polyfill"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8cf2bce30dfe09ef0bfaef228b9d414faaf7e563035494d7fe092dba54b300f4"
dependencies = [
"critical-section",
]
[[package]]
name = "atomic-shim"
version = "0.2.0"
@@ -427,53 +428,6 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0"
[[package]]
name = "axum"
version = "0.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a6c9af12842a67734c9a2e355436e5d03b22383ed60cf13cd0c18fbfe3dcbcf"
dependencies = [
"async-trait",
"axum-core",
"bytes",
"futures-util",
"http 1.1.0",
"http-body 1.0.1",
"http-body-util",
"itoa 1.0.11",
"matchit",
"memchr",
"mime",
"percent-encoding",
"pin-project-lite",
"rustversion",
"serde",
"sync_wrapper 1.0.1",
"tower",
"tower-layer",
"tower-service",
]
[[package]]
name = "axum-core"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a15c63fd72d41492dc4f497196f5da1fb04fb7529e631d73630d1b491e47a2e3"
dependencies = [
"async-trait",
"bytes",
"futures-util",
"http 1.1.0",
"http-body 1.0.1",
"http-body-util",
"mime",
"pin-project-lite",
"rustversion",
"sync_wrapper 0.1.2",
"tower-layer",
"tower-service",
]
[[package]]
name = "backtrace"
version = "0.3.73"
@@ -960,12 +914,6 @@ dependencies = [
"error-code",
]
[[package]]
name = "cobs"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67ba02a97a2bd10f4b59b25c7973101c79642302776489e030cd13cdab09ed15"
[[package]]
name = "cocoa"
version = "0.25.0"
@@ -1176,12 +1124,6 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "critical-section"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7059fff8937831a9ae6f0fe4d658ffabf58f2ca96aa9dec1c889f936f705f216"
[[package]]
name = "crossbeam"
version = "0.8.4"
@@ -1597,11 +1539,12 @@ checksum = "0d6ef0072f8a535281e4876be788938b528e9a1d43900b82c2569af7da799125"
[[package]]
name = "easytier"
version = "1.2.3"
version = "2.0.3"
dependencies = [
"aes-gcm",
"anyhow",
"async-recursion",
"async-ringbuf",
"async-stream",
"async-trait",
"atomic-shim",
@@ -1623,7 +1566,9 @@ dependencies = [
"derivative",
"encoding",
"futures",
"futures-util",
"gethostname 0.5.0",
"git-version",
"globwalk",
"http 1.1.0",
"humansize",
@@ -1637,18 +1582,22 @@ dependencies = [
"petgraph",
"pin-project-lite",
"pnet",
"postcard",
"prost",
"prost-build",
"prost-types",
"quinn",
"rand 0.8.5",
"rcgen",
"regex",
"reqwest 0.11.27",
"ring 0.17.8",
"ringbuf",
"rpc_build",
"rstest",
"rust-i18n",
"rustls",
"serde",
"serde_json",
"serial_test",
"smoltcp",
"socket2",
@@ -1656,7 +1605,6 @@ dependencies = [
"sys-locale",
"tabled",
"tachyonix",
"tarpc",
"thiserror",
"time",
"timedmap",
@@ -1667,7 +1615,6 @@ dependencies = [
"tokio-util",
"tokio-websockets",
"toml 0.8.19",
"tonic",
"tonic-build",
"tracing",
"tracing-appender",
@@ -1684,7 +1631,7 @@ dependencies = [
[[package]]
name = "easytier-gui"
version = "1.2.3"
version = "2.0.3"
dependencies = [
"anyhow",
"chrono",
@@ -1735,12 +1682,6 @@ version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ef6b89e5b37196644d8796de5268852ff179b44e96276cf4290264843743bb7"
[[package]]
name = "embedded-io"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ef1a6892d9eef45c8fa6b9e0086428a2cca8491aca8f787c534a3d6d0bcb3ced"
[[package]]
name = "encoding"
version = "0.2.33"
@@ -2372,6 +2313,26 @@ dependencies = [
"winapi",
]
[[package]]
name = "git-version"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ad568aa3db0fcbc81f2f116137f263d7304f512a1209b35b85150d3ef88ad19"
dependencies = [
"git-version-macro",
]
[[package]]
name = "git-version-macro"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53010ccb100b96a67bc32c0175f0ed1426b31b655d562898e57325f81c023ac0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.74",
]
[[package]]
name = "glib"
version = "0.18.5"
@@ -2531,25 +2492,6 @@ dependencies = [
"tracing",
]
[[package]]
name = "h2"
version = "0.4.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa82e28a107a8cc405f0839610bdc9b15f1e25ec7d696aa5cf173edbcb1486ab"
dependencies = [
"atomic-waker",
"bytes",
"fnv",
"futures-core",
"futures-sink",
"http 1.1.0",
"indexmap 2.4.0",
"slab",
"tokio",
"tokio-util",
"tracing",
]
[[package]]
name = "half"
version = "2.4.1"
@@ -2560,15 +2502,6 @@ dependencies = [
"crunchy",
]
[[package]]
name = "hash32"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0c35f58762feb77d74ebe43bdbc3210f09be9fe6742234d573bacc26ed92b67"
dependencies = [
"byteorder",
]
[[package]]
name = "hash32"
version = "0.3.1"
@@ -2590,27 +2523,13 @@ version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
[[package]]
name = "heapless"
version = "0.7.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdc6457c0eb62c71aac4bc17216026d8410337c4126773b9c5daba343f17964f"
dependencies = [
"atomic-polyfill",
"hash32 0.2.1",
"rustc_version",
"serde",
"spin 0.9.8",
"stable_deref_trait",
]
[[package]]
name = "heapless"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bfb9eb618601c89945a70e254898da93b13be0388091d42117462b265bb3fad"
dependencies = [
"hash32 0.3.1",
"hash32",
"stable_deref_trait",
]
@@ -2753,12 +2672,6 @@ dependencies = [
"libm",
]
[[package]]
name = "humantime"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4"
[[package]]
name = "hyper"
version = "0.14.30"
@@ -2769,7 +2682,7 @@ dependencies = [
"futures-channel",
"futures-core",
"futures-util",
"h2 0.3.26",
"h2",
"http 0.2.12",
"http-body 0.4.6",
"httparse",
@@ -2792,11 +2705,9 @@ dependencies = [
"bytes",
"futures-channel",
"futures-util",
"h2 0.4.5",
"http 1.1.0",
"http-body 1.0.1",
"httparse",
"httpdate",
"itoa 1.0.11",
"pin-project-lite",
"smallvec",
@@ -2804,19 +2715,6 @@ dependencies = [
"want",
]
[[package]]
name = "hyper-timeout"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3203a961e5c83b6f5498933e78b6b263e208c197b63e9c6c53cc82ffd3f63793"
dependencies = [
"hyper 1.4.1",
"hyper-util",
"pin-project-lite",
"tokio",
"tower-service",
]
[[package]]
name = "hyper-tls"
version = "0.5.0"
@@ -3379,12 +3277,6 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2532096657941c2fea9c289d370a250971c689d4f143798ff67113ec042024a5"
[[package]]
name = "matchit"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e7465ac9959cc2b1404e8e2367b43684a6d13790fe23056cc8c6c5a6b7bcb94"
[[package]]
name = "md5"
version = "0.7.0"
@@ -3953,25 +3845,6 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "opentelemetry"
version = "0.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6105e89802af13fdf48c49d7646d3b533a70e536d818aae7e78ba0433d01acb8"
dependencies = [
"async-trait",
"crossbeam-channel",
"futures-channel",
"futures-executor",
"futures-util",
"js-sys",
"lazy_static",
"percent-encoding",
"pin-project",
"rand 0.8.5",
"thiserror",
]
[[package]]
name = "option-ext"
version = "0.2.0"
@@ -4481,18 +4354,6 @@ dependencies = [
"universal-hash",
]
[[package]]
name = "postcard"
version = "1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a55c51ee6c0db07e68448e336cf8ea4131a620edefebf9893e759b2d793420f8"
dependencies = [
"cobs",
"embedded-io",
"heapless 0.7.17",
"serde",
]
[[package]]
name = "powerfmt"
version = "0.2.0"
@@ -4605,9 +4466,9 @@ dependencies = [
[[package]]
name = "prost"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e13db3d3fde688c61e2446b4d843bc27a7e8af269a69440c0308021dc92333cc"
checksum = "3b2ecbe40f08db5c006b5764a2645f7f3f141ce756412ac9e1dd6087e6d32995"
dependencies = [
"bytes",
"prost-derive",
@@ -4615,9 +4476,9 @@ dependencies = [
[[package]]
name = "prost-build"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5bb182580f71dd070f88d01ce3de9f4da5021db7115d2e1c3605a754153b77c1"
checksum = "f8650aabb6c35b860610e9cff5dc1af886c9e25073b7b1712a68972af4281302"
dependencies = [
"bytes",
"heck 0.5.0",
@@ -4636,9 +4497,9 @@ dependencies = [
[[package]]
name = "prost-derive"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18bec9b0adc4eba778b33684b7ba3e7137789434769ee3ce3930463ef904cfca"
checksum = "acf0c195eebb4af52c752bec4f52f645da98b6e92077a04110c7f349477ae5ac"
dependencies = [
"anyhow",
"itertools 0.13.0",
@@ -4649,9 +4510,9 @@ dependencies = [
[[package]]
name = "prost-types"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cee5168b05f49d4b0ca581206eb14a7b22fafd963efe729ac48eb03266e25cc2"
checksum = "60caa6738c7369b940c3d49246a8d1749323674c65cb13010134f5c9bad5b519"
dependencies = [
"prost",
]
@@ -4938,7 +4799,7 @@ dependencies = [
"encoding_rs",
"futures-core",
"futures-util",
"h2 0.3.26",
"h2",
"http 0.2.12",
"http-body 0.4.6",
"hyper 0.14.30",
@@ -5034,6 +4895,23 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "ringbuf"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fb0d14419487131a897031a7e81c3b23d092296984fac4eb6df48cc4e3b2f3c5"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "rpc_build"
version = "0.1.0"
dependencies = [
"heck 0.5.0",
"prost-build",
]
[[package]]
name = "rstest"
version = "0.18.2"
@@ -5666,7 +5544,7 @@ dependencies = [
"byteorder",
"cfg-if",
"defmt",
"heapless 0.8.0",
"heapless",
"managed",
]
@@ -5999,40 +5877,6 @@ version = "0.12.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61c41af27dd6d1e27b1b16b489db798443478cef1f06a660c96db617ba5de3b1"
[[package]]
name = "tarpc"
version = "0.32.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f07cb5fb67b0a90ea954b5ffd2fac9944ffef5937c801b987d3f8913f0c37348"
dependencies = [
"anyhow",
"fnv",
"futures",
"humantime",
"opentelemetry",
"pin-project",
"rand 0.8.5",
"serde",
"static_assertions",
"tarpc-plugins",
"thiserror",
"tokio",
"tokio-util",
"tracing",
"tracing-opentelemetry",
]
[[package]]
name = "tarpc-plugins"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ee42b4e559f17bce0385ebf511a7beb67d5cc33c12c96b7f4e9789919d9c10f"
dependencies = [
"proc-macro2",
"quote",
"syn 1.0.109",
]
[[package]]
name = "tauri"
version = "2.0.0-rc.2"
@@ -6589,7 +6433,6 @@ dependencies = [
"futures-core",
"futures-sink",
"pin-project-lite",
"slab",
"tokio",
]
@@ -6695,36 +6538,6 @@ dependencies = [
"winnow 0.6.18",
]
[[package]]
name = "tonic"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38659f4a91aba8598d27821589f5db7dddd94601e7a01b1e485a50e5484c7401"
dependencies = [
"async-stream",
"async-trait",
"axum",
"base64 0.22.1",
"bytes",
"h2 0.4.5",
"http 1.1.0",
"http-body 1.0.1",
"http-body-util",
"hyper 1.4.1",
"hyper-timeout",
"hyper-util",
"percent-encoding",
"pin-project",
"prost",
"socket2",
"tokio",
"tokio-stream",
"tower",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
name = "tonic-build"
version = "0.12.1"
@@ -6746,16 +6559,11 @@ checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c"
dependencies = [
"futures-core",
"futures-util",
"indexmap 1.9.3",
"pin-project",
"pin-project-lite",
"rand 0.8.5",
"slab",
"tokio",
"tokio-util",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
@@ -6826,19 +6634,6 @@ dependencies = [
"tracing-core",
]
[[package]]
name = "tracing-opentelemetry"
version = "0.17.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fbbe89715c1dbbb790059e2565353978564924ee85017b5fff365c872ff6721f"
dependencies = [
"once_cell",
"opentelemetry",
"tracing",
"tracing-core",
"tracing-subscriber",
]
[[package]]
name = "tracing-subscriber"
version = "0.3.18"
-1
@@ -10,4 +10,3 @@ panic = "unwind"
panic = "abort"
lto = true
codegen-units = 1
strip = true
+10 -71
@@ -4,84 +4,23 @@
"path": "."
},
{
"name": "gui",
"path": "easytier-gui"
},
{
"name": "core",
"path": "easytier"
},
{
"name": "vpnservice",
"path": "tauri-plugin-vpnservice"
}
],
"settings": {
"eslint.experimental.useFlatConfig": true,
"i18n-ally.sourceLanguage": "cn",
"i18n-ally.keystyle": "nested",
"i18n-ally.sortKeys": true,
// Disable the default formatter
"prettier.enable": false,
"editor.formatOnSave": false,
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit",
"source.organizeImports": "never"
},
"eslint.rules.customizations": [
{
"rule": "style/*",
"severity": "off"
},
{
"rule": "style/eol-last",
"severity": "error"
},
{
"rule": "format/*",
"severity": "off"
},
{
"rule": "*-indent",
"severity": "off"
},
{
"rule": "*-spacing",
"severity": "off"
},
{
"rule": "*-spaces",
"severity": "off"
},
{
"rule": "*-order",
"severity": "off"
},
{
"rule": "*-dangle",
"severity": "off"
},
{
"rule": "*-newline",
"severity": "off"
},
{
"rule": "*quotes",
"severity": "off"
},
{
"rule": "*semi",
"severity": "off"
}
],
"eslint.validate": [
"code-workspace",
"javascript",
"javascriptreact",
"typescript",
"typescriptreact",
"vue",
"html",
"markdown",
"json",
"jsonc",
"yaml",
"toml",
"gql",
"graphql"
],
"i18n-ally.localesPaths": [
"easytier-gui/locales"
]
}
}
+14 -5
@@ -47,7 +47,7 @@ EasyTier is a simple, safe and decentralized VPN networking solution implemented
3. **Install from source code**
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
4. **Install by Docker Compose**
@@ -200,20 +200,20 @@ Subnet proxy information will automatically sync to each node in the virtual net
### Networking without Public IP
EasyTier supports networking using shared public nodes. The currently deployed shared public node is ``tcp://easytier.public.kkrainbow.top:11010``.
EasyTier supports networking using shared public nodes. The currently deployed shared public node is ``tcp://public.easytier.top:11010``.
When using shared nodes, each node entering the network needs to provide the same ``--network-name`` and ``--network-secret`` parameters as the unique identifier of the network.
Taking two nodes as an example, Node A executes:
```sh
sudo easytier-core -i 10.144.144.1 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
sudo easytier-core -i 10.144.144.1 --network-name abc --network-secret abc -e tcp://public.easytier.top:11010
```
Node B executes
```sh
sudo easytier-core --ipv4 10.144.144.2 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
sudo easytier-core --ipv4 10.144.144.2 --network-name abc --network-secret abc -e tcp://public.easytier.top:11010
```
After the command is successfully executed, Node A can access Node B through the virtual IP 10.144.144.2.
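This compare also allows the virtual IPv4 to be written with a prefix length ("allow use ipv4 address in any cidr"), and the GUI's address field offers CIDR completions for a bare address. A minimal standalone sketch of that suggestion behavior, mirroring the `searchInetSuggestions` helper changed here (the 0–31 range is the helper's own):

```typescript
// Suggest CIDR completions for the virtual IPv4 field, mirroring the
// GUI's searchInetSuggestions helper: a query that already contains a
// '/' is taken verbatim, otherwise every prefix length 0..31 is offered.
function suggestInet(query: string): string[] {
  if (query.includes('/'))
    return [query]
  return Array.from({ length: 32 }, (_, i) => `${query}/${i}`)
}

// suggestInet('10.144.144.1') → ['10.144.144.1/0', '10.144.144.1/1', …, '10.144.144.1/31']
```

As in the diffed helper, a query such as `10.144.144.1/24` passes through unchanged.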
@@ -279,7 +279,16 @@ Before using the Client Config, you need to modify the Interface Address and Pee
### Self-Hosted Public Server
Each node can act as a relay node for other users' networks. Simply start EasyTier without any parameters.
Every virtual network (with the same network name and secret) can act as a public server cluster. Nodes of other networks can connect to arbitrary nodes in the public server cluster to discover each other without a public IP.
Running your own public server cluster is exactly the same as running a virtual network, except that you can skip configuring the ipv4 address.
You can also join the official public server cluster with the following command:
```
sudo easytier-core --network-name easytier --network-secret easytier -p tcp://public.easytier.top:11010
```
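For peer URLs like the one above, the GUI expands a bare host by prefixing each supported protocol and appending its default port (the `searchUrlSuggestions` helper and `protos` map reworked in this compare). A minimal standalone sketch of that logic, using the same protocol-to-port map:

```typescript
// Default ports per protocol, as used by the EasyTier GUI suggestions.
const protos: { [proto: string]: number } = { tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012 }

function suggestPeerUrls(query: string): string[] {
  // A query that already carries a scheme ("tcp://host:port") is kept
  // as-is, but only if it parses as a valid URL.
  if (/^\w+:.*/.test(query)) {
    try {
      new URL(query) // throws on an invalid URL
      return [query]
    }
    catch {
      return []
    }
  }
  // Otherwise, prefix every protocol and append its default port,
  // unless the query already ends with ":<port>".
  const hasPort = /:\d+$/.test(query)
  return Object.entries(protos).map(
    ([proto, port]) => hasPort ? `${proto}://${query}` : `${proto}://${query}:${port}`,
  )
}

// suggestPeerUrls('public.easytier.top')
//   → ['tcp://public.easytier.top:11010', 'udp://public.easytier.top:11010', …]
```

This is a sketch of the diffed helper's behavior, not the shipped component code; the component also feeds the same list into the public-server autocomplete.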
### Configurations
+13 -5
@@ -47,7 +47,7 @@
3. **通过源码安装**
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
4. **通过Docker Compose安装**
@@ -199,20 +199,20 @@ sudo easytier-core --ipv4 10.144.144.2 -n 10.1.1.0/24
### 无公网IP组网
EasyTier 支持共享公网节点进行组网。目前已部署共享的公网节点 ``tcp://easytier.public.kkrainbow.top:11010``。
EasyTier 支持共享公网节点进行组网。目前已部署共享的公网节点 ``tcp://public.easytier.top:11010``。
使用共享节点时,需要每个入网节点提供相同的 ``--network-name`` 和 ``--network-secret`` 参数,作为网络的唯一标识。
以双节点为例,节点 A 执行:
```sh
sudo easytier-core -i 10.144.144.1 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
sudo easytier-core -i 10.144.144.1 --network-name abc --network-secret abc -e tcp://public.easytier.top:11010
```
节点 B 执行
```sh
sudo easytier-core --ipv4 10.144.144.2 --network-name abc --network-secret abc -e tcp://easytier.public.kkrainbow.top:11010
sudo easytier-core --ipv4 10.144.144.2 --network-name abc --network-secret abc -e tcp://public.easytier.top:11010
```
命令执行成功后,节点 A 即可通过虚拟 IP 10.144.144.2 访问节点 B。
@@ -282,7 +282,15 @@ connected_clients:
### 自建公共中转服务器
每个节点都可作为其他用户网络的中转节点。不带任何参数直接启动 EasyTier 即可
每个虚拟网络(通过相同的网络名称和密钥建链)都可以充当公共服务器集群。其他网络的节点可以连接到公共服务器集群中的任意节点,无需公共 IP 即可发现彼此
运行自建的公共服务器集群与运行虚拟网络完全相同,不过可以跳过配置 ipv4 地址。
也可以使用以下命令加入官方公共服务器集群,后续将实现公共服务器集群的节点间负载均衡:
```
sudo easytier-core --network-name easytier --network-secret easytier -p tcp://public.easytier.top:11010
```
### 其他配置
+79 -3
@@ -1,5 +1,81 @@
{
"i18n-ally.localesPaths": [
"locales"
"cSpell.words": [
"easytier",
"Vite",
"vueuse",
"pinia",
"demi",
"antfu",
"iconify",
"intlify",
"vitejs",
"unplugin",
"pnpm"
],
"i18n-ally.localesPaths": "locales",
"editor.formatOnSave": false,
// Auto fix
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit",
"source.organizeImports": "never"
},
// Silence the stylistic rules in your IDE, but still auto fix them
"eslint.rules.customizations": [
{
"rule": "style/*",
"severity": "off"
},
{
"rule": "format/*",
"severity": "off"
},
{
"rule": "*-indent",
"severity": "off"
},
{
"rule": "*-spacing",
"severity": "off"
},
{
"rule": "*-spaces",
"severity": "off"
},
{
"rule": "*-order",
"severity": "off"
},
{
"rule": "*-dangle",
"severity": "off"
},
{
"rule": "*-newline",
"severity": "off"
},
{
"rule": "*quotes",
"severity": "off"
},
{
"rule": "*semi",
"severity": "off"
}
],
// The following is optional.
// It's better to put this under the project setting `.vscode/settings.json`
// to avoid conflicts when working with different eslint configs
// that do not support all formats.
"eslint.validate": [
"javascript",
"javascriptreact",
"typescript",
"typescriptreact",
"vue",
"html",
"markdown",
"json",
"jsonc",
"yaml"
]
}
}
+6 -2
@@ -14,6 +14,11 @@ npm install -g pnpm
### For Desktop (Win/Mac/Linux)
```
cd ../tauri-plugin-vpnservice
pnpm install
pnpm build
cd ../easytier-gui
pnpm install
pnpm tauri build
```
@@ -34,7 +39,6 @@ rustup target add aarch64-linux-android
install java 20
```
The Java version depends on the Gradle version specified in `easytier-gui\src-tauri\gen\android\build.gradle.kts`.
See the [Gradle compatibility matrix](https://docs.gradle.org/current/userguide/compatibility.html) for details.
@@ -43,4 +47,4 @@ See [Gradle compatibility matrix](https://docs.gradle.org/current/userguide/comp
pnpm install
pnpm tauri android init
pnpm tauri android build
```
```
+38
@@ -13,6 +13,7 @@ proxy_cidrs: 子网代理CIDR
enable_vpn_portal: 启用VPN门户
vpn_portal_listen_port: 监听端口
vpn_portal_client_network: 客户端子网
dev_name: TUN接口名称
advanced_settings: 高级设置
basic_settings: 基础设置
listener_urls: 监听地址
@@ -45,11 +46,13 @@ enable_auto_launch: 开启开机自启
exit: 退出
chips_placeholder: 例如: {0}, 按回车添加
hostname_placeholder: '留空默认为主机名: {0}'
dev_name_placeholder: 注意:当多个网络同时使用相同的TUN接口名称时,将会在设置TUN的IP时产生冲突,留空以自动生成随机名称
off_text: 点击关闭
on_text: 点击开启
show_config: 显示配置
close: 关闭
use_latency_first: 延迟优先模式
my_node_info: 当前节点信息
peer_count: 已连接
upload: 上传
@@ -66,6 +69,12 @@ upload_bytes: 上传
download_bytes: 下载
loss_rate: 丢包率
status:
version: 内核版本
local: 本机
server: 服务器
relay: 中继
run_network: 运行网络
stop_network: 停止网络
network_running: 运行中
@@ -75,3 +84,32 @@ dhcp_experimental_warning: 实验性警告!使用DHCP时如果组网环境中
tray:
show: 显示 / 隐藏
exit: 退出
about:
title: 关于
version: 版本
author: 作者
homepage: 主页
license: 许可证
description: 一个简单、安全、去中心化的内网穿透 VPN 组网方案,使用 Rust 语言和 Tokio 框架实现。
check_update: 检查更新
event:
Unknown: 未知
TunDeviceReady: Tun设备就绪
TunDeviceError: Tun设备错误
PeerAdded: 对端添加
PeerRemoved: 对端移除
PeerConnAdded: 对端连接添加
PeerConnRemoved: 对端连接移除
ListenerAdded: 监听器添加
ListenerAddFailed: 监听器添加失败
ListenerAcceptFailed: 监听器接受连接失败
ConnectionAccepted: 连接已接受
ConnectionError: 连接错误
Connecting: 正在连接
ConnectError: 连接错误
VpnPortalClientConnected: VPN门户客户端已连接
VpnPortalClientDisconnected: VPN门户客户端已断开连接
DhcpIpv4Changed: DHCP IPv4地址更改
DhcpIpv4Conflicted: DHCP IPv4地址冲突
+38 -1
@@ -13,6 +13,7 @@ proxy_cidrs: Subnet Proxy CIDRs
enable_vpn_portal: Enable VPN Portal
vpn_portal_listen_port: VPN Portal Listen Port
vpn_portal_client_network: Client Sub Network
dev_name: TUN interface name
advanced_settings: Advanced Settings
basic_settings: Basic Settings
listener_urls: Listener URLs
@@ -43,9 +44,10 @@ logging_copy_dir: Copy Log Path
disable_auto_launch: Disable Launch on Reboot
enable_auto_launch: Enable Launch on Reboot
exit: Exit
use_latency_first: Latency First Mode
chips_placeholder: 'e.g: {0}, press Enter to add'
hostname_placeholder: 'Leave blank and default to host name: {0}'
dev_name_placeholder: 'Note: When multiple networks use the same TUN interface name at the same time, there will be a conflict when setting the TUN''s IP. Leave blank to automatically generate a random name.'
off_text: Press to disable
on_text: Press to enable
show_config: Show Config
@@ -66,6 +68,12 @@ upload_bytes: Upload
download_bytes: Download
loss_rate: Loss Rate
status:
version: Version
local: Local
server: Server
relay: Relay
run_network: Run Network
stop_network: Stop Network
network_running: running
@@ -75,3 +83,32 @@ dhcp_experimental_warning: Experimental warning! if there is an IP conflict in t
tray:
show: Show / Hide
exit: Exit
about:
title: About
version: Version
author: Author
homepage: Homepage
license: License
description: 'EasyTier is a simple, safe and decentralized VPN networking solution implemented with the Rust language and Tokio framework.'
check_update: Check Update
event:
Unknown: Unknown
TunDeviceReady: TunDeviceReady
TunDeviceError: TunDeviceError
PeerAdded: PeerAdded
PeerRemoved: PeerRemoved
PeerConnAdded: PeerConnAdded
PeerConnRemoved: PeerConnRemoved
ListenerAdded: ListenerAdded
ListenerAddFailed: ListenerAddFailed
ListenerAcceptFailed: ListenerAcceptFailed
ConnectionAccepted: ConnectionAccepted
ConnectionError: ConnectionError
Connecting: Connecting
ConnectError: ConnectError
VpnPortalClientConnected: VpnPortalClientConnected
VpnPortalClientDisconnected: VpnPortalClientDisconnected
DhcpIpv4Changed: DhcpIpv4Changed
DhcpIpv4Conflicted: DhcpIpv4Conflicted
+38 -36
@@ -1,7 +1,7 @@
{
"name": "easytier-gui",
"type": "module",
"version": "1.2.3",
"version": "2.0.3",
"private": true,
"scripts": {
"dev": "vite",
@@ -12,50 +12,52 @@
"lint:fix": "eslint . --ignore-pattern src-tauri --fix"
},
"dependencies": {
"@primevue/themes": "^4.0.4",
"@tauri-apps/plugin-autostart": "2.0.0-rc.0",
"@tauri-apps/plugin-clipboard-manager": "2.0.0-rc.0",
"@tauri-apps/plugin-os": "2.0.0-rc.0",
"@tauri-apps/plugin-process": "2.0.0-rc.0",
"@tauri-apps/plugin-shell": "2.0.0-rc.0",
"aura": "link:@primevue/themes/aura",
"pinia": "^2.2.1",
"@primevue/themes": "^4.1.0",
"@tauri-apps/plugin-autostart": "2.0.0-rc.1",
"@tauri-apps/plugin-clipboard-manager": "2.0.0-rc.1",
"@tauri-apps/plugin-os": "2.0.0-rc.1",
"@tauri-apps/plugin-process": "2.0.0-rc.1",
"@tauri-apps/plugin-shell": "2.0.0-rc.1",
"@vueuse/core": "^11.1.0",
"aura": "link:@primevue\\themes\\aura",
"ip-num": "1.5.1",
"pinia": "^2.2.4",
"primeflex": "^3.3.1",
"primeicons": "^7.0.0",
"primevue": "^4.0.4",
"tauri-plugin-vpnservice-api": "link:../tauri-plugin-vpnservice",
"vue": "^3.4.38",
"vue-i18n": "^9.13.1",
"vue-router": "^4.4.3"
"primevue": "^4.1.0",
"tauri-plugin-vpnservice-api": "link:..\\tauri-plugin-vpnservice",
"vue": "^3.5.11",
"vue-i18n": "^10.0.4",
"vue-router": "^4.4.5"
},
"devDependencies": {
"@antfu/eslint-config": "^2.25.1",
"@intlify/unplugin-vue-i18n": "^4.0.0",
"@primevue/auto-import-resolver": "^4.0.4",
"@sveltejs/vite-plugin-svelte": "^3.1.1",
"@antfu/eslint-config": "^3.7.3",
"@intlify/unplugin-vue-i18n": "^5.2.0",
"@primevue/auto-import-resolver": "^4.1.0",
"@tauri-apps/api": "2.0.0-rc.0",
"@tauri-apps/cli": "2.0.0-rc.3",
"@types/node": "^20.14.15",
"@types/uuid": "^9.0.8",
"@vitejs/plugin-vue": "^5.1.2",
"@vue-macros/volar": "^0.19.1",
"@types/node": "^22.7.4",
"@types/uuid": "^10.0.0",
"@vitejs/plugin-vue": "^5.1.4",
"@vue-macros/volar": "0.30.3",
"autoprefixer": "^10.4.20",
"eslint": "^9.9.0",
"eslint": "^9.12.0",
"eslint-plugin-format": "^0.1.2",
"internal-ip": "^8.0.0",
"postcss": "^8.4.41",
"tailwindcss": "^3.4.10",
"typescript": "^5.5.4",
"unplugin-auto-import": "^0.17.8",
"postcss": "^8.4.47",
"tailwindcss": "^3.4.13",
"typescript": "^5.6.2",
"unplugin-auto-import": "^0.18.3",
"unplugin-vue-components": "^0.27.4",
"unplugin-vue-macros": "^2.11.5",
"unplugin-vue-macros": "^2.12.3",
"unplugin-vue-markdown": "^0.26.2",
"unplugin-vue-router": "^0.8.8",
"uuid": "^9.0.1",
"vite": "^5.4.1",
"vite-plugin-vue-devtools": "^7.3.8",
"unplugin-vue-router": "^0.10.8",
"uuid": "^10.0.0",
"vite": "^5.4.8",
"vite-plugin-vue-devtools": "^7.4.6",
"vite-plugin-vue-layouts": "^0.11.0",
"vue-i18n": "^9.13.1",
"vue-tsc": "^2.0.29"
}
}
"vue-i18n": "^10.0.0",
"vue-tsc": "^2.1.6"
},
"packageManager": "pnpm@9.12.1+sha512.e5a7e52a4183a02d5931057f7a0dbff9d5e9ce3161e33fa68ae392125b79282a8a8a470a51dfc8a0ed86221442eb2fb57019b0990ed24fab519bf0e1bc5ccfc4"
}
+2161 -2043
File diff suppressed because it is too large
@@ -1,4 +0,0 @@
[build]
target = "x86_64-unknown-linux-gnu"
[target]
+1 -1
@@ -1,6 +1,6 @@
[package]
name = "easytier-gui"
version = "1.2.3"
version = "2.0.3"
description = "EasyTier GUI"
authors = ["you"]
edition = "2021"
@@ -1,4 +1,5 @@
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "migrated",
"description": "permissions that were migrated from v1",
"local": true,
@@ -13,6 +14,7 @@
"core:window:allow-show",
"core:window:allow-hide",
"core:window:allow-set-focus",
"core:window:allow-set-title",
"core:app:default",
"core:resources:default",
"core:menu:default",
@@ -24,7 +26,6 @@
"shell:default",
"process:default",
"clipboard-manager:default",
"core:tray:default",
"core:tray:allow-new",
"core:tray:allow-set-menu",
"core:tray:allow-set-title",
+26 -7
@@ -7,7 +7,7 @@ use anyhow::Context;
use dashmap::DashMap;
use easytier::{
common::config::{
ConfigLoader, FileLoggerConfig, NetworkIdentity, PeerConfig, TomlConfigLoader,
ConfigLoader, FileLoggerConfig, Flags, NetworkIdentity, PeerConfig, TomlConfigLoader,
VpnPortalConfig,
},
launcher::{NetworkInstance, NetworkInstanceRunningInfo},
@@ -41,6 +41,7 @@ struct NetworkConfig {
dhcp: bool,
virtual_ipv4: String,
network_length: i32,
hostname: Option<String>,
network_name: String,
network_secret: String,
@@ -60,6 +61,9 @@ struct NetworkConfig {
listener_urls: Vec<String>,
rpc_port: i32,
latency_first: bool,
dev_name: String,
}
impl NetworkConfig {
@@ -80,9 +84,15 @@ impl NetworkConfig {
if !self.dhcp {
if self.virtual_ipv4.len() > 0 {
cfg.set_ipv4(Some(self.virtual_ipv4.parse().with_context(|| {
format!("failed to parse ipv4 address: {}", self.virtual_ipv4)
})?))
let ip = format!("{}/{}", self.virtual_ipv4, self.network_length)
.parse()
.with_context(|| {
format!(
"failed to parse ipv4 inet address: {}, {}",
self.virtual_ipv4, self.network_length
)
})?;
cfg.set_ipv4(Some(ip));
}
}
@@ -136,7 +146,7 @@ impl NetworkConfig {
}
cfg.set_rpc_portal(
format!("127.0.0.1:{}", self.rpc_port)
format!("0.0.0.0:{}", self.rpc_port)
.parse()
.with_context(|| format!("failed to parse rpc portal port: {}", self.rpc_port))?,
);
@@ -160,7 +170,10 @@ impl NetworkConfig {
})?,
});
}
let mut flags = Flags::default();
flags.latency_first = self.latency_first;
flags.dev_name = self.dev_name.clone();
cfg.set_flags(flags);
Ok(cfg)
}
}
@@ -171,6 +184,11 @@ static INSTANCE_MAP: once_cell::sync::Lazy<DashMap<String, NetworkInstance>> =
static mut LOGGER_LEVEL_SENDER: once_cell::sync::Lazy<Option<NewFilterSender>> =
once_cell::sync::Lazy::new(Default::default);
#[tauri::command]
fn easytier_version() -> Result<String, String> {
Ok(easytier::VERSION.to_string())
}
#[tauri::command]
fn is_autostart() -> Result<bool, String> {
let args: Vec<String> = std::env::args().collect();
@@ -365,7 +383,8 @@ pub fn run() {
get_os_hostname,
set_logging_level,
set_tun_fd,
is_autostart
is_autostart,
easytier_version
])
.on_window_event(|_win, event| match event {
#[cfg(not(target_os = "android"))]
+1 -1
@@ -17,7 +17,7 @@
"createUpdaterArtifacts": false
},
"productName": "easytier-gui",
"version": "1.2.3",
"version": "2.0.3",
"identifier": "com.kkrainbow.easytier",
"plugins": {},
"app": {
+9
@@ -1,3 +1,12 @@
<script setup lang="ts">
import { getCurrentWindow } from '@tauri-apps/api/window'
import pkg from '~/../package.json'
onBeforeMount(async () => {
await getCurrentWindow().setTitle(`Easytier GUI: v${pkg.version}`)
})
</script>
<template>
<RouterView />
</template>
+26 -10
@@ -3,6 +3,7 @@
// @ts-nocheck
// noinspection JSUnusedGlobalSymbols
// Generated by unplugin-auto-import
// biome-ignore lint: disable
export {}
declare global {
const EffectScope: typeof import('vue')['EffectScope']
@@ -20,10 +21,12 @@ declare global {
const definePage: typeof import('unplugin-vue-router/runtime')['definePage']
const defineStore: typeof import('pinia')['defineStore']
const effectScope: typeof import('vue')['effectScope']
const event2human: typeof import('./composables/utils')['event2human']
const generateMenuItem: typeof import('./composables/tray')['generateMenuItem']
const getActivePinia: typeof import('pinia')['getActivePinia']
const getCurrentInstance: typeof import('vue')['getCurrentInstance']
const getCurrentScope: typeof import('vue')['getCurrentScope']
const getEasytierVersion: typeof import('./composables/network')['getEasytierVersion']
const getOsHostname: typeof import('./composables/network')['getOsHostname']
const h: typeof import('vue')['h']
const initMobileService: typeof import('./composables/mobile_vpn')['initMobileService']
@@ -42,10 +45,12 @@ declare global {
const mapWritableState: typeof import('pinia')['mapWritableState']
const markRaw: typeof import('vue')['markRaw']
const nextTick: typeof import('vue')['nextTick']
const num2ipv4: typeof import('./composables/utils')['num2ipv4']
const num2ipv6: typeof import('./composables/utils')['num2ipv6']
const onActivated: typeof import('vue')['onActivated']
const onBeforeMount: typeof import('vue')['onBeforeMount']
const onBeforeRouteLeave: typeof import('vue-router/auto')['onBeforeRouteLeave']
const onBeforeRouteUpdate: typeof import('vue-router/auto')['onBeforeRouteUpdate']
const onBeforeRouteLeave: typeof import('vue-router')['onBeforeRouteLeave']
const onBeforeRouteUpdate: typeof import('vue-router')['onBeforeRouteUpdate']
const onBeforeUnmount: typeof import('vue')['onBeforeUnmount']
const onBeforeUpdate: typeof import('vue')['onBeforeUpdate']
const onDeactivated: typeof import('vue')['onDeactivated']
@@ -57,6 +62,7 @@ declare global {
const onServerPrefetch: typeof import('vue')['onServerPrefetch']
const onUnmounted: typeof import('vue')['onUnmounted']
const onUpdated: typeof import('vue')['onUpdated']
const onWatcherCleanup: typeof import('vue')['onWatcherCleanup']
const parseNetworkConfig: typeof import('./composables/network')['parseNetworkConfig']
const prepareVpnService: typeof import('./composables/mobile_vpn')['prepareVpnService']
const provide: typeof import('vue')['provide']
@@ -78,6 +84,7 @@ declare global {
const shallowReadonly: typeof import('vue')['shallowReadonly']
const shallowRef: typeof import('vue')['shallowRef']
const storeToRefs: typeof import('pinia')['storeToRefs']
const timeAgoCn: typeof import('./composables/utils')['timeAgoCn']
const toRaw: typeof import('vue')['toRaw']
const toRef: typeof import('vue')['toRef']
const toRefs: typeof import('vue')['toRefs']
@@ -88,11 +95,14 @@ declare global {
const useCssModule: typeof import('vue')['useCssModule']
const useCssVars: typeof import('vue')['useCssVars']
const useI18n: typeof import('vue-i18n')['useI18n']
const useId: typeof import('vue')['useId']
const useLink: typeof import('vue-router/auto')['useLink']
const useModel: typeof import('vue')['useModel']
const useNetworkStore: typeof import('./stores/network')['useNetworkStore']
const useRoute: typeof import('vue-router/auto')['useRoute']
const useRouter: typeof import('vue-router/auto')['useRouter']
const useRoute: typeof import('vue-router')['useRoute']
const useRouter: typeof import('vue-router')['useRouter']
const useSlots: typeof import('vue')['useSlots']
const useTemplateRef: typeof import('vue')['useTemplateRef']
const useTray: typeof import('./composables/tray')['useTray']
const watch: typeof import('vue')['watch']
const watchEffect: typeof import('vue')['watchEffect']
@@ -102,7 +112,7 @@ declare global {
// for type re-export
declare global {
// @ts-ignore
export type { Component, ComponentPublicInstance, ComputedRef, ExtractDefaultPropTypes, ExtractPropTypes, ExtractPublicPropTypes, InjectionKey, PropType, Ref, VNode, WritableComputedRef } from 'vue'
export type { Component, ComponentPublicInstance, ComputedRef, DirectiveBinding, ExtractDefaultPropTypes, ExtractPropTypes, ExtractPublicPropTypes, InjectionKey, PropType, Ref, MaybeRef, MaybeRefOrGetter, VNode, WritableComputedRef } from 'vue'
import('vue')
}
// for vue template auto import
@@ -121,13 +131,13 @@ declare module 'vue' {
readonly customRef: UnwrapRef<typeof import('vue')['customRef']>
readonly defineAsyncComponent: UnwrapRef<typeof import('vue')['defineAsyncComponent']>
readonly defineComponent: UnwrapRef<typeof import('vue')['defineComponent']>
readonly definePage: UnwrapRef<typeof import('unplugin-vue-router/runtime')['definePage']>
readonly defineStore: UnwrapRef<typeof import('pinia')['defineStore']>
readonly effectScope: UnwrapRef<typeof import('vue')['effectScope']>
readonly generateMenuItem: UnwrapRef<typeof import('./composables/tray')['generateMenuItem']>
readonly getActivePinia: UnwrapRef<typeof import('pinia')['getActivePinia']>
readonly getCurrentInstance: UnwrapRef<typeof import('vue')['getCurrentInstance']>
readonly getCurrentScope: UnwrapRef<typeof import('vue')['getCurrentScope']>
readonly getEasytierVersion: UnwrapRef<typeof import('./composables/network')['getEasytierVersion']>
readonly getOsHostname: UnwrapRef<typeof import('./composables/network')['getOsHostname']>
readonly h: UnwrapRef<typeof import('vue')['h']>
readonly initMobileVpnService: UnwrapRef<typeof import('./composables/mobile_vpn')['initMobileVpnService']>
@@ -144,10 +154,12 @@ declare module 'vue' {
readonly mapWritableState: UnwrapRef<typeof import('pinia')['mapWritableState']>
readonly markRaw: UnwrapRef<typeof import('vue')['markRaw']>
readonly nextTick: UnwrapRef<typeof import('vue')['nextTick']>
readonly num2ipv4: UnwrapRef<typeof import('./composables/utils')['num2ipv4']>
readonly num2ipv6: UnwrapRef<typeof import('./composables/utils')['num2ipv6']>
readonly onActivated: UnwrapRef<typeof import('vue')['onActivated']>
readonly onBeforeMount: UnwrapRef<typeof import('vue')['onBeforeMount']>
readonly onBeforeRouteLeave: UnwrapRef<typeof import('vue-router/auto')['onBeforeRouteLeave']>
readonly onBeforeRouteUpdate: UnwrapRef<typeof import('vue-router/auto')['onBeforeRouteUpdate']>
readonly onBeforeRouteLeave: UnwrapRef<typeof import('vue-router')['onBeforeRouteLeave']>
readonly onBeforeRouteUpdate: UnwrapRef<typeof import('vue-router')['onBeforeRouteUpdate']>
readonly onBeforeUnmount: UnwrapRef<typeof import('vue')['onBeforeUnmount']>
readonly onBeforeUpdate: UnwrapRef<typeof import('vue')['onBeforeUpdate']>
readonly onDeactivated: UnwrapRef<typeof import('vue')['onDeactivated']>
@@ -159,6 +171,7 @@ declare module 'vue' {
readonly onServerPrefetch: UnwrapRef<typeof import('vue')['onServerPrefetch']>
readonly onUnmounted: UnwrapRef<typeof import('vue')['onUnmounted']>
readonly onUpdated: UnwrapRef<typeof import('vue')['onUpdated']>
readonly onWatcherCleanup: UnwrapRef<typeof import('vue')['onWatcherCleanup']>
readonly parseNetworkConfig: UnwrapRef<typeof import('./composables/network')['parseNetworkConfig']>
readonly prepareVpnService: UnwrapRef<typeof import('./composables/mobile_vpn')['prepareVpnService']>
readonly provide: UnwrapRef<typeof import('vue')['provide']>
@@ -189,11 +202,14 @@ declare module 'vue' {
readonly useCssModule: UnwrapRef<typeof import('vue')['useCssModule']>
readonly useCssVars: UnwrapRef<typeof import('vue')['useCssVars']>
readonly useI18n: UnwrapRef<typeof import('vue-i18n')['useI18n']>
readonly useId: UnwrapRef<typeof import('vue')['useId']>
readonly useLink: UnwrapRef<typeof import('vue-router/auto')['useLink']>
readonly useModel: UnwrapRef<typeof import('vue')['useModel']>
readonly useNetworkStore: UnwrapRef<typeof import('./stores/network')['useNetworkStore']>
readonly useRoute: UnwrapRef<typeof import('vue-router/auto')['useRoute']>
readonly useRouter: UnwrapRef<typeof import('vue-router/auto')['useRouter']>
readonly useRoute: UnwrapRef<typeof import('vue-router')['useRoute']>
readonly useRouter: UnwrapRef<typeof import('vue-router')['useRouter']>
readonly useSlots: UnwrapRef<typeof import('vue')['useSlots']>
readonly useTemplateRef: UnwrapRef<typeof import('vue')['useTemplateRef']>
readonly useTray: UnwrapRef<typeof import('./composables/tray')['useTray']>
readonly watch: UnwrapRef<typeof import('vue')['watch']>
readonly watchEffect: UnwrapRef<typeof import('vue')['watchEffect']>
+27
@@ -0,0 +1,27 @@
<script setup lang="ts">
import { getEasytierVersion } from '~/composables/network'
const { t } = useI18n()
const etVersion = ref('')
onMounted(async () => {
etVersion.value = await getEasytierVersion()
})
</script>
<template>
<Card>
<template #title>
Easytier - {{ t('about.version') }}: {{ etVersion }}
</template>
<template #content>
<p class="mb-1">
{{ t('about.description') }}
</p>
</template>
</Card>
</template>
<style scoped lang="postcss">
</style>
+130 -74
@@ -2,10 +2,8 @@
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { getOsHostname } from '~/composables/network'
import { NetworkingMethod } from '~/types/network'
const { t } = useI18n()
import { ping } from 'tauri-plugin-vpnservice-api'
import { NetworkingMethod } from '~/types/network'
const props = defineProps<{
configInvalid?: boolean
@@ -14,6 +12,8 @@ const props = defineProps<{
defineEmits(['runNetwork'])
const { t } = useI18n()
const networking_methods = ref([
{ value: NetworkingMethod.PublicServer, label: () => t('public_server') },
{ value: NetworkingMethod.Manual, label: () => t('manual') },
@@ -32,24 +32,27 @@ const curNetwork = computed(() => {
return networkStore.curNetwork
})
const protos:{ [proto: string] : number; } = {'tcp': 11010, 'udp': 11010, 'wg':11011, 'ws': 11011, 'wss': 11012}
const protos: { [proto: string]: number } = { tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012 }
function searchUrlSuggestions(e: { query: string }): string[] {
const query = e.query
let ret = []
const ret = []
// if query match "^\w+:.*", then no proto prefix
if (query.match(/^\w+:.*/)) {
// if query is a valid url, then add to suggestions
try {
// eslint-disable-next-line no-new
new URL(query)
ret.push(query)
} catch (e) {}
} else {
for (let proto in protos) {
let item = proto + '://' + query
}
catch {}
}
else {
for (const proto in protos) {
let item = `${proto}://${query}`
// if query match ":\d+$", then no port suffix
if (!query.match(/:\d+$/)) {
item += ':' + protos[proto]
item += `:${protos[proto]}`
}
ret.push(item)
}
@@ -58,45 +61,59 @@ function searchUrlSuggestions(e: { query: string }): string[] {
return ret
}
const publicServerSuggestions = ref([''])
const searchPresetPublicServers = (e: { query: string }) => {
const presetPublicServers = [
'tcp://easytier.public.kkrainbow.top:11010',
]
function searchPresetPublicServers(e: { query: string }) {
const presetPublicServers = [
'tcp://public.easytier.top:11010',
]
let query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter((item) => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
const query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter(item => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
publicServerSuggestions.value = ret
publicServerSuggestions.value = ret
}
const peerSuggestions = ref([''])
const searchPeerSuggestions = (e: { query: string }) => {
function searchPeerSuggestions(e: { query: string }) {
peerSuggestions.value = searchUrlSuggestions(e)
}
const inetSuggestions = ref([''])
function searchInetSuggestions(e: { query: string }) {
if (e.query.search('/') >= 0) {
inetSuggestions.value = [e.query]
} else {
const ret = []
for (let i = 0; i < 32; i++) {
ret.push(`${e.query}/${i}`)
}
inetSuggestions.value = ret
}
}
const listenerSuggestions = ref([''])
function searchListenerSuggestiong(e: { query: string }) {
const ret = []
for (const proto in protos) {
let item = `${proto}://0.0.0.0:`
// if query is a number, use it as port
if (e.query.match(/^\d+$/)) {
item += e.query
}
else {
item += protos[proto]
}
if (item.includes(e.query)) {
ret.push(item)
}
@@ -112,7 +129,7 @@ const searchListenerSuggestiong = (e: { query: string }) => {
function validateHostname() {
if (curNetwork.value.hostname) {
// eslint no-useless-escape
let name = curNetwork.value.hostname!.replaceAll(/[^\u4E00-\u9FA5a-z0-9\-]*/gi, '')
if (name.length > 32)
name = name.substring(0, 32)
@@ -125,18 +142,12 @@ const osHostname = ref<string>('')
onMounted(async () => {
osHostname.value = await getOsHostname()
})
</script>
<template>
<div class="flex flex-column h-full">
<div class="flex flex-column">
<div class="w-10/12 self-center ">
<Panel :header="t('basic_settings')">
<div class="flex flex-column gap-y-2">
@@ -151,11 +162,14 @@ onMounted(async () => {
</label>
</div>
<InputGroup>
<InputText
id="virtual_ip" v-model="curNetwork.virtual_ipv4" :disabled="curNetwork.dhcp"
aria-describedby="virtual_ipv4-help"
/>
<InputGroupAddon>
<span>/</span>
</InputGroupAddon>
<InputNumber v-model="curNetwork.network_length" :disabled="curNetwork.dhcp" inputId="horizontal-buttons" showButtons :step="1" mode="decimal" :min="1" :max="32" fluid class="max-w-20"/>
</InputGroup>
</div>
</div>
@@ -167,23 +181,29 @@ onMounted(async () => {
</div>
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="network_secret">{{ t('network_secret') }}</label>
<InputText
id="network_secret" v-model="curNetwork.network_secret"
aria-describedby="network_secret-help"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="nm">{{ t('networking_method') }}</label>
<SelectButton v-model="curNetwork.networking_method" :options="networking_methods" :option-label="(v) => v.label()" option-value="value" />
<div class="items-center flex flex-row p-fluid gap-x-1">
<AutoComplete
v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
v-model="curNetwork.peer_urls" :placeholder="t('chips_placeholder', ['tcp://8.8.8.8:11010'])"
class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions"
/>
<AutoComplete
v-if="curNetwork.networking_method === NetworkingMethod.PublicServer" v-model="curNetwork.public_server_url"
:suggestions="publicServerSuggestions" :virtual-scroller-options="{ itemSize: 38 }" class="grow" dropdown :complete-on-focus="true"
@complete="searchPresetPublicServers"
/>
</div>
</div>
</div>
@@ -194,67 +214,103 @@ onMounted(async () => {
<Panel :header="t('advanced_settings')" toggleable collapsed>
<div class="flex flex-column gap-y-2">
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<div class="flex align-items-center">
<Checkbox v-model="curNetwork.latency_first" input-id="use_latency_first" :binary="true" />
<label for="use_latency_first" class="ml-2"> {{ t('use_latency_first') }} </label>
</div>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="hostname">{{ t('hostname') }}</label>
<InputText
id="hostname" v-model="curNetwork.hostname" aria-describedby="hostname-help" :format="true"
:placeholder="t('hostname_placeholder', [osHostname])" @blur="validateHostname"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap w-full">
<div class="flex flex-column gap-2 grow p-fluid">
<label for="username">{{ t('proxy_cidrs') }}</label>
<AutoComplete
id="subnet-proxy"
v-model="curNetwork.proxy_cidrs" :placeholder="t('chips_placeholder', ['10.0.0.0/24'])"
class="w-full" multiple fluid :suggestions="inetSuggestions" @complete="searchInetSuggestions"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap ">
<div class="flex flex-column gap-2 grow">
<label for="username">VPN Portal</label>
<ToggleButton
v-model="curNetwork.enable_vpn_portal" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('off_text')" :off-label="t('on_text')" class="w-48"
/>
<div v-if="curNetwork.enable_vpn_portal" class="items-center flex flex-row gap-x-4">
<div class="min-w-64">
<InputGroup>
<InputText
v-model="curNetwork.vpn_portal_client_network_addr"
:placeholder="t('vpn_portal_client_network')"
/>
<InputGroupAddon>
<span>/{{ curNetwork.vpn_portal_client_network_len }}</span>
</InputGroupAddon>
</InputGroup>
</div>
<InputNumber
v-model="curNetwork.vpn_portal_listen_port" :allow-empty="false"
:format="false" :min="0" :max="65535" class="w-8" fluid
/>
</div>
</div>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 grow p-fluid">
<label for="listener_urls">{{ t('listener_urls') }}</label>
<AutoComplete
id="listener_urls" v-model="curNetwork.listener_urls"
:suggestions="listenerSuggestions" class="w-full" dropdown :complete-on-focus="true"
:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])"
multiple @complete="searchListenerSuggestiong"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="rpc_port">{{ t('rpc_port') }}</label>
<InputNumber
id="rpc_port" v-model="curNetwork.rpc_port" aria-describedby="rpc_port-help"
:format="false" :min="0" :max="65535"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="dev_name">{{ t('dev_name') }}</label>
<InputText
id="dev_name" v-model="curNetwork.dev_name" aria-describedby="dev_name-help" :format="true"
:placeholder="t('dev_name_placeholder')"
/>
</div>
</div>
</div>
</Panel>
<div class="flex pt-4 justify-content-center">
<Button
:label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)"
/>
</div>
</div>
</div>
@@ -0,0 +1,32 @@
<script setup lang="ts">
import { EventType } from '~/types/network'
const props = defineProps<{
event: {
[key: string]: any
}
}>()
const { t } = useI18n()
const eventKey = computed(() => {
const key = Object.keys(props.event)[0]
return Object.keys(EventType).includes(key) ? key : 'Unknown'
})
const eventValue = computed(() => {
return props.event[eventKey.value]
})
</script>
<template>
<Fieldset :legend="t(`event.${eventKey}`)">
<template v-if="eventKey !== 'Unknown'">
<div v-if="event.DhcpIpv4Changed">
{{ `${eventValue[0]} -> ${eventValue[1]}` }}
</div>
<pre v-else>{{ eventValue }}</pre>
</template>
<pre v-else>{{ eventValue }}</pre>
</Fieldset>
</template>
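The key lookup in the `HumanEvent` component above can be shown framework-free. This is an illustrative sketch: take the event object's first key and fall back to `'Unknown'` when it is not a recognized event type. The `EventType` entries below are assumptions standing in for the real enum in `~/types/network`.

```typescript
// Hypothetical stand-in for the EventType enum (only two entries shown).
const EventType: Record<string, string> = {
  DhcpIpv4Changed: 'DhcpIpv4Changed',
  PeerAdded: 'PeerAdded',
}

// Mirrors the eventKey computed property: first key if known, else 'Unknown'.
function resolveEventKey(event: Record<string, unknown>): string {
  const key = Object.keys(event)[0]
  return Object.keys(EventType).includes(key) ? key : 'Unknown'
}

console.log(resolveEventKey({ DhcpIpv4Changed: ['10.0.0.1', '10.0.0.2'] })) // DhcpIpv4Changed
console.log(resolveEventKey({ SomethingNew: {} })) // Unknown
```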
@@ -1,11 +1,14 @@
<script setup lang="ts">
import { useTimeAgo } from '@vueuse/core'
import { IPv4, IPv6 } from 'ip-num/IPNumber'
import type { NodeInfo, PeerRoutePair } from '~/types/network'
const props = defineProps<{
instanceId?: string
}>()
const { t } = useI18n()
const networkStore = useNetworkStore()
const curNetwork = computed(() => {
@@ -24,8 +27,16 @@ const curNetworkInst = computed(() => {
})
const peerRouteInfos = computed(() => {
if (curNetworkInst.value) {
const my_node_info = curNetworkInst.value.detail?.my_node_info
return [{
route: {
ipv4_addr: my_node_info?.virtual_ipv4,
hostname: my_node_info?.hostname,
version: my_node_info?.version,
},
}, ...(curNetworkInst.value.detail?.peer_route_pairs || [])]
}
return []
})
@@ -33,8 +44,9 @@ const peerRouteInfos = computed(() => {
function routeCost(info: any) {
if (info.route) {
const cost = info.route.cost
return cost ? cost === 1 ? 'p2p' : `relay(${cost})` : t('status.local')
}
return '?'
}
@@ -73,29 +85,40 @@ function humanFileSize(bytes: number, si = false, dp = 1) {
return `${bytes.toFixed(dp)} ${units[u]}`
}
function latencyMs(info: PeerRoutePair) {
let lat_us_sum = statsCommon(info, 'stats.latency_us')
if (lat_us_sum === undefined)
return ''
lat_us_sum = lat_us_sum / 1000 / info.peer!.conns.length
return `${lat_us_sum % 1 > 0 ? Math.round(lat_us_sum) + 1 : Math.round(lat_us_sum)}ms`
}
function txBytes(info: PeerRoutePair) {
const tx = statsCommon(info, 'stats.tx_bytes')
return tx ? humanFileSize(tx) : ''
}
function rxBytes(info: PeerRoutePair) {
const rx = statsCommon(info, 'stats.rx_bytes')
return rx ? humanFileSize(rx) : ''
}
function lossRate(info: PeerRoutePair) {
const lossRate = statsCommon(info, 'loss_rate')
return lossRate !== undefined ? `${Math.round(lossRate * 100)}%` : ''
}
function version(info: PeerRoutePair) {
return info.route.version === '' ? 'unknown' : info.route.version
}
function ipFormat(info: PeerRoutePair) {
const ip = info.route.ipv4_addr
if (typeof ip === 'string')
return ip
return ip ? `${num2ipv4(ip.address)}/${ip.network_length}` : ''
}
const myNodeInfo = computed(() => {
if (!curNetworkInst.value)
return {} as NodeInfo
@@ -117,8 +140,16 @@ const myNodeInfoChips = computed(() => {
if (!my_node_info)
return chips
// TUN Device Name
const dev_name = curNetworkInst.value.detail?.dev_name
if (dev_name) {
chips.push({
label: `TUN Device Name: ${dev_name}`,
icon: '',
} as Chip)
}
// virtual ipv4
chips.push({
label: `Virtual IPv4: ${my_node_info.virtual_ipv4}`,
icon: '',
@@ -128,7 +159,7 @@ const myNodeInfoChips = computed(() => {
const local_ipv4s = my_node_info.ips?.interface_ipv4s
for (const [idx, ip] of local_ipv4s?.entries()) {
chips.push({
label: `Local IPv4 ${idx}: ${num2ipv4(ip)}`,
icon: '',
} as Chip)
}
@@ -137,7 +168,7 @@ const myNodeInfoChips = computed(() => {
const local_ipv6s = my_node_info.ips?.interface_ipv6s
for (const [idx, ip] of local_ipv6s?.entries()) {
chips.push({
label: `Local IPv6 ${idx}: ${num2ipv6(ip)}`,
icon: '',
} as Chip)
}
@@ -146,7 +177,19 @@ const myNodeInfoChips = computed(() => {
const public_ip = my_node_info.ips?.public_ipv4
if (public_ip) {
chips.push({
label: `Public IP: ${IPv4.fromNumber(public_ip.addr)}`,
icon: '',
} as Chip)
}
const public_ipv6 = my_node_info.ips?.public_ipv6
if (public_ipv6) {
chips.push({
label: `Public IPv6: ${IPv6.fromBigInt((BigInt(public_ipv6.part1) << BigInt(96))
+ (BigInt(public_ipv6.part2) << BigInt(64))
+ (BigInt(public_ipv6.part3) << BigInt(32))
+ BigInt(public_ipv6.part4),
)}`,
icon: '',
} as Chip)
}
@@ -171,6 +214,8 @@ const myNodeInfoChips = computed(() => {
PortRestricted = 5,
Symmetric = 6,
SymUdpFirewall = 7,
SymmetricEasyInc = 8,
SymmetricEasyDec = 9,
};
const udpNatType: NatType = my_node_info.stun_info?.udp_nat_type
if (udpNatType !== undefined) {
@@ -183,6 +228,8 @@ const myNodeInfoChips = computed(() => {
[NatType.PortRestricted]: 'Port Restricted',
[NatType.Symmetric]: 'Symmetric',
[NatType.SymUdpFirewall]: 'Symmetric UDP Firewall',
[NatType.SymmetricEasyInc]: 'Symmetric Easy Inc',
[NatType.SymmetricEasyDec]: 'Symmetric Easy Dec',
}
chips.push({
@@ -273,16 +320,18 @@ function showEventLogs() {
<template>
<div>
<Dialog v-model:visible="dialogVisible" modal :header="t(dialogHeader)" class="w-2/3 h-auto">
<ScrollPanel v-if="dialogHeader === 'vpn_portal_config'">
<pre>{{ dialogContent }}</pre>
</ScrollPanel>
<Timeline v-else :value="dialogContent">
<template #opposite="slotProps">
<small class="text-surface-500 dark:text-surface-400">{{ useTimeAgo(Date.parse(slotProps.item[0])) }}</small>
</template>
<template #content="slotProps">
<HumanEvent :event="slotProps.item[1]" />
</template>
</Timeline>
</Dialog>
<Card v-if="curNetworkInst?.error_msg">
@@ -365,17 +414,46 @@ function showEventLogs() {
{{ t('peer_info') }}
</template>
<template #content>
<DataTable :value="peerRouteInfos" column-resize-mode="fit" table-class="w-full">
<Column :field="ipFormat" :header="t('virtual_ipv4')" />
<Column :header="t('hostname')">
<template #body="slotProps">
<div
v-if="!slotProps.data.route.cost || !slotProps.data.route.feature_flag.is_public_server"
v-tooltip="slotProps.data.route.hostname"
>
{{ slotProps.data.route.hostname }}
</div>
<div v-else v-tooltip="slotProps.data.route.hostname" class="space-x-1">
<Tag v-if="slotProps.data.route.feature_flag.is_public_server" severity="info" value="Info">
{{ t('status.server') }}
</Tag>
<Tag v-if="slotProps.data.route.no_relay_data" severity="warn" value="Warn">
{{ t('status.relay') }}
</Tag>
</div>
</template>
</Column>
<Column :field="routeCost" :header="t('route_cost')" />
<Column :field="latencyMs" :header="t('latency')" />
<Column :field="txBytes" :header="t('upload_bytes')" />
<Column :field="rxBytes" :header="t('download_bytes')" />
<Column :field="lossRate" :header="t('loss_rate')" />
<Column :header="t('status.version')">
<template #body="slotProps">
<span>{{ version(slotProps.data) }}</span>
</template>
</Column>
</DataTable>
</template>
</Card>
</template>
</div>
</template>
<style lang="postcss" scoped>
.p-timeline :deep(.p-timeline-event-opposite) {
@apply flex-none;
}
</style>
@@ -1,183 +1,184 @@
import { addPluginListener } from '@tauri-apps/api/core'
import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'
import type { Route } from '~/types/network'
const networkStore = useNetworkStore()
interface vpnStatus {
running: boolean
ipv4Addr: string | null | undefined
ipv4Cidr: number | null | undefined
routes: string[]
}
const curVpnStatus: vpnStatus = {
running: false,
ipv4Addr: undefined,
ipv4Cidr: undefined,
routes: [],
}
async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
const start_time = Date.now()
while (curVpnStatus.running !== target_status) {
if (Date.now() - start_time > timeout_sec * 1000) {
throw new Error('wait vpn status timeout')
}
await new Promise(r => setTimeout(r, 50))
}
}
async function doStopVpn() {
if (!curVpnStatus.running) {
return
}
console.log('stop vpn')
const stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret)))
await waitVpnStatus(false, 3)
curVpnStatus.ipv4Addr = undefined
curVpnStatus.routes = []
}
async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[]) {
if (curVpnStatus.running) {
return
}
console.log('start vpn')
const start_ret = await start_vpn({
ipv4Addr: `${ipv4Addr}/${cidr}`,
routes,
disallowedApplications: ['com.kkrainbow.easytier'],
mtu: 1300,
})
if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg)
}
await waitVpnStatus(true, 3)
curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.routes = routes
}
async function onVpnServiceStart(payload: any) {
console.log('vpn service start', JSON.stringify(payload))
curVpnStatus.running = true
if (payload.fd) {
setTunFd(networkStore.networkInstanceIds[0], payload.fd)
}
}
async function onVpnServiceStop(payload: any) {
console.log('vpn service stop', JSON.stringify(payload))
curVpnStatus.running = false
}
async function registerVpnServiceListener() {
console.log('register vpn service listener')
await addPluginListener(
'vpnservice',
'vpn_service_start',
onVpnServiceStart,
)
await addPluginListener(
'vpnservice',
'vpn_service_stop',
onVpnServiceStop,
)
}
function getRoutesForVpn(routes: Route[]): string[] {
if (!routes) {
return []
}
const ret = []
for (const r of routes) {
for (let cidr of r.proxy_cidrs) {
if (!cidr.includes('/')) {
cidr += '/32'
}
ret.push(cidr)
}
}
// sort and dedup
return Array.from(new Set(ret)).sort()
}
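The CIDR normalization in `getRoutesForVpn` above can be exercised standalone. This is a sketch under one assumption: the `RouteLike` interface below is a trimmed stand-in for the real `Route` type, keeping only the `proxy_cidrs` field the function reads.

```typescript
// Trimmed, hypothetical stand-in for the Route type from ~/types/network.
interface RouteLike { proxy_cidrs: string[] }

function routesForVpn(routes: RouteLike[]): string[] {
  const ret: string[] = []
  for (const r of routes) {
    for (let cidr of r.proxy_cidrs) {
      if (!cidr.includes('/'))
        cidr += '/32' // a bare host address is treated as a /32 route
      ret.push(cidr)
    }
  }
  // sort and dedup, matching the Set-based approach in the diff
  return Array.from(new Set(ret)).sort()
}

console.log(routesForVpn([{ proxy_cidrs: ['10.0.0.0/24', '10.1.1.1'] }, { proxy_cidrs: ['10.0.0.0/24'] }]))
// → ['10.0.0.0/24', '10.1.1.1/32']
```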
async function onNetworkInstanceChange() {
const insts = networkStore.networkInstanceIds
if (!insts) {
await doStopVpn()
return
}
const curNetworkInfo = networkStore.networkInfos[insts[0]]
if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) {
await doStopVpn()
return
}
const virtual_ip = curNetworkInfo?.node_info?.virtual_ipv4
if (!virtual_ip || !virtual_ip.length) {
await doStopVpn()
return
}
const routes = getRoutesForVpn(curNetworkInfo?.routes)
const ipChanged = virtual_ip !== curVpnStatus.ipv4Addr
const routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes)
if (ipChanged || routesChanged) {
console.log('virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip)
try {
await doStopVpn()
}
catch (e) {
console.error(e)
}
try {
await doStartVpn(virtual_ip, 24, routes)
}
catch (e) {
console.error('start vpn failed, clear all network insts.', e)
networkStore.clearNetworkInstances()
await retainNetworkInstance(networkStore.networkInstanceIds)
}
}
}
async function watchNetworkInstance() {
let subscribe_running = false
networkStore.$subscribe(async () => {
if (subscribe_running) {
return
}
subscribe_running = true
try {
await onNetworkInstanceChange()
}
catch (_) {
}
subscribe_running = false
})
}
export async function initMobileVpnService() {
await registerVpnServiceListener()
await watchNetworkInstance()
}
export async function prepareVpnService() {
console.log('prepare vpn')
const prepare_ret = await prepare_vpn()
console.log('prepare vpn', JSON.stringify((prepare_ret)))
if (prepare_ret?.errorMsg?.length) {
throw new Error(prepare_ret.errorMsg)
}
}
@@ -1,4 +1,4 @@
import { invoke } from '@tauri-apps/api/core'
import type { NetworkConfig, NetworkInstanceRunningInfo } from '~/types/network'
@@ -33,3 +33,7 @@ export async function setLoggingLevel(level: string) {
export async function setTunFd(instanceId: string, fd: number) {
return await invoke('set_tun_fd', { instanceId, fd })
}
export async function getEasytierVersion() {
return await invoke<string>('easytier_version')
}
@@ -1,6 +1,6 @@
import { Menu, MenuItem, PredefinedMenuItem } from '@tauri-apps/api/menu'
import { TrayIcon } from '@tauri-apps/api/tray'
import { getCurrentWindow } from '@tauri-apps/api/window'
import pkg from '~/../package.json'
const DEFAULT_TRAY_NAME = 'main'
@@ -8,14 +8,15 @@ const DEFAULT_TRAY_NAME = 'main'
async function toggleVisibility() {
if (await getCurrentWindow().isVisible()) {
await getCurrentWindow().hide()
}
else {
await getCurrentWindow().show()
await getCurrentWindow().setFocus()
}
}
export async function useTray(init: boolean = false) {
let tray
try {
tray = await TrayIcon.getById(DEFAULT_TRAY_NAME)
if (!tray) {
@@ -29,17 +30,18 @@ export async function useTray(init: boolean = false) {
}),
action: async () => {
toggleVisibility()
},
})
}
}
catch (error) {
console.warn('Error while creating tray icon:', error)
return null
}
if (init) {
tray.setTooltip(`EasyTier\n${pkg.version}`)
tray.setMenuOnLeftClick(false)
tray.setMenu(await Menu.new({
id: 'main',
items: await generateMenuItem(),
@@ -59,7 +61,7 @@ export async function generateMenuItem() {
export async function MenuItemExit(text: string) {
return await PredefinedMenuItem.new({
text,
item: 'Quit',
})
}
@@ -69,14 +71,15 @@ export async function MenuItemShow(text: string) {
id: 'show',
text,
action: async () => {
await toggleVisibility()
},
})
}
export async function setTrayMenu(items: (MenuItem | PredefinedMenuItem)[] | undefined = undefined) {
const tray = await useTray()
if (!tray)
return
const menu = await Menu.new({
id: 'main',
items: items || await generateMenuItem(),
@@ -86,15 +89,17 @@ export async function setTrayMenu(items: (MenuItem | PredefinedMenuItem)[] | und
export async function setTrayRunState(isRunning: boolean = false) {
const tray = await useTray()
if (!tray)
return
tray.setIcon(isRunning ? 'icons/icon-inactive.ico' : 'icons/icon.ico')
}
export async function setTrayTooltip(tooltip: string) {
if (tooltip) {
const tray = await useTray()
if (!tray)
return
tray.setTooltip(`EasyTier\n${pkg.version}\n${tooltip}`)
tray.setTitle(`EasyTier\n${pkg.version}\n${tooltip}`)
}
}
}
@@ -0,0 +1,15 @@
import { IPv4, IPv6 } from 'ip-num/IPNumber'
import type { Ipv4Addr, Ipv6Addr } from '~/types/network'
export function num2ipv4(ip: Ipv4Addr) {
return IPv4.fromNumber(ip.addr)
}
export function num2ipv6(ip: Ipv6Addr) {
return IPv6.fromBigInt(
(BigInt(ip.part1) << BigInt(96))
+ (BigInt(ip.part2) << BigInt(64))
+ (BigInt(ip.part3) << BigInt(32))
+ BigInt(ip.part4),
)
}
@@ -1,16 +1,16 @@
import { setupLayouts } from 'virtual:generated-layouts'
import Aura from '@primevue/themes/aura'
import PrimeVue from 'primevue/config'
import ToastService from 'primevue/toastservice'
import { createRouter, createWebHistory } from 'vue-router/auto'
import { routes } from 'vue-router/auto-routes'
import App from '~/App.vue'
import { i18n, loadLanguageAsync } from '~/modules/i18n'
import { getAutoLaunchStatusAsync, loadAutoLaunchStatusAsync } from './modules/auto_launch'
import '~/styles.css'
import 'primeicons/primeicons.css'
import 'primeflex/primeflex.css'
if (import.meta.env.PROD) {
document.addEventListener('keydown', (event) => {
@@ -18,8 +18,9 @@ if (import.meta.env.PROD) {
event.key === 'F5'
|| (event.ctrlKey && event.key === 'r')
|| (event.metaKey && event.key === 'r')
) {
event.preventDefault()
}
})
document.addEventListener('contextmenu', (event) => {
@@ -35,7 +36,7 @@ async function main() {
const router = createRouter({
history: createWebHistory(),
routes,
})
app.use(router)
@@ -45,11 +46,12 @@ async function main() {
theme: {
preset: Aura,
options: {
prefix: 'p',
darkModeSelector: 'system',
cssLayer: false,
},
},
})
app.use(ToastService)
app.mount('#app')
}
@@ -1,17 +1,26 @@
import { disable, enable, isEnabled } from '@tauri-apps/plugin-autostart'
export async function loadAutoLaunchStatusAsync(target_enable: boolean): Promise<boolean> {
try {
if (target_enable) {
await enable()
}
else {
// swallow the error raised when disabling autostart that was never configured
try {
await disable()
}
catch { }
}
localStorage.setItem('auto_launch', JSON.stringify(await isEnabled()))
return isEnabled()
}
catch (e) {
console.error(e)
return false
}
}
export function getAutoLaunchStatusAsync(): boolean {
return localStorage.getItem('auto_launch') === 'true'
}
@@ -1,5 +1,5 @@
import { createI18n } from 'vue-i18n'
import type { Locale } from 'vue-i18n'
// Import i18n resources
// https://vitejs.dev/guide/features.html#glob-import
@@ -1,24 +1,25 @@
<script setup lang="ts">
import { appLogDir } from '@tauri-apps/api/path'
import { getCurrentWindow } from '@tauri-apps/api/window'
import { writeText } from '@tauri-apps/plugin-clipboard-manager'
import { type } from '@tauri-apps/plugin-os'
import { exit } from '@tauri-apps/plugin-process'
import { open } from '@tauri-apps/plugin-shell'
import TieredMenu from 'primevue/tieredmenu'
import { useToast } from 'primevue/usetoast'
import Config from '~/components/Config.vue'
import Status from '~/components/Status.vue'
import { isAutostart, setLoggingLevel } from '~/composables/network'
import { useTray } from '~/composables/tray'
import { getAutoLaunchStatusAsync as getAutoLaunchStatus, loadAutoLaunchStatusAsync } from '~/modules/auto_launch'
import { loadLanguageAsync } from '~/modules/i18n'
import { type NetworkConfig, NetworkingMethod } from '~/types/network'
const { t, locale } = useI18n()
const visible = ref(false)
const aboutVisible = ref(false)
const tomlConfig = ref('')
useTray(true)
@@ -85,7 +86,8 @@ async function runNetworkCb(cfg: NetworkConfig, cb: () => void) {
if (type() === 'android') {
await prepareVpnService()
networkStore.clearNetworkInstances()
} else {
}
else {
networkStore.removeNetworkInstance(cfg.instance_id)
}
@@ -146,7 +148,7 @@ const setting_menu_items = ref([
await loadLanguageAsync((locale.value === 'en' ? 'cn' : 'en'))
await setTrayMenu([
await MenuItemExit(t('tray.exit')),
await MenuItemShow(t('tray.show'))
await MenuItemShow(t('tray.show')),
])
},
},
@@ -179,7 +181,7 @@ const setting_menu_items = ref([
label: () => t('logging_open_dir'),
icon: 'pi pi-folder-open',
command: async () => {
console.log('open log dir', await appLogDir())
// console.log('open log dir', await appLogDir())
await open(await appLogDir())
},
})
@@ -193,6 +195,13 @@ const setting_menu_items = ref([
return items
})(),
},
{
label: () => t('about.title'),
icon: 'pi pi-at',
command: async () => {
aboutVisible.value = true
},
},
{
label: () => t('exit'),
icon: 'pi pi-power-off',
@@ -244,11 +253,15 @@ function isRunning(id: string) {
</ScrollPanel>
</Panel>
<Divider />
<div class="flex justify-content-end gap-2">
<div class="flex gap-2 justify-content-end">
<Button type="button" :label="t('close')" @click="visible = false" />
</div>
</Dialog>
<Dialog v-model:visible="aboutVisible" modal :header="t('about.title')" :style="{ width: '70%' }">
<About />
</Dialog>
<div>
<Toolbar>
<template #start>
@@ -259,29 +272,44 @@ function isRunning(id: string) {
<template #center>
<div class="min-w-40">
<Dropdown v-model="networkStore.curNetwork" :options="networkStore.networkList" :highlight-on-select="false"
:placeholder="t('select_network')" class="w-full">
<Dropdown
v-model="networkStore.curNetwork" :options="networkStore.networkList" :highlight-on-select="false"
:placeholder="t('select_network')" class="w-full"
>
<template #value="slotProps">
<div class="flex items-start content-center">
<div class="mr-3 flex-column">
<span>{{ slotProps.value.network_name }}</span>
</div>
<Tag class="my-auto" :severity="isRunning(slotProps.value.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.value.instance_id) ? 'network_running' : 'network_stopped')" />
<Tag
class="my-auto leading-3" :severity="isRunning(slotProps.value.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.value.instance_id) ? 'network_running' : 'network_stopped')"
/>
</div>
</template>
<template #option="slotProps">
<div class="flex flex-col items-start content-center">
<div class="flex flex-col items-start content-center max-w-full">
<div class="flex">
<div class="mr-3">
{{ t('network_name') }}: {{ slotProps.option.network_name }}
</div>
<Tag class="my-auto" :severity="isRunning(slotProps.option.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.option.instance_id) ? 'network_running' : 'network_stopped')" />
<Tag
class="my-auto leading-3"
:severity="isRunning(slotProps.option.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.option.instance_id) ? 'network_running' : 'network_stopped')"
/>
</div>
<div>{{ slotProps.option.public_server_url }}</div>
<div
v-if="isRunning(slotProps.option.instance_id) && networkStore.instances[slotProps.option.instance_id].detail && (networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 !== '')">
v-if="slotProps.option.networking_method !== NetworkingMethod.Standalone"
class="max-w-full overflow-hidden text-ellipsis"
>
{{ slotProps.option.networking_method === NetworkingMethod.Manual
? slotProps.option.peer_urls.join(', ')
: slotProps.option.public_server_url }}
</div>
<div
v-if="isRunning(slotProps.option.instance_id) && networkStore.instances[slotProps.option.instance_id].detail && (networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 !== '')"
>
{{ networkStore.instances[slotProps.option.instance_id].detail
? networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 : '' }}
</div>
@@ -292,8 +320,10 @@ function isRunning(id: string) {
</template>
<template #end>
<Button icon="pi pi-cog" severity="secondary" aria-haspopup="true" :label="t('settings')"
aria-controls="overlay_setting_menu" @click="toggle_setting_menu" />
<Button
icon="pi pi-cog" severity="secondary" aria-haspopup="true" :label="t('settings')"
aria-controls="overlay_setting_menu" @click="toggle_setting_menu"
/>
<TieredMenu id="overlay_setting_menu" ref="setting_menu" :model="setting_menu_items" :popup="true" />
</template>
</Toolbar>
@@ -311,16 +341,20 @@ function isRunning(id: string) {
</StepList>
<StepPanels value="1">
<StepPanel v-slot="{ activateCallback = (s: string) => { } } = {}" value="1">
<Config :instance-id="networkStore.curNetworkId" :config-invalid="messageBarSeverity !== Severity.None"
@run-network="runNetworkCb($event, () => activateCallback('2'))" />
<Config
:instance-id="networkStore.curNetworkId" :config-invalid="messageBarSeverity !== Severity.None"
@run-network="runNetworkCb($event, () => activateCallback('2'))"
/>
</StepPanel>
<StepPanel v-slot="{ activateCallback = (s: string) => { } } = {}" value="2">
<div class="flex flex-column">
<Status :instance-id="networkStore.curNetworkId" />
</div>
<div class="flex pt-4 justify-content-center">
<Button :label="t('stop_network')" severity="danger" icon="pi pi-arrow-left"
@click="stopNetworkCb(networkStore.curNetwork, () => activateCallback('1'))" />
<Button
:label="t('stop_network')" severity="danger" icon="pi pi-arrow-left"
@click="stopNetworkCb(networkStore.curNetwork, () => activateCallback('1'))"
/>
</div>
</StepPanel>
</StepPanels>
@@ -360,6 +394,10 @@ body {
margin: 0;
}
.p-select-overlay {
max-width: calc(100% - 2rem);
}
/*
.p-tabview-panel {
+2 -1
View File
@@ -108,7 +108,8 @@ export const useNetworkStore = defineStore('networkStore', {
loadAutoStartInstIdsFromLocalStorage() {
try {
this.autoStartInstIds = JSON.parse(localStorage.getItem('autoStartInstIds') || '[]')
} catch (e) {
}
catch (e) {
console.error(e)
this.autoStartInstIds = []
}
-1
View File
@@ -16,7 +16,6 @@
font-weight: 400;
color: #0f0f0f;
background-color: white;
font-synthesis: none;
text-rendering: optimizeLegibility;
+1 -1
View File
@@ -12,7 +12,7 @@ declare module 'vue-router/auto-routes' {
ParamValueOneOrMore,
ParamValueZeroOrMore,
ParamValueZeroOrOne,
} from 'unplugin-vue-router/types'
} from 'vue-router'
/**
* Route name map generated by unplugin-vue-router
+57 -6
View File
@@ -11,6 +11,7 @@ export interface NetworkConfig {
dhcp: boolean
virtual_ipv4: string
network_length: number,
hostname?: string
network_name: string
network_secret: string
@@ -31,6 +32,9 @@ export interface NetworkConfig {
listener_urls: string[]
rpc_port: number
latency_first: boolean
dev_name: string
}
export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
@@ -39,12 +43,13 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
dhcp: true,
virtual_ipv4: '',
network_length: 24,
network_name: 'easytier',
network_secret: '',
networking_method: NetworkingMethod.PublicServer,
public_server_url: 'tcp://easytier.public.kkrainbow.top:11010',
public_server_url: 'tcp://public.easytier.top:11010',
peer_urls: [],
proxy_cidrs: [],
@@ -62,6 +67,8 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
'wg://0.0.0.0:11011',
],
rpc_port: 0,
latency_first: true,
dev_name: '',
}
}
@@ -75,6 +82,7 @@ export interface NetworkInstance {
}
export interface NetworkInstanceRunningInfo {
dev_name: string
my_node_info: NodeInfo
events: Record<string, any>
node_info: NodeInfo
@@ -85,13 +93,26 @@ export interface NetworkInstanceRunningInfo {
error_msg?: string
}
export interface Ipv4Addr {
addr: number
}
export interface Ipv6Addr {
part1: number
part2: number
part3: number
part4: number
}
export interface NodeInfo {
virtual_ipv4: string
hostname: string
version: string
ips: {
public_ipv4: string
interface_ipv4s: string[]
public_ipv6: string
interface_ipv6s: string[]
public_ipv4: Ipv4Addr
interface_ipv4s: Ipv4Addr[]
public_ipv6: Ipv6Addr
interface_ipv6s: Ipv6Addr[]
listeners: {
serialization: string
scheme_end: number
@@ -118,13 +139,17 @@ export interface StunInfo {
export interface Route {
peer_id: number
ipv4_addr: string
ipv4_addr: {
address: Ipv4Addr
network_length: number
} | string | null
next_hop_peer_id: number
cost: number
proxy_cidrs: string[]
hostname: string
stun_info?: StunInfo
inst_id: string
version: string
}
export interface PeerInfo {
@@ -135,6 +160,7 @@ export interface PeerInfo {
export interface PeerConnInfo {
conn_id: string
my_peer_id: number
is_client: boolean
peer_id: number
features: string[]
tunnel?: TunnelInfo
@@ -160,3 +186,28 @@ export interface PeerConnStats {
tx_packets: number
latency_us: number
}
export enum EventType {
TunDeviceReady = 'TunDeviceReady', // string
TunDeviceError = 'TunDeviceError', // string
PeerAdded = 'PeerAdded', // number
PeerRemoved = 'PeerRemoved', // number
PeerConnAdded = 'PeerConnAdded', // PeerConnInfo
PeerConnRemoved = 'PeerConnRemoved', // PeerConnInfo
ListenerAdded = 'ListenerAdded', // any
ListenerAddFailed = 'ListenerAddFailed', // any, string
ListenerAcceptFailed = 'ListenerAcceptFailed', // any, string
ConnectionAccepted = 'ConnectionAccepted', // string, string
ConnectionError = 'ConnectionError', // string, string, string
Connecting = 'Connecting', // any
ConnectError = 'ConnectError', // string, string, string
VpnPortalClientConnected = 'VpnPortalClientConnected', // string, string
VpnPortalClientDisconnected = 'VpnPortalClientDisconnected', // string, string, string
DhcpIpv4Changed = 'DhcpIpv4Changed', // ipv4 | null, ipv4 | null
DhcpIpv4Conflicted = 'DhcpIpv4Conflicted', // ipv4 | null
}
+16 -17
View File
@@ -1,19 +1,19 @@
import path from 'node:path'
import { defineConfig } from 'vite'
import Vue from '@vitejs/plugin-vue'
import Layouts from 'vite-plugin-vue-layouts'
import Components from 'unplugin-vue-components/vite'
import AutoImport from 'unplugin-auto-import/vite'
import VueMacros from 'unplugin-vue-macros/vite'
import process from 'node:process'
import VueI18n from '@intlify/unplugin-vue-i18n/vite'
import VueDevTools from 'vite-plugin-vue-devtools'
import VueRouter from 'unplugin-vue-router/vite'
import { PrimeVueResolver } from '@primevue/auto-import-resolver'
import Vue from '@vitejs/plugin-vue'
import { internalIpV4Sync } from 'internal-ip'
import AutoImport from 'unplugin-auto-import/vite'
import Components from 'unplugin-vue-components/vite'
import VueMacros from 'unplugin-vue-macros/vite'
import { VueRouterAutoImports } from 'unplugin-vue-router'
import { PrimeVueResolver } from '@primevue/auto-import-resolver';
import { svelte } from '@sveltejs/vite-plugin-svelte';
import { internalIpV4Sync } from 'internal-ip';
import VueRouter from 'unplugin-vue-router/vite'
import { defineConfig } from 'vite'
import VueDevTools from 'vite-plugin-vue-devtools'
import Layouts from 'vite-plugin-vue-layouts'
const host = process.env.TAURI_DEV_HOST;
const host = process.env.TAURI_DEV_HOST
// https://vitejs.dev/config/
export default defineConfig(async () => ({
@@ -23,7 +23,6 @@ export default defineConfig(async () => ({
},
},
plugins: [
svelte(),
VueMacros({
plugins: {
vue: Vue({
@@ -100,10 +99,10 @@ export default defineConfig(async () => ({
},
hmr: host
? {
protocol: 'ws',
host: internalIpV4Sync(),
port: 1430,
}
protocol: 'ws',
host: internalIpV4Sync(),
port: 1430,
}
: undefined,
},
}))
+13 -8
View File
@@ -3,12 +3,12 @@ name = "easytier"
description = "A full meshed p2p VPN, connecting all your devices in one network with one command."
homepage = "https://github.com/EasyTier/EasyTier"
repository = "https://github.com/EasyTier/EasyTier"
version = "1.2.3"
version = "2.0.3"
edition = "2021"
authors = ["kkrainbow"]
keywords = ["vpn", "p2p", "network", "easytier"]
categories = ["network-programming", "command-line-utilities"]
rust-version = "1.75"
rust-version = "1.77.0"
license-file = "LICENSE"
readme = "README.md"
@@ -29,6 +29,8 @@ path = "src/lib.rs"
test = false
[dependencies]
git-version = "0.3.9"
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", features = [
"env-filter",
@@ -49,7 +51,7 @@ futures = { version = "0.3", features = ["bilock", "unstable"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1"
tokio-util = { version = "0.7.9", features = ["codec", "net"] }
tokio-util = { version = "0.7.9", features = ["codec", "net", "io"] }
async-stream = "0.3.5"
async-trait = "0.1.74"
@@ -101,14 +103,10 @@ uuid = { version = "1.5.0", features = [
crossbeam-queue = "0.3"
once_cell = "1.18.0"
# for packet
postcard = { "version" = "1.0.8", features = ["alloc"] }
# for rpc
tonic = "0.12"
prost = "0.13"
prost-types = "0.13"
anyhow = "1.0"
tarpc = { version = "0.32", features = ["tokio1", "serde1"] }
url = { version = "2.5", features = ["serde"] }
percent-encoding = "2.3.1"
@@ -127,6 +125,7 @@ rand = "0.8.5"
serde = { version = "1.0", features = ["derive"] }
pnet = { version = "0.35.0", features = ["serde"] }
serde_json = "1"
clap = { version = "4.4.8", features = [
"string",
@@ -180,6 +179,9 @@ wildmatch = "2.3.4"
rust-i18n = "3"
sys-locale = "0.3"
ringbuf = "0.4.5"
async-ringbuf = "0.3.1"
[target.'cfg(windows)'.dependencies]
windows-sys = { version = "0.52", features = [
"Win32_Networking_WinSock",
@@ -194,6 +196,8 @@ winreg = "0.52"
tonic-build = "0.12"
globwalk = "0.8.1"
regex = "1"
prost-build = "0.13.2"
rpc_build = { path = "src/proto/rpc_build" }
[target.'cfg(windows)'.build-dependencies]
reqwest = { version = "0.11", features = ["blocking"] }
@@ -203,6 +207,7 @@ zip = "0.6.6"
[dev-dependencies]
serial_test = "3.0.0"
rstest = "0.18.2"
futures-util = "0.3.30"
[target.'cfg(target_os = "linux")'.dev-dependencies]
defguard_wireguard_rs = "0.4.2"
+31 -18
View File
@@ -1,10 +1,5 @@
#[cfg(target_os = "windows")]
use std::{
env,
fs::File,
io::{copy, Cursor},
path::PathBuf,
};
use std::{env, io::Cursor, path::PathBuf};
#[cfg(target_os = "windows")]
struct WindowsBuild {}
@@ -46,8 +41,8 @@ impl WindowsBuild {
fn download_protoc() -> PathBuf {
println!("cargo:info=use exist protoc: {:?}", "k");
let out_dir = Self::get_cargo_target_dir().unwrap();
let fname = out_dir.join("protoc");
let out_dir = Self::get_cargo_target_dir().unwrap().join("protobuf");
let fname = out_dir.join("bin/protoc.exe");
if fname.exists() {
println!("cargo:info=use exist protoc: {:?}", fname);
return fname;
@@ -65,10 +60,7 @@ impl WindowsBuild {
.map(zip::ZipArchive::new)
.unwrap()
.unwrap();
let protoc_zipped_file = content.by_name("bin/protoc.exe").unwrap();
let mut content = protoc_zipped_file;
copy(&mut content, &mut File::create(&fname).unwrap()).unwrap();
content.extract(out_dir).unwrap();
fname
}
@@ -129,14 +121,35 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
#[cfg(target_os = "windows")]
WindowsBuild::check_for_win();
tonic_build::configure()
.type_attribute(".", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute("cli.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("cli.PeerInfoForGlobalMap", "#[derive(Hash)]")
let proto_files = [
"src/proto/peer_rpc.proto",
"src/proto/common.proto",
"src/proto/error.proto",
"src/proto/tests.proto",
"src/proto/cli.proto",
];
for proto_file in &proto_files {
println!("cargo:rerun-if-changed={}", proto_file);
}
prost_build::Config::new()
.type_attribute(".common", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".error", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".cli", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(
"peer_rpc.GetIpListResponse",
"#[derive(serde::Serialize, serde::Deserialize)]",
)
.type_attribute("peer_rpc.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("peer_rpc.PeerInfoForGlobalMap", "#[derive(Hash)]")
.type_attribute("peer_rpc.ForeignNetworkRouteInfoKey", "#[derive(Hash, Eq)]")
.type_attribute("common.RpcDescriptor", "#[derive(Hash, Eq)]")
.service_generator(Box::new(rpc_build::ServiceGenerator::new()))
.btree_map(&["."])
.compile(&["proto/cli.proto"], &["proto/"])
.compile_protos(&proto_files, &["src/proto/"])
.unwrap();
// tonic_build::compile_protos("proto/cli.proto")?;
check_locale();
Ok(())
}
+7 -1
View File
@@ -108,9 +108,15 @@ core_clap:
disable_p2p:
en: "disable p2p communication, will only relay packets with peers specified by --peers"
zh-CN: "禁用P2P通信,只通过--peers指定的节点转发数据包"
disable_udp_hole_punching:
en: "disable udp hole punching"
zh-CN: "禁用UDP打洞功能"
relay_all_peer_rpc:
en: "relay all peer rpc packets, even if the peer is not in the relay network whitelist. this can help peers not in relay network whitelist to establish p2p connection."
zh-CN: "转发所有对等节点的RPC数据包,即使对等节点不在转发网络白名单中。这可以帮助白名单外网络中的对等节点建立P2P连接。"
socks5:
en: "enable socks5 server, allow socks5 client to access virtual network. format: <port>, e.g.: 1080"
zh-CN: "启用 socks5 服务器,允许 socks5 客户端访问虚拟网络. 格式: <端口>,例如:1080"
zh-CN: "启用 socks5 服务器,允许 socks5 客户端访问虚拟网络. 格式: <端口>,例如:1080"
ipv6_listener:
en: "the url of the ipv6 listener, e.g.: tcp://[::]:11010, if not set, will listen on random udp port"
zh-CN: "IPv6 监听器的URL,例如:tcp://[::]:11010,如果未设置,将在随机UDP端口上监听"
+44 -10
View File
@@ -23,8 +23,8 @@ pub trait ConfigLoader: Send + Sync {
fn get_netns(&self) -> Option<String>;
fn set_netns(&self, ns: Option<String>);
fn get_ipv4(&self) -> Option<std::net::Ipv4Addr>;
fn set_ipv4(&self, addr: Option<std::net::Ipv4Addr>);
fn get_ipv4(&self) -> Option<cidr::Ipv4Inet>;
fn set_ipv4(&self, addr: Option<cidr::Ipv4Inet>);
fn get_dhcp(&self) -> bool;
fn set_dhcp(&self, dhcp: bool);
@@ -72,7 +72,7 @@ pub trait ConfigLoader: Send + Sync {
pub type NetworkSecretDigest = [u8; 32];
#[derive(Debug, Clone, Deserialize, Serialize, Default)]
#[derive(Debug, Clone, Deserialize, Serialize, Default, Eq, Hash)]
pub struct NetworkIdentity {
pub network_name: String,
pub network_secret: Option<String>,
@@ -178,6 +178,10 @@ pub struct Flags {
pub disable_p2p: bool,
#[derivative(Default(value = "false"))]
pub relay_all_peer_rpc: bool,
#[derivative(Default(value = "false"))]
pub disable_udp_hole_punching: bool,
#[derivative(Default(value = "\"udp://[::]:0\".to_string()"))]
pub ipv6_listener: String,
}
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
@@ -206,7 +210,10 @@ struct Config {
socks5_proxy: Option<url::Url>,
flags: Option<Flags>,
flags: Option<serde_json::Map<String, serde_json::Value>>,
#[serde(skip)]
flags_struct: Option<Flags>,
}
#[derive(Debug, Clone)]
@@ -222,13 +229,15 @@ impl Default for TomlConfigLoader {
impl TomlConfigLoader {
pub fn new_from_str(config_str: &str) -> Result<Self, anyhow::Error> {
let config = toml::de::from_str::<Config>(config_str).with_context(|| {
let mut config = toml::de::from_str::<Config>(config_str).with_context(|| {
format!(
"failed to parse config file: {}\n{}",
config_str, config_str
)
})?;
config.flags_struct = Some(Self::gen_flags(config.flags.clone().unwrap_or_default()));
Ok(TomlConfigLoader {
config: Arc::new(Mutex::new(config)),
})
@@ -246,6 +255,24 @@ impl TomlConfigLoader {
Ok(ret)
}
fn gen_flags(mut flags_hashmap: serde_json::Map<String, serde_json::Value>) -> Flags {
let default_flags_json = serde_json::to_string(&Flags::default()).unwrap();
let default_flags_hashmap =
serde_json::from_str::<serde_json::Map<String, serde_json::Value>>(&default_flags_json)
.unwrap();
let mut merged_hashmap = serde_json::Map::new();
for (key, value) in default_flags_hashmap {
if let Some(v) = flags_hashmap.remove(&key) {
merged_hashmap.insert(key, v);
} else {
merged_hashmap.insert(key, value);
}
}
serde_json::from_value(serde_json::Value::Object(merged_hashmap)).unwrap()
}
}
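The `gen_flags` helper added above serializes the default `Flags` to a JSON map and overlays the user-supplied `[flags]` keys on top, so a partially specified table still picks up defaults for unset fields, while unknown keys are silently dropped. A minimal std-only sketch of that merge, using plain string maps in place of `serde_json` values (the `merge_flags` name is hypothetical):

```rust
use std::collections::BTreeMap;

/// Overlay user-provided flag values on top of defaults.
/// Only keys that exist in the defaults are kept, mirroring the
/// serde_json map merge in `gen_flags`.
fn merge_flags(
    defaults: &BTreeMap<String, String>,
    user: &BTreeMap<String, String>,
) -> BTreeMap<String, String> {
    defaults
        .iter()
        .map(|(k, dv)| (k.clone(), user.get(k).unwrap_or(dv).clone()))
        .collect()
}

fn main() {
    let defaults = BTreeMap::from([
        ("disable_p2p".to_string(), "false".to_string()),
        ("latency_first".to_string(), "false".to_string()),
    ]);
    let user = BTreeMap::from([
        ("latency_first".to_string(), "true".to_string()),
        ("unknown_flag".to_string(), "x".to_string()), // not a known flag: dropped
    ]);
    let merged = merge_flags(&defaults, &user);
    assert_eq!(merged["disable_p2p"], "false"); // unset key keeps its default
    assert_eq!(merged["latency_first"], "true"); // set key overrides the default
    assert!(!merged.contains_key("unknown_flag"));
}
```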
impl ConfigLoader for TomlConfigLoader {
@@ -297,16 +324,23 @@ impl ConfigLoader for TomlConfigLoader {
self.config.lock().unwrap().netns = ns;
}
fn get_ipv4(&self) -> Option<std::net::Ipv4Addr> {
fn get_ipv4(&self) -> Option<cidr::Ipv4Inet> {
let locked_config = self.config.lock().unwrap();
locked_config
.ipv4
.as_ref()
.map(|s| s.parse().ok())
.flatten()
.map(|c: cidr::Ipv4Inet| {
if c.network_length() == 32 {
cidr::Ipv4Inet::new(c.address(), 24).unwrap()
} else {
c
}
})
}
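`get_ipv4` now returns a `cidr::Ipv4Inet` and widens a bare host address (`/32`) to a `/24` network, which is why the updated test in this file expects `"10.144.144.10"` to read back as `"10.144.144.10/24"`. A std-only sketch of the same normalization, with an `(Ipv4Addr, u8)` pair standing in for `cidr::Ipv4Inet` (the `parse_ipv4_inet` helper is hypothetical):

```rust
use std::net::Ipv4Addr;

/// Parse "a.b.c.d" or "a.b.c.d/len"; a bare host address (/32) is
/// widened to its /24 network, as in the get_ipv4 change above.
fn parse_ipv4_inet(s: &str) -> Option<(Ipv4Addr, u8)> {
    let (addr_str, len) = match s.split_once('/') {
        Some((a, l)) => (a, l.parse().ok()?),
        None => (s, 32u8),
    };
    let addr: Ipv4Addr = addr_str.parse().ok()?;
    Some(if len == 32 { (addr, 24) } else { (addr, len) })
}

fn main() {
    // A config value without a prefix length becomes a /24.
    assert_eq!(
        parse_ipv4_inet("10.144.144.10"),
        Some(("10.144.144.10".parse().unwrap(), 24))
    );
    // An explicit prefix length is kept as-is.
    assert_eq!(
        parse_ipv4_inet("10.1.2.3/16"),
        Some(("10.1.2.3".parse().unwrap(), 16))
    );
}
```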
fn set_ipv4(&self, addr: Option<std::net::Ipv4Addr>) {
fn set_ipv4(&self, addr: Option<cidr::Ipv4Inet>) {
self.config.lock().unwrap().ipv4 = if let Some(addr) = addr {
Some(addr.to_string())
} else {
@@ -472,13 +506,13 @@ impl ConfigLoader for TomlConfigLoader {
self.config
.lock()
.unwrap()
.flags
.flags_struct
.clone()
.unwrap_or_default()
}
fn set_flags(&self, flags: Flags) {
self.config.lock().unwrap().flags = Some(flags);
self.config.lock().unwrap().flags_struct = Some(flags);
}
fn get_exit_nodes(&self) -> Vec<Ipv4Addr> {
@@ -563,7 +597,7 @@ level = "warn"
assert!(ret.is_ok());
let ret = ret.unwrap();
assert_eq!("10.144.144.10", ret.get_ipv4().unwrap().to_string());
assert_eq!("10.144.144.10/24", ret.get_ipv4().unwrap().to_string());
assert_eq!(
vec!["tcp://0.0.0.0:11010", "udp://0.0.0.0:11010"],
+9
View File
@@ -21,4 +21,13 @@ macro_rules! set_global_var {
define_global_var!(MANUAL_CONNECTOR_RECONNECT_INTERVAL_MS, u64, 1000);
define_global_var!(OSPF_UPDATE_MY_GLOBAL_FOREIGN_NETWORK_INTERVAL_SEC, u64, 10);
pub const UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID: u32 = 2;
pub const EASYTIER_VERSION: &str = git_version::git_version!(
args = ["--abbrev=8", "--always", "--dirty=~"],
prefix = concat!(env!("CARGO_PKG_VERSION"), "-"),
suffix = "",
fallback = env!("CARGO_PKG_VERSION")
);
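The new `EASYTIER_VERSION` constant composes the Cargo package version with an abbreviated git hash (`--abbrev=8`, with `~` appended when the tree is dirty), falling back to the bare package version when git metadata is unavailable. A sketch of the resulting string format (the `easytier_version` helper is hypothetical; the real value is computed at compile time by `git_version!`):

```rust
/// Compose the version string the way the git_version! invocation does:
/// "<pkg_version>-<abbrev-hash>[~]" when git info is available,
/// falling back to the bare package version otherwise.
fn easytier_version(pkg: &str, git: Option<(&str, bool)>) -> String {
    match git {
        Some((hash, dirty)) => {
            format!("{}-{}{}", pkg, hash, if dirty { "~" } else { "" })
        }
        None => pkg.to_string(),
    }
}

fn main() {
    assert_eq!(easytier_version("2.0.3", Some(("4dca25db", false))), "2.0.3-4dca25db");
    assert_eq!(easytier_version("2.0.3", Some(("4dca25db", true))), "2.0.3-4dca25db~");
    assert_eq!(easytier_version("2.0.3", None), "2.0.3");
}
```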
-2
View File
@@ -31,8 +31,6 @@ pub enum Error {
// RpcListenError(String),
#[error("Rpc connect error: {0}")]
RpcConnectError(String),
#[error("Rpc error: {0}")]
RpcClientError(#[from] tarpc::client::RpcError),
#[error("Timeout error: {0}")]
Timeout(#[from] tokio::time::error::Elapsed),
#[error("url in blacklist")]
+28 -8
View File
@@ -4,7 +4,8 @@ use std::{
sync::{Arc, Mutex},
};
use crate::rpc::PeerConnInfo;
use crate::proto::cli::PeerConnInfo;
use crate::proto::common::PeerFeatureFlag;
use crossbeam::atomic::AtomicCell;
use super::{
@@ -39,8 +40,8 @@ pub enum GlobalCtxEvent {
VpnPortalClientConnected(String, String), // (portal, client ip)
VpnPortalClientDisconnected(String, String), // (portal, client ip)
DhcpIpv4Changed(Option<std::net::Ipv4Addr>, Option<std::net::Ipv4Addr>), // (old, new)
DhcpIpv4Conflicted(Option<std::net::Ipv4Addr>),
DhcpIpv4Changed(Option<cidr::Ipv4Inet>, Option<cidr::Ipv4Inet>), // (old, new)
DhcpIpv4Conflicted(Option<cidr::Ipv4Inet>),
}
type EventBus = tokio::sync::broadcast::Sender<GlobalCtxEvent>;
@@ -55,7 +56,7 @@ pub struct GlobalCtx {
event_bus: EventBus,
cached_ipv4: AtomicCell<Option<std::net::Ipv4Addr>>,
cached_ipv4: AtomicCell<Option<cidr::Ipv4Inet>>,
cached_proxy_cidrs: AtomicCell<Option<Vec<cidr::IpCidr>>>,
ip_collector: Arc<IPCollector>,
@@ -68,6 +69,8 @@ pub struct GlobalCtx {
enable_exit_node: bool,
no_tun: bool,
feature_flags: AtomicCell<PeerFeatureFlag>,
}
impl std::fmt::Debug for GlobalCtx {
@@ -91,7 +94,7 @@ impl GlobalCtx {
let net_ns = NetNS::new(config_fs.get_netns());
let hostname = config_fs.get_hostname();
let (event_bus, _) = tokio::sync::broadcast::channel(100);
let (event_bus, _) = tokio::sync::broadcast::channel(1024);
let stun_info_collection = Arc::new(StunInfoCollector::new_with_default_servers());
@@ -119,6 +122,8 @@ impl GlobalCtx {
enable_exit_node,
no_tun,
feature_flags: AtomicCell::new(PeerFeatureFlag::default()),
}
}
@@ -134,7 +139,7 @@ impl GlobalCtx {
}
}
pub fn get_ipv4(&self) -> Option<std::net::Ipv4Addr> {
pub fn get_ipv4(&self) -> Option<cidr::Ipv4Inet> {
if let Some(ret) = self.cached_ipv4.load() {
return Some(ret);
}
@@ -143,7 +148,7 @@ impl GlobalCtx {
return addr;
}
pub fn set_ipv4(&self, addr: Option<std::net::Ipv4Addr>) {
pub fn set_ipv4(&self, addr: Option<cidr::Ipv4Inet>) {
self.config.set_ipv4(addr);
self.cached_ipv4.store(None);
}
@@ -179,6 +184,10 @@ impl GlobalCtx {
self.config.get_network_identity()
}
pub fn get_network_name(&self) -> String {
self.get_network_identity().network_name
}
pub fn get_ip_collector(&self) -> Arc<IPCollector> {
self.ip_collector.clone()
}
@@ -191,7 +200,6 @@ impl GlobalCtx {
self.stun_info_collection.as_ref()
}
#[cfg(test)]
pub fn replace_stun_info_collector(&self, collector: Box<dyn StunInfoCollectorTrait>) {
// force replace the stun_info_collection without mut and drop the old one
let ptr = &self.stun_info_collection as *const Box<dyn StunInfoCollectorTrait>;
@@ -219,6 +227,10 @@ impl GlobalCtx {
self.config.get_flags()
}
pub fn set_flags(&self, flags: Flags) {
self.config.set_flags(flags);
}
pub fn get_128_key(&self) -> [u8; 16] {
let mut key = [0u8; 16];
let secret = self
@@ -243,6 +255,14 @@ impl GlobalCtx {
pub fn no_tun(&self) -> bool {
self.no_tun
}
pub fn get_feature_flags(&self) -> PeerFeatureFlag {
self.feature_flags.load()
}
pub fn set_feature_flags(&self, flags: PeerFeatureFlag) {
self.feature_flags.store(flags);
}
}
#[cfg(test)]
+1
View File
@@ -14,6 +14,7 @@ pub mod global_ctx;
pub mod ifcfg;
pub mod netns;
pub mod network;
pub mod scoped_task;
pub mod stun;
pub mod stun_codec_ext;
+24 -16
View File
@@ -1,12 +1,13 @@
use std::{net::IpAddr, ops::Deref, sync::Arc};
use crate::rpc::peer::GetIpListResponse;
use pnet::datalink::NetworkInterface;
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
};
use crate::proto::peer_rpc::GetIpListResponse;
use super::{netns::NetNS, stun::StunInfoCollectorTrait};
pub const CACHED_IP_LIST_TIMEOUT_SEC: u64 = 60;
@@ -163,7 +164,7 @@ pub struct IPCollector {
impl IPCollector {
pub fn new<T: StunInfoCollectorTrait + 'static>(net_ns: NetNS, stun_info_collector: T) -> Self {
Self {
cached_ip_list: Arc::new(RwLock::new(GetIpListResponse::new())),
cached_ip_list: Arc::new(RwLock::new(GetIpListResponse::default())),
collect_ip_task: Mutex::new(JoinSet::new()),
net_ns,
stun_info_collector: Arc::new(Box::new(stun_info_collector)),
@@ -195,14 +196,18 @@ impl IPCollector {
let Ok(ip_addr) = ip.parse::<IpAddr>() else {
continue;
};
if ip_addr.is_ipv4() {
cached_ip_list.write().await.public_ipv4 = ip.clone();
} else {
cached_ip_list.write().await.public_ipv6 = ip.clone();
match ip_addr {
IpAddr::V4(v) => {
cached_ip_list.write().await.public_ipv4 = Some(v.into())
}
IpAddr::V6(v) => {
cached_ip_list.write().await.public_ipv6 = Some(v.into())
}
}
}
let sleep_sec = if !cached_ip_list.read().await.public_ipv4.is_empty() {
let sleep_sec = if cached_ip_list.read().await.public_ipv4.is_some() {
CACHED_IP_LIST_TIMEOUT_SEC
} else {
3
@@ -236,7 +241,7 @@ impl IPCollector {
#[tracing::instrument(skip(net_ns))]
async fn do_collect_local_ip_addrs(net_ns: NetNS) -> GetIpListResponse {
let mut ret = crate::rpc::peer::GetIpListResponse::new();
let mut ret = GetIpListResponse::default();
let ifaces = Self::collect_interfaces(net_ns.clone()).await;
let _g = net_ns.guard();
@@ -246,25 +251,28 @@ impl IPCollector {
if ip.is_loopback() || ip.is_multicast() {
continue;
}
if ip.is_ipv4() {
ret.interface_ipv4s.push(ip.to_string());
} else if ip.is_ipv6() {
ret.interface_ipv6s.push(ip.to_string());
match ip {
std::net::IpAddr::V4(v4) => {
ret.interface_ipv4s.push(v4.into());
}
std::net::IpAddr::V6(v6) => {
ret.interface_ipv6s.push(v6.into());
}
}
}
}
if let Ok(v4_addr) = local_ipv4().await {
tracing::trace!("got local ipv4: {}", v4_addr);
if !ret.interface_ipv4s.contains(&v4_addr.to_string()) {
ret.interface_ipv4s.push(v4_addr.to_string());
if !ret.interface_ipv4s.contains(&v4_addr.into()) {
ret.interface_ipv4s.push(v4_addr.into());
}
}
if let Ok(v6_addr) = local_ipv6().await {
tracing::trace!("got local ipv6: {}", v6_addr);
if !ret.interface_ipv6s.contains(&v6_addr.to_string()) {
ret.interface_ipv6s.push(v6_addr.to_string());
if !ret.interface_ipv6s.contains(&v6_addr.into()) {
ret.interface_ipv6s.push(v6_addr.into());
}
}
+134
View File
@@ -0,0 +1,134 @@
//! This crate provides a wrapper type of Tokio's JoinHandle: `ScopedTask`, which aborts the task when it's dropped.
//! `ScopedTask` can still be awaited to join the child-task, and abort-on-drop will still trigger while it is being awaited.
//!
//! For example, if task A spawned task B but is doing something else, and task B is waiting for task C to join,
//! aborting A will also abort both B and C.
use std::future::Future;
use std::ops::Deref;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::task::JoinHandle;
#[derive(Debug)]
pub struct ScopedTask<T> {
inner: JoinHandle<T>,
}
impl<T> Drop for ScopedTask<T> {
fn drop(&mut self) {
self.inner.abort()
}
}
impl<T> Future for ScopedTask<T> {
type Output = <JoinHandle<T> as Future>::Output;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
Pin::new(&mut self.inner).poll(cx)
}
}
impl<T> From<JoinHandle<T>> for ScopedTask<T> {
fn from(inner: JoinHandle<T>) -> Self {
Self { inner }
}
}
impl<T> Deref for ScopedTask<T> {
type Target = JoinHandle<T>;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
#[cfg(test)]
mod tests {
use super::ScopedTask;
use futures_util::future::pending;
use std::sync::{Arc, RwLock};
use tokio::task::yield_now;
struct Sentry(Arc<RwLock<bool>>);
impl Drop for Sentry {
fn drop(&mut self) {
*self.0.write().unwrap() = true
}
}
#[tokio::test]
async fn drop_while_not_waiting_for_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
drop(task);
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn drop_while_waiting_for_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let handle = tokio::spawn(async move {
ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}))
.await
.unwrap()
});
yield_now().await;
assert!(!*dropped.read().unwrap());
handle.abort();
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn no_drop_only_join() {
assert_eq!(
ScopedTask::from(tokio::spawn(async {
yield_now().await;
5
}))
.await
.unwrap(),
5
)
}
#[tokio::test]
async fn manually_abort_before_drop() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
task.abort();
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn manually_abort_then_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
task.abort();
yield_now().await;
assert!(task.await.is_err());
}
}
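`ScopedTask` gets abort-on-drop for free from tokio's `JoinHandle::abort`. The same stop-on-drop idea can be sketched with plain std threads, using a shared flag for cooperative cancellation instead of an abort (all names here are hypothetical; this is an analogue, not the crate's implementation):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Cooperative analogue of ScopedTask for plain threads:
/// dropping the guard raises a stop flag that the worker polls,
/// then joins the thread.
struct ScopedThread {
    stop: Arc<AtomicBool>,
    handle: Option<thread::JoinHandle<()>>,
}

impl ScopedThread {
    fn spawn<F: FnMut() + Send + 'static>(mut work: F) -> Self {
        let stop = Arc::new(AtomicBool::new(false));
        let flag = stop.clone();
        let handle = thread::spawn(move || loop {
            work(); // runs at least once before checking the flag
            if flag.load(Ordering::Relaxed) {
                break;
            }
            thread::sleep(Duration::from_millis(1));
        });
        Self { stop, handle: Some(handle) }
    }
}

impl Drop for ScopedThread {
    fn drop(&mut self) {
        // Signal the worker, then wait for it to observe the flag.
        self.stop.store(true, Ordering::Relaxed);
        if let Some(h) = self.handle.take() {
            let _ = h.join();
        }
    }
}

fn main() {
    let ran = Arc::new(AtomicBool::new(false));
    let r = ran.clone();
    let t = ScopedThread::spawn(move || r.store(true, Ordering::Relaxed));
    drop(t); // stops and joins the worker
    assert!(ran.load(Ordering::Relaxed));
}
```

Unlike `JoinHandle::abort`, this only stops the worker at its next flag check, so it is a sketch of the ownership pattern rather than of tokio's preemptive cancellation.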
+152 -50
View File
@@ -1,9 +1,10 @@
use std::collections::BTreeSet;
use std::net::{IpAddr, SocketAddr};
use std::sync::atomic::AtomicBool;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};
use crate::rpc::{NatType, StunInfo};
use crate::proto::common::{NatType, StunInfo};
use anyhow::Context;
use chrono::Local;
use crossbeam::atomic::AtomicCell;
@@ -55,6 +56,8 @@ impl HostResolverIter {
self.ips = ips
.filter(|x| x.is_ipv4())
.choose_multiple(&mut rand::thread_rng(), self.max_ip_per_domain as usize);
if self.ips.is_empty() {
    return self.next().await;
}
}
Err(e) => {
tracing::warn!(?host, ?e, "lookup host for stun failed");
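The hunk above fixes the panic reported in #402: after filtering DNS results down to IPv4, `self.ips` can end up empty when a STUN host only resolves to IPv6, so the iterator now moves on to the next host instead of panicking. A std-only sketch of that skip-on-empty behavior over pre-resolved data (the `first_usable_v4` helper is hypothetical):

```rust
use std::net::IpAddr;

/// Walk candidate STUN hosts, keeping only IPv4 results and skipping
/// hosts whose resolution yields no usable (IPv4) address.
fn first_usable_v4(resolved: &[(&str, Vec<IpAddr>)]) -> Option<IpAddr> {
    for (_host, ips) in resolved {
        if let Some(v4) = ips.iter().find(|ip| ip.is_ipv4()) {
            return Some(*v4);
        }
        // Nothing left after filtering: fall through to the next host.
    }
    None
}

fn main() {
    let v6_only: Vec<IpAddr> = vec!["::1".parse().unwrap()];
    let mixed: Vec<IpAddr> = vec!["::1".parse().unwrap(), "1.2.3.4".parse().unwrap()];
    let resolved = [("stun.v6only.example", v6_only), ("stun.mixed.example", mixed)];
    // The IPv6-only host is skipped rather than causing a panic.
    assert_eq!(first_usable_v4(&resolved), Some("1.2.3.4".parse().unwrap()));
}
```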
@@ -161,7 +164,7 @@ impl StunClient {
continue;
};
tracing::debug!(b = ?&udp_buf[..len], ?tids, ?remote_addr, ?stun_host, "recv stun response, msg: {:#?}", msg);
tracing::trace!(b = ?&udp_buf[..len], ?tids, ?remote_addr, ?stun_host, "recv stun response, msg: {:#?}", msg);
if msg.class() != MessageClass::SuccessResponse
|| msg.method() != BINDING
@@ -216,7 +219,7 @@ impl StunClient {
changed_addr
}
#[tracing::instrument(ret, err, level = Level::DEBUG)]
#[tracing::instrument(ret, level = Level::TRACE)]
pub async fn bind_request(
self,
change_ip: bool,
@@ -243,7 +246,7 @@ impl StunClient {
.encode_into_bytes(message.clone())
.with_context(|| "encode stun message")?;
tids.push(tid as u128);
tracing::debug!(?message, ?msg, tid, "send stun request");
tracing::trace!(?message, ?msg, tid, "send stun request");
self.socket
.send_to(msg.as_slice().into(), &stun_host)
.await?;
@@ -276,7 +279,7 @@ impl StunClient {
latency_us: now.elapsed().as_micros() as u32,
};
tracing::debug!(
tracing::trace!(
?stun_host,
?recv_addr,
?changed_socket_addr,
@@ -303,14 +306,14 @@ impl StunClientBuilder {
task_set.spawn(
async move {
let mut buf = [0; 1620];
tracing::info!("start stun packet listener");
tracing::trace!("start stun packet listener");
loop {
let Ok((len, addr)) = udp_clone.recv_from(&mut buf).await else {
tracing::error!("udp recv_from error");
break;
};
let data = buf[..len].to_vec();
tracing::debug!(?addr, ?data, "recv udp stun packet");
tracing::trace!(?addr, ?data, "recv udp stun packet");
let _ = stun_packet_sender_clone.send(StunPacket { data, addr });
}
}
@@ -342,6 +345,8 @@ impl StunClientBuilder {
pub struct UdpNatTypeDetectResult {
source_addr: SocketAddr,
stun_resps: Vec<BindRequestResponse>,
// if we are behind an easy symmetric nat, we need to test with another port to check inc or dec
extra_bind_test: Option<BindRequestResponse>,
}
impl UdpNatTypeDetectResult {
@@ -349,6 +354,7 @@ impl UdpNatTypeDetectResult {
Self {
source_addr,
stun_resps,
extra_bind_test: None,
}
}
@@ -405,7 +411,7 @@ impl UdpNatTypeDetectResult {
.filter_map(|x| x.mapped_socket_addr)
.collect::<BTreeSet<_>>()
.len();
mapped_addr_count < self.stun_server_count()
mapped_addr_count == 1
}
pub fn nat_type(&self) -> NatType {
@@ -428,7 +434,32 @@ impl UdpNatTypeDetectResult {
return NatType::PortRestricted;
}
} else if !self.stun_resps.is_empty() {
return NatType::Symmetric;
if self.public_ips().len() != 1
|| self.usable_stun_resp_count() <= 1
|| self.max_port() - self.min_port() > 15
|| self.extra_bind_test.is_none()
|| self
.extra_bind_test
.as_ref()
.unwrap()
.mapped_socket_addr
.is_none()
{
return NatType::Symmetric;
} else {
let extra_bind_test = self.extra_bind_test.as_ref().unwrap();
let extra_port = extra_bind_test.mapped_socket_addr.unwrap().port();
let max_port_diff = extra_port.saturating_sub(self.max_port());
let min_port_diff = self.min_port().saturating_sub(extra_port);
if max_port_diff != 0 && max_port_diff < 100 {
return NatType::SymmetricEasyInc;
} else if min_port_diff != 0 && min_port_diff < 100 {
return NatType::SymmetricEasyDec;
} else {
return NatType::Symmetric;
}
}
} else {
return NatType::Unknown;
}
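The hunk above classifies a symmetric NAT as "easy incremental" or "easy decremental" by comparing the mapped port of one extra bind (from a different local port) against the min/max mapped ports seen in the primary STUN round. A minimal, dependency-free sketch of that heuristic, with illustrative names that are not the crate's real API:

```rust
#[derive(Debug, PartialEq)]
enum NatKind {
    Symmetric,
    SymmetricEasyInc,
    SymmetricEasyDec,
}

/// `min_port`/`max_port` come from the primary STUN responses,
/// `extra_port` from the extra bind test on another local port.
/// A small positive delta above the max (or below the min) suggests
/// the NAT allocates mapped ports incrementally (or decrementally).
fn classify_symmetric(min_port: u16, max_port: u16, extra_port: u16) -> NatKind {
    let max_diff = extra_port.saturating_sub(max_port);
    let min_diff = min_port.saturating_sub(extra_port);
    if max_diff != 0 && max_diff < 100 {
        NatKind::SymmetricEasyInc
    } else if min_diff != 0 && min_diff < 100 {
        NatKind::SymmetricEasyDec
    } else {
        NatKind::Symmetric
    }
}
```

The 100-port window mirrors the threshold in the diff; anything farther away is treated as unpredictable (plain symmetric).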
@@ -476,6 +507,13 @@ impl UdpNatTypeDetectResult {
.max()
.unwrap_or(u16::MAX)
}
pub fn usable_stun_resp_count(&self) -> usize {
self.stun_resps
.iter()
.filter(|x| x.mapped_socket_addr.is_some())
.count()
}
}
pub struct UdpNatTypeDetector {
@@ -491,6 +529,19 @@ impl UdpNatTypeDetector {
}
}
async fn get_extra_bind_result(
&self,
source_port: u16,
stun_server: SocketAddr,
) -> Result<BindRequestResponse, Error> {
let udp = Arc::new(UdpSocket::bind(format!("0.0.0.0:{}", source_port)).await?);
let client_builder = StunClientBuilder::new(udp.clone());
client_builder
.new_stun_client(stun_server)
.bind_request(false, false)
.await
}
pub async fn detect_nat_type(&self, source_port: u16) -> Result<UdpNatTypeDetectResult, Error> {
let udp = Arc::new(UdpSocket::bind(format!("0.0.0.0:{}", source_port)).await?);
self.detect_nat_type_with_socket(udp).await
@@ -552,12 +603,15 @@ pub struct StunInfoCollector {
udp_nat_test_result: Arc<RwLock<Option<UdpNatTypeDetectResult>>>,
nat_test_result_time: Arc<AtomicCell<chrono::DateTime<Local>>>,
redetect_notify: Arc<tokio::sync::Notify>,
tasks: JoinSet<()>,
tasks: std::sync::Mutex<JoinSet<()>>,
started: AtomicBool,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for StunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
self.start_stun_routine();
let Some(result) = self.udp_nat_test_result.read().unwrap().clone() else {
return Default::default();
};
@@ -572,13 +626,30 @@ impl StunInfoCollectorTrait for StunInfoCollector {
}
async fn get_udp_port_mapping(&self, local_port: u16) -> Result<SocketAddr, Error> {
let stun_servers = self
self.start_stun_routine();
let mut stun_servers = self
.udp_nat_test_result
.read()
.unwrap()
.clone()
.map(|x| x.collect_available_stun_server())
.ok_or(Error::NotFound)?;
.unwrap_or(vec![]);
if stun_servers.is_empty() {
let mut host_resolver =
HostResolverIter::new(self.stun_servers.read().unwrap().clone(), 2);
while let Some(addr) = host_resolver.next().await {
stun_servers.push(addr);
if stun_servers.len() >= 2 {
break;
}
}
}
if stun_servers.is_empty() {
return Err(Error::NotFound);
}
let udp = Arc::new(UdpSocket::bind(format!("0.0.0.0:{}", local_port)).await?);
let mut client_builder = StunClientBuilder::new(udp.clone());
@@ -605,17 +676,14 @@ impl StunInfoCollectorTrait for StunInfoCollector {
impl StunInfoCollector {
pub fn new(stun_servers: Vec<String>) -> Self {
let mut ret = Self {
Self {
stun_servers: Arc::new(RwLock::new(stun_servers)),
udp_nat_test_result: Arc::new(RwLock::new(None)),
nat_test_result_time: Arc::new(AtomicCell::new(Local::now())),
redetect_notify: Arc::new(tokio::sync::Notify::new()),
tasks: JoinSet::new(),
};
ret.start_stun_routine();
ret
tasks: std::sync::Mutex::new(JoinSet::new()),
started: AtomicBool::new(false),
}
}
pub fn new_with_default_servers() -> Self {
@@ -627,9 +695,9 @@ impl StunInfoCollector {
// cross-nation stun servers may return an external ip address with high latency and loss rate
vec![
"stun.miwifi.com",
"stun.cdnbye.com",
"stun.hitv.com",
"stun.chat.bilibili.com",
"stun.hitv.com",
"stun.cdnbye.com",
"stun.douyucdn.cn:18000",
"fwa.lifesizecloud.com",
"global.turn.twilio.com",
@@ -648,12 +716,18 @@ impl StunInfoCollector {
.collect()
}
fn start_stun_routine(&mut self) {
fn start_stun_routine(&self) {
if self.started.load(std::sync::atomic::Ordering::Relaxed) {
return;
}
self.started
.store(true, std::sync::atomic::Ordering::Relaxed);
let stun_servers = self.stun_servers.clone();
let udp_nat_test_result = self.udp_nat_test_result.clone();
let udp_test_time = self.nat_test_result_time.clone();
let redetect_notify = self.redetect_notify.clone();
self.tasks.spawn(async move {
self.tasks.lock().unwrap().spawn(async move {
loop {
let servers = stun_servers.read().unwrap().clone();
// use the first three and randomly choose one from the rest
@@ -664,38 +738,41 @@ impl StunInfoCollector {
.map(|x| x.to_string())
.collect();
let detector = UdpNatTypeDetector::new(servers, 1);
let ret = detector.detect_nat_type(0).await;
let mut ret = detector.detect_nat_type(0).await;
tracing::debug!(?ret, "finish udp nat type detect");
let mut nat_type = NatType::Unknown;
let sleep_sec = match &ret {
Ok(resp) => {
*udp_nat_test_result.write().unwrap() = Some(resp.clone());
udp_test_time.store(Local::now());
nat_type = resp.nat_type();
if nat_type == NatType::Unknown {
15
} else {
600
}
}
_ => 15,
};
if let Ok(resp) = &ret {
tracing::debug!(?resp, "got udp nat type detect result");
nat_type = resp.nat_type();
}
// if nat type is symmetric, detect with another port to gather more info
if nat_type == NatType::Symmetric {
let old_resp = ret.unwrap();
let old_local_port = old_resp.local_addr().port();
let new_port = if old_local_port >= 65535 {
old_local_port - 1
} else {
old_local_port + 1
};
let ret = detector.detect_nat_type(new_port).await;
tracing::debug!(?ret, "finish udp nat type detect with another port");
if let Ok(resp) = ret {
udp_nat_test_result.write().unwrap().as_mut().map(|x| {
x.extend_result(resp);
});
let old_resp = ret.as_mut().unwrap();
tracing::debug!(?old_resp, "start get extra bind result");
let available_stun_servers = old_resp.collect_available_stun_server();
for server in available_stun_servers.iter() {
let ret = detector
.get_extra_bind_result(0, *server)
.await
.with_context(|| "get extra bind result failed");
tracing::debug!(?ret, "finish udp nat type detect with another port");
if let Ok(resp) = ret {
old_resp.extra_bind_test = Some(resp);
break;
}
}
}
let mut sleep_sec = 10;
if let Ok(resp) = &ret {
udp_test_time.store(Local::now());
*udp_nat_test_result.write().unwrap() = Some(resp.clone());
if nat_type != NatType::Unknown
&& (nat_type != NatType::Symmetric || resp.extra_bind_test.is_some())
{
sleep_sec = 600
}
}
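The refactor above replaces eager startup in `new()` with lazy startup guarded by a `started: AtomicBool`, so the STUN routine is spawned on first use. A hedged sketch of that pattern (using `compare_exchange` instead of separate load/store, which guarantees at-most-once startup even under concurrent first calls; names here are illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Lazy-start guard: the first caller of `start` wins and would spawn the
// background routine; every later call is a cheap no-op.
struct LazyRoutine {
    started: AtomicBool,
}

impl LazyRoutine {
    fn new() -> Self {
        Self { started: AtomicBool::new(false) }
    }

    /// Returns true only for the caller that actually starts the routine.
    fn start(&self) -> bool {
        self.started
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
    }
}
```

Taking `&self` (not `&mut self`) is what lets call sites like `get_stun_info(&self)` trigger startup, which is why the diff also moves the `JoinSet` behind a `Mutex`.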
@@ -712,6 +789,31 @@ impl StunInfoCollector {
}
}
pub struct MockStunInfoCollector {
pub udp_nat_type: NatType,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for MockStunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
StunInfo {
udp_nat_type: self.udp_nat_type as i32,
tcp_nat_type: NatType::Unknown as i32,
last_update_time: std::time::Instant::now().elapsed().as_secs() as i64,
min_port: 100,
max_port: 200,
public_ip: vec!["127.0.0.1".to_string()],
}
}
async fn get_udp_port_mapping(&self, mut port: u16) -> Result<std::net::SocketAddr, Error> {
if port == 0 {
port = 40144;
}
Ok(format!("127.0.0.1:{}", port).parse().unwrap())
}
}
#[cfg(test)]
mod tests {
use super::*;
+171 -97
@@ -1,13 +1,25 @@
// try to connect to peers directly, with either their public ip or lan ip
use std::{net::SocketAddr, sync::Arc};
use std::{net::SocketAddr, sync::Arc, time::Duration};
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, PeerId},
peers::{peer_manager::PeerManager, peer_rpc::PeerRpcManager},
peers::{
peer_manager::PeerManager, peer_rpc::PeerRpcManager,
peer_rpc_service::DirectConnectorManagerRpcServer,
},
proto::{
peer_rpc::{
DirectConnectorRpc, DirectConnectorRpcClientFactory, DirectConnectorRpcServer,
GetIpListRequest, GetIpListResponse,
},
rpc_types::controller::BaseController,
},
};
use crate::rpc::{peer::GetIpListResponse, PeerConnInfo};
use crate::proto::cli::PeerConnInfo;
use anyhow::Context;
use rand::Rng;
use tokio::{task::JoinSet, time::timeout};
use tracing::Instrument;
use url::Host;
@@ -17,11 +29,6 @@ use super::create_connector_by_url;
pub const DIRECT_CONNECTOR_SERVICE_ID: u32 = 1;
pub const DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC: u64 = 300;
#[tarpc::service]
pub trait DirectConnectorRpc {
async fn get_ip_list() -> GetIpListResponse;
}
#[async_trait::async_trait]
pub trait PeerManagerForDirectConnector {
async fn list_peers(&self) -> Vec<PeerId>;
@@ -35,7 +42,10 @@ impl PeerManagerForDirectConnector for PeerManager {
let mut ret = vec![];
let routes = self.list_routes().await;
for r in routes.iter() {
for r in routes
.iter()
.filter(|r| r.feature_flag.map(|r| !r.is_public_server).unwrap_or(true))
{
ret.push(r.peer_id);
}
@@ -51,38 +61,17 @@ impl PeerManagerForDirectConnector for PeerManager {
}
}
#[derive(Clone)]
struct DirectConnectorManagerRpcServer {
// TODO: this only caches for one src peer, should make it global
global_ctx: ArcGlobalCtx,
}
#[tarpc::server]
impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
async fn get_ip_list(self, _: tarpc::context::Context) -> GetIpListResponse {
let mut ret = self.global_ctx.get_ip_collector().collect_ip_addrs().await;
ret.listeners = self.global_ctx.get_running_listeners();
ret
}
}
impl DirectConnectorManagerRpcServer {
pub fn new(global_ctx: ArcGlobalCtx) -> Self {
Self { global_ctx }
}
}
#[derive(Hash, Eq, PartialEq, Clone)]
struct DstBlackListItem(PeerId, String);
#[derive(Hash, Eq, PartialEq, Clone)]
struct DstSchemeBlackListItem(PeerId, String);
struct DstListenerUrlBlackListItem(PeerId, url::Url);
struct DirectConnectorManagerData {
global_ctx: ArcGlobalCtx,
peer_manager: Arc<PeerManager>,
dst_blacklist: timedmap::TimedMap<DstBlackListItem, ()>,
dst_sceme_blacklist: timedmap::TimedMap<DstSchemeBlackListItem, ()>,
dst_listener_blacklist: timedmap::TimedMap<DstListenerUrlBlackListItem, ()>,
}
impl DirectConnectorManagerData {
@@ -91,7 +80,7 @@ impl DirectConnectorManagerData {
global_ctx,
peer_manager,
dst_blacklist: timedmap::TimedMap::new(),
dst_sceme_blacklist: timedmap::TimedMap::new(),
dst_listener_blacklist: timedmap::TimedMap::new(),
}
}
}
@@ -130,10 +119,17 @@ impl DirectConnectorManager {
}
pub fn run_as_server(&mut self) {
self.data.peer_manager.get_peer_rpc_mgr().run_service(
DIRECT_CONNECTOR_SERVICE_ID,
DirectConnectorManagerRpcServer::new(self.global_ctx.clone()).serve(),
);
self.data
.peer_manager
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
DirectConnectorRpcServer::new(DirectConnectorManagerRpcServer::new(
self.global_ctx.clone(),
)),
&self.data.global_ctx.get_network_name(),
);
}
pub fn run_as_client(&mut self) {
@@ -152,7 +148,7 @@ impl DirectConnectorManager {
}
while let Some(task_ret) = tasks.join_next().await {
tracing::trace!(?task_ret, "direct connect task ret");
tracing::debug!(?task_ret, ?my_peer_id, "direct connect task ret");
}
tokio::time::sleep(std::time::Duration::from_secs(5)).await;
}
@@ -173,7 +169,7 @@ impl DirectConnectorManager {
.dst_blacklist
.contains(&DstBlackListItem(dst_peer_id.clone(), addr.clone()))
{
tracing::trace!("try_connect_to_ip failed, addr in blacklist: {}", addr);
tracing::debug!("try_connect_to_ip failed, addr in blacklist: {}", addr);
return Err(Error::UrlInBlacklist);
}
@@ -208,24 +204,38 @@ impl DirectConnectorManager {
dst_peer_id: PeerId,
addr: String,
) -> Result<(), Error> {
let ret = Self::do_try_connect_to_ip(data.clone(), dst_peer_id, addr.clone()).await;
if let Err(e) = ret {
if !matches!(e, Error::UrlInBlacklist) {
tracing::info!(
"try_connect_to_ip failed: {:?}, peer_id: {}",
e,
dst_peer_id
);
let mut rand_gen = rand::rngs::OsRng::default();
let backoff_ms = vec![1000, 2000, 4000];
let mut backoff_idx = 0;
loop {
let ret = Self::do_try_connect_to_ip(data.clone(), dst_peer_id, addr.clone()).await;
tracing::debug!(?ret, ?dst_peer_id, ?addr, "try_connect_to_ip return");
if matches!(ret, Err(Error::UrlInBlacklist) | Ok(_)) {
return ret;
}
if backoff_idx < backoff_ms.len() {
let delta = backoff_ms[backoff_idx] >> 1;
assert!(delta > 0);
assert!(delta < backoff_ms[backoff_idx]);
tokio::time::sleep(Duration::from_millis(
(backoff_ms[backoff_idx] + rand_gen.gen_range(-delta..delta)) as u64,
))
.await;
backoff_idx += 1;
continue;
} else {
data.dst_blacklist.insert(
DstBlackListItem(dst_peer_id.clone(), addr.clone()),
(),
std::time::Duration::from_secs(DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC),
);
return ret;
}
return Err(e);
} else {
tracing::info!("try_connect_to_ip success, peer_id: {}", dst_peer_id);
return Ok(());
}
}
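The retry loop above sleeps `base + U(-base/2, base/2)` milliseconds between attempts (bases 1000/2000/4000 ms), so peers retrying the same address do not synchronize, and only blacklists the address after all backoff steps fail. A minimal sketch of that jitter computation, using a toy LCG to stay dependency-free (the diff uses `rand::rngs::OsRng`):

```rust
// Compute one jittered backoff delay. `base_ms` is the nominal delay for
// this attempt; the result is uniformly spread over [base/2, 3*base/2).
fn jittered_delay(base_ms: i64, rng_state: &mut u64) -> u64 {
    // Toy linear congruential generator (illustrative only).
    *rng_state = rng_state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    let delta = base_ms / 2; // same half-width the retry loop uses
    // jitter is in [-delta, delta)
    let jitter = ((*rng_state >> 33) as i64) % (2 * delta) - delta;
    (base_ms + jitter) as u64
}
```

Because `delta` is half of `base_ms`, the delay is always positive, matching the `assert!(delta > 0)` / `assert!(delta < backoff_ms[...])` invariants in the diff.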
@@ -235,21 +245,25 @@ impl DirectConnectorManager {
dst_peer_id: PeerId,
ip_list: GetIpListResponse,
) -> Result<(), Error> {
data.dst_listener_blacklist.cleanup();
let enable_ipv6 = data.global_ctx.get_flags().enable_ipv6;
let available_listeners = ip_list
.listeners
.iter()
.into_iter()
.map(Into::<url::Url>::into)
.filter_map(|l| if l.scheme() != "ring" { Some(l) } else { None })
.filter(|l| l.port().is_some() && l.host().is_some())
.filter(|l| {
!data.dst_sceme_blacklist.contains(&DstSchemeBlackListItem(
dst_peer_id.clone(),
l.scheme().to_string(),
))
!data
.dst_listener_blacklist
.contains(&DstListenerUrlBlackListItem(dst_peer_id.clone(), l.clone()))
})
.filter(|l| enable_ipv6 || !matches!(l.host().unwrap().to_owned(), Host::Ipv6(_)))
.collect::<Vec<_>>();
tracing::debug!(?available_listeners, "got available listeners");
let mut listener = available_listeners.get(0).ok_or(anyhow::anyhow!(
"peer {} have no valid listener",
dst_peer_id
@@ -268,46 +282,84 @@ impl DirectConnectorManager {
Some(SocketAddr::V4(_)) => {
ip_list.interface_ipv4s.iter().for_each(|ip| {
let mut addr = (*listener).clone();
if addr.set_host(Some(ip.as_str())).is_ok() {
if addr.set_host(Some(ip.to_string().as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
} else {
tracing::error!(
?ip,
?listener,
?dst_peer_id,
"failed to set host for interface ipv4"
);
}
});
let mut addr = (*listener).clone();
if addr.set_host(Some(ip_list.public_ipv4.as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
if let Some(public_ipv4) = ip_list.public_ipv4 {
let mut addr = (*listener).clone();
if addr
.set_host(Some(public_ipv4.to_string().as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
} else {
tracing::error!(
?public_ipv4,
?listener,
?dst_peer_id,
"failed to set host for public ipv4"
);
}
}
}
Some(SocketAddr::V6(_)) => {
ip_list.interface_ipv6s.iter().for_each(|ip| {
let mut addr = (*listener).clone();
if addr.set_host(Some(format!("[{}]", ip).as_str())).is_ok() {
if addr
.set_host(Some(format!("[{}]", ip.to_string()).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
} else {
tracing::error!(
?ip,
?listener,
?dst_peer_id,
"failed to set host for interface ipv6"
);
}
});
let mut addr = (*listener).clone();
if addr
.set_host(Some(format!("[{}]", ip_list.public_ipv6).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
if let Some(public_ipv6) = ip_list.public_ipv6 {
let mut addr = (*listener).clone();
if addr
.set_host(Some(format!("[{}]", public_ipv6.to_string()).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
} else {
tracing::error!(
?public_ipv6,
?listener,
?dst_peer_id,
"failed to set host for public ipv6"
);
}
}
}
p => {
@@ -317,16 +369,28 @@ impl DirectConnectorManager {
let mut has_succ = false;
while let Some(ret) = tasks.join_next().await {
if let Err(e) = ret {
tracing::error!("join direct connect task failed: {:?}", e);
} else if let Ok(Ok(_)) = ret {
has_succ = true;
match ret {
Ok(Ok(_)) => {
has_succ = true;
tracing::info!(
?dst_peer_id,
?listener,
"try direct connect to peer success"
);
break;
}
Ok(Err(e)) => {
tracing::info!(?e, "try direct connect to peer failed");
}
Err(e) => {
tracing::error!(?e, "try direct connect to peer task join failed");
}
}
}
if !has_succ {
data.dst_sceme_blacklist.insert(
DstSchemeBlackListItem(dst_peer_id.clone(), listener.scheme().to_string()),
data.dst_listener_blacklist.insert(
DstListenerUrlBlackListItem(dst_peer_id.clone(), listener.clone()),
(),
std::time::Duration::from_secs(DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC),
);
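The change above swaps the per-scheme blacklist for a per-listener-URL blacklist held in a `timedmap::TimedMap`, whose entries expire after `DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC` (300 s) so failed addresses are retried eventually. A minimal stand-in for that TTL-map behavior using only the standard library (illustrative, not the `timedmap` crate's API):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// A map whose entries stop matching once their TTL elapses.
struct TimedBlacklist<K> {
    entries: HashMap<K, Instant>, // key -> expiry deadline
}

impl<K: std::hash::Hash + Eq> TimedBlacklist<K> {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }
    fn insert(&mut self, key: K, ttl: Duration) {
        self.entries.insert(key, Instant::now() + ttl);
    }
    fn contains(&self, key: &K) -> bool {
        self.entries.get(key).map_or(false, |d| Instant::now() < *d)
    }
    // Drop expired entries, like the `cleanup()` call in the diff.
    fn cleanup(&mut self) {
        let now = Instant::now();
        self.entries.retain(|_, d| *d > now);
    }
}
```

Keying on the full URL rather than the scheme is the point of the fix: one dead `tcp://` listener no longer blacklists every other `tcp://` listener the peer advertises.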
@@ -349,18 +413,23 @@ impl DirectConnectorManager {
}
}
tracing::trace!("try direct connect to peer: {}", dst_peer_id);
tracing::debug!("try direct connect to peer: {}", dst_peer_id);
let ip_list = peer_manager
let rpc_stub = peer_manager
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, dst_peer_id, |c| async {
let client =
DirectConnectorRpcClient::new(tarpc::client::Config::default(), c).spawn();
let ip_list = client.get_ip_list(tarpc::context::current()).await;
tracing::info!(ip_list = ?ip_list, dst_peer_id = ?dst_peer_id, "got ip list");
ip_list
})
.await?;
.rpc_client()
.scoped_client::<DirectConnectorRpcClientFactory<BaseController>>(
peer_manager.my_peer_id(),
dst_peer_id,
data.global_ctx.get_network_name(),
);
let ip_list = rpc_stub
.get_ip_list(BaseController::default(), GetIpListRequest {})
.await
.with_context(|| format!("get ip list from peer {}", dst_peer_id))?;
tracing::info!(ip_list = ?ip_list, dst_peer_id = ?dst_peer_id, "got ip list");
Self::do_try_direct_connect_internal(data, dst_peer_id, ip_list).await
}
@@ -373,14 +442,14 @@ mod tests {
use crate::{
connector::direct::{
DirectConnectorManager, DirectConnectorManagerData, DstBlackListItem,
DstSchemeBlackListItem,
DstListenerUrlBlackListItem,
},
instance::listeners::ListenerManager,
peers::tests::{
connect_peer_manager, create_mock_peer_manager, wait_route_appear,
wait_route_appear_with_cost,
},
rpc::peer::GetIpListResponse,
proto::peer_rpc::GetIpListResponse,
};
#[rstest::rstest]
@@ -436,20 +505,25 @@ mod tests {
p_a.get_global_ctx(),
p_a.clone(),
));
let mut ip_list = GetIpListResponse::new();
let mut ip_list = GetIpListResponse::default();
ip_list
.listeners
.push("tcp://127.0.0.1:10222".parse().unwrap());
ip_list.interface_ipv4s.push("127.0.0.1".to_string());
ip_list
.interface_ipv4s
.push("127.0.0.1".parse::<std::net::Ipv4Addr>().unwrap().into());
DirectConnectorManager::do_try_direct_connect_internal(data.clone(), 1, ip_list.clone())
.await
.unwrap();
assert!(data
.dst_sceme_blacklist
.contains(&DstSchemeBlackListItem(1, "tcp".into())));
.dst_listener_blacklist
.contains(&DstListenerUrlBlackListItem(
1,
"tcp://127.0.0.1:10222".parse().unwrap()
)));
assert!(data
.dst_blacklist
+45 -36
@@ -11,7 +11,12 @@ use tokio::{
use crate::{
common::PeerId,
peers::peer_conn::PeerConnId,
rpc as easytier_rpc,
proto::{
cli::{
ConnectorManageAction, ListConnectorResponse, ManageConnectorResponse, PeerConnInfo,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{IpVersion, TunnelConnector},
};
@@ -23,9 +28,9 @@ use crate::{
},
connector::set_bind_addr_for_peer_connector,
peers::peer_manager::PeerManager,
rpc::{
connector_manage_rpc_server::ConnectorManageRpc, Connector, ConnectorStatus,
ListConnectorRequest, ManageConnectorRequest,
proto::cli::{
Connector, ConnectorManageRpc, ConnectorStatus, ListConnectorRequest,
ManageConnectorRequest,
},
use_global_var,
};
@@ -105,12 +110,18 @@ impl ManualConnectorManager {
Ok(())
}
pub async fn remove_connector(&self, url: &str) -> Result<(), Error> {
pub async fn remove_connector(&self, url: url::Url) -> Result<(), Error> {
tracing::info!("remove_connector: {}", url);
if !self.list_connectors().await.iter().any(|x| x.url == url) {
let url = url.into();
if !self
.list_connectors()
.await
.iter()
.any(|x| x.url.as_ref() == Some(&url))
{
return Err(Error::NotFound);
}
self.data.removed_conn_urls.insert(url.into());
self.data.removed_conn_urls.insert(url.to_string());
Ok(())
}
@@ -137,7 +148,7 @@ impl ManualConnectorManager {
ret.insert(
0,
Connector {
url: conn_url,
url: Some(conn_url.parse().unwrap()),
status: status.into(),
},
);
@@ -154,7 +165,7 @@ impl ManualConnectorManager {
ret.insert(
0,
Connector {
url: conn_url,
url: Some(conn_url.parse().unwrap()),
status: ConnectorStatus::Connecting.into(),
},
);
@@ -213,14 +224,14 @@ impl ManualConnectorManager {
}
async fn handle_event(event: &GlobalCtxEvent, data: &ConnectorManagerData) {
let need_add_alive = |conn_info: &easytier_rpc::PeerConnInfo| conn_info.is_client;
let need_add_alive = |conn_info: &PeerConnInfo| conn_info.is_client;
match event {
GlobalCtxEvent::PeerConnAdded(conn_info) => {
if !need_add_alive(conn_info) {
return;
}
let addr = conn_info.tunnel.as_ref().unwrap().remote_addr.clone();
data.alive_conn_urls.insert(addr);
data.alive_conn_urls.insert(addr.unwrap().to_string());
tracing::warn!("peer conn added: {:?}", conn_info);
}
@@ -229,7 +240,7 @@ impl ManualConnectorManager {
return;
}
let addr = conn_info.tunnel.as_ref().unwrap().remote_addr.clone();
data.alive_conn_urls.remove(&addr);
data.alive_conn_urls.remove(&addr.unwrap().to_string());
tracing::warn!("peer conn removed: {:?}", conn_info);
}
@@ -303,7 +314,7 @@ impl ManualConnectorManager {
tracing::info!("reconnect get tunnel succ: {:?}", tunnel);
assert_eq!(
dead_url,
tunnel.info().unwrap().remote_addr,
tunnel.info().unwrap().remote_addr.unwrap().to_string(),
"info: {:?}",
tunnel.info()
);
@@ -385,45 +396,43 @@ impl ManualConnectorManager {
}
}
#[derive(Clone)]
pub struct ConnectorManagerRpcService(pub Arc<ManualConnectorManager>);
#[tonic::async_trait]
#[async_trait::async_trait]
impl ConnectorManageRpc for ConnectorManagerRpcService {
type Controller = BaseController;
async fn list_connector(
&self,
_request: tonic::Request<ListConnectorRequest>,
) -> Result<tonic::Response<easytier_rpc::ListConnectorResponse>, tonic::Status> {
let mut ret = easytier_rpc::ListConnectorResponse::default();
_: BaseController,
_request: ListConnectorRequest,
) -> Result<ListConnectorResponse, rpc_types::error::Error> {
let mut ret = ListConnectorResponse::default();
let connectors = self.0.list_connectors().await;
ret.connectors = connectors;
Ok(tonic::Response::new(ret))
Ok(ret)
}
async fn manage_connector(
&self,
request: tonic::Request<ManageConnectorRequest>,
) -> Result<tonic::Response<easytier_rpc::ManageConnectorResponse>, tonic::Status> {
let req = request.into_inner();
let url = url::Url::parse(&req.url)
.map_err(|_| tonic::Status::invalid_argument("invalid url"))?;
if req.action == easytier_rpc::ConnectorManageAction::Remove as i32 {
self.0.remove_connector(url.path()).await.map_err(|e| {
tonic::Status::invalid_argument(format!("remove connector failed: {:?}", e))
})?;
return Ok(tonic::Response::new(
easytier_rpc::ManageConnectorResponse::default(),
));
_: BaseController,
req: ManageConnectorRequest,
) -> Result<ManageConnectorResponse, rpc_types::error::Error> {
let url: url::Url = req.url.ok_or(anyhow::anyhow!("url is empty"))?.into();
if req.action == ConnectorManageAction::Remove as i32 {
self.0
.remove_connector(url.clone())
.await
.with_context(|| format!("remove connector failed: {:?}", url))?;
return Ok(ManageConnectorResponse::default());
} else {
self.0
.add_connector_by_url(url.as_str())
.await
.map_err(|e| {
tonic::Status::invalid_argument(format!("add connector failed: {:?}", e))
})?;
.with_context(|| format!("add connector failed: {:?}", url))?;
}
Ok(tonic::Response::new(
easytier_rpc::ManageConnectorResponse::default(),
))
Ok(ManageConnectorResponse::default())
}
}
+2 -2
@@ -32,14 +32,14 @@ async fn set_bind_addr_for_peer_connector(
if is_ipv4 {
let mut bind_addrs = vec![];
for ipv4 in ips.interface_ipv4s {
let socket_addr = SocketAddrV4::new(ipv4.parse().unwrap(), 0).into();
let socket_addr = SocketAddrV4::new(ipv4.into(), 0).into();
bind_addrs.push(socket_addr);
}
connector.set_bind_addrs(bind_addrs);
} else {
let mut bind_addrs = vec![];
for ipv6 in ips.interface_ipv6s {
let socket_addr = SocketAddrV6::new(ipv6.parse().unwrap(), 0, 0, 0).into();
let socket_addr = SocketAddrV6::new(ipv6.into(), 0, 0, 0).into();
bind_addrs.push(socket_addr);
}
connector.set_bind_addrs(bind_addrs);
File diff suppressed because it is too large
@@ -0,0 +1,399 @@
use std::{
net::{IpAddr, SocketAddr, SocketAddrV4},
sync::Arc,
time::{Duration, Instant},
};
use anyhow::Context;
use tokio::sync::Mutex;
use crate::{
common::{scoped_task::ScopedTask, stun::StunInfoCollectorTrait, PeerId},
connector::udp_hole_punch::common::{
try_connect_with_socket, UdpHolePunchListener, HOLE_PUNCH_PACKET_BODY_LEN,
},
peers::peer_manager::PeerManager,
proto::{
peer_rpc::{
SendPunchPacketBothEasySymRequest, SendPunchPacketBothEasySymResponse,
UdpHolePunchRpcClientFactory,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{udp::new_hole_punch_packet, Tunnel},
};
use super::common::{PunchHoleServerCommon, UdpNatType, UdpSocketArray};
const UDP_ARRAY_SIZE_FOR_BOTH_EASY_SYM: usize = 25;
const DST_PORT_OFFSET: u16 = 20;
const REMOTE_WAIT_TIME_MS: u64 = 5000;
pub(crate) struct PunchBothEasySymHoleServer {
common: Arc<PunchHoleServerCommon>,
task: Mutex<Option<ScopedTask<()>>>,
}
impl PunchBothEasySymHoleServer {
pub(crate) fn new(common: Arc<PunchHoleServerCommon>) -> Self {
Self {
common,
task: Mutex::new(None),
}
}
// hard sym means public port is random and cannot be predicted
#[tracing::instrument(skip(self), ret, err)]
pub(crate) async fn send_punch_packet_both_easy_sym(
&self,
request: SendPunchPacketBothEasySymRequest,
) -> Result<SendPunchPacketBothEasySymResponse, rpc_types::error::Error> {
tracing::info!("send_punch_packet_both_easy_sym start");
let busy_resp = Ok(SendPunchPacketBothEasySymResponse {
is_busy: true,
..Default::default()
});
let Ok(mut locked_task) = self.task.try_lock() else {
return busy_resp;
};
if locked_task.is_some() && !locked_task.as_ref().unwrap().is_finished() {
return busy_resp;
}
let global_ctx = self.common.get_global_ctx();
let cur_mapped_addr = global_ctx
.get_stun_info_collector()
.get_udp_port_mapping(0)
.await
.with_context(|| "failed to get udp port mapping")?;
tracing::info!("send_punch_packet_hard_sym start");
let socket_count = request.udp_socket_count as usize;
let public_ips = request
.public_ip
.ok_or(anyhow::anyhow!("public_ip is required"))?;
let transaction_id = request.transaction_id;
let udp_array =
UdpSocketArray::new(socket_count, self.common.get_global_ctx().net_ns.clone());
udp_array.start().await?;
udp_array.add_intreast_tid(transaction_id);
let peer_mgr = self.common.get_peer_mgr();
let punch_packet =
new_hole_punch_packet(transaction_id, HOLE_PUNCH_PACKET_BODY_LEN).into_bytes();
let mut punched = vec![];
let common = self.common.clone();
let task = tokio::spawn(async move {
let mut listeners = Vec::new();
let start_time = Instant::now();
let wait_time_ms = request.wait_time_ms.min(8000);
while start_time.elapsed() < Duration::from_millis(wait_time_ms as u64) {
if let Err(e) = udp_array
.send_with_all(
&punch_packet,
SocketAddr::V4(SocketAddrV4::new(
public_ips.into(),
request.dst_port_num as u16,
)),
)
.await
{
tracing::error!(?e, "failed to send hole punch packet");
break;
}
tokio::time::sleep(Duration::from_millis(100)).await;
if let Some(s) = udp_array.try_fetch_punched_socket(transaction_id) {
tracing::info!(?s, ?transaction_id, "got punched socket in both easy sym");
assert!(Arc::strong_count(&s.socket) == 1);
let Some(port) = s.socket.local_addr().ok().map(|addr| addr.port()) else {
tracing::warn!("failed to get local addr from punched socket");
continue;
};
let remote_addr = s.remote_addr;
drop(s);
let listener =
match UdpHolePunchListener::new_ext(peer_mgr.clone(), false, Some(port))
.await
{
Ok(l) => l,
Err(e) => {
tracing::warn!(?e, "failed to create listener");
continue;
}
};
punched.push((listener.get_socket().await, remote_addr));
listeners.push(listener);
}
// if any listener is punched, we can break the loop
for l in &listeners {
if l.get_conn_count().await > 0 {
tracing::info!(?l, "got punched listener");
break;
}
}
if !punched.is_empty() {
tracing::debug!(?punched, "got punched socket and keep sending punch packet");
}
for p in &punched {
let (socket, remote_addr) = p;
let send_remote_ret = socket.send_to(&punch_packet, remote_addr).await;
tracing::debug!(
?send_remote_ret,
?socket,
"send hole punch packet to punched remote"
);
}
}
for l in listeners {
if l.get_conn_count().await > 0 {
common.add_listener(l).await;
}
}
});
*locked_task = Some(task.into());
return Ok(SendPunchPacketBothEasySymResponse {
is_busy: false,
base_mapped_addr: Some(cur_mapped_addr.into()),
});
}
}
#[derive(Debug)]
pub(crate) struct PunchBothEasySymHoleClient {
peer_mgr: Arc<PeerManager>,
}
impl PunchBothEasySymHoleClient {
pub(crate) fn new(peer_mgr: Arc<PeerManager>) -> Self {
Self { peer_mgr }
}
#[tracing::instrument(ret)]
pub(crate) async fn do_hole_punching(
&self,
dst_peer_id: PeerId,
my_nat_info: UdpNatType,
peer_nat_info: UdpNatType,
is_busy: &mut bool,
) -> Result<Option<Box<dyn Tunnel>>, anyhow::Error> {
*is_busy = false;
let udp_array = UdpSocketArray::new(
UDP_ARRAY_SIZE_FOR_BOTH_EASY_SYM,
self.peer_mgr.get_global_ctx().net_ns.clone(),
);
udp_array.start().await?;
let global_ctx = self.peer_mgr.get_global_ctx();
let cur_mapped_addr = global_ctx
.get_stun_info_collector()
.get_udp_port_mapping(0)
.await
.with_context(|| "failed to get udp port mapping")?;
let my_public_ip = match cur_mapped_addr.ip() {
IpAddr::V4(v4) => v4,
_ => {
anyhow::bail!("ipv6 is not supported");
}
};
let me_is_incremental = my_nat_info
.get_inc_of_easy_sym()
.ok_or(anyhow::anyhow!("me_is_incremental is required"))?;
let peer_is_incremental = peer_nat_info
.get_inc_of_easy_sym()
.ok_or(anyhow::anyhow!("peer_is_incremental is required"))?;
let rpc_stub = self
.peer_mgr
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
self.peer_mgr.my_peer_id(),
dst_peer_id,
global_ctx.get_network_name(),
);
let tid = rand::random();
udp_array.add_intreast_tid(tid);
let remote_ret = rpc_stub
.send_punch_packet_both_easy_sym(
BaseController {
timeout_ms: 2000,
..Default::default()
},
SendPunchPacketBothEasySymRequest {
transaction_id: tid,
public_ip: Some(my_public_ip.into()),
dst_port_num: if me_is_incremental {
cur_mapped_addr.port().saturating_add(DST_PORT_OFFSET)
} else {
cur_mapped_addr.port().saturating_sub(DST_PORT_OFFSET)
} as u32,
udp_socket_count: UDP_ARRAY_SIZE_FOR_BOTH_EASY_SYM as u32,
wait_time_ms: REMOTE_WAIT_TIME_MS as u32,
},
)
.await?;
if remote_ret.is_busy {
*is_busy = true;
anyhow::bail!("remote is busy");
}
let mut remote_mapped_addr = remote_ret
.base_mapped_addr
.ok_or(anyhow::anyhow!("remote_mapped_addr is required"))?;
let now = Instant::now();
remote_mapped_addr.port = if peer_is_incremental {
remote_mapped_addr
.port
.saturating_add(DST_PORT_OFFSET as u32)
} else {
remote_mapped_addr
.port
.saturating_sub(DST_PORT_OFFSET as u32)
};
tracing::debug!(
?remote_mapped_addr,
?remote_ret,
"start send hole punch packet for both easy sym"
);
while now.elapsed().as_millis() < (REMOTE_WAIT_TIME_MS + 1000).into() {
udp_array
.send_with_all(
&new_hole_punch_packet(tid, HOLE_PUNCH_PACKET_BODY_LEN).into_bytes(),
remote_mapped_addr.into(),
)
.await?;
tokio::time::sleep(Duration::from_millis(100)).await;
let Some(socket) = udp_array.try_fetch_punched_socket(tid) else {
tracing::trace!(
?remote_mapped_addr,
?tid,
"no punched socket found, send some more hole punch packets"
);
continue;
};
tracing::info!(
?socket,
?remote_mapped_addr,
?tid,
"got punched socket in both easy sym"
);
for _ in 0..2 {
match try_connect_with_socket(socket.socket.clone(), remote_mapped_addr.into())
.await
{
Ok(tunnel) => {
return Ok(Some(tunnel));
}
Err(e) => {
tracing::error!(?e, "failed to connect with socket");
continue;
}
}
}
udp_array.add_new_socket(socket.socket).await?;
}
Ok(None)
}
}
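The port prediction above offsets the peer's observed mapped port in the direction its NAT allocates. A minimal standalone sketch of that arithmetic; `DST_PORT_OFFSET = 20` is an assumed value inferred from the test expectations below, and `predict_port` is a hypothetical helper, not part of this codebase:

```rust
// Sketch of easy-symmetric port prediction: guess the peer's next external
// port by offsetting its observed mapped port in the direction the NAT
// allocates. DST_PORT_OFFSET = 20 is an assumed value for illustration.
const DST_PORT_OFFSET: u16 = 20;

fn predict_port(observed: u16, incremental: bool) -> u16 {
    if incremental {
        observed.saturating_add(DST_PORT_OFFSET)
    } else {
        observed.saturating_sub(DST_PORT_OFFSET)
    }
}

fn main() {
    assert_eq!(predict_port(40144, true), 40164);
    assert_eq!(predict_port(40144, false), 40124);
    // saturating arithmetic avoids wraparound at the edges of the port range
    assert_eq!(predict_port(65530, true), 65535);
    assert_eq!(predict_port(5, false), 0);
}
```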
#[cfg(test)]
pub mod tests {
use std::{
sync::{atomic::AtomicU32, Arc},
time::Duration,
};
use tokio::net::UdpSocket;
use crate::connector::udp_hole_punch::RUN_TESTING;
use crate::{
connector::udp_hole_punch::{
tests::create_mock_peer_manager_with_mock_stun, UdpHolePunchConnector,
},
peers::tests::{connect_peer_manager, wait_route_appear},
proto::common::NatType,
tunnel::common::tests::wait_for_condition,
};
#[rstest::rstest]
#[tokio::test]
#[serial_test::serial(hole_punch)]
async fn hole_punching_easy_sym(#[values(true, false)] is_inc: bool) {
RUN_TESTING.store(true, std::sync::atomic::Ordering::Relaxed);
let p_a = create_mock_peer_manager_with_mock_stun(if is_inc {
NatType::SymmetricEasyInc
} else {
NatType::SymmetricEasyDec
})
.await;
let p_b = create_mock_peer_manager_with_mock_stun(NatType::PortRestricted).await;
let p_c = create_mock_peer_manager_with_mock_stun(if !is_inc {
NatType::SymmetricEasyInc
} else {
NatType::SymmetricEasyDec
})
.await;
connect_peer_manager(p_a.clone(), p_b.clone()).await;
connect_peer_manager(p_b.clone(), p_c.clone()).await;
wait_route_appear(p_a.clone(), p_c.clone()).await.unwrap();
let mut hole_punching_a = UdpHolePunchConnector::new(p_a.clone());
let mut hole_punching_c = UdpHolePunchConnector::new(p_c.clone());
hole_punching_a.run().await.unwrap();
hole_punching_c.run().await.unwrap();
// predicted from the mocked base port 40144: 40144 + DST_PORT_OFFSET = 40164
let udp1 = Arc::new(UdpSocket::bind("0.0.0.0:40164").await.unwrap());
// 40144 - DST_PORT_OFFSET = 40124
let udp2 = Arc::new(UdpSocket::bind("0.0.0.0:40124").await.unwrap());
let udps = vec![udp1, udp2];
let counter = Arc::new(AtomicU32::new(0));
// all of these sockets should receive a hole punching packet
for udp in udps.iter().map(Arc::clone) {
let counter = counter.clone();
tokio::spawn(async move {
let mut buf = [0u8; 1024];
let (len, addr) = udp.recv_from(&mut buf).await.unwrap();
println!(
"got predictable punch packet, {:?} {:?} {:?}",
len,
addr,
udp.local_addr()
);
counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
});
}
hole_punching_a.client.run_immediately().await;
let udp_len = udps.len();
wait_for_condition(
|| async { counter.load(std::sync::atomic::Ordering::Relaxed) == udp_len as u32 },
Duration::from_secs(30),
)
.await;
}
}
@@ -0,0 +1,589 @@
use std::{
net::{Ipv4Addr, SocketAddr, SocketAddrV4},
sync::Arc,
time::Duration,
};
use crossbeam::atomic::AtomicCell;
use dashmap::{DashMap, DashSet};
use rand::seq::SliceRandom as _;
use tokio::{net::UdpSocket, sync::Mutex, task::JoinSet};
use tracing::{instrument, Instrument, Level};
use zerocopy::FromBytes as _;
use crate::{
common::{
error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS,
stun::StunInfoCollectorTrait as _, PeerId,
},
defer,
peers::peer_manager::PeerManager,
proto::common::NatType,
tunnel::{
packet_def::{UDPTunnelHeader, UdpPacketType, UDP_TUNNEL_HEADER_SIZE},
udp::{new_hole_punch_packet, UdpTunnelConnector, UdpTunnelListener},
Tunnel, TunnelConnCounter, TunnelListener as _,
},
};
pub(crate) const HOLE_PUNCH_PACKET_BODY_LEN: u16 = 16;
fn generate_shuffled_port_vec() -> Vec<u16> {
let mut rng = rand::thread_rng();
let mut port_vec: Vec<u16> = (1..=65535).collect();
port_vec.shuffle(&mut rng);
port_vec
}
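The shuffled full-range port list feeds the birthday-attack style symmetric punch (`send_punch_packet_hard_sym` below). A rough standalone estimate of why spraying a few hundred guessed ports already works; the `1 - exp(-n^2 / N)` approximation is the standard birthday-paradox bound, not code from this repo:

```rust
// Rough birthday-paradox estimate: with ~n port guesses from each side over
// N = 65535 possible ports, the chance of at least one collision is about
// 1 - exp(-n^2 / N). Illustration only; not part of the codebase.
fn collision_prob(guesses: f64, ports: f64) -> f64 {
    1.0 - (-(guesses * guesses) / ports).exp()
}

fn main() {
    // a few hundred guesses per side already give good odds
    assert!(collision_prob(300.0, 65535.0) > 0.7);
    assert!(collision_prob(600.0, 65535.0) > 0.99);
    // a handful of guesses is nearly hopeless
    assert!(collision_prob(10.0, 65535.0) < 0.01);
}
```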
pub(crate) enum UdpPunchClientMethod {
None,
ConeToCone,
SymToCone,
EasySymToEasySym,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum UdpNatType {
Unknown,
Open(NatType),
Cone(NatType),
// bool means if it is incremental
EasySymmetric(NatType, bool),
HardSymmetric(NatType),
}
impl From<NatType> for UdpNatType {
fn from(nat_type: NatType) -> Self {
match nat_type {
NatType::Unknown => UdpNatType::Unknown,
NatType::NoPat | NatType::OpenInternet => UdpNatType::Open(nat_type),
NatType::FullCone | NatType::Restricted | NatType::PortRestricted => {
UdpNatType::Cone(nat_type)
}
NatType::Symmetric | NatType::SymUdpFirewall => UdpNatType::HardSymmetric(nat_type),
NatType::SymmetricEasyInc => UdpNatType::EasySymmetric(nat_type, true),
NatType::SymmetricEasyDec => UdpNatType::EasySymmetric(nat_type, false),
}
}
}
impl From<UdpNatType> for NatType {
fn from(val: UdpNatType) -> Self {
match val {
UdpNatType::Unknown => NatType::Unknown,
UdpNatType::Open(nat_type)
| UdpNatType::Cone(nat_type)
| UdpNatType::HardSymmetric(nat_type) => nat_type,
UdpNatType::EasySymmetric(nat_type, _) => nat_type,
}
}
}
impl UdpNatType {
pub(crate) fn is_open(&self) -> bool {
matches!(self, UdpNatType::Open(_))
}
pub(crate) fn is_unknown(&self) -> bool {
matches!(self, UdpNatType::Unknown)
}
pub(crate) fn is_sym(&self) -> bool {
self.is_hard_sym() || self.is_easy_sym()
}
pub(crate) fn is_hard_sym(&self) -> bool {
matches!(self, UdpNatType::HardSymmetric(_))
}
pub(crate) fn is_easy_sym(&self) -> bool {
matches!(self, UdpNatType::EasySymmetric(_, _))
}
pub(crate) fn is_cone(&self) -> bool {
matches!(self, UdpNatType::Cone(_))
}
pub(crate) fn get_inc_of_easy_sym(&self) -> Option<bool> {
match self {
UdpNatType::EasySymmetric(_, inc) => Some(*inc),
_ => None,
}
}
pub(crate) fn get_punch_hole_method(&self, other: Self) -> UdpPunchClientMethod {
if other.is_unknown() {
if self.is_sym() {
return UdpPunchClientMethod::SymToCone;
} else {
return UdpPunchClientMethod::ConeToCone;
}
}
if self.is_unknown() {
if other.is_sym() {
return UdpPunchClientMethod::None;
} else {
return UdpPunchClientMethod::ConeToCone;
}
}
if self.is_open() || other.is_open() {
// open nat does not need to punch hole
return UdpPunchClientMethod::None;
}
if self.is_cone() {
if other.is_sym() {
return UdpPunchClientMethod::None;
} else {
return UdpPunchClientMethod::ConeToCone;
}
} else if self.is_easy_sym() {
if other.is_hard_sym() {
return UdpPunchClientMethod::None;
} else if other.is_easy_sym() {
return UdpPunchClientMethod::EasySymToEasySym;
} else {
return UdpPunchClientMethod::SymToCone;
}
} else if self.is_hard_sym() {
if other.is_sym() {
return UdpPunchClientMethod::None;
} else {
return UdpPunchClientMethod::SymToCone;
}
}
unreachable!("invalid nat type");
}
pub(crate) fn can_punch_hole_as_client(
&self,
other: Self,
my_peer_id: PeerId,
dst_peer_id: PeerId,
) -> bool {
match self.get_punch_hole_method(other) {
UdpPunchClientMethod::None => false,
UdpPunchClientMethod::ConeToCone | UdpPunchClientMethod::SymToCone => true,
UdpPunchClientMethod::EasySymToEasySym => my_peer_id < dst_peer_id,
}
}
}
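The `get_punch_hole_method` decision table above can be mirrored in a self-contained sketch. `Nat`, `Method`, and `pick_method` below are simplified stand-ins for illustration (the real `UdpNatType` and `UdpPunchClientMethod` carry the underlying `NatType` and incremental flag):

```rust
// Standalone mirror of the client-side punch-method selection. Simplified
// stand-in types, for illustration only.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Nat { Unknown, Open, Cone, EasySym, HardSym }

#[derive(Debug, PartialEq)]
enum Method { None, ConeToCone, SymToCone, EasySymToEasySym }

fn pick_method(me: Nat, other: Nat) -> Method {
    use Nat::*;
    match (me, other) {
        // peer NAT unknown: a symmetric side probes with the sym method
        (_, Unknown) if me == EasySym || me == HardSym => Method::SymToCone,
        (_, Unknown) => Method::ConeToCone,
        // our NAT unknown: let a symmetric peer initiate instead
        (Unknown, EasySym) | (Unknown, HardSym) => Method::None,
        (Unknown, _) => Method::ConeToCone,
        // open NAT needs no punching at all
        (Open, _) | (_, Open) => Method::None,
        // cone vs sym: the symmetric side acts as client
        (Cone, EasySym) | (Cone, HardSym) => Method::None,
        (Cone, _) => Method::ConeToCone,
        (EasySym, HardSym) => Method::None,
        (EasySym, EasySym) => Method::EasySymToEasySym,
        (EasySym, _) => Method::SymToCone,
        (HardSym, EasySym) | (HardSym, HardSym) => Method::None,
        (HardSym, _) => Method::SymToCone,
    }
}

fn main() {
    assert_eq!(pick_method(Nat::Cone, Nat::Cone), Method::ConeToCone);
    assert_eq!(pick_method(Nat::EasySym, Nat::EasySym), Method::EasySymToEasySym);
    assert_eq!(pick_method(Nat::HardSym, Nat::Cone), Method::SymToCone);
    assert_eq!(pick_method(Nat::Open, Nat::HardSym), Method::None);
    assert_eq!(pick_method(Nat::Cone, Nat::HardSym), Method::None);
}
```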
#[derive(Debug)]
pub(crate) struct PunchedUdpSocket {
pub(crate) socket: Arc<UdpSocket>,
pub(crate) tid: u32,
pub(crate) remote_addr: SocketAddr,
}
// used for symmetric hole punching, binding to multiple ports to increase the chance of success
pub(crate) struct UdpSocketArray {
sockets: Arc<DashMap<SocketAddr, Arc<UdpSocket>>>,
max_socket_count: usize,
net_ns: NetNS,
tasks: Arc<std::sync::Mutex<JoinSet<()>>>,
intreast_tids: Arc<DashSet<u32>>,
tid_to_socket: Arc<DashMap<u32, Vec<PunchedUdpSocket>>>,
}
impl UdpSocketArray {
pub fn new(max_socket_count: usize, net_ns: NetNS) -> Self {
let tasks = Arc::new(std::sync::Mutex::new(JoinSet::new()));
join_joinset_background(tasks.clone(), "UdpSocketArray".to_owned());
Self {
sockets: Arc::new(DashMap::new()),
max_socket_count,
net_ns,
tasks,
intreast_tids: Arc::new(DashSet::new()),
tid_to_socket: Arc::new(DashMap::new()),
}
}
pub fn started(&self) -> bool {
!self.sockets.is_empty()
}
pub async fn add_new_socket(&self, socket: Arc<UdpSocket>) -> Result<(), anyhow::Error> {
let socket_map = self.sockets.clone();
let local_addr = socket.local_addr()?;
let intreast_tids = self.intreast_tids.clone();
let tid_to_socket = self.tid_to_socket.clone();
socket_map.insert(local_addr, socket.clone());
self.tasks.lock().unwrap().spawn(
async move {
defer!(socket_map.remove(&local_addr););
let mut buf = [0u8; UDP_TUNNEL_HEADER_SIZE + HOLE_PUNCH_PACKET_BODY_LEN as usize];
tracing::trace!(?local_addr, "udp socket added");
loop {
let Ok((len, addr)) = socket.recv_from(&mut buf).await else {
break;
};
tracing::debug!(?len, ?addr, "got raw packet");
if len != UDP_TUNNEL_HEADER_SIZE + HOLE_PUNCH_PACKET_BODY_LEN as usize {
continue;
}
let Some(p) = UDPTunnelHeader::ref_from_prefix(&buf) else {
continue;
};
let tid = p.conn_id.get();
let valid = p.msg_type == UdpPacketType::HolePunch as u8
&& p.len.get() == HOLE_PUNCH_PACKET_BODY_LEN;
tracing::debug!(?p, ?addr, ?tid, ?valid, "got udp hole punch packet");
if !valid {
continue;
}
if intreast_tids.contains(&tid) {
tracing::info!(?addr, ?tid, "got hole punching packet with a tid of interest");
tid_to_socket
.entry(tid)
.or_insert_with(Vec::new)
.push(PunchedUdpSocket {
socket: socket.clone(),
tid,
remote_addr: addr,
});
break;
}
}
tracing::debug!(?local_addr, "udp socket recv loop end");
}
.instrument(tracing::info_span!("udp array socket recv loop")),
);
Ok(())
}
#[instrument(err)]
pub async fn start(&self) -> Result<(), anyhow::Error> {
tracing::info!("starting udp socket array");
while self.sockets.len() < self.max_socket_count {
let socket = {
let _g = self.net_ns.guard();
Arc::new(UdpSocket::bind("0.0.0.0:0").await?)
};
self.add_new_socket(socket).await?;
}
Ok(())
}
#[instrument(err)]
pub async fn send_with_all(&self, data: &[u8], addr: SocketAddr) -> Result<(), anyhow::Error> {
tracing::info!(?addr, "sending hole punching packet");
for socket in self.sockets.iter() {
let socket = socket.value();
socket.send_to(data, addr).await?;
}
Ok(())
}
#[instrument(ret(level = Level::DEBUG))]
pub fn try_fetch_punched_socket(&self, tid: u32) -> Option<PunchedUdpSocket> {
tracing::debug!(?tid, "try fetch punched socket");
self.tid_to_socket.get_mut(&tid)?.value_mut().pop()
}
pub fn add_intreast_tid(&self, tid: u32) {
self.intreast_tids.insert(tid);
}
pub fn remove_intreast_tid(&self, tid: u32) {
self.intreast_tids.remove(&tid);
self.tid_to_socket.remove(&tid);
}
}
impl std::fmt::Debug for UdpSocketArray {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("UdpSocketArray")
.field("sockets", &self.sockets.len())
.field("max_socket_count", &self.max_socket_count)
.field("started", &self.started())
.field("intreast_tids", &self.intreast_tids.len())
.field("tid_to_socket", &self.tid_to_socket.len())
.finish()
}
}
#[derive(Debug)]
pub(crate) struct UdpHolePunchListener {
socket: Arc<UdpSocket>,
tasks: JoinSet<()>,
running: Arc<AtomicCell<bool>>,
mapped_addr: SocketAddr,
conn_counter: Arc<Box<dyn TunnelConnCounter>>,
listen_time: std::time::Instant,
last_select_time: AtomicCell<std::time::Instant>,
last_active_time: Arc<AtomicCell<std::time::Instant>>,
}
impl UdpHolePunchListener {
async fn get_avail_port() -> Result<u16, Error> {
let socket = UdpSocket::bind("0.0.0.0:0").await?;
Ok(socket.local_addr()?.port())
}
#[instrument(err)]
pub async fn new(peer_mgr: Arc<PeerManager>) -> Result<Self, Error> {
Self::new_ext(peer_mgr, true, None).await
}
#[instrument(err)]
pub async fn new_ext(
peer_mgr: Arc<PeerManager>,
with_mapped_addr: bool,
port: Option<u16>,
) -> Result<Self, Error> {
let port = match port {
Some(p) => p,
// only probe for a free port when the caller did not supply one
None => Self::get_avail_port().await?,
};
let listen_url = format!("udp://0.0.0.0:{}", port);
let mapped_addr = if with_mapped_addr {
let gctx = peer_mgr.get_global_ctx();
let stun_info_collect = gctx.get_stun_info_collector();
stun_info_collect.get_udp_port_mapping(port).await?
} else {
SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::UNSPECIFIED, port))
};
let mut listener = UdpTunnelListener::new(listen_url.parse().unwrap());
{
let _g = peer_mgr.get_global_ctx().net_ns.guard();
listener.listen().await?;
}
let socket = listener.get_socket().unwrap();
let running = Arc::new(AtomicCell::new(true));
let running_clone = running.clone();
let conn_counter = listener.get_conn_counter();
let mut tasks = JoinSet::new();
tasks.spawn(async move {
while let Ok(conn) = listener.accept().await {
tracing::warn!(?conn, "udp hole punching listener got peer connection");
let peer_mgr = peer_mgr.clone();
tokio::spawn(async move {
if let Err(e) = peer_mgr.add_tunnel_as_server(conn).await {
tracing::error!(
?e,
"failed to add tunnel as server in hole punch listener"
);
}
});
}
running_clone.store(false);
});
let last_active_time = Arc::new(AtomicCell::new(std::time::Instant::now()));
let conn_counter_clone = conn_counter.clone();
let last_active_time_clone = last_active_time.clone();
tasks.spawn(async move {
loop {
tokio::time::sleep(std::time::Duration::from_secs(5)).await;
if conn_counter_clone.get().unwrap_or(0) != 0 {
last_active_time_clone.store(std::time::Instant::now());
}
}
});
tracing::warn!(?mapped_addr, ?socket, "udp hole punching listener started");
Ok(Self {
tasks,
socket,
running,
mapped_addr,
conn_counter,
listen_time: std::time::Instant::now(),
last_select_time: AtomicCell::new(std::time::Instant::now()),
last_active_time,
})
}
pub async fn get_socket(&self) -> Arc<UdpSocket> {
self.last_select_time.store(std::time::Instant::now());
self.socket.clone()
}
pub async fn get_conn_count(&self) -> usize {
self.conn_counter.get().unwrap_or(0) as usize
}
}
pub(crate) struct PunchHoleServerCommon {
peer_mgr: Arc<PeerManager>,
listeners: Arc<Mutex<Vec<UdpHolePunchListener>>>,
tasks: Arc<std::sync::Mutex<JoinSet<()>>>,
}
impl PunchHoleServerCommon {
pub(crate) fn new(peer_mgr: Arc<PeerManager>) -> Self {
let tasks = Arc::new(std::sync::Mutex::new(JoinSet::new()));
join_joinset_background(tasks.clone(), "PunchHoleServerCommon".to_owned());
let listeners = Arc::new(Mutex::new(Vec::<UdpHolePunchListener>::new()));
let l = listeners.clone();
tasks.lock().unwrap().spawn(async move {
loop {
tokio::time::sleep(Duration::from_secs(5)).await;
{
// drop listeners inactive for 40+ seconds, unless they were selected within the last 30 seconds
l.lock().await.retain(|listener| {
listener.last_active_time.load().elapsed().as_secs() < 40
|| listener.last_select_time.load().elapsed().as_secs() < 30
});
}
}
});
Self {
peer_mgr,
listeners,
tasks,
}
}
pub(crate) async fn add_listener(&self, listener: UdpHolePunchListener) {
self.listeners.lock().await.push(listener);
}
pub(crate) async fn find_listener(&self, addr: &SocketAddr) -> Option<Arc<UdpSocket>> {
let all_listener_sockets = self.listeners.lock().await;
let listener = all_listener_sockets
.iter()
.find(|listener| listener.mapped_addr == *addr && listener.running.load())?;
Some(listener.get_socket().await)
}
pub(crate) async fn my_udp_nat_type(&self) -> i32 {
self.peer_mgr
.get_global_ctx()
.get_stun_info_collector()
.get_stun_info()
.udp_nat_type
}
pub(crate) async fn select_listener(
&self,
use_new_listener: bool,
) -> Option<(Arc<UdpSocket>, SocketAddr)> {
let all_listener_sockets = &self.listeners;
let mut use_last = false;
if all_listener_sockets.lock().await.len() < 16 || use_new_listener {
tracing::warn!("creating new udp hole punching listener");
all_listener_sockets.lock().await.push(
UdpHolePunchListener::new(self.peer_mgr.clone())
.await
.ok()?,
);
use_last = true;
}
let mut locked = all_listener_sockets.lock().await;
let listener = if use_last {
locked.last_mut()?
} else {
// use the most recently active listener
locked
.iter_mut()
.max_by_key(|listener| listener.last_active_time.load())?
};
if listener.mapped_addr.ip().is_unspecified() {
tracing::info!("listener mapped addr is unspecified, trying to get mapped addr");
listener.mapped_addr = self
.get_global_ctx()
.get_stun_info_collector()
.get_udp_port_mapping(listener.mapped_addr.port())
.await
.ok()?;
}
Some((listener.get_socket().await, listener.mapped_addr))
}
pub(crate) fn get_joinset(&self) -> Arc<std::sync::Mutex<JoinSet<()>>> {
self.tasks.clone()
}
pub(crate) fn get_global_ctx(&self) -> ArcGlobalCtx {
self.peer_mgr.get_global_ctx()
}
pub(crate) fn get_peer_mgr(&self) -> Arc<PeerManager> {
self.peer_mgr.clone()
}
}
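The cleanup task in `PunchHoleServerCommon::new` keeps a listener while it was active within 40s or selected within 30s. A tiny standalone sketch of that retention predicate (`should_keep` is a hypothetical helper, phrased in elapsed durations rather than `Instant`s for simplicity):

```rust
use std::time::Duration;

// Retention rule used by the listener cleanup task: keep a listener if it
// was active within the last 40s OR selected within the last 30s.
fn should_keep(inactive_for: Duration, unselected_for: Duration) -> bool {
    inactive_for < Duration::from_secs(40) || unselected_for < Duration::from_secs(30)
}

fn main() {
    let s = Duration::from_secs;
    assert!(should_keep(s(50), s(10))); // idle, but recently selected
    assert!(should_keep(s(5), s(100))); // recently active
    assert!(!should_keep(s(50), s(35))); // idle and not recently selected
}
```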
#[tracing::instrument(err, ret(level=Level::DEBUG), skip(ports))]
pub(crate) async fn send_symmetric_hole_punch_packet(
ports: &Vec<u16>,
udp: Arc<UdpSocket>,
transaction_id: u32,
public_ips: &Vec<Ipv4Addr>,
port_start_idx: usize,
max_packets: usize,
) -> Result<usize, Error> {
tracing::debug!("sending hard symmetric hole punching packet");
let mut sent_packets = 0;
let mut cur_port_idx = port_start_idx;
while sent_packets < max_packets {
let port = ports[cur_port_idx % ports.len()];
for pub_ip in public_ips {
let addr = SocketAddr::V4(SocketAddrV4::new(*pub_ip, port));
let packet = new_hole_punch_packet(transaction_id, HOLE_PUNCH_PACKET_BODY_LEN);
udp.send_to(&packet.into_bytes(), addr).await?;
sent_packets += 1;
}
cur_port_idx = cur_port_idx.wrapping_add(1);
tokio::time::sleep(Duration::from_millis(3)).await;
}
Ok(cur_port_idx % ports.len())
}
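`send_symmetric_hole_punch_packet` walks the shuffled port list with wraparound and returns the index the next call should resume from. A simplified single-IP model without sockets (`spray_ports` is a hypothetical helper for illustration):

```rust
// Simplified model of the port cycling above: collect max_packets guesses
// starting at start_idx, wrapping around the (non-empty) port list, and
// return the index where the next call should resume.
fn spray_ports(ports: &[u16], start_idx: usize, max_packets: usize) -> (Vec<u16>, usize) {
    let mut sent = Vec::with_capacity(max_packets);
    let mut idx = start_idx;
    while sent.len() < max_packets {
        sent.push(ports[idx % ports.len()]);
        idx = idx.wrapping_add(1);
    }
    (sent, idx % ports.len())
}

fn main() {
    let ports = [10u16, 20, 30];
    let (sent, resume) = spray_ports(&ports, 2, 4);
    assert_eq!(sent, vec![30, 10, 20, 30]); // wraps around the list
    assert_eq!(resume, 0); // (2 + 4) % 3
}
```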
pub(crate) async fn try_connect_with_socket(
socket: Arc<UdpSocket>,
remote_mapped_addr: SocketAddr,
) -> Result<Box<dyn Tunnel>, Error> {
let connector = UdpTunnelConnector::new(
format!(
"udp://{}:{}",
remote_mapped_addr.ip(),
remote_mapped_addr.port()
)
.parse()
.unwrap(),
);
connector
.try_connect_with_socket(socket, remote_mapped_addr)
.await
.map_err(Error::from)
}
@@ -0,0 +1,264 @@
use std::{
sync::Arc,
time::{Duration, Instant},
};
use anyhow::Context;
use tokio::net::UdpSocket;
use crate::{
common::{scoped_task::ScopedTask, stun::StunInfoCollectorTrait, PeerId},
connector::udp_hole_punch::common::{
try_connect_with_socket, UdpSocketArray, HOLE_PUNCH_PACKET_BODY_LEN,
},
peers::peer_manager::PeerManager,
proto::{
common::Void,
peer_rpc::{
SelectPunchListenerRequest, SendPunchPacketConeRequest, UdpHolePunchRpcClientFactory,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{udp::new_hole_punch_packet, Tunnel},
};
use super::common::PunchHoleServerCommon;
pub(crate) struct PunchConeHoleServer {
common: Arc<PunchHoleServerCommon>,
}
impl PunchConeHoleServer {
pub(crate) fn new(common: Arc<PunchHoleServerCommon>) -> Self {
Self { common }
}
#[tracing::instrument(skip(self), ret, err)]
pub(crate) async fn send_punch_packet_cone(
&self,
_: BaseController,
request: SendPunchPacketConeRequest,
) -> Result<Void, rpc_types::error::Error> {
let listener_addr = request.listener_mapped_addr.ok_or(anyhow::anyhow!(
"send_punch_packet_for_cone request missing listener_mapped_addr"
))?;
let listener_addr = std::net::SocketAddr::from(listener_addr);
let listener = self
.common
.find_listener(&listener_addr)
.await
.ok_or(anyhow::anyhow!(
"send_punch_packet_for_cone failed to find listener"
))?;
let dest_addr = request.dest_addr.ok_or(anyhow::anyhow!(
"send_punch_packet_for_cone request missing dest_addr"
))?;
let dest_addr = std::net::SocketAddr::from(dest_addr);
let dest_ip = dest_addr.ip();
if dest_ip.is_unspecified() || dest_ip.is_multicast() {
return Err(anyhow::anyhow!(
"send_punch_packet_for_cone dest_ip is malformed, {:?}",
request
)
.into());
}
for _ in 0..request.packet_batch_count {
tracing::info!(?request, "sending hole punching packet");
for _ in 0..request.packet_count_per_batch {
let udp_packet =
new_hole_punch_packet(request.transaction_id, HOLE_PUNCH_PACKET_BODY_LEN);
if let Err(e) = listener.send_to(&udp_packet.into_bytes(), &dest_addr).await {
tracing::error!(?e, "failed to send hole punch packet to dest addr");
}
}
tokio::time::sleep(Duration::from_millis(request.packet_interval_ms as u64)).await;
}
Ok(Void::default())
}
}
pub(crate) struct PunchConeHoleClient {
peer_mgr: Arc<PeerManager>,
}
impl PunchConeHoleClient {
pub(crate) fn new(peer_mgr: Arc<PeerManager>) -> Self {
Self { peer_mgr }
}
#[tracing::instrument(skip(self))]
pub(crate) async fn do_hole_punching(
&self,
dst_peer_id: PeerId,
) -> Result<Option<Box<dyn Tunnel>>, anyhow::Error> {
tracing::info!(?dst_peer_id, "start hole punching");
let tid = rand::random();
let global_ctx = self.peer_mgr.get_global_ctx();
let udp_array = UdpSocketArray::new(1, global_ctx.net_ns.clone());
let local_socket = {
let _g = self.peer_mgr.get_global_ctx().net_ns.guard();
Arc::new(UdpSocket::bind("0.0.0.0:0").await?)
};
let local_addr = local_socket
.local_addr()
.with_context(|| "failed to get local addr of the probe socket")?;
let local_port = local_addr.port();
drop(local_socket);
let local_mapped_addr = global_ctx
.get_stun_info_collector()
.get_udp_port_mapping(local_port)
.await
.with_context(|| "failed to get udp port mapping")?;
let local_socket = {
let _g = self.peer_mgr.get_global_ctx().net_ns.guard();
Arc::new(UdpSocket::bind(local_addr).await?)
};
// client -> server: ask the server to select a punch listener; the response carries the listener's STUN-mapped address.
let rpc_stub = self
.peer_mgr
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
self.peer_mgr.my_peer_id(),
dst_peer_id,
global_ctx.get_network_name(),
);
let resp = rpc_stub
.select_punch_listener(
BaseController::default(),
SelectPunchListenerRequest { force_new: false },
)
.await
.with_context(|| "failed to select punch listener")?;
let remote_mapped_addr = resp.listener_mapped_addr.ok_or(anyhow::anyhow!(
"select_punch_listener response missing listener_mapped_addr"
))?;
tracing::debug!(
?local_mapped_addr,
?remote_mapped_addr,
"hole punch got remote listener"
);
udp_array.add_new_socket(local_socket).await?;
udp_array.add_intreast_tid(tid);
let send_from_local = || async {
udp_array
.send_with_all(
&new_hole_punch_packet(tid, HOLE_PUNCH_PACKET_BODY_LEN).into_bytes(),
remote_mapped_addr.clone().into(),
)
.await
.with_context(|| "failed to send hole punch packet from local")
};
send_from_local().await?;
let scoped_punch_task: ScopedTask<()> = tokio::spawn(async move {
if let Err(e) = rpc_stub
.send_punch_packet_cone(
BaseController {
timeout_ms: 4000,
..Default::default()
},
SendPunchPacketConeRequest {
listener_mapped_addr: Some(remote_mapped_addr.into()),
dest_addr: Some(local_mapped_addr.into()),
transaction_id: tid,
packet_count_per_batch: 2,
packet_batch_count: 5,
packet_interval_ms: 400,
},
)
.await
{
tracing::error!(?e, "failed to call remote send punch packet");
}
})
.into();
// server: sends punch responses, packet_count_per_batch (2) x packet_batch_count (5) = 10 packets total.
// client: use the punched socket to create a UdpTunnel with UdpTunnelConnector.
// NOTICE: UdpTunnelConnector will ignore the punch response packets sent by the remote.
let mut finish_time: Option<Instant> = None;
while finish_time.is_none() || finish_time.as_ref().unwrap().elapsed().as_millis() < 1000 {
tokio::time::sleep(Duration::from_millis(200)).await;
if finish_time.is_none() && (*scoped_punch_task).is_finished() {
finish_time = Some(Instant::now());
}
let Some(socket) = udp_array.try_fetch_punched_socket(tid) else {
tracing::debug!("no punched socket found, send some more hole punch packets");
send_from_local().await?;
continue;
};
tracing::debug!(?socket, ?tid, "punched socket found, try connect with it");
for _ in 0..2 {
match try_connect_with_socket(socket.socket.clone(), remote_mapped_addr.into())
.await
{
Ok(tunnel) => {
tracing::info!(?tunnel, "hole punched");
return Ok(Some(tunnel));
}
Err(e) => {
tracing::error!(?e, "failed to connect with socket");
}
}
}
}
Ok(None)
}
}
#[cfg(test)]
pub mod tests {
use crate::{
connector::udp_hole_punch::{
tests::create_mock_peer_manager_with_mock_stun, UdpHolePunchConnector,
},
peers::tests::{connect_peer_manager, wait_route_appear, wait_route_appear_with_cost},
proto::common::NatType,
};
#[tokio::test]
async fn hole_punching_cone() {
let p_a = create_mock_peer_manager_with_mock_stun(NatType::Restricted).await;
let p_b = create_mock_peer_manager_with_mock_stun(NatType::PortRestricted).await;
let p_c = create_mock_peer_manager_with_mock_stun(NatType::Restricted).await;
connect_peer_manager(p_a.clone(), p_b.clone()).await;
connect_peer_manager(p_b.clone(), p_c.clone()).await;
wait_route_appear(p_a.clone(), p_c.clone()).await.unwrap();
println!("{:?}", p_a.list_routes().await);
let mut hole_punching_a = UdpHolePunchConnector::new(p_a.clone());
let mut hole_punching_c = UdpHolePunchConnector::new(p_c.clone());
hole_punching_a.run_as_client().await.unwrap();
hole_punching_c.run_as_server().await.unwrap();
hole_punching_a.client.run_immediately().await;
wait_route_appear_with_cost(p_a.clone(), p_c.my_peer_id(), Some(1))
.await
.unwrap();
println!("{:?}", p_a.list_routes().await);
}
}
@@ -0,0 +1,559 @@
use std::sync::{atomic::AtomicBool, Arc};
use anyhow::{Context, Error};
use both_easy_sym::{PunchBothEasySymHoleClient, PunchBothEasySymHoleServer};
use common::{PunchHoleServerCommon, UdpNatType, UdpPunchClientMethod};
use cone::{PunchConeHoleClient, PunchConeHoleServer};
use dashmap::DashMap;
use once_cell::sync::Lazy;
use sym_to_cone::{PunchSymToConeHoleClient, PunchSymToConeHoleServer};
use tokio::{sync::Mutex, task::JoinHandle};
use crate::{
common::{stun::StunInfoCollectorTrait, PeerId},
connector::direct::PeerManagerForDirectConnector,
peers::{
peer_manager::PeerManager,
peer_task::{PeerTaskLauncher, PeerTaskManager},
},
proto::{
common::{NatType, Void},
peer_rpc::{
SelectPunchListenerRequest, SelectPunchListenerResponse,
SendPunchPacketBothEasySymRequest, SendPunchPacketBothEasySymResponse,
SendPunchPacketConeRequest, SendPunchPacketEasySymRequest,
SendPunchPacketHardSymRequest, SendPunchPacketHardSymResponse, UdpHolePunchRpc,
UdpHolePunchRpcServer,
},
rpc_types::{self, controller::BaseController},
},
tunnel::Tunnel,
};
pub(crate) mod both_easy_sym;
pub(crate) mod common;
pub(crate) mod cone;
pub(crate) mod sym_to_cone;
// sym punch should be serialized
static SYM_PUNCH_LOCK: Lazy<DashMap<PeerId, Arc<Mutex<()>>>> = Lazy::new(DashMap::new);
static RUN_TESTING: Lazy<AtomicBool> = Lazy::new(|| AtomicBool::new(false));
fn get_sym_punch_lock(peer_id: PeerId) -> Arc<Mutex<()>> {
SYM_PUNCH_LOCK
.entry(peer_id)
.or_insert_with(|| Arc::new(Mutex::new(())))
.value()
.clone()
}
struct UdpHolePunchServer {
common: Arc<PunchHoleServerCommon>,
cone_server: PunchConeHoleServer,
sym_to_cone_server: PunchSymToConeHoleServer,
both_easy_sym_server: PunchBothEasySymHoleServer,
}
impl UdpHolePunchServer {
pub fn new(peer_mgr: Arc<PeerManager>) -> Arc<Self> {
let common = Arc::new(PunchHoleServerCommon::new(peer_mgr.clone()));
let cone_server = PunchConeHoleServer::new(common.clone());
let sym_to_cone_server = PunchSymToConeHoleServer::new(common.clone());
let both_easy_sym_server = PunchBothEasySymHoleServer::new(common.clone());
Arc::new(Self {
common,
cone_server,
sym_to_cone_server,
both_easy_sym_server,
})
}
}
#[async_trait::async_trait]
impl UdpHolePunchRpc for UdpHolePunchServer {
type Controller = BaseController;
async fn select_punch_listener(
&self,
_ctrl: Self::Controller,
input: SelectPunchListenerRequest,
) -> rpc_types::error::Result<SelectPunchListenerResponse> {
let (_, addr) = self
.common
.select_listener(input.force_new)
.await
.ok_or(anyhow::anyhow!("no listener available"))?;
Ok(SelectPunchListenerResponse {
listener_mapped_addr: Some(addr.into()),
})
}
/// send packets to a single remote_addr; used for cone (NAT1-3) to cone (NAT1-3) punching
async fn send_punch_packet_cone(
&self,
ctrl: Self::Controller,
input: SendPunchPacketConeRequest,
) -> rpc_types::error::Result<Void> {
self.cone_server.send_punch_packet_cone(ctrl, input).await
}
/// send packets to many remote addrs (birthday attack); used for symmetric (NAT4) to cone (NAT1-3) punching
async fn send_punch_packet_hard_sym(
&self,
_ctrl: Self::Controller,
input: SendPunchPacketHardSymRequest,
) -> rpc_types::error::Result<SendPunchPacketHardSymResponse> {
let _locked = get_sym_punch_lock(self.common.get_peer_mgr().my_peer_id())
.try_lock_owned()
.with_context(|| "sym punch lock is busy")?;
self.sym_to_cone_server
.send_punch_packet_hard_sym(input)
.await
}
async fn send_punch_packet_easy_sym(
&self,
_ctrl: Self::Controller,
input: SendPunchPacketEasySymRequest,
) -> rpc_types::error::Result<Void> {
let _locked = get_sym_punch_lock(self.common.get_peer_mgr().my_peer_id())
.try_lock_owned()
.with_context(|| "sym punch lock is busy")?;
self.sym_to_cone_server
.send_punch_packet_easy_sym(input)
.await
.map(|_| Void {})
}
/// nat4 to nat4 (both predictable easy-symmetric)
async fn send_punch_packet_both_easy_sym(
&self,
_ctrl: Self::Controller,
input: SendPunchPacketBothEasySymRequest,
) -> rpc_types::error::Result<SendPunchPacketBothEasySymResponse> {
let _locked = get_sym_punch_lock(self.common.get_peer_mgr().my_peer_id())
.try_lock_owned()
.with_context(|| "sym punch lock is busy")?;
self.both_easy_sym_server
.send_punch_packet_both_easy_sym(input)
.await
}
}
#[derive(Debug)]
struct BackOff {
backoffs_ms: Vec<u64>,
current_idx: usize,
}
impl BackOff {
pub fn new(backoffs_ms: Vec<u64>) -> Self {
Self {
backoffs_ms,
current_idx: 0,
}
}
pub fn next_backoff(&mut self) -> u64 {
let backoff = self.backoffs_ms[self.current_idx];
self.current_idx = (self.current_idx + 1).min(self.backoffs_ms.len() - 1);
backoff
}
pub fn rollback(&mut self) {
self.current_idx = self.current_idx.saturating_sub(1);
}
pub async fn sleep_for_next_backoff(&mut self) {
let backoff = self.next_backoff();
if backoff > 0 {
tokio::time::sleep(tokio::time::Duration::from_millis(backoff)).await;
}
}
}
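The `BackOff` helper above is self-contained, so its index behavior can be demonstrated directly: the index saturates at the last entry under repeated failures, and `rollback` retreats one step (used when the remote reports busy or a punch partially succeeds). A copy minus the async sleep:

```rust
// Copy of the BackOff helper (without the async sleep), showing how the
// index advances, saturates at the last entry, and rolls back one step.
struct BackOff {
    backoffs_ms: Vec<u64>,
    current_idx: usize,
}

impl BackOff {
    fn new(backoffs_ms: Vec<u64>) -> Self {
        Self { backoffs_ms, current_idx: 0 }
    }
    fn next_backoff(&mut self) -> u64 {
        let backoff = self.backoffs_ms[self.current_idx];
        // clamp so repeated failures stay at the longest interval
        self.current_idx = (self.current_idx + 1).min(self.backoffs_ms.len() - 1);
        backoff
    }
    fn rollback(&mut self) {
        self.current_idx = self.current_idx.saturating_sub(1);
    }
}

fn main() {
    let mut b = BackOff::new(vec![0, 1000, 2000, 4000]);
    assert_eq!(b.next_backoff(), 0);
    assert_eq!(b.next_backoff(), 1000);
    b.rollback(); // e.g. remote reported busy: retry at the same pace
    assert_eq!(b.next_backoff(), 1000);
    assert_eq!(b.next_backoff(), 2000);
    assert_eq!(b.next_backoff(), 4000);
    assert_eq!(b.next_backoff(), 4000); // saturated at the last entry
}
```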
struct UdpHoePunchConnectorData {
cone_client: PunchConeHoleClient,
sym_to_cone_client: PunchSymToConeHoleClient,
both_easy_sym_client: PunchBothEasySymHoleClient,
peer_mgr: Arc<PeerManager>,
}
impl UdpHoePunchConnectorData {
pub fn new(peer_mgr: Arc<PeerManager>) -> Arc<Self> {
let cone_client = PunchConeHoleClient::new(peer_mgr.clone());
let sym_to_cone_client = PunchSymToConeHoleClient::new(peer_mgr.clone());
let both_easy_sym_client = PunchBothEasySymHoleClient::new(peer_mgr.clone());
Arc::new(Self {
cone_client,
sym_to_cone_client,
both_easy_sym_client,
peer_mgr,
})
}
#[tracing::instrument(skip(self))]
async fn handle_punch_result(
&self,
ret: Result<Option<Box<dyn Tunnel>>, Error>,
backoff: Option<&mut BackOff>,
round: Option<&mut u32>,
) -> bool {
let op = |rollback: bool| {
if rollback {
if let Some(backoff) = backoff {
backoff.rollback();
}
if let Some(round) = round {
*round = round.saturating_sub(1);
}
} else if let Some(round) = round {
*round += 1;
}
};
match ret {
Ok(Some(tunnel)) => {
tracing::info!(?tunnel, "hole punching get tunnel success");
if let Err(e) = self.peer_mgr.add_client_tunnel(tunnel).await {
tracing::warn!(?e, "add client tunnel failed");
op(true);
false
} else {
true
}
}
Ok(None) => {
tracing::info!("hole punching failed, no punch tunnel");
op(false);
false
}
Err(e) => {
tracing::info!(?e, "hole punching failed");
op(true);
false
}
}
}
#[tracing::instrument(skip(self))]
async fn cone_to_cone(self: Arc<Self>, task_info: PunchTaskInfo) -> Result<(), Error> {
let mut backoff = BackOff::new(vec![0, 1000, 2000, 4000, 4000, 8000, 8000, 16000]);
loop {
backoff.sleep_for_next_backoff().await;
let ret = self
.cone_client
.do_hole_punching(task_info.dst_peer_id)
.await;
if self
.handle_punch_result(ret, Some(&mut backoff), None)
.await
{
break;
}
}
Ok(())
}
#[tracing::instrument(skip(self))]
async fn sym_to_cone(self: Arc<Self>, task_info: PunchTaskInfo) -> Result<(), Error> {
let mut backoff = BackOff::new(vec![0, 1000, 2000, 4000, 4000, 8000, 8000, 16000, 64000]);
let mut round = 0;
let mut port_idx = rand::random();
loop {
backoff.sleep_for_next_backoff().await;
// always try cone first
if !RUN_TESTING.load(std::sync::atomic::Ordering::Relaxed) {
let ret = self
.cone_client
.do_hole_punching(task_info.dst_peer_id)
.await;
if self.handle_punch_result(ret, None, None).await {
break;
}
}
let ret = {
let _lock = get_sym_punch_lock(self.peer_mgr.my_peer_id())
.lock_owned()
.await;
self.sym_to_cone_client
.do_hole_punching(
task_info.dst_peer_id,
round,
&mut port_idx,
task_info.my_nat_type,
)
.await
};
if self
.handle_punch_result(ret, Some(&mut backoff), Some(&mut round))
.await
{
break;
}
}
Ok(())
}
#[tracing::instrument(skip(self))]
async fn both_easy_sym(self: Arc<Self>, task_info: PunchTaskInfo) -> Result<(), Error> {
let mut backoff = BackOff::new(vec![0, 1000, 2000, 4000, 4000, 8000, 8000, 16000, 64000]);
loop {
backoff.sleep_for_next_backoff().await;
// always try cone first
if !RUN_TESTING.load(std::sync::atomic::Ordering::Relaxed) {
let ret = self
.cone_client
.do_hole_punching(task_info.dst_peer_id)
.await;
if self.handle_punch_result(ret, None, None).await {
break;
}
}
let mut is_busy = false;
let ret = {
let _lock = get_sym_punch_lock(self.peer_mgr.my_peer_id())
.lock_owned()
.await;
self.both_easy_sym_client
.do_hole_punching(
task_info.dst_peer_id,
task_info.my_nat_type,
task_info.dst_nat_type,
&mut is_busy,
)
.await
};
if is_busy {
backoff.rollback();
} else if self
.handle_punch_result(ret, Some(&mut backoff), None)
.await
{
break;
}
}
Ok(())
}
}
#[derive(Clone)]
struct UdpHolePunchPeerTaskLauncher {}
#[derive(Clone, Debug, Hash, Eq, PartialEq)]
struct PunchTaskInfo {
dst_peer_id: PeerId,
dst_nat_type: UdpNatType,
my_nat_type: UdpNatType,
}
#[async_trait::async_trait]
impl PeerTaskLauncher for UdpHolePunchPeerTaskLauncher {
type Data = Arc<UdpHoePunchConnectorData>;
type CollectPeerItem = PunchTaskInfo;
type TaskRet = ();
fn new_data(&self, peer_mgr: Arc<PeerManager>) -> Self::Data {
UdpHoePunchConnectorData::new(peer_mgr)
}
async fn collect_peers_need_task(&self, data: &Self::Data) -> Vec<Self::CollectPeerItem> {
let my_nat_type = data
.peer_mgr
.get_global_ctx()
.get_stun_info_collector()
.get_stun_info()
.udp_nat_type;
let my_nat_type: UdpNatType = NatType::try_from(my_nat_type)
.unwrap_or(NatType::Unknown)
.into();
if !my_nat_type.is_sym() {
data.sym_to_cone_client.clear_udp_array().await;
}
let mut peers_to_connect: Vec<Self::CollectPeerItem> = Vec::new();
// do nothing if:
// 1. our NAT type is OpenInternet or NoPAT, which means we can simply wait for other peers to connect to us
// note: if our NAT type is unknown, we treat ourselves as cone
if my_nat_type.is_open() {
return peers_to_connect;
}
let my_peer_id = data.peer_mgr.my_peer_id();
// collect the peer list from the peer manager and filter it:
// 1. skip peers that already have direct connections;
// 2. keep only peers whose NAT type we can punch as the client (full cone, including restricted types);
for route in data.peer_mgr.list_routes().await.iter() {
if route
.feature_flag
.map(|x| x.is_public_server)
.unwrap_or(false)
{
continue;
}
let peer_nat_type = route
.stun_info
.as_ref()
.map(|x| x.udp_nat_type)
.unwrap_or(0);
let Ok(peer_nat_type) = NatType::try_from(peer_nat_type) else {
continue;
};
let peer_nat_type = peer_nat_type.into();
let peer_id: PeerId = route.peer_id;
let conns = data.peer_mgr.list_peer_conns(peer_id).await;
if conns.map_or(false, |c| !c.is_empty()) {
continue;
}
if !my_nat_type.can_punch_hole_as_client(peer_nat_type, my_peer_id, peer_id) {
continue;
}
tracing::info!(
?peer_id,
?peer_nat_type,
?my_nat_type,
"found peer to do hole punching"
);
peers_to_connect.push(PunchTaskInfo {
dst_peer_id: peer_id,
dst_nat_type: peer_nat_type,
my_nat_type,
});
}
peers_to_connect
}
async fn launch_task(
&self,
data: &Self::Data,
item: Self::CollectPeerItem,
) -> JoinHandle<Result<Self::TaskRet, Error>> {
let data = data.clone();
let punch_method = item.my_nat_type.get_punch_hole_method(item.dst_nat_type);
match punch_method {
UdpPunchClientMethod::ConeToCone => tokio::spawn(data.cone_to_cone(item)),
UdpPunchClientMethod::SymToCone => tokio::spawn(data.sym_to_cone(item)),
UdpPunchClientMethod::EasySymToEasySym => tokio::spawn(data.both_easy_sym(item)),
_ => unreachable!(),
}
}
async fn all_task_done(&self, data: &Self::Data) {
data.sym_to_cone_client.clear_udp_array().await;
}
fn loop_interval_ms(&self) -> u64 {
5000
}
}
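`launch_task` above selects one of three client strategies from the NAT-type pairing via `UdpNatType::get_punch_hole_method`, whose rules are not shown in this excerpt. The sketch below is a hedged, simplified stand-in for that mapping (the variant names and pairings are assumptions, not the library's actual tables):

```rust
// Simplified stand-in (assumed) for UdpNatType::get_punch_hole_method.
#[derive(Clone, Copy, Debug, PartialEq)]
enum NatKind {
    Cone,    // full cone / restricted / port restricted
    EasySym, // symmetric NAT with predictable port increments
    HardSym, // symmetric NAT with random ports
}

#[derive(Debug, PartialEq)]
enum PunchMethod {
    ConeToCone,
    SymToCone,
    EasySymToEasySym,
}

fn punch_method(me: NatKind, dst: NatKind) -> Option<PunchMethod> {
    match (me, dst) {
        // both sides are cone-like: simple packet exchange
        (NatKind::Cone, NatKind::Cone) => Some(PunchMethod::ConeToCone),
        // we are symmetric, the peer is cone: spray from a socket array
        (NatKind::EasySym | NatKind::HardSym, NatKind::Cone) => Some(PunchMethod::SymToCone),
        // both are easy symmetric: probe each other's predicted port windows
        (NatKind::EasySym, NatKind::EasySym) => Some(PunchMethod::EasySymToEasySym),
        // no client-side strategy for the remaining pairings in this sketch
        _ => None,
    }
}
```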
pub struct UdpHolePunchConnector {
server: Arc<UdpHolePunchServer>,
client: PeerTaskManager<UdpHolePunchPeerTaskLauncher>,
peer_mgr: Arc<PeerManager>,
}
// Currently supported:
// Symmetric -> Full Cone
// Any Type of Full Cone -> Any Type of Full Cone
// if both sides are the same level of full cone, the node with the smaller peer_id is the initiator
// if they are different levels of full cone, the node with the stricter level is the initiator
impl UdpHolePunchConnector {
pub fn new(peer_mgr: Arc<PeerManager>) -> Self {
Self {
server: UdpHolePunchServer::new(peer_mgr.clone()),
client: PeerTaskManager::new(UdpHolePunchPeerTaskLauncher {}, peer_mgr.clone()),
peer_mgr,
}
}
pub async fn run_as_client(&mut self) -> Result<(), Error> {
self.client.start();
Ok(())
}
pub async fn run_as_server(&mut self) -> Result<(), Error> {
self.peer_mgr
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
UdpHolePunchRpcServer::new(self.server.clone()),
&self.peer_mgr.get_global_ctx().get_network_name(),
);
Ok(())
}
pub async fn run(&mut self) -> Result<(), Error> {
let global_ctx = self.peer_mgr.get_global_ctx();
if global_ctx.get_flags().disable_p2p {
return Ok(());
}
if global_ctx.get_flags().disable_udp_hole_punching {
return Ok(());
}
self.run_as_client().await?;
self.run_as_server().await?;
Ok(())
}
}
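The comment above the `impl` block encodes an initiator tie-break: equal cone levels fall back to peer_id ordering, otherwise the stricter side initiates. A minimal sketch of that rule, with a hypothetical strictness ranking for the cone levels (the ranking values and helper names are assumptions, not taken from the source):

```rust
type PeerId = u32;

// Hypothetical strictness ranking: higher means a more restrictive cone.
fn strictness(nat: &str) -> u8 {
    match nat {
        "FullCone" => 0,
        "Restricted" => 1,
        "PortRestricted" => 2,
        _ => u8::MAX,
    }
}

// true if the local node should initiate hole punching toward the peer
fn should_initiate(my_nat: &str, my_id: PeerId, dst_nat: &str, dst_id: PeerId) -> bool {
    let (mine, theirs) = (strictness(my_nat), strictness(dst_nat));
    if mine == theirs {
        // same cone level: the node with the smaller peer_id initiates
        my_id < dst_id
    } else {
        // different levels: the stricter side initiates
        mine > theirs
    }
}
```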
#[cfg(test)]
pub mod tests {
use std::sync::Arc;
use crate::common::stun::MockStunInfoCollector;
use crate::proto::common::NatType;
use crate::peers::{peer_manager::PeerManager, tests::create_mock_peer_manager};
pub fn replace_stun_info_collector(peer_mgr: Arc<PeerManager>, udp_nat_type: NatType) {
let collector = Box::new(MockStunInfoCollector { udp_nat_type });
peer_mgr
.get_global_ctx()
.replace_stun_info_collector(collector);
}
pub async fn create_mock_peer_manager_with_mock_stun(
udp_nat_type: NatType,
) -> Arc<PeerManager> {
let p_a = create_mock_peer_manager().await;
replace_stun_info_collector(p_a.clone(), udp_nat_type);
p_a
}
}
@@ -0,0 +1,589 @@
use std::{
net::Ipv4Addr,
ops::{Div, Mul},
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
time::{Duration, Instant},
};
use anyhow::Context;
use rand::{seq::SliceRandom, Rng};
use tokio::{net::UdpSocket, sync::RwLock};
use tracing::Level;
use crate::{
common::{scoped_task::ScopedTask, stun::StunInfoCollectorTrait, PeerId},
connector::udp_hole_punch::common::{
send_symmetric_hole_punch_packet, try_connect_with_socket, HOLE_PUNCH_PACKET_BODY_LEN,
},
defer,
peers::peer_manager::PeerManager,
proto::{
peer_rpc::{
SelectPunchListenerRequest, SendPunchPacketEasySymRequest,
SendPunchPacketHardSymRequest, SendPunchPacketHardSymResponse,
UdpHolePunchRpcClientFactory,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{udp::new_hole_punch_packet, Tunnel},
};
use super::common::{PunchHoleServerCommon, UdpNatType, UdpSocketArray};
const UDP_ARRAY_SIZE_FOR_HARD_SYM: usize = 84;
pub(crate) struct PunchSymToConeHoleServer {
common: Arc<PunchHoleServerCommon>,
shuffled_port_vec: Arc<Vec<u16>>,
}
impl PunchSymToConeHoleServer {
pub(crate) fn new(common: Arc<PunchHoleServerCommon>) -> Self {
let mut shuffled_port_vec: Vec<u16> = (1..=65535).collect();
shuffled_port_vec.shuffle(&mut rand::thread_rng());
Self {
common,
shuffled_port_vec: Arc::new(shuffled_port_vec),
}
}
// easy sym means the public port changes by a small, predictable increment or decrement
#[tracing::instrument(skip(self), ret)]
pub(crate) async fn send_punch_packet_easy_sym(
&self,
request: SendPunchPacketEasySymRequest,
) -> Result<(), rpc_types::error::Error> {
tracing::info!("send_punch_packet_easy_sym start");
let listener_addr = request.listener_mapped_addr.ok_or(anyhow::anyhow!(
"send_punch_packet_easy_sym request missing listener_addr"
))?;
let listener_addr = std::net::SocketAddr::from(listener_addr);
let listener = self
.common
.find_listener(&listener_addr)
.await
.ok_or(anyhow::anyhow!(
"send_punch_packet_easy_sym failed to find listener"
))?;
let public_ips = request
.public_ips
.into_iter()
.map(|ip| std::net::Ipv4Addr::from(ip))
.collect::<Vec<_>>();
if public_ips.is_empty() {
tracing::warn!("send_punch_packet_easy_sym got zero len public ip");
return Err(
anyhow::anyhow!("send_punch_packet_easy_sym got zero len public ip").into(),
);
}
let transaction_id = request.transaction_id;
let base_port_num = request.base_port_num;
let max_port_num = request.max_port_num.max(1);
let is_incremental = request.is_incremental;
let port_start = if is_incremental {
base_port_num.saturating_add(1)
} else {
base_port_num.saturating_sub(max_port_num)
};
let port_end = if is_incremental {
base_port_num.saturating_add(max_port_num)
} else {
base_port_num.saturating_sub(1)
};
if port_end <= port_start {
return Err(anyhow::anyhow!("send_punch_packet_easy_sym invalid port range").into());
}
let ports = (port_start..=port_end)
.map(|x| x as u16)
.collect::<Vec<_>>();
tracing::debug!(
?ports,
?public_ips,
"send_punch_packet_easy_sym send to ports"
);
send_symmetric_hole_punch_packet(
&ports,
listener,
transaction_id,
&public_ips,
0,
ports.len(),
)
.await
.with_context(|| "failed to send symmetric hole punch packet")?;
Ok(())
}
// hard sym means public port is random and cannot be predicted
#[tracing::instrument(skip(self))]
pub(crate) async fn send_punch_packet_hard_sym(
&self,
request: SendPunchPacketHardSymRequest,
) -> Result<SendPunchPacketHardSymResponse, rpc_types::error::Error> {
tracing::info!("send_punch_packet_hard_sym start");
let listener_addr = request.listener_mapped_addr.ok_or(anyhow::anyhow!(
"send_punch_packet_hard_sym request missing listener_addr"
))?;
let listener_addr = std::net::SocketAddr::from(listener_addr);
let listener = self
.common
.find_listener(&listener_addr)
.await
.ok_or(anyhow::anyhow!(
"send_punch_packet_hard_sym failed to find listener"
))?;
let public_ips = request
.public_ips
.into_iter()
.map(|ip| std::net::Ipv4Addr::from(ip))
.collect::<Vec<_>>();
if public_ips.is_empty() {
tracing::warn!("send_punch_packet_hard_sym got zero len public ip");
return Err(anyhow::anyhow!("send_punch_packet_hard_sym got zero len public ip").into());
}
let transaction_id = request.transaction_id;
let last_port_index = request.port_index as usize;
let round = std::cmp::max(request.round, 1);
// send max k1 packets if we are predicting the dst port
let max_k1: u32 = 180;
// send max k2 packets if we are sending to random port
let mut max_k2: u32 = rand::thread_rng().gen_range(600..800);
if round > 2 {
max_k2 = max_k2.mul(2).div(round).max(max_k1);
}
let next_port_index = send_symmetric_hole_punch_packet(
&self.shuffled_port_vec,
listener.clone(),
transaction_id,
&public_ips,
last_port_index,
max_k2 as usize,
)
.await
.with_context(|| "failed to send symmetric hole punch packet randomly")?;
return Ok(SendPunchPacketHardSymResponse {
next_port_index: next_port_index as u32,
});
}
}
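`send_punch_packet_easy_sym` above derives its probe window from the peer's last observed mapped port: an incremental NAT is probed just above the base port, a decremental one just below it. The window arithmetic can be isolated and exercised on its own (a sketch mirroring the logic above, not the original helper):

```rust
// Mirror of the easy-sym probe window computation: compute in u32 with
// saturating arithmetic, then narrow to u16 ports.
fn easy_sym_ports(base_port: u32, max_ports: u32, is_incremental: bool) -> Vec<u16> {
    let max_ports = max_ports.max(1);
    let (start, end) = if is_incremental {
        (base_port.saturating_add(1), base_port.saturating_add(max_ports))
    } else {
        (base_port.saturating_sub(max_ports), base_port.saturating_sub(1))
    };
    if end <= start {
        // degenerate or inverted window: nothing worth probing
        return Vec::new();
    }
    (start..=end).map(|p| p as u16).collect()
}
```

With `max_port_num: 50` as in the client request above, an incremental NAT at base port 40000 yields probes on ports 40001 through 40050.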
pub(crate) struct PunchSymToConeHoleClient {
peer_mgr: Arc<PeerManager>,
udp_array: RwLock<Option<Arc<UdpSocketArray>>>,
try_direct_connect: AtomicBool,
punch_predicablely: AtomicBool,
punch_randomly: AtomicBool,
}
impl PunchSymToConeHoleClient {
pub(crate) fn new(peer_mgr: Arc<PeerManager>) -> Self {
Self {
peer_mgr,
udp_array: RwLock::new(None),
try_direct_connect: AtomicBool::new(true),
punch_predicablely: AtomicBool::new(true),
punch_randomly: AtomicBool::new(true),
}
}
async fn prepare_udp_array(&self) -> Result<Arc<UdpSocketArray>, anyhow::Error> {
let rlocked = self.udp_array.read().await;
if let Some(udp_array) = rlocked.clone() {
return Ok(udp_array);
}
drop(rlocked);
let mut wlocked = self.udp_array.write().await;
if let Some(udp_array) = wlocked.clone() {
return Ok(udp_array);
}
let udp_array = Arc::new(UdpSocketArray::new(
UDP_ARRAY_SIZE_FOR_HARD_SYM,
self.peer_mgr.get_global_ctx().net_ns.clone(),
));
udp_array.start().await?;
wlocked.replace(udp_array.clone());
Ok(udp_array)
}
pub(crate) async fn clear_udp_array(&self) {
let mut wlocked = self.udp_array.write().await;
wlocked.take();
}
async fn get_base_port_for_easy_sym(&self, my_nat_info: UdpNatType) -> Option<u16> {
let global_ctx = self.peer_mgr.get_global_ctx();
if my_nat_info.is_easy_sym() {
match global_ctx
.get_stun_info_collector()
.get_udp_port_mapping(0)
.await
{
Ok(addr) => Some(addr.port()),
ret => {
tracing::warn!(?ret, "failed to get udp port mapping for easy sym");
None
}
}
} else {
None
}
}
#[tracing::instrument(err(level = Level::ERROR), skip(self))]
pub(crate) async fn do_hole_punching(
&self,
dst_peer_id: PeerId,
round: u32,
last_port_idx: &mut usize,
my_nat_info: UdpNatType,
) -> Result<Option<Box<dyn Tunnel>>, anyhow::Error> {
let udp_array = self.prepare_udp_array().await?;
let global_ctx = self.peer_mgr.get_global_ctx();
let rpc_stub = self
.peer_mgr
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
self.peer_mgr.my_peer_id(),
dst_peer_id,
global_ctx.get_network_name(),
);
let resp = rpc_stub
.select_punch_listener(
BaseController::default(),
SelectPunchListenerRequest { force_new: false },
)
.await
.with_context(|| "failed to select punch listener")?;
let remote_mapped_addr = resp.listener_mapped_addr.ok_or(anyhow::anyhow!(
"select_punch_listener response missing listener_mapped_addr"
))?;
// try direct connect first
if self.try_direct_connect.load(Ordering::Relaxed) {
if let Ok(tunnel) = try_connect_with_socket(
Arc::new(UdpSocket::bind("0.0.0.0:0").await?),
remote_mapped_addr.into(),
)
.await
{
return Ok(Some(tunnel));
}
}
let stun_info = global_ctx.get_stun_info_collector().get_stun_info();
let public_ips: Vec<Ipv4Addr> = stun_info
.public_ip
.iter()
.map(|x| x.parse().unwrap())
.collect();
if public_ips.is_empty() {
return Err(anyhow::anyhow!("failed to get public ips"));
}
let tid = rand::thread_rng().gen();
let packet = new_hole_punch_packet(tid, HOLE_PUNCH_PACKET_BODY_LEN).into_bytes();
udp_array.add_intreast_tid(tid);
defer! { udp_array.remove_intreast_tid(tid);}
udp_array
.send_with_all(&packet, remote_mapped_addr.into())
.await?;
let port_index = *last_port_idx as u32;
let base_port_for_easy_sym = self.get_base_port_for_easy_sym(my_nat_info).await;
let punch_random = self.punch_randomly.load(Ordering::Relaxed);
let punch_predicable = self.punch_predicablely.load(Ordering::Relaxed);
let scoped_punch_task: ScopedTask<Option<u32>> = tokio::spawn(async move {
if punch_predicable {
if let Some(inc) = my_nat_info.get_inc_of_easy_sym() {
let req = SendPunchPacketEasySymRequest {
listener_mapped_addr: remote_mapped_addr.clone().into(),
public_ips: public_ips.clone().into_iter().map(|x| x.into()).collect(),
transaction_id: tid,
base_port_num: base_port_for_easy_sym.unwrap() as u32,
max_port_num: 50,
is_incremental: inc,
};
tracing::debug!(?req, "send punch packet for easy sym start");
let ret = rpc_stub
.send_punch_packet_easy_sym(
BaseController {
timeout_ms: 4000,
trace_id: 0,
},
req,
)
.await;
tracing::debug!(?ret, "send punch packet for easy sym return");
}
}
if punch_random {
let req = SendPunchPacketHardSymRequest {
listener_mapped_addr: remote_mapped_addr.clone().into(),
public_ips: public_ips.clone().into_iter().map(|x| x.into()).collect(),
transaction_id: tid,
round,
port_index,
};
tracing::debug!(?req, "send punch packet for hard sym start");
match rpc_stub
.send_punch_packet_hard_sym(
BaseController {
timeout_ms: 4000,
trace_id: 0,
},
req,
)
.await
{
Err(e) => {
tracing::error!(?e, "failed to send punch packet for hard sym");
return None;
}
Ok(resp) => return Some(resp.next_port_index),
}
}
None
})
.into();
// no matter what the result is, we should check if we received any hole punching packet
let mut ret_tunnel: Option<Box<dyn Tunnel>> = None;
let mut finish_time: Option<Instant> = None;
while finish_time.is_none() || finish_time.as_ref().unwrap().elapsed().as_millis() < 1000 {
tokio::time::sleep(Duration::from_millis(200)).await;
if finish_time.is_none() && (*scoped_punch_task).is_finished() {
finish_time = Some(Instant::now());
}
let Some(socket) = udp_array.try_fetch_punched_socket(tid) else {
tracing::debug!("no punched socket found, wait for more time");
continue;
};
// if the hole was punched but tunnel creation failed, the entire process needs to be retried.
match try_connect_with_socket(socket.socket.clone(), remote_mapped_addr.into()).await {
Ok(tunnel) => {
ret_tunnel.replace(tunnel);
break;
}
Err(e) => {
tracing::error!(?e, "failed to connect with socket");
udp_array.add_new_socket(socket.socket).await?;
continue;
}
}
}
let punch_task_result = scoped_punch_task.await;
tracing::debug!(?punch_task_result, ?ret_tunnel, "punch task got result");
if let Ok(Some(next_port_idx)) = punch_task_result {
*last_port_idx = next_port_idx as usize;
} else {
*last_port_idx = rand::random();
}
Ok(ret_tunnel)
}
}
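`do_hole_punching` above keeps polling the socket array while the remote-spray task runs, and for roughly one extra second after it finishes, since a punched socket can surface slightly after the last packet is sent. Stripped of networking, the timing discipline looks roughly like this (a synchronous sketch; the names are illustrative, not from the source):

```rust
use std::time::{Duration, Instant};

// Poll `try_fetch` until it yields a value, giving up only after
// `grace` has elapsed past the moment `task_finished` first reports true.
fn poll_with_grace<F, G, T>(mut task_finished: F, mut try_fetch: G, grace: Duration) -> Option<T>
where
    F: FnMut() -> bool,
    G: FnMut() -> Option<T>,
{
    let mut finish_time: Option<Instant> = None;
    loop {
        if let Some(v) = try_fetch() {
            return Some(v);
        }
        if finish_time.is_none() && task_finished() {
            // remember when the helper task completed; keep polling a bit longer
            finish_time = Some(Instant::now());
        }
        if let Some(t) = finish_time {
            if t.elapsed() >= grace {
                return None;
            }
        }
        std::thread::sleep(Duration::from_millis(1));
    }
}
```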
#[cfg(test)]
pub mod tests {
use std::{
sync::{atomic::AtomicU32, Arc},
time::Duration,
};
use tokio::net::UdpSocket;
use crate::{
connector::udp_hole_punch::{
tests::create_mock_peer_manager_with_mock_stun, UdpHolePunchConnector, RUN_TESTING,
},
peers::tests::{connect_peer_manager, wait_route_appear, wait_route_appear_with_cost},
proto::common::NatType,
tunnel::common::tests::wait_for_condition,
};
#[tokio::test]
#[serial_test::serial(hole_punch)]
async fn hole_punching_symmetric_only_random() {
RUN_TESTING.store(true, std::sync::atomic::Ordering::Relaxed);
let p_a = create_mock_peer_manager_with_mock_stun(NatType::Symmetric).await;
let p_b = create_mock_peer_manager_with_mock_stun(NatType::PortRestricted).await;
let p_c = create_mock_peer_manager_with_mock_stun(NatType::PortRestricted).await;
connect_peer_manager(p_a.clone(), p_b.clone()).await;
connect_peer_manager(p_b.clone(), p_c.clone()).await;
wait_route_appear(p_a.clone(), p_c.clone()).await.unwrap();
let mut hole_punching_a = UdpHolePunchConnector::new(p_a.clone());
let mut hole_punching_c = UdpHolePunchConnector::new(p_c.clone());
hole_punching_a
.client
.data()
.sym_to_cone_client
.try_direct_connect
.store(false, std::sync::atomic::Ordering::Relaxed);
hole_punching_a
.client
.data()
.sym_to_cone_client
.punch_predicablely
.store(false, std::sync::atomic::Ordering::Relaxed);
hole_punching_a.run().await.unwrap();
hole_punching_c.run().await.unwrap();
hole_punching_a.client.run_immediately().await;
wait_for_condition(
|| async {
hole_punching_a
.client
.data()
.sym_to_cone_client
.udp_array
.read()
.await
.is_some()
},
Duration::from_secs(5),
)
.await;
wait_for_condition(
|| async {
wait_route_appear_with_cost(p_a.clone(), p_c.my_peer_id(), Some(1))
.await
.is_ok()
},
Duration::from_secs(5),
)
.await;
println!("{:?}", p_a.list_routes().await);
wait_for_condition(
|| async {
hole_punching_a
.client
.data()
.sym_to_cone_client
.udp_array
.read()
.await
.is_none()
},
Duration::from_secs(10),
)
.await;
}
#[rstest::rstest]
#[tokio::test]
#[serial_test::serial(hole_punch)]
async fn hole_punching_symmetric_only_predict(#[values(true, false)] is_inc: bool) {
RUN_TESTING.store(true, std::sync::atomic::Ordering::Relaxed);
let p_a = create_mock_peer_manager_with_mock_stun(if is_inc {
NatType::SymmetricEasyInc
} else {
NatType::SymmetricEasyDec
})
.await;
let p_b = create_mock_peer_manager_with_mock_stun(NatType::PortRestricted).await;
let p_c = create_mock_peer_manager_with_mock_stun(NatType::PortRestricted).await;
connect_peer_manager(p_a.clone(), p_b.clone()).await;
connect_peer_manager(p_b.clone(), p_c.clone()).await;
wait_route_appear(p_a.clone(), p_c.clone()).await.unwrap();
let mut hole_punching_a = UdpHolePunchConnector::new(p_a.clone());
let mut hole_punching_c = UdpHolePunchConnector::new(p_c.clone());
hole_punching_a
.client
.data()
.sym_to_cone_client
.try_direct_connect
.store(false, std::sync::atomic::Ordering::Relaxed);
hole_punching_a
.client
.data()
.sym_to_cone_client
.punch_randomly
.store(false, std::sync::atomic::Ordering::Relaxed);
hole_punching_a.run().await.unwrap();
hole_punching_c.run().await.unwrap();
let udps = if is_inc {
let udp1 = Arc::new(UdpSocket::bind("0.0.0.0:40147").await.unwrap());
let udp2 = Arc::new(UdpSocket::bind("0.0.0.0:40194").await.unwrap());
vec![udp1, udp2]
} else {
let udp1 = Arc::new(UdpSocket::bind("0.0.0.0:40141").await.unwrap());
let udp2 = Arc::new(UdpSocket::bind("0.0.0.0:40100").await.unwrap());
vec![udp1, udp2]
};
// let udp_dec = Arc::new(UdpSocket::bind("0.0.0.0:40140").await.unwrap());
// let udp_dec2 = Arc::new(UdpSocket::bind("0.0.0.0:40050").await.unwrap());
let counter = Arc::new(AtomicU32::new(0));
// all of these sockets should receive a hole punching packet
for udp in udps.iter().map(Arc::clone) {
let counter = counter.clone();
tokio::spawn(async move {
let mut buf = [0u8; 1024];
let (len, addr) = udp.recv_from(&mut buf).await.unwrap();
println!(
"got predictable punch packet, {:?} {:?} {:?}",
len,
addr,
udp.local_addr()
);
counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
});
}
hole_punching_a.client.run_immediately().await;
let udp_len = udps.len();
wait_for_condition(
|| async { counter.load(std::sync::atomic::Ordering::Relaxed) == udp_len as u32 },
Duration::from_secs(30),
)
.await;
}
}
@@ -1,33 +1,36 @@
#![allow(dead_code)]
use std::{net::SocketAddr, time::Duration, vec};
use std::{net::SocketAddr, sync::Mutex, time::Duration, vec};
use anyhow::{Context, Ok};
use clap::{command, Args, Parser, Subcommand};
use common::stun::StunInfoCollectorTrait;
use rpc::vpn_portal_rpc_client::VpnPortalRpcClient;
use common::{constants::EASYTIER_VERSION, stun::StunInfoCollectorTrait};
use proto::{
common::NatType,
peer_rpc::{GetGlobalPeerMapRequest, PeerCenterRpc, PeerCenterRpcClientFactory},
rpc_impl::standalone::StandAloneClient,
rpc_types::controller::BaseController,
};
use tokio::time::timeout;
use tunnel::tcp::TcpTunnelConnector;
use utils::{list_peer_route_pair, PeerRoutePair};
mod arch;
mod common;
mod rpc;
mod proto;
mod tunnel;
mod utils;
use crate::{
common::stun::StunInfoCollector,
rpc::{
connector_manage_rpc_client::ConnectorManageRpcClient,
peer_center_rpc_client::PeerCenterRpcClient, peer_manage_rpc_client::PeerManageRpcClient,
*,
},
proto::cli::*,
utils::{cost_to_str, float_to_str},
};
use humansize::format_size;
use tabled::settings::Style;
#[derive(Parser, Debug)]
#[command(name = "easytier-cli", author, version, about, long_about = None)]
#[command(name = "easytier-cli", author, version = EASYTIER_VERSION, about, long_about = None)]
struct Cli {
/// the instance name
#[arg(short = 'p', long, default_value = "127.0.0.1:15888")]
@@ -69,6 +72,7 @@ enum PeerSubCommand {
Remove,
List(PeerListArgs),
ListForeign,
ListGlobalForeign,
}
#[derive(Args, Debug)]
@@ -114,58 +118,78 @@ struct NodeArgs {
sub_command: Option<NodeSubCommand>,
}
#[derive(thiserror::Error, Debug)]
enum Error {
#[error("tonic transport error")]
TonicTransportError(#[from] tonic::transport::Error),
#[error("tonic rpc error")]
TonicRpcError(#[from] tonic::Status),
#[error("anyhow error")]
Anyhow(#[from] anyhow::Error),
}
type Error = anyhow::Error;
struct CommandHandler {
addr: String,
client: Mutex<RpcClient>,
verbose: bool,
}
type RpcClient = StandAloneClient<TcpTunnelConnector>;
impl CommandHandler {
async fn get_peer_manager_client(
&self,
) -> Result<PeerManageRpcClient<tonic::transport::Channel>, Error> {
Ok(PeerManageRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn PeerManageRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<PeerManageRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get peer manager client")?)
}
async fn get_connector_manager_client(
&self,
) -> Result<ConnectorManageRpcClient<tonic::transport::Channel>, Error> {
Ok(ConnectorManageRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn ConnectorManageRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<ConnectorManageRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get connector manager client")?)
}
async fn get_peer_center_client(
&self,
) -> Result<PeerCenterRpcClient<tonic::transport::Channel>, Error> {
Ok(PeerCenterRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn PeerCenterRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<PeerCenterRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get peer center client")?)
}
async fn get_vpn_portal_client(
&self,
) -> Result<VpnPortalRpcClient<tonic::transport::Channel>, Error> {
Ok(VpnPortalRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn VpnPortalRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<VpnPortalRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get vpn portal client")?)
}
async fn list_peers(&self) -> Result<ListPeerResponse, Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListPeerRequest::default());
let response = client.list_peer(request).await?;
Ok(response.into_inner())
let client = self.get_peer_manager_client().await?;
let request = ListPeerRequest::default();
let response = client.list_peer(BaseController::default(), request).await?;
Ok(response)
}
async fn list_routes(&self) -> Result<ListRouteResponse, Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListRouteRequest::default());
let response = client.list_route(request).await?;
Ok(response.into_inner())
let client = self.get_peer_manager_client().await?;
let request = ListRouteRequest::default();
let response = client
.list_route(BaseController::default(), request)
.await?;
Ok(response)
}
async fn list_peer_route_pair(&self) -> Result<Vec<PeerRoutePair>, Error> {
@@ -197,12 +221,18 @@ impl CommandHandler {
tunnel_proto: String,
nat_type: String,
id: String,
version: String,
}
impl From<PeerRoutePair> for PeerTableItem {
fn from(p: PeerRoutePair) -> Self {
PeerTableItem {
ipv4: p.route.ipv4_addr.clone(),
ipv4: p
.route
.ipv4_addr
.clone()
.map(|ip| ip.to_string())
.unwrap_or_default(),
hostname: p.route.hostname.clone(),
cost: cost_to_str(p.route.cost),
lat_ms: float_to_str(p.get_latency_ms().unwrap_or(0.0), 3),
@@ -212,6 +242,33 @@ impl CommandHandler {
tunnel_proto: p.get_conn_protos().unwrap_or(vec![]).join(",").to_string(),
nat_type: p.get_udp_nat_type(),
id: p.route.peer_id.to_string(),
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
}
}
}
impl From<NodeInfo> for PeerTableItem {
fn from(p: NodeInfo) -> Self {
PeerTableItem {
ipv4: p.ipv4_addr.clone(),
hostname: p.hostname.clone(),
cost: "Local".to_string(),
lat_ms: "-".to_string(),
loss_rate: "-".to_string(),
rx_bytes: "-".to_string(),
tx_bytes: "-".to_string(),
tunnel_proto: "-".to_string(),
nat_type: if let Some(info) = p.stun_info {
info.udp_nat_type().as_str_name().to_string()
} else {
"Unknown".to_string()
},
id: p.peer_id.to_string(),
version: p.version,
}
}
}
@@ -223,6 +280,14 @@ impl CommandHandler {
return Ok(());
}
let client = self.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController::default(), ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
items.push(node_info.into());
for p in peer_routes {
items.push(p.into());
}
@@ -236,18 +301,22 @@ impl CommandHandler {
}
async fn handle_route_dump(&self) -> Result<(), Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(DumpRouteRequest::default());
let response = client.dump_route(request).await?;
println!("response: {}", response.into_inner().result);
let client = self.get_peer_manager_client().await?;
let request = DumpRouteRequest::default();
let response = client
.dump_route(BaseController::default(), request)
.await?;
println!("response: {}", response.result);
Ok(())
}
async fn handle_foreign_network_list(&self) -> Result<(), Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListForeignNetworkRequest::default());
let response = client.list_foreign_network(request).await?;
let network_map = response.into_inner();
let client = self.get_peer_manager_client().await?;
let request = ListForeignNetworkRequest::default();
let response = client
.list_foreign_network(BaseController::default(), request)
.await?;
let network_map = response;
if self.verbose {
println!("{:#?}", network_map);
return Ok(());
@@ -266,7 +335,7 @@ impl CommandHandler {
"remote_addr: {}, rx_bytes: {}, tx_bytes: {}, latency_us: {}",
conn.tunnel
.as_ref()
.map(|t| t.remote_addr.clone())
.map(|t| t.remote_addr.clone().unwrap_or_default())
.unwrap_or_default(),
conn.stats.as_ref().map(|s| s.rx_bytes).unwrap_or_default(),
conn.stats.as_ref().map(|s| s.tx_bytes).unwrap_or_default(),
@@ -283,6 +352,30 @@ impl CommandHandler {
Ok(())
}
async fn handle_global_foreign_network_list(&self) -> Result<(), Error> {
let client = self.get_peer_manager_client().await?;
let request = ListGlobalForeignNetworkRequest::default();
let response = client
.list_global_foreign_network(BaseController::default(), request)
.await?;
if self.verbose {
println!("{:#?}", response);
return Ok(());
}
for (k, v) in response.foreign_networks.iter() {
println!("Peer ID: {}", k);
for n in v.foreign_networks.iter() {
println!(
" Network Name: {}, Last Updated: {}, Version: {}, PeerIds: {:?}",
n.network_name, n.last_updated, n.version, n.peer_ids
);
}
}
Ok(())
}
async fn handle_route_list(&self) -> Result<(), Error> {
#[derive(tabled::Tabled)]
struct RouteTableItem {
@@ -293,9 +386,27 @@ impl CommandHandler {
next_hop_hostname: String,
next_hop_lat: f64,
cost: i32,
version: String,
}
let mut items: Vec<RouteTableItem> = vec![];
let client = self.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController::default(), ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
items.push(RouteTableItem {
ipv4: node_info.ipv4_addr.clone(),
hostname: node_info.hostname.clone(),
proxy_cidrs: node_info.proxy_cidrs.join(", "),
next_hop_ipv4: "-".to_string(),
next_hop_hostname: "Local".to_string(),
next_hop_lat: 0.0,
cost: 0,
version: node_info.version.clone(),
});
let peer_routes = self.list_peer_route_pair().await?;
for p in peer_routes.iter() {
let Some(next_hop_pair) = peer_routes
@@ -307,23 +418,48 @@ impl CommandHandler {
if p.route.cost == 1 {
items.push(RouteTableItem {
ipv4: p.route.ipv4_addr.clone(),
ipv4: p
.route
.ipv4_addr
.clone()
.map(|ip| ip.to_string())
.unwrap_or_default(),
hostname: p.route.hostname.clone(),
proxy_cidrs: p.route.proxy_cidrs.clone().join(",").to_string(),
next_hop_ipv4: "DIRECT".to_string(),
next_hop_hostname: "".to_string(),
next_hop_lat: next_hop_pair.get_latency_ms().unwrap_or(0.0),
cost: p.route.cost,
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
});
} else {
items.push(RouteTableItem {
ipv4: p.route.ipv4_addr.clone(),
ipv4: p
.route
.ipv4_addr
.clone()
.map(|ip| ip.to_string())
.unwrap_or_default(),
hostname: p.route.hostname.clone(),
proxy_cidrs: p.route.proxy_cidrs.clone().join(",").to_string(),
next_hop_ipv4: next_hop_pair.route.ipv4_addr.clone(),
next_hop_ipv4: next_hop_pair
.route
.ipv4_addr
.clone()
.map(|ip| ip.to_string())
.unwrap_or_default(),
next_hop_hostname: next_hop_pair.route.hostname.clone(),
next_hop_lat: next_hop_pair.get_latency_ms().unwrap_or(0.0),
cost: p.route.cost,
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
});
}
}
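The hunk above changes the route-table columns from plain strings to optional typed addresses, rendered with `map(...).unwrap_or_default()` so a peer without an address shows an empty cell instead of panicking. A minimal sketch of that display pattern, using std's `Ipv4Addr` in place of the crate's CIDR type:

```rust
use std::net::Ipv4Addr;

/// Render an optional address for table output: `None` becomes an
/// empty cell rather than the string "None" or a panic.
fn display_addr(addr: Option<Ipv4Addr>) -> String {
    addr.map(|ip| ip.to_string()).unwrap_or_default()
}
```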
@@ -337,10 +473,12 @@ impl CommandHandler {
}
async fn handle_connector_list(&self) -> Result<(), Error> {
let mut client = self.get_connector_manager_client().await?;
let request = tonic::Request::new(ListConnectorRequest::default());
let response = client.list_connector(request).await?;
println!("response: {:#?}", response.into_inner());
let client = self.get_connector_manager_client().await?;
let request = ListConnectorRequest::default();
let response = client
.list_connector(BaseController::default(), request)
.await?;
println!("response: {:#?}", response);
Ok(())
}
}
@@ -349,8 +487,13 @@ impl CommandHandler {
#[tracing::instrument]
async fn main() -> Result<(), Error> {
let cli = Cli::parse();
let client = RpcClient::new(TcpTunnelConnector::new(
format!("tcp://{}:{}", cli.rpc_portal.ip(), cli.rpc_portal.port())
.parse()
.unwrap(),
));
let handler = CommandHandler {
addr: format!("http://{}:{}", cli.rpc_portal.ip(), cli.rpc_portal.port()),
client: Mutex::new(client),
verbose: cli.verbose,
};
@@ -372,6 +515,9 @@ async fn main() -> Result<(), Error> {
Some(PeerSubCommand::ListForeign) => {
handler.handle_foreign_network_list().await?;
}
Some(PeerSubCommand::ListGlobalForeign) => {
handler.handle_global_foreign_network_list().await?;
}
None => {
handler.handle_peer_list(&peer_args).await?;
}
@@ -395,7 +541,7 @@ async fn main() -> Result<(), Error> {
Some(RouteSubCommand::Dump) => handler.handle_route_dump().await?,
},
SubCommand::Stun => {
timeout(Duration::from_secs(5), async move {
timeout(Duration::from_secs(25), async move {
let collector = StunInfoCollector::new_with_default_servers();
loop {
let ret = collector.get_stun_info();
@@ -410,11 +556,13 @@ async fn main() -> Result<(), Error> {
.unwrap();
}
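The `Stun` subcommand's overall budget grows from 5 s to 25 s so slow or partially unreachable STUN servers still get a chance to answer. A sketch of the deadline-bounded retry shape (the probe itself is a stub; only the budget/loop structure mirrors the diff):

```rust
use std::time::{Duration, Instant};

/// Poll `probe` until it yields a value or the time budget runs out.
fn poll_until<F: FnMut() -> Option<u32>>(mut probe: F, budget: Duration) -> Option<u32> {
    let deadline = Instant::now() + budget;
    loop {
        if let Some(v) = probe() {
            return Some(v);
        }
        if Instant::now() >= deadline {
            return None;
        }
    }
}
```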
SubCommand::PeerCenter => {
let mut peer_center_client = handler.get_peer_center_client().await?;
let peer_center_client = handler.get_peer_center_client().await?;
let resp = peer_center_client
.get_global_peer_map(GetGlobalPeerMapRequest::default())
.await?
.into_inner();
.get_global_peer_map(
BaseController::default(),
GetGlobalPeerMapRequest::default(),
)
.await?;
#[derive(tabled::Tabled)]
struct PeerCenterTableItem {
@@ -444,11 +592,13 @@ async fn main() -> Result<(), Error> {
);
}
SubCommand::VpnPortal => {
let mut vpn_portal_client = handler.get_vpn_portal_client().await?;
let vpn_portal_client = handler.get_vpn_portal_client().await?;
let resp = vpn_portal_client
.get_vpn_portal_info(GetVpnPortalInfoRequest::default())
.get_vpn_portal_info(
BaseController::default(),
GetVpnPortalInfoRequest::default(),
)
.await?
.into_inner()
.vpn_portal_info
.unwrap_or_default();
println!("portal_name: {}", resp.vpn_type);
@@ -463,11 +613,10 @@ async fn main() -> Result<(), Error> {
println!("connected_clients:\n{:#?}", resp.connected_clients);
}
SubCommand::Node(sub_cmd) => {
let mut client = handler.get_peer_manager_client().await?;
let client = handler.get_peer_manager_client().await?;
let node_info = client
.show_node_info(ShowNodeInfoRequest::default())
.show_node_info(BaseController::default(), ShowNodeInfoRequest::default())
.await?
.into_inner()
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
match sub_cmd.sub_command {
+49 -49
View File
@@ -21,13 +21,14 @@ mod gateway;
mod instance;
mod peer_center;
mod peers;
mod rpc;
mod proto;
mod tunnel;
mod utils;
mod vpn_portal;
use common::config::{
ConsoleLoggerConfig, FileLoggerConfig, NetworkIdentity, PeerConfig, VpnPortalConfig,
use common::{
config::{ConsoleLoggerConfig, FileLoggerConfig, NetworkIdentity, PeerConfig, VpnPortalConfig},
constants::EASYTIER_VERSION,
};
use instance::instance::Instance;
use tokio::net::TcpSocket;
@@ -49,7 +50,7 @@ use mimalloc_rust::*;
static GLOBAL_MIMALLOC: GlobalMiMalloc = GlobalMiMalloc;
#[derive(Parser, Debug)]
#[command(name = "easytier-core", author, version, about, long_about = None)]
#[command(name = "easytier-core", author, version = EASYTIER_VERSION , about, long_about = None)]
struct Cli {
#[arg(
short,
@@ -266,6 +267,13 @@ struct Cli {
)]
disable_p2p: bool,
#[arg(
long,
help = t!("core_clap.disable_udp_hole_punching").to_string(),
default_value = "false"
)]
disable_udp_hole_punching: bool,
#[arg(
long,
help = t!("core_clap.relay_all_peer_rpc").to_string(),
@@ -279,20 +287,25 @@ struct Cli {
help = t!("core_clap.socks5").to_string()
)]
socks5: Option<u16>,
#[arg(
long,
help = t!("core_clap.ipv6_listener").to_string()
)]
ipv6_listener: Option<String>,
}
rust_i18n::i18n!("locales", fallback = "en");
impl Cli {
fn parse_listeners(&self) -> Vec<String> {
println!("parsing listeners: {:?}", self.listeners);
fn parse_listeners(no_listener: bool, listeners: Vec<String>) -> Vec<String> {
let proto_port_offset = vec![("tcp", 0), ("udp", 0), ("wg", 1), ("ws", 1), ("wss", 2)];
if self.no_listener || self.listeners.is_empty() {
if no_listener || listeners.is_empty() {
return vec![];
}
let origin_listners = self.listeners.clone();
let origin_listners = listeners;
let mut listeners: Vec<String> = Vec::new();
if origin_listners.len() == 1 {
if let Ok(port) = origin_listners[0].parse::<u16>() {
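`parse_listeners` becomes an associated function taking its inputs explicitly, and the hunk shows its `proto_port_offset` table: a single bare port expands into one listener URL per protocol, with wg/ws/wss shifted off the base port. A sketch of that expansion under the assumption that the bind host is the wildcard address:

```rust
/// Expand a bare port into per-protocol listener URLs using the same
/// offsets as the diff's `proto_port_offset` table.
fn expand_port(port: u16) -> Vec<String> {
    let proto_port_offset = [("tcp", 0u16), ("udp", 0), ("wg", 1), ("ws", 1), ("wss", 2)];
    proto_port_offset
        .iter()
        .map(|(proto, off)| format!("{}://0.0.0.0:{}", proto, port + off))
        .collect()
}
```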
@@ -333,12 +346,12 @@ impl Cli {
}
fn check_tcp_available(port: u16) -> Option<SocketAddr> {
let s = format!("127.0.0.1:{}", port).parse::<SocketAddr>().unwrap();
let s = format!("0.0.0.0:{}", port).parse::<SocketAddr>().unwrap();
TcpSocket::new_v4().unwrap().bind(s).map(|_| s).ok()
}
fn parse_rpc_portal(&self) -> SocketAddr {
if let Ok(port) = self.rpc_portal.parse::<u16>() {
fn parse_rpc_portal(rpc_portal: String) -> SocketAddr {
if let Ok(port) = rpc_portal.parse::<u16>() {
if port == 0 {
// check tcp 15888 first
for i in 15888..15900 {
@@ -346,12 +359,12 @@ impl Cli {
return s;
}
}
return "127.0.0.1:0".parse().unwrap();
return "0.0.0.0:0".parse().unwrap();
}
return format!("127.0.0.1:{}", port).parse().unwrap();
return format!("0.0.0.0:{}", port).parse().unwrap();
}
self.rpc_portal.parse().unwrap()
rpc_portal.parse().unwrap()
}
}
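The hunks above switch `check_tcp_available` and `parse_rpc_portal` from binding `127.0.0.1` to `0.0.0.0`, so the RPC portal is reachable beyond loopback, and scan `15888..15900` for a free port. A std-only sketch of the probe-and-fallback logic (std's `TcpListener` stands in for tokio's `TcpSocket`):

```rust
use std::net::{SocketAddr, TcpListener};

/// Probe whether a TCP port can be bound on the wildcard address.
fn check_tcp_available(port: u16) -> Option<SocketAddr> {
    let s: SocketAddr = format!("0.0.0.0:{}", port).parse().unwrap();
    // The bound listener is dropped immediately; we only want the answer.
    TcpListener::bind(s).map(|_| s).ok()
}

/// Scan the 15888..15900 range, falling back to an ephemeral port.
fn pick_port() -> SocketAddr {
    (15888..15900)
        .find_map(check_tcp_available)
        .unwrap_or_else(|| "0.0.0.0:0".parse().unwrap())
}
```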
@@ -369,14 +382,9 @@ impl From<Cli> for TomlConfigLoader {
let cfg = TomlConfigLoader::default();
cfg.set_inst_name(cli.instance_name.clone());
cfg.set_hostname(cli.hostname);
cfg.set_hostname(cli.hostname.clone());
cfg.set_network_identity(NetworkIdentity::new(
cli.network_name.clone(),
cli.network_secret.clone(),
));
cfg.set_network_identity(NetworkIdentity::new(cli.network_name, cli.network_secret));
cfg.set_dhcp(cli.dhcp);
@@ -401,7 +409,7 @@ impl From<Cli> for TomlConfigLoader {
);
cfg.set_listeners(
cli.parse_listeners()
Cli::parse_listeners(cli.no_listener, cli.listeners)
.into_iter()
.map(|s| s.parse().unwrap())
.collect(),
@@ -415,21 +423,15 @@ impl From<Cli> for TomlConfigLoader {
);
}
cfg.set_rpc_portal(cli.parse_rpc_portal());
cfg.set_rpc_portal(Cli::parse_rpc_portal(cli.rpc_portal));
if cli.external_node.is_some() {
if let Some(external_nodes) = cli.external_node {
let mut old_peers = cfg.get_peers();
old_peers.push(PeerConfig {
uri: cli
.external_node
.clone()
.unwrap()
uri: external_nodes
.parse()
.with_context(|| {
format!(
"failed to parse external node uri: {}",
cli.external_node.unwrap()
)
format!("failed to parse external node uri: {}", external_nodes)
})
.unwrap(),
});
@@ -438,7 +440,7 @@ impl From<Cli> for TomlConfigLoader {
if cli.console_log_level.is_some() {
cfg.set_console_logger_config(ConsoleLoggerConfig {
level: cli.console_log_level.clone(),
level: cli.console_log_level,
});
}
@@ -450,18 +452,12 @@ impl From<Cli> for TomlConfigLoader {
});
}
if cli.vpn_portal.is_some() {
let url: url::Url = cli
.vpn_portal
.clone()
.unwrap()
cfg.set_inst_name(cli.instance_name);
if let Some(vpn_portal) = cli.vpn_portal {
let url: url::Url = vpn_portal
.parse()
.with_context(|| {
format!(
"failed to parse vpn portal url: {}",
cli.vpn_portal.unwrap()
)
})
.with_context(|| format!("failed to parse vpn portal url: {}", vpn_portal))
.unwrap();
cfg.set_vpn_portal_config(VpnPortalConfig {
client_cidr: url.path()[1..]
@@ -482,11 +478,9 @@ impl From<Cli> for TomlConfigLoader {
});
}
if cli.manual_routes.is_some() {
if let Some(manual_routes) = cli.manual_routes {
cfg.set_routes(Some(
cli.manual_routes
.clone()
.unwrap()
manual_routes
.iter()
.map(|s| {
s.parse()
@@ -525,6 +519,12 @@ impl From<Cli> for TomlConfigLoader {
}
f.disable_p2p = cli.disable_p2p;
f.relay_all_peer_rpc = cli.relay_all_peer_rpc;
if let Some(ipv6_listener) = cli.ipv6_listener {
f.ipv6_listener = ipv6_listener
.parse()
.with_context(|| format!("failed to parse ipv6 listener: {}", ipv6_listener))
.unwrap();
}
cfg.set_flags(f);
cfg.set_exit_nodes(cli.exit_nodes.clone());
@@ -541,7 +541,7 @@ fn print_event(msg: String) {
);
}
fn peer_conn_info_to_string(p: crate::rpc::PeerConnInfo) -> String {
fn peer_conn_info_to_string(p: crate::proto::cli::PeerConnInfo) -> String {
format!(
"my_peer_id: {}, dst_peer_id: {}, tunnel_info: {:?}",
p.my_peer_id, p.peer_id, p.tunnel
@@ -187,10 +187,6 @@ pub enum SocksError {
#[error("Error with reply: {0}.")]
ReplyError(#[from] ReplyError),
#[cfg(feature = "socks4")]
#[error("Error with reply: {0}.")]
ReplySocks4Error(#[from] socks4::ReplyError),
#[error("Argument input error: `{0}`.")]
ArgumentInputError(&'static str),
@@ -358,7 +358,12 @@ impl IcmpProxy {
if !self.cidr_set.contains_v4(ipv4.get_destination())
&& !is_exit_node
&& !(self.global_ctx.no_tun()
&& Some(ipv4.get_destination()) == self.global_ctx.get_ipv4())
&& Some(ipv4.get_destination())
== self
.global_ctx
.get_ipv4()
.as_ref()
.map(cidr::Ipv4Inet::address))
{
return None;
}
@@ -382,7 +387,14 @@ impl IcmpProxy {
return None;
}
if self.global_ctx.no_tun() && Some(ipv4.get_destination()) == self.global_ctx.get_ipv4() {
if self.global_ctx.no_tun()
&& Some(ipv4.get_destination())
== self
.global_ctx
.get_ipv4()
.as_ref()
.map(cidr::Ipv4Inet::address)
{
self.send_icmp_reply_to_peer(
&ipv4.get_destination(),
&ipv4.get_source(),
@@ -111,7 +111,7 @@ struct Socks5Entry {
type Socks5EntrySet = Arc<DashSet<Socks5Entry>>;
struct Socks5ServerNet {
ipv4_addr: Ipv4Addr,
ipv4_addr: cidr::Ipv4Inet,
auth: Option<SimpleUserPassword>,
smoltcp_net: Arc<Net>,
@@ -122,7 +122,7 @@ struct Socks5ServerNet {
impl Socks5ServerNet {
pub fn new(
ipv4_addr: Ipv4Addr,
ipv4_addr: cidr::Ipv4Inet,
auth: Option<SimpleUserPassword>,
peer_manager: Arc<PeerManager>,
packet_recv: Arc<Mutex<mpsc::Receiver<ZCPacket>>>,
@@ -173,8 +173,10 @@ impl Socks5ServerNet {
dev,
NetConfig::new(
interface_config,
format!("{}/24", ipv4_addr).parse().unwrap(),
vec![format!("{}", ipv4_addr).parse().unwrap()],
format!("{}/{}", ipv4_addr.address(), ipv4_addr.network_length())
.parse()
.unwrap(),
vec![format!("{}", ipv4_addr.address()).parse().unwrap()],
),
);
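The SOCKS5 net config above stops hardcoding `/24` and instead formats the interface address from the CIDR's real prefix length. A tiny sketch with a plain `(addr, prefix)` pair standing in for `cidr::Ipv4Inet`:

```rust
use std::net::Ipv4Addr;

/// Format "address/prefix" for interface configuration, honoring the
/// network's actual prefix length instead of assuming /24.
fn iface_cidr(addr: Ipv4Addr, prefix: u8) -> String {
    format!("{}/{}", addr, prefix)
}
```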
@@ -1,3 +1,4 @@
use cidr::Ipv4Inet;
use core::panic;
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
@@ -526,7 +527,8 @@ impl TcpProxy {
tracing::warn!("set_nodelay failed, ignore it: {:?}", e);
}
let nat_dst = if Some(nat_entry.dst.ip()) == global_ctx.get_ipv4().map(|ip| IpAddr::V4(ip))
let nat_dst = if Some(nat_entry.dst.ip())
== global_ctx.get_ipv4().map(|ip| IpAddr::V4(ip.address()))
{
format!("127.0.0.1:{}", nat_entry.dst.port())
.parse()
@@ -591,7 +593,10 @@ impl TcpProxy {
{
Some(Ipv4Addr::new(192, 88, 99, 254))
} else {
self.global_ctx.get_ipv4()
self.global_ctx
.get_ipv4()
.as_ref()
.map(cidr::Ipv4Inet::address)
}
}
@@ -621,7 +626,8 @@ impl TcpProxy {
if !self.cidr_set.contains_v4(ipv4.get_destination())
&& !is_exit_node
&& !(self.global_ctx.no_tun()
&& Some(ipv4.get_destination()) == self.global_ctx.get_ipv4())
&& Some(ipv4.get_destination())
== self.global_ctx.get_ipv4().as_ref().map(Ipv4Inet::address))
{
return None;
}
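With `get_ipv4` now returning a CIDR-typed value, every no-TUN destination check extracts the host address first (`.as_ref().map(Ipv4Inet::address)`) before comparing against the packet. A sketch of that comparison, with `(addr, prefix)` standing in for `cidr::Ipv4Inet`:

```rust
use std::net::Ipv4Addr;

/// Does the packet's destination match our own virtual address?
/// Only the host part of the CIDR participates in the comparison.
fn is_local_dst(dst: Ipv4Addr, my_ip: Option<(Ipv4Addr, u8)>) -> bool {
    Some(dst) == my_ip.map(|(addr, _prefix)| addr)
}
```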
@@ -4,6 +4,8 @@ use std::{
time::Duration,
};
use cidr::Ipv4Inet;
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use pnet::packet::{
ip::IpNextHeaderProtocols,
@@ -11,12 +13,10 @@ use pnet::packet::{
udp::{self, MutableUdpPacket},
Packet,
};
use tachyonix::{channel, Receiver, Sender, TrySendError};
use tokio::{
net::UdpSocket,
sync::{
mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},
Mutex,
},
sync::Mutex,
task::{JoinHandle, JoinSet},
time::timeout,
};
@@ -49,6 +49,7 @@ struct UdpNatEntry {
forward_task: Mutex<Option<JoinHandle<()>>>,
stopped: AtomicBool,
start_time: std::time::Instant,
last_active_time: AtomicCell<std::time::Instant>,
}
impl UdpNatEntry {
@@ -72,6 +73,7 @@ impl UdpNatEntry {
forward_task: Mutex::new(None),
stopped: AtomicBool::new(false),
start_time: std::time::Instant::now(),
last_active_time: AtomicCell::new(std::time::Instant::now()),
})
}
@@ -82,7 +84,7 @@ impl UdpNatEntry {
async fn compose_ipv4_packet(
self: &Arc<Self>,
packet_sender: &mut UnboundedSender<ZCPacket>,
packet_sender: &mut Sender<ZCPacket>,
buf: &mut [u8],
src_v4: &SocketAddrV4,
payload_len: usize,
@@ -119,11 +121,13 @@ impl UdpNatEntry {
p.fill_peer_manager_hdr(self.my_peer_id, self.src_peer_id, PacketType::Data as u8);
p.mut_peer_manager_header().unwrap().set_no_proxy(true);
if let Err(e) = packet_sender.send(p) {
tracing::error!("send icmp packet to peer failed: {:?}, may exiting..", e);
return Err(Error::AnyhowError(e.into()));
match packet_sender.try_send(p) {
Err(TrySendError::Closed(e)) => {
tracing::error!("send icmp packet to peer failed: {:?}, may exiting..", e);
Err(Error::Unknown)
}
_ => Ok(()),
}
Ok(())
},
)?;
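The hunk above swaps an unbounded `send` for a bounded `try_send`: a full queue (a slow TCP stream downstream) now just drops the packet instead of stalling or panicking, and only a closed channel is treated as fatal. A sketch with std's `sync_channel` standing in for the tachyonix channel:

```rust
use std::sync::mpsc::{SyncSender, TrySendError};

/// Forward a packet with backpressure tolerance: Full is silently
/// absorbed, only a disconnected receiver is an error.
fn forward(tx: &SyncSender<u32>, pkt: u32) -> Result<(), &'static str> {
    match tx.try_send(pkt) {
        // Receiver gone: the proxy is shutting down, propagate the error.
        Err(TrySendError::Disconnected(_)) => Err("peer channel closed"),
        // Ok, or Full under backpressure: keep the forward task alive.
        _ => Ok(()),
    }
}
```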
@@ -132,7 +136,7 @@ impl UdpNatEntry {
async fn forward_task(
self: Arc<Self>,
mut packet_sender: UnboundedSender<ZCPacket>,
mut packet_sender: Sender<ZCPacket>,
virtual_ipv4: Ipv4Addr,
) {
let mut buf = [0u8; 65536];
@@ -141,7 +145,7 @@ impl UdpNatEntry {
loop {
let (len, src_socket) = match timeout(
Duration::from_secs(30),
Duration::from_secs(120),
self.socket.recv_from(&mut udp_body),
)
.await
@@ -167,6 +171,8 @@ impl UdpNatEntry {
continue;
};
self.mark_active();
if src_v4.ip().is_loopback() {
src_v4.set_ip(virtual_ipv4);
}
@@ -189,6 +195,14 @@ impl UdpNatEntry {
self.stop();
}
fn mark_active(&self) {
self.last_active_time.store(std::time::Instant::now());
}
fn is_active(&self) -> bool {
self.last_active_time.load().elapsed().as_secs() < 180
}
}
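`mark_active`/`is_active` above change NAT expiry from a fixed 120 s lifetime since creation to activity-based eviction after 180 s idle. A std-only sketch of the same mechanism, with `Mutex<Instant>` standing in for crossbeam's `AtomicCell`:

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// A NAT-table entry that records its last activity timestamp.
struct NatEntry {
    last_active: Mutex<Instant>,
}

impl NatEntry {
    fn new() -> Self {
        Self { last_active: Mutex::new(Instant::now()) }
    }
    /// Called on every forwarded packet.
    fn mark_active(&self) {
        *self.last_active.lock().unwrap() = Instant::now();
    }
    /// An entry survives cleanup while it has been active recently.
    fn is_active(&self, idle_limit: Duration) -> bool {
        self.last_active.lock().unwrap().elapsed() < idle_limit
    }
}
```

A periodic cleanup task can then simply `retain` entries for which `is_active` holds, as the later hunk does.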
#[derive(Debug)]
@@ -200,8 +214,8 @@ pub struct UdpProxy {
nat_table: Arc<DashMap<UdpNatKey, Arc<UdpNatEntry>>>,
sender: UnboundedSender<ZCPacket>,
receiver: Mutex<Option<UnboundedReceiver<ZCPacket>>>,
sender: Sender<ZCPacket>,
receiver: Mutex<Option<Receiver<ZCPacket>>>,
tasks: Mutex<JoinSet<()>>,
@@ -232,7 +246,8 @@ impl UdpProxy {
if !self.cidr_set.contains_v4(ipv4.get_destination())
&& !is_exit_node
&& !(self.global_ctx.no_tun()
&& Some(ipv4.get_destination()) == self.global_ctx.get_ipv4())
&& Some(ipv4.get_destination())
== self.global_ctx.get_ipv4().as_ref().map(Ipv4Inet::address))
{
return None;
}
@@ -283,12 +298,16 @@ impl UdpProxy {
.replace(tokio::spawn(UdpNatEntry::forward_task(
nat_entry.clone(),
self.sender.clone(),
self.global_ctx.get_ipv4()?,
self.global_ctx.get_ipv4().map(|x| x.address())?,
)));
}
nat_entry.mark_active();
// TODO: should it be async.
let dst_socket = if Some(ipv4.get_destination()) == self.global_ctx.get_ipv4() {
let dst_socket = if Some(ipv4.get_destination())
== self.global_ctx.get_ipv4().as_ref().map(Ipv4Inet::address)
{
format!("127.0.0.1:{}", udp_packet.get_destination())
.parse()
.unwrap()
@@ -335,7 +354,7 @@ impl UdpProxy {
peer_manager: Arc<PeerManager>,
) -> Result<Arc<Self>, Error> {
let cidr_set = CidrSet::new(global_ctx.clone());
let (sender, receiver) = unbounded_channel();
let (sender, receiver) = channel(1024);
let ret = Self {
global_ctx,
peer_manager,
@@ -360,7 +379,7 @@ impl UdpProxy {
loop {
tokio::time::sleep(Duration::from_secs(15)).await;
nat_table.retain(|_, v| {
if v.start_time.elapsed().as_secs() > 120 {
if !v.is_active() {
tracing::info!(?v, "udp nat table entry removed");
v.stop();
false
@@ -383,7 +402,7 @@ impl UdpProxy {
let mut receiver = self.receiver.lock().await.take().unwrap();
let peer_manager = self.peer_manager.clone();
self.tasks.lock().await.spawn(async move {
while let Some(msg) = receiver.recv().await {
while let Ok(msg) = receiver.recv().await {
let to_peer_id: PeerId = msg.peer_manager_header().unwrap().to_peer_id.get();
tracing::trace!(?msg, ?to_peer_id, "udp nat packet response send");
let ret = peer_manager.send_msg(msg, to_peer_id).await;
@@ -8,8 +8,6 @@ use anyhow::Context;
use cidr::Ipv4Inet;
use tokio::{sync::Mutex, task::JoinSet};
use tonic::transport::server::TcpIncoming;
use tonic::transport::Server;
use crate::common::config::ConfigLoader;
use crate::common::error::Error;
@@ -26,8 +24,13 @@ use crate::peers::peer_conn::PeerConnId;
use crate::peers::peer_manager::{PeerManager, RouteAlgoType};
use crate::peers::rpc_service::PeerManagerRpcService;
use crate::peers::PacketRecvChanReceiver;
use crate::rpc::vpn_portal_rpc_server::VpnPortalRpc;
use crate::rpc::{GetVpnPortalInfoRequest, GetVpnPortalInfoResponse, VpnPortalInfo};
use crate::proto::cli::VpnPortalRpc;
use crate::proto::cli::{GetVpnPortalInfoRequest, GetVpnPortalInfoResponse, VpnPortalInfo};
use crate::proto::peer_rpc::PeerCenterRpcServer;
use crate::proto::rpc_impl::standalone::StandAloneServer;
use crate::proto::rpc_types;
use crate::proto::rpc_types::controller::BaseController;
use crate::tunnel::tcp::TcpTunnelListener;
use crate::vpn_portal::{self, VpnPortal};
use super::listeners::ListenerManager;
@@ -104,8 +107,6 @@ pub struct Instance {
nic_ctx: ArcNicCtx,
tasks: JoinSet<()>,
peer_packet_receiver: Arc<Mutex<PacketRecvChanReceiver>>,
peer_manager: Arc<PeerManager>,
listener_manager: Arc<Mutex<ListenerManager<PeerManager>>>,
@@ -122,6 +123,8 @@ pub struct Instance {
#[cfg(feature = "socks5")]
socks5_server: Arc<Socks5Server>,
rpc_server: Option<StandAloneServer<TcpTunnelListener>>,
global_ctx: ArcGlobalCtx,
}
@@ -158,7 +161,7 @@ impl Instance {
DirectConnectorManager::new(global_ctx.clone(), peer_manager.clone());
direct_conn_manager.run();
let udp_hole_puncher = UdpHolePunchConnector::new(global_ctx.clone(), peer_manager.clone());
let udp_hole_puncher = UdpHolePunchConnector::new(peer_manager.clone());
let peer_center = Arc::new(PeerCenterInstance::new(peer_manager.clone()));
@@ -170,6 +173,12 @@ impl Instance {
#[cfg(feature = "socks5")]
let socks5_server = Socks5Server::new(global_ctx.clone(), peer_manager.clone(), None);
let rpc_server = global_ctx.config.get_rpc_portal().and_then(|s| {
Some(StandAloneServer::new(TcpTunnelListener::new(
format!("tcp://{}", s).parse().unwrap(),
)))
});
Instance {
inst_name: global_ctx.inst_name.clone(),
id,
@@ -177,7 +186,6 @@ impl Instance {
peer_packet_receiver: Arc::new(Mutex::new(peer_packet_receiver)),
nic_ctx: Arc::new(Mutex::new(None)),
tasks: JoinSet::new(),
peer_manager,
listener_manager,
conn_manager,
@@ -193,6 +201,8 @@ impl Instance {
#[cfg(feature = "socks5")]
socks5_server,
rpc_server,
global_ctx,
}
}
@@ -260,19 +270,11 @@ impl Instance {
let mut used_ipv4 = HashSet::new();
for route in routes {
if route.ipv4_addr.is_empty() {
continue;
}
let Ok(peer_ipv4_addr) = route.ipv4_addr.parse::<Ipv4Addr>() else {
let Some(peer_ipv4_addr) = route.ipv4_addr else {
continue;
};
let Ok(peer_ipv4_addr) = Ipv4Inet::new(peer_ipv4_addr, 24) else {
continue;
};
used_ipv4.insert(peer_ipv4_addr);
used_ipv4.insert(peer_ipv4_addr.into());
}
let dhcp_inet = used_ipv4.iter().next().unwrap_or(&default_ipv4_addr);
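The DHCP hunk above collects every peer's address into a `used_ipv4` set before choosing a candidate. A sketch of the allocation step under the assumption of a 10.126.126.0/24 pool (the concrete subnet here is illustrative, not from the diff):

```rust
use std::collections::HashSet;
use std::net::Ipv4Addr;

/// Return the first host address in the /24 pool that no peer holds.
fn pick_free_ip(used: &HashSet<Ipv4Addr>) -> Option<Ipv4Addr> {
    (1u8..=254)
        .map(|h| Ipv4Addr::new(10, 126, 126, h))
        .find(|ip| !used.contains(ip))
}
```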
@@ -294,7 +296,7 @@ impl Instance {
continue;
}
let last_ip = current_dhcp_ip.as_ref().map(Ipv4Inet::address);
let last_ip = current_dhcp_ip.clone();
tracing::debug!(
?current_dhcp_ip,
?candidate_ipv4_addr,
@@ -306,11 +308,9 @@ impl Instance {
if let Some(ip) = candidate_ipv4_addr {
if global_ctx_c.no_tun() {
current_dhcp_ip = Some(ip);
global_ctx_c.set_ipv4(Some(ip.address()));
global_ctx_c.issue_event(GlobalCtxEvent::DhcpIpv4Changed(
last_ip,
Some(ip.address()),
));
global_ctx_c.set_ipv4(Some(ip));
global_ctx_c
.issue_event(GlobalCtxEvent::DhcpIpv4Changed(last_ip, Some(ip)));
continue;
}
@@ -321,7 +321,7 @@ impl Instance {
&peer_manager_c,
_peer_packet_receiver.clone(),
);
if let Err(e) = new_nic_ctx.run(ip.address()).await {
if let Err(e) = new_nic_ctx.run(ip).await {
tracing::error!(
?current_dhcp_ip,
?candidate_ipv4_addr,
@@ -335,9 +335,8 @@ impl Instance {
}
current_dhcp_ip = Some(ip);
global_ctx_c.set_ipv4(Some(ip.address()));
global_ctx_c
.issue_event(GlobalCtxEvent::DhcpIpv4Changed(last_ip, Some(ip.address())));
global_ctx_c.set_ipv4(Some(ip));
global_ctx_c.issue_event(GlobalCtxEvent::DhcpIpv4Changed(last_ip, Some(ip)));
} else {
current_dhcp_ip = None;
global_ctx_c.set_ipv4(None);
@@ -375,7 +374,7 @@ impl Instance {
self.check_dhcp_ip_conflict();
}
self.run_rpc_server()?;
self.run_rpc_server().await?;
// run after tun device created, so listener can bind to tun device, which may be required by win 10
self.ip_proxy = Some(IpProxy::new(
@@ -441,11 +440,8 @@ impl Instance {
Ok(())
}
pub async fn wait(&mut self) {
while let Some(ret) = self.tasks.join_next().await {
tracing::info!("task finished: {:?}", ret);
ret.unwrap();
}
pub async fn wait(&self) {
self.peer_manager.wait().await;
}
pub fn id(&self) -> uuid::Uuid {
@@ -456,24 +452,28 @@ impl Instance {
self.peer_manager.my_peer_id()
}
fn get_vpn_portal_rpc_service(&self) -> impl VpnPortalRpc {
fn get_vpn_portal_rpc_service(&self) -> impl VpnPortalRpc<Controller = BaseController> + Clone {
#[derive(Clone)]
struct VpnPortalRpcService {
peer_mgr: Weak<PeerManager>,
vpn_portal: Weak<Mutex<Box<dyn VpnPortal>>>,
}
#[tonic::async_trait]
#[async_trait::async_trait]
impl VpnPortalRpc for VpnPortalRpcService {
type Controller = BaseController;
async fn get_vpn_portal_info(
&self,
_request: tonic::Request<GetVpnPortalInfoRequest>,
) -> Result<tonic::Response<GetVpnPortalInfoResponse>, tonic::Status> {
_: BaseController,
_request: GetVpnPortalInfoRequest,
) -> Result<GetVpnPortalInfoResponse, rpc_types::error::Error> {
let Some(vpn_portal) = self.vpn_portal.upgrade() else {
return Err(tonic::Status::unavailable("vpn portal not available"));
return Err(anyhow::anyhow!("vpn portal not available").into());
};
let Some(peer_mgr) = self.peer_mgr.upgrade() else {
return Err(tonic::Status::unavailable("peer manager not available"));
return Err(anyhow::anyhow!("peer manager not available").into());
};
let vpn_portal = vpn_portal.lock().await;
@@ -485,7 +485,7 @@ impl Instance {
}),
};
Ok(tonic::Response::new(ret))
Ok(ret)
}
}
@@ -495,46 +495,36 @@ impl Instance {
}
}
fn run_rpc_server(&mut self) -> Result<(), Error> {
let Some(addr) = self.global_ctx.config.get_rpc_portal() else {
async fn run_rpc_server(&mut self) -> Result<(), Error> {
let Some(_) = self.global_ctx.config.get_rpc_portal() else {
tracing::info!("rpc server not enabled, because rpc_portal is not set.");
return Ok(());
};
use crate::proto::cli::*;
let peer_mgr = self.peer_manager.clone();
let conn_manager = self.conn_manager.clone();
let net_ns = self.global_ctx.net_ns.clone();
let peer_center = self.peer_center.clone();
let vpn_portal_rpc = self.get_vpn_portal_rpc_service();
let incoming = TcpIncoming::new(addr, true, None)
.map_err(|e| anyhow::anyhow!("create rpc server failed. addr: {}, err: {}", addr, e))?;
self.tasks.spawn(async move {
let _g = net_ns.guard();
Server::builder()
.add_service(
crate::rpc::peer_manage_rpc_server::PeerManageRpcServer::new(
PeerManagerRpcService::new(peer_mgr),
),
)
.add_service(
crate::rpc::connector_manage_rpc_server::ConnectorManageRpcServer::new(
ConnectorManagerRpcService(conn_manager.clone()),
),
)
.add_service(
crate::rpc::peer_center_rpc_server::PeerCenterRpcServer::new(
peer_center.get_rpc_service(),
),
)
.add_service(crate::rpc::vpn_portal_rpc_server::VpnPortalRpcServer::new(
vpn_portal_rpc,
))
.serve_with_incoming(incoming)
.await
.with_context(|| format!("rpc server failed. addr: {}", addr))
.unwrap();
});
Ok(())
let s = self.rpc_server.as_mut().unwrap();
s.registry().register(
PeerManageRpcServer::new(PeerManagerRpcService::new(peer_mgr)),
"",
);
s.registry().register(
ConnectorManageRpcServer::new(ConnectorManagerRpcService(conn_manager)),
"",
);
s.registry()
.register(PeerCenterRpcServer::new(peer_center.get_rpc_service()), "");
s.registry()
.register(VpnPortalRpcServer::new(vpn_portal_rpc), "");
let _g = self.global_ctx.net_ns.guard();
Ok(s.serve().await.with_context(|| "rpc server start failed")?)
}
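`run_rpc_server` above replaces tonic's `Server::builder().add_service(...)` chain with a registry on the new `StandAloneServer`: each service registers under a name plus a scope string (here the network name or `""`). The types below are purely illustrative stand-ins, not the real `StandAloneServer` API; they only sketch the register-then-serve shape:

```rust
use std::collections::HashMap;

/// Hypothetical registry keyed by "service/scope", mirroring how the
/// diff registers PeerManage, ConnectorManage, PeerCenter and VpnPortal.
struct Registry {
    services: HashMap<String, &'static str>,
}

impl Registry {
    fn new() -> Self {
        Self { services: HashMap::new() }
    }
    fn register(&mut self, name: &str, scope: &str, desc: &'static str) {
        self.services.insert(format!("{}/{}", name, scope), desc);
    }
    fn len(&self) -> usize {
        self.services.len()
    }
}
```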
pub fn get_global_ctx(&self) -> ArcGlobalCtx {
@@ -111,9 +111,10 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
}
if self.global_ctx.config.get_flags().enable_ipv6 {
let ipv6_listener = self.global_ctx.config.get_flags().ipv6_listener.clone();
let _ = self
.add_listener(
UdpTunnelListener::new("udp://[::]:0".parse().unwrap()),
UdpTunnelListener::new(ipv6_listener.parse().unwrap()),
false,
)
.await?;
@@ -159,8 +160,16 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
let tunnel_info = ret.info().unwrap();
global_ctx.issue_event(GlobalCtxEvent::ConnectionAccepted(
tunnel_info.local_addr.clone(),
tunnel_info.remote_addr.clone(),
tunnel_info
.local_addr
.clone()
.unwrap_or_default()
.to_string(),
tunnel_info
.remote_addr
.clone()
.unwrap_or_default()
.to_string(),
));
tracing::info!(ret = ?ret, "conn accepted");
let peer_manager = peer_manager.clone();
@@ -169,8 +178,8 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
let server_ret = peer_manager.handle_tunnel(ret).await;
if let Err(e) = &server_ret {
global_ctx.issue_event(GlobalCtxEvent::ConnectionError(
tunnel_info.local_addr,
tunnel_info.remote_addr,
tunnel_info.local_addr.unwrap_or_default().to_string(),
tunnel_info.remote_addr.unwrap_or_default().to_string(),
e.to_string(),
));
tracing::error!(error = ?e, "handle conn error");
@@ -242,6 +242,7 @@ pub struct VirtualNic {
ifname: Option<String>,
ifcfg: Box<dyn IfConfiguerTrait + Send + Sync + 'static>,
}
#[cfg(target_os = "windows")]
pub fn checkreg(dev_name: &str) -> io::Result<()> {
use winreg::{enums::HKEY_LOCAL_MACHINE, enums::KEY_ALL_ACCESS, RegKey};
@@ -352,20 +353,26 @@ impl VirtualNic {
Ok(_) => tracing::trace!("delete successful!"),
Err(e) => tracing::error!("An error occurred: {}", e),
}
use rand::distributions::Distribution as _;
let c = crate::arch::windows::interface_count()?;
let mut rng = rand::thread_rng();
let s: String = rand::distributions::Alphanumeric
.sample_iter(&mut rng)
.take(4)
.map(char::from)
.collect::<String>()
.to_lowercase();
if !dev_name.is_empty() {
config.tun_name(format!("{}", dev_name));
} else {
config.tun_name(format!("et_{}_{}", c, s));
use rand::distributions::Distribution as _;
let c = crate::arch::windows::interface_count()?;
let mut rng = rand::thread_rng();
let s: String = rand::distributions::Alphanumeric
.sample_iter(&mut rng)
.take(4)
.map(char::from)
.collect::<String>()
.to_lowercase();
let random_dev_name = format!("et_{}_{}", c, s);
config.tun_name(random_dev_name.clone());
let mut flags = self.global_ctx.get_flags();
flags.dev_name = random_dev_name.clone();
self.global_ctx.set_flags(flags);
}
config.platform_config(|config| {
@@ -484,6 +491,38 @@ impl VirtualNic {
}
}
#[cfg(target_os = "windows")]
pub fn reg_change_catrgory_in_profile(dev_name: &str) -> io::Result<()> {
use winreg::{enums::HKEY_LOCAL_MACHINE, enums::KEY_ALL_ACCESS, RegKey};
let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
let profiles_key = hklm.open_subkey_with_flags(
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Profiles",
KEY_ALL_ACCESS,
)?;
for subkey_name in profiles_key.enum_keys().filter_map(Result::ok) {
let subkey = profiles_key.open_subkey_with_flags(&subkey_name, KEY_ALL_ACCESS)?;
match subkey.get_value::<String, _>("ProfileName") {
Ok(profile_name) => {
if !dev_name.is_empty() && dev_name == profile_name {
match subkey.set_value("Category", &1u32) {
Ok(_) => tracing::trace!("Successfully set Category in registry"),
Err(e) => tracing::error!("Failed to set Category in registry: {}", e),
}
}
}
Err(e) => {
tracing::error!(
"Failed to read ProfileName for subkey {}: {}",
subkey_name,
e
);
}
}
}
Ok(())
}
pub struct NicCtx {
global_ctx: ArcGlobalCtx,
peer_mgr: Weak<PeerManager>,
@@ -508,14 +547,16 @@ impl NicCtx {
}
}
async fn assign_ipv4_to_tun_device(&self, ipv4_addr: Ipv4Addr) -> Result<(), Error> {
async fn assign_ipv4_to_tun_device(&self, ipv4_addr: cidr::Ipv4Inet) -> Result<(), Error> {
let nic = self.nic.lock().await;
nic.link_up().await?;
nic.remove_ip(None).await?;
nic.add_ip(ipv4_addr, 24).await?;
nic.add_ip(ipv4_addr.address(), ipv4_addr.network_length() as i32)
.await?;
#[cfg(any(target_os = "macos", target_os = "freebsd"))]
{
nic.add_route(ipv4_addr, 24).await?;
nic.add_route(ipv4_addr.first_address(), ipv4_addr.network_length())
.await?;
}
Ok(())
}
@@ -558,6 +599,7 @@ impl NicCtx {
}
Self::do_forward_nic_to_peers_ipv4(ret.unwrap(), mgr.as_ref()).await;
}
panic!("nic stream closed");
});
Ok(())
@@ -578,6 +620,7 @@ impl NicCtx {
tracing::error!(?ret, "do_forward_tunnel_to_nic sink error");
}
}
panic!("peer packet receiver closed");
});
}
@@ -668,11 +711,17 @@ impl NicCtx {
Ok(())
}
pub async fn run(&mut self, ipv4_addr: Ipv4Addr) -> Result<(), Error> {
pub async fn run(&mut self, ipv4_addr: cidr::Ipv4Inet) -> Result<(), Error> {
let tunnel = {
let mut nic = self.nic.lock().await;
match nic.create_dev().await {
Ok(ret) => {
#[cfg(target_os = "windows")]
{
let dev_name = self.global_ctx.get_flags().dev_name;
let _ = reg_change_catrgory_in_profile(&dev_name);
}
self.global_ctx
.issue_event(GlobalCtxEvent::TunDeviceReady(nic.ifname().to_string()));
ret
@@ -6,14 +6,16 @@ use std::{
use crate::{
common::{
config::{ConfigLoader, TomlConfigLoader},
constants::EASYTIER_VERSION,
global_ctx::GlobalCtxEvent,
stun::StunInfoCollectorTrait,
},
instance::instance::Instance,
peers::rpc_service::PeerManagerRpcService,
rpc::{
cli::{PeerInfo, Route, StunInfo},
peer::GetIpListResponse,
proto::{
cli::{PeerInfo, Route},
common::StunInfo,
peer_rpc::GetIpListResponse,
},
utils::{list_peer_route_pair, PeerRoutePair},
};
@@ -24,6 +26,8 @@ use tokio::task::JoinSet;
#[derive(Default, Clone, Debug, Serialize, Deserialize)]
pub struct MyNodeInfo {
pub virtual_ipv4: String,
pub hostname: String,
pub version: String,
pub ips: GetIpListResponse,
pub stun_info: StunInfo,
pub listeners: Vec<String>,
@@ -37,6 +41,7 @@ struct EasyTierData {
routes: Arc<RwLock<Vec<Route>>>,
peers: Arc<RwLock<Vec<PeerInfo>>>,
tun_fd: Arc<RwLock<Option<i32>>>,
tun_dev_name: Arc<RwLock<String>>,
}
pub struct EasyTierLauncher {
@@ -132,11 +137,17 @@ impl EasyTierLauncher {
let vpn_portal = instance.get_vpn_portal_inst();
tasks.spawn(async move {
loop {
// Update TUN Device Name
*data_c.tun_dev_name.write().unwrap() = global_ctx_c.get_flags().dev_name.clone();
let node_info = MyNodeInfo {
virtual_ipv4: global_ctx_c
.get_ipv4()
.map(|x| x.to_string())
.unwrap_or_default(),
hostname: global_ctx_c.get_hostname(),
version: EASYTIER_VERSION.to_string(),
ips: global_ctx_c.get_ip_collector().collect_ip_addrs().await,
stun_info: global_ctx_c.get_stun_info_collector().get_stun_info(),
listeners: global_ctx_c
@@ -229,6 +240,10 @@ impl EasyTierLauncher {
.load(std::sync::atomic::Ordering::Relaxed)
}
pub fn get_dev_name(&self) -> String {
self.data.tun_dev_name.read().unwrap().clone()
}
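The launcher hunks above publish the TUN device name through shared state: a background task writes `tun_dev_name` on each refresh, and the GUI reads a clone via `get_dev_name`. A minimal sketch of that reader/writer pair:

```rust
use std::sync::{Arc, RwLock};

/// Shared launcher data; only the device-name field is sketched here.
struct EasyTierData {
    tun_dev_name: Arc<RwLock<String>>,
}

impl EasyTierData {
    fn set_dev_name(&self, name: &str) {
        *self.tun_dev_name.write().unwrap() = name.to_string();
    }
    fn get_dev_name(&self) -> String {
        self.tun_dev_name.read().unwrap().clone()
    }
}
```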
pub fn get_events(&self) -> Vec<(DateTime<Local>, GlobalCtxEvent)> {
let events = self.data.events.read().unwrap();
events.iter().cloned().collect()
@@ -261,6 +276,7 @@ impl Drop for EasyTierLauncher {
#[derive(Deserialize, Serialize, Debug)]
pub struct NetworkInstanceRunningInfo {
pub dev_name: String,
pub my_node_info: MyNodeInfo,
pub events: Vec<(DateTime<Local>, GlobalCtxEvent)>,
pub node_info: MyNodeInfo,
@@ -300,6 +316,7 @@ impl NetworkInstance {
let peer_route_pairs = list_peer_route_pair(peers.clone(), routes.clone());
Some(NetworkInstanceRunningInfo {
dev_name: launcher.get_dev_name(),
my_node_info: launcher.get_node_info(),
events: launcher.get_events(),
node_info: launcher.get_node_info(),
@@ -6,10 +6,12 @@ mod gateway;
mod instance;
mod peer_center;
mod peers;
mod proto;
mod vpn_portal;
pub mod common;
pub mod launcher;
pub mod rpc;
pub mod tunnel;
pub mod utils;
pub const VERSION: &str = common::constants::EASYTIER_VERSION;
+104 -63
@@ -1,7 +1,7 @@
use std::{
collections::BTreeSet,
sync::Arc,
time::{Duration, Instant, SystemTime},
time::{Duration, Instant},
};
use crossbeam::atomic::AtomicCell;
@@ -18,14 +18,17 @@ use crate::{
route_trait::{RouteCostCalculator, RouteCostCalculatorInterface},
rpc_service::PeerManagerRpcService,
},
rpc::{GetGlobalPeerMapRequest, GetGlobalPeerMapResponse},
proto::{
peer_rpc::{
GetGlobalPeerMapRequest, GetGlobalPeerMapResponse, GlobalPeerMap, PeerCenterRpc,
PeerCenterRpcClientFactory, PeerCenterRpcServer, PeerInfoForGlobalMap,
ReportPeersRequest, ReportPeersResponse,
},
rpc_types::{self, controller::BaseController},
},
};
use super::{
server::PeerCenterServer,
service::{GlobalPeerMap, PeerCenterService, PeerCenterServiceClient, PeerInfoForGlobalMap},
Digest, Error,
};
use super::{server::PeerCenterServer, Digest, Error};
struct PeerCenterBase {
peer_mgr: Arc<PeerManager>,
@@ -44,11 +47,14 @@ struct PeridicJobCtx<T> {
impl PeerCenterBase {
pub async fn init(&self) -> Result<(), Error> {
self.peer_mgr.get_peer_rpc_mgr().run_service(
SERVICE_ID,
PeerCenterServer::new(self.peer_mgr.my_peer_id()).serve(),
);
self.peer_mgr
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
PeerCenterRpcServer::new(PeerCenterServer::new(self.peer_mgr.my_peer_id())),
&self.peer_mgr.get_global_ctx().get_network_name(),
);
Ok(())
}
@@ -59,7 +65,10 @@ impl PeerCenterBase {
}
// find the peer with the numerically smallest id.
let mut min_peer = peer_mgr.my_peer_id();
for peer in peers.iter() {
for peer in peers
.iter()
.filter(|r| r.feature_flag.map(|r| !r.is_public_server).unwrap_or(true))
{
let peer_id = peer.peer_id;
if peer_id < min_peer {
min_peer = peer_id;
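The filter added in this hunk excludes public servers from center-peer election. A minimal standalone sketch of the selection rule; the types here are illustrative stand-ins, not the real `PeerManager` route entries:

```rust
// Illustrative stand-in for the route entries the real code iterates;
// is_public_server comes from the peer's advertised feature flags.
#[derive(Clone, Copy)]
struct PeerEntry {
    peer_id: u32,
    is_public_server: bool,
}

// Center-peer election: the smallest id among my own id and every
// non-public-server peer, mirroring the filter added in this hunk.
fn select_center_peer(my_peer_id: u32, peers: &[PeerEntry]) -> u32 {
    peers
        .iter()
        .filter(|p| !p.is_public_server)
        .map(|p| p.peer_id)
        .chain(std::iter::once(my_peer_id))
        .min()
        .unwrap() // the chain always contains my_peer_id
}

fn main() {
    let peers = [
        PeerEntry { peer_id: 3, is_public_server: false },
        PeerEntry { peer_id: 1, is_public_server: true },
    ];
    let _ = select_center_peer(5, &peers);
}
```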
@@ -70,11 +79,17 @@ impl PeerCenterBase {
async fn init_periodic_job<
T: Send + Sync + 'static + Clone,
Fut: Future<Output = Result<u32, tarpc::client::RpcError>> + Send + 'static,
Fut: Future<Output = Result<u32, rpc_types::error::Error>> + Send + 'static,
>(
&self,
job_ctx: T,
job_fn: (impl Fn(PeerCenterServiceClient, Arc<PeridicJobCtx<T>>) -> Fut + Send + Sync + 'static),
job_fn: (impl Fn(
Box<dyn PeerCenterRpc<Controller = BaseController> + Send>,
Arc<PeridicJobCtx<T>>,
) -> Fut
+ Send
+ Sync
+ 'static),
) -> () {
let my_peer_id = self.peer_mgr.my_peer_id();
let peer_mgr = self.peer_mgr.clone();
@@ -96,14 +111,14 @@ impl PeerCenterBase {
tracing::trace!(?center_peer, "run periodic job");
let rpc_mgr = peer_mgr.get_peer_rpc_mgr();
let _g = lock.lock().await;
let ret = rpc_mgr
.do_client_rpc_scoped(SERVICE_ID, center_peer, |c| async {
let client =
PeerCenterServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
job_fn(client, ctx.clone()).await
})
.await;
let stub = rpc_mgr
.rpc_client()
.scoped_client::<PeerCenterRpcClientFactory<BaseController>>(
my_peer_id,
center_peer,
peer_mgr.get_global_ctx().get_network_name(),
);
let ret = job_fn(stub, ctx.clone()).await;
drop(_g);
let Ok(sleep_time_ms) = ret else {
@@ -130,25 +145,34 @@ impl PeerCenterBase {
}
}
#[derive(Clone)]
pub struct PeerCenterInstanceService {
global_peer_map: Arc<RwLock<GlobalPeerMap>>,
global_peer_map_digest: Arc<AtomicCell<Digest>>,
}
#[tonic::async_trait]
impl crate::rpc::cli::peer_center_rpc_server::PeerCenterRpc for PeerCenterInstanceService {
#[async_trait::async_trait]
impl PeerCenterRpc for PeerCenterInstanceService {
type Controller = BaseController;
async fn get_global_peer_map(
&self,
_request: tonic::Request<GetGlobalPeerMapRequest>,
) -> Result<tonic::Response<GetGlobalPeerMapResponse>, tonic::Status> {
let global_peer_map = self.global_peer_map.read().unwrap().clone();
Ok(tonic::Response::new(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map
.map
.into_iter()
.map(|(k, v)| (k, v))
.collect(),
}))
_: BaseController,
_: GetGlobalPeerMapRequest,
) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
let global_peer_map = self.global_peer_map.read().unwrap();
Ok(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map.map.clone(),
digest: Some(self.global_peer_map_digest.load()),
})
}
async fn report_peers(
&self,
_: BaseController,
_req: ReportPeersRequest,
) -> Result<ReportPeersResponse, rpc_types::error::Error> {
Err(anyhow::anyhow!("not implemented").into())
}
}
@@ -166,7 +190,7 @@ impl PeerCenterInstance {
PeerCenterInstance {
peer_mgr: peer_mgr.clone(),
client: Arc::new(PeerCenterBase::new(peer_mgr.clone())),
global_peer_map: Arc::new(RwLock::new(GlobalPeerMap::new())),
global_peer_map: Arc::new(RwLock::new(GlobalPeerMap::default())),
global_peer_map_digest: Arc::new(AtomicCell::new(Digest::default())),
global_peer_map_update_time: Arc::new(AtomicCell::new(Instant::now())),
}
@@ -193,35 +217,38 @@ impl PeerCenterInstance {
self.client
.init_periodic_job(ctx, |client, ctx| async move {
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(3);
if ctx
.job_ctx
.global_peer_map_update_time
.load()
.elapsed()
.as_secs()
> 60
> 120
{
ctx.job_ctx.global_peer_map_digest.store(Digest::default());
}
let ret = client
.get_global_peer_map(rpc_ctx, ctx.job_ctx.global_peer_map_digest.load())
.await?;
.get_global_peer_map(
BaseController::default(),
GetGlobalPeerMapRequest {
digest: ctx.job_ctx.global_peer_map_digest.load(),
},
)
.await;
let Ok(resp) = ret else {
tracing::error!(
"get global info from center server got error result: {:?}",
ret
);
return Ok(1000);
return Ok(10000);
};
let Some(resp) = resp else {
return Ok(5000);
};
if resp == GetGlobalPeerMapResponse::default() {
// digest match, no need to update
return Ok(15000);
}
tracing::info!(
"get global info from center server: {:?}, digest: {:?}",
@@ -229,13 +256,17 @@ impl PeerCenterInstance {
resp.digest
);
*ctx.job_ctx.global_peer_map.write().unwrap() = resp.global_peer_map;
ctx.job_ctx.global_peer_map_digest.store(resp.digest);
*ctx.job_ctx.global_peer_map.write().unwrap() = GlobalPeerMap {
map: resp.global_peer_map,
};
ctx.job_ctx
.global_peer_map_digest
.store(resp.digest.unwrap_or_default());
ctx.job_ctx
.global_peer_map_update_time
.store(Instant::now());
Ok(5000)
Ok(15000)
})
.await;
}
@@ -274,12 +305,15 @@ impl PeerCenterInstance {
return Ok(5000);
}
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(3);
let ret = client
.report_peers(rpc_ctx, my_node_id.clone(), peers)
.await?;
.report_peers(
BaseController::default(),
ReportPeersRequest {
my_peer_id: my_node_id,
peer_infos: Some(peers),
},
)
.await;
if ret.is_ok() {
ctx.job_ctx.last_center_peer.store(ctx.center_peer.load());
@@ -311,15 +345,22 @@ impl PeerCenterInstance {
global_peer_map_update_time: Arc<AtomicCell<Instant>>,
}
impl RouteCostCalculatorInterface for RouteCostCalculatorImpl {
fn calculate_cost(&self, src: PeerId, dst: PeerId) -> i32 {
let ret = self
.global_peer_map_clone
impl RouteCostCalculatorImpl {
fn directed_cost(&self, src: PeerId, dst: PeerId) -> Option<i32> {
self.global_peer_map_clone
.map
.get(&src)
.and_then(|src_peer_info| src_peer_info.direct_peers.get(&dst))
.and_then(|info| Some(info.latency_ms));
ret.unwrap_or(80)
.and_then(|info| Some(info.latency_ms))
}
}
impl RouteCostCalculatorInterface for RouteCostCalculatorImpl {
fn calculate_cost(&self, src: PeerId, dst: PeerId) -> i32 {
if let Some(cost) = self.directed_cost(src, dst) {
return cost;
}
self.directed_cost(dst, src).unwrap_or(100)
}
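The fallback added here makes latency lookups effectively symmetric. A self-contained sketch of the lookup order, using a plain nested map in place of `GlobalPeerMap` (names and the default cost mirror the hunk, the types are simplified):

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for GlobalPeerMap: src -> (dst -> latency_ms).
type LatencyMap = BTreeMap<u32, BTreeMap<u32, i32>>;

fn directed_cost(map: &LatencyMap, src: u32, dst: u32) -> Option<i32> {
    map.get(&src).and_then(|d| d.get(&dst)).copied()
}

// Lookup order from the hunk above: src->dst first, then the reverse
// direction, then a default cost of 100 for unknown pairs.
fn calculate_cost(map: &LatencyMap, src: u32, dst: u32) -> i32 {
    directed_cost(map, src, dst)
        .or_else(|| directed_cost(map, dst, src))
        .unwrap_or(100)
}

fn main() {
    let mut m: LatencyMap = BTreeMap::new();
    m.entry(1).or_insert_with(BTreeMap::new).insert(2, 5);
    let _ = calculate_cost(&m, 2, 1);
}
```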
fn begin_update(&mut self) {
@@ -339,7 +380,7 @@ impl PeerCenterInstance {
Box::new(RouteCostCalculatorImpl {
global_peer_map: self.global_peer_map.clone(),
global_peer_map_clone: GlobalPeerMap::new(),
global_peer_map_clone: GlobalPeerMap::default(),
last_update_time: AtomicCell::new(
self.global_peer_map_update_time.load() - Duration::from_secs(1),
),
@@ -395,7 +436,7 @@ mod tests {
false
}
},
Duration::from_secs(10),
Duration::from_secs(20),
)
.await;
@@ -404,7 +445,7 @@ mod tests {
let rpc_service = pc.get_rpc_service();
wait_for_condition(
|| async { rpc_service.global_peer_map.read().unwrap().map.len() == 3 },
Duration::from_secs(10),
Duration::from_secs(20),
)
.await;
+31 -1
@@ -5,9 +5,13 @@
// the peer center is not guaranteed to be stable and can change when peers enter or leave.
// it is used to reduce the cost of exchanging info between peers.
use std::collections::BTreeMap;
use crate::proto::cli::PeerInfo;
use crate::proto::peer_rpc::{DirectConnectedPeerInfo, PeerInfoForGlobalMap};
pub mod instance;
mod server;
mod service;
#[derive(thiserror::Error, Debug, serde::Deserialize, serde::Serialize)]
pub enum Error {
@@ -18,3 +22,29 @@ pub enum Error {
}
pub type Digest = u64;
impl From<Vec<PeerInfo>> for PeerInfoForGlobalMap {
fn from(peers: Vec<PeerInfo>) -> Self {
let mut peer_map = BTreeMap::new();
for peer in peers {
let Some(min_lat) = peer
.conns
.iter()
.map(|conn| conn.stats.as_ref().unwrap().latency_us)
.min()
else {
continue;
};
let dp_info = DirectConnectedPeerInfo {
latency_ms: std::cmp::max(1, (min_lat as u32 / 1000) as i32),
};
// sort conn info so hash result is stable
peer_map.insert(peer.peer_id, dp_info);
}
PeerInfoForGlobalMap {
direct_peers: peer_map,
}
}
}
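The `From` impl above reduces each peer's connections to a single cost. A hedged standalone sketch of that reduction (the real code reads `latency_us` from protobuf conn stats; here it is just a slice of microsecond values):

```rust
// Per peer: take the minimum connection latency in microseconds,
// convert to milliseconds, and clamp to >= 1 so a direct link never
// costs zero; peers with no connections yield None and are skipped.
fn min_latency_ms(conn_latencies_us: &[u64]) -> Option<i32> {
    conn_latencies_us
        .iter()
        .min()
        .map(|&us| std::cmp::max(1, (us / 1000) as i32))
}

fn main() {
    // a peer with two conns, at 2500us and 1200us
    let _ = min_latency_ms(&[2500, 1200]);
}
```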
+38 -24
@@ -7,15 +7,22 @@ use std::{
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use once_cell::sync::Lazy;
use tokio::{task::JoinSet};
use tokio::task::JoinSet;
use crate::{common::PeerId, rpc::DirectConnectedPeerInfo};
use super::{
service::{GetGlobalPeerMapResponse, GlobalPeerMap, PeerCenterService, PeerInfoForGlobalMap},
Digest, Error,
use crate::{
common::PeerId,
proto::{
peer_rpc::{
DirectConnectedPeerInfo, GetGlobalPeerMapRequest, GetGlobalPeerMapResponse,
GlobalPeerMap, PeerCenterRpc, PeerInfoForGlobalMap, ReportPeersRequest,
ReportPeersResponse,
},
rpc_types::{self, controller::BaseController},
},
};
use super::Digest;
#[derive(Debug, Clone, PartialEq, PartialOrd, Ord, Eq, Hash)]
pub(crate) struct SrcDstPeerPair {
src: PeerId,
@@ -95,15 +102,19 @@ impl PeerCenterServer {
}
}
#[tarpc::server]
impl PeerCenterService for PeerCenterServer {
#[async_trait::async_trait]
impl PeerCenterRpc for PeerCenterServer {
type Controller = BaseController;
#[tracing::instrument()]
async fn report_peers(
self,
_: tarpc::context::Context,
my_peer_id: PeerId,
peers: PeerInfoForGlobalMap,
) -> Result<(), Error> {
&self,
_: BaseController,
req: ReportPeersRequest,
) -> Result<ReportPeersResponse, rpc_types::error::Error> {
let my_peer_id = req.my_peer_id;
let peers = req.peer_infos.unwrap_or_default();
tracing::debug!("receive report_peers");
let data = get_global_data(self.my_node_id);
@@ -125,20 +136,23 @@ impl PeerCenterService for PeerCenterServer {
data.digest
.store(PeerCenterServer::calc_global_digest(self.my_node_id));
Ok(())
Ok(ReportPeersResponse::default())
}
#[tracing::instrument()]
async fn get_global_peer_map(
self,
_: tarpc::context::Context,
digest: Digest,
) -> Result<Option<GetGlobalPeerMapResponse>, Error> {
&self,
_: BaseController,
req: GetGlobalPeerMapRequest,
) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
let digest = req.digest;
let data = get_global_data(self.my_node_id);
if digest == data.digest.load() && digest != 0 {
return Ok(None);
return Ok(GetGlobalPeerMapResponse::default());
}
let mut global_peer_map = GlobalPeerMap::new();
let mut global_peer_map = GlobalPeerMap::default();
for item in data.global_peer_map.iter() {
let (pair, entry) = item.pair();
global_peer_map
@@ -151,9 +165,9 @@ impl PeerCenterService for PeerCenterServer {
.insert(pair.dst, entry.info.clone());
}
Ok(Some(GetGlobalPeerMapResponse {
global_peer_map,
digest: data.digest.load(),
}))
Ok(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map.map,
digest: Some(data.digest.load()),
})
}
}
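The server hunk above and the client-side periodic job share the same digest short-circuit. A minimal sketch of the rule, with illustrative names:

```rust
// A matching, non-zero digest means the client's cached map is
// current, so the server can answer with an empty default response
// instead of the full peer map.
type Digest = u64;

fn needs_full_transfer(server_digest: Digest, client_digest: Digest) -> bool {
    // digest 0 means the client has nothing cached, so always transfer
    !(client_digest == server_digest && client_digest != 0)
}

fn main() {
    let _ = needs_full_transfer(42, 42);
}
```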
-64
@@ -1,64 +0,0 @@
use std::collections::BTreeMap;
use crate::{common::PeerId, rpc::DirectConnectedPeerInfo};
use super::{Digest, Error};
use crate::rpc::PeerInfo;
pub type PeerInfoForGlobalMap = crate::rpc::cli::PeerInfoForGlobalMap;
impl From<Vec<PeerInfo>> for PeerInfoForGlobalMap {
fn from(peers: Vec<PeerInfo>) -> Self {
let mut peer_map = BTreeMap::new();
for peer in peers {
let Some(min_lat) = peer
.conns
.iter()
.map(|conn| conn.stats.as_ref().unwrap().latency_us)
.min()
else {
continue;
};
let dp_info = DirectConnectedPeerInfo {
latency_ms: std::cmp::max(1, (min_lat as u32 / 1000) as i32),
};
// sort conn info so hash result is stable
peer_map.insert(peer.peer_id, dp_info);
}
PeerInfoForGlobalMap {
direct_peers: peer_map,
}
}
}
// a global peer topology map, peers can use it to find optimal path to other peers
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct GlobalPeerMap {
pub map: BTreeMap<PeerId, PeerInfoForGlobalMap>,
}
impl GlobalPeerMap {
pub fn new() -> Self {
GlobalPeerMap {
map: BTreeMap::new(),
}
}
}
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
pub struct GetGlobalPeerMapResponse {
pub global_peer_map: GlobalPeerMap,
pub digest: Digest,
}
#[tarpc::service]
pub trait PeerCenterService {
// report to the center server which peers are directly connected to me
// digest is a hash of the current peer map; if it does not match, the whole map needs to be transferred
async fn report_peers(my_peer_id: PeerId, peers: PeerInfoForGlobalMap) -> Result<(), Error>;
async fn get_global_peer_map(digest: Digest)
-> Result<Option<GetGlobalPeerMapResponse>, Error>;
}
+27 -129
@@ -1,27 +1,11 @@
use std::{
sync::Arc,
time::{Duration, SystemTime},
};
use dashmap::DashMap;
use tokio::{sync::Mutex, task::JoinSet};
use std::sync::{Arc, Mutex};
use crate::{
common::{
error::Error,
global_ctx::{ArcGlobalCtx, NetworkIdentity},
PeerId,
},
common::{error::Error, global_ctx::ArcGlobalCtx, scoped_task::ScopedTask, PeerId},
tunnel::packet_def::ZCPacket,
};
use super::{
foreign_network_manager::{ForeignNetworkServiceClient, FOREIGN_NETWORK_SERVICE_ID},
peer_conn::PeerConn,
peer_map::PeerMap,
peer_rpc::PeerRpcManager,
PacketRecvChan,
};
use super::{peer_conn::PeerConn, peer_map::PeerMap, peer_rpc::PeerRpcManager, PacketRecvChan};
pub struct ForeignNetworkClient {
global_ctx: ArcGlobalCtx,
@@ -29,9 +13,7 @@ pub struct ForeignNetworkClient {
my_peer_id: PeerId,
peer_map: Arc<PeerMap>,
next_hop: Arc<DashMap<PeerId, PeerId>>,
tasks: Mutex<JoinSet<()>>,
task: Mutex<Option<ScopedTask<()>>>,
}
impl ForeignNetworkClient {
@@ -46,17 +28,13 @@ impl ForeignNetworkClient {
global_ctx.clone(),
my_peer_id,
));
let next_hop = Arc::new(DashMap::new());
Self {
global_ctx,
peer_rpc,
my_peer_id,
peer_map,
next_hop,
tasks: Mutex::new(JoinSet::new()),
task: Mutex::new(None),
}
}
@@ -65,91 +43,19 @@ impl ForeignNetworkClient {
self.peer_map.add_new_peer_conn(peer_conn).await
}
async fn collect_next_hop_in_foreign_network_task(
network_identity: NetworkIdentity,
peer_map: Arc<PeerMap>,
peer_rpc: Arc<PeerRpcManager>,
next_hop: Arc<DashMap<PeerId, PeerId>>,
) {
loop {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
peer_map.clean_peer_without_conn().await;
let new_next_hop = Self::collect_next_hop_in_foreign_network(
network_identity.clone(),
peer_map.clone(),
peer_rpc.clone(),
)
.await;
next_hop.clear();
for (k, v) in new_next_hop.into_iter() {
next_hop.insert(k, v);
}
}
}
async fn collect_next_hop_in_foreign_network(
network_identity: NetworkIdentity,
peer_map: Arc<PeerMap>,
peer_rpc: Arc<PeerRpcManager>,
) -> DashMap<PeerId, PeerId> {
let peers = peer_map.list_peers().await;
let mut tasks = JoinSet::new();
if !peers.is_empty() {
tracing::warn!(?peers, my_peer_id = ?peer_rpc.my_peer_id(), "collect next hop in foreign network");
}
for peer in peers {
let peer_rpc = peer_rpc.clone();
let network_identity = network_identity.clone();
tasks.spawn(async move {
let Ok(Some(peers_in_foreign)) = peer_rpc
.do_client_rpc_scoped(FOREIGN_NETWORK_SERVICE_ID, peer, |c| async {
let c =
ForeignNetworkServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(2);
let ret = c.list_network_peers(rpc_ctx, network_identity).await;
ret
})
.await
else {
return (peer, vec![]);
};
(peer, peers_in_foreign)
});
}
let new_next_hop = DashMap::new();
while let Some(join_ret) = tasks.join_next().await {
let Ok((gateway, peer_ids)) = join_ret else {
tracing::error!(?join_ret, "collect next hop in foreign network failed");
continue;
};
for ret in peer_ids {
new_next_hop.insert(ret, gateway);
}
}
new_next_hop
}
pub fn has_next_hop(&self, peer_id: PeerId) -> bool {
self.get_next_hop(peer_id).is_some()
}
pub fn is_peer_public_node(&self, peer_id: &PeerId) -> bool {
self.peer_map.has_peer(*peer_id)
pub async fn list_public_peers(&self) -> Vec<PeerId> {
self.peer_map.list_peers().await
}
pub fn get_next_hop(&self, peer_id: PeerId) -> Option<PeerId> {
if self.peer_map.has_peer(peer_id) {
return Some(peer_id.clone());
}
self.next_hop.get(&peer_id).map(|v| v.clone())
None
}
pub async fn send_msg(&self, msg: ZCPacket, peer_id: PeerId) -> Result<(), Error> {
@@ -162,40 +68,32 @@ impl ForeignNetworkClient {
?next_hop,
"foreign network client send msg failed"
);
} else {
tracing::info!(
?peer_id,
?next_hop,
"foreign network client send msg success"
);
}
return ret;
}
Err(Error::RouteError(Some("no next hop".to_string())))
}
pub fn list_foreign_peers(&self) -> Vec<PeerId> {
let mut peers = vec![];
for item in self.next_hop.iter() {
if item.key() != &self.my_peer_id {
peers.push(item.key().clone());
}
}
peers
}
pub async fn run(&self) {
self.tasks
.lock()
.await
.spawn(Self::collect_next_hop_in_foreign_network_task(
self.global_ctx.get_network_identity(),
self.peer_map.clone(),
self.peer_rpc.clone(),
self.next_hop.clone(),
));
}
pub fn get_next_hop_table(&self) -> DashMap<PeerId, PeerId> {
let next_hop = DashMap::new();
for item in self.next_hop.iter() {
next_hop.insert(item.key().clone(), item.value().clone());
}
next_hop
let peer_map = Arc::downgrade(&self.peer_map);
*self.task.lock().unwrap() = Some(
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
let Some(peer_map) = peer_map.upgrade() else {
break;
};
peer_map.clean_peer_without_conn().await;
}
})
.into(),
);
}
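The rewritten `run()` replaces the old next-hop collector with a single cleanup loop that holds only a `Weak` reference, so the loop exits on its own once the owner is dropped. A std-only sketch of the same pattern (the real code uses `tokio::spawn` and a `ScopedTask` wrapper around `PeerMap`):

```rust
use std::sync::{Arc, Mutex, Weak};
use std::thread;
use std::time::Duration;

// Background cleanup loop that owns only a Weak handle to the shared
// state; once the last Arc is dropped, upgrade() fails and the loop
// breaks, so the task can neither keep the state alive nor leak.
fn spawn_cleaner(state: &Arc<Mutex<Vec<u32>>>) -> thread::JoinHandle<()> {
    let weak: Weak<Mutex<Vec<u32>>> = Arc::downgrade(state);
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(10));
        let Some(state) = weak.upgrade() else {
            break; // owner gone, stop cleaning
        };
        // stand-in for peer_map.clean_peer_without_conn()
        state.lock().unwrap().retain(|&id| id != 0);
    })
}

fn main() {
    let state = Arc::new(Mutex::new(vec![0u32, 1, 2]));
    let handle = spawn_cleaner(&state);
    thread::sleep(Duration::from_millis(50));
    drop(state); // the loop observes a dead Weak and exits
    handle.join().unwrap();
}
```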
pub fn get_peer_map(&self) -> Arc<PeerMap> {
File diff suppressed because it is too large
+3 -1
@@ -5,8 +5,8 @@ pub mod peer_conn_ping;
pub mod peer_manager;
pub mod peer_map;
pub mod peer_ospf_route;
pub mod peer_rip_route;
pub mod peer_rpc;
pub mod peer_rpc_service;
pub mod route_trait;
pub mod rpc_service;
@@ -15,6 +15,8 @@ pub mod foreign_network_manager;
pub mod encrypt;
pub mod peer_task;
#[cfg(test)]
pub mod tests;
+1 -1
@@ -11,7 +11,7 @@ use super::{
peer_conn::{PeerConn, PeerConnId},
PacketRecvChan,
};
use crate::rpc::PeerConnInfo;
use crate::proto::cli::PeerConnInfo;
use crate::{
common::{
error::Error,
+79 -15
@@ -8,7 +8,7 @@ use std::{
},
};
use futures::{SinkExt, StreamExt, TryFutureExt};
use futures::{StreamExt, TryFutureExt};
use prost::Message;
@@ -18,23 +18,26 @@ use tokio::{
time::{timeout, Duration},
};
use tokio_util::sync::PollSender;
use tracing::Instrument;
use zerocopy::AsBytes;
use crate::{
common::{
config::{NetworkIdentity, NetworkSecretDigest},
defer,
error::Error,
global_ctx::ArcGlobalCtx,
PeerId,
},
rpc::{HandshakeRequest, PeerConnInfo, PeerConnStats, TunnelInfo},
tunnel::packet_def::PacketType,
proto::{
cli::{PeerConnInfo, PeerConnStats},
common::TunnelInfo,
peer_rpc::HandshakeRequest,
},
tunnel::{
filter::{StatsRecorderTunnelFilter, TunnelFilter, TunnelWithFilter},
mpsc::{MpscTunnel, MpscTunnelSender},
packet_def::ZCPacket,
packet_def::{PacketType, ZCPacket},
stats::{Throughput, WindowLatency},
Tunnel, TunnelError, ZCPacketStream,
},
@@ -90,7 +93,7 @@ impl PeerConn {
let peer_conn_tunnel_filter = StatsRecorderTunnelFilter::new();
let throughput = peer_conn_tunnel_filter.filter_output();
let peer_conn_tunnel = TunnelWithFilter::new(tunnel, peer_conn_tunnel_filter);
let mut mpsc_tunnel = MpscTunnel::new(peer_conn_tunnel);
let mut mpsc_tunnel = MpscTunnel::new(peer_conn_tunnel, Some(Duration::from_secs(7)));
let (recv, sink) = (mpsc_tunnel.get_stream(), mpsc_tunnel.get_sink());
@@ -100,7 +103,9 @@ impl PeerConn {
my_peer_id,
global_ctx,
tunnel: Arc::new(Mutex::new(Box::new(mpsc_tunnel))),
tunnel: Arc::new(Mutex::new(Box::new(defer::Defer::new(move || {
mpsc_tunnel.close()
})))),
sink,
recv: Arc::new(Mutex::new(Some(recv))),
tunnel_info,
@@ -219,7 +224,12 @@ impl PeerConn {
self.info = Some(rsp);
self.is_client = Some(false);
self.send_handshake().await?;
Ok(())
if self.get_peer_id() == self.my_peer_id {
Err(Error::WaitRespError("peer id conflict".to_owned()))
} else {
Ok(())
}
}
#[tracing::instrument]
@@ -230,7 +240,12 @@ impl PeerConn {
tracing::info!("handshake response: {:?}", rsp);
self.info = Some(rsp);
self.is_client = Some(true);
Ok(())
if self.get_peer_id() == self.my_peer_id {
Err(Error::WaitRespError("peer id conflict".to_owned()))
} else {
Ok(())
}
}
pub fn handshake_done(&self) -> bool {
@@ -240,7 +255,7 @@ impl PeerConn {
pub async fn start_recv_loop(&mut self, packet_recv_chan: PacketRecvChan) {
let mut stream = self.recv.lock().await.take().unwrap();
let sink = self.sink.clone();
let mut sender = PollSender::new(packet_recv_chan.clone());
let sender = packet_recv_chan.clone();
let close_event_sender = self.close_event_sender.clone().unwrap();
let conn_id = self.conn_id;
let ctrl_sender = self.ctrl_resp_sender.clone();
@@ -306,6 +321,7 @@ impl PeerConn {
self.ctrl_resp_sender.clone(),
self.latency_stats.clone(),
self.loss_rate_stats.clone(),
self.throughput.clone(),
);
let close_event_sender = self.close_event_sender.clone().unwrap();
@@ -385,10 +401,29 @@ mod tests {
use super::*;
use crate::common::global_ctx::tests::get_mock_global_ctx;
use crate::common::new_peer_id;
use crate::common::scoped_task::ScopedTask;
use crate::tunnel::filter::tests::DropSendTunnelFilter;
use crate::tunnel::filter::PacketRecorderTunnelFilter;
use crate::tunnel::ring::create_ring_tunnel_pair;
#[tokio::test]
async fn peer_conn_handshake_same_id() {
let (c, s) = create_ring_tunnel_pair();
let c_peer_id = new_peer_id();
let s_peer_id = c_peer_id;
let mut c_peer = PeerConn::new(c_peer_id, get_mock_global_ctx(), Box::new(c));
let mut s_peer = PeerConn::new(s_peer_id, get_mock_global_ctx(), Box::new(s));
let (c_ret, s_ret) = tokio::join!(
c_peer.do_handshake_as_client(),
s_peer.do_handshake_as_server()
);
assert!(c_ret.is_err());
assert!(s_ret.is_err());
}
#[tokio::test]
async fn peer_conn_handshake() {
let (c, s) = create_ring_tunnel_pair();
@@ -426,13 +461,25 @@ mod tests {
assert_eq!(c_peer.get_network_identity(), NetworkIdentity::default());
}
async fn peer_conn_pingpong_test_common(drop_start: u32, drop_end: u32, conn_closed: bool) {
async fn peer_conn_pingpong_test_common(
drop_start: u32,
drop_end: u32,
conn_closed: bool,
drop_both: bool,
) {
let (c, s) = create_ring_tunnel_pair();
// dropping 1-3 packets should not affect pingpong
let c_recorder = Arc::new(DropSendTunnelFilter::new(drop_start, drop_end));
let c = TunnelWithFilter::new(c, c_recorder.clone());
let s = if drop_both {
let s_recorder = Arc::new(DropSendTunnelFilter::new(drop_start, drop_end));
Box::new(TunnelWithFilter::new(s, s_recorder.clone()))
} else {
s
};
let c_peer_id = new_peer_id();
let s_peer_id = new_peer_id();
@@ -459,7 +506,15 @@ mod tests {
.start_recv_loop(tokio::sync::mpsc::channel(200).0)
.await;
// wait 15s, then check whether the conn was closed as expected
let throughput = c_peer.throughput.clone();
let _t = ScopedTask::from(tokio::spawn(async move {
// if not dropping both sides, mock some rx traffic for the client peer to test the pinger
while !drop_both {
tokio::time::sleep(Duration::from_millis(100)).await;
throughput.record_rx_bytes(3);
}
}));
tokio::time::sleep(Duration::from_secs(15)).await;
if conn_closed {
@@ -470,9 +525,18 @@ mod tests {
}
#[tokio::test]
async fn peer_conn_pingpong_timeout() {
peer_conn_pingpong_test_common(3, 5, false).await;
peer_conn_pingpong_test_common(5, 12, true).await;
async fn peer_conn_pingpong_timeout_not_close() {
peer_conn_pingpong_test_common(3, 5, false, false).await;
}
#[tokio::test]
async fn peer_conn_pingpong_oneside_timeout() {
peer_conn_pingpong_test_common(4, 12, false, false).await;
}
#[tokio::test]
async fn peer_conn_pingpong_bothside_timeout() {
peer_conn_pingpong_test_common(4, 12, true, true).await;
}
#[tokio::test]
+124 -18
@@ -6,18 +6,98 @@ use std::{
time::Duration,
};
use tokio::{sync::broadcast, task::JoinSet, time::timeout};
use rand::{thread_rng, Rng};
use tokio::{
sync::broadcast,
task::JoinSet,
time::{timeout, Interval},
};
use crate::{
common::{error::Error, PeerId},
tunnel::{
mpsc::MpscTunnelSender,
packet_def::{PacketType, ZCPacket},
stats::WindowLatency,
stats::{Throughput, WindowLatency},
TunnelError,
},
};
struct PingIntervalController {
throughput: Arc<Throughput>,
loss_rate_20: Arc<WindowLatency>,
interval: Interval,
logic_time: u64,
last_send_logic_time: u64,
backoff_idx: i32,
max_backoff_idx: i32,
last_throughput: Throughput,
}
impl PingIntervalController {
fn new(throughput: Arc<Throughput>, loss_rate_20: Arc<WindowLatency>) -> Self {
let last_throughput = *throughput;
Self {
throughput,
loss_rate_20,
interval: tokio::time::interval(Duration::from_secs(1)),
logic_time: 0,
last_send_logic_time: 0,
backoff_idx: 0,
max_backoff_idx: 5,
last_throughput,
}
}
async fn tick(&mut self) {
self.interval.tick().await;
self.logic_time += 1;
}
fn tx_increase(&self) -> bool {
self.throughput.tx_packets() > self.last_throughput.tx_packets()
}
fn rx_increase(&self) -> bool {
self.throughput.rx_packets() > self.last_throughput.rx_packets()
}
fn should_send_ping(&mut self) -> bool {
if self.loss_rate_20.get_latency_us::<f64>() > 0.0 {
self.backoff_idx = 0;
} else if self.tx_increase()
&& !self.rx_increase()
&& self.logic_time - self.last_send_logic_time > 2
{
// if tx increases but rx does not, we should ping more frequently
self.backoff_idx = 0;
}
self.last_throughput = *self.throughput;
if (self.logic_time - self.last_send_logic_time) < (1 << self.backoff_idx) {
return false;
}
self.backoff_idx = std::cmp::min(self.backoff_idx + 1, self.max_backoff_idx);
// randomize so that the two peers do not ping at the same time
if self.backoff_idx > self.max_backoff_idx - 2 && thread_rng().gen_bool(0.2) {
self.backoff_idx -= 1;
}
self.last_send_logic_time = self.logic_time;
return true;
}
}
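The controller above gates pings with an exponential backoff over "logic time" ticks. A deterministic standalone sketch of just that schedule (omitting the jitter and the loss-rate/one-way-traffic resets the real controller adds):

```rust
// Backoff schedule from should_send_ping(): send only when at least
// 2^idx ticks have passed since the last send, then raise idx toward
// a cap so idle connections are pinged progressively less often.
struct Backoff {
    idx: u32,
    max_idx: u32,
    last_send: u64,
}

impl Backoff {
    fn should_send(&mut self, logic_time: u64) -> bool {
        if logic_time - self.last_send < (1u64 << self.idx) {
            return false;
        }
        self.idx = std::cmp::min(self.idx + 1, self.max_idx);
        self.last_send = logic_time;
        true
    }
}

fn main() {
    let mut b = Backoff { idx: 0, max_idx: 5, last_send: 0 };
    for t in 0..8 {
        let _ = b.should_send(t);
    }
}
```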
pub struct PeerConnPinger {
my_peer_id: PeerId,
peer_id: PeerId,
@@ -25,6 +105,7 @@ pub struct PeerConnPinger {
ctrl_sender: broadcast::Sender<ZCPacket>,
latency_stats: Arc<WindowLatency>,
loss_rate_stats: Arc<AtomicU32>,
throughput_stats: Arc<Throughput>,
tasks: JoinSet<Result<(), TunnelError>>,
}
@@ -45,6 +126,7 @@ impl PeerConnPinger {
ctrl_sender: broadcast::Sender<ZCPacket>,
latency_stats: Arc<WindowLatency>,
loss_rate_stats: Arc<AtomicU32>,
throughput_stats: Arc<Throughput>,
) -> Self {
Self {
my_peer_id,
@@ -54,6 +136,7 @@ impl PeerConnPinger {
latency_stats,
ctrl_sender,
loss_rate_stats,
throughput_stats,
}
}
@@ -125,17 +208,23 @@ impl PeerConnPinger {
let (ping_res_sender, mut ping_res_receiver) = tokio::sync::mpsc::channel(100);
// one with 1% precision
let loss_rate_stats_1 = WindowLatency::new(100);
// one with 20% precision, so we can fast fail this conn.
let loss_rate_stats_20 = Arc::new(WindowLatency::new(5));
let stopped = Arc::new(AtomicU32::new(0));
// generate a pingpong task every 200ms
let mut pingpong_tasks = JoinSet::new();
let ctrl_resp_sender = self.ctrl_sender.clone();
let stopped_clone = stopped.clone();
let mut controller =
PingIntervalController::new(self.throughput_stats.clone(), loss_rate_stats_20.clone());
self.tasks.spawn(async move {
let mut req_seq = 0;
loop {
let receiver = ctrl_resp_sender.subscribe();
let ping_res_sender = ping_res_sender.clone();
controller.tick().await;
if stopped_clone.load(Ordering::Relaxed) != 0 {
return Ok(());
@@ -145,7 +234,13 @@ impl PeerConnPinger {
pingpong_tasks.join_next().await;
}
if !controller.should_send_ping() {
continue;
}
let mut sink = sink.clone();
let receiver = ctrl_resp_sender.subscribe();
let ping_res_sender = ping_res_sender.clone();
pingpong_tasks.spawn(async move {
let mut receiver = receiver.resubscribe();
let pingpong_once_ret = Self::do_pingpong_once(
@@ -163,16 +258,12 @@ impl PeerConnPinger {
});
req_seq = req_seq.wrapping_add(1);
tokio::time::sleep(Duration::from_millis(1000)).await;
}
});
// one with 1% precision
let loss_rate_stats_1 = WindowLatency::new(100);
// one with 20% precision, so we can fast fail this conn.
let loss_rate_stats_20 = WindowLatency::new(5);
let mut counter: u64 = 0;
let throughput = self.throughput_stats.clone();
let mut last_rx_packets = throughput.rx_packets();
while let Some(ret) = ping_res_receiver.recv().await {
counter += 1;
@@ -199,16 +290,31 @@ impl PeerConnPinger {
);
if (counter > 5 && loss_rate_20 > 0.74) || (counter > 150 && loss_rate_1 > 0.20) {
tracing::warn!(
?ret,
?self,
?loss_rate_1,
?loss_rate_20,
"pingpong loss rate too high, closing"
);
break;
let current_rx_packets = throughput.rx_packets();
let need_close = if last_rx_packets != current_rx_packets {
// if we receive packets from the peer, relax the close condition
counter > 50 && loss_rate_1 > 0.5
// TODO: wait more time to see if the loss rate is still high after no rx
} else {
true
};
if need_close {
tracing::warn!(
?ret,
?self,
?loss_rate_1,
?loss_rate_20,
?last_rx_packets,
?current_rx_packets,
"pingpong loss rate too high, closing"
);
break;
}
}
last_rx_packets = throughput.rx_packets();
self.loss_rate_stats
.store((loss_rate_1 * 100.0) as u32, Ordering::Relaxed);
}
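The change above relaxes the close condition when the peer is still sending us traffic. A standalone sketch of the decision (names are illustrative; `loss_1` and `loss_20` are the 1%- and 20%-precision loss-rate windows):

```rust
// Close decision with the rx-based relaxation from the hunk above.
// counter is the number of pings sent so far.
fn need_close(counter: u64, loss_1: f64, loss_20: f64, rx_increased: bool) -> bool {
    let high_loss = (counter > 5 && loss_20 > 0.74) || (counter > 150 && loss_1 > 0.20);
    if !high_loss {
        return false;
    }
    if rx_increased {
        // peer traffic still arrives: only close on sustained loss
        counter > 50 && loss_1 > 0.5
    } else {
        true
    }
}

fn main() {
    let _ = need_close(10, 0.0, 0.8, true);
}
```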
+254 -101
@@ -2,12 +2,13 @@ use std::{
fmt::Debug,
net::Ipv4Addr,
sync::{Arc, Weak},
time::SystemTime,
};
use anyhow::Context;
use async_trait::async_trait;
use futures::StreamExt;
use dashmap::DashMap;
use tokio::{
sync::{
@@ -16,17 +17,28 @@ use tokio::{
},
task::JoinSet,
};
use tokio_stream::wrappers::ReceiverStream;
use tokio_util::bytes::Bytes;
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, stun::StunInfoCollectorTrait, PeerId},
common::{
constants::EASYTIER_VERSION,
error::Error,
global_ctx::{ArcGlobalCtx, NetworkIdentity},
stun::StunInfoCollectorTrait,
PeerId,
},
peers::{
peer_conn::PeerConn,
peer_rpc::PeerRpcManagerTransport,
route_trait::{NextHopPolicy, RouteInterface},
route_trait::{ForeignNetworkRouteInfoMap, NextHopPolicy, RouteInterface},
PeerPacketFilter,
},
proto::{
cli::{
self, list_global_foreign_network_response::OneForeignNetwork,
ListGlobalForeignNetworkResponse,
},
peer_rpc::{ForeignNetworkRouteInfoEntry, ForeignNetworkRouteInfoKey},
},
tunnel::{
self,
packet_def::{PacketType, ZCPacket},
@@ -37,11 +49,10 @@ use crate::{
use super::{
encrypt::{Encryptor, NullCipher},
foreign_network_client::ForeignNetworkClient,
foreign_network_manager::ForeignNetworkManager,
foreign_network_manager::{ForeignNetworkManager, GlobalForeignNetworkAccessor},
peer_conn::PeerConnId,
peer_map::PeerMap,
peer_ospf_route::PeerRoute,
peer_rip_route::BasicRoute,
peer_rpc::PeerRpcManager,
route_trait::{ArcRoute, Route},
BoxNicPacketFilter, BoxPeerPacketFilter, PacketRecvChanReceiver,
@@ -75,7 +86,15 @@ impl PeerRpcManagerTransport for RpcTransport {
.ok_or(Error::Unknown)?;
let peers = self.peers.upgrade().ok_or(Error::Unknown)?;
if let Some(gateway_id) = peers
if foreign_peers.has_next_hop(dst_peer_id) {
// do not encrypt data sent to the public server
tracing::debug!(
?dst_peer_id,
?self.my_peer_id,
"failed to send msg to peer, try foreign network",
);
foreign_peers.send_msg(msg, dst_peer_id).await
} else if let Some(gateway_id) = peers
.get_gateway_peer_id(dst_peer_id, NextHopPolicy::LeastHop)
.await
{
@@ -88,20 +107,11 @@ impl PeerRpcManagerTransport for RpcTransport {
self.encryptor
.encrypt(&mut msg)
.with_context(|| "encrypt failed")?;
peers.send_msg_directly(msg, gateway_id).await
} else if foreign_peers.has_next_hop(dst_peer_id) {
if !foreign_peers.is_peer_public_node(&dst_peer_id) {
// do not encrypt msgs sent to the public node
self.encryptor
.encrypt(&mut msg)
.with_context(|| "encrypt failed")?;
if peers.has_peer(gateway_id) {
peers.send_msg_directly(msg, gateway_id).await
} else {
foreign_peers.send_msg(msg, gateway_id).await
}
tracing::debug!(
?dst_peer_id,
?self.my_peer_id,
"failed to send msg to peer, try foreign network",
);
foreign_peers.send_msg(msg, dst_peer_id).await
} else {
Err(Error::RouteError(Some(format!(
"peermgr RpcTransport no route for dst_peer_id: {}",
@@ -120,13 +130,11 @@ impl PeerRpcManagerTransport for RpcTransport {
}
pub enum RouteAlgoType {
Rip,
Ospf,
None,
}
enum RouteAlgoInst {
Rip(Arc<BasicRoute>),
Ospf(Arc<PeerRoute>),
None,
}
@@ -177,7 +185,7 @@ impl PeerManager {
) -> Self {
let my_peer_id = rand::random();
let (packet_send, packet_recv) = mpsc::channel(100);
let (packet_send, packet_recv) = mpsc::channel(128);
let peers = Arc::new(PeerMap::new(
packet_send.clone(),
global_ctx.clone(),
@@ -217,9 +225,6 @@ impl PeerManager {
let peer_rpc_mgr = Arc::new(PeerRpcManager::new(rpc_tspt.clone()));
let route_algo_inst = match route_algo {
RouteAlgoType::Rip => {
RouteAlgoInst::Rip(Arc::new(BasicRoute::new(my_peer_id, global_ctx.clone())))
}
RouteAlgoType::Ospf => RouteAlgoInst::Ospf(PeerRoute::new(
my_peer_id,
global_ctx.clone(),
@@ -232,6 +237,7 @@ impl PeerManager {
my_peer_id,
global_ctx.clone(),
packet_send.clone(),
Self::build_foreign_network_manager_accessor(&peers),
));
let foreign_network_client = Arc::new(ForeignNetworkClient::new(
global_ctx.clone(),
@@ -270,6 +276,34 @@ impl PeerManager {
}
}
fn build_foreign_network_manager_accessor(
peer_map: &Arc<PeerMap>,
) -> Box<dyn GlobalForeignNetworkAccessor> {
struct T {
peer_map: Weak<PeerMap>,
}
#[async_trait::async_trait]
impl GlobalForeignNetworkAccessor for T {
async fn list_global_foreign_peer(
&self,
network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
let Some(peer_map) = self.peer_map.upgrade() else {
return vec![];
};
peer_map
.list_peers_own_foreign_network(network_identity)
.await
}
}
Box::new(T {
peer_map: Arc::downgrade(peer_map),
})
}
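The accessor above deliberately holds only a `Weak<PeerMap>`, so the foreign-network manager cannot keep the peer map alive past its owner and simply returns an empty list after teardown. A minimal sketch of that downgrade/upgrade pattern (the `Registry`/`Accessor` names are illustrative, not from the codebase):

```rust
use std::sync::{Arc, Weak};

struct Registry {
    items: Vec<u32>,
}

struct Accessor {
    registry: Weak<Registry>,
}

impl Accessor {
    fn list(&self) -> Vec<u32> {
        // Upgrade on every call; if the owning Arc is gone, fail soft
        // with an empty result instead of panicking or leaking.
        self.registry
            .upgrade()
            .map(|r| r.items.clone())
            .unwrap_or_default()
    }
}

fn main() {
    let owner = Arc::new(Registry { items: vec![1, 2, 3] });
    let acc = Accessor { registry: Arc::downgrade(&owner) };
    assert_eq!(acc.list(), vec![1, 2, 3]);
    drop(owner);
    assert!(acc.list().is_empty()); // owner dropped: accessor degrades gracefully
}
```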
async fn add_new_peer_conn(&self, peer_conn: PeerConn) -> Result<(), Error> {
if self.global_ctx.get_network_identity() != peer_conn.get_network_identity() {
return Err(Error::SecretKeyError(
@@ -325,20 +359,85 @@ impl PeerManager {
Ok(())
}
async fn try_handle_foreign_network_packet(
packet: ZCPacket,
my_peer_id: PeerId,
peer_map: &PeerMap,
foreign_network_mgr: &ForeignNetworkManager,
) -> Result<(), ZCPacket> {
let pm_header = packet.peer_manager_header().unwrap();
if pm_header.packet_type != PacketType::ForeignNetworkPacket as u8 {
return Err(packet);
}
let from_peer_id = pm_header.from_peer_id.get();
let to_peer_id = pm_header.to_peer_id.get();
let foreign_hdr = packet.foreign_network_hdr().unwrap();
let foreign_network_name = foreign_hdr.get_network_name(packet.payload());
let foreign_peer_id = foreign_hdr.get_dst_peer_id();
if to_peer_id == my_peer_id {
// packet sent from other peer to me, extract the inner packet and forward it
if let Err(e) = foreign_network_mgr
.send_msg_to_peer(
&foreign_network_name,
foreign_peer_id,
packet.foreign_network_packet(),
)
.await
{
tracing::debug!(
?e,
?foreign_network_name,
?foreign_peer_id,
"foreign network mgr send_msg_to_peer failed"
);
}
Ok(())
} else if from_peer_id == my_peer_id {
// packet is generated by the foreign network mgr and should be forwarded to another peer

if let Err(e) = peer_map
.send_msg(packet, to_peer_id, NextHopPolicy::LeastHop)
.await
{
tracing::debug!(
?e,
?to_peer_id,
"send_msg_directly failed when forwarding locally generated foreign network packet"
);
}
Ok(())
} else {
// target is not me, forward it
Err(packet)
}
}
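`try_handle_foreign_network_packet` uses `Result<(), ZCPacket>` as a consume-or-hand-back filter: `Ok(())` means the packet was handled, while `Err(packet)` returns ownership to the caller for the normal pipeline, pairing naturally with the `let Err(..) = .. else { continue }` in the receive loop below. A toy sketch of the idiom (plain integers stand in for packets):

```rust
// Err means "not mine, here is your value back" rather than a failure.
fn try_consume(v: i32) -> Result<(), i32> {
    if v % 2 == 0 {
        Ok(()) // even values are consumed by this handler
    } else {
        Err(v) // odd values are handed back untouched
    }
}

fn main() {
    let mut passed_through = vec![];
    for v in [1, 2, 3, 4] {
        // Mirrors the receive loop: consumed packets short-circuit the pipeline.
        let Err(v) = try_consume(v) else { continue };
        passed_through.push(v);
    }
    assert_eq!(passed_through, vec![1, 3]);
}
```

Because `ZCPacket` is an owned buffer, this shape also avoids a clone: the handler either takes the packet or gives the same allocation back.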
async fn start_peer_recv(&self) {
let mut recv = ReceiverStream::new(self.packet_recv.lock().await.take().unwrap());
let mut recv = self.packet_recv.lock().await.take().unwrap();
let my_peer_id = self.my_peer_id;
let peers = self.peers.clone();
let pipe_line = self.peer_packet_process_pipeline.clone();
let foreign_client = self.foreign_network_client.clone();
let foreign_mgr = self.foreign_network_manager.clone();
let encryptor = self.encryptor.clone();
self.tasks.lock().await.spawn(async move {
tracing::trace!("start_peer_recv");
while let Some(mut ret) = recv.next().await {
while let Some(ret) = recv.recv().await {
let Err(mut ret) =
Self::try_handle_foreign_network_packet(ret, my_peer_id, &peers, &foreign_mgr)
.await
else {
continue;
};
let Some(hdr) = ret.mut_peer_manager_header() else {
tracing::warn!(?ret, "invalid packet, skip");
continue;
};
tracing::trace!(?hdr, "peer recv a packet...");
let from_peer_id = hdr.from_peer_id.get();
let to_peer_id = hdr.to_peer_id.get();
@@ -438,7 +537,10 @@ impl PeerManager {
impl PeerPacketFilter for PeerRpcPacketProcessor {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type == PacketType::TaRpc as u8 {
if hdr.packet_type == PacketType::TaRpc as u8
|| hdr.packet_type == PacketType::RpcReq as u8
|| hdr.packet_type == PacketType::RpcResp as u8
{
self.peer_rpc_tspt_sender.send(packet).unwrap();
None
} else {
@@ -464,6 +566,7 @@ impl PeerManager {
my_peer_id: PeerId,
peers: Weak<PeerMap>,
foreign_network_client: Weak<ForeignNetworkClient>,
foreign_network_manager: Weak<ForeignNetworkManager>,
}
#[async_trait]
@@ -477,36 +580,45 @@ impl PeerManager {
return vec![];
};
let mut peers = foreign_client.list_foreign_peers();
let mut peers = foreign_client.list_public_peers().await;
peers.extend(peer_map.list_peers_with_conn().await);
peers
}
async fn send_route_packet(
&self,
msg: Bytes,
_route_id: u8,
dst_peer_id: PeerId,
) -> Result<(), Error> {
let foreign_client = self
.foreign_network_client
.upgrade()
.ok_or(Error::Unknown)?;
let peer_map = self.peers.upgrade().ok_or(Error::Unknown)?;
let mut zc_packet = ZCPacket::new_with_payload(&msg);
zc_packet.fill_peer_manager_hdr(
self.my_peer_id,
dst_peer_id,
PacketType::Route as u8,
);
if foreign_client.has_next_hop(dst_peer_id) {
foreign_client.send_msg(zc_packet, dst_peer_id).await
} else {
peer_map.send_msg_directly(zc_packet, dst_peer_id).await
}
}
fn my_peer_id(&self) -> PeerId {
self.my_peer_id
}
async fn list_foreign_networks(&self) -> ForeignNetworkRouteInfoMap {
let ret = DashMap::new();
let Some(foreign_mgr) = self.foreign_network_manager.upgrade() else {
return ret;
};
let networks = foreign_mgr.list_foreign_networks().await;
for (network_name, info) in networks.foreign_networks.iter() {
if info.peers.is_empty() {
continue;
}
let last_update = foreign_mgr
.get_foreign_network_last_update(network_name)
.unwrap_or(SystemTime::now());
ret.insert(
ForeignNetworkRouteInfoKey {
peer_id: self.my_peer_id,
network_name: network_name.clone(),
},
ForeignNetworkRouteInfoEntry {
foreign_peer_ids: info.peers.iter().map(|x| x.peer_id).collect(),
last_update: Some(last_update.into()),
version: 0,
network_secret_digest: info.network_secret_digest.clone(),
},
);
}
ret
}
}
let my_peer_id = self.my_peer_id;
@@ -515,6 +627,7 @@ impl PeerManager {
my_peer_id,
peers: Arc::downgrade(&self.peers),
foreign_network_client: Arc::downgrade(&self.foreign_network_client),
foreign_network_manager: Arc::downgrade(&self.foreign_network_manager),
}))
.await
.unwrap();
@@ -525,13 +638,12 @@ impl PeerManager {
pub fn get_route(&self) -> Box<dyn Route + Send + Sync + 'static> {
match &self.route_algo_inst {
RouteAlgoInst::Rip(route) => Box::new(route.clone()),
RouteAlgoInst::Ospf(route) => Box::new(route.clone()),
RouteAlgoInst::None => panic!("no route"),
}
}
pub async fn list_routes(&self) -> Vec<crate::rpc::Route> {
pub async fn list_routes(&self) -> Vec<cli::Route> {
self.get_route().list_routes().await
}
@@ -539,6 +651,28 @@ impl PeerManager {
self.get_route().dump().await
}
pub async fn list_global_foreign_network(&self) -> ListGlobalForeignNetworkResponse {
let mut resp = ListGlobalForeignNetworkResponse::default();
let ret = self.get_route().list_foreign_network_info().await;
for info in ret.infos.iter() {
let entry = resp
.foreign_networks
.entry(info.key.as_ref().unwrap().peer_id)
.or_insert_with(|| Default::default());
let mut f = OneForeignNetwork::default();
f.network_name = info.key.as_ref().unwrap().network_name.clone();
f.peer_ids
.extend(info.value.as_ref().unwrap().foreign_peer_ids.iter());
f.last_updated = format!("{}", info.value.as_ref().unwrap().last_update.unwrap());
f.version = info.value.as_ref().unwrap().version;
entry.foreign_networks.push(f);
}
resp
}
async fn run_nic_packet_process_pipeline(&self, data: &mut ZCPacket) {
for pipeline in self.nic_packet_process_pipeline.read().await.iter().rev() {
pipeline.try_process_packet_from_nic(data).await;
@@ -584,8 +718,16 @@ impl PeerManager {
let mut is_exit_node = false;
let mut dst_peers = vec![];
// NOTE: currently we only support ipv4 and cidr is 24
if ipv4_addr.is_broadcast() || ipv4_addr.is_multicast() || ipv4_addr.octets()[3] == 255 {
let network_length = self
.global_ctx
.get_ipv4()
.map(|x| x.network_length())
.unwrap_or(24);
let ipv4_inet = cidr::Ipv4Inet::new(ipv4_addr, network_length).unwrap();
if ipv4_addr.is_broadcast()
|| ipv4_addr.is_multicast()
|| ipv4_addr == ipv4_inet.last_address()
{
dst_peers.extend(
self.peers
.list_routes()
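The hunk above replaces the hardcoded `/24` assumption (`octets()[3] == 255`) with the directed-broadcast address derived from the interface's actual prefix length via `cidr::Ipv4Inet::last_address()`. The equivalent check can be sketched with plain bit math on `u32` (the helper name is illustrative):

```rust
use std::net::Ipv4Addr;

// Treat an address as broadcast if it is the limited broadcast, a multicast
// address, or the last address of the local prefix (computed here with bit
// math instead of the cidr crate the real code uses).
fn is_network_broadcast(addr: Ipv4Addr, network_length: u8) -> bool {
    // host_mask has ones in all host bits; a /32 leaves no host bits at all
    let host_mask = (!0u32).checked_shr(network_length as u32).unwrap_or(0);
    let last_address = u32::from(addr) | host_mask;
    addr.is_broadcast() || addr.is_multicast() || u32::from(addr) == last_address
}

fn main() {
    assert!(is_network_broadcast(Ipv4Addr::new(10, 0, 0, 255), 24));
    assert!(!is_network_broadcast(Ipv4Addr::new(10, 0, 0, 5), 24));
    // In a /23, 10.0.0.255 is a normal host; 10.0.1.255 is the broadcast.
    assert!(!is_network_broadcast(Ipv4Addr::new(10, 0, 0, 255), 23));
    assert!(is_network_broadcast(Ipv4Addr::new(10, 0, 1, 255), 23));
}
```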
@@ -649,13 +791,23 @@ impl PeerManager {
.get_gateway_peer_id(*peer_id, next_hop_policy.clone())
.await
{
if let Err(e) = self.peers.send_msg_directly(msg, gateway).await {
errs.push(e);
}
} else if self.foreign_network_client.has_next_hop(*peer_id) {
if let Err(e) = self.foreign_network_client.send_msg(msg, *peer_id).await {
errs.push(e);
if self.peers.has_peer(gateway) {
if let Err(e) = self.peers.send_msg_directly(msg, gateway).await {
errs.push(e);
}
} else if self.foreign_network_client.has_next_hop(gateway) {
if let Err(e) = self.foreign_network_client.send_msg(msg, gateway).await {
errs.push(e);
}
} else {
tracing::warn!(
?gateway,
?peer_id,
"cannot send msg to peer through gateway"
);
}
} else {
tracing::debug!(?peer_id, "no gateway for peer");
}
}
@@ -686,14 +838,12 @@ impl PeerManager {
.await
.replace(Arc::downgrade(&self.foreign_network_client));
self.foreign_network_manager.run().await;
self.foreign_network_client.run().await;
}
pub async fn run(&self) -> Result<(), Error> {
match &self.route_algo_inst {
RouteAlgoInst::Ospf(route) => self.add_route(route.clone()).await,
RouteAlgoInst::Rip(route) => self.add_route(route.clone()).await,
RouteAlgoInst::None => {}
};
@@ -732,13 +882,6 @@ impl PeerManager {
self.nic_channel.clone()
}
pub fn get_basic_route(&self) -> Arc<BasicRoute> {
match &self.route_algo_inst {
RouteAlgoInst::Rip(route) => route.clone(),
_ => panic!("not rip route"),
}
}
pub fn get_foreign_network_manager(&self) -> Arc<ForeignNetworkManager> {
self.foreign_network_manager.clone()
}
@@ -747,8 +890,8 @@ impl PeerManager {
self.foreign_network_client.clone()
}
pub fn get_my_info(&self) -> crate::rpc::NodeInfo {
crate::rpc::NodeInfo {
pub fn get_my_info(&self) -> cli::NodeInfo {
cli::NodeInfo {
peer_id: self.my_peer_id,
ipv4_addr: self
.global_ctx
@@ -771,6 +914,14 @@ impl PeerManager {
.map(|x| x.to_string())
.collect(),
config: self.global_ctx.config.dump(),
version: EASYTIER_VERSION.to_string(),
feature_flag: Some(self.global_ctx.get_feature_flags()),
}
}
pub async fn wait(&self) {
while !self.tasks.lock().await.is_empty() {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
}
}
@@ -788,12 +939,11 @@ mod tests {
instance::listeners::get_listener_by_url,
peers::{
peer_manager::RouteAlgoType,
peer_rpc::tests::{MockService, TestRpcService, TestRpcServiceClient},
peer_rpc::tests::register_service,
tests::{connect_peer_manager, wait_route_appear},
},
rpc::NatType,
tunnel::common::tests::wait_for_condition,
tunnel::{TunnelConnector, TunnelListener},
proto::common::NatType,
tunnel::{common::tests::wait_for_condition, TunnelConnector, TunnelListener},
};
use super::PeerManager;
@@ -856,25 +1006,18 @@ mod tests {
#[values("tcp", "udp", "wg", "quic")] proto1: &str,
#[values("tcp", "udp", "wg", "quic")] proto2: &str,
) {
use crate::proto::{
rpc_impl::RpcController,
tests::{GreetingClientFactory, SayHelloRequest},
};
let peer_mgr_a = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
peer_mgr_a.get_peer_rpc_mgr().run_service(
100,
MockService {
prefix: "hello a".to_owned(),
}
.serve(),
);
register_service(&peer_mgr_a.peer_rpc_mgr, "", 0, "hello a");
let peer_mgr_b = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
let peer_mgr_c = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
peer_mgr_c.get_peer_rpc_mgr().run_service(
100,
MockService {
prefix: "hello c".to_owned(),
}
.serve(),
);
register_service(&peer_mgr_c.peer_rpc_mgr, "", 0, "hello c");
let mut listener1 = get_listener_by_url(
&format!("{}://0.0.0.0:31013", proto1).parse().unwrap(),
@@ -912,16 +1055,26 @@ mod tests {
.await
.unwrap();
let ret = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(100, peer_mgr_c.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), "abc".to_owned()).await;
ret
})
let stub = peer_mgr_a
.peer_rpc_mgr
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id,
peer_mgr_c.my_peer_id,
"".to_string(),
);
let ret = stub
.say_hello(
RpcController::default(),
SayHelloRequest {
name: "abc".to_string(),
},
)
.await
.unwrap();
assert_eq!(ret, "hello c abc");
assert_eq!(ret.greeting, "hello c abc!");
}
#[tokio::test]
@@ -7,12 +7,11 @@ use tokio::sync::RwLock;
use crate::{
common::{
error::Error,
global_ctx::{ArcGlobalCtx, GlobalCtxEvent},
global_ctx::{ArcGlobalCtx, GlobalCtxEvent, NetworkIdentity},
PeerId,
},
rpc::PeerConnInfo,
tunnel::packet_def::ZCPacket,
tunnel::TunnelError,
proto::cli::PeerConnInfo,
tunnel::{packet_def::ZCPacket, TunnelError},
};
use super::{
@@ -66,7 +65,7 @@ impl PeerMap {
}
pub fn has_peer(&self, peer_id: PeerId) -> bool {
self.peer_map.contains_key(&peer_id)
peer_id == self.my_peer_id || self.peer_map.contains_key(&peer_id)
}
pub async fn send_msg_directly(&self, msg: ZCPacket, dst_peer_id: PeerId) -> Result<(), Error> {
@@ -113,16 +112,28 @@ impl PeerMap {
.get_next_hop_with_policy(dst_peer_id, policy.clone())
.await
{
// for foreign network, gateway_peer_id may not connect to me
if self.has_peer(gateway_peer_id) {
return Some(gateway_peer_id);
}
// NOTICE: for foreign network, gateway_peer_id may not connect to me
return Some(gateway_peer_id);
}
}
None
}
pub async fn list_peers_own_foreign_network(
&self,
network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
let mut ret = Vec::new();
for route in self.routes.read().await.iter() {
let peers = route
.list_peers_own_foreign_network(&network_identity)
.await;
ret.extend(peers);
}
ret
}
pub async fn send_msg(
&self,
msg: ZCPacket,
@@ -240,3 +251,13 @@ impl PeerMap {
route_map
}
}
impl Drop for PeerMap {
fn drop(&mut self) {
tracing::debug!(
self.my_peer_id,
network = ?self.global_ctx.get_network_identity(),
"PeerMap is dropped"
);
}
}
File diff suppressed because it is too large
@@ -1,753 +0,0 @@
use std::{
net::Ipv4Addr,
sync::{atomic::AtomicU32, Arc},
time::{Duration, Instant},
};
use async_trait::async_trait;
use dashmap::DashMap;
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
};
use tokio_util::bytes::Bytes;
use tracing::Instrument;
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, stun::StunInfoCollectorTrait, PeerId},
peers::route_trait::{Route, RouteInterfaceBox},
rpc::{NatType, StunInfo},
tunnel::packet_def::{PacketType, ZCPacket},
};
use super::PeerPacketFilter;
const SEND_ROUTE_PERIOD_SEC: u64 = 60;
const SEND_ROUTE_FAST_REPLY_SEC: u64 = 5;
const ROUTE_EXPIRED_SEC: u64 = 70;
type Version = u32;
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug, PartialEq)]
// Derives can be passed through to the generated type:
pub struct SyncPeerInfo {
// means next hop in route table.
pub peer_id: PeerId,
pub cost: u32,
pub ipv4_addr: Option<Ipv4Addr>,
pub proxy_cidrs: Vec<String>,
pub hostname: Option<String>,
pub udp_stun_info: i8,
}
impl SyncPeerInfo {
pub fn new_self(from_peer: PeerId, global_ctx: &ArcGlobalCtx) -> Self {
SyncPeerInfo {
peer_id: from_peer,
cost: 0,
ipv4_addr: global_ctx.get_ipv4(),
proxy_cidrs: global_ctx
.get_proxy_cidrs()
.iter()
.map(|x| x.to_string())
.chain(global_ctx.get_vpn_portal_cidr().map(|x| x.to_string()))
.collect(),
hostname: Some(global_ctx.get_hostname()),
udp_stun_info: global_ctx
.get_stun_info_collector()
.get_stun_info()
.udp_nat_type as i8,
}
}
pub fn clone_for_route_table(&self, next_hop: PeerId, cost: u32, from: &Self) -> Self {
SyncPeerInfo {
peer_id: next_hop,
cost,
ipv4_addr: from.ipv4_addr.clone(),
proxy_cidrs: from.proxy_cidrs.clone(),
hostname: from.hostname.clone(),
udp_stun_info: from.udp_stun_info,
}
}
}
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug)]
pub struct SyncPeer {
pub myself: SyncPeerInfo,
pub neighbors: Vec<SyncPeerInfo>,
// the route table version of myself
pub version: Version,
// the route table version of peer that we have received last time
pub peer_version: Option<Version>,
// if we do not have latest peer version, need_reply is true
pub need_reply: bool,
}
impl SyncPeer {
pub fn new(
from_peer: PeerId,
_to_peer: PeerId,
neighbors: Vec<SyncPeerInfo>,
global_ctx: ArcGlobalCtx,
version: Version,
peer_version: Option<Version>,
need_reply: bool,
) -> Self {
SyncPeer {
myself: SyncPeerInfo::new_self(from_peer, &global_ctx),
neighbors,
version,
peer_version,
need_reply,
}
}
}
#[derive(Debug)]
struct SyncPeerFromRemote {
packet: SyncPeer,
last_update: std::time::Instant,
}
type SyncPeerFromRemoteMap = Arc<DashMap<PeerId, SyncPeerFromRemote>>;
#[derive(Debug)]
struct RouteTable {
route_info: DashMap<PeerId, SyncPeerInfo>,
ipv4_peer_id_map: DashMap<Ipv4Addr, PeerId>,
cidr_peer_id_map: DashMap<cidr::IpCidr, PeerId>,
}
impl RouteTable {
fn new() -> Self {
RouteTable {
route_info: DashMap::new(),
ipv4_peer_id_map: DashMap::new(),
cidr_peer_id_map: DashMap::new(),
}
}
fn copy_from(&self, other: &Self) {
self.route_info.clear();
for item in other.route_info.iter() {
let (k, v) = item.pair();
self.route_info.insert(*k, v.clone());
}
self.ipv4_peer_id_map.clear();
for item in other.ipv4_peer_id_map.iter() {
let (k, v) = item.pair();
self.ipv4_peer_id_map.insert(*k, *v);
}
self.cidr_peer_id_map.clear();
for item in other.cidr_peer_id_map.iter() {
let (k, v) = item.pair();
self.cidr_peer_id_map.insert(*k, *v);
}
}
}
#[derive(Debug, Clone)]
struct RouteVersion(Arc<AtomicU32>);
impl RouteVersion {
fn new() -> Self {
// RouteVersion(Arc::new(AtomicU32::new(rand::random())))
RouteVersion(Arc::new(AtomicU32::new(0)))
}
fn get(&self) -> Version {
self.0.load(std::sync::atomic::Ordering::Relaxed)
}
fn inc(&self) {
self.0.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
pub struct BasicRoute {
my_peer_id: PeerId,
global_ctx: ArcGlobalCtx,
interface: Arc<Mutex<Option<RouteInterfaceBox>>>,
route_table: Arc<RouteTable>,
sync_peer_from_remote: SyncPeerFromRemoteMap,
tasks: Mutex<JoinSet<()>>,
need_sync_notifier: Arc<tokio::sync::Notify>,
version: RouteVersion,
myself: Arc<RwLock<SyncPeerInfo>>,
last_send_time_map: Arc<DashMap<PeerId, (Version, Option<Version>, Instant)>>,
}
impl BasicRoute {
pub fn new(my_peer_id: PeerId, global_ctx: ArcGlobalCtx) -> Self {
BasicRoute {
my_peer_id,
global_ctx: global_ctx.clone(),
interface: Arc::new(Mutex::new(None)),
route_table: Arc::new(RouteTable::new()),
sync_peer_from_remote: Arc::new(DashMap::new()),
tasks: Mutex::new(JoinSet::new()),
need_sync_notifier: Arc::new(tokio::sync::Notify::new()),
version: RouteVersion::new(),
myself: Arc::new(RwLock::new(SyncPeerInfo::new_self(
my_peer_id.into(),
&global_ctx,
))),
last_send_time_map: Arc::new(DashMap::new()),
}
}
fn update_route_table(
my_id: PeerId,
sync_peer_reqs: SyncPeerFromRemoteMap,
route_table: Arc<RouteTable>,
) {
tracing::trace!(my_id = ?my_id, route_table = ?route_table, "update route table");
let new_route_table = Arc::new(RouteTable::new());
for item in sync_peer_reqs.iter() {
Self::update_route_table_with_req(my_id, &item.value().packet, new_route_table.clone());
}
route_table.copy_from(&new_route_table);
}
async fn update_myself(
my_peer_id: PeerId,
myself: &Arc<RwLock<SyncPeerInfo>>,
global_ctx: &ArcGlobalCtx,
) -> bool {
let new_myself = SyncPeerInfo::new_self(my_peer_id, &global_ctx);
if *myself.read().await != new_myself {
*myself.write().await = new_myself;
true
} else {
false
}
}
fn update_route_table_with_req(my_id: PeerId, packet: &SyncPeer, route_table: Arc<RouteTable>) {
let peer_id = packet.myself.peer_id.clone();
let update = |cost: u32, peer_info: &SyncPeerInfo| {
let node_id: PeerId = peer_info.peer_id.into();
let ret = route_table
.route_info
.entry(node_id.clone().into())
.and_modify(|info| {
if info.cost > cost {
*info = info.clone_for_route_table(peer_id, cost, &peer_info);
}
})
.or_insert(
peer_info
.clone()
.clone_for_route_table(peer_id, cost, &peer_info),
)
.value()
.clone();
if ret.cost > 6 {
tracing::error!(
"cost too large: {}, may have lost connection, remove it",
ret.cost
);
route_table.route_info.remove(&node_id);
}
tracing::trace!(
"update route info, to: {:?}, gateway: {:?}, cost: {}, peer: {:?}",
node_id,
peer_id,
cost,
&peer_info
);
if let Some(ipv4) = peer_info.ipv4_addr {
route_table
.ipv4_peer_id_map
.insert(ipv4.clone(), node_id.clone().into());
}
for cidr in peer_info.proxy_cidrs.iter() {
let cidr: cidr::IpCidr = cidr.parse().unwrap();
route_table
.cidr_peer_id_map
.insert(cidr, node_id.clone().into());
}
};
for neighbor in packet.neighbors.iter() {
if neighbor.peer_id == my_id {
continue;
}
update(neighbor.cost + 1, &neighbor);
tracing::trace!("route info: {:?}", neighbor);
}
// add the sender peer to route info
update(1, &packet.myself);
tracing::trace!("my_id: {:?}, current route table: {:?}", my_id, route_table);
}
async fn send_sync_peer_request(
interface: &RouteInterfaceBox,
my_peer_id: PeerId,
global_ctx: ArcGlobalCtx,
peer_id: PeerId,
route_table: Arc<RouteTable>,
my_version: Version,
peer_version: Option<Version>,
need_reply: bool,
) -> Result<(), Error> {
let mut route_info_copy: Vec<SyncPeerInfo> = Vec::new();
// copy the route info
for item in route_table.route_info.iter() {
let (k, v) = item.pair();
route_info_copy.push(v.clone().clone_for_route_table(*k, v.cost, &v));
}
let msg = SyncPeer::new(
my_peer_id,
peer_id,
route_info_copy,
global_ctx,
my_version,
peer_version,
need_reply,
);
// TODO: this may exceed the MTU of the tunnel
interface
.send_route_packet(postcard::to_allocvec(&msg).unwrap().into(), 1, peer_id)
.await
}
async fn sync_peer_periodically(&self) {
let route_table = self.route_table.clone();
let global_ctx = self.global_ctx.clone();
let my_peer_id = self.my_peer_id.clone();
let interface = self.interface.clone();
let notifier = self.need_sync_notifier.clone();
let sync_peer_from_remote = self.sync_peer_from_remote.clone();
let myself = self.myself.clone();
let version = self.version.clone();
let last_send_time_map = self.last_send_time_map.clone();
self.tasks.lock().await.spawn(
async move {
loop {
if Self::update_myself(my_peer_id,&myself, &global_ctx).await {
version.inc();
tracing::info!(
my_id = ?my_peer_id,
version = version.get(),
"update route table version when myself changed"
);
}
let lockd_interface = interface.lock().await;
let interface = lockd_interface.as_ref().unwrap();
let last_send_time_map_new = DashMap::new();
let peers = interface.list_peers().await;
for peer in peers.iter() {
let last_send_time = last_send_time_map.get(peer).map(|v| *v).unwrap_or((0, None, Instant::now() - Duration::from_secs(3600)));
let my_version_peer_saved = sync_peer_from_remote.get(peer).and_then(|v| v.packet.peer_version);
let peer_have_latest_version = my_version_peer_saved == Some(version.get());
if peer_have_latest_version && last_send_time.2.elapsed().as_secs() < SEND_ROUTE_PERIOD_SEC {
last_send_time_map_new.insert(*peer, last_send_time);
continue;
}
tracing::trace!(
my_id = ?my_peer_id,
dst_peer_id = ?peer,
version = version.get(),
?my_version_peer_saved,
last_send_version = ?last_send_time.0,
last_send_peer_version = ?last_send_time.1,
last_send_elapse = ?last_send_time.2.elapsed().as_secs(),
"need send route info"
);
let peer_version_we_saved = sync_peer_from_remote.get(&peer).and_then(|v| Some(v.packet.version));
last_send_time_map_new.insert(*peer, (version.get(), peer_version_we_saved, Instant::now()));
let ret = Self::send_sync_peer_request(
interface,
my_peer_id.clone(),
global_ctx.clone(),
*peer,
route_table.clone(),
version.get(),
peer_version_we_saved,
!peer_have_latest_version,
)
.await;
match &ret {
Ok(_) => {
tracing::trace!("send sync peer request to peer: {}", peer);
}
Err(Error::PeerNoConnectionError(_)) => {
tracing::trace!("peer {} no connection", peer);
}
Err(e) => {
tracing::error!(
"send sync peer request to peer: {} error: {:?}",
peer,
e
);
}
};
}
last_send_time_map.clear();
for item in last_send_time_map_new.iter() {
let (k, v) = item.pair();
last_send_time_map.insert(*k, *v);
}
tokio::select! {
_ = notifier.notified() => {
tracing::trace!("sync peer request triggered by notifier");
}
_ = tokio::time::sleep(Duration::from_secs(1)) => {
tracing::trace!("sync peer request triggered by timeout");
}
}
}
}
.instrument(
tracing::info_span!("sync_peer_periodically", my_id = ?self.my_peer_id, global_ctx = ?self.global_ctx),
),
);
}
async fn check_expired_sync_peer_from_remote(&self) {
let route_table = self.route_table.clone();
let my_peer_id = self.my_peer_id.clone();
let sync_peer_from_remote = self.sync_peer_from_remote.clone();
let notifier = self.need_sync_notifier.clone();
let interface = self.interface.clone();
let version = self.version.clone();
self.tasks.lock().await.spawn(async move {
loop {
let mut need_update_route = false;
let now = std::time::Instant::now();
let mut need_remove = Vec::new();
let connected_peers = interface.lock().await.as_ref().unwrap().list_peers().await;
for item in sync_peer_from_remote.iter() {
let (k, v) = item.pair();
if now.duration_since(v.last_update).as_secs() > ROUTE_EXPIRED_SEC
|| !connected_peers.contains(k)
{
need_update_route = true;
need_remove.insert(0, k.clone());
}
}
for k in need_remove.iter() {
tracing::warn!("remove expired sync peer: {:?}", k);
sync_peer_from_remote.remove(k);
}
if need_update_route {
Self::update_route_table(
my_peer_id,
sync_peer_from_remote.clone(),
route_table.clone(),
);
version.inc();
tracing::info!(
my_id = ?my_peer_id,
version = version.get(),
"update route table when check expired peer"
);
notifier.notify_one();
}
tokio::time::sleep(Duration::from_secs(1)).await;
}
});
}
fn get_peer_id_for_proxy(&self, ipv4: &Ipv4Addr) -> Option<PeerId> {
let ipv4 = std::net::IpAddr::V4(*ipv4);
for item in self.route_table.cidr_peer_id_map.iter() {
let (k, v) = item.pair();
if k.contains(&ipv4) {
return Some(*v);
}
}
None
}
#[tracing::instrument(skip(self, packet), fields(my_id = ?self.my_peer_id, ctx = ?self.global_ctx))]
async fn handle_route_packet(&self, src_peer_id: PeerId, packet: Bytes) {
let packet = postcard::from_bytes::<SyncPeer>(&packet).unwrap();
let p = &packet;
let mut updated = true;
assert_eq!(packet.myself.peer_id, src_peer_id);
self.sync_peer_from_remote
.entry(packet.myself.peer_id.into())
.and_modify(|v| {
if v.packet.myself == p.myself && v.packet.neighbors == p.neighbors {
updated = false;
} else {
v.packet = p.clone();
}
v.packet.version = p.version;
v.packet.peer_version = p.peer_version;
v.last_update = std::time::Instant::now();
})
.or_insert(SyncPeerFromRemote {
packet: p.clone(),
last_update: std::time::Instant::now(),
});
if updated {
Self::update_route_table(
self.my_peer_id.clone(),
self.sync_peer_from_remote.clone(),
self.route_table.clone(),
);
self.version.inc();
tracing::info!(
my_id = ?self.my_peer_id,
?p,
version = self.version.get(),
"update route table when receive route packet"
);
}
if packet.need_reply {
self.last_send_time_map
.entry(packet.myself.peer_id.into())
.and_modify(|v| {
const FAST_REPLY_DURATION: u64 =
SEND_ROUTE_PERIOD_SEC - SEND_ROUTE_FAST_REPLY_SEC;
if v.0 != self.version.get() || v.1 != Some(p.version) {
v.2 = Instant::now() - Duration::from_secs(3600);
} else if v.2.elapsed().as_secs() < FAST_REPLY_DURATION {
// do not send same version route info too frequently
v.2 = Instant::now() - Duration::from_secs(FAST_REPLY_DURATION);
}
});
}
if updated || packet.need_reply {
self.need_sync_notifier.notify_one();
}
}
}
#[async_trait]
impl Route for BasicRoute {
async fn open(&self, interface: RouteInterfaceBox) -> Result<u8, ()> {
*self.interface.lock().await = Some(interface);
self.sync_peer_periodically().await;
self.check_expired_sync_peer_from_remote().await;
Ok(1)
}
async fn close(&self) {}
async fn get_next_hop(&self, dst_peer_id: PeerId) -> Option<PeerId> {
match self.route_table.route_info.get(&dst_peer_id) {
Some(info) => {
return Some(info.peer_id.clone().into());
}
None => {
tracing::error!("no route info for dst_peer_id: {}", dst_peer_id);
return None;
}
}
}
async fn list_routes(&self) -> Vec<crate::rpc::Route> {
let mut routes = Vec::new();
let parse_route_info = |real_peer_id: PeerId, route_info: &SyncPeerInfo| {
let mut route = crate::rpc::Route::default();
route.ipv4_addr = if let Some(ipv4_addr) = route_info.ipv4_addr {
ipv4_addr.to_string()
} else {
"".to_string()
};
route.peer_id = real_peer_id;
route.next_hop_peer_id = route_info.peer_id;
route.cost = route_info.cost as i32;
route.proxy_cidrs = route_info.proxy_cidrs.clone();
route.hostname = route_info.hostname.clone().unwrap_or_default();
let mut stun_info = StunInfo::default();
if let Ok(udp_nat_type) = NatType::try_from(route_info.udp_stun_info as i32) {
stun_info.set_udp_nat_type(udp_nat_type);
}
route.stun_info = Some(stun_info);
route
};
self.route_table.route_info.iter().for_each(|item| {
routes.push(parse_route_info(*item.key(), item.value()));
});
routes
}
async fn get_peer_id_by_ipv4(&self, ipv4_addr: &Ipv4Addr) -> Option<PeerId> {
if let Some(peer_id) = self.route_table.ipv4_peer_id_map.get(ipv4_addr) {
return Some(*peer_id);
}
if let Some(peer_id) = self.get_peer_id_for_proxy(ipv4_addr) {
return Some(peer_id);
}
tracing::info!("no peer id for ipv4: {}", ipv4_addr);
return None;
}
}
#[async_trait::async_trait]
impl PeerPacketFilter for BasicRoute {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type == PacketType::Route as u8 {
let b = packet.payload().to_vec();
self.handle_route_packet(hdr.from_peer_id.get(), b.into())
.await;
None
} else {
Some(packet)
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use crate::{
common::{global_ctx::tests::get_mock_global_ctx, PeerId},
connector::udp_hole_punch::tests::replace_stun_info_collector,
peers::{
peer_manager::{PeerManager, RouteAlgoType},
peer_rip_route::Version,
tests::{connect_peer_manager, wait_route_appear},
},
rpc::NatType,
};
async fn create_mock_pmgr() -> Arc<PeerManager> {
let (s, _r) = tokio::sync::mpsc::channel(1000);
let peer_mgr = Arc::new(PeerManager::new(
RouteAlgoType::Rip,
get_mock_global_ctx(),
s,
));
replace_stun_info_collector(peer_mgr.clone(), NatType::Unknown);
peer_mgr.run().await.unwrap();
peer_mgr
}
#[tokio::test]
async fn test_rip_route() {
let peer_mgr_a = create_mock_pmgr().await;
let peer_mgr_b = create_mock_pmgr().await;
let peer_mgr_c = create_mock_pmgr().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
connect_peer_manager(peer_mgr_b.clone(), peer_mgr_c.clone()).await;
wait_route_appear(peer_mgr_a.clone(), peer_mgr_b.clone())
.await
.unwrap();
wait_route_appear(peer_mgr_a.clone(), peer_mgr_c.clone())
.await
.unwrap();
let mgrs = vec![peer_mgr_a.clone(), peer_mgr_b.clone(), peer_mgr_c.clone()];
tokio::time::sleep(tokio::time::Duration::from_secs(4)).await;
let check_version = |version: Version, peer_id: PeerId, mgrs: &Vec<Arc<PeerManager>>| {
for mgr in mgrs.iter() {
tracing::warn!(
"check version: {:?}, {:?}, {:?}, {:?}",
version,
peer_id,
mgr,
mgr.get_basic_route().sync_peer_from_remote
);
assert_eq!(
version,
mgr.get_basic_route()
.sync_peer_from_remote
.get(&peer_id)
.unwrap()
.packet
.version,
);
assert_eq!(
mgr.get_basic_route()
.sync_peer_from_remote
.get(&peer_id)
.unwrap()
.packet
.peer_version
.unwrap(),
mgr.get_basic_route().version.get()
);
}
};
let check_sanity = || {
// check that the peer versions recorded in the other peer mgrs are correct.
check_version(
peer_mgr_b.get_basic_route().version.get(),
peer_mgr_b.my_peer_id(),
&vec![peer_mgr_a.clone(), peer_mgr_c.clone()],
);
check_version(
peer_mgr_a.get_basic_route().version.get(),
peer_mgr_a.my_peer_id(),
&vec![peer_mgr_b.clone()],
);
check_version(
peer_mgr_c.get_basic_route().version.get(),
peer_mgr_c.my_peer_id(),
&vec![peer_mgr_b.clone()],
);
};
check_sanity();
let versions = mgrs
.iter()
.map(|x| x.get_basic_route().version.get())
.collect::<Vec<_>>();
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
let versions2 = mgrs
.iter()
.map(|x| x.get_basic_route().version.get())
.collect::<Vec<_>>();
assert_eq!(versions, versions2);
check_sanity();
assert!(peer_mgr_a.get_basic_route().version.get() <= 3);
assert!(peer_mgr_b.get_basic_route().version.get() <= 6);
assert!(peer_mgr_c.get_basic_route().version.get() <= 3);
}
}
@@ -1,27 +1,11 @@
use std::{
sync::{
atomic::{AtomicBool, AtomicU32, Ordering},
Arc,
},
time::Instant,
};
use std::sync::{Arc, Mutex};
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use futures::{SinkExt, StreamExt};
use prost::Message;
use tarpc::{server::Channel, transport::channel::UnboundedChannel};
use tokio::{
sync::mpsc::{self, UnboundedSender},
task::JoinSet,
};
use tracing::Instrument;
use futures::StreamExt;
use tokio::task::JoinSet;
use crate::{
common::{error::Error, PeerId},
rpc::TaRpcPacket,
proto::rpc_impl,
tunnel::packet_def::{PacketType, ZCPacket},
};
@@ -38,33 +22,13 @@ pub trait PeerRpcManagerTransport: Send + Sync + 'static {
async fn recv(&self) -> Result<ZCPacket, Error>;
}
type PacketSender = UnboundedSender<ZCPacket>;
struct PeerRpcEndPoint {
peer_id: PeerId,
packet_sender: PacketSender,
create_time: AtomicCell<Instant>,
finished: Arc<AtomicBool>,
tasks: JoinSet<()>,
}
type PeerRpcEndPointCreator =
Box<dyn Fn(PeerId, PeerRpcTransactId) -> PeerRpcEndPoint + Send + Sync + 'static>;
#[derive(Hash, Eq, PartialEq, Clone)]
struct PeerRpcClientCtxKey(PeerId, PeerRpcServiceId, PeerRpcTransactId);
// handle rpc request from one peer
pub struct PeerRpcManager {
service_map: Arc<DashMap<PeerRpcServiceId, PacketSender>>,
tasks: JoinSet<()>,
tspt: Arc<Box<dyn PeerRpcManagerTransport>>,
rpc_client: rpc_impl::client::Client,
rpc_server: rpc_impl::server::Server,
service_registry: Arc<DashMap<PeerRpcServiceId, PeerRpcEndPointCreator>>,
peer_rpc_endpoints: Arc<DashMap<PeerRpcClientCtxKey, PeerRpcEndPoint>>,
client_resp_receivers: Arc<DashMap<PeerRpcClientCtxKey, PacketSender>>,
transact_id: AtomicU32,
tasks: Arc<Mutex<JoinSet<()>>>,
}
impl std::fmt::Debug for PeerRpcManager {
@@ -75,470 +39,82 @@ impl std::fmt::Debug for PeerRpcManager {
}
}
struct PacketMerger {
first_piece: Option<TaRpcPacket>,
pieces: Vec<TaRpcPacket>,
}
impl PacketMerger {
fn new() -> Self {
Self {
first_piece: None,
pieces: Vec::new(),
}
}
fn try_merge_pieces(&self) -> Option<TaRpcPacket> {
if self.first_piece.is_none() || self.pieces.is_empty() {
return None;
}
for p in &self.pieces {
// some piece is missing
if p.total_pieces == 0 {
return None;
}
}
// all pieces are received
let mut content = Vec::new();
for p in &self.pieces {
content.extend_from_slice(&p.content);
}
let mut tmpl_packet = self.first_piece.as_ref().unwrap().clone();
tmpl_packet.total_pieces = 1;
tmpl_packet.piece_idx = 0;
tmpl_packet.content = content;
Some(tmpl_packet)
}
fn feed(
&mut self,
packet: ZCPacket,
expected_tid: Option<PeerRpcTransactId>,
) -> Result<Option<TaRpcPacket>, Error> {
let payload = packet.payload();
let rpc_packet =
TaRpcPacket::decode(payload).map_err(|e| Error::MessageDecodeError(e.to_string()))?;
if expected_tid.is_some() && rpc_packet.transact_id != expected_tid.unwrap() {
return Ok(None);
}
let total_pieces = rpc_packet.total_pieces;
let piece_idx = rpc_packet.piece_idx;
// for compatibility with old version
if total_pieces == 0 && piece_idx == 0 {
return Ok(Some(rpc_packet));
}
if total_pieces > 100 || total_pieces == 0 {
return Err(Error::MessageDecodeError(format!(
"total_pieces is invalid: {}",
total_pieces
)));
}
if piece_idx >= total_pieces {
return Err(Error::MessageDecodeError(
"piece_idx >= total_pieces".to_owned(),
));
}
if self.first_piece.is_none()
|| self.first_piece.as_ref().unwrap().transact_id != rpc_packet.transact_id
|| self.first_piece.as_ref().unwrap().from_peer != rpc_packet.from_peer
{
self.first_piece = Some(rpc_packet.clone());
self.pieces.clear();
}
self.pieces
.resize(total_pieces as usize, Default::default());
self.pieces[piece_idx as usize] = rpc_packet;
Ok(self.try_merge_pieces())
}
}
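The reassembly contract that `PacketMerger` enforces pairs with the MTU chunking done on the send side in `build_rpc_packet`: content is split into `ceil(len / MTU)` pieces, and merging only succeeds once every index is filled. A minimal round-trip sketch of that arithmetic (illustrative names; the constant stands in for the real `RPC_PACKET_CONTENT_MTU`):

```rust
// Stand-in for the real RPC_PACKET_CONTENT_MTU constant (illustrative value).
const RPC_PACKET_CONTENT_MTU: usize = 4;

// Split `content` into at-most-MTU-sized pieces, as build_rpc_packet does.
fn split_pieces(content: &[u8]) -> Vec<Vec<u8>> {
    let total_pieces = (content.len() + RPC_PACKET_CONTENT_MTU - 1) / RPC_PACKET_CONTENT_MTU;
    let mut pieces = Vec::with_capacity(total_pieces);
    let mut cur_offset = 0;
    while cur_offset < content.len() {
        let cur_len = RPC_PACKET_CONTENT_MTU.min(content.len() - cur_offset);
        pieces.push(content[cur_offset..cur_offset + cur_len].to_vec());
        cur_offset += cur_len;
    }
    pieces
}

// Concatenate pieces in index order, as try_merge_pieces does once all arrive.
fn merge_pieces(pieces: &[Vec<u8>]) -> Vec<u8> {
    pieces.iter().flat_map(|p| p.iter().copied()).collect()
}

fn main() {
    let content: Vec<u8> = (0u8..10).collect();
    let pieces = split_pieces(&content);
    assert_eq!(pieces.len(), 3); // ceil(10 / 4)
    assert_eq!(merge_pieces(&pieces), content);
}
```

Note that `feed` resets its state whenever the `transact_id` or `from_peer` changes, so interleaved fragments from two different transactions cannot be merged into one payload.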
impl PeerRpcManager {
pub fn new(tspt: impl PeerRpcManagerTransport) -> Self {
Self {
service_map: Arc::new(DashMap::new()),
tasks: JoinSet::new(),
tspt: Arc::new(Box::new(tspt)),
rpc_client: rpc_impl::client::Client::new(),
rpc_server: rpc_impl::server::Server::new(),
service_registry: Arc::new(DashMap::new()),
peer_rpc_endpoints: Arc::new(DashMap::new()),
client_resp_receivers: Arc::new(DashMap::new()),
transact_id: AtomicU32::new(0),
tasks: Arc::new(Mutex::new(JoinSet::new())),
}
}
pub fn run_service<S, Req>(self: &Self, service_id: PeerRpcServiceId, s: S) -> ()
where
S: tarpc::server::Serve<Req> + Clone + Send + Sync + 'static,
Req: Send + 'static + serde::Serialize + for<'a> serde::Deserialize<'a>,
S::Resp:
Send + std::fmt::Debug + 'static + serde::Serialize + for<'a> serde::Deserialize<'a>,
S::Fut: Send + 'static,
{
let tspt = self.tspt.clone();
let creator = Box::new(move |peer_id: PeerId, transact_id: PeerRpcTransactId| {
let mut tasks = JoinSet::new();
let (packet_sender, mut packet_receiver) = mpsc::unbounded_channel();
let (mut client_transport, server_transport) = tarpc::transport::channel::unbounded();
let server = tarpc::server::BaseChannel::with_defaults(server_transport);
let finished = Arc::new(AtomicBool::new(false));
let my_peer_id_clone = tspt.my_peer_id();
let peer_id_clone = peer_id.clone();
let o = server.execute(s.clone());
tasks.spawn(o);
let tspt = tspt.clone();
let finished_clone = finished.clone();
tasks.spawn(async move {
let mut packet_merger = PacketMerger::new();
loop {
tokio::select! {
Some(resp) = client_transport.next() => {
tracing::debug!(resp = ?resp, ?transact_id, ?peer_id, "server recv packet from service provider");
if resp.is_err() {
tracing::warn!(err = ?resp.err(),
"[PEER RPC MGR] client_transport in server side got channel error, ignore it.");
continue;
}
let resp = resp.unwrap();
let serialized_resp = postcard::to_allocvec(&resp);
if serialized_resp.is_err() {
tracing::error!(error = ?serialized_resp.err(), "serialize resp failed");
continue;
}
let msgs = Self::build_rpc_packet(
tspt.my_peer_id(),
peer_id,
service_id,
transact_id,
false,
serialized_resp.as_ref().unwrap(),
);
for msg in msgs {
if let Err(e) = tspt.send(msg, peer_id).await {
tracing::error!(error = ?e, peer_id = ?peer_id, service_id = ?service_id, "send resp to peer failed");
break;
}
}
finished_clone.store(true, Ordering::Relaxed);
}
Some(packet) = packet_receiver.recv() => {
tracing::trace!("recv packet from peer, packet: {:?}", packet);
let info = match packet_merger.feed(packet, None) {
Err(e) => {
tracing::error!(error = ?e, "feed packet to merger failed");
continue;
},
Ok(None) => {
continue;
},
Ok(Some(info)) => {
info
}
};
assert_eq!(info.service_id, service_id);
assert_eq!(info.from_peer, peer_id);
assert_eq!(info.transact_id, transact_id);
let decoded_ret = postcard::from_bytes(&info.content.as_slice());
if let Err(e) = decoded_ret {
tracing::error!(error = ?e, "decode rpc packet failed");
continue;
}
let decoded: tarpc::ClientMessage<Req> = decoded_ret.unwrap();
if let Err(e) = client_transport.send(decoded).await {
tracing::error!(error = ?e, "send req to client transport failed");
}
}
else => {
tracing::warn!("[PEER RPC MGR] service runner destroy, peer_id: {}, service_id: {}", peer_id, service_id);
}
}
}
}.instrument(tracing::info_span!("service_runner", my_id = ?my_peer_id_clone, peer_id = ?peer_id_clone, service_id = ?service_id)));
tracing::info!(
"[PEER RPC MGR] create new service endpoint for peer {}, service {}",
peer_id,
service_id
);
return PeerRpcEndPoint {
peer_id,
packet_sender,
create_time: AtomicCell::new(Instant::now()),
finished,
tasks,
};
// let resp = client_transport.next().await;
});
if let Some(_) = self.service_registry.insert(service_id, creator) {
panic!(
"[PEER RPC MGR] service {} is already registered",
service_id
);
}
tracing::info!(
"[PEER RPC MGR] register service {} succeed, my_node_id {}",
service_id,
self.tspt.my_peer_id()
)
}
fn parse_rpc_packet(packet: &ZCPacket) -> Result<TaRpcPacket, Error> {
let payload = packet.payload();
TaRpcPacket::decode(payload).map_err(|e| Error::MessageDecodeError(e.to_string()))
}
fn build_rpc_packet(
from_peer: PeerId,
to_peer: PeerId,
service_id: PeerRpcServiceId,
transact_id: PeerRpcTransactId,
is_req: bool,
content: &Vec<u8>,
) -> Vec<ZCPacket> {
let mut ret = Vec::new();
let content_mtu = RPC_PACKET_CONTENT_MTU;
let total_pieces = (content.len() + content_mtu - 1) / content_mtu;
let mut cur_offset = 0;
while cur_offset < content.len() {
let mut cur_len = content_mtu;
if cur_offset + cur_len > content.len() {
cur_len = content.len() - cur_offset;
}
let mut cur_content = Vec::new();
cur_content.extend_from_slice(&content[cur_offset..cur_offset + cur_len]);
let cur_packet = TaRpcPacket {
from_peer,
to_peer,
service_id,
transact_id,
is_req,
total_pieces: total_pieces as u32,
piece_idx: (cur_offset / content_mtu) as u32,
content: cur_content,
};
cur_offset += cur_len;
let mut buf = Vec::new();
cur_packet.encode(&mut buf).unwrap();
let mut zc_packet = ZCPacket::new_with_payload(&buf);
zc_packet.fill_peer_manager_hdr(from_peer, to_peer, PacketType::TaRpc as u8);
ret.push(zc_packet);
}
ret
}
pub fn run(&self) {
self.rpc_client.run();
self.rpc_server.run();
let (server_tx, mut server_rx) = (
self.rpc_server.get_transport_sink(),
self.rpc_server.get_transport_stream(),
);
let (client_tx, mut client_rx) = (
self.rpc_client.get_transport_sink(),
self.rpc_client.get_transport_stream(),
);
let tspt = self.tspt.clone();
let service_registry = self.service_registry.clone();
let peer_rpc_endpoints = self.peer_rpc_endpoints.clone();
let client_resp_receivers = self.client_resp_receivers.clone();
tokio::spawn(async move {
self.tasks.lock().unwrap().spawn(async move {
loop {
let packet = tokio::select! {
Some(Ok(packet)) = server_rx.next() => {
tracing::trace!(?packet, "recv rpc packet from server");
packet
}
Some(Ok(packet)) = client_rx.next() => {
tracing::trace!(?packet, "recv rpc packet from client");
packet
}
else => {
tracing::warn!("rpc transport read aborted, exiting");
break;
}
};
let dst_peer_id = packet.peer_manager_header().unwrap().to_peer_id.into();
if let Err(e) = tspt.send(packet, dst_peer_id).await {
tracing::error!(error = ?e, dst_peer_id = ?dst_peer_id, "send to peer failed");
}
}
});
let tspt = self.tspt.clone();
self.tasks.lock().unwrap().spawn(async move {
loop {
let Ok(o) = tspt.recv().await else {
tracing::warn!("peer rpc transport read aborted, exiting");
break;
};
let info = Self::parse_rpc_packet(&o).unwrap();
tracing::debug!(?info, "recv rpc packet from peer");
if info.is_req {
if !service_registry.contains_key(&info.service_id) {
tracing::warn!(
"service {} not found, my_node_id: {}",
info.service_id,
tspt.my_peer_id()
);
continue;
}
let endpoint = peer_rpc_endpoints
.entry(PeerRpcClientCtxKey(
info.from_peer,
info.service_id,
info.transact_id,
))
.or_insert_with(|| {
service_registry.get(&info.service_id).unwrap()(
info.from_peer,
info.transact_id,
)
});
endpoint.packet_sender.send(o).unwrap();
} else {
if let Some(a) = client_resp_receivers.get(&PeerRpcClientCtxKey(
info.from_peer,
info.service_id,
info.transact_id,
)) {
tracing::trace!("recv resp: {:?}", info);
if let Err(e) = a.send(o) {
tracing::error!(error = ?e, "send resp to client failed");
}
} else {
tracing::warn!("client resp receiver not found, info: {:?}", info);
}
if o.peer_manager_header().unwrap().packet_type == PacketType::RpcReq as u8 {
server_tx.send(o).await.unwrap();
continue;
} else if o.peer_manager_header().unwrap().packet_type == PacketType::RpcResp as u8
{
client_tx.send(o).await.unwrap();
continue;
}
}
});
let peer_rpc_endpoints = self.peer_rpc_endpoints.clone();
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
peer_rpc_endpoints.retain(|_, v| {
v.create_time.load().elapsed().as_secs() < 30
&& !v.finished.load(Ordering::Relaxed)
});
}
});
}
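The cleanup loop at the end of `run` garbage-collects RPC endpoints with a `retain` predicate: an endpoint survives only while it is younger than 30 seconds and its `finished` flag is unset. A self-contained sketch of that predicate (types here are illustrative, not the real `PeerRpcEndPoint`):

```rust
use std::time::{Duration, Instant};

// Illustrative stand-in for PeerRpcEndPoint's GC-relevant fields.
struct Endpoint {
    create_time: Instant,
    finished: bool,
}

// Mirrors the retain predicate: young AND not finished.
fn should_retain(ep: &Endpoint, now: Instant) -> bool {
    now.duration_since(ep.create_time) < Duration::from_secs(30) && !ep.finished
}

fn main() {
    let start = Instant::now();
    let fresh = Endpoint { create_time: start, finished: false };
    let done = Endpoint { create_time: start, finished: true };
    assert!(should_retain(&fresh, start));
    // A finished endpoint is dropped immediately, even if young.
    assert!(!should_retain(&done, start));
    // A 31-second-old endpoint is dropped even if still unfinished.
    let later = start + Duration::from_secs(31);
    assert!(!should_retain(&fresh, later));
}
```

The 30-second bound acts as a fallback for endpoints whose `finished` flag never gets set (e.g. the peer disappeared mid-transaction).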
#[tracing::instrument(skip(f))]
pub async fn do_client_rpc_scoped<Resp, Req, RpcRet, Fut>(
&self,
service_id: PeerRpcServiceId,
dst_peer_id: PeerId,
f: impl FnOnce(UnboundedChannel<Resp, Req>) -> Fut,
) -> RpcRet
where
Resp: serde::Serialize
+ for<'a> serde::Deserialize<'a>
+ Send
+ Sync
+ std::fmt::Debug
+ 'static,
Req: serde::Serialize
+ for<'a> serde::Deserialize<'a>
+ Send
+ Sync
+ std::fmt::Debug
+ 'static,
Fut: std::future::Future<Output = RpcRet>,
{
let mut tasks = JoinSet::new();
let (packet_sender, mut packet_receiver) = mpsc::unbounded_channel();
pub fn rpc_client(&self) -> &rpc_impl::client::Client {
&self.rpc_client
}
let (client_transport, server_transport) =
tarpc::transport::channel::unbounded::<Resp, Req>();
let (mut server_s, mut server_r) = server_transport.split();
let transact_id = self
.transact_id
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let tspt = self.tspt.clone();
tasks.spawn(async move {
while let Some(a) = server_r.next().await {
if a.is_err() {
tracing::error!(error = ?a.err(), "channel error");
continue;
}
let req = postcard::to_allocvec(&a.unwrap());
if req.is_err() {
tracing::error!(error = ?req.err(), "postcard serialize failed");
continue;
}
let packets = Self::build_rpc_packet(
tspt.my_peer_id(),
dst_peer_id,
service_id,
transact_id,
true,
req.as_ref().unwrap(),
);
tracing::debug!(?packets, ?req, ?transact_id, "client send rpc packet to peer");
for packet in packets {
if let Err(e) = tspt.send(packet, dst_peer_id).await {
tracing::error!(error = ?e, dst_peer_id = ?dst_peer_id, "send to peer failed");
break;
}
}
}
tracing::warn!("[PEER RPC MGR] server transport read aborted");
});
tasks.spawn(async move {
let mut packet_merger = PacketMerger::new();
while let Some(packet) = packet_receiver.recv().await {
tracing::trace!("tunnel recv: {:?}", packet);
let info = match packet_merger.feed(packet, Some(transact_id)) {
Err(e) => {
tracing::error!(error = ?e, "feed packet to merger failed");
continue;
}
Ok(None) => {
continue;
}
Ok(Some(info)) => info,
};
let decoded = postcard::from_bytes(&info.content.as_slice());
tracing::debug!(?info, ?decoded, "client recv rpc packet from peer");
assert_eq!(info.transact_id, transact_id);
if let Err(e) = decoded {
tracing::error!(error = ?e, "decode rpc packet failed");
continue;
}
if let Err(e) = server_s.send(decoded.unwrap()).await {
tracing::error!(error = ?e, "send to rpc server channel failed");
}
}
tracing::warn!("[PEER RPC MGR] server packet read aborted");
});
let key = PeerRpcClientCtxKey(dst_peer_id, service_id, transact_id);
let _insert_ret = self
.client_resp_receivers
.insert(key.clone(), packet_sender);
let ret = f(client_transport).await;
self.client_resp_receivers.remove(&key);
ret
pub fn rpc_server(&self) -> &rpc_impl::server::Server {
&self.rpc_server
}
pub fn my_peer_id(&self) -> PeerId {
@@ -546,9 +122,15 @@ impl PeerRpcManager {
}
}
impl Drop for PeerRpcManager {
fn drop(&mut self) {
tracing::debug!("PeerRpcManager drop, my_peer_id: {:?}", self.my_peer_id());
}
}
#[cfg(test)]
pub mod tests {
use std::{pin::Pin, sync::Arc, time::Duration};
use std::{pin::Pin, sync::Arc};
use futures::{SinkExt, StreamExt};
use tokio::sync::Mutex;
@@ -559,31 +141,18 @@ pub mod tests {
peer_rpc::PeerRpcManager,
tests::{connect_peer_manager, create_mock_peer_manager, wait_route_appear},
},
proto::{
rpc_impl::RpcController,
tests::{GreetingClientFactory, GreetingServer, GreetingService, SayHelloRequest},
},
tunnel::{
common::tests::wait_for_condition, packet_def::ZCPacket, ring::create_ring_tunnel_pair,
Tunnel, ZCPacketSink, ZCPacketStream,
packet_def::ZCPacket, ring::create_ring_tunnel_pair, Tunnel, ZCPacketSink,
ZCPacketStream,
},
};
use super::PeerRpcManagerTransport;
#[tarpc::service]
pub trait TestRpcService {
async fn hello(s: String) -> String;
}
#[derive(Clone)]
pub struct MockService {
pub prefix: String,
}
#[tarpc::server]
impl TestRpcService for MockService {
async fn hello(self, _: tarpc::context::Context, s: String) -> String {
format!("{} {}", self.prefix, s)
}
}
fn random_string(len: usize) -> String {
use rand::distributions::Alphanumeric;
use rand::Rng;
@@ -595,6 +164,16 @@ pub mod tests {
String::from_utf8(s).unwrap()
}
pub fn register_service(rpc_mgr: &PeerRpcManager, domain: &str, delay_ms: u64, prefix: &str) {
rpc_mgr.rpc_server().registry().register(
GreetingServer::new(GreetingService {
delay_ms,
prefix: prefix.to_string(),
}),
domain,
);
}
#[tokio::test]
async fn peer_rpc_basic_test() {
struct MockTransport {
@@ -630,10 +209,7 @@ pub mod tests {
my_peer_id: new_peer_id(),
});
server_rpc_mgr.run();
let s = MockService {
prefix: "hello".to_owned(),
};
server_rpc_mgr.run_service(1, s.serve());
register_service(&server_rpc_mgr, "test", 0, "Hello");
let client_rpc_mgr = PeerRpcManager::new(MockTransport {
sink: Arc::new(Mutex::new(stsr)),
@@ -642,35 +218,33 @@ pub mod tests {
});
client_rpc_mgr.run();
let stub = client_rpc_mgr
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(1, 1, "test".to_string());
let msg = random_string(8192);
let ret = client_rpc_mgr
.do_client_rpc_scoped(1, server_rpc_mgr.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
let ret = stub
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await
.unwrap();
println!("ret: {:?}", ret);
assert_eq!(ret.unwrap(), format!("hello {}", msg));
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let msg = random_string(10);
let ret = client_rpc_mgr
.do_client_rpc_scoped(1, server_rpc_mgr.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
let ret = stub
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await
.unwrap();
println!("ret: {:?}", ret);
assert_eq!(ret.unwrap(), format!("hello {}", msg));
wait_for_condition(
|| async { server_rpc_mgr.peer_rpc_endpoints.is_empty() },
Duration::from_secs(10),
)
.await;
assert_eq!(ret.greeting, format!("Hello {}!", msg));
}
#[tokio::test]
@@ -680,6 +254,7 @@ pub mod tests {
let peer_mgr_c = create_mock_peer_manager().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
connect_peer_manager(peer_mgr_b.clone(), peer_mgr_c.clone()).await;
wait_route_appear(peer_mgr_a.clone(), peer_mgr_b.clone())
.await
.unwrap();
@@ -699,51 +274,51 @@ pub mod tests {
peer_mgr_b.my_peer_id()
);
let s = MockService {
prefix: "hello".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(1, s.serve());
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test", 0, "Hello");
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
let stub = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test".to_string(),
);
let ret = stub
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
// call again
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
let ret = stub
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_c
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
let ret = stub
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
}
#[tokio::test]
async fn test_multi_service_with_peer_manager() {
async fn test_multi_domain_with_peer_manager() {
let peer_mgr_a = create_mock_peer_manager().await;
let peer_mgr_b = create_mock_peer_manager().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
@@ -757,42 +332,43 @@ pub mod tests {
peer_mgr_b.my_peer_id()
);
let s = MockService {
prefix: "hello_a".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(1, s.serve());
let b = MockService {
prefix: "hello_b".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(2, b.serve());
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test1", 0, "Hello");
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test2", 20000, "Hello2");
let stub1 = peer_mgr_a
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test1".to_string(),
);
let stub2 = peer_mgr_a
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test2".to_string(),
);
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
let ret = stub1
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let ret = stub2
.say_hello(
RpcController::default(),
SayHelloRequest { name: msg.clone() },
)
.await;
assert_eq!(ip_list.unwrap(), format!("hello_a {}", msg));
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(2, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
assert_eq!(ip_list.unwrap(), format!("hello_b {}", msg));
wait_for_condition(
|| async { peer_mgr_b.get_peer_rpc_mgr().peer_rpc_endpoints.is_empty() },
Duration::from_secs(10),
)
.await;
assert!(ret.is_err() && ret.unwrap_err().to_string().contains("Timeout"));
}
}
@@ -0,0 +1,39 @@
use crate::{
common::global_ctx::ArcGlobalCtx,
proto::{
peer_rpc::{DirectConnectorRpc, GetIpListRequest, GetIpListResponse},
rpc_types::{self, controller::BaseController},
},
};
#[derive(Clone)]
pub struct DirectConnectorManagerRpcServer {
// TODO: this only caches for one src peer; should make it global
global_ctx: ArcGlobalCtx,
}
#[async_trait::async_trait]
impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
type Controller = BaseController;
async fn get_ip_list(
&self,
_: BaseController,
_: GetIpListRequest,
) -> rpc_types::error::Result<GetIpListResponse> {
let mut ret = self.global_ctx.get_ip_collector().collect_ip_addrs().await;
ret.listeners = self
.global_ctx
.get_running_listeners()
.into_iter()
.map(Into::into)
.collect();
Ok(ret)
}
}
impl DirectConnectorManagerRpcServer {
pub fn new(global_ctx: ArcGlobalCtx) -> Self {
Self { global_ctx }
}
}
@@ -0,0 +1,138 @@
use std::result::Result;
use std::sync::{Arc, Mutex};
use async_trait::async_trait;
use dashmap::DashMap;
use tokio::select;
use tokio::sync::Notify;
use tokio::task::JoinHandle;
use crate::common::scoped_task::ScopedTask;
use anyhow::Error;
use super::peer_manager::PeerManager;
#[async_trait]
pub trait PeerTaskLauncher: Send + Sync + Clone + 'static {
type Data;
type CollectPeerItem;
type TaskRet;
fn new_data(&self, peer_mgr: Arc<PeerManager>) -> Self::Data;
async fn collect_peers_need_task(&self, data: &Self::Data) -> Vec<Self::CollectPeerItem>;
async fn launch_task(
&self,
data: &Self::Data,
item: Self::CollectPeerItem,
) -> JoinHandle<Result<Self::TaskRet, Error>>;
async fn all_task_done(&self, _data: &Self::Data) {}
fn loop_interval_ms(&self) -> u64 {
5000
}
}
pub struct PeerTaskManager<Launcher: PeerTaskLauncher> {
launcher: Launcher,
peer_mgr: Arc<PeerManager>,
main_loop_task: Mutex<Option<ScopedTask<()>>>,
run_signal: Arc<Notify>,
data: Launcher::Data,
}
impl<D, C, T, L> PeerTaskManager<L>
where
D: Send + Sync + Clone + 'static,
C: std::fmt::Debug + Send + Sync + Clone + core::hash::Hash + Eq + 'static,
T: Send + 'static,
L: PeerTaskLauncher<Data = D, CollectPeerItem = C, TaskRet = T> + 'static,
{
pub fn new(launcher: L, peer_mgr: Arc<PeerManager>) -> Self {
let data = launcher.new_data(peer_mgr.clone());
Self {
launcher,
peer_mgr,
main_loop_task: Mutex::new(None),
run_signal: Arc::new(Notify::new()),
data,
}
}
pub fn start(&self) {
let task = tokio::spawn(Self::main_loop(
self.launcher.clone(),
self.data.clone(),
self.run_signal.clone(),
))
.into();
self.main_loop_task.lock().unwrap().replace(task);
}
async fn main_loop(launcher: L, data: D, signal: Arc<Notify>) {
let peer_task_map = Arc::new(DashMap::<C, ScopedTask<Result<T, Error>>>::new());
loop {
let peers_to_connect = launcher.collect_peers_need_task(&data).await;
// remove tasks that are no longer in peers_to_connect, or that have finished
let mut to_remove = vec![];
for item in peer_task_map.iter() {
if !peers_to_connect.contains(item.key()) || item.value().is_finished() {
to_remove.push(item.key().clone());
}
}
tracing::debug!(
?peers_to_connect,
?to_remove,
"got peers to connect and remove"
);
for key in to_remove {
if let Some((_, task)) = peer_task_map.remove(&key) {
task.abort();
match task.await {
Ok(Ok(_)) => {}
Ok(Err(task_ret)) => {
tracing::error!(?task_ret, "hole punching task failed");
}
Err(e) => {
tracing::error!(?e, "hole punching task aborted");
}
}
}
}
if !peers_to_connect.is_empty() {
for item in peers_to_connect {
if peer_task_map.contains_key(&item) {
continue;
}
tracing::debug!(?item, "launch hole punching task");
peer_task_map
.insert(item.clone(), launcher.launch_task(&data, item).await.into());
}
} else if peer_task_map.is_empty() {
tracing::debug!("all task done");
launcher.all_task_done(&data).await;
}
select! {
_ = tokio::time::sleep(std::time::Duration::from_millis(
launcher.loop_interval_ms(),
)) => {},
_ = signal.notified() => {}
}
}
}
pub async fn run_immediately(&self) {
self.run_signal.notify_one();
}
pub fn data(&self) -> D {
self.data.clone()
}
}
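The reconcile step inside `main_loop` follows a simple pattern: drop tasks whose peer is no longer wanted (or whose task finished), then launch tasks for newly wanted peers. A minimal synchronous sketch of that bookkeeping, with a `bool` standing in for the real `ScopedTask` handle and its finished state:

```rust
use std::collections::{HashMap, HashSet};

// Reconcile the running-task map against the set of peers that currently
// need a task. Returns (removed peers, newly launched peers). The bool
// value models ScopedTask::is_finished(); u32 models the peer key.
fn reconcile(tasks: &mut HashMap<u32, bool>, wanted: &HashSet<u32>) -> (Vec<u32>, Vec<u32>) {
    let removed: Vec<u32> = tasks
        .iter()
        .filter(|(peer, finished)| !wanted.contains(peer) || **finished)
        .map(|(peer, _)| *peer)
        .collect();
    for peer in &removed {
        tasks.remove(peer); // main_loop aborts and awaits the real task here
    }
    let launched: Vec<u32> = wanted
        .iter()
        .filter(|peer| !tasks.contains_key(peer))
        .copied()
        .collect();
    for peer in &launched {
        tasks.insert(*peer, false); // launcher.launch_task(...) in the real code
    }
    (removed, launched)
}

fn main() {
    let mut tasks = HashMap::new();
    tasks.insert(1u32, false); // running task for peer 1
    tasks.insert(2u32, true);  // finished task for peer 2
    let wanted: HashSet<u32> = [1, 3].into_iter().collect();
    let (removed, launched) = reconcile(&mut tasks, &wanted);
    assert!(removed.contains(&2) && launched.contains(&3));
    assert!(tasks.contains_key(&1) && tasks.contains_key(&3) && !tasks.contains_key(&2));
}
```

In the real implementation this runs every `loop_interval_ms()` or immediately when `run_immediately` notifies the signal, so a caller can force a reconcile without waiting out the sleep.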
@@ -1,9 +1,13 @@
use std::{net::Ipv4Addr, sync::Arc};
use async_trait::async_trait;
use tokio_util::bytes::Bytes;
use dashmap::DashMap;
use crate::common::{error::Error, PeerId};
use crate::{
common::{global_ctx::NetworkIdentity, PeerId},
proto::peer_rpc::{
ForeignNetworkRouteInfoEntry, ForeignNetworkRouteInfoKey, RouteForeignNetworkInfos,
},
};
#[derive(Clone, Debug)]
pub enum NextHopPolicy {
@@ -17,16 +21,16 @@ impl Default for NextHopPolicy {
}
}
#[async_trait]
pub type ForeignNetworkRouteInfoMap =
DashMap<ForeignNetworkRouteInfoKey, ForeignNetworkRouteInfoEntry>;
#[async_trait::async_trait]
pub trait RouteInterface {
async fn list_peers(&self) -> Vec<PeerId>;
async fn send_route_packet(
&self,
msg: Bytes,
route_id: u8,
dst_peer_id: PeerId,
) -> Result<(), Error>;
fn my_peer_id(&self) -> PeerId;
async fn list_foreign_networks(&self) -> ForeignNetworkRouteInfoMap {
DashMap::new()
}
}
pub type RouteInterfaceBox = Box<dyn RouteInterface + Send + Sync>;
@@ -56,7 +60,7 @@ impl RouteCostCalculatorInterface for DefaultRouteCostCalculator {}
pub type RouteCostCalculator = Box<dyn RouteCostCalculatorInterface>;
#[async_trait]
#[async_trait::async_trait]
#[auto_impl::auto_impl(Box, Arc)]
pub trait Route {
async fn open(&self, interface: RouteInterfaceBox) -> Result<u8, ()>;
@@ -71,12 +75,23 @@ pub trait Route {
self.get_next_hop(peer_id).await
}
async fn list_routes(&self) -> Vec<crate::rpc::Route>;
async fn list_routes(&self) -> Vec<crate::proto::cli::Route>;
async fn get_peer_id_by_ipv4(&self, _ipv4: &Ipv4Addr) -> Option<PeerId> {
None
}
async fn list_peers_own_foreign_network(
&self,
_network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
vec![]
}
async fn list_foreign_network_info(&self) -> RouteForeignNetworkInfos {
Default::default()
}
async fn set_route_cost_fn(&self, _cost_fn: RouteCostCalculator) {}
async fn dump(&self) -> String {
@@ -1,14 +1,18 @@
use std::sync::Arc;
use crate::rpc::{
cli::PeerInfo, peer_manage_rpc_server::PeerManageRpc, DumpRouteRequest, DumpRouteResponse,
ListForeignNetworkRequest, ListForeignNetworkResponse, ListPeerRequest, ListPeerResponse,
ListRouteRequest, ListRouteResponse, ShowNodeInfoRequest, ShowNodeInfoResponse,
use crate::proto::{
cli::{
DumpRouteRequest, DumpRouteResponse, ListForeignNetworkRequest, ListForeignNetworkResponse,
ListGlobalForeignNetworkRequest, ListGlobalForeignNetworkResponse, ListPeerRequest,
ListPeerResponse, ListRouteRequest, ListRouteResponse, PeerInfo, PeerManageRpc,
ShowNodeInfoRequest, ShowNodeInfoResponse,
},
rpc_types::{self, controller::BaseController},
};
use tonic::{Request, Response, Status};
use super::peer_manager::PeerManager;
#[derive(Clone)]
pub struct PeerManagerRpcService {
peer_manager: Arc<PeerManager>,
}
@@ -19,7 +23,15 @@ impl PeerManagerRpcService {
}
pub async fn list_peers(&self) -> Vec<PeerInfo> {
let peers = self.peer_manager.get_peer_map().list_peers().await;
let mut peers = self.peer_manager.get_peer_map().list_peers().await;
peers.extend(
self.peer_manager
.get_foreign_network_client()
.get_peer_map()
.list_peers()
.await
.iter(),
);
let mut peer_infos = Vec::new();
for peer in peers {
let mut peer_info = PeerInfo::default();
@@ -27,6 +39,14 @@ impl PeerManagerRpcService {
if let Some(conns) = self.peer_manager.get_peer_map().list_peer_conns(peer).await {
peer_info.conns = conns;
} else if let Some(conns) = self
.peer_manager
.get_foreign_network_client()
.get_peer_map()
.list_peer_conns(peer)
.await
{
peer_info.conns = conns;
}
peer_infos.push(peer_info);
@@ -36,12 +56,14 @@ impl PeerManagerRpcService {
}
}
#[tonic::async_trait]
#[async_trait::async_trait]
impl PeerManageRpc for PeerManagerRpcService {
type Controller = BaseController;
async fn list_peer(
&self,
_request: Request<ListPeerRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListPeerResponse>, Status> {
_: BaseController,
_request: ListPeerRequest, // Accept request of type HelloRequest
) -> Result<ListPeerResponse, rpc_types::error::Error> {
let mut reply = ListPeerResponse::default();
let peers = self.list_peers().await;
@@ -49,45 +71,57 @@ impl PeerManageRpc for PeerManagerRpcService {
reply.peer_infos.push(peer);
}
Ok(Response::new(reply))
Ok(reply)
}
async fn list_route(
&self,
_request: Request<ListRouteRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListRouteResponse>, Status> {
_: BaseController,
_request: ListRouteRequest, // Accept request of type HelloRequest
) -> Result<ListRouteResponse, rpc_types::error::Error> {
let mut reply = ListRouteResponse::default();
reply.routes = self.peer_manager.list_routes().await;
Ok(Response::new(reply))
Ok(reply)
}
async fn dump_route(
&self,
_request: Request<DumpRouteRequest>, // Accept request of type HelloRequest
) -> Result<Response<DumpRouteResponse>, Status> {
_: BaseController,
_request: DumpRouteRequest, // Accept request of type HelloRequest
) -> Result<DumpRouteResponse, rpc_types::error::Error> {
let mut reply = DumpRouteResponse::default();
reply.result = self.peer_manager.dump_route().await;
Ok(Response::new(reply))
Ok(reply)
}
async fn list_foreign_network(
&self,
_request: Request<ListForeignNetworkRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListForeignNetworkResponse>, Status> {
_: BaseController,
_request: ListForeignNetworkRequest, // Accept request of type HelloRequest
) -> Result<ListForeignNetworkResponse, rpc_types::error::Error> {
let reply = self
.peer_manager
.get_foreign_network_manager()
.list_foreign_networks()
.await;
Ok(Response::new(reply))
Ok(reply)
}
async fn list_global_foreign_network(
&self,
_: BaseController,
_request: ListGlobalForeignNetworkRequest,
) -> Result<ListGlobalForeignNetworkResponse, rpc_types::error::Error> {
Ok(self.peer_manager.list_global_foreign_network().await)
}
async fn show_node_info(
&self,
_request: Request<ShowNodeInfoRequest>, // Accept request of type HelloRequest
) -> Result<Response<ShowNodeInfoResponse>, Status> {
Ok(Response::new(ShowNodeInfoResponse {
_: BaseController,
_request: ShowNodeInfoRequest, // Accept request of type HelloRequest
) -> Result<ShowNodeInfoResponse, rpc_types::error::Error> {
Ok(ShowNodeInfoResponse {
node_info: Some(self.peer_manager.get_my_info()),
}))
})
}
}
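The diff above migrates `PeerManageRpc` off tonic's `Request<T>`/`Response<T>` wrappers onto the new in-house rpc framework: each method now takes a `BaseController` plus a plain request message and returns the response type directly, with `rpc_types::error::Error` instead of `Status`. A minimal, synchronous sketch of that trait shape (all names here are simplified stand-ins, not the actual generated code):

```rust
// Hypothetical, simplified analog of the generated service trait: each
// method receives a controller plus a plain request struct and returns
// the response directly instead of tonic's Request/Response wrappers.

#[derive(Default)]
struct BaseController; // stand-in for rpc_types::controller::BaseController

#[derive(Debug, PartialEq)]
struct ListPeerRequest;

#[derive(Debug, PartialEq, Default)]
struct ListPeerResponse {
    peer_ids: Vec<u32>,
}

#[derive(Debug)]
struct RpcError(String);

trait PeerManageRpc {
    fn list_peer(
        &self,
        _ctrl: BaseController,
        _req: ListPeerRequest,
    ) -> Result<ListPeerResponse, RpcError>;
}

struct DummyService;

impl PeerManageRpc for DummyService {
    fn list_peer(
        &self,
        _ctrl: BaseController,
        _req: ListPeerRequest,
    ) -> Result<ListPeerResponse, RpcError> {
        // A real implementation would query the peer map here.
        Ok(ListPeerResponse { peer_ids: vec![1, 2, 3] })
    }
}

fn main() {
    let svc = DummyService;
    let resp = svc.list_peer(BaseController, ListPeerRequest).unwrap();
    println!("{:?}", resp.peer_ids);
}
```

The real trait is async (`#[async_trait::async_trait]`), but the calling convention change is the same.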
@@ -1,4 +1,7 @@
syntax = "proto3";
import "common.proto";
package cli;
message Status {
@@ -16,18 +19,12 @@ message PeerConnStats {
uint64 latency_us = 5;
}
message TunnelInfo {
string tunnel_type = 1;
string local_addr = 2;
string remote_addr = 3;
}
message PeerConnInfo {
string conn_id = 1;
uint32 my_peer_id = 2;
uint32 peer_id = 3;
repeated string features = 4;
TunnelInfo tunnel = 5;
common.TunnelInfo tunnel = 5;
PeerConnStats stats = 6;
float loss_rate = 7;
bool is_client = 8;
@@ -46,36 +43,17 @@ message ListPeerResponse {
NodeInfo my_info = 2;
}
enum NatType {
// has NAT; but own a single public IP, port is not changed
Unknown = 0;
OpenInternet = 1;
NoPAT = 2;
FullCone = 3;
Restricted = 4;
PortRestricted = 5;
Symmetric = 6;
SymUdpFirewall = 7;
}
message StunInfo {
NatType udp_nat_type = 1;
NatType tcp_nat_type = 2;
int64 last_update_time = 3;
repeated string public_ip = 4;
uint32 min_port = 5;
uint32 max_port = 6;
}
message Route {
uint32 peer_id = 1;
string ipv4_addr = 2;
common.Ipv4Inet ipv4_addr = 2;
uint32 next_hop_peer_id = 3;
int32 cost = 4;
repeated string proxy_cidrs = 5;
string hostname = 6;
StunInfo stun_info = 7;
common.StunInfo stun_info = 7;
string inst_id = 8;
string version = 9;
common.PeerFeatureFlag feature_flag = 10;
}
message NodeInfo {
@@ -83,10 +61,12 @@ message NodeInfo {
string ipv4_addr = 2;
repeated string proxy_cidrs = 3;
string hostname = 4;
StunInfo stun_info = 5;
common.StunInfo stun_info = 5;
string inst_id = 6;
repeated string listeners = 7;
string config = 8;
string version = 9;
common.PeerFeatureFlag feature_flag = 10;
}
message ShowNodeInfoRequest {}
@@ -103,18 +83,40 @@ message DumpRouteResponse { string result = 1; }
message ListForeignNetworkRequest {}
message ForeignNetworkEntryPb { repeated PeerInfo peers = 1; }
message ForeignNetworkEntryPb {
repeated PeerInfo peers = 1;
bytes network_secret_digest = 2;
}
message ListForeignNetworkResponse {
// foreign network in local
map<string, ForeignNetworkEntryPb> foreign_networks = 1;
}
message ListGlobalForeignNetworkRequest {}
message ListGlobalForeignNetworkResponse {
// foreign network in the entire network
message OneForeignNetwork {
string network_name = 1;
repeated uint32 peer_ids = 2;
string last_updated = 3;
uint32 version = 4;
}
message ForeignNetworks { repeated OneForeignNetwork foreign_networks = 1; }
map<uint32, ForeignNetworks> foreign_networks = 1;
}
service PeerManageRpc {
rpc ListPeer(ListPeerRequest) returns (ListPeerResponse);
rpc ListRoute(ListRouteRequest) returns (ListRouteResponse);
rpc DumpRoute(DumpRouteRequest) returns (DumpRouteResponse);
rpc ListForeignNetwork(ListForeignNetworkRequest)
returns (ListForeignNetworkResponse);
rpc ListGlobalForeignNetwork(ListGlobalForeignNetworkRequest)
returns (ListGlobalForeignNetworkResponse);
rpc ShowNodeInfo(ShowNodeInfoRequest) returns (ShowNodeInfoResponse);
}
@@ -125,7 +127,7 @@ enum ConnectorStatus {
}
message Connector {
string url = 1;
common.Url url = 1;
ConnectorStatus status = 2;
}
@@ -140,7 +142,7 @@ enum ConnectorManageAction {
message ManageConnectorRequest {
ConnectorManageAction action = 1;
string url = 2;
common.Url url = 2;
}
message ManageConnectorResponse {}
@@ -150,23 +152,6 @@ service ConnectorManageRpc {
rpc ManageConnector(ManageConnectorRequest) returns (ManageConnectorResponse);
}
message DirectConnectedPeerInfo { int32 latency_ms = 1; }
message PeerInfoForGlobalMap {
map<uint32, DirectConnectedPeerInfo> direct_peers = 1;
}
message GetGlobalPeerMapRequest {}
message GetGlobalPeerMapResponse {
map<uint32, PeerInfoForGlobalMap> global_peer_map = 1;
}
service PeerCenterRpc {
rpc GetGlobalPeerMap(GetGlobalPeerMapRequest)
returns (GetGlobalPeerMapResponse);
}
message VpnPortalInfo {
string vpn_type = 1;
string client_config = 2;
@@ -180,24 +165,3 @@ service VpnPortalRpc {
rpc GetVpnPortalInfo(GetVpnPortalInfoRequest)
returns (GetVpnPortalInfoResponse);
}
message HandshakeRequest {
uint32 magic = 1;
uint32 my_peer_id = 2;
uint32 version = 3;
repeated string features = 4;
string network_name = 5;
bytes network_secret_digrest = 6;
}
message TaRpcPacket {
uint32 from_peer = 1;
uint32 to_peer = 2;
uint32 service_id = 3;
uint32 transact_id = 4;
bool is_req = 5;
bytes content = 6;
uint32 total_pieces = 7;
uint32 piece_idx = 8;
}
@@ -0,0 +1 @@
include!(concat!(env!("OUT_DIR"), "/cli.rs"));
@@ -0,0 +1,108 @@
syntax = "proto3";
import "error.proto";
package common;
message RpcDescriptor {
// allow same service registered multiple times in different domain
string domain_name = 1;
string proto_name = 2;
string service_name = 3;
uint32 method_index = 4;
}
message RpcRequest {
RpcDescriptor descriptor = 1;
bytes request = 2;
int32 timeout_ms = 3;
}
message RpcResponse {
bytes response = 1;
error.Error error = 2;
uint64 runtime_us = 3;
}
message RpcPacket {
uint32 from_peer = 1;
uint32 to_peer = 2;
int64 transaction_id = 3;
RpcDescriptor descriptor = 4;
bytes body = 5;
bool is_request = 6;
uint32 total_pieces = 7;
uint32 piece_idx = 8;
int32 trace_id = 9;
}
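`RpcPacket` carries `total_pieces` and `piece_idx`, which implies large rpc bodies are fragmented into multiple packets and reassembled by index on the receiving side. A hedged sketch of that split/reassemble logic (the chunk size and types are illustrative assumptions, not the crate's actual implementation):

```rust
// Sketch: fragment an rpc body into fixed-size pieces and reassemble
// them, mirroring RpcPacket's total_pieces / piece_idx fields.
// MAX_PIECE is an assumed toy value, not EasyTier's real packet size.
const MAX_PIECE: usize = 4;

#[derive(Debug, Clone)]
struct Piece {
    piece_idx: u32,
    total_pieces: u32,
    body: Vec<u8>,
}

fn fragment(body: &[u8]) -> Vec<Piece> {
    let chunks: Vec<&[u8]> = body.chunks(MAX_PIECE).collect();
    let total = chunks.len() as u32;
    chunks
        .into_iter()
        .enumerate()
        .map(|(i, c)| Piece {
            piece_idx: i as u32,
            total_pieces: total,
            body: c.to_vec(),
        })
        .collect()
}

fn reassemble(mut pieces: Vec<Piece>) -> Option<Vec<u8>> {
    pieces.sort_by_key(|p| p.piece_idx);
    let total = pieces.first()?.total_pieces as usize;
    if pieces.len() != total {
        return None; // some pieces are still missing
    }
    Some(pieces.into_iter().flat_map(|p| p.body).collect())
}

fn main() {
    let body = b"hello rpc world".to_vec();
    let pieces = fragment(&body);
    assert_eq!(reassemble(pieces).unwrap(), body);
}
```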
message Void {}
message UUID {
uint64 high = 1;
uint64 low = 2;
}
enum NatType {
// has NAT; but own a single public IP, port is not changed
Unknown = 0;
OpenInternet = 1;
NoPAT = 2;
FullCone = 3;
Restricted = 4;
PortRestricted = 5;
Symmetric = 6;
SymUdpFirewall = 7;
SymmetricEasyInc = 8;
SymmetricEasyDec = 9;
}
message Ipv4Addr { uint32 addr = 1; }
message Ipv6Addr {
uint32 part1 = 1;
uint32 part2 = 2;
uint32 part3 = 3;
uint32 part4 = 4;
}
message Ipv4Inet {
Ipv4Addr address = 1;
uint32 network_length = 2;
}
message Url { string url = 1; }
message SocketAddr {
oneof ip {
Ipv4Addr ipv4 = 1;
Ipv6Addr ipv6 = 2;
};
uint32 port = 3;
}
message TunnelInfo {
string tunnel_type = 1;
common.Url local_addr = 2;
common.Url remote_addr = 3;
}
message StunInfo {
NatType udp_nat_type = 1;
NatType tcp_nat_type = 2;
int64 last_update_time = 3;
repeated string public_ip = 4;
uint32 min_port = 5;
uint32 max_port = 6;
}
message PeerFeatureFlag {
bool is_public_server = 1;
bool no_relay_data = 2;
}
@@ -0,0 +1,168 @@
use std::{fmt::Display, str::FromStr};
use anyhow::Context;
include!(concat!(env!("OUT_DIR"), "/common.rs"));
impl From<uuid::Uuid> for Uuid {
fn from(uuid: uuid::Uuid) -> Self {
let (high, low) = uuid.as_u64_pair();
Uuid { low, high }
}
}
impl From<Uuid> for uuid::Uuid {
fn from(uuid: Uuid) -> Self {
uuid::Uuid::from_u64_pair(uuid.high, uuid.low)
}
}
impl Display for Uuid {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", uuid::Uuid::from(self.clone()))
}
}
impl From<std::net::Ipv4Addr> for Ipv4Addr {
fn from(value: std::net::Ipv4Addr) -> Self {
Self {
addr: u32::from_be_bytes(value.octets()),
}
}
}
impl From<Ipv4Addr> for std::net::Ipv4Addr {
fn from(value: Ipv4Addr) -> Self {
std::net::Ipv4Addr::from(value.addr)
}
}
impl ToString for Ipv4Addr {
fn to_string(&self) -> String {
std::net::Ipv4Addr::from(self.addr).to_string()
}
}
impl From<std::net::Ipv6Addr> for Ipv6Addr {
fn from(value: std::net::Ipv6Addr) -> Self {
let b = value.octets();
Self {
part1: u32::from_be_bytes([b[0], b[1], b[2], b[3]]),
part2: u32::from_be_bytes([b[4], b[5], b[6], b[7]]),
part3: u32::from_be_bytes([b[8], b[9], b[10], b[11]]),
part4: u32::from_be_bytes([b[12], b[13], b[14], b[15]]),
}
}
}
impl From<Ipv6Addr> for std::net::Ipv6Addr {
fn from(value: Ipv6Addr) -> Self {
let part1 = value.part1.to_be_bytes();
let part2 = value.part2.to_be_bytes();
let part3 = value.part3.to_be_bytes();
let part4 = value.part4.to_be_bytes();
std::net::Ipv6Addr::from([
part1[0], part1[1], part1[2], part1[3], part2[0], part2[1], part2[2], part2[3],
part3[0], part3[1], part3[2], part3[3], part4[0], part4[1], part4[2], part4[3],
])
}
}
impl ToString for Ipv6Addr {
fn to_string(&self) -> String {
std::net::Ipv6Addr::from(self.clone()).to_string()
}
}
impl From<cidr::Ipv4Inet> for Ipv4Inet {
fn from(value: cidr::Ipv4Inet) -> Self {
Ipv4Inet {
address: Some(value.address().into()),
network_length: value.network_length() as u32,
}
}
}
impl From<Ipv4Inet> for cidr::Ipv4Inet {
fn from(value: Ipv4Inet) -> Self {
cidr::Ipv4Inet::new(value.address.unwrap().into(), value.network_length as u8).unwrap()
}
}
impl std::fmt::Display for Ipv4Inet {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", cidr::Ipv4Inet::from(self.clone()))
}
}
impl FromStr for Ipv4Inet {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(Ipv4Inet::from(
cidr::Ipv4Inet::from_str(s).with_context(|| "Failed to parse Ipv4Inet")?,
))
}
}
impl From<url::Url> for Url {
fn from(value: url::Url) -> Self {
Url {
url: value.to_string(),
}
}
}
impl From<Url> for url::Url {
fn from(value: Url) -> Self {
url::Url::parse(&value.url).unwrap()
}
}
impl FromStr for Url {
type Err = url::ParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(Url {
url: s.parse::<url::Url>()?.to_string(),
})
}
}
impl Display for Url {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.url)
}
}
impl From<std::net::SocketAddr> for SocketAddr {
fn from(value: std::net::SocketAddr) -> Self {
match value {
std::net::SocketAddr::V4(v4) => SocketAddr {
ip: Some(socket_addr::Ip::Ipv4(v4.ip().clone().into())),
port: v4.port() as u32,
},
std::net::SocketAddr::V6(v6) => SocketAddr {
ip: Some(socket_addr::Ip::Ipv6(v6.ip().clone().into())),
port: v6.port() as u32,
},
}
}
}
impl From<SocketAddr> for std::net::SocketAddr {
fn from(value: SocketAddr) -> Self {
match value.ip.unwrap() {
socket_addr::Ip::Ipv4(ip) => std::net::SocketAddr::V4(std::net::SocketAddrV4::new(
std::net::Ipv4Addr::from(ip),
value.port as u16,
)),
socket_addr::Ip::Ipv6(ip) => std::net::SocketAddr::V6(std::net::SocketAddrV6::new(
std::net::Ipv6Addr::from(ip),
value.port as u16,
0,
0,
)),
}
}
}
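The `Ipv4Addr` conversions above pack the four octets into a single `u32` in big-endian (network) order, so the round trip through the proto wrapper is lossless. A std-only illustration of that encoding, independent of the generated types:

```rust
// Demonstrates the big-endian u32 packing used by the proto Ipv4Addr
// wrapper: octets -> u32 via from_be_bytes, and back via
// std::net::Ipv4Addr::from(u32).
fn pack(ip: std::net::Ipv4Addr) -> u32 {
    u32::from_be_bytes(ip.octets())
}

fn unpack(addr: u32) -> std::net::Ipv4Addr {
    std::net::Ipv4Addr::from(addr)
}

fn main() {
    let ip: std::net::Ipv4Addr = "10.126.126.1".parse().unwrap();
    let packed = pack(ip);
    // 10.126.126.1 == 0x0A7E7E01 in big-endian order
    assert_eq!(packed, 0x0A7E_7E01);
    assert_eq!(unpack(packed), ip);
}
```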
@@ -0,0 +1,34 @@
syntax = "proto3";
package error;
message OtherError { string error_message = 1; }
message InvalidMethodIndex {
string service_name = 1;
uint32 method_index = 2;
}
message InvalidService { string service_name = 1; }
message ProstDecodeError {}
message ProstEncodeError {}
message ExecuteError { string error_message = 1; }
message MalformatRpcPacket { string error_message = 1; }
message Timeout { string error_message = 1; }
message Error {
oneof error {
OtherError other_error = 1;
InvalidMethodIndex invalid_method_index = 2;
InvalidService invalid_service = 3;
ProstDecodeError prost_decode_error = 4;
ProstEncodeError prost_encode_error = 5;
ExecuteError execute_error = 6;
MalformatRpcPacket malformat_rpc_packet = 7;
Timeout timeout = 8;
}
}
@@ -0,0 +1,84 @@
use prost::DecodeError;
use super::rpc_types;
include!(concat!(env!("OUT_DIR"), "/error.rs"));
impl From<&rpc_types::error::Error> for Error {
fn from(e: &rpc_types::error::Error) -> Self {
use super::error::error::Error as ProtoError;
match e {
rpc_types::error::Error::ExecutionError(e) => Self {
error: Some(ProtoError::ExecuteError(ExecuteError {
error_message: e.to_string(),
})),
},
rpc_types::error::Error::DecodeError(_) => Self {
error: Some(ProtoError::ProstDecodeError(ProstDecodeError {})),
},
rpc_types::error::Error::EncodeError(_) => Self {
error: Some(ProtoError::ProstEncodeError(ProstEncodeError {})),
},
rpc_types::error::Error::InvalidMethodIndex(m, s) => Self {
error: Some(ProtoError::InvalidMethodIndex(InvalidMethodIndex {
method_index: *m as u32,
service_name: s.to_string(),
})),
},
rpc_types::error::Error::InvalidServiceKey(s, _) => Self {
error: Some(ProtoError::InvalidService(InvalidService {
service_name: s.to_string(),
})),
},
rpc_types::error::Error::MalformatRpcPacket(e) => Self {
error: Some(ProtoError::MalformatRpcPacket(MalformatRpcPacket {
error_message: e.to_string(),
})),
},
rpc_types::error::Error::Timeout(e) => Self {
error: Some(ProtoError::Timeout(Timeout {
error_message: e.to_string(),
})),
},
#[allow(unreachable_patterns)]
e => Self {
error: Some(ProtoError::OtherError(OtherError {
error_message: e.to_string(),
})),
},
}
}
}
impl From<&Error> for rpc_types::error::Error {
fn from(e: &Error) -> Self {
use super::error::error::Error as ProtoError;
match &e.error {
Some(ProtoError::ExecuteError(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
Some(ProtoError::ProstDecodeError(_)) => {
Self::DecodeError(DecodeError::new("decode error"))
}
Some(ProtoError::ProstEncodeError(_)) => {
Self::DecodeError(DecodeError::new("encode error"))
}
Some(ProtoError::InvalidMethodIndex(e)) => {
Self::InvalidMethodIndex(e.method_index as u8, e.service_name.clone())
}
Some(ProtoError::InvalidService(e)) => {
Self::InvalidServiceKey(e.service_name.clone(), "".to_string())
}
Some(ProtoError::MalformatRpcPacket(e)) => {
Self::MalformatRpcPacket(e.error_message.clone())
}
Some(ProtoError::Timeout(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
Some(ProtoError::OtherError(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
None => Self::ExecutionError(anyhow::anyhow!("unknown error {:?}", e)),
}
}
}
@@ -0,0 +1,9 @@
pub mod rpc_impl;
pub mod rpc_types;
pub mod cli;
pub mod common;
pub mod error;
pub mod peer_rpc;
pub mod tests;
@@ -0,0 +1,207 @@
syntax = "proto3";
import "google/protobuf/timestamp.proto";
import "common.proto";
package peer_rpc;
message RoutePeerInfo {
// means next hop in route table.
uint32 peer_id = 1;
common.UUID inst_id = 2;
uint32 cost = 3;
optional common.Ipv4Addr ipv4_addr = 4;
repeated string proxy_cidrs = 5;
optional string hostname = 6;
common.NatType udp_stun_info = 7;
google.protobuf.Timestamp last_update = 8;
uint32 version = 9;
string easytier_version = 10;
common.PeerFeatureFlag feature_flag = 11;
uint64 peer_route_id = 12;
uint32 network_length = 13;
}
message PeerIdVersion {
uint32 peer_id = 1;
uint32 version = 2;
}
message RouteConnBitmap {
repeated PeerIdVersion peer_ids = 1;
bytes bitmap = 2;
}
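`RouteConnBitmap` encodes pairwise connectivity among the peers listed in `peer_ids` as a packed bit matrix in `bitmap`. A sketch of reading and writing bit `(i, j)` in such a matrix; the row-major layout and LSB-first bit order here are assumptions for illustration, not the documented wire format:

```rust
// Sketch: access bit (i, j) of an n x n connectivity matrix packed into
// bytes. Row-major layout and LSB-first bit order are ASSUMPTIONS about
// the wire format, shown for illustration only.
fn conn_bit(bitmap: &[u8], n: usize, i: usize, j: usize) -> bool {
    let idx = i * n + j;
    (bitmap[idx / 8] >> (idx % 8)) & 1 == 1
}

fn set_conn_bit(bitmap: &mut [u8], n: usize, i: usize, j: usize) {
    let idx = i * n + j;
    bitmap[idx / 8] |= 1 << (idx % 8);
}

fn main() {
    let n = 3; // three peers -> 9 bits -> 2 bytes
    let mut bitmap = vec![0u8; (n * n + 7) / 8];
    set_conn_bit(&mut bitmap, n, 0, 2); // peer 0 reaches peer 2
    set_conn_bit(&mut bitmap, n, 2, 0); // and vice versa
    assert!(conn_bit(&bitmap, n, 0, 2));
    assert!(conn_bit(&bitmap, n, 2, 0));
    assert!(!conn_bit(&bitmap, n, 1, 1));
}
```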
message RoutePeerInfos { repeated RoutePeerInfo items = 1; }
message ForeignNetworkRouteInfoKey {
uint32 peer_id = 1;
string network_name = 2;
}
message ForeignNetworkRouteInfoEntry {
repeated uint32 foreign_peer_ids = 1;
google.protobuf.Timestamp last_update = 2;
uint32 version = 3;
bytes network_secret_digest = 4;
}
message RouteForeignNetworkInfos {
message Info {
ForeignNetworkRouteInfoKey key = 1;
ForeignNetworkRouteInfoEntry value = 2;
}
repeated Info infos = 1;
}
message SyncRouteInfoRequest {
uint32 my_peer_id = 1;
uint64 my_session_id = 2;
bool is_initiator = 3;
RoutePeerInfos peer_infos = 4;
RouteConnBitmap conn_bitmap = 5;
RouteForeignNetworkInfos foreign_network_infos = 6;
}
enum SyncRouteInfoError {
DuplicatePeerId = 0;
Stopped = 1;
}
message SyncRouteInfoResponse {
bool is_initiator = 1;
uint64 session_id = 2;
optional SyncRouteInfoError error = 3;
}
service OspfRouteRpc {
// Exchanges route table info between two peers (OSPF-style sync).
rpc SyncRouteInfo(SyncRouteInfoRequest) returns (SyncRouteInfoResponse);
}
message GetIpListRequest {}
message GetIpListResponse {
common.Ipv4Addr public_ipv4 = 1;
repeated common.Ipv4Addr interface_ipv4s = 2;
common.Ipv6Addr public_ipv6 = 3;
repeated common.Ipv6Addr interface_ipv6s = 4;
repeated common.Url listeners = 5;
}
service DirectConnectorRpc {
rpc GetIpList(GetIpListRequest) returns (GetIpListResponse);
}
message SelectPunchListenerRequest {
bool force_new = 1;
}
message SelectPunchListenerResponse {
common.SocketAddr listener_mapped_addr = 1;
}
message SendPunchPacketConeRequest {
common.SocketAddr listener_mapped_addr = 1;
common.SocketAddr dest_addr = 2;
uint32 transaction_id = 3;
// send this many packets in a batch
uint32 packet_count_per_batch = 4;
// number of batches to send; total packet count = packet_count_per_batch * packet_batch_count
uint32 packet_batch_count = 5;
// interval between each batch
uint32 packet_interval_ms = 6;
}
message SendPunchPacketHardSymRequest {
common.SocketAddr listener_mapped_addr = 1;
repeated common.Ipv4Addr public_ips = 2;
uint32 transaction_id = 3;
uint32 port_index = 4;
uint32 round = 5;
}
message SendPunchPacketHardSymResponse { uint32 next_port_index = 1; }
message SendPunchPacketEasySymRequest {
common.SocketAddr listener_mapped_addr = 1;
repeated common.Ipv4Addr public_ips = 2;
uint32 transaction_id = 3;
uint32 base_port_num = 4;
uint32 max_port_num = 5;
bool is_incremental = 6;
}
message SendPunchPacketBothEasySymRequest {
uint32 udp_socket_count = 1;
common.Ipv4Addr public_ip = 2;
uint32 transaction_id = 3;
uint32 dst_port_num = 4;
uint32 wait_time_ms = 5;
}
message SendPunchPacketBothEasySymResponse {
// whether this peer is already busy punching with another peer
bool is_busy = 1;
common.SocketAddr base_mapped_addr = 2;
}
service UdpHolePunchRpc {
rpc SelectPunchListener(SelectPunchListenerRequest)
returns (SelectPunchListenerResponse);
// send packet to one remote_addr, used by nat1-3 to nat1-3
rpc SendPunchPacketCone(SendPunchPacketConeRequest) returns (common.Void);
// send packet to multiple remote_addr (birthday attack), used by nat4 to nat1-3
rpc SendPunchPacketHardSym(SendPunchPacketHardSymRequest)
returns (SendPunchPacketHardSymResponse);
rpc SendPunchPacketEasySym(SendPunchPacketEasySymRequest)
returns (common.Void);
// nat4 to nat4 (both predictably)
rpc SendPunchPacketBothEasySym(SendPunchPacketBothEasySymRequest)
returns (SendPunchPacketBothEasySymResponse);
}
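`SendPunchPacketConeRequest` describes a simple pacing scheme: `packet_batch_count` batches of `packet_count_per_batch` packets each, with `packet_interval_ms` between batches. The implied totals can be sketched as plain arithmetic (timing model assumed, not the actual puncher):

```rust
// Computes the totals implied by a SendPunchPacketConeRequest-style
// pacing config. Assumes intervals sit between batches, so k batches
// have k-1 waits; this is an illustrative model, not EasyTier's code.
struct ConePunchConfig {
    packet_count_per_batch: u32,
    packet_batch_count: u32,
    packet_interval_ms: u32,
}

impl ConePunchConfig {
    fn total_packets(&self) -> u32 {
        self.packet_count_per_batch * self.packet_batch_count
    }

    fn total_duration_ms(&self) -> u32 {
        self.packet_interval_ms * self.packet_batch_count.saturating_sub(1)
    }
}

fn main() {
    let cfg = ConePunchConfig {
        packet_count_per_batch: 2,
        packet_batch_count: 5,
        packet_interval_ms: 200,
    };
    assert_eq!(cfg.total_packets(), 10);
    assert_eq!(cfg.total_duration_ms(), 800);
}
```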
message DirectConnectedPeerInfo { int32 latency_ms = 1; }
message PeerInfoForGlobalMap {
map<uint32, DirectConnectedPeerInfo> direct_peers = 1;
}
message ReportPeersRequest {
uint32 my_peer_id = 1;
PeerInfoForGlobalMap peer_infos = 2;
}
message ReportPeersResponse {}
message GlobalPeerMap { map<uint32, PeerInfoForGlobalMap> map = 1; }
message GetGlobalPeerMapRequest { uint64 digest = 1; }
message GetGlobalPeerMapResponse {
map<uint32, PeerInfoForGlobalMap> global_peer_map = 1;
optional uint64 digest = 2;
}
service PeerCenterRpc {
rpc ReportPeers(ReportPeersRequest) returns (ReportPeersResponse);
rpc GetGlobalPeerMap(GetGlobalPeerMapRequest)
returns (GetGlobalPeerMapResponse);
}
message HandshakeRequest {
uint32 magic = 1;
uint32 my_peer_id = 2;
uint32 version = 3;
repeated string features = 4;
string network_name = 5;
bytes network_secret_digrest = 6;
}