mirror of https://github.com/soxoj/maigret.git
synced 2026-05-09 16:14:32 +00:00

Compare commits: `extra-db-flagush` ... `dev` (9 commits)

| Author | SHA1 | Date |
|---|---|---|
|  | 0f25db7179 |  |
|  | e962b8c693 |  |
|  | c6cfef84ce |  |
|  | b0ed09eb3e |  |
|  | 4e3bd3ab58 |  |
|  | 77c11df119 |  |
|  | 25026e21ea |  |
|  | b1004588af |  |
|  | 4bd2f7cb35 |  |
@@ -2,7 +2,7 @@ name: Build docker image and push to DockerHub

 on:
   push:
-    branches: [ main ]
+    branches: [ main, dev ]

 jobs:
   docker:
@@ -10,24 +10,62 @@ jobs:
     steps:
       -
         name: Set up QEMU
-        uses: docker/setup-qemu-action@v1
+        uses: docker/setup-qemu-action@v3
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1
+        uses: docker/setup-buildx-action@v3
       -
         name: Login to DockerHub
-        uses: docker/login-action@v1
+        uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKER_HUB_USERNAME }}
           password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
       -
-        name: Build and push
-        id: docker_build
-        uses: docker/build-push-action@v2
-        with:
-          push: true
-          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/maigret:latest
-      -
-        name: Image digest
-        run: echo ${{ steps.docker_build.outputs.digest }}
+        name: Extract metadata (CLI)
+        id: meta_cli
+        uses: docker/metadata-action@v5
+        with:
+          images: ${{ secrets.DOCKER_HUB_USERNAME }}/maigret
+          tags: |
+            type=raw,value=latest,enable={{is_default_branch}}
+            type=ref,event=branch
+            type=sha,prefix=
+      -
+        name: Extract metadata (Web UI)
+        id: meta_web
+        uses: docker/metadata-action@v5
+        with:
+          images: ${{ secrets.DOCKER_HUB_USERNAME }}/maigret
+          tags: |
+            type=raw,value=web,enable={{is_default_branch}}
+            type=ref,event=branch,suffix=-web
+            type=sha,prefix=web-
+      -
+        name: Build and push (CLI, default)
+        id: docker_build_cli
+        uses: docker/build-push-action@v6
+        with:
+          push: true
+          target: cli
+          tags: ${{ steps.meta_cli.outputs.tags }}
+          labels: ${{ steps.meta_cli.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
+      -
+        name: Build and push (Web UI)
+        id: docker_build_web
+        uses: docker/build-push-action@v6
+        with:
+          push: true
+          target: web
+          tags: ${{ steps.meta_web.outputs.tags }}
+          labels: ${{ steps.meta_web.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
+      -
+        name: Image digests
+        run: |
+          echo "cli: ${{ steps.docker_build_cli.outputs.digest }}"
+          echo "web: ${{ steps.docker_build_web.outputs.digest }}"
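For a push to the `dev` branch at a hypothetical commit SHA `abc1234`, the two metadata steps above resolve to tag sets along the following lines. This is a sketch of the templating rules (`type=raw` gated on the default branch, `type=ref` with an optional suffix, `type=sha` with a prefix), not the official `docker/metadata-action` implementation; the function names and the SHA are invented:

```python
def cli_tags(repo: str, branch: str, sha: str, default_branch: str = "main") -> list:
    # CLI variant: "latest" only on the default branch, plus a branch tag
    # (type=ref,event=branch) and a bare-SHA tag (type=sha,prefix=).
    tags = []
    if branch == default_branch:
        tags.append(f"{repo}:latest")
    tags.append(f"{repo}:{branch}")
    tags.append(f"{repo}:{sha}")
    return tags


def web_tags(repo: str, branch: str, sha: str, default_branch: str = "main") -> list:
    # Web variant: "web" on the default branch, "-web" suffix on branch tags,
    # "web-" prefix on SHA tags.
    tags = []
    if branch == default_branch:
        tags.append(f"{repo}:web")
    tags.append(f"{repo}:{branch}-web")
    tags.append(f"{repo}:web-{sha}")
    return tags


print(cli_tags("soxoj/maigret", "dev", "abc1234"))
# ['soxoj/maigret:dev', 'soxoj/maigret:abc1234']
print(web_tags("soxoj/maigret", "dev", "abc1234"))
# ['soxoj/maigret:dev-web', 'soxoj/maigret:web-abc1234']
```

On `main`, the same inputs would additionally yield the `latest` and `web` tags, which is what makes the two stable variant names possible.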
+10 -1
@@ -1,4 +1,4 @@
-FROM python:3.11-slim
+FROM python:3.11-slim AS base
 LABEL maintainer="Soxoj <soxoj@protonmail.com>"
 WORKDIR /app
 RUN pip install --no-cache-dir --upgrade pip
@@ -15,4 +15,13 @@ COPY . .
 RUN YARL_NO_EXTENSIONS=1 python3 -m pip install --no-cache-dir .
 # For production use, set FLASK_HOST to a specific IP address for security
 ENV FLASK_HOST=0.0.0.0
+
+# Web UI variant: auto-launches the web interface on $PORT
+FROM base AS web
+ENV PORT=5000
+EXPOSE 5000
+ENTRYPOINT ["sh", "-c", "exec maigret --web \"$PORT\""]
+
+# Default variant (last stage = `docker build .` target): CLI, backwards-compatible
+FROM base AS cli
 ENTRYPOINT ["maigret"]
@@ -109,7 +109,7 @@ Download a standalone EXE from [Releases](https://github.com/soxoj/maigret/releases)

 Run Maigret in the browser via cloud shells or Jupyter notebooks:

-[](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/soxoj/maigret&tutorial=README.md)
+<a href="https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/soxoj/maigret&tutorial=cloudshell-tutorial.md"><img src="https://user-images.githubusercontent.com/27065646/92304704-8d146d80-ef80-11ea-8c29-0deaabb1c702.png" alt="Open in Cloud Shell" height="50"></a>
 <a href="https://repl.it/github/soxoj/maigret"><img src="https://replit.com/badge/github/soxoj/maigret" alt="Run on Replit" height="50"></a>

 <a href="https://colab.research.google.com/gist/soxoj/879b51bc3b2f8b695abb054090645000/maigret-collab.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" height="45"></a>
@@ -140,15 +140,27 @@ maigret username

 ### Docker

+Two image variants are published:
+
+- `soxoj/maigret:latest` — CLI mode (default)
+- `soxoj/maigret:web` — auto-launches the [web interface](#web-interface)
+
 ```bash
-# official image
+# official image (CLI)
 docker pull soxoj/maigret

-# usage
+# CLI usage
 docker run -v /mydir:/app/reports soxoj/maigret:latest username --html

+# Web UI (open http://localhost:5000)
+docker run -p 5000:5000 soxoj/maigret:web
+
+# Web UI on a custom port
+docker run -e PORT=8080 -p 8080:8080 soxoj/maigret:web
+
 # manual build
-docker build -t maigret .
+docker build -t maigret .                    # CLI image (default target)
+docker build --target web -t maigret-web .   # Web UI image
 ```

 ### Troubleshooting
@@ -0,0 +1,69 @@
# Maigret

<div align="center">
  <img src="https://raw.githubusercontent.com/soxoj/maigret/main/static/maigret.png" height="220" alt="Maigret logo"/>
</div>

**Maigret** collects a dossier on a person **by username only**, checking for accounts on a huge number of sites and gathering all the available information from web pages. No API keys required.

## Installation

Google Cloud Shell does not ship with all the system libraries Maigret needs (`libcairo2-dev`, `pkg-config`). The helper script below installs them and then builds Maigret from the cloned source.

Copy the command and run it in the Cloud Shell terminal:

```bash
./utils/cloudshell_install.sh
```

When the script finishes, verify the install:

```bash
maigret --version
```

## Usage examples

Run a basic search for a username. By default Maigret checks the **500 highest-ranked sites by traffic** — pass `-a` to scan the full database of 3,000+ sites.

```bash
maigret soxoj
```

Search several usernames at once:

```bash
maigret user1 user2 user3
```

Narrow the run to sites related to cryptocurrency via the `crypto` tag (you can also use country tags):

```bash
maigret vitalik.eth --tags crypto
```

Generate reports in HTML, PDF, and XMind 8 formats:

```bash
maigret soxoj --html
maigret soxoj --pdf
maigret soxoj --xmind
```

Download a generated report from Cloud Shell to your local machine:

```bash
cloudshell download reports/report_soxoj.pdf
```

Tune reliability on flaky networks — raise the timeout and retry failed checks:

```bash
maigret soxoj --timeout 60 --retries 2
```

For the full list of options see `maigret --help` or the [CLI documentation](https://maigret.readthedocs.io/en/latest/command-line-options.html).

## Further reading

Full project documentation: [maigret.readthedocs.io](https://maigret.readthedocs.io/)
@@ -142,18 +142,28 @@ There are few options for sites data.json helpful in various cases:

 ``protection`` (site protection tracking)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-The ``protection`` field records what kind of anti-bot protection a site uses. Maigret reads this field and automatically applies the appropriate bypass mechanism.
+The ``protection`` field records what kind of anti-bot protection a site uses. Maigret reads this field and automatically applies the appropriate bypass mechanism where one exists.
+
+Two categories of tag:
+
+- **Load-bearing.** Maigret changes its HTTP client or headers based on the tag. Currently only ``tls_fingerprint`` (switches to ``curl_cffi`` with Chrome-class TLS).
+- **Documentation-only.** Maigret does **not** change behavior based on the tag; it records *why* the site is hard so a future solver can target the right set of sites without re-auditing.
+
+Within the documentation-only tags, there is a further split that dictates whether the site is ``disabled: true``:
+
+- ``ip_reputation`` is the **only** doc-tag that **keeps the site enabled**. It means "works for most users, fails from datacenter/cloud IPs." Disabling would silently hide a working site from anyone with a clean IP. The fix is **external** to Maigret (residential IP or ``--proxy``).
+- ``cf_js_challenge``, ``cf_firewall``, ``aws_waf_js_challenge``, ``ddos_guard_challenge``, ``custom_bot_protection``, ``js_challenge`` all pair with ``disabled: true``. They mean "does not work for anyone right now"; the tag identifies the provider so that when a bypass ships, every site with that tag can be re-enabled in one pass.

 Supported values:

-- ``tls_fingerprint`` — the site fingerprints the TLS handshake (JA3/JA4) and blocks non-browser clients. Maigret automatically uses ``curl_cffi`` with Chrome browser emulation to bypass this. Requires the ``curl_cffi`` package (included as a dependency). Examples: Instagram, NPM, Codepen, Kickstarter, Letterboxd.
-- ``ip_reputation`` — the site blocks requests from datacenter/cloud IPs regardless of headers or TLS. Cannot be bypassed automatically; run Maigret from a regular internet connection (not a datacenter) or use a proxy (``--proxy``). Examples: Reddit, Patreon, Figma.
-- ``cf_js_challenge`` — Cloudflare Managed Challenge / Turnstile JS challenge. Symptom: HTTP 403 with ``cf-mitigated: challenge`` header; body contains ``challenges.cloudflare.com``, ``_cf_chl_opt``, ``window._cf_chl``, or "Just a moment". Not bypassable via ``curl_cffi`` TLS impersonation (verified across Chrome 123/124/131, Safari 17/18, Firefox 133/135, Edge 101 — all return the same 403 challenge page); a real browser executing the challenge JS is required to obtain the clearance cookie. Documentation-only flag; sites stay ``disabled: true`` until a CF-challenge solver is integrated. Examples: DMOJ, Elakiri, Fanlore, Bdoutdoors, TheStudentRoom, forum.hr.
-- ``cf_firewall`` — Cloudflare firewall rule / bot score block (WAF action=block, **not** action=challenge). Symptom: HTTP 403 served by Cloudflare (``server: cloudflare``, ``cf-ray`` header) **without** JS-challenge markers — body typically shows "Access denied", "Attention Required", or just a bare 1015/1016/1020 error page. Unlike ``ip_reputation``, residential IPs are **not** sufficient to bypass — Cloudflare decides based on a composite of bot score, TLS fingerprint, UA, ASN, and custom site-owner rules, so ``curl_cffi`` Chrome impersonation from a residential line still returns 403. Documentation-only flag; sites stay ``disabled: true`` until a per-site bypass (cookies, real browser, or residential+clean session) is found. Examples: Fark, Fodors, Huntingnet, Hunttalk.
-- ``aws_waf_js_challenge`` — the site is protected by AWS WAF with a JavaScript challenge. Symptom: HTTP 202 with empty body and ``x-amzn-waf-action: challenge`` header (a token-granting challenge that requires executing the CAPTCHA/challenge JS bundle). Neither ``curl_cffi`` TLS impersonation nor User-Agent changes bypass this — a real browser or the official AWS WAF challenge-solver SDK is required. Currently marked for documentation only; sites using this protection stay ``disabled: true`` until a solver is integrated. Example: Dreamwidth.
-- ``ddos_guard_challenge`` — DDoS-Guard (ddos-guard.net) anti-bot page. Symptom: HTTP 403 with ``server: ddos-guard`` header; body contains "DDoS-Guard". DDoS-Guard fingerprints different UAs per source IP, so a single User-Agent override does not work across environments; a JS-capable bypass or DDoS-Guard-aware solver is required. Documentation-only flag; sites stay ``disabled: true`` until a solver is integrated. Example: ForumHouse.
-- ``js_challenge`` — **fallback** for JavaScript-challenge systems whose provider cannot be identified (custom in-house challenge pages that are not Cloudflare, AWS WAF, or any other recognized vendor). Prefer a provider-specific tag whenever the provider can be pinned down from response headers or body signatures.
-- ``custom_bot_protection`` — **fallback** for non-JS-challenge bot protection served by a custom/in-house system (not Cloudflare, not AWS WAF, not DDoS-Guard). Typical symptom: HTTP 403 from the site's own origin server (``server: nginx``, AWS ELB, etc.) with a branded block page, returned regardless of TLS fingerprint or residential IP. Not generically bypassable; investigate per site (cookies, session, proxy geography). Examples: Hackerearth ("HackerEarth Guardian"), FreelanceJob (nginx-level block).
+- ``tls_fingerprint`` *(load-bearing; site stays enabled)* — the site fingerprints the TLS handshake (JA3/JA4) and blocks non-browser clients. Maigret automatically uses ``curl_cffi`` with Chrome browser emulation to bypass this. Requires the ``curl_cffi`` package (included as a dependency). Examples: Instagram, NPM, Codepen, Kickstarter, Letterboxd.
+- ``ip_reputation`` *(documentation-only; site stays enabled)* — the site blocks requests from datacenter/cloud IPs regardless of headers or TLS. Cannot be bypassed automatically; run Maigret from a regular internet connection (not a datacenter) or use a proxy (``--proxy``). The site is **not** marked ``disabled`` because it continues to work for users on residential IPs. Examples: Reddit, Patreon, Figma, OnlyFans.
+- ``cf_js_challenge`` *(documentation-only; pair with ``disabled: true``)* — Cloudflare Managed Challenge / Turnstile JS challenge. Symptom: HTTP 403 with ``cf-mitigated: challenge`` header; body contains ``challenges.cloudflare.com``, ``_cf_chl_opt``, ``window._cf_chl``, or "Just a moment". Not bypassable via ``curl_cffi`` TLS impersonation (verified across Chrome 123/124/131, Safari 17/18, Firefox 133/135, Edge 101 — all return the same 403 challenge page); a real browser executing the challenge JS is required to obtain the clearance cookie. Sites stay ``disabled: true`` until a CF-challenge solver is integrated. Examples: DMOJ, Elakiri, Fanlore, Bdoutdoors, TheStudentRoom, forum.hr.
+- ``cf_firewall`` *(documentation-only; pair with ``disabled: true``)* — Cloudflare firewall rule / bot score block (WAF action=block, **not** action=challenge). Symptom: HTTP 403 served by Cloudflare (``server: cloudflare``, ``cf-ray`` header) **without** JS-challenge markers — body typically shows "Access denied", "Attention Required", or just a bare 1015/1016/1020 error page. Unlike ``ip_reputation``, residential IPs are **not** sufficient to bypass — Cloudflare decides based on a composite of bot score, TLS fingerprint, UA, ASN, and custom site-owner rules, so ``curl_cffi`` Chrome impersonation from a residential line still returns 403. Sites stay ``disabled: true`` until a per-site bypass (cookies, real browser, or residential+clean session) is found. Examples: Fark, Fodors, Huntingnet, Hunttalk.
+- ``aws_waf_js_challenge`` *(documentation-only; pair with ``disabled: true``)* — the site is protected by AWS WAF with a JavaScript challenge. Symptom: HTTP 202 with empty body and ``x-amzn-waf-action: challenge`` header (a token-granting challenge that requires executing the CAPTCHA/challenge JS bundle). Neither ``curl_cffi`` TLS impersonation nor User-Agent changes bypass this — a real browser or the official AWS WAF challenge-solver SDK is required. Sites stay ``disabled: true`` until a solver is integrated. Example: Dreamwidth.
+- ``ddos_guard_challenge`` *(documentation-only; pair with ``disabled: true``)* — DDoS-Guard (ddos-guard.net) anti-bot page. Symptom: HTTP 403 with ``server: ddos-guard`` header; body contains "DDoS-Guard". DDoS-Guard fingerprints different UAs per source IP, so a single User-Agent override does not work across environments; a JS-capable bypass or DDoS-Guard-aware solver is required. Sites stay ``disabled: true`` until a solver is integrated. Example: ForumHouse.
+- ``js_challenge`` *(documentation-only; pair with ``disabled: true``)* — **fallback** for JavaScript-challenge systems whose provider cannot be identified (custom in-house challenge pages that are not Cloudflare, AWS WAF, or any other recognized vendor). Prefer a provider-specific tag whenever the provider can be pinned down from response headers or body signatures.
+- ``custom_bot_protection`` *(documentation-only; pair with ``disabled: true``)* — **fallback** for non-JS-challenge bot protection served by a custom/in-house system (not Cloudflare, not AWS WAF, not DDoS-Guard). Typical symptom: HTTP 403 from the site's own origin server (``server: nginx``, AWS ELB, etc.) with a branded block page, returned regardless of TLS fingerprint or residential IP. Not generically bypassable; investigate per site (cookies, session, proxy geography). Examples: Hackerearth ("HackerEarth Guardian"), FreelanceJob (nginx-level block).
+
+**Rule: prefer provider-specific protection tags.** When a site is blocked by an identifiable anti-bot vendor, always record the vendor in the tag (``cf_js_challenge``, ``cf_firewall``, ``aws_waf_js_challenge``, ``ddos_guard_challenge``, and future additions such as ``sucuri_challenge``, ``incapsula_challenge``). The generic ``js_challenge`` and ``custom_bot_protection`` tags are reserved for custom/unknown systems. Rationale: bypass solvers are inherently provider-specific (a Cloudflare Turnstile solver does not help with AWS WAF); recording the provider in advance lets us fan out fixes the moment a per-provider solver is added, without re-auditing every disabled site. The same principle applies to other protection categories when the provider is identifiable.
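The split above comes down to two fields on each site entry in ``data.json``: the ``protection`` tag and, for every tag except ``tls_fingerprint`` and ``ip_reputation``, ``disabled: true``. A minimal sketch of two hypothetical entries (site names and URLs are invented, and the exact shape of the ``protection`` value is not shown in this diff; a list of tags is assumed):

```json
{
  "HypotheticalForum": {
    "url": "https://forum.example.com/users/{username}",
    "protection": ["cf_js_challenge"],
    "disabled": true
  },
  "HypotheticalHub": {
    "url": "https://hub.example.com/{username}",
    "protection": ["ip_reputation"]
  }
}
```

Under this scheme, re-enabling every Cloudflare-challenged site after a solver ships is a single query for ``"protection"`` containing ``cf_js_challenge``.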
+158
@@ -0,0 +1,158 @@
"""Maigret AI Analysis Module

Provides AI-powered analysis of search results using OpenAI-compatible APIs.
"""

import asyncio
import json
import os
import sys
import threading

import aiohttp


def load_ai_prompt() -> str:
    """Load the AI system prompt from the resources directory."""
    maigret_path = os.path.dirname(os.path.realpath(__file__))
    prompt_path = os.path.join(maigret_path, "resources", "ai_prompt.txt")
    with open(prompt_path, "r", encoding="utf-8") as f:
        return f.read()


def resolve_api_key(settings) -> str | None:
    """Resolve OpenAI API key from settings or environment variable.

    Priority: settings.openai_api_key > OPENAI_API_KEY env var.
    """
    key = getattr(settings, "openai_api_key", None)
    if key:
        return key
    return os.environ.get("OPENAI_API_KEY")


class _Spinner:
    """Simple animated spinner for terminal output."""

    FRAMES = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]

    def __init__(self, text=""):
        self.text = text
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def _spin(self):
        i = 0
        while not self._stop.is_set():
            frame = self.FRAMES[i % len(self.FRAMES)]
            sys.stderr.write(f"\r{frame} {self.text}")
            sys.stderr.flush()
            i += 1
            self._stop.wait(0.08)

    def stop(self):
        self._stop.set()
        if self._thread:
            self._thread.join()
        sys.stderr.write("\r\033[2K")
        sys.stderr.flush()


async def print_streaming(text: str, delay: float = 0.04):
    """Print text word by word with a delay, simulating streaming LLM output."""
    words = text.split(" ")
    for i, word in enumerate(words):
        if i > 0:
            sys.stdout.write(" ")
        sys.stdout.write(word)
        sys.stdout.flush()
        await asyncio.sleep(delay)
    sys.stdout.write("\n")
    sys.stdout.flush()


async def get_ai_analysis(
    api_key: str,
    markdown_report: str,
    model: str = "gpt-4o",
    api_base_url: str = "https://api.openai.com/v1",
) -> str:
    """Send the markdown report to an OpenAI-compatible API and return the analysis.

    Uses streaming to display tokens as they arrive.
    Raises on HTTP errors with descriptive messages.
    """
    system_prompt = load_ai_prompt()

    url = f"{api_base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "stream": True,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": markdown_report},
        ],
    }

    spinner = _Spinner("Analysing the data with AI...")
    spinner.start()
    first_token = True
    full_response = []

    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload, headers=headers) as resp:
                if resp.status == 401:
                    raise RuntimeError("Invalid OpenAI API key (HTTP 401)")
                if resp.status == 429:
                    raise RuntimeError("OpenAI API rate limit exceeded (HTTP 429)")
                if resp.status != 200:
                    body = await resp.text()
                    raise RuntimeError(
                        f"OpenAI API error (HTTP {resp.status}): {body[:500]}"
                    )

                async for line in resp.content:
                    decoded = line.decode("utf-8").strip()
                    if not decoded or not decoded.startswith("data: "):
                        continue

                    data_str = decoded[len("data: "):]
                    if data_str == "[DONE]":
                        break

                    try:
                        chunk = json.loads(data_str)
                    except json.JSONDecodeError:
                        continue

                    delta = chunk.get("choices", [{}])[0].get("delta", {})
                    content = delta.get("content", "")
                    if not content:
                        continue

                    if first_token:
                        spinner.stop()
                        print()
                        first_token = False

                    full_response.append(content)
                    sys.stdout.write(content)
                    sys.stdout.flush()
    except Exception:
        spinner.stop()
        raise

    if first_token:
        # No tokens received — stop spinner anyway
        spinner.stop()

    print()
    return "".join(full_response)
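The streaming loop in `get_ai_analysis` filters raw SSE lines down to content tokens. Its filtering rules (ignore non-`data:` lines, stop at `[DONE]`, skip malformed JSON and empty deltas) can be exercised in isolation; the helper name and the sample lines below are invented for illustration:

```python
import json


def extract_stream_tokens(lines):
    """Mirror of the SSE filtering in get_ai_analysis (hypothetical helper)."""
    tokens = []
    for raw in lines:
        decoded = raw.strip()
        if not decoded or not decoded.startswith("data: "):
            continue  # comments, keep-alives, event lines
        data_str = decoded[len("data: "):]
        if data_str == "[DONE]":
            break  # end-of-stream sentinel; later chunks are ignored
        try:
            chunk = json.loads(data_str)
        except json.JSONDecodeError:
            continue  # malformed chunk, skip
        delta = chunk.get("choices", [{}])[0].get("delta", {})
        content = delta.get("content", "")
        if content:
            tokens.append(content)
    return tokens


sample = [
    ": keep-alive",
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: not-json",
    "data: [DONE]",
    'data: {"choices":[{"delta":{"content":"ignored"}}]}',
]
print("".join(extract_stream_tokens(sample)))  # prints "Hello"
```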
+7 -1
@@ -345,7 +345,11 @@ def process_site_result(
     username = results_info["username"]
     is_parsing_enabled = results_info["parsing_enabled"]
     url = results_info.get("url_user")
-    logger.info(url)
+    url_probe = results_info.get("url_probe") or url
+    if url_probe != url:
+        logger.info(f"{url_probe} (display: {url})")
+    else:
+        logger.info(url)

     status = results_info.get("status")
     if status is not None:
@@ -603,6 +607,8 @@ def make_site_result(
     for k, v in site.get_params.items():
         url_probe += f"&{k}={v}"

+    results_site["url_probe"] = url_probe
+
     if site.request_method:
         request_method = site.request_method.lower()
     elif site.check_type == "status_code" and site.request_head_only:
+80 -14
@@ -494,6 +494,21 @@ def setup_arguments_parser(settings: Settings):
         " (one report per username).",
     )

+    report_group.add_argument(
+        "--ai",
+        action="store_true",
+        dest="ai",
+        default=False,
+        help="Generate an AI-powered analysis of the search results using OpenAI API. "
+        "Requires OPENAI_API_KEY env var or openai_api_key in settings.",
+    )
+    report_group.add_argument(
+        "--ai-model",
+        dest="ai_model",
+        default=settings.openai_model,
+        help="OpenAI model to use for AI analysis (default: gpt-4o).",
+    )
+
     parser.add_argument(
         "--reports-sorting",
         default=settings.report_sorting,
@@ -596,6 +611,7 @@ async def main():
         print_found_only=not args.print_not_found,
         skip_check_errors=not args.print_check_errors,
         color=not args.no_color,
+        silent=args.ai,
     )

     # Create object with all information about sites we are aware of.
@@ -711,17 +727,33 @@ async def main():
         + get_dict_ascii_tree(usernames, prepend="\t")
     )

+    if args.ai:
+        from .ai import resolve_api_key
+
+        if not resolve_api_key(settings):
+            query_notify.warning(
+                'AI analysis requires an OpenAI API key. '
+                'Set OPENAI_API_KEY environment variable or add '
+                'openai_api_key to settings.json.'
+            )
+            sys.exit(1)
+
     if not site_data:
         query_notify.warning('No sites to check, exiting!')
         sys.exit(2)

-    query_notify.warning(
-        f'Starting a search on top {len(site_data)} sites from the Maigret database...'
-    )
-    if not args.all_sites:
-        query_notify.warning(
-            'You can run search by full list of sites with flag `-a`', '!'
-        )
+    if args.ai:
+        query_notify.warning(
+            f'Starting AI-assisted search on top {len(site_data)} sites from the Maigret database...'
+        )
+    else:
+        query_notify.warning(
+            f'Starting a search on top {len(site_data)} sites from the Maigret database...'
+        )
+        if not args.all_sites:
+            query_notify.warning(
+                'You can run search by full list of sites with flag `-a`', '!'
+            )

     already_checked = set()
     general_results = []
@@ -774,11 +806,12 @@ async def main():
             check_domains=args.with_domains,
         )

-        errs = errors.notify_about_errors(
-            results, query_notify, show_statistics=args.verbose
-        )
-        for e in errs:
-            query_notify.warning(*e)
+        if not args.ai:
+            errs = errors.notify_about_errors(
+                results, query_notify, show_statistics=args.verbose
+            )
+            for e in errs:
+                query_notify.warning(*e)

         if args.reports_sorting == "data":
             results = sort_report_by_data_points(results)
@@ -867,10 +900,43 @@ async def main():
         save_graph_report(filename, general_results, db)
         query_notify.warning(f'Graph report on all usernames saved in (unknown)')

-    text_report = get_plaintext_report(report_context)
-    if text_report:
-        query_notify.info('Short text report:')
-        print(text_report)
+    if not args.ai:
+        text_report = get_plaintext_report(report_context)
+        if text_report:
+            query_notify.info('Short text report:')
+            print(text_report)
+
+    if args.ai:
+        from .ai import get_ai_analysis, resolve_api_key
+        from .report import generate_markdown_report
+
+        api_key = resolve_api_key(settings)
+
+        run_flags = []
+        if args.tags:
+            run_flags.append(f"--tags {args.tags}")
+        if args.site_list:
+            run_flags.append(f"--site {','.join(args.site_list)}")
+        if args.all_sites:
+            run_flags.append("--all-sites")
+        run_info = {
+            "sites_count": sum(len(d) for _, _, d in general_results),
+            "flags": " ".join(run_flags) if run_flags else None,
+        }
+
+        md_report = generate_markdown_report(report_context, run_info=run_info)
+
+        try:
+            await get_ai_analysis(
+                api_key=api_key,
+                markdown_report=md_report,
+                model=args.ai_model,
+                api_base_url=getattr(
+                    settings, 'openai_api_base_url', 'https://api.openai.com/v1'
+                ),
+            )
+        except Exception as e:
+            query_notify.warning(f'AI analysis failed: {e}')

     # update database
     db.save_to_file(db_file)
@@ -123,6 +123,7 @@ class QueryNotifyPrint(QueryNotify):
         print_found_only=False,
         skip_check_errors=False,
         color=True,
+        silent=False,
     ):
         """Create Query Notify Print Object.

@@ -149,6 +150,7 @@ class QueryNotifyPrint(QueryNotify):
         self.print_found_only = print_found_only
         self.skip_check_errors = skip_check_errors
         self.color = color
+        self.silent = silent

         return

@@ -187,6 +189,9 @@ class QueryNotifyPrint(QueryNotify):
             Nothing.
         """

+        if self.silent:
+            return
+
         title = f"Checking {id_type}"
         if self.color:
             print(
@@ -236,6 +241,9 @@ class QueryNotifyPrint(QueryNotify):
         Return Value:
             Nothing.
         """
+        if self.silent:
+            return
+
         notify = None
         self.result = result
+15 -6
@@ -30,14 +30,18 @@ UTILS


 def filter_supposed_data(data):
     # interesting fields
     allowed_fields = ["fullname", "gender", "location", "age"]
-    filtered_supposed_data = {
-        CaseConverter.snake_to_title(k): v[0]
+
+    def _first(v):
+        if isinstance(v, (list, tuple)):
+            return v[0] if v else ""
+        return v
+
+    return {
+        CaseConverter.snake_to_title(k): _first(v)
         for k, v in data.items()
         if k in allowed_fields
     }
-    return filtered_supposed_data


 def sort_report_by_data_points(results):
@@ -267,7 +271,7 @@ def _md_format_value(value) -> str:
     return s


-def save_markdown_report(filename: str, context: dict, run_info: dict = None):
+def generate_markdown_report(context: dict, run_info: dict = None) -> str:
     username = context.get("username", "unknown")
     generated_at = context.get("generated_at", "")
     brief = context.get("brief", "")
@@ -391,8 +395,13 @@ def save_markdown_report(filename: str, context: dict, run_info: dict = None):
         "CCPA, and similar).\n"
     )

-    with open(filename, "w", encoding="utf-8") as f:
-        f.write("\n".join(lines))
+    return "\n".join(lines)
+
+
+def save_markdown_report(filename: str, context: dict, run_info: dict = None):
+    content = generate_markdown_report(context, run_info)
+    with open(filename, "w", encoding="utf-8") as f:
+        f.write(content)

 """
@@ -0,0 +1,62 @@
You are an OSINT analyst that converts raw username-investigation reports into a short, clean human-readable summary.

Your task:
Read the attached account-discovery report and produce a concise report in exactly this style:

# Investigation Summary

Name: <most likely real full name>
Location: <most likely current location>
Occupation: <short combined description based only on strong signals>
Interests: <3–6 broad interests inferred from platform types, bios, and activity>
Languages: <languages supported by strong evidence only>
Website: <main personal website if clearly present>
Username: <main username> (variant: <variant usernames if any>)
Platforms: <number> profiles, active from <first year> to <last year>
Confidence: <High / Medium / Low> — <one short explanation why>

# Other leads

- <lead 1>
- <lead 2>
- <lead 3 if needed>

Rules:
1. Use only information supported by the report.
2. Resolve identity using consistency of username, full name, bio, links, company, and location.
3. Prefer strong repeated signals over one-off weak signals.
4. If one profile clearly conflicts with the rest, mention it in "Other leads" as a likely false positive instead of mixing it into the main identity.
5. Keep the tone analytical and neutral.
6. Do not mention every platform individually.
7. Do not include raw URLs except for the main website.
8. Do not mention NSFW/adult platforms in the main summary unless they are the only source for a critical lead; if such a profile looks inconsistent, mention it only as a likely false positive.
9. "Occupation" should be a compact merged description, for example: "Chief Product Officer (CPO) at ..., entrepreneur, OSINT community founder".
10. "Interests" should be broad categories, not noisy tags. Convert raw platform/tag evidence into natural categories like OSINT, software development, blogging, gaming, streaming, etc.
11. "Languages" should only include languages clearly supported by bios, texts, country tags, or profile content.
12. For "Platforms", count the profiles reported as found by the report summary, not manually deduplicated.
13. For active years, use the earliest and latest reliable dates from the consistent identity cluster. Ignore obvious outlier dates if they belong to likely false positives or weak profiles.
14. For confidence:
    - High = strong consistency across username, name, bio, links, location, and/or company
    - Medium = partial consistency with some gaps
    - Low = mostly username-only matches
15. If some field is not reliably known, omit speculation and use the best cautious wording possible.
16. For "Name", output only the most likely real personal name in clean canonical form.
    - Remove nicknames, handles, aliases, or bracketed parts such as "(Soxoj)".
    - Example: "Dmitriy (Soxoj) Danilov" -> "Dmitriy Danilov".
17. For "Website", output only the plain domain or URL as text, not a markdown hyperlink.
18. In "Other leads", do not label conflicting profiles as "false positive", "likely unrelated", or "potentially a false positive".
    - Instead, use neutral intelligence wording such as:
      "Accounts were found that are most likely unrelated to the main identity, but may indicate possible cross-border activity and should be verified."
19. When describing anomalies in "Other leads", prefer cautious investigative phrasing:
    - "may be unrelated"
    - "requires verification"
    - "could indicate separate activity"
    - "should be checked manually"
20. Do not include nicknames or aliases inside the Name field unless they are clearly part of the legal or real-world name.

Output requirements:
- Return only the final formatted text.
- Keep it short.
- No preamble, no explanations.

Now analyze the following report
+104 -66
@@ -40,7 +40,7 @@
],
"alexaRank": 3,
"urlMain": "https://www.youtube.com/",
"url": "https://www.youtube.com/@{username}",
"url": "https://www.youtube.com/@{username}/about",
"usernameClaimed": "test",
"usernameUnclaimed": "noonewouldeverusethis777"
},
@@ -63,7 +63,7 @@
],
"alexaRank": 3,
"urlMain": "https://www.youtube.com/",
"url": "https://www.youtube.com/@{username}",
"url": "https://www.youtube.com/@{username}/about",
"usernameClaimed": "test",
"usernameUnclaimed": "noonewouldeverusethis777"
},
@@ -100,7 +100,7 @@
"sec-ch-ua": "Google Chrome\";v=\"87\", \" Not;A Brand\";v=\"99\", \"Chromium\";v=\"87\"",
"authorization": "Bearer AAAAAAAAAAAAAAAAAAAAANRILgAAAAAAnNwIzUejRCOuH5E6I8xnZz4puTs%3D1Zv7ttfk8LF81IUq16cHjhLTvJu4FA33AGWWjCpTnA",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
"x-guest-token": "2045154491230572773"
"x-guest-token": "2048070238281826593"
},
"errors": {
"Bad guest token": "x-guest-token update required"
@@ -296,7 +296,7 @@
"method": "vimeo"
},
"headers": {
"Authorization": "jwt eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3NzY0Mzg3MjAsInVzZXJfaWQiOm51bGwsImFwcF9pZCI6NTg0NzksInNjb3BlcyI6InB1YmxpYyIsInRlYW1fdXNlcl9pZCI6bnVsbCwianRpIjoiNjY0OWY3ZWItMThjZS00ODU1LWIzNmEtNWY3MzRkOGZhNjAyIn0.l1SRcr5UqvxqYLveW7MTECKSfkgsbh1y9QZqZmBX1EI"
"Authorization": "jwt eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3NzcxMzM4ODAsInVzZXJfaWQiOm51bGwsImFwcF9pZCI6NTg0NzksInNjb3BlcyI6InB1YmxpYyIsInRlYW1fdXNlcl9pZCI6bnVsbCwianRpIjoiZjFiMGJjNWUtMjIyOC00Y2I1LWFlNmItODk0YjZhNGNmODJhIn0.YCPXekRrHnJy8iH1gX4iVuNURiw6sU_FlmsfHnM2oug"
},
"urlProbe": "https://api.vimeo.com/users/{username}?fields=name%2Cgender%2Cbio%2Curi%2Clink%2Cbackground_video%2Clocation_details%2Cpictures%2Cverified%2Cmetadata.public_videos.total%2Cavailable_for_hire%2Ccan_work_remotely%2Cmetadata.connections.videos.total%2Cmetadata.connections.albums.total%2Cmetadata.connections.followers.total%2Cmetadata.connections.following.total%2Cmetadata.public_videos.total%2Cmetadata.connections.vimeo_experts.is_enrolled%2Ctotal_collection_count%2Ccreated_time%2Cprofile_preferences%2Cmembership%2Cclients%2Cskills%2Cproject_types%2Crates%2Ccategories%2Cis_expert%2Cprofile_discovery%2Cwebsites%2Ccontact_emails&fetch_user_profile=1",
"checkType": "status_code",
@@ -491,6 +491,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Reddit": {
"disabled": true,
"protection": [
"custom_bot_protection"
],
"tags": [
"discussion",
"news",
@@ -511,10 +515,7 @@
"url": "https://www.reddit.com/user/{username}",
"urlProbe": "https://api.reddit.com/user/{username}/about",
"usernameClaimed": "blue",
"usernameUnclaimed": "noonewouldeverusethis7",
"protection": [
"tls_fingerprint"
]
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Tumblr": {
"tags": [
@@ -1338,6 +1339,9 @@
"did not match any articles",
"not match"
],
"errors": {
"Our systems have detected unusual traffic": "Google rate-limit / captcha"
},
"tags": [
"education",
"research"
@@ -1613,6 +1617,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Quora": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"tags": [
"education"
],
@@ -1779,6 +1787,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Patreon": {
"disabled": true,
"protection": [
"cf_js_challenge"
],
"tags": [
"finance"
],
@@ -2044,6 +2056,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Shutterstock": {
"disabled": true,
"protection": [
"custom_bot_protection"
],
"tags": [
"music",
"photo",
@@ -2807,6 +2823,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"PyPi": {
"disabled": true,
"protection": [
"custom_bot_protection"
],
"tags": [
"coding"
],
@@ -2818,10 +2838,7 @@
"urlMain": "https://pypi.org/",
"url": "https://pypi.org/user/{username}",
"usernameClaimed": "adam",
"usernameUnclaimed": "noonewouldeverusethis7",
"protection": [
"tls_fingerprint"
]
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Pastebin": {
"tags": [
@@ -3490,8 +3507,7 @@
"usernameUnclaimed": "noonewouldeverusethis7",
"alexaRank": 1426,
"absenceStrs": [
"not found",
"404"
"<title>false | GeeksforGeeks Profile"
],
"tags": [
"coding",
@@ -3632,6 +3648,10 @@
"disabled": true
},
"Redbubble": {
"disabled": true,
"protection": [
"cf_js_challenge"
],
"tags": [
"shopping"
],
@@ -3640,10 +3660,7 @@
"urlMain": "https://www.redbubble.com/",
"url": "https://www.redbubble.com/people/{username}",
"usernameClaimed": "blue",
"usernameUnclaimed": "noonewouldeverusethis77777",
"protection": [
"tls_fingerprint"
]
"usernameUnclaimed": "noonewouldeverusethis77777"
},
"codeberg.org": {
"tags": [
@@ -5613,6 +5630,9 @@
"alexaRank": 2472
},
"OnlyFans": {
"protection": [
"ip_reputation"
],
"tags": [
"porn"
],
@@ -5622,8 +5642,8 @@
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
"user-id": "0",
"x-bc": "0a106d301866494c873ae3a05bc3c97cee59a749",
"time": "1776790550214",
"sign": "57203:31541b62efa9f19fafc79ca8002b1d0f12335c1d:6d2:69cfa6d8",
"time": "1777132991121",
"sign": "57203:3723aa7d500e76eabca29df74e4e97c483f14204:66d:69cfa6d8",
"referer": "https://onlyfans.com/",
"cookie": "__cf_bm=YovfzPN0T_wg6F60L5eZKPOQvlGESws3UDGgEkmPb9A-1776790253-1.0.1.1-KRZgptNe5P9epBZSdITa12VfTEDlDdLckPY3I2FDAacvCPxOj0PqeK86J5mcC7UQ_TM8_O24bAh21ElYINovqk2386EoJYyLmknHJ5UsFts"
},
@@ -6995,6 +7015,10 @@
]
},
"LibraryThing": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"tags": [
"books"
],
@@ -7168,6 +7192,10 @@
]
},
"Speedrun.com": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"tags": [
"gaming"
],
@@ -7922,11 +7950,14 @@
"alexaRank": 6720
},
"Kick": {
"protection": [
"tls_fingerprint"
],
"url": "https://kick.com/{username}",
"urlMain": "https://kick.com/",
"urlProbe": "https://kick.com/api/v2/channels/{username}",
"checkType": "status_code",
"usernameClaimed": "blue",
"usernameClaimed": "xqc",
"usernameUnclaimed": "noonewouldeverusethis7",
"alexaRank": 6474,
"tags": [
@@ -8368,6 +8399,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"PlanetMinecraft": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"tags": [
"gaming"
],
@@ -9354,6 +9389,10 @@
"alexaRank": 8514
},
"Rate Your Music": {
"disabled": true,
"protection": [
"cf_js_challenge"
],
"tags": [
"music"
],
@@ -9890,6 +9929,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"JeuxVideo": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"tags": [
"fr",
"gaming"
@@ -9983,7 +10026,8 @@
},
"Anime-planet": {
"protection": [
"tls_fingerprint"
"tls_fingerprint",
"ip_reputation"
],
"tags": [
"anime"
@@ -10475,17 +10519,6 @@
"usernameClaimed": "blue",
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Fotolog": {
"tags": [
"photo"
],
"engine": "engine404get",
"urlMain": "http://fotolog.com",
"url": "http://fotolog.com/{username}",
"usernameUnclaimed": "noonewouldeverusethis7",
"usernameClaimed": "red",
"alexaRank": 11693
},
"PushSquare": {
"tags": [
"gaming",
@@ -10615,6 +10648,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Lomography": {
"disabled": true,
"protection": [
"cf_js_challenge"
],
"absenceStrs": [
"<title>404 · Lomography</title>"
],
@@ -10874,6 +10911,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Liberapay": {
"disabled": true,
"protection": [
"cf_js_challenge"
],
"tags": [
"finance"
],
@@ -11034,6 +11075,7 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Joomlart": {
"disabled": true,
"tags": [
"coding"
],
@@ -11146,12 +11188,14 @@
"alexaRank": 14969,
"urlMain": "https://www.vivino.com/",
"url": "https://www.vivino.com/users/{username}",
"urlProbe": "https://api.vivino.com/users/{username}",
"usernameClaimed": "adam",
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Flyertalk": {
"protection": [
"tls_fingerprint"
"tls_fingerprint",
"ip_reputation"
],
"tags": [
"travel"
@@ -11798,6 +11842,10 @@
"alexaRank": 20421
},
"Smule": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"tags": [
"music"
],
@@ -13285,6 +13333,10 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Smogon": {
"disabled": true,
"protection": [
"custom_bot_protection"
],
"tags": [
"gaming"
],
@@ -13336,6 +13388,9 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"PromptBase": {
"protection": [
"ip_reputation"
],
"absenceStrs": [
"NotFound"
],
@@ -15289,6 +15344,7 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Knowem": {
"disabled": true,
"tags": [
"business"
],
@@ -15558,6 +15614,7 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Polywork": {
"disabled": true,
"checkType": "message",
"absenceStrs": [
">404</h3>",
@@ -15700,9 +15757,13 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Designspiration": {
"protection": [
"cf_js_challenge",
"tls_fingerprint"
],
"checkType": "status_code",
"urlMain": "https://www.designspiration.net/",
"url": "https://www.designspiration.net/{username}/",
"urlMain": "https://designspiration.com/",
"url": "https://designspiration.com/{username}/",
"usernameClaimed": "blue",
"usernameUnclaimed": "noonewouldeverusethis7",
"alexaRank": 89022,
@@ -17640,6 +17701,7 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"the-mainboard.com": {
"disabled": true,
"tags": [
"forum",
"us"
@@ -17863,6 +17925,7 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Onlyfinder": {
"disabled": true,
"absenceStrs": [
"\"rows\":[]"
],
@@ -18029,18 +18092,6 @@
],
"alexaRank": 379171
},
"Pitomec": {
"tags": [
"ru",
"ua"
],
"checkType": "status_code",
"alexaRank": 228310,
"urlMain": "https://www.pitomec.ru",
"url": "https://www.pitomec.ru/{username}",
"usernameClaimed": "adam",
"usernameUnclaimed": "noonewouldeverusethis7"
},
"Loveplanet": {
"disabled": true,
"tags": [
@@ -18717,24 +18768,6 @@
"ua"
]
},
"SQL.ru": {
"tags": [
"ru"
],
"checkType": "message",
"presenseStrs": [
"По вашему запросу найдено"
],
"absenceStrs": [
"Извините",
" но по вашему запросу ничего не найдено"
],
"url": "https://www.sql.ru/forum/actualsearch.aspx?a={username}&ma=0",
"urlMain": "https://www.sql.ru",
"usernameClaimed": "buser",
"usernameUnclaimed": "noonewouldeverusethis7",
"alexaRank": 285351
},
"Pepper PL": {
"url": "https://www.pepper.pl/profile/{username}",
"urlMain": "https://www.pepper.pl/",
@@ -18828,6 +18861,10 @@
},
"Math10": {
"urlSubpath": "/forum",
"disabled": true,
"protection": [
"cf_js_challenge"
],
"tags": [
"forum",
"ru"
@@ -19042,6 +19079,7 @@
"usernameUnclaimed": "noonewouldeverusethis7"
},
"mcfc-fan.ru": {
"disabled": true,
"engine": "uCoz",
"urlMain": "http://mcfc-fan.ru",
"usernameUnclaimed": "noonewouldeverusethis7",
@@ -1,8 +1,8 @@
{
"version": 1,
"updated_at": "2026-04-22T16:15:02Z",
"sites_count": 3142,
"updated_at": "2026-04-26T09:18:14Z",
"sites_count": 3139,
"min_maigret_version": "0.6.0",
"data_sha256": "1e1ed6da2aa9db0f34171f61a044c20bbd1ed53a0430dec4a9ce8f8543655d1a",
"data_sha256": "c51ecaa6c0736c5e1e7ca91aaf111445b3ac9ce9541a472d97db2dcc3ff8aa17",
"data_url": "https://raw.githubusercontent.com/soxoj/maigret/main/maigret/resources/data.json"
}
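The metadata above pins the remote database with a SHA-256 digest (`data_sha256`) published alongside its `data_url`, so an updater can verify a downloaded data.json before swapping it in. A minimal sketch of such a check (illustrative helper, not Maigret's actual updater code):

```python
import hashlib


def db_payload_matches(payload: bytes, expected_sha256: str) -> bool:
    # Hash the downloaded data.json bytes and compare against the
    # data_sha256 value published in db_meta.json.
    return hashlib.sha256(payload).hexdigest() == expected_sha256.lower()
```

A client would only replace its local database when the digest matches; a mismatch indicates a corrupted or tampered download.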
@@ -55,6 +55,9 @@
"pdf_report": false,
"html_report": false,
"md_report": false,
"openai_api_key": "",
"openai_model": "gpt-4o",
"openai_api_base_url": "https://api.openai.com/v1",
"web_interface_port": 5000,
"no_autoupdate": false,
"db_update_meta_url": "https://raw.githubusercontent.com/soxoj/maigret/main/maigret/resources/db_meta.json",
Generated
+6 -6
@@ -418,14 +418,14 @@ files = [

[[package]]
name = "certifi"
version = "2026.2.25"
version = "2026.4.22"
description = "Python package for providing Mozilla's CA Bundle."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "certifi-2026.2.25-py3-none-any.whl", hash = "sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa"},
{file = "certifi-2026.2.25.tar.gz", hash = "sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7"},
{file = "certifi-2026.4.22-py3-none-any.whl", hash = "sha256:3cb2210c8f88ba2318d29b0388d1023c8492ff72ecdde4ebdaddbb13a31b1c4a"},
{file = "certifi-2026.4.22.tar.gz", hash = "sha256:8d455352a37b71bf76a79caa83a3d6c25afee4a385d632127b6afb3963f1c580"},
]

[[package]]
@@ -1261,14 +1261,14 @@ lxml = ["lxml ; platform_python_implementation == \"CPython\""]

[[package]]
name = "idna"
version = "3.12"
version = "3.13"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "idna-3.12-py3-none-any.whl", hash = "sha256:60ffaa1858fac94c9c124728c24fcde8160f3fb4a7f79aa8cdd33a9d1af60a67"},
{file = "idna-3.12.tar.gz", hash = "sha256:724e9952cc9e2bd7550ea784adb098d837ab5267ef67a1ab9cf7846bdbdd8254"},
{file = "idna-3.13-py3-none-any.whl", hash = "sha256:892ea0cde124a99ce773decba204c5552b69c3c67ffd5f232eb7696135bc8bb3"},
{file = "idna-3.13.tar.gz", hash = "sha256:585ea8fe5d69b9181ec1afba340451fba6ba764af97026f92a91d4eef164a242"},
]

[package.extras]
@@ -1,5 +1,5 @@
maigret @ https://github.com/soxoj/maigret/archive/refs/heads/main.zip
pefile==2023.2.7 # do not bump while pyinstaller is 6.11.1, there is a conflict
psutil==7.2.2
pyinstaller==6.19.0
pyinstaller==6.20.0
pywin32-ctypes==0.2.3
@@ -1,5 +1,5 @@
|
||||
|
||||
## List of supported sites (search methods): total 3142
|
||||
## List of supported sites (search methods): total 3139
|
||||
|
||||
Rank data fetched from Majestic Million by domains.
|
||||
|
||||
@@ -22,7 +22,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [WordPress (https://wordpress.com)](https://wordpress.com)*: top 50, blog*
|
||||
1.  [Google Plus (archived) (https://plus.google.com)](https://plus.google.com)*: top 50, social*
|
||||
1.  [Telegram (https://t.me/)](https://t.me/)*: top 50, messaging*
|
||||
1.  [Reddit (https://www.reddit.com/)](https://www.reddit.com/)*: top 50, discussion, news, social*
|
||||
1.  [Reddit (https://www.reddit.com/)](https://www.reddit.com/)*: top 50, discussion, news, social*, search is disabled
|
||||
1.  [Tumblr (https://www.tumblr.com)](https://www.tumblr.com)*: top 100, blog, social*
|
||||
1.  [Spotify (https://open.spotify.com/)](https://open.spotify.com/)*: top 100, music*
|
||||
1.  [Archive.org (https://archive.org)](https://archive.org)*: top 100, archive*, search is disabled
|
||||
@@ -101,7 +101,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [OP.GG LoL Vietnam (https://www.op.gg/)](https://www.op.gg/)*: top 500, gaming, vn*
|
||||
1.  [OP.GG LoL Thailand (https://www.op.gg/)](https://www.op.gg/)*: top 500, gaming, th*
|
||||
1.  [Xing (https://www.xing.com/)](https://www.xing.com/)*: top 500, de, eu*
|
||||
1.  [Patreon (https://www.patreon.com/)](https://www.patreon.com/)*: top 500, finance*
|
||||
1.  [Patreon (https://www.patreon.com/)](https://www.patreon.com/)*: top 500, finance*, search is disabled
|
||||
1.  [DeviantART (https://deviantart.com)](https://deviantart.com)*: top 500, art, photo*
|
||||
1.  [Gofundme (https://www.gofundme.com)](https://www.gofundme.com)*: top 500, finance*
|
||||
1.  [Zhihu (https://www.zhihu.com/)](https://www.zhihu.com/)*: top 500, cn*, search is disabled
|
||||
@@ -117,7 +117,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [OK (https://ok.ru/)](https://ok.ru/)*: top 1K, ru, social*
|
||||
1.  [Photobucket (https://photobucket.com/)](https://photobucket.com/)*: top 1K, photo*, search is disabled
|
||||
1.  [Udemy (https://www.udemy.com)](https://www.udemy.com)*: top 1K, education*, search is disabled
|
||||
1.  [Shutterstock (https://www.shutterstock.com)](https://www.shutterstock.com)*: top 1K, music, photo, stock*
|
||||
1.  [Shutterstock (https://www.shutterstock.com)](https://www.shutterstock.com)*: top 1K, music, photo, stock*, search is disabled
|
||||
1.  [MixCloud (https://www.mixcloud.com/)](https://www.mixcloud.com/)*: top 1K, music*
|
||||
1.  [NPM (https://www.npmjs.com/)](https://www.npmjs.com/)*: top 1K, coding*
|
||||
1.  [NPM-Package (https://www.npmjs.com/)](https://www.npmjs.com/)*: top 1K, coding*
|
||||
@@ -139,7 +139,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [Gumroad (https://www.gumroad.com/)](https://www.gumroad.com/)*: top 1K, shopping*
|
||||
1.  [Upwork (https://upwork.com)](https://upwork.com)*: top 1K, freelance*
|
||||
1.  [Yumpu (https://www.yumpu.com)](https://www.yumpu.com)*: top 1K, stock*, search is disabled
|
||||
1.  [PyPi (https://pypi.org/)](https://pypi.org/)*: top 1K, coding*
|
||||
1.  [PyPi (https://pypi.org/)](https://pypi.org/)*: top 1K, coding*, search is disabled
|
||||
1.  [Douban (https://www.douban.com)](https://www.douban.com)*: top 1K, cn*
|
||||
1.  [LonelyPlanet (https://www.lonelyplanet.com)](https://www.lonelyplanet.com)*: top 1K, travel*, search is disabled
|
||||
1.  [Figma (https://www.figma.com/)](https://www.figma.com/)*: top 1K, design*
|
||||
@@ -183,7 +183,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [AllTrails (https://www.alltrails.com/)](https://www.alltrails.com/)*: top 5K, sport, travel*, search is disabled
|
||||
1.  [Habr (https://habr.com/)](https://habr.com/)*: top 5K, blog, discussion, ru*
|
||||
1.  [AllRecipes (https://www.allrecipes.com/)](https://www.allrecipes.com/)*: top 5K, hobby*
|
||||
1.  [Redbubble (https://www.redbubble.com/)](https://www.redbubble.com/)*: top 5K, shopping*
|
||||
1.  [Redbubble (https://www.redbubble.com/)](https://www.redbubble.com/)*: top 5K, shopping*, search is disabled
|
||||
1.  [Diigo (https://www.diigo.com/)](https://www.diigo.com/)*: top 5K, bookmarks*
|
||||
1.  [Windy (https://windy.com/)](https://windy.com/)*: top 5K, maps*
|
||||
1.  [Codecanyon (https://codecanyon.net)](https://codecanyon.net)*: top 5K, coding, shopping*
|
||||
@@ -360,7 +360,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [Paltalk (https://www.paltalk.com)](https://www.paltalk.com)*: top 10K, messaging*
|
||||
1.  [NICommunityForum (https://www.native-instruments.com/forum/)](https://www.native-instruments.com/forum/)*: top 10K, forum*
|
||||
1.  [Ccm (https://ccm.net)](https://ccm.net)*: top 10K, fr*
|
||||
1.  [Rate Your Music (https://rateyourmusic.com/)](https://rateyourmusic.com/)*: top 10K, music*
|
||||
1.  [Rate Your Music (https://rateyourmusic.com/)](https://rateyourmusic.com/)*: top 10K, music*, search is disabled
|
||||
1.  [VideoHive (https://videohive.net)](https://videohive.net)*: top 10K, video*
|
||||
1.  [authorSTREAM (http://www.authorstream.com/)](http://www.authorstream.com/)*: top 10K, documents, in, sharing*, search is disabled
|
||||
1.  [Airliners (https://www.airliners.net/)](https://www.airliners.net/)*: top 10K, hobby, photo*, search is disabled
|
||||
@@ -407,7 +407,6 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [hi5 (http://www.hi5.com)](http://www.hi5.com)*: top 100K, social*, search is disabled
|
||||
1.  [Diary.ru (https://diary.ru)](https://diary.ru)*: top 100K, blog, ru*, search is disabled
|
||||
1.  [MirTesen (https://mirtesen.ru)](https://mirtesen.ru)*: top 100K, news, ru*, search is disabled
|
||||
1.  [Fotolog (http://fotolog.com)](http://fotolog.com)*: top 100K, photo*
|
||||
1.  [Aufeminin (https://www.aufeminin.com)](https://www.aufeminin.com)*: top 100K, fr*
|
||||
1.  [Coderwall (https://coderwall.com/)](https://coderwall.com/)*: top 100K, coding*
|
||||
1.  [PCPartPicker (https://pcpartpicker.com)](https://pcpartpicker.com)*: top 100K, shopping, tech*, search is disabled
|
||||
@@ -417,13 +416,13 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [TheStudentRoom (https://www.thestudentroom.co.uk)](https://www.thestudentroom.co.uk)*: top 100K, forum, gb*, search is disabled
|
||||
1.  [Codementor (https://www.codementor.io/)](https://www.codementor.io/)*: top 100K, coding*
|
||||
1.  [N4g (https://n4g.com/)](https://n4g.com/)*: top 100K, gaming, news*
|
||||
1.  [Lomography (https://www.lomography.com)](https://www.lomography.com)*: top 100K, photo*
|
||||
1.  [Lomography (https://www.lomography.com)](https://www.lomography.com)*: top 100K, photo*, search is disabled
|
||||
1.  [pixelfed.social (https://pixelfed.social/)](https://pixelfed.social/)*: top 100K, art, photo*
|
||||
1.  [Hackerearth (https://www.hackerearth.com)](https://www.hackerearth.com)*: top 100K, freelance*, search is disabled
|
||||
1.  [Weedmaps (https://weedmaps.com)](https://weedmaps.com)*: top 100K, us*
|
||||
1.  [Redtube (https://www.redtube.com/)](https://www.redtube.com/)*: top 100K, porn*
|
||||
1.  [Neoseeker (https://www.neoseeker.com)](https://www.neoseeker.com)*: top 100K, forum, gaming*
|
||||
1.  [Liberapay (https://liberapay.com)](https://liberapay.com)*: top 100K, finance*
|
||||
1.  [Liberapay (https://liberapay.com)](https://liberapay.com)*: top 100K, finance*, search is disabled
|
||||
1.  [Sythe (https://www.sythe.org)](https://www.sythe.org)*: top 100K, forum*
|
||||
1.  [FilmWeb (https://www.filmweb.pl/user/adam)](https://www.filmweb.pl/user/adam)*: top 100K, movies, pl*
|
||||
1.  [Listal (https://listal.com/)](https://listal.com/)*: top 100K, movies, music*
|
||||
@@ -438,7 +437,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [mastodon.social (https://chaos.social/)](https://chaos.social/)*: top 100K, social*
|
||||
1.  [notabug.org (https://notabug.org/)](https://notabug.org/)*: top 100K, coding*
|
||||
1.  [Livemaster (https://www.livemaster.ru)](https://www.livemaster.ru)*: top 100K, ru*, search is disabled
|
||||
1.  [Joomlart (https://www.joomlart.com)](https://www.joomlart.com)*: top 100K, coding*
|
||||
1.  [Joomlart (https://www.joomlart.com)](https://www.joomlart.com)*: top 100K, coding*, search is disabled
|
||||
1.  [Trinixy (https://trinixy.ru)](https://trinixy.ru)*: top 100K, news, ru*
|
||||
1.  [TripIt (https://tripit.com)](https://tripit.com)*: top 100K, travel*, search is disabled
|
||||
1.  [Mydramalist (https://mydramalist.com)](https://mydramalist.com)*: top 100K, kr, movies*
|
||||
@@ -578,7 +577,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [BabyBlog.ru (https://www.babyblog.ru/)](https://www.babyblog.ru/)*: top 100K, ru*
|
||||
1.  [7Cups (https://www.7cups.com/)](https://www.7cups.com/)*: top 100K, medicine*, search is disabled
|
||||
1.  [CTFtime (https://ctftime.org/)](https://ctftime.org/)*: top 100K, hacking*
|
||||
1.  [Smogon (https://www.smogon.com)](https://www.smogon.com)*: top 100K, gaming*
|
||||
1.  [Smogon (https://www.smogon.com)](https://www.smogon.com)*: top 100K, gaming*, search is disabled
|
||||
1.  [LOR (https://linux.org.ru/)](https://linux.org.ru/)*: top 100K, ru*
|
||||
1.  [Mouthshut (https://www.mouthshut.com/)](https://www.mouthshut.com/)*: top 100K, in*
|
||||
1.  [Eva (https://eva.ru/)](https://eva.ru/)*: top 100K, ru*, search is disabled
|
||||
@@ -679,7 +678,7 @@ Rank data fetched from Majestic Million by domains.
|
||||
1.  [Partyflock (https://partyflock.nl)](https://partyflock.nl)*: top 100K, nl*
|
||||
1.  [Trisquel (https://trisquel.info)](https://trisquel.info)*: top 100K, eu*
|
||||
1.  [Pokemon Showdown (https://pokemonshowdown.com)](https://pokemonshowdown.com)*: top 100K, gaming*
|
||||
1.  [Knowem (https://knowem.com/)](https://knowem.com/)*: top 100K, business*
1.  [Knowem (https://knowem.com/)](https://knowem.com/)*: top 100K, business*, search is disabled
1.  [MoiKrug (https://moikrug.ru/)](https://moikrug.ru/)*: top 100K, career*
1.  [Medikforum (https://www.medikforum.ru)](https://www.medikforum.ru)*: top 100K, de, forum, nl, ru, ua*, search is disabled
1.  [mynickname.com (https://mynickname.com)](https://mynickname.com)*: top 100K, social*
@@ -694,7 +693,7 @@ Rank data fetched from Majestic Million by domains.
1.  [Govloop (https://www.govloop.com)](https://www.govloop.com)*: top 100K, education*
1.  [Wakatime (https://wakatime.com)](https://wakatime.com)*: top 100K, ng, ve*
1.  [Cqham (http://www.cqham.ru)](http://www.cqham.ru)*: top 100K, ru, tech*
1.  [Designspiration (https://www.designspiration.net/)](https://www.designspiration.net/)*: top 100K, design*
1.  [Designspiration (https://designspiration.com/)](https://designspiration.com/)*: top 100K, design*
1.  [Politforums (https://www.politforums.net/)](https://www.politforums.net/)*: top 100K, forum, ru*
1.  [NameMC (https://namemc.com/)](https://namemc.com/)*: top 100K, gaming*
1.  [EuroFootball (https://www.euro-football.ru)](https://www.euro-football.ru)*: top 100K, ru*
@@ -702,7 +701,7 @@ Rank data fetched from Majestic Million by domains.
1.  [Truelancer (https://www.truelancer.com)](https://www.truelancer.com)*: top 100K, in*
1.  [Icheckmovies (https://www.icheckmovies.com/)](https://www.icheckmovies.com/)*: top 100K, movies*
1.  [Likee (https://likee.video)](https://likee.video)*: top 100K, video*
1.  [Polywork (https://www.polywork.com)](https://www.polywork.com)*: top 100K, career*
1.  [Polywork (https://www.polywork.com)](https://www.polywork.com)*: top 100K, career*, search is disabled
1.  [ForumHouse (https://www.forumhouse.ru/)](https://www.forumhouse.ru/)*: top 100K, forum, ru*, search is disabled
1.  [AnimeSuperHero (https://animesuperhero.com)](https://animesuperhero.com)*: top 100K, forum*, search is disabled
1.  [SportsTracker (https://www.sports-tracker.com/)](https://www.sports-tracker.com/)*: top 100K, pt, ru*
@@ -805,7 +804,7 @@ Rank data fetched from Majestic Million by domains.
1.  [Jigidi (https://www.jigidi.com/)](https://www.jigidi.com/)*: top 10M, hobby*
1.  [Allhockey (https://allhockey.ru/)](https://allhockey.ru/)*: top 10M, ru*
1.  [Runitonce (https://www.runitonce.com/)](https://www.runitonce.com/)*: top 10M, ca*
1.  [Onlyfinder (https://onlyfinder.com)](https://onlyfinder.com)*: top 10M, webcam*
1.  [Onlyfinder (https://onlyfinder.com)](https://onlyfinder.com)*: top 10M, webcam*, search is disabled
1.  [Postila (https://postila.ru/)](https://postila.ru/)*: top 10M, ru*
1.  [Chemport (https://www.chemport.ru)](https://www.chemport.ru)*: top 10M, forum, ru*
1.  [Vapenews (https://vapenews.ru/)](https://vapenews.ru/)*: top 10M, ru*
@@ -824,7 +823,7 @@ Rank data fetched from Majestic Million by domains.
1.  [Loveplanet (https://loveplanet.ru)](https://loveplanet.ru)*: top 10M, dating, ru*, search is disabled
1.  [sevenstring.org (https://sevenstring.org)](https://sevenstring.org)*: top 10M, forum*, search is disabled
1.  [Bikepost (https://bikepost.ru)](https://bikepost.ru)*: top 10M, ru*
1.  [the-mainboard.com (http://the-mainboard.com/index.php)](http://the-mainboard.com/index.php)*: top 10M, forum, us*
1.  [the-mainboard.com (http://the-mainboard.com/index.php)](http://the-mainboard.com/index.php)*: top 10M, forum, us*, search is disabled
1.  [australianfrequentflyer.com.au (https://www.australianfrequentflyer.com.au/community/)](https://www.australianfrequentflyer.com.au/community/)*: top 10M, au, forum*
1.  [4stor (https://4stor.ru)](https://4stor.ru)*: top 10M, ru*
1.  [subaruoutback.org (https://subaruoutback.org)](https://subaruoutback.org)*: top 10M, forum, us*
@@ -834,7 +833,6 @@ Rank data fetched from Majestic Million by domains.
1.  [Snooth (https://www.snooth.com/)](https://www.snooth.com/)*: top 10M, news*
1.  [svtperformance.com (https://svtperformance.com)](https://svtperformance.com)*: top 10M, forum, us*
1.  [DefensiveCarry (https://www.defensivecarry.com)](https://www.defensivecarry.com)*: top 10M, us*
1.  [Pitomec (https://www.pitomec.ru)](https://www.pitomec.ru)*: top 10M, ru, ua*
1.  [GotovimDoma (https://gotovim-doma.ru)](https://gotovim-doma.ru)*: top 10M, ru*, search is disabled
1.  [Chollometro (https://www.chollometro.com/)](https://www.chollometro.com/)*: top 10M, es, shopping*
1.  [Hpc (https://hpc.ru)](https://hpc.ru)*: top 10M, ru*
@@ -870,9 +868,8 @@ Rank data fetched from Majestic Million by domains.
1.  [Affiliatefix (https://www.affiliatefix.com)](https://www.affiliatefix.com)*: top 10M, forum*
1.  [Shophelp (https://shophelp.ru/)](https://shophelp.ru/)*: top 10M, ru*
1.  [BeerMoneyForum (https://www.beermoneyforum.com)](https://www.beermoneyforum.com)*: top 10M, finance, forum, gambling*, search is disabled
1.  [Math10 (https://www.math10.com/)](https://www.math10.com/)*: top 10M, forum, ru*
1.  [Math10 (https://www.math10.com/)](https://www.math10.com/)*: top 10M, forum, ru*, search is disabled
1.  [Pepper PL (https://www.pepper.pl/)](https://www.pepper.pl/)*: top 10M, pl*
1.  [SQL.ru (https://www.sql.ru)](https://www.sql.ru)*: top 10M, ru*
1.  [sigtalk.com (https://sigtalk.com)](https://sigtalk.com)*: top 10M, forum, us*
1.  [mir-stalkera.ru (http://mir-stalkera.ru)](http://mir-stalkera.ru)*: top 10M, gaming, ru*
1.  [Pedsovet (https://pedsovet.su/)](https://pedsovet.su/)*: top 10M, ru*, search is disabled
@@ -890,7 +887,7 @@ Rank data fetched from Majestic Million by domains.
1.  [lada-vesta.net (http://www.lada-vesta.net)](http://www.lada-vesta.net)*: top 10M, auto, forum, ru*
1.  [Sysadmins (https://sysadmins.ru)](https://sysadmins.ru)*: top 10M, forum, ru, tech*
1.  [Plug.DJ (https://plug.dj/)](https://plug.dj/)*: top 10M, music*, search is disabled
1.  [mcfc-fan.ru (http://mcfc-fan.ru)](http://mcfc-fan.ru)*: top 10M, ru, sport*
1.  [mcfc-fan.ru (http://mcfc-fan.ru)](http://mcfc-fan.ru)*: top 10M, ru, sport*, search is disabled
1.  [Hipforums (https://www.hipforums.com/)](https://www.hipforums.com/)*: top 10M, forum, ru*, search is disabled
1.  [Rusfishing (https://www.rusfishing.ru)](https://www.rusfishing.ru)*: top 10M, ru*
1.  [jeepgarage.org (https://jeepgarage.org)](https://jeepgarage.org)*: top 10M, forum, us*
@@ -3146,24 +3143,24 @@ Rank data fetched from Majestic Million by domains.
1.  [flarum.es (https://flarum.es)](https://flarum.es)*: top 100M, es, forum*
1.  [forum.fibra.click (https://forum.fibra.click)](https://forum.fibra.click)*: top 100M, forum, it*

The list was updated at (2026-04-22)
The list was updated at (2026-04-26)
## Statistics

Enabled/total sites: 2529/3142 = 80.49%
Enabled/total sites: 2510/3139 = 79.96%

Incomplete message checks: 320/2529 = 12.65% (false positive risks)
Incomplete message checks: 317/2510 = 12.63% (false positive risks)

Status code checks: 632/2529 = 24.99% (false positive risks)
Status code checks: 625/2510 = 24.9% (false positive risks)

False positive risk (total): 37.64%
False positive risk (total): 37.53%
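The combined figure above is just the sum of the two per-check shares; a quick sanity check in Python (numbers copied from the updated lines above):

```python
# Sanity check: the total false positive risk is the sum of the two shares.
incomplete = 317 / 2510 * 100  # incomplete message checks
status = 625 / 2510 * 100      # status code checks
print(round(incomplete, 2), round(status, 2), round(incomplete + status, 2))
# → 12.63 24.9 37.53
```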

Sites with probing: 500px, Armchairgm, BinarySearch (disabled), BleachFandom, Bluesky, BongaCams, Boosty, BuyMeACoffee, Calendly, Cent, Chess, Code Sandbox, Code Snippet Wiki, DailyMotion, Discord, Diskusjon.no, Disqus, Docker Hub, Duolingo, FandomCommunityCentral, GitHub, GitLab, Google Plus (archived), Gravatar, HackTheBox, Hackerrank, Hashnode, Holopin, Imgur, Issuu, Keybase, Kick, Kvinneguiden, LeetCode, Lesswrong, Livejasmin, LocalCryptos (disabled), Medium, MicrosoftLearn, MixCloud, Monkeytype, NPM, Niftygateway, Omg.lol, OnlyFans, Paragraph, Picsart, Plurk, Polarsteps, Rarible, Reddit, Reddit Search (Pushshift) (disabled), Revolut.me, RoyalCams, Scratch, Soop, SportsTracker, Spotify, StackOverflow, Substack, TAP'D, Topcoder, Trello, Twitch, Twitter, Twitter Shadowban (disabled), UnstoppableDomains, Vimeo, Warframe Market, Warpcast, Weibo, Wikipedia, Yapisal (disabled), YouNow, en.brickimedia.org, nightbot, notabug.org, qiwi.me (disabled)
Sites with probing: 500px, Armchairgm, BinarySearch (disabled), BleachFandom, Bluesky, BongaCams, Boosty, BuyMeACoffee, Calendly, Cent, Chess, Code Sandbox, Code Snippet Wiki, DailyMotion, Discord, Diskusjon.no, Disqus, Docker Hub, Duolingo, FandomCommunityCentral, GitHub, GitLab, Google Plus (archived), Gravatar, HackTheBox, Hackerrank, Hashnode, Holopin, Imgur, Issuu, Keybase, Kick, Kvinneguiden, LeetCode, Lesswrong, Livejasmin, LocalCryptos (disabled), Medium, MicrosoftLearn, MixCloud, Monkeytype, NPM, Niftygateway, Omg.lol, OnlyFans, Paragraph, Picsart, Plurk, Polarsteps, Rarible, Reddit (disabled), Reddit Search (Pushshift) (disabled), Revolut.me, RoyalCams, Scratch, Soop, SportsTracker, Spotify, StackOverflow, Substack, TAP'D, Topcoder, Trello, Twitch, Twitter, Twitter Shadowban (disabled), UnstoppableDomains, Vimeo, Vivino, Warframe Market, Warpcast, Weibo, Wikipedia, Yapisal (disabled), YouNow, en.brickimedia.org, nightbot, notabug.org, qiwi.me (disabled)

Sites with activation: OnlyFans, Twitter, Vimeo, Weibo

Top 20 profile URLs:
- (709) `{urlMain}/index/8-0-{username} (uCoz)`
- (314) `/{username}`
- (312) `/{username}`
- (223) `{urlMain}{urlSubpath}/members/?username={username} (XenForo)`
- (170) `/user/{username}`
- (138) `/profile/{username}`
@@ -3172,7 +3169,7 @@ Top 20 profile URLs:
- (116) `/u/{username}`
- (93) `/users/{username}`
- (87) `{urlMain}/u/{username}/summary (Discourse)`
- (70) `/@{username}`
- (68) `/@{username}`
- (55) `/wiki/User:{username}`
- (45) `SUBDOMAIN`
- (38) `/members/?username={username}`
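The patterns above are str.format-style templates; a minimal sketch of how such a profile URL is expanded (the site values here are made-up examples, not real maigret data):

```python
# Hypothetical site values; the template is the uCoz pattern from the list above.
template = "{urlMain}/index/8-0-{username}"
url = template.format(urlMain="https://example-site.ru", username="alice")
print(url)  # → https://example-site.ru/index/8-0-alice
```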
@@ -3185,19 +3182,19 @@ Top 20 profile URLs:


Sites by engine:
- `uCoz`: 635/709 (89.6%)
- `XenForo`: 182/223 (81.6%)
- `uCoz`: 634/709 (89.4%)
- `XenForo`: 181/223 (81.2%)
- `phpBB/Search`: 120/127 (94.5%)
- `vBulletin`: 31/120 (25.8%)
- `Discourse`: 81/87 (93.1%)
- `phpBB`: 22/27 (81.5%)
- `phpBB`: 21/27 (77.8%)
- `engine404`: 19/23 (82.6%)
- `op.gg`: 17/17 (100.0%)
- `Flarum`: 15/15 (100.0%)
- `Wordpress/Author`: 7/9 (77.8%)
- `engineRedirect`: 3/4 (75.0%)
- `engine404get`: 3/3 (100.0%)
- `phpBB2/Search`: 2/3 (66.7%)
- `engine404get`: 2/2 (100.0%)


Top 20 tags:
@@ -3205,7 +3202,7 @@ Top 20 tags:
- (750) `forum`
- (128) `gaming`
- (80) `coding`
- (58) `photo`
- (57) `photo`
- (46) `tech`
- (45) `social`
- (41) `news`

@@ -56,3 +56,110 @@ async def test_import_aiohttp_cookies(cookie_test_server):
    print(f"Server response: {result}")

    assert result == {'cookies': {'a': 'b'}}


# ---- OnlyFans signing tests (pure-compute, no network) ----

class _FakeSite:
    """Minimal stand-in for MaigretSite with the attributes onlyfans() touches."""

    def __init__(self, headers=None, activation=None):
        self.headers = headers or {}
        self.activation = activation or {
            "static_param": "jLM8LXHU1CGcuCzPMNwWX9osCScVuP4D",
            "checksum_indexes": [28, 3, 16, 32, 25, 24, 23, 0, 26],
            "checksum_constant": -180,
            "format": "57203:{}:{:x}:69cfa6d8",
            "url": "https://onlyfans.com/api2/v2/init",
        }


class _FakeResponse:
    def __init__(self, cookies=None):
        self.cookies = cookies or {}


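The activation parameters in `_FakeSite` drive a request-signing scheme; the sketch below shows one plausible construction consistent with what the tests assert. The helper name `sign_request`, the SHA-1 digest, and the newline-joined message layout are assumptions for illustration, not the library's actual code:

```python
import hashlib
import time


def sign_request(activation, path, user_id="0", now=None):
    """Hypothetical signing helper matching the _FakeSite parameters above."""
    ts = str(int(now if now is not None else time.time()))
    # Assumed message layout: static_param, timestamp, URL path, user id.
    message = "\n".join([activation["static_param"], ts, path, user_id])
    digest = hashlib.sha1(message.encode("utf-8")).hexdigest()
    checksum = (
        sum(ord(digest[i]) for i in activation["checksum_indexes"])
        + activation["checksum_constant"]
    )
    return ts, activation["format"].format(digest, abs(checksum))
```

With this construction the properties the tests below rely on hold: the same timestamp and path always yield the same signature, and changing the path changes it.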
def test_onlyfans_sets_xbc_when_zero(monkeypatch):
    site = _FakeSite(headers={"x-bc": "0", "cookie": "existing=1"})

    # Prevent any real network call. If the signing path still fires requests.get, fail loudly.
    import requests

    def boom(*args, **kwargs):  # pragma: no cover - sanity guard
        raise AssertionError("requests.get should not run when a cookie is present")

    monkeypatch.setattr(requests, "get", boom)

    logger = Mock()
    ParsingActivator.onlyfans(site, logger, url="https://onlyfans.com/api2/v2/users/adam")

    # x-bc must be rewritten to a non-zero hex token
    assert site.headers["x-bc"] != "0"
    assert len(site.headers["x-bc"]) == 40  # 20 bytes → 40 hex chars
    # time / sign headers set for target URL
    assert "time" in site.headers and site.headers["time"].isdigit()
    assert site.headers["sign"].startswith("57203:")


def test_onlyfans_fetches_init_cookie_when_missing(monkeypatch):
    """When cookie header is absent, init endpoint is called and its cookies stored."""
    site = _FakeSite(headers={"x-bc": "already_set_token", "user-id": "0"})

    import requests

    captured = {}

    def fake_get(url, headers=None, timeout=15):
        captured["url"] = url
        captured["headers"] = dict(headers or {})
        return _FakeResponse(cookies={"sess": "abc123", "csrf": "xyz"})

    monkeypatch.setattr(requests, "get", fake_get)

    logger = Mock()
    ParsingActivator.onlyfans(site, logger, url="https://onlyfans.com/api2/v2/users/adam")

    # init request made
    assert captured["url"] == site.activation["url"]
    # headers passed to init include freshly generated time/sign
    assert "time" in captured["headers"]
    assert captured["headers"]["sign"].startswith("57203:")
    # cookie header populated from response
    assert site.headers["cookie"] == "sess=abc123; csrf=xyz"


def test_onlyfans_signature_is_deterministic_for_same_time(monkeypatch):
    """Two calls with patched time produce identical signatures."""
    site1 = _FakeSite(headers={"x-bc": "token", "cookie": "c=1"})
    site2 = _FakeSite(headers={"x-bc": "token", "cookie": "c=1"})

    import time as time_mod

    fixed = 1_700_000_000.123
    monkeypatch.setattr(time_mod, "time", lambda: fixed)

    logger = Mock()
    ParsingActivator.onlyfans(site1, logger, url="https://onlyfans.com/api2/v2/users/adam")
    ParsingActivator.onlyfans(site2, logger, url="https://onlyfans.com/api2/v2/users/adam")

    assert site1.headers["time"] == site2.headers["time"]
    assert site1.headers["sign"] == site2.headers["sign"]


def test_onlyfans_sign_differs_per_path(monkeypatch):
    """Different target URLs must yield different signatures."""
    site = _FakeSite(headers={"x-bc": "token", "cookie": "c=1"})

    import time as time_mod

    monkeypatch.setattr(time_mod, "time", lambda: 1_700_000_000.0)

    logger = Mock()
    ParsingActivator.onlyfans(site, logger, url="https://onlyfans.com/api2/v2/users/adam")
    sig_adam = site.headers["sign"]

    ParsingActivator.onlyfans(site, logger, url="https://onlyfans.com/api2/v2/users/bob")
    sig_bob = site.headers["sign"]

    assert sig_adam != sig_bob

@@ -1,7 +1,22 @@
from argparse import ArgumentTypeError

from mock import Mock
import pytest

from maigret import search
from maigret.checking import (
    detect_error_page,
    extract_ids_data,
    parse_usernames,
    update_results_info,
    get_failed_sites,
    timeout_check,
    debug_response_logging,
    process_site_result,
)
from maigret.errors import CheckError
from maigret.result import MaigretCheckResult, MaigretCheckStatus
from maigret.sites import MaigretSite


def site_result_except(server, username, **kwargs):
@@ -67,3 +82,228 @@ async def test_checking_by_message_negative(httpserver, local_test_db):

    result = await search('unclaimed', site_dict=sites_dict, logger=Mock())
    assert result['Message']['status'].is_found() is True


# ---- Pure-function unit tests (no network) ----


def test_detect_error_page_site_specific():
    err = detect_error_page(
        "Please enable JavaScript to proceed",
        200,
        {"Please enable JavaScript to proceed": "Scraping protection"},
        ignore_403=False,
    )
    assert err is not None
    assert err.type == "Site-specific"
    assert err.desc == "Scraping protection"


def test_detect_error_page_403():
    err = detect_error_page("some body", 403, {}, ignore_403=False)
    assert err is not None
    assert err.type == "Access denied"


def test_detect_error_page_403_ignored():
    # XenForo engine uses ignore403 because member-not-found also returns 403
    assert detect_error_page("not found body", 403, {}, ignore_403=True) is None


def test_detect_error_page_999_linkedin():
    # LinkedIn returns 999 on bot suspicion — must NOT be reported as Server error
    assert detect_error_page("", 999, {}, ignore_403=False) is None


def test_detect_error_page_503():
    err = detect_error_page("", 503, {}, ignore_403=False)
    assert err is not None
    assert err.type == "Server"
    assert "503" in err.desc


def test_detect_error_page_ok():
    assert detect_error_page("hello world", 200, {}, ignore_403=False) is None


def test_parse_usernames_single_username():
    logger = Mock()
    result = parse_usernames({"profile_username": "alice"}, logger)
    assert result == {"alice": "username"}


def test_parse_usernames_list_of_usernames():
    logger = Mock()
    result = parse_usernames({"other_usernames": "['alice', 'bob']"}, logger)
    assert result == {"alice": "username", "bob": "username"}


def test_parse_usernames_malformed_list():
    logger = Mock()
    result = parse_usernames({"other_usernames": "not-a-list"}, logger)
    # should swallow the error and just return empty
    assert result == {}
    assert logger.warning.called


def test_parse_usernames_supported_id():
    logger = Mock()
    # pick any id key (e.g. "telegram") from socid_extractor's SUPPORTED_IDS
    from maigret.checking import SUPPORTED_IDS
    if SUPPORTED_IDS:
        key = next(iter(SUPPORTED_IDS))
        result = parse_usernames({key: "some_value"}, logger)
        assert result.get("some_value") == key


def test_update_results_info_links():
    info = {"username": "test"}
    result = update_results_info(
        info,
        {"links": "['https://example.com/a', 'https://example.com/b']", "website": "https://example.com/w"},
        {"alice": "username"},
    )
    assert result["ids_usernames"] == {"alice": "username"}
    assert "https://example.com/w" in result["ids_links"]
    assert "https://example.com/a" in result["ids_links"]


def test_update_results_info_no_website():
    info = {}
    result = update_results_info(info, {"links": "[]"}, {})
    assert result["ids_links"] == []


def test_extract_ids_data_bad_html_returns_empty():
    logger = Mock()
    # Random HTML should not raise — returns {} if nothing matches
    out = extract_ids_data("<html><body>nothing special</body></html>", logger, Mock(name="Site"))
    assert isinstance(out, dict)


def test_get_failed_sites_filters_permanent_errors():
    # Temporary errors (Request timeout, Connecting failure, etc.) are retryable → returned.
    # Permanent ones (Captcha, Access denied, etc.) and results without error → filtered out.
    good_status = MaigretCheckResult("u", "S1", "https://s1", MaigretCheckStatus.CLAIMED)
    timeout_err = MaigretCheckResult(
        "u", "S2", "https://s2", MaigretCheckStatus.UNKNOWN,
        error=CheckError("Request timeout", "slow server"),
    )
    captcha_err = MaigretCheckResult(
        "u", "S3", "https://s3", MaigretCheckStatus.UNKNOWN,
        error=CheckError("Captcha", "Cloudflare"),
    )
    results = {
        "S1": {"status": good_status},
        "S2": {"status": timeout_err},
        "S3": {"status": captcha_err},
        "S4": {},  # no status at all
    }
    failed = get_failed_sites(results)
    # Only the temporary-error site is retry-worthy
    assert failed == ["S2"]


def test_timeout_check_valid():
    assert timeout_check("2.5") == 2.5
    assert timeout_check("30") == 30.0


def test_timeout_check_invalid():
    with pytest.raises(ArgumentTypeError):
        timeout_check("abc")
    with pytest.raises(ArgumentTypeError):
        timeout_check("0")
    with pytest.raises(ArgumentTypeError):
        timeout_check("-1")


def test_debug_response_logging_writes(tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)
    debug_response_logging("https://example.com", "<html>hi</html>", 200, None)
    out = (tmp_path / "debug.log").read_text()
    assert "https://example.com" in out
    assert "200" in out


def test_debug_response_logging_no_response(tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)
    debug_response_logging("https://example.com", None, None, CheckError("Timeout"))
    out = (tmp_path / "debug.log").read_text()
    assert "No response" in out


def _make_site(data_overrides=None):
    base = {
        "url": "https://x/{username}",
        "urlMain": "https://x",
        "checkType": "status_code",
        "usernameClaimed": "a",
        "usernameUnclaimed": "b",
    }
    if data_overrides:
        base.update(data_overrides)
    return MaigretSite("TestSite", base)


def test_process_site_result_no_response_returns_info():
    site = _make_site()
    info = {"username": "a", "parsing_enabled": False, "url_user": "https://x/a"}
    out = process_site_result(None, Mock(), Mock(), info, site)
    assert out is info


def test_process_site_result_status_already_set():
    site = _make_site()
    pre = MaigretCheckResult("a", "S", "u", MaigretCheckStatus.ILLEGAL)
    info = {"username": "a", "parsing_enabled": False, "status": pre, "url_user": "u"}
    # Since status is already set, function returns without changes
    out = process_site_result(("<html/>", 200, None), Mock(), Mock(), info, site)
    assert out["status"] is pre


def test_process_site_result_status_code_claimed():
    site = _make_site({"checkType": "status_code"})
    info = {"username": "a", "parsing_enabled": False, "url_user": "https://x/a"}
    out = process_site_result(("<html/>", 200, None), Mock(), Mock(), info, site)
    assert out["status"].status == MaigretCheckStatus.CLAIMED
    assert out["http_status"] == 200


def test_process_site_result_status_code_available():
    site = _make_site({"checkType": "status_code"})
    info = {"username": "a", "parsing_enabled": False, "url_user": "https://x/a"}
    out = process_site_result(("<html/>", 404, None), Mock(), Mock(), info, site)
    assert out["status"].status == MaigretCheckStatus.AVAILABLE


def test_process_site_result_message_claimed():
    site = _make_site({
        "checkType": "message",
        "presenseStrs": ["profile-name"],
        "absenceStrs": ["not found"],
    })
    info = {"username": "a", "parsing_enabled": False, "url_user": "https://x/a"}
    out = process_site_result(("<div class='profile-name'>Alice</div>", 200, None), Mock(), Mock(), info, site)
    assert out["status"].status == MaigretCheckStatus.CLAIMED


def test_process_site_result_message_available_by_absence():
    site = _make_site({
        "checkType": "message",
        "presenseStrs": ["profile-name"],
        "absenceStrs": ["not found"],
    })
    info = {"username": "a", "parsing_enabled": False, "url_user": "https://x/a"}
    out = process_site_result(("<h1>not found</h1> profile-name too", 200, None), Mock(), Mock(), info, site)
    # absence marker wins even if presence marker also appears
    assert out["status"].status == MaigretCheckStatus.AVAILABLE


def test_process_site_result_with_error_is_unknown():
    site = _make_site({"checkType": "status_code"})
    info = {"username": "a", "parsing_enabled": False, "url_user": "https://x/a"}
    resp = ("body", 403, CheckError("Captcha", "Cloudflare"))
    out = process_site_result(resp, Mock(), Mock(), info, site)
    assert out["status"].status == MaigretCheckStatus.UNKNOWN
    assert out["status"].error is not None

@@ -49,6 +49,8 @@ DEFAULT_ARGS: Dict[str, Any] = {
    'with_domains': False,
    'xmind': False,
    'md': False,
    'ai': False,
    'ai_model': 'gpt-4o',
    'no_autoupdate': False,
    'force_update': False,
}

@@ -26,7 +26,7 @@ async def test_simple_asyncio_executor():
    executor = AsyncioSimpleExecutor(logger=logger)
    assert await executor.run(tasks) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    assert executor.execution_time > 0.2
    assert executor.execution_time < 0.3
    assert executor.execution_time < 1.0


@pytest.mark.asyncio
@@ -37,7 +37,7 @@ async def test_asyncio_progressbar_executor():
    # no guarantees for the results order
    assert sorted(await executor.run(tasks)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    assert executor.execution_time > 0.2
    assert executor.execution_time < 0.3
    assert executor.execution_time < 1.0


@pytest.mark.asyncio
@@ -48,7 +48,7 @@ async def test_asyncio_progressbar_semaphore_executor():
    # no guarantees for the results order
    assert sorted(await executor.run(tasks)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    assert executor.execution_time > 0.2
    assert executor.execution_time < 0.4
    assert executor.execution_time < 1.1


@pytest.mark.slow
@@ -59,12 +59,12 @@ async def test_asyncio_progressbar_queue_executor():
    executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=2)
    assert await executor.run(tasks) == [0, 1, 3, 2, 4, 6, 7, 5, 9, 8]
    assert executor.execution_time > 0.5
    assert executor.execution_time < 0.7
    assert executor.execution_time < 1.4

    executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=3)
    assert await executor.run(tasks) == [0, 3, 1, 4, 6, 2, 7, 9, 5, 8]
    assert executor.execution_time > 0.4
    assert executor.execution_time < 0.6
    assert executor.execution_time < 1.3

    executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=5)
    assert await executor.run(tasks) in (
@@ -72,12 +72,12 @@ async def test_asyncio_progressbar_queue_executor():
        [0, 3, 6, 1, 4, 9, 7, 2, 5, 8],
    )
    assert executor.execution_time > 0.3
    assert executor.execution_time < 0.5
    assert executor.execution_time < 1.2

    executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=10)
    assert await executor.run(tasks) == [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]
    assert executor.execution_time > 0.2
    assert executor.execution_time < 0.4
    assert executor.execution_time < 1.1


@pytest.mark.asyncio
@@ -88,13 +88,13 @@ async def test_asyncio_queue_generator_executor():
    results = [result async for result in executor.run(tasks)]  # type: ignore[arg-type]
    assert results == [0, 1, 3, 2, 4, 6, 7, 5, 9, 8]
    assert executor.execution_time > 0.5
    assert executor.execution_time < 0.6
    assert executor.execution_time < 1.3

    executor = AsyncioQueueGeneratorExecutor(logger=logger, in_parallel=3)
    results = [result async for result in executor.run(tasks)]  # type: ignore[arg-type]
    assert results == [0, 3, 1, 4, 6, 2, 7, 9, 5, 8]
    assert executor.execution_time > 0.4
    assert executor.execution_time < 0.5
    assert executor.execution_time < 1.2

    executor = AsyncioQueueGeneratorExecutor(logger=logger, in_parallel=5)
    results = [result async for result in executor.run(tasks)]  # type: ignore[arg-type]
@@ -103,10 +103,10 @@ async def test_asyncio_queue_generator_executor():
        [0, 3, 6, 1, 4, 9, 7, 2, 5, 8],
    )
    assert executor.execution_time > 0.3
    assert executor.execution_time < 0.4
    assert executor.execution_time < 1.1

    executor = AsyncioQueueGeneratorExecutor(logger=logger, in_parallel=10)
    results = [result async for result in executor.run(tasks)]  # type: ignore[arg-type]
    assert results == [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]
    assert executor.execution_time > 0.2
    assert executor.execution_time < 0.3
    assert executor.execution_time < 1.0

@@ -10,8 +10,15 @@ import xmind  # type: ignore[import-untyped]
from jinja2 import Template

from maigret.report import (
    filter_supposed_data,
    sort_report_by_data_points,
    _md_format_value,
    generate_csv_report,
    generate_txt_report,
    save_csv_report,
    save_txt_report,
    save_json_report,
    save_markdown_report,
    save_xmind_report,
    save_html_report,
    save_pdf_report,
@@ -456,3 +463,223 @@ def test_text_report_broken():
    assert brief_part in report_text
    assert 'us' in report_text
    assert 'photo' in report_text


def test_filter_supposed_data():
    data = {
        'fullname': ['Alice'],
        'gender': ['female'],
        'location': ['Berlin'],
        'age': ['30'],
        'email': ['x@y.z'],  # not allowed, must be dropped
        'bio': ['hi'],  # not allowed
    }
    result = filter_supposed_data(data)
    assert result == {
        'Fullname': 'Alice',
        'Gender': 'female',
        'Location': 'Berlin',
        'Age': '30',
    }


def test_filter_supposed_data_empty():
    assert filter_supposed_data({}) == {}
    assert filter_supposed_data({'nope': ['v']}) == {}


def test_filter_supposed_data_scalar_values():
    # Strings and scalars must be kept whole — previously v[0] on "Alice"
    # silently returned "A" instead of "Alice".
    data = {
        'fullname': 'Alice',
        'gender': 'female',
        'location': 'Berlin',
        'age': 30,
    }
    assert filter_supposed_data(data) == {
        'Fullname': 'Alice',
        'Gender': 'female',
        'Location': 'Berlin',
        'Age': 30,
    }


def test_filter_supposed_data_empty_list_yields_empty_string():
    # Edge case: list value present but empty should not crash with IndexError.
    assert filter_supposed_data({'fullname': []}) == {'Fullname': ''}


def test_filter_supposed_data_mixed_values():
    # List and scalar mixed in the same payload.
    data = {'fullname': ['Alice', 'Alicia'], 'gender': 'female'}
    assert filter_supposed_data(data) == {
        'Fullname': 'Alice',
        'Gender': 'female',
    }


def test_sort_report_by_data_points():
    status_many = MaigretCheckResult('', '', '', MaigretCheckStatus.CLAIMED)
    status_many.ids_data = {'a': 1, 'b': 2, 'c': 3}
    status_one = MaigretCheckResult('', '', '', MaigretCheckStatus.CLAIMED)
    status_one.ids_data = {'a': 1}
    status_none = MaigretCheckResult('', '', '', MaigretCheckStatus.CLAIMED)

    results = {
        'few': {'status': status_one},
        'many': {'status': status_many},
        'zero': {'status': status_none},
        'nostatus': {},
    }
    sorted_out = sort_report_by_data_points(results)
    keys = list(sorted_out.keys())
    # site with 3 ids_data fields must come first
    assert keys[0] == 'many'
    # site with 1 field next
    assert keys[1] == 'few'


def test_md_format_value_list():
    assert _md_format_value(['a', 'b', 'c']) == 'a, b, c'


def test_md_format_value_url():
    assert _md_format_value('https://example.com') == '[https://example.com](https://example.com)'
    assert _md_format_value('http://x.y') == '[http://x.y](http://x.y)'


def test_md_format_value_plain():
    assert _md_format_value('hello') == 'hello'
    assert _md_format_value(42) == '42'


def test_save_csv_report():
    filename = 'report_test.csv'
    save_csv_report(filename, 'test', EXAMPLE_RESULTS)
    with open(filename) as f:
        content = f.read()
    assert 'username,name,url_main' in content
    assert 'test,GitHub' in content


def test_save_txt_report():
    filename = 'report_test.txt'
    save_txt_report(filename, 'test', EXAMPLE_RESULTS)
    with open(filename) as f:
        content = f.read()
    assert 'https://www.github.com/test' in content
    assert 'Total Websites Username Detected On : 1' in content


def test_save_json_report_simple():
    filename = 'report_test.json'
    save_json_report(filename, 'test', EXAMPLE_RESULTS, 'simple')
    with open(filename) as f:
        data = json.load(f)
    assert 'GitHub' in data


def test_save_json_report_ndjson():
    filename = 'report_test_ndjson.json'
    save_json_report(filename, 'test', EXAMPLE_RESULTS, 'ndjson')
    with open(filename) as f:
        lines = f.readlines()
    assert len(lines) == 1
    assert json.loads(lines[0])['sitename'] == 'GitHub'


def _markdown_context_with_rich_ids():
    """Build a context with found accounts, ids_data (incl. image, url, list) to exercise all branches."""
    found_result = copy.deepcopy(GOOD_RESULT)
    found_result.tags = ['photo', 'us']
    found_result.ids_data = {
        "fullname": "Alice",
        "name": "Alice A.",
        "location": "Berlin",
        "bio": "Photographer",
        "external_url": "https://example.com/profile",
        "image": "https://example.com/avatar.png",  # must be skipped
        "aliases": ["alice", "alicea"],  # list value
        "last_online": "2024-01-02 10:00:00",
    }
    data = {
        'Github': {
            'username': 'alice',
            'parsing_enabled': True,
            'url_main': 'https://github.com/',
            'url_user': 'https://github.com/alice',
            'status': found_result,
            'http_status': 200,
            'is_similar': False,
            'rank': 1,
            'site': MaigretSite('Github', {}),
            'found': True,
            'ids_data': found_result.ids_data,
        },
        'Similar': {
            'username': 'alice',
            'url_user': 'https://other.com/alice',
            'is_similar': True,
            'found': True,
            'status': copy.deepcopy(GOOD_RESULT),
        },
}
|
||||
return {
|
||||
'username': 'alice',
|
||||
'generated_at': '2024-01-02 10:00',
|
||||
'brief': 'Search returned 1 account',
|
||||
'countries_tuple_list': [('us', 1)],
|
||||
'interests_tuple_list': [('photo', 1)],
|
||||
'first_seen': '2023-01-01',
|
||||
'results': [('alice', 'username', data)],
|
||||
}
|
||||
|
||||
|
||||
def test_save_markdown_report():
|
||||
filename = 'report_test.md'
|
||||
context = _markdown_context_with_rich_ids()
|
||||
save_markdown_report(filename, context, run_info={'sites_count': 100, 'flags': '--top-sites 100'})
|
||||
with open(filename) as f:
|
||||
content = f.read()
|
||||
assert '# Report by searching on username "alice"' in content
|
||||
assert '## Summary' in content
|
||||
assert '## Accounts found' in content
|
||||
assert '### Github' in content
|
||||
assert '[https://github.com/alice](https://github.com/alice)' in content
|
||||
assert 'Ethical use' in content
|
||||
assert '100 sites checked' in content
|
||||
# image field must NOT appear in per-site listing
|
||||
assert 'avatar.png' not in content
|
||||
# list field rendered with join
|
||||
assert 'alice, alicea' in content
|
||||
# external url formatted as markdown link
|
||||
assert '[https://example.com/profile](https://example.com/profile)' in content
|
||||
|
||||
|
||||
def test_save_markdown_report_minimal_context():
|
||||
"""No run_info, no first_seen — exercise the fallback branches."""
|
||||
filename = 'report_test_min.md'
|
||||
context = {
|
||||
'username': 'bob',
|
||||
'brief': 'nothing found',
|
||||
'results': [],
|
||||
}
|
||||
save_markdown_report(filename, context)
|
||||
with open(filename) as f:
|
||||
content = f.read()
|
||||
assert '# Report by searching on username "bob"' in content
|
||||
assert '## Summary' in content
|
||||
|
||||
|
||||
def test_get_plaintext_report_minimal():
|
||||
"""Minimal context without countries/interests."""
|
||||
context = {
|
||||
'brief': 'Nothing to report.',
|
||||
'interests_tuple_list': [],
|
||||
'countries_tuple_list': [],
|
||||
}
|
||||
out = get_plaintext_report(context)
|
||||
assert 'Nothing to report.' in out
|
||||
assert 'Countries:' not in out
|
||||
assert 'Interests' not in out
|
||||
|
||||
Executable
+5
@@ -0,0 +1,5 @@
#!/bin/bash
set -e

sudo apt-get update && sudo apt-get install -y libcairo2-dev pkg-config
pip install .