Conversation
Documents the repository structure, stack, essential commands, code architecture, language/style conventions, and security rules to guide AI assistants working on the project. https://claude.ai/code/session_01XNKttuRDZ8bXMTm7ApjRDu
Creates docs/UX_FLOW.md with diagrams (Mermaid + ASCII) that map the entire user journey: initial setup, authentication, execution modes (dry-run/--yes/default), routing by entity type, error handling (FloodWait/RPC), and session lifecycle. https://claude.ai/code/session_01XNKttuRDZ8bXMTm7ApjRDu
- Destructive-action warning highlighted in a blockquote
- Detailed features with the APIs used
- Step-by-step installation (Linux/macOS/Windows)
- Usage examples with expected dry-run output
- Flag reference table (--dry-run, --yes, --limit)
- Environment variable table
- "How it works" section with an ASCII flow diagram
- Error-handling table
- First-run guide (Telethon authentication)
- Updated project structure
- Dependency table with links
- FAQ with 5 frequently asked questions (collapsible)

https://claude.ai/code/session_01XNKttuRDZ8bXMTm7ApjRDu
Adds support for QR code login as an alternative to phone/code login. The QR code is rendered directly in the terminal as ASCII.

Changes:
- New --qr-login flag to enable QR code login
- display_qr_code() function to render the QR code as ASCII in the terminal
- login_with_qr() function to manage the QR login flow
- Automatic QR code regeneration on expiry (10s timeout)
- Handling of SessionPasswordNeededError for accounts with 2FA
- Separate session file (_qr.session) so the main session is untouched
- Added dependency qrcode>=8.0,<9.0

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
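The separate-session naming above can be sketched as follows. The helper name `qr_session_name` is hypothetical, and `str.removesuffix` requires Python 3.9+:

```python
def qr_session_name(session_name: str) -> str:
    """Derive the QR-login session file name from the main session name.

    removesuffix strips only a trailing ".session", never an occurrence
    in the middle of the name, so paths like "my.session.backup" survive.
    """
    base = session_name.removesuffix(".session")
    return f"{base}_qr.session"
```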
📝 Walkthrough

PR analysis - CleanTelegram

Listen up, fool! See this PR? It's the complete creation of a new Python project called CleanTelegram, you hear? It manages cleanup of Telegram histories and leaves groups and channels, but with style! It ships with tests, documentation, linting configuration, the whole package. I pity the fool who doesn't appreciate a structure like this!

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as __main__.py
    participant Auth as Auth Flow
    participant Client as TelegramClient
    participant Dialog as Dialog Handler
    participant Action as Action (delete/leave)
    User->>CLI: Run clean-telegram [--dry-run] [--yes]
    CLI->>CLI: Load .env, validate API_ID/API_HASH
    alt QR Login
        CLI->>Auth: display_qr_code(url)
        Auth->>User: Show QR in terminal
        CLI->>Client: login_with_qr()
        Client->>Auth: Wait for QR scan/2FA
    else Session Login
        CLI->>Client: TelegramClient(session_name)
    end
    Client->>CLI: ✓ Authenticated
    alt Not dry-run
        CLI->>User: Prompt "APAGAR TUDO" confirmation (unless --yes)
        User->>CLI: Confirm or cancel
    end
    CLI->>Client: iter_dialogs(limit=limit)
    loop For each dialog
        Client->>Dialog: Get entity (Channel/Chat/User)
        Dialog->>Action: dispatch by type
        alt Channel
            Action->>Client: LeaveChannelRequest
        else Chat
            Action->>Client: DeleteChatUserRequest (or fallback delete_dialog)
        else User
            Action->>Client: DeleteHistoryRequest
        end
        alt FloodWaitError
            Action->>CLI: Retry (max 5 attempts)
        else RPCError/Exception
            Action->>CLI: Log error, continue next dialog
        end
    end
    CLI->>User: Display summary & exit
```
```mermaid
sequenceDiagram
    participant Test as Test Suite
    participant Mock as Mock TelegramClient
    participant Client as client.py
    participant Utils as utils.py
    Test->>Utils: env_int("API_ID")
    Utils->>Test: return int (or SystemExit)
    Test->>Utils: resolve_session_name(name)
    Utils->>Utils: Check absolute/relative/simple
    Utils->>Test: return resolved path
    Test->>Mock: Create AsyncMock TelegramClient
    Test->>Client: process_dialog(mock_client, entity, ...)
    Client->>Mock: Send request (DeleteHistoryRequest/LeaveChannelRequest)
    Mock->>Client: Return response (or raise error)
    Client->>Test: Return bool success status
    Test->>Client: Verify error handling (FloodWait/RPC)
    Client->>Test: Return False on error
```
Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Pay attention, partner! There are ~2.5k lines of new code spread across multiple heterogeneous files (async CLI, Telegram API integration, utilities, complex test fixtures), a dense retry/error-handling flow, and an extensive test suite. All of that together demands careful reasoning, area by area. I pity the reviewer who doesn't make time for it!

Possibly Related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 30
🤖 Fix all issues with AI agents
In @.claude/commands/lint.md:
- Around line 88-103: The example documentation is inconsistent: the .flake8 block and the [tool.black] section use line-length = 88 while the project uses 100; update the examples to use line-length = 100 (adjusting both the example .flake8 file and the [tool.black] key in the example pyproject.toml) so the values reflect the project's actual setting and avoid confusion when the configuration is copied.
In @.claude/settings.json:
- Around line 86-104: The hook runs flake8/mypy before checking whether the tool exists; adjust both commands to check "command -v flake8" / "command -v mypy" before running the tool (do the availability check first) and only then execute RESULT=$(flake8 "$FILE" 2>&1) or RESULT=$(mypy "$FILE" 2>&1) when FILE ends in .py; apply the same inversion to the hook string that references $FILE and the flake8/mypy commands to avoid command-not-found errors.
In @.mcp.json:
- Around line 24-31: Remove secrets/placeholders from the committed .mcp.json by
adding ".mcp.json" to .gitignore, renaming the committed file to
".mcp.json.example" as a template, and update the template to show only
reference names (not real values) for DATABASE_URL and GOOGLE_MAPS_API_KEY; then
change any code/commands that read this config to load these values from
environment variables (referencing the DATABASE_URL and GOOGLE_MAPS_API_KEY
keys) so no secret values are stored in the repo.
- Around line 1-78: The .mcp.json change is unrelated to the QR code login work
(it adds an unrelated mcpServers config with entries like "python-sdk",
"docker", "jupyter", "memory-bank", etc.); remove this file from the current
branch/PR by reverting or unstaging the .mcp.json change so the PR contains only
QR-code-login changes, or alternatively move these mcpServers entries into a
separate commit/branch and open a dedicated PR for that configuration change.
- Line 78: The .mcp.json file is missing a newline at the end of the file; open .mcp.json and ensure the last character of the file is a '\n' (add a final blank line) to comply with the POSIX convention and avoid lint diffs/warnings.
In `@CLAUDE.md`:
- Around line 96-114: Add blank lines before and after each Markdown heading to
satisfy markdownlint MD022: insert an empty line above and below the "Core
Technologies", "Development Tools", and "Code Quality Tools" headings in
CLAUDE.md so each heading is separated from surrounding content (ensuring one
empty line before the heading and one empty line after the heading).
- Around line 199-214: Update the git clone URL in the bash setup block: replace
the incorrect repository owner in the "git clone
https://github.com/gabrielramos/CleanTelegram" command with the correct owner
"prof-ramos" so the line reads "git clone
https://github.com/prof-ramos/CleanTelegram"; locate the command in the
CLAUDE.md bash snippet (the git clone line) and update only the owner portion to
avoid breaking the rest of the setup instructions.
- Around line 117-137: The project tree in CLAUDE.md omits the existing
tests/test_main.py which covers the __main__ module; update the displayed
structure to include a line for tests/test_main.py (e.g., add "├── test_main.py
# Tests for __main__") under the tests/ section so the README accurately
reflects the repository contents and the test targeting the __main__ entrypoint
is documented.
In `@docs/UX_FLOW.md`:
- Around line 1-3: Add a new section to the UX_FLOW.md that documents the CLI
flag --qr-login and the QR authentication flow: describe how the app generates a
QR code, displays it to the user, and regenerates the QR payload automatically
every 10 seconds until scanned; document handling of 2FA by detecting
SessionPasswordNeededError and prompting for the account password; and note that
the QR-based login uses a separate session file named _qr.session to isolate QR
auth state from normal sessions. Include brief step-by-step user actions and
expected app behavior for each of these items.
- Around line 112-119: The mermaid flowchart in "## 4. Fluxo Principal de
Execução" is missing the new execution flag --qr-login; update the diagram under
the A[python -m clean_telegram] node to add a branch/decision for the --qr-login
flag (e.g., add a decision node for flags or a node C[--qr-login?] that leads to
the QR login flow) so the flowchart reflects both normal env-check path
(B{Variáveis .env ok?}) and the new QR-login execution path; ensure the label
uses the exact flag string "--qr-login" so it clearly appears in the diagram.
In `@pyproject.toml`:
- Around line 133-143: Increase the test coverage threshold by updating the
fail_under setting (currently fail_under = 55) to a stricter value (suggest 70
or 75) in the pyproject.toml so CI fails when coverage falls below the agreed
standard; locate the fail_under entry and change its numeric value accordingly,
keeping the surrounding coverage config (show_missing, skip_covered,
exclude_lines) intact.
- Around line 27-31: The dependencies in pyproject.toml are too permissive; replace the ranges with pinned versions (for example telethon==1.42.0, python-dotenv==1.2.1, qrcode==8.2) in the dependencies block, generate and commit a lock file (for example poetry.lock or requirements.txt/requirements.lock) for reproducibility, run a security scan of the transitive dependencies (pay attention to Pillow, which ships with qrcode), and reject/block typosquatted packages (look for telethon2, fastertelethon, etc.), ensuring telethon is installed only from the official repository.
In `@README.md`:
- Around line 68-70: Update the README usage examples to run the package as a
module instead of a script: replace occurrences of "python clean_telegram.py
..." with "python -m clean_telegram ..." (since the project uses a src/ layout
with __main__.py). Ensure all examples (the ones currently showing "--dry-run"
and other flags) use the module invocation and keep the same flags and
arguments.
- Around line 112-119: Add the missing `--qr-login` flag to the "Referência de
flags" table in the README by inserting a new row for `--qr-login` with a
concise Portuguese description (e.g., "Ativa login via QR code") and its default
state (e.g., "Desativado"); update the table under the "Referência de flags"
heading so the new `--qr-login` entry appears alongside `--dry-run`, `--yes`,
`--limit N`, and `--help`.
- Around line 186-191: The dependencies table in README.md is missing the qrcode
package; update the table to add a row for [qrcode] with the production version
used in pyproject.toml and requirements.txt so the documentation matches the
declared dependencies; ensure the new row follows the existing table format
(Package | Version | Purpose) and describe its purpose (QR code generation) to
match the other entries.
- Around line 172-184: Update the "Estrutura do projeto" section to reflect the repository's actual layout: replace the single clean_telegram.py entry with the src/clean_telegram structure, including __main__.py as the entry point and the client.py and utils.py modules; remove/adjust any reference to nonexistent files (clean_telegram.py) and list the real files (for example src/clean_telegram/__main__.py, src/clean_telegram/client.py, src/clean_telegram/utils.py), keeping the other entries (.env.example, requirements.txt, docs/, etc.) consistent with the repository.
In `@requirements-dev.txt`:
- Around line 1-16: The dev requirements currently use open-ended constraints
(e.g., black, isort, flake8, mypy, pytest, pytest-asyncio, pytest-cov,
types-python-dotenv) which can pull breaking changes; update each entry to
include a conservative upper bound (for example change "pytest-asyncio>=0.23.0"
to "pytest-asyncio>=0.23.0,<1.0.0") or mirror the upper-bound style used in
production requirements; apply the same pattern to black, isort, flake8, mypy,
pytest, pytest-cov and types-python-dotenv so CI uses tested ranges and run the
test suite/lockfile update after making the changes.
In `@src/clean_telegram/__init__.py`:
- Around line 1-14: The __all__ list is not alphabetically ordered; update the
module-level export list so names are sorted lexicographically (for example:
env_int, process_dialog, safe_sleep) by editing the __all__ assignment that
currently contains "process_dialog", "env_int", "safe_sleep" to the correctly
sorted order; ensure the exported symbols (process_dialog, env_int, safe_sleep)
remain unchanged and run the linter to verify the warning is resolved.
In `@src/clean_telegram/__main__.py`:
- Around line 113-116: Capture the original SystemExit exception and chain it
when raising the new SystemExit so the original context isn't lost: change the
except block that handles env_int("API_ID") to use "except SystemExit as e" and
then "raise SystemExit('Faltou API_ID no .env ou valor inválido') from e"
(reference: env_int and api_id in __main__.py).
- Around line 125-128: The current args.qr_login branch uses
session_name.replace(".session", "") which can remove occurrences in the middle
of a path; instead check the suffix and strip only the trailing ".session"
before appending "_qr.session": in the args.qr_login block, test
session_name.endswith(".session") and if true set base_name to session_name
without that suffix (or use session_name.removesuffix(".session") on Python
3.9+), otherwise keep session_name as base_name, then set session_name =
f"{base_name}_qr.session".
- Around line 37-75: The login_with_qr loop can run forever if the user never
scans the QR; add a global timeout or max-attempts guard to break the while not
qr_login.is_logged loop. Modify login_with_qr to track attempts (e.g.,
max_attempts = N) or record a start time and bail out when exceeded,
incrementing attempts each iteration where display_qr_code(qr_login.url) and
await asyncio.wait_for(qr_login.wait(), timeout=10) are called; on reaching the
limit return False and log a clear message so qr_login, qr_login.wait(),
display_qr_code, and the asyncio.wait_for timeout handling remain intact.
- Around line 169-204: The retry loop is dead because process_dialog (in
client.py) catches FloodWaitError and returns False, and __main__.py ignores
that return and increments processed unconditionally; fix by either (A) making
process_dialog re-raise FloodWaitError instead of swallowing it so the existing
try/except FloodWaitError in the async loop will work, or (B) changing the loop
in __main__.py to check the boolean return of process_dialog (only increment
processed when it returns True), treat False as a non-success and handle
retries/skip there (do not increment processed, and if you need FloodWait
behavior propagate a specific signal or exception from process_dialog). Ensure
references: process_dialog, FloodWaitError, processed, and the retry while True
loop in __main__.py are updated consistently.
- Around line 53-63: The loop shows the same expired QR because after catching
asyncio.TimeoutError you never refresh the QR; update the exception handler to
call await qr_login.recreate() before continuing so
display_qr_code(qr_login.url) receives a new valid URL — i.e., in the except
block for asyncio.TimeoutError, invoke await qr_login.recreate(), log
appropriately, then continue; this references qr_login.is_logged,
display_qr_code, qr_login.wait(), asyncio.TimeoutError and qr_login.recreate().
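The bounded wait plus QR regeneration described in the two prompts above can be sketched as follows. This assumes Telethon's QR-login object exposes `.url`, `.wait()`, and `.recreate()` (as its documented API does); the helper name and the `display` callback are hypothetical:

```python
import asyncio

async def wait_for_qr_scan(qr_login, display, max_attempts=6, per_wait=10.0):
    """Bounded QR wait loop: regenerate on expiry, give up after max_attempts.

    `qr_login` stands in for Telethon's QR-login object; `display` renders
    the QR payload in the terminal (e.g. via display_qr_code).
    """
    for attempt in range(1, max_attempts + 1):
        display(qr_login.url)
        try:
            await asyncio.wait_for(qr_login.wait(), timeout=per_wait)
            return True  # scanned and logged in
        except asyncio.TimeoutError:
            # QR payload expired: refresh it BEFORE showing it again,
            # otherwise the same dead QR is re-displayed forever
            await qr_login.recreate()
    return False  # user never scanned; caller should log and abort
```

The `for` loop replaces the unbounded `while not qr_login.is_logged`, so a user who walks away no longer hangs the process.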
In `@src/clean_telegram/client.py`:
- Around line 80-124: The current process_dialog function catches FloodWaitError
and returns False, which prevents the caller's retry/backoff logic from running;
update the FloodWaitError handler in process_dialog to log the error (using
logger.error with title and e.seconds) and then re-raise the FloodWaitError
(raise) instead of returning False so the caller can handle retries. Keep the
other exception handlers unchanged.
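The caller-side half of this fix (option A from the `__main__.py` prompt) can be sketched like this. `FloodWaitError` here is a local stand-in for `telethon.errors.FloodWaitError`, and `process_with_retry` is a hypothetical helper, not the project's actual code:

```python
import asyncio

class FloodWaitError(Exception):
    """Stand-in for telethon.errors.FloodWaitError (carries .seconds)."""
    def __init__(self, seconds: int = 0):
        super().__init__(f"flood wait {seconds}s")
        self.seconds = seconds

async def process_with_retry(process_dialog, max_retries: int = 5) -> bool:
    """Retry loop that only works if process_dialog RE-RAISES FloodWaitError.

    If process_dialog swallows the error and returns False, this except
    branch is dead code, which is exactly the bug the review flags.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return await process_dialog()
        except FloodWaitError as e:
            if attempt == max_retries:
                return False  # give up; do NOT count as processed
            await asyncio.sleep(e.seconds)  # back off as Telegram requested
    return False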
In `@src/clean_telegram/utils.py`:
- Around line 31-35: In the block that converts int(v), capture the original exception and chain it when re-raising: change "except ValueError:" to "except ValueError as e:", keep the logger.error(...) call, and replace "raise SystemExit(f\"{name} deve ser um número inteiro válido\")" with "raise SystemExit(f\"{name} deve ser um número inteiro válido\") from e" to preserve the original traceback; references: v, name, logger and SystemExit.
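A minimal sketch of the chained re-raise (the error-message string is the one quoted in the prompt; the rest of the function body is an assumption):

```python
import logging
import os

logger = logging.getLogger(__name__)

def env_int(name: str) -> int:
    """Read an integer environment variable; exit with a chained error otherwise."""
    v = os.environ.get(name, "")
    try:
        return int(v)
    except ValueError as e:
        logger.error("Invalid value for %s: %r", name, v)
        # "from e" preserves the original ValueError as __cause__,
        # so the full traceback survives the SystemExit
        raise SystemExit(f"{name} deve ser um número inteiro válido") from e
```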
In `@tests/conftest.py`:
- Around line 34-36: The helper async context manager `_client_context` is dead
code because `mock_client` configures `__aenter__`/`__aexit__` directly; remove
the unused `_client_context` definition to clean up the test file, or
alternatively replace the explicit `__aenter__/__aexit__` setup in the
`mock_client` fixture to use `_client_context` if you intended to reuse
it—locate `_client_context` and the `mock_client` fixture in tests/conftest.py
(the symbols `_client_context` and `mock_client`) and either delete
`_client_context` or refactor `mock_client` to return/patch via
`_client_context`.
In `@tests/test_client.py`:
- Around line 87-243: Add unit tests covering the QR-login features: create
tests for display_qr_code() to verify ASCII QR generation and console output,
tests for login_with_qr() covering successful login, QR expiry and regeneration
flow (simulate expired token and ensure a new QR is emitted), handling of
SessionPasswordNeededError to trigger 2FA flow, and that a separate session file
(_qr.session) is created; use mocks for the Telethon client/session, simulate
side_effects (expiry, SessionPasswordNeededError) and assert calls to
display_qr_code, session creation, and any fallback/regeneration logic in
login_with_qr.
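One of the requested tests can be sketched as below. To keep the example self-contained, `display_qr_code` takes an injected `qr_factory` standing in for `qrcode.QRCode`, so the test needs no real qrcode install; the project's actual function presumably constructs `qrcode.QRCode` directly, and all names here are illustrative:

```python
from unittest.mock import MagicMock

def display_qr_code(url: str, qr_factory) -> None:
    """Render a login URL as ASCII (sketch; qr_factory stands in for qrcode.QRCode)."""
    qr = qr_factory(border=1)
    qr.add_data(url)
    qr.make(fit=True)
    qr.print_ascii(invert=True)

def test_display_qr_code_prints_ascii():
    factory = MagicMock()
    display_qr_code("tg://login?token=abc", qr_factory=factory)
    qr = factory.return_value
    # the URL must reach the QR payload, and the ASCII renderer must run
    qr.add_data.assert_called_once_with("tg://login?token=abc")
    qr.print_ascii.assert_called_once_with(invert=True)
```

In the real suite the same effect comes from `patch("qrcode.QRCode")` instead of injection.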
In `@tests/test_main.py`:
- Around line 474-510: The test currently uses mock_process_dialog_failing to
always raise FloodWaitError but lacks assertions and assumes behavior that will
change once FloodWaitError handling in client.py is fixed; update the test to
(1) keep using mock_process_dialog_failing as the patched target for
process_dialog, (2) after awaiting __main__.main(), assert that
mock_process_dialog_failing was called exactly the configured max_retries (e.g.,
5) to verify retry logic, and (3) ensure the patch targets
"clean_telegram.__main__.process_dialog" and references FloodWaitError,
process_dialog, mock_process_dialog_failing and __main__.main so the test
reflects the real retry behavior after fixing client.py.
- Around line 1-12: Add unit tests for display_qr_code and login_with_qr: for
display_qr_code patch qrcode.QRCode to return a mock with a print_ascii method
and assert QRCode was instantiated with the provided data and print_ascii was
called; for login_with_qr write async tests using AsyncMock/patch to simulate
the client interactions—one test where login succeeds (mock the awaited call to
return a valid user/session and assert the function returns success), one where
a timeout or FloodWait-like exception is raised (assert the function handles or
raises as expected), one where telethon.errors.SessionPasswordNeededError is
raised (assert the function raises/handles that specific exception), and one
generic error path (raise a generic Exception and assert error handling); use
pytest.mark.asyncio or pytest-asyncio fixtures, patch the specific client
methods used inside login_with_qr (reference the login_with_qr function name and
any internal client call names it invokes) and assert proper behavior for each
scenario.
- Around line 431-472: The test test_main_flood_wait_retry is asserting too
weakly and should verify the retry actually happened: update the final assertion
to assert call_count["count"] == 2 so it requires one failing call
(FloodWaitError) and one successful retry; keep the existing mock_process_dialog
(which raises FloodWaitError on the first call and returns True on the second)
and the patch of clean_telegram.__main__.process_dialog so the test exercises
the retry path in __main__.main and validates that process_dialog was invoked
twice.
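The strengthened assertion can be sketched with an `AsyncMock` whose `side_effect` fails once and then succeeds. `FloodWaitError` and `retry_loop` are local stand-ins for the Telethon error and the retry loop in `__main__.main`:

```python
import asyncio
from unittest.mock import AsyncMock

class FloodWaitError(Exception):  # stand-in for telethon.errors.FloodWaitError
    pass

async def retry_loop(process_dialog, max_retries: int = 5):
    # minimal stand-in for the retry loop in __main__.main
    for _ in range(max_retries):
        try:
            return await process_dialog()
        except FloodWaitError:
            continue
    return False

def test_flood_wait_retry_runs_twice():
    mock = AsyncMock(side_effect=[FloodWaitError(), True])
    assert asyncio.run(retry_loop(mock)) is True
    assert mock.await_count == 2  # one FloodWait failure + one successful retry
```

Asserting `await_count == 2` is what makes the test fail if the retry path silently stops working.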
### .flake8

```ini
[flake8]
max-line-length = 88
exclude = .git,__pycache__,venv
ignore = E203,W503
```

### pyproject.toml

```toml
[tool.black]
line-length = 88

[tool.isort]
profile = "black"
```
Inconsistency in the example configuration, fool!

Listen up! The configuration examples show line-length = 88 (Black's default), but the project's pyproject.toml uses line-length = 100. I pity the fool who copies this example and thinks it's right! Fix the documentation to match the project, got it?
🔧 Proposed fix

```diff
 [tool.black]
-line-length = 88
+line-length = 100

 [tool.isort]
 profile = "black"
+line_length = 100
```

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 88-88: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 89-89: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 96-96: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 97-97: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In @.claude/commands/lint.md around lines 88 - 103, The example documentation is
inconsistent: the .flake8 block and the [tool.black] section use line-length = 88
while the project uses 100; update the examples to use line-length = 100
(adjusting both the example .flake8 file and the [tool.black] key in the example
pyproject.toml) so the values reflect the project's actual setting and avoid
confusion when the configuration is copied.
```json
{
  "matcher": "Write|Edit|MultiEdit",
  "hooks": [
    {
      "type": "command",
      "command": "FILE=$(echo $STDIN_JSON | jq -r '.tool_input.file_path // \"\"'); if [[ \"$FILE\" =~ \\.py$ ]]; then RESULT=$(flake8 \"$FILE\" 2>&1); if [ $? -ne 0 ] && command -v flake8 >/dev/null 2>&1; then echo \"Flake8 linting issues found: $RESULT\" >&2; exit 2; fi; fi",
      "timeout": 30
    }
  ]
},
{
  "matcher": "Write|Edit|MultiEdit",
  "hooks": [
    {
      "type": "command",
      "command": "FILE=$(echo $STDIN_JSON | jq -r '.tool_input.file_path // \"\"'); if [[ \"$FILE\" =~ \\.py$ ]]; then RESULT=$(mypy \"$FILE\" 2>&1); if [ $? -ne 0 ] && command -v mypy >/dev/null 2>&1; then echo \"MyPy type checking issues found: $RESULT\" >&2; exit 2; fi; fi",
      "timeout": 30
    }
  ]
}
```
🧹 Nitpick | 🔵 Trivial
Inverted logic in the Flake8 and MyPy checks, pal!

Listen up, fool! On lines 91 and 101, the code first runs flake8/mypy and only afterwards checks whether the tool is installed. That means that if the tool doesn't exist, the shell prints an ugly error before discovering it didn't need to run at all. Check for existence before executing, got it?
♻️ Proposed fix for flake8 (line 91)

```diff
-"command": "FILE=$(echo $STDIN_JSON | jq -r '.tool_input.file_path // \"\"'); if [[ \"$FILE\" =~ \\.py$ ]]; then RESULT=$(flake8 \"$FILE\" 2>&1); if [ $? -ne 0 ] && command -v flake8 >/dev/null 2>&1; then echo \"Flake8 linting issues found: $RESULT\" >&2; exit 2; fi; fi",
+"command": "FILE=$(echo $STDIN_JSON | jq -r '.tool_input.file_path // \"\"'); if [[ \"$FILE\" =~ \\.py$ ]] && command -v flake8 >/dev/null 2>&1; then RESULT=$(flake8 \"$FILE\" 2>&1); if [ $? -ne 0 ]; then echo \"Flake8 linting issues found: $RESULT\" >&2; exit 2; fi; fi",
```

The same fix applies to MyPy on line 101.
🤖 Prompt for AI Agents
In @.claude/settings.json around lines 86 - 104, The hook runs flake8/mypy
before checking whether the tool exists; adjust both commands to check
"command -v flake8" / "command -v mypy" before running the tool (do the
availability check first) and only then execute RESULT=$(flake8 "$FILE" 2>&1) or
RESULT=$(mypy "$FILE" 2>&1) when FILE ends in .py; apply the same inversion to
the hook string that references $FILE and the flake8/mypy commands to avoid
command-not-found errors.
```json
{
  "mcpServers": {
    "python-sdk": {
      "name": "Python SDK",
      "description": "Official Python SDK with FastMCP for rapid MCP development",
      "command": "python",
      "args": ["-m", "python_sdk.server"],
      "env": {}
    },
    "docker": {
      "name": "Docker MCP",
      "description": "Isolated code execution via Docker containers",
      "command": "python",
      "args": ["-m", "mcp_server_docker"],
      "env": {}
    },
    "jupyter": {
      "name": "Jupyter MCP",
      "description": "MCP integration for interactive Jupyter notebooks",
      "command": "python",
      "args": ["-m", "server_jupyter"],
      "env": {}
    },
    "postgresql": {
      "name": "PostgreSQL MCP",
      "description": "Natural language queries to PostgreSQL databases",
      "command": "python",
      "args": ["-m", "server_postgres"],
      "env": {
        "DATABASE_URL": ""
      }
    },
    "opik": {
      "name": "Opik MCP",
      "description": "Observability for LLM apps with tracing and metrics",
      "command": "python",
      "args": ["-m", "opik_mcp"],
      "env": {}
    },
    "memory-bank": {
      "name": "Memory Bank MCP",
      "description": "Centralized memory system for AI agents",
      "command": "server-memory",
      "args": [],
      "env": {}
    },
    "sequential-thinking": {
      "name": "Sequential Thinking MCP",
      "description": "Helps LLMs decompose complex tasks into logical steps",
      "command": "code-reasoning",
      "args": [],
      "env": {}
    },
    "brave-search": {
      "name": "Brave Search MCP",
      "description": "Privacy-focused web search tool",
      "command": "server-brave-search",
      "args": [],
      "env": {}
    },
    "google-maps": {
      "name": "Google Maps MCP",
      "description": "Integrates Google Maps for geolocation and directions",
      "command": "server-google-maps",
      "args": [],
      "env": {
        "GOOGLE_MAPS_API_KEY": ""
      }
    },
    "deep-graph": {
      "name": "Deep Graph MCP (Code Graph)",
      "description": "Transforms source code into semantic graphs via DeepGraph",
      "command": "mcp-code-graph",
      "args": [],
      "env": {}
    }
  }
}
```

No newline at end of file
🧹 Nitpick | 🔵 Trivial
Hey, pal! This file has nothing to do with the QR code login PR!

I pity the fool who mixes unrelated files into the same PR! This .mcp.json configures MCP servers for development tooling, but the goal of this PR is to add QR code login. It should live in a separate commit or PR to keep the scope clean and make review easier, you dig?
🤖 Prompt for AI Agents
In @.mcp.json around lines 1 - 78, The .mcp.json change is unrelated to the QR
code login work (it adds an unrelated mcpServers config with entries like
"python-sdk", "docker", "jupyter", "memory-bank", etc.); remove this file from
the current branch/PR by reverting or unstaging the .mcp.json change so the PR
contains only QR-code-login changes, or alternatively move these mcpServers
entries into a separate commit/branch and open a dedicated PR for that
configuration change.
```json
"postgresql": {
  "name": "PostgreSQL MCP",
  "description": "Natural language queries to PostgreSQL databases",
  "command": "python",
  "args": ["-m", "server_postgres"],
  "env": {
    "DATABASE_URL": ""
  }
}
```
I pity the fool who commits plain-text secrets to the repository!

The DATABASE_URL (line 30) and GOOGLE_MAPS_API_KEY (line 67) variables carry empty values as placeholders. That's a security risk, pal — someone can fill in real values and commit them by accident!

Recommendations:

- Add `.mcp.json` to `.gitignore` and use a `.mcp.json.example` as a template.
- Or reference system environment variables instead of putting the values directly in the file.

🛡️ Suggestion: use .gitignore + template

Add to `.gitignore`:

```diff
+.mcp.json
```

Rename the current file to `.mcp.json.example` to serve as a reference.
Also applies to: 61-68
🤖 Prompt for AI Agents
In @.mcp.json around lines 24 - 31, Remove secrets/placeholders from the
committed .mcp.json by adding ".mcp.json" to .gitignore, renaming the committed
file to ".mcp.json.example" as a template, and update the template to show only
reference names (not real values) for DATABASE_URL and GOOGLE_MAPS_API_KEY; then
change any code/commands that read this config to load these values from
environment variables (referencing the DATABASE_URL and GOOGLE_MAPS_API_KEY
keys) so no secret values are stored in the repo.
```json
      "env": {}
    }
  }
}
```

No newline at end of file
Missing newline at the end of the file, you knucklehead!

POSIX files must end with a trailing newline. Add a \n at the end, you dig?
🔧 Proposed fix

```diff
 }
-}
+}
+
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
  }
}
```
🤖 Prompt for AI Agents
In @.mcp.json at line 78, The .mcp.json file is missing a newline at the end of
the file; open .mcp.json and ensure the last character of the file is a '\n'
(add a final blank line) to comply with the POSIX convention and avoid lint
diffs/warnings.
```python
@asynccontextmanager
async def _client_context(*args, **kwargs):
    yield client
```
🧹 Nitpick | 🔵 Trivial
Dead code, partner! _client_context is never used!

The _client_context function is defined but never referenced. The mock_client on lines 38-42 uses __aenter__/__aexit__ directly, making _client_context unnecessary.
♻️ Remove dead code

```diff
 client.return_value = client

-@asynccontextmanager
-async def _client_context(*args, **kwargs):
-    yield client
-
 mock_client = MagicMock()
```
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 35-35: Missing return type annotation for private function _client_context
(ANN202)
[warning] 35-35: Missing type annotation for *args
(ANN002)
[warning] 35-35: Unused function argument: args
(ARG001)
[warning] 35-35: Missing type annotation for **kwargs
(ANN003)
[warning] 35-35: Unused function argument: kwargs
(ARG001)
🤖 Prompt for AI Agents
In `@tests/conftest.py` around lines 34 - 36, The helper async context manager
`_client_context` is dead code because `mock_client` configures
`__aenter__`/`__aexit__` directly; remove the unused `_client_context`
definition to clean up the test file, or alternatively replace the explicit
`__aenter__/__aexit__` setup in the `mock_client` fixture to use
`_client_context` if you intended to reuse it—locate `_client_context` and the
`mock_client` fixture in tests/conftest.py (the symbols `_client_context` and
`mock_client`) and either delete `_client_context` or refactor `mock_client` to
return/patch via `_client_context`.
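For contrast, a minimal sketch of the direct `__aenter__`/`__aexit__` setup that makes the helper redundant (the fixture wiring below is assumed, not copied from conftest.py):

```python
import asyncio
from unittest.mock import MagicMock

def make_mock_client() -> MagicMock:
    """Build an async-context-manager mock without a wrapper generator.

    MagicMock already exposes __aenter__/__aexit__ as async mocks, so
    configuring them directly leaves a separate @asynccontextmanager
    helper such as _client_context with no callers.
    """
    client = MagicMock()
    client.__aenter__.return_value = client  # `async with` yields the mock itself
    client.__aexit__.return_value = False    # do not swallow exceptions
    return client

async def demo() -> bool:
    # Entering the context yields the same mock, no _client_context needed
    async with make_mock_client() as c:
        return isinstance(c, MagicMock)

print(asyncio.run(demo()))  # True
```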
class TestProcessDialog:
    """Testes para process_dialog."""

    @pytest.mark.asyncio
    async def test_process_channel(self, mock_telegram_client, mock_channel):
        """Testa process_dialog com Channel."""
        mock_telegram_client.return_value = None

        result = await process_dialog(
            mock_telegram_client,
            mock_channel,
            "Test Channel",
            1,
            dry_run=False,
        )

        assert result is True

    @pytest.mark.asyncio
    async def test_process_channel_dry_run(self, mock_telegram_client, mock_channel):
        """Testa process_dialog com Channel em dry_run."""
        result = await process_dialog(
            mock_telegram_client,
            mock_channel,
            "Test Channel",
            1,
            dry_run=True,
        )

        assert result is True
        mock_telegram_client.assert_not_awaited()

    @pytest.mark.asyncio
    async def test_process_chat(self, mock_telegram_client, mock_chat):
        """Testa process_dialog com Chat."""
        mock_telegram_client.return_value = None

        result = await process_dialog(
            mock_telegram_client,
            mock_chat,
            "Test Chat",
            1,
            dry_run=False,
        )

        assert result is True

    @pytest.mark.asyncio
    async def test_process_chat_fallback_on_rpc_error(self, mock_telegram_client, mock_chat):
        """Testa fallback em process_dialog quando RPCError ocorre em Chat."""
        # Configura side_effect para a primeira chamada (DeleteChatUserRequest) levantar erro
        # e mock delete_dialog para o fallback
        mock_telegram_client.side_effect = BadRequestError(None, "Test error")
        mock_telegram_client.delete_dialog = AsyncMock()

        result = await process_dialog(
            mock_telegram_client,
            mock_chat,
            "Test Chat",
            1,
            dry_run=False,
        )

        assert result is True
        mock_telegram_client.delete_dialog.assert_called_once_with(mock_chat)

    @pytest.mark.asyncio
    async def test_process_user(self, mock_telegram_client, mock_user):
        """Testa process_dialog com User."""
        mock_telegram_client.return_value = None

        result = await process_dialog(
            mock_telegram_client,
            mock_user,
            "Test User",
            1,
            dry_run=False,
        )

        assert result is True

    @pytest.mark.asyncio
    async def test_process_bot(self, mock_telegram_client, mock_bot):
        """Testa process_dialog com bot."""
        mock_telegram_client.return_value = None

        result = await process_dialog(
            mock_telegram_client,
            mock_bot,
            "TestBot",
            1,
            dry_run=False,
        )

        assert result is True

    @pytest.mark.asyncio
    async def test_process_unknown_type(self, mock_telegram_client):
        """Testa process_dialog com tipo desconhecido."""
        unknown_entity = MagicMock(spec=object)
        mock_telegram_client.return_value = None

        result = await process_dialog(
            mock_telegram_client,
            unknown_entity,
            "Unknown",
            1,
            dry_run=False,
        )

        assert result is True

    @pytest.mark.asyncio
    async def test_process_dialog_flood_wait_error(self, mock_telegram_client, mock_channel):
        """Testa process_dialog com FloodWaitError."""
        mock_telegram_client.side_effect = FloodWaitError(None, 60)

        result = await process_dialog(
            mock_telegram_client,
            mock_channel,
            "Test Channel",
            1,
            dry_run=False,
        )

        assert result is False

    @pytest.mark.asyncio
    async def test_process_dialog_rpc_error(self, mock_telegram_client, mock_channel):
        """Testa process_dialog com RPCError."""
        mock_telegram_client.side_effect = BadRequestError(None, "Test error")

        result = await process_dialog(
            mock_telegram_client,
            mock_channel,
            "Test Channel",
            1,
            dry_run=False,
        )

        assert result is False

    @pytest.mark.asyncio
    async def test_process_dialog_generic_exception(self, mock_telegram_client, mock_channel):
        """Testa process_dialog com exceção genérica."""
        mock_telegram_client.side_effect = Exception("Unexpected error")

        result = await process_dialog(
            mock_telegram_client,
            mock_channel,
            "Test Channel",
            1,
            dry_run=False,
        )

        assert result is False
Where are the QR login tests, sucka?!
I pity the fool who adds an entire QR code login feature and doesn't write a single test for display_qr_code() and login_with_qr()! The PR says all the new functions live in __main__.py, but they need test coverage. At minimum, test:
- ASCII QR code generation and display
- The regeneration flow when the QR code expires
- Handling of SessionPasswordNeededError (2FA)
- Creation of the separate session (_qr.session)
Want me to generate the unit tests for the QR login functions, or open an issue to track this task?
🤖 Prompt for AI Agents
In `@tests/test_client.py` around lines 87 - 243, Add unit tests covering the
QR-login features: create tests for display_qr_code() to verify ASCII QR
generation and console output, tests for login_with_qr() covering successful
login, QR expiry and regeneration flow (simulate expired token and ensure a new
QR is emitted), handling of SessionPasswordNeededError to trigger 2FA flow, and
that a separate session file (_qr.session) is created; use mocks for the
Telethon client/session, simulate side_effects (expiry,
SessionPasswordNeededError) and assert calls to display_qr_code, session
creation, and any fallback/regeneration logic in login_with_qr.
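A hedged sketch of the first test the prompt asks for. The display_qr_code body below is a local stand-in that mirrors what the PR describes (build a qrcode.QRCode, feed it the login URL, call print_ascii()); the real function lives in src/clean_telegram/__main__.py and may differ:

```python
import sys
import types
from unittest.mock import MagicMock

def display_qr_code(url: str) -> None:
    """Stand-in mirroring the PR description: render the login URL as ASCII."""
    import qrcode  # the PR pins qrcode>=8.0,<9.0
    qr = qrcode.QRCode()
    qr.add_data(url)
    qr.print_ascii()

# Fake the qrcode module so the test runs without the dependency installed
fake_qrcode = types.ModuleType("qrcode")
fake_instance = MagicMock()
fake_qrcode.QRCode = MagicMock(return_value=fake_instance)
sys.modules["qrcode"] = fake_qrcode

display_qr_code("tg://login?token=abc")

# The assertions a real test would make
fake_qrcode.QRCode.assert_called_once_with()
fake_instance.add_data.assert_called_once_with("tg://login?token=abc")
fake_instance.print_ascii.assert_called_once()
```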
"""Testes para o módulo __main__.py."""

import asyncio
from contextlib import asynccontextmanager
from unittest.mock import AsyncMock, MagicMock, Mock, patch

import pytest
from telethon.errors import FloodWaitError, RPCError
from telethon.tl.types import Channel, User

# Importamos aqui para evitar problemas de importação
from clean_telegram import __main__
No tests for QR login, sucka! I pity the fool who ships without testing the main feature!
The PR adds display_qr_code and login_with_qr, but there is not a single test for these functions. Consider adding unit tests for:
- display_qr_code: verify that qrcode.QRCode is called and print_ascii is invoked
- login_with_qr: success, timeout, SessionPasswordNeededError, and generic-error scenarios
Want me to generate the unit tests for display_qr_code and login_with_qr, or open an issue to track it?
🤖 Prompt for AI Agents
In `@tests/test_main.py` around lines 1 - 12, Add unit tests for display_qr_code
and login_with_qr: for display_qr_code patch qrcode.QRCode to return a mock with
a print_ascii method and assert QRCode was instantiated with the provided data
and print_ascii was called; for login_with_qr write async tests using
AsyncMock/patch to simulate the client interactions—one test where login
succeeds (mock the awaited call to return a valid user/session and assert the
function returns success), one where a timeout or FloodWait-like exception is
raised (assert the function handles or raises as expected), one where
telethon.errors.SessionPasswordNeededError is raised (assert the function
raises/handles that specific exception), and one generic error path (raise a
generic Exception and assert error handling); use pytest.mark.asyncio or
pytest-asyncio fixtures, patch the specific client methods used inside
login_with_qr (reference the login_with_qr function name and any internal client
call names it invokes) and assert proper behavior for each scenario.
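One of the async scenarios from the prompt, sketched with a local stand-in for login_with_qr. The control flow (wait for the scan, regenerate on timeout, fall back to the 2FA password) follows the PR description; the method names on the QR object are assumptions:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

class SessionPasswordNeededError(Exception):
    """Local stand-in for telethon.errors.SessionPasswordNeededError."""

async def login_with_qr(client, password=None):
    """Stand-in flow: show the QR, wait for the scan, regenerate on
    expiry, and fall back to the 2FA password when the account needs it."""
    qr = await client.qr_login()
    while True:
        try:
            return await asyncio.wait_for(qr.wait(), timeout=10)
        except asyncio.TimeoutError:
            await qr.recreate()  # QR expired: regenerate and keep waiting
        except SessionPasswordNeededError:
            return await client.sign_in(password=password)

# 2FA scenario: the scan "succeeds" but the account requires a password
client = MagicMock()
qr = MagicMock()
qr.wait = AsyncMock(side_effect=SessionPasswordNeededError())
client.qr_login = AsyncMock(return_value=qr)
client.sign_in = AsyncMock(return_value="logged-in-user")

result = asyncio.run(login_with_qr(client, password="hunter2"))
assert result == "logged-in-user"
client.sign_in.assert_awaited_once_with(password="hunter2")
```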
@pytest.mark.asyncio
async def test_main_flood_wait_retry(self, monkeypatch_env, mock_async_telegram_client):
    """Testa retry em FloodWaitError."""
    monkeypatch_env(API_ID="12345", API_HASH="test_hash")

    # Criar mock de diálogo
    mock_dialog = MagicMock()
    mock_dialog.name = "Test Channel"
    mock_dialog.entity = MagicMock(spec=Channel)

    # Contador para controlar quando lançar erro
    call_count = {"count": 0}

    # Mock process_dialog para lançar FloodWaitError na primeira chamada
    async def mock_process_dialog(*args, **kwargs):
        call_count["count"] += 1
        if call_count["count"] == 1:
            raise FloodWaitError(None, 1)
        return True

    # Mock iter_dialogs como async generator
    async def mock_iter_dialogs():
        yield mock_dialog

    client = AsyncMock()
    client.get_me = AsyncMock(
        return_value=Mock(username="testuser", id=12345, first_name="Test")
    )
    client.iter_dialogs = mock_iter_dialogs
    mock_async_telegram_client.__aenter__.return_value = client

    with patch("sys.argv", ["clean-telegram", "--dry-run"]):
        with patch(
            "clean_telegram.__main__.TelegramClient", return_value=mock_async_telegram_client
        ):
            with patch(
                "clean_telegram.__main__.process_dialog", side_effect=mock_process_dialog
            ):
                await __main__.main()

    # Verifica que process_dialog foi chamado mais de uma vez (retry)
    assert call_count["count"] >= 1
Listen up, sucka! This retry test is testing dead code!
Two problems, partner:
1. Unrealistic scenario: the process_dialog mock raises FloodWaitError directly, but the real process_dialog (in client.py) catches FloodWaitError internally and returns False. The except FloodWaitError in __main__ is never reached in practice. The test validates a code path that does not exist.
2. Weak assertion (line 472): assert call_count["count"] >= 1 passes even when no retry happens. To verify that the retry actually occurred, it should be == 2 (first call fails + successful retry).
🔧 Assertion fix (assuming the FloodWait bug in __main__/client is fixed)
- # Verifica que process_dialog foi chamado mais de uma vez (retry)
- assert call_count["count"] >= 1
+ # Verifica que process_dialog foi chamado 2 vezes (1 falha + 1 retry com sucesso)
+ assert call_count["count"] == 2
🧰 Tools
+ assert call_count["count"] == 2🧰 Tools
🪛 Ruff (0.14.14)
[warning] 445-445: Missing return type annotation for private function mock_process_dialog
Add return type annotation: bool
(ANN202)
[warning] 445-445: Missing type annotation for *args
(ANN002)
[warning] 445-445: Unused function argument: args
(ARG001)
[warning] 445-445: Missing type annotation for **kwargs
(ANN003)
[warning] 445-445: Unused function argument: kwargs
(ARG001)
[warning] 452-452: Missing return type annotation for private function mock_iter_dialogs
(ANN202)
🤖 Prompt for AI Agents
In `@tests/test_main.py` around lines 431 - 472, The test
test_main_flood_wait_retry is asserting too weakly and should verify the retry
actually happened: update the final assertion to assert call_count["count"] == 2
so it requires one failing call (FloodWaitError) and one successful retry; keep
the existing mock_process_dialog (which raises FloodWaitError on the first call
and returns True on the second) and the patch of
clean_telegram.__main__.process_dialog so the test exercises the retry path in
__main__.main and validates that process_dialog was invoked twice.
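The retry semantics the corrected assertion pins down can be shown in isolation. A generic sketch: the loop shape, max_retries value, and sleep are assumptions, not the project's actual code:

```python
import asyncio

class FloodWaitError(Exception):
    """Local stand-in for telethon.errors.FloodWaitError."""
    def __init__(self, seconds: int = 1) -> None:
        super().__init__(f"wait {seconds}s")
        self.seconds = seconds

async def run_with_retry(func, max_retries: int = 5) -> bool:
    """Retry func when it raises FloodWaitError, up to max_retries attempts."""
    for _ in range(max_retries):
        try:
            return await func()
        except FloodWaitError:
            await asyncio.sleep(0)  # real code would sleep err.seconds
    return False

calls = {"count": 0}

async def flaky() -> bool:
    """Fail with FloodWaitError on the first call, succeed on the second."""
    calls["count"] += 1
    if calls["count"] == 1:
        raise FloodWaitError(1)
    return True

assert asyncio.run(run_with_retry(flaky)) is True
# One failure plus one successful retry: exactly two calls, which is why
# the test should assert == 2 rather than the weaker >= 1
assert calls["count"] == 2
```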
@pytest.mark.asyncio
async def test_main_max_retries_exceeded(self, monkeypatch_env, mock_async_telegram_client):
    """Testa quando max retries é atingido."""
    monkeypatch_env(API_ID="12345", API_HASH="test_hash")

    # Criar mock de diálogo
    mock_dialog = MagicMock()
    mock_dialog.name = "Test Channel"
    mock_dialog.entity = MagicMock(spec=Channel)

    # Mock process_dialog para sempre lançar FloodWaitError
    async def mock_process_dialog_failing(*args, **kwargs):
        raise FloodWaitError(None, 1)

    # Mock iter_dialogs como async generator
    async def mock_iter_dialogs():
        yield mock_dialog

    client = AsyncMock()
    client.get_me = AsyncMock(
        return_value=Mock(username="testuser", id=12345, first_name="Test")
    )
    client.iter_dialogs = mock_iter_dialogs
    mock_async_telegram_client.__aenter__.return_value = client

    with patch("sys.argv", ["clean-telegram", "--dry-run"]):
        with patch(
            "clean_telegram.__main__.TelegramClient", return_value=mock_async_telegram_client
        ):
            with patch(
                "clean_telegram.__main__.process_dialog",
                side_effect=mock_process_dialog_failing,
            ):
                await __main__.main()

    # Verifica que o loop continuou mesmo após max retries
Max retries test: same unrealistic-scenario problem, partner!
mock_process_dialog_failing always raises FloodWaitError, but with the real process_dialog this scenario never happens (the exception is caught internally). Adjust this after fixing the FloodWaitError bug in client.py.
On top of that, this test has no assertion at all; it only verifies that nothing crashes. Consider asserting that mock_process_dialog_failing was called exactly 5 times (max_retries).
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 485-485: Missing return type annotation for private function mock_process_dialog_failing
Add return type annotation: NoReturn
(ANN202)
[warning] 485-485: Missing type annotation for *args
(ANN002)
[warning] 485-485: Unused function argument: args
(ARG001)
[warning] 485-485: Missing type annotation for **kwargs
(ANN003)
[warning] 485-485: Unused function argument: kwargs
(ARG001)
[warning] 489-489: Missing return type annotation for private function mock_iter_dialogs
(ANN202)
🤖 Prompt for AI Agents
In `@tests/test_main.py` around lines 474 - 510, The test currently uses
mock_process_dialog_failing to always raise FloodWaitError but lacks assertions
and assumes behavior that will change once FloodWaitError handling in client.py
is fixed; update the test to (1) keep using mock_process_dialog_failing as the
patched target for process_dialog, (2) after awaiting __main__.main(), assert
that mock_process_dialog_failing was called exactly the configured max_retries
(e.g., 5) to verify retry logic, and (3) ensure the patch targets
"clean_telegram.__main__.process_dialog" and references FloodWaitError,
process_dialog, mock_process_dialog_failing and __main__.main so the test
reflects the real retry behavior after fixing client.py.
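The exhausted-retries counterpart the comment asks to assert, again as a generic sketch; max_retries == 5 is the value suggested in the review, not confirmed from client.py:

```python
import asyncio

class FloodWaitError(Exception):
    """Local stand-in for telethon.errors.FloodWaitError."""

async def run_with_retry(func, max_retries: int = 5) -> bool:
    """Retry func when it raises FloodWaitError; give up after max_retries."""
    for _ in range(max_retries):
        try:
            return await func()
        except FloodWaitError:
            await asyncio.sleep(0)  # real code would back off here
    return False

calls = {"count": 0}

async def always_failing() -> bool:
    """Raise FloodWaitError on every call."""
    calls["count"] += 1
    raise FloodWaitError()

# The loop gives up cleanly instead of crashing, and the attempt count
# is observable: this is exactly the assertion the test is missing.
assert asyncio.run(run_with_retry(always_failing)) is False
assert calls["count"] == 5
```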
Expand the analysis to cover both the current main branch (59% coverage) and the incoming PR #3 which restructures the codebase (83% coverage). The new QR login feature (display_qr_code and login_with_qr) has 0% test coverage and contains bugs identified by both this analysis and the CodeRabbit review. https://claude.ai/code/session_01M6v9Qk8D3MvXKW5TXq1AG5
Summary
Adds support for logging in via QR code as an alternative to phone/code login. The QR code is rendered directly in the terminal as ASCII, which makes it usable in environments without a graphical interface.
Main changes
New functionality
- --qr-login: enables the QR code login mode
- display_qr_code(): renders the QR code as ASCII in the terminal
- login_with_qr(): manages the full QR code login flow
Modified files
- src/clean_telegram/__main__.py - QR code login functionality added
- pyproject.toml - dependency qrcode>=8.0,<9.0 added
- requirements.txt - dependency qrcode>=8.0,<9.0 added
Tests
How to use
Technical details
- Uses Telethon's client.qr_login()
- Separate session (_qr.session) so it does not interfere with the main session
🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
Documentation
Chores