
Conversation

@electron271 (Member) commented Aug 3, 2025

Description

fixes an issue where users can't make snippets if they aren't a mod

Guidelines

  • My code follows the style guidelines of this project (formatted with Ruff)

  • I have performed a self-review of my own code

  • I have commented my code, particularly in hard-to-understand areas

  • I have made corresponding changes to the documentation if needed

  • My changes generate no new warnings

  • I have tested this change

  • Any dependent changes have been merged and published in downstream modules

  • I have added all appropriate labels to this PR

  • I have followed all of these guidelines.

How Has This Been Tested? (if applicable)

had a user who wasn't a mod make a snippet before and after; it worked after the change

Screenshots (if applicable)

Please add screenshots to help explain your changes.

Additional Information

Please add any other information that is important to this PR.

Summary by Sourcery

Bug Fixes:

  • Replace commands.CheckFailure with PermissionLevelError in SnippetsBaseCog's permission check so non-mod users can create snippets
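A minimal sketch of what the described change likely looks like. The class and method names come from the summary above and the reviewer's guide further down; the import paths, the async signature, and the rest of the method body are assumptions, and only the except-clause swap reflects the described fix.

    # Sketch only: SnippetsBaseCog, check_if_user_has_mod_override,
    # checks.has_pl(2).predicate(ctx), and PermissionLevelError are named in
    # this PR; the import paths and surrounding method body are assumptions.
    from discord.ext import commands

    from tux.utils import checks                            # assumed path
    from tux.utils.exceptions import PermissionLevelError   # assumed path


    class SnippetsBaseCog(commands.Cog):
        async def check_if_user_has_mod_override(self, ctx: commands.Context) -> bool:
            """Return True only when the invoker passes the mod-level (pl 2) check."""
            try:
                # has_pl(2).predicate(ctx) is taken from the sequence diagram below;
                # whether it must be awaited is an assumption of this sketch.
                await checks.has_pl(2).predicate(ctx)
            except PermissionLevelError:
                # Previously `except commands.CheckFailure:`; per the PR summary,
                # catching the specific PermissionLevelError is what lets non-mod
                # users create snippets (they simply have no mod override).
                return False
            return True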

@github-actions bot (Contributor) commented Aug 3, 2025

Dependency Review

✅ No vulnerabilities or license issues or OpenSSF Scorecard issues found.

Scanned Files

None

@cloudflare-workers-and-pages

Deploying tux with Cloudflare Pages

Latest commit: 43d7a10
Status: ✅  Deploy successful!
Preview URL: https://63b0fb49.tux-afh.pages.dev
Branch Preview URL: https://snippet-permissions-fix.tux-afh.pages.dev


@codecov

codecov bot commented Aug 3, 2025

❌ 1 Tests Failed:

Tests completed: 240 (Failed: 1, Passed: 239, Skipped: 2)
View the top 1 failed test(s) by shortest run time
tests/unit/test_main.py::TestMainIntegration::test_module_can_be_executed_as_script
Stack Traces | 0.978s run time
self = <tests.unit.test_main.TestMainIntegration object at 0x7f86d063c640>

    @pytest.mark.slow
    def test_module_can_be_executed_as_script(self) -> None:
        """Test that the module can actually be executed as a Python script."""
        # This is a real integration test that actually tries to run the module
        # We mock the TuxApp to prevent the bot from starting
    
        # Create a temporary script that imports and patches TuxApp
    
        test_script = textwrap.dedent("""
            import sys
            from unittest.mock import Mock, patch
    
            # Add the project root to the path
            sys.path.insert(0, "{project_root}")
    
            # Mock the config loading before importing tux.main to prevent FileNotFoundError in CI
            # We need to mock the file reading operations that happen at module import time
            with patch("pathlib.Path.read_text") as mock_read_text:
                # Mock the YAML content that would be read from config files
                mock_config_content = '''
                USER_IDS:
                  BOT_OWNER: 123456789
                  SYSADMINS: [123456789]
                ALLOW_SYSADMINS_EVAL: false
                BOT_INFO:
                  BOT_NAME: "Test Bot"
                  PROD_PREFIX: "!"
                  DEV_PREFIX: "??"
                  ACTIVITIES: "Testing"
                  HIDE_BOT_OWNER: false
                STATUS_ROLES: []
                TEMPVC_CATEGORY_ID: null
                TEMPVC_CHANNEL_ID: null
                GIF_LIMITER:
                  RECENT_GIF_AGE: 3600
                  GIF_LIMIT_EXCLUDE: []
                  GIF_LIMITS_USER: {{}}
                  GIF_LIMITS_CHANNEL: {{}}
                XP:
                  XP_BLACKLIST_CHANNELS: []
                  XP_ROLES: []
                  XP_MULTIPLIERS: []
                  XP_COOLDOWN: 60
                  LEVELS_EXPONENT: 2
                  SHOW_XP_PROGRESS: false
                  ENABLE_XP_CAP: true
                SNIPPETS:
                  LIMIT_TO_ROLE_IDS: false
                  ACCESS_ROLE_IDS: []
                '''
                mock_read_text.return_value = mock_config_content
    
                with patch("tux.app.TuxApp") as mock_app:
                    mock_instance = Mock()
                    mock_app.return_value = mock_instance
    
                    # Import and run main
                    import tux.main
                    tux.main.run()
    
                    # Verify it was called
                    assert mock_app.called
                    assert mock_instance.run.called
                    print("SUCCESS: Module executed correctly")
        """)
    
        # Get the project root dynamically
        project_root = Path(__file__).parent.parent
        script_content = test_script.format(project_root=project_root)
    
        # Write and execute the test script
        with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
            f.write(script_content)
            temp_script = f.name
    
        try:
            result = subprocess.run(
                [sys.executable, temp_script],
                capture_output=True,
                text=True,
                timeout=30,
                check=False,
            )
    
            # Check that the script executed successfully
>           assert result.returncode == 0, f"Script failed: {result.stderr}"
E           AssertionError: Script failed: Traceback (most recent call last):
E               File "/tmp/tmpld50g5ik.py", line 45, in <module>
E                 with patch("tux.app.TuxApp") as mock_app:
E                      ~~~~~^^^^^^^^^^^^^^^^^^
E               File ".../hostedtoolcache/Python/3.13.5................../x64/lib/python3.13/unittest/mock.py", line 1481, in __enter__
E                 self.target = self.getter()
E                               ~~~~~~~~~~~^^
E               File ".../hostedtoolcache/Python/3.13.5................../x64/lib/python3.13/pkgutil.py", line 518, in resolve_name
E                 mod = importlib.import_module(s)
E               File ".../hostedtoolcache/Python/3.13.5................../x64/lib/python3.13/importlib/__init__.py", line 88, in import_module
E                 return _bootstrap._gcd_import(name[level:], package, level)
E                        ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E               File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
E               File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
E               File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
E               File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
E               File "<frozen importlib._bootstrap_external>", line 1026, in exec_module
E               File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
E               File ".../tux/tux/app.py", line 11, in <module>
E                 from tux.bot import Tux
E               File ".../tux/tux/bot.py", line 20, in <module>
E                 from tux.cog_loader import CogLoader
E               File ".../tux/tux/cog_loader.py", line 14, in <module>
E                 from tux.utils.config import CONFIG
E               File ".../tux/utils/config.py", line 55, in <module>
E                 class Config:
E                 ...<100 lines>...
E                     BRIDGE_WEBHOOK_IDS: Final[list[int]] = [int(x) for x in config["IRC"]["BRIDGE_WEBHOOK_IDS"]]
E               File ".../tux/utils/config.py", line 156, in Config
E                 BRIDGE_WEBHOOK_IDS: Final[list[int]] = [int(x) for x in config["IRC"]["BRIDGE_WEBHOOK_IDS"]]
E                                                                         ~~~~~~^^^^^^^
E             KeyError: 'IRC'
E             
E           assert 1 == 0
E            +  where 1 = CompletedProcess(args=['.../tux/tux/.venv/bin/python', '/tmp/tmpld50g5ik.py'], returncode=1, stdout='', stderr='Traceback (most recent call last):\n  File "/tmp/tmpld50g5ik.py", line 45, in <module>\n    with patch("tux.app.TuxApp") as mock_app:\n         ~~~~~^^^^^^^^^^^^^^^^^^\n  File ".../hostedtoolcache/Python/3.13.5................../x64/lib/python3.13/unittest/mock.py", line 1481, in __enter__\n    self.target = self.getter()\n                  ~~~~~~~~~~~^^\n  File ".../hostedtoolcache/Python/3.13.5................../x64/lib/python3.13/pkgutil.py", line 518, in resolve_name\n    mod = importlib.import_module(s)\n  File ".../hostedtoolcache/Python/3.13.5................../x64/lib/python3.13/importlib/__init__.py", line 88, in import_module\n    return _bootstrap._gcd_import(name[level:], package, level)\n           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import\n  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load\n  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked\n  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked\n  File "<frozen importlib._bootstrap_external>", line 1026, in exec_module\n  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed\n  File ".../tux/tux/app.py", line 11, in <module>\n    from tux.bot import Tux\n  File ".../tux/tux/bot.py", line 20, in <module>\n    from tux.cog_loader import CogLoader\n  File ".../tux/tux/cog_loader.py", line 14, in <module>\n    from tux.utils.config import CONFIG\n  File ".../tux/utils/config.py", line 55, in <module>\n    class Config:\n    ...<100 lines>...\n        BRIDGE_WEBHOOK_IDS: Final[list[int]] = [int(x) for x in config["IRC"]["BRIDGE_WEBHOOK_IDS"]]\n  File ".../tux/utils/config.py", line 156, in Config\n    BRIDGE_WEBHOOK_IDS: Final[list[int]] = [int(x) for x in config["IRC"]["BRIDGE_WEBHOOK_IDS"]]\n                                                            ~~~~~~^^^^^^^\nKeyError: \'IRC\'\n').returncode

tests/unit/test_main.py:279: AssertionError

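The KeyError: 'IRC' in the traceback above indicates that the mocked YAML in tests/unit/test_main.py has no IRC section, while tux/utils/config.py reads config["IRC"]["BRIDGE_WEBHOOK_IDS"] at import time. A hedged sketch of one way to unblock that test follows; only BRIDGE_WEBHOOK_IDS is visible in the traceback, so any other IRC keys the real Config class reads would need placeholder values too.

    # Hedged sketch: the IRC section the mocked config would need so that
    # tux/utils/config.py can resolve config["IRC"]["BRIDGE_WEBHOOK_IDS"]
    # at module import. Only BRIDGE_WEBHOOK_IDS appears in the traceback;
    # other IRC keys (if the Config class reads any) would need placeholders too.
    extra_irc_section = """
    IRC:
      BRIDGE_WEBHOOK_IDS: []
    """

    # In the test, this would be folded into mock_config_content before
    # mock_read_text.return_value is set.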

@electron271 (Member, Author)

this seems dependent on LIMIT_TO_ROLE_IDS being false, which could explain this.
in all testing, users were bot owner/sysadmin since they managed that tux instance; both of the main servers which run tux have it set to true, and the only server which may have it set to false is currently on a version from before the snippets refactoring
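For context, the mocked config in the failing test above sets SNIPPETS.LIMIT_TO_ROLE_IDS to false with an empty ACCESS_ROLE_IDS list. A purely illustrative sketch of how such a gate is typically wired is below; this is not claimed to be tux's actual logic, and every name other than those two config keys is hypothetical.

    # Hypothetical sketch of a role-gated snippet check; only the two config
    # keys (LIMIT_TO_ROLE_IDS, ACCESS_ROLE_IDS) come from the mocked YAML above,
    # everything else is illustrative.
    import discord


    def user_may_create_snippets(
        member: discord.Member,
        limit_to_role_ids: bool,
        access_role_ids: list[int],
    ) -> bool:
        """When the limit flag is off, everyone may create snippets; otherwise
        the member needs at least one of the configured access roles."""
        if not limit_to_role_ids:
            return True
        member_role_ids = {role.id for role in member.roles}
        return any(role_id in member_role_ids for role_id in access_role_ids)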

@sourcery-ai (Contributor)

sourcery-ai bot commented Aug 3, 2025

Reviewer's Guide

This PR fixes the mod-check logic in the SnippetsBaseCog by importing and catching the specific PermissionLevelError instead of the generic CheckFailure, ensuring non-mod users are correctly handled when creating snippets.

Sequence diagram for permission check in check_if_user_has_mod_override

sequenceDiagram
    participant User as actor User
    participant Bot as SnippetsBaseCog
    participant Checks as checks.has_pl(2)
    participant Exception as PermissionLevelError
    User->>Bot: invoke command
    Bot->>Checks: has_pl(2).predicate(ctx)
    alt User is not a mod
        Checks-->>Bot: raise PermissionLevelError
        Bot->>Bot: return False
    else User is a mod
        Checks-->>Bot: success
        Bot->>Bot: continue
    end

Class diagram for updated exception handling in SnippetsBaseCog

classDiagram
    class SnippetsBaseCog {
        +check_if_user_has_mod_override(ctx)
    }
    class commands.CheckFailure
    class PermissionLevelError
    SnippetsBaseCog ..> PermissionLevelError : catches
    SnippetsBaseCog ..> commands.CheckFailure : (no longer catches)

File-Level Changes

Change: Handle the specific PermissionLevelError in the mod-check predicate
Details:
  • Imported PermissionLevelError from exceptions
  • Replaced except commands.CheckFailure with except PermissionLevelError
  • Added a comment to explain the non-mod path
Files: tux/cogs/snippets/__init__.py


@sourcery-ai bot (Contributor) left a comment


Hey @electron271 - I've reviewed your changes - here's some feedback:

  • Ensure any other permission checks that previously caught commands.CheckFailure are updated to catch PermissionLevelError for consistency across the codebase.
  • Verify that checks.has_pl only raises PermissionLevelError and not other CheckFailure-derived exceptions to avoid unhandled cases.
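One hedged way to pin down the second point is a small regression test asserting that the permission predicate raises PermissionLevelError specifically for a caller below the required level. The import paths, the pytest-asyncio dependency, and the non_mod_ctx fixture are all assumptions made for this sketch.

    # Hedged sketch of a regression test for the second review point.
    # Assumptions: pytest-asyncio is available, the import paths exist as
    # written, and non_mod_ctx is a hypothetical fixture yielding a context
    # whose author is below permission level 2.
    import pytest

    from tux.utils import checks                            # assumed path
    from tux.utils.exceptions import PermissionLevelError   # assumed path


    @pytest.mark.asyncio
    async def test_has_pl_raises_permission_level_error(non_mod_ctx) -> None:
        """has_pl(2) should fail with PermissionLevelError for a non-mod context."""
        with pytest.raises(PermissionLevelError):
            await checks.has_pl(2).predicate(non_mod_ctx)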

@electron271 (Member, Author)

okay, so i think this might be some weird edge case; either way, it's fixed

@meatharvester (Collaborator) left a comment


everything appears to be good and doesn't introduce any unwanted side effects

@electron271 electron271 merged commit 35cb543 into main Aug 3, 2025
36 checks passed
@electron271 electron271 deleted the snippet-permissions-fix branch August 3, 2025 04:27