From 46a4f23048fe136e80b1c147eeb43ced5061ee11 Mon Sep 17 00:00:00 2001
From: Omer Katz
Date: Wed, 19 Aug 2020 20:19:45 +0300
Subject: [PATCH] Refactor CLI to use Click instead of our custom argparse based framework (#5718)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* 'abstractproperty' is deprecated. Use 'property' with 'abstractmethod' instead * Fix #2849 - Initial work of celery 5.0.0 alpha1 series by dropping python below 3.6 from matrix & remove import from __future__ (#5684) * initial work of celery 5.0.0 alpha-1 series by dropping python below 3.6 * i-5651(ut): add ut for ResultSet.join_native (#5679) * dropped python versions below 3.6 from tox * dropped python versions below 3.6 from travis * dropped python versions below 3.6 from appveyor * dropped python2 compat __future__ imports from tests * Fixed a bug where canvases with a group and tasks in the middle followed by a group fail to complete and hang indefinitely. (#5681) Fixes #5512, fixes #5354, fixes #2573. * dropped python2 compat __future__ imports from celery * dropped python2 compat code from init * revert readme change about version * removed python 2 object inheritance (#5687) * removed python 2 object inheritance * removed python 2 object inheritance * removed python 2 compatibility decorator (#5689) * removed python 2 compatibility decorator * removed python 2 compatibility decorator * removed python 2 compatibility decorator * removed python 2 compatibility decorator * Remove unused imports. * Remove unused imports of python_2_unicode_compatible. Also removed leftover usage of them where they were still used. * Run pyupgrade on codebase (#5726) * Run pyupgrade on codebase. * Use format strings where possible. * pyupgrade examples. * pyupgrade on celerydocs extension. * pyupgrade on updated code. * Address code review comments. * Address code review comments. * Remove unused imports. * Fix indentation. * Address code review comments. * Fix syntax error. * Fix syntax error. * Fix syntax error. * pytest 5.x for celery 5 (#5791) * Port latest changes from master to v5-dev (#5942) * Fix serialization and deserialization of nested exception classes (#5717) * Fix #5597: chain priority (#5759) * adding `worker_process_shutdown` to __all__ (#5762) * Fix typo (#5769) * Reformat code. * Simplify commands for looking up celery worker processes (#5778) * update doc - celery supported storage list. (#5776) * Update introduction.rst * Update introduction.rst * Fail xfailed tests if the failure is unexpected. * Added integration coverage for link_error (#5373) * Added coverage for link_error. * Use pytest-rerunfailures plugin instead of rolling our own custom implementation. * Added link_error with retries. This currently fails. * Remove unused import. * Fix import on Python 2.7. * retries in link_error do not hang the worker anymore. * Run error callbacks eagerly when the task itself is run eagerly. Fixes #4899. * Adjust unit tests accordingly. * Grammar in documentation (#5780) * Grammar in documentation * Address review. * pypy 7.2 matrix (#5790) * removed extra slashes in CELERY_BROKER_URL (#5792) The Celery broker URL in settings.py had two slashes at the end, which are not required and can be misleading,
so I changed CELERY_BROKER_URL = 'amqp://guest:guest@localhost//' to CELERY_BROKER_URL = 'amqp://guest:guest@localhost'. * Fix #5772 task_default_exchange & task_default_exchange_type not working (#5773) * Fix #5772 task_default_exchange & task_default_exchange_type not working * Add unit test: test_setting_default_exchange * Move default_exchange test to standalone class * Run integration suite with memcached results backend. (#5739) * Fix hanging forever when fetching results from a group(chain(group)) canvas. (#5744) PR #5739 uncovered multiple problems with the cache backend. This PR should resolve one of them. PR #5638 fixed the same test case for our async results backends that support native join. However, it did not fix the test case for sync results backends that support native join. * Fix regression in PR #5681. (#5753) See comment in the diff for details. * Grammatical fix to CONTRIBUTING.rst doc (#5794) * Fix #5734 Celery does not consider authMechanism on mongodb backend URLs (#5795) * Fix #5734 Celery does not consider authMechanism on mongodb backend URLs * Add unit test: test_get_connection_with_authmechanism * Add unit test: test_get_connection_with_authmechanism_no_username * Fix errors in Python 2.7 Remove "," after "**" operator * Revert "Revert "Revert "Added handle of SIGTERM in BaseTask in celery/task.py to prevent kill the task" (#5577)" (#5586)" (#5797) This reverts commit f79894e0a2c7156fd0ca5e8e3b652b6a46a7e8e7. * Add Python 3.8 Support (#5785) * Added Python 3.8 to the build matrix. * Ensure a supported tblib version is installed for Python 3.8 and above. In addition, modernize the relevant tests. * Workaround patching problem in test. * py 3.8 in classifier * ubuntu bionic (#5799) * ubuntu bionic * fast finish * sync bumpversion with pypi release * Dev.req (#5803) * update docker config * undo hardpin * dev req install from github master * update docker config (#5801) * update docker config * make dockerfile install from github master dev branch by default * update download link * Isort. * Grammatical & punctuation fixes for CONTRIBUTING.rst document (#5804) * update dockerfile * switched to ubuntu bionic * update docker * keep it empty until we reconfigure it again with autopep8 * Fixed Dockerfile (#5809) * Update document CONTRIBUTING.rst & fix Dockerfile typo (#5813) * Added an issue template for minor releases. * reference gocelery Go Client/Server for Celery (#5815) * Add enterprise language (#5818) * Fix/correct minor doc typos (#5825) * Correct a small typo * Correct bad contributing documentation links * Preserve the task priority in case of a retry (#5820) * Preserve the task priority in case of a retry * Created test case for retried tasks with priority * Implement an integration test for retried tasks with priorities * bump kombu * basic changelog for celery 4.4.0rc4 * bump celery 4.4.0rc4 * events bootstep disabled if no events (#5807) * events bootstep disabled if no events * Added unit tests. * update bug report template * fixing ascii art to look nicer (#5831) * Only rerun flaky tests when failures can be intermittent. * Rename Changelog to Changelog.rst * The test_nested_group_chain test can run without native_join support. (#5838) * Run integration tests with Cassandra (#5834) * Run integration tests with Cassandra. * Configure cassandra result backend * Pre-create keyspace and table * Fix deprecation warning. * Fix path to cqlsh. * Increase connection timeout. * Wait until the cluster is available.
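The Cassandra entries above pre-create a keyspace and table and configure the result backend. A minimal sketch of such a configuration, using setting names from the Celery configuration docs; the server, keyspace and table values are illustrative:

    # celeryconfig.py -- illustrative Cassandra result backend setup
    result_backend = 'cassandra://'
    cassandra_servers = ['127.0.0.1']   # cluster contact points
    cassandra_port = 9042               # default CQL port
    cassandra_keyspace = 'celery'       # pre-created keyspace (see entries above)
    cassandra_table = 'tasks'           # pre-created table
    cassandra_read_consistency = 'ONE'
    cassandra_write_consistency = 'ONE'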
* SQS - Reject on failure (#5843) * reject on failure * add documentation * test fix * test fix * test fix * Add a concurrency model with ThreadPoolExecutor (#5099) * Add a concurrency model with ThreadPoolExecutor * thread model test for pypy * Chain primitive's code example fix in canvas documentation (Regression PR#4444) (#5845) * Changed multi-line string (#5846) This string wasn't rendering properly and was printing the Python statement too. Although the change isn't as pretty code-wise, it gets rid of an annoyance for the user. * Add auto expiry for DynamoDB backend (#5805) * Add auto expiry for DynamoDB backend This adds auto-expire support for the DynamoDB backend, via the DynamoDB Time to Live feature. * Require boto3>=1.9.178 for DynamoDB TTL support boto3 version 1.9.178 requires botocore>=1.12.178. botocore version 1.12.178 introduces support for the DynamoDB UpdateTimeToLive call. The UpdateTimeToLive call is used by the DynamoDB backend to enable TTL support on a newly created table. * Separate TTL handling from table creation Handle TTL enabling/disabling separately from the table get-or-create function. Improve handling of cases where the TTL is already set to the desired state. DynamoDB only allows a single TTL update action within a fairly long time window, so some problematic cases (changing the TTL attribute, enabling/disabling TTL when it was recently modified) will raise exceptions that have to be dealt with. * Handle older boto3 versions If the boto3 TTL methods are not found, log an informative error. If the user wants to enable TTL, raise an exception; if TTL should be disabled, simply return. * Improve logging - Handle exceptions by logging the error and re-raising - Log (level debug) when the desired TTL state is already in place * Add and use _has_ttl() convenience method Additional changes: - Handle exceptions when calling boto3's describe_time_to_live() - Fix test cases for missing TTL methods * Update ttl_seconds documentation * Log invalid TTL; catch and raise ValueError * Separate method _get_table_ttl_description * Separate ttl method validation function * Clarify tri-state TTL value (sketch below) * Improve test coverage * Fix minor typo in comment * Mark test as xfail when using the cache backend. (#5851) * [Fix #5436] Store extending result in all backends (#5661) * [Fix #5436] Store extending result in all backends * Fix sqlalchemy * More fixes * Fixing tests * removing unnecessary import * Removing debug code * Removing debug code * Add tests for get_result_meta in base and database * Revert "Add auto expiry for DynamoDB backend (#5805)" (#5855) This reverts commit f7f5bcfceca692d0e78c742a7c09c424f53d915b. * Revert "Mark test as xfail when using the cache backend. (#5851)" (#5854) This reverts commit 1b303c2968836245aaa43c3d0ff9249dd8bf9ed2. * docs: Document Redis commands used by celery (#5853) * remove cache backend integration test. (#5856) * Fix a race condition when publishing a very large chord header (#5850) * Added a test case which artificially introduces a delay to group.save(). * Fix race condition by delaying the task only after saving the group. * update tox * Remove duplicate boto dependency. (#5858) * Revert "remove cache backend integration test. (#5856)" (#5859) This reverts commit e0ac7a19a745dd5a52a615c1330bd67f2cef4d00. * Revert "Revert "Add auto expiry for DynamoDB backend (#5805)" (#5855)" (#5857) This reverts commit 4ddc605392d7694760f23069c34ede34b3e582c3. * Revert "update tox" This reverts commit 49427f51049073e38439ea9b3413978784a24999.
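The DynamoDB TTL entries above describe a tri-state check (enabled / disabled / methods unavailable on older boto3) around boto3's time-to-live calls. A minimal sketch of that logic, not the backend's actual code; the has_ttl() helper, table name and 'ttl' attribute name are illustrative:

    import boto3
    from botocore.exceptions import ClientError

    def has_ttl(client, table_name):
        """Tri-state TTL status: True (enabled), False (disabled),
        None (TTL methods unavailable, i.e. boto3 < 1.9.178)."""
        if not hasattr(client, 'describe_time_to_live'):
            return None
        try:
            desc = client.describe_time_to_live(TableName=table_name)
        except ClientError:
            raise  # the real backend logs the error and re-raises
        status = desc['TimeToLiveDescription']['TimeToLiveStatus']
        return status in ('ENABLED', 'ENABLING')

    client = boto3.client('dynamodb', region_name='us-east-1')
    if has_ttl(client, 'celery') is False:
        client.update_time_to_live(
            TableName='celery',
            TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'ttl'},
        )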
* Fix the test_simple_chord_with_a_delay_in_group_save test. * Revert "Revert "Skip unsupported canvas when using the cache backend"" (#5860) * Revert "Revert "Mark test as xfail when using the cache backend. (#5851)" (#5854)" This reverts commit fc101c61c1912c4dafa661981f8b865c011e8a55. * Make the xfail condition stricter. * Fix the xfail condition. * Linters should use Python 3.8. * Move pypy unit tests to the correct stage. * Temporarily allow PyPy to fail since it is unavailable in Travis. * Remove unused variables. * Fix unused imports. * Fix pydocstyle errors in dynamodb. * Fix pydocstyle errors in redis backend. * bump kombu to 4.6.7 * celery 4.4.0rc5 changelog * celery 4.4.0rc5 * rm redundant code (#5864) * isort. * Document the threads task pool in the CLI. * Removed the paragraph about using librabbitmq. Refer to #5872 (#5873) * Task class definitions can have retry attributes (#5869) * autoretry_for * retry_kwargs * retry_backoff * retry_backoff_max * retry_jitter can now be defined as cls attributes. All of these can be overridden from the @task decorator (sketch below). https://github.com/celery/celery/issues/4684 * whatsnew in Celery 4.4 as per project's standard (#5817) * 4.4 whatsnew * update * update * Move old whatsnew to history. * Remove old news & fix markers. * Added a section notifying Python 3.4 has been dropped. * Added a note about ElasticSearch basic auth. * Added a note about being able to replace eagerly run tasks. * Update index. * Address comment. * Described boto3 version updates. * Fix heading. * More news. * Thread pool. * Add Django and Config changes * Bump version 4.4.0 * update readme * Update docs regarding Redis Message Priorities (#5874) * Update docs regarding Redis Message Priorities * fixup! Update docs regarding Redis Message Priorities * Update 4.4.0 docs (#5875) * Update 4.4 release changelog * Update whatsnew-4.4 * Update tasks docs * Fix recent tasks doc file update (#5879) * Include renamed Changelog.rst in source releases. (#5880) Changelog.rst was renamed from Changelog in fd023ec174bedc2dc65c63a0dc7c85e425ac00c6 but MANIFEST.in was not updated to include the new name. This fixes the file name so Changelog.rst will show up in future source releases again. * Reorganised project_urls and classifiers. (#5884) * Use safequote in SQS Getting Started doc (#5885) * Have appveyor build relevant versions of Python. (#5887) * Have appveyor build relevant and buildable versions of Python. * Appveyor is missing CI requirements to build. * Pin pycurl to version that will build with appveyor (because wheel files exist) * Restrict python 2.7 64 bit version of python-dateutil for parse. * Use is_alive instead of isAlive for Python 3.9 compatibility. (#5898) * Very minor tweak to comment to improve docs (#5900) As discussed here: https://stackoverflow.com/questions/58816271/celery-task-asyncresult-takes-task-id-but-is-documented-to-get-asyncresult-inst this comment seems to flow to a very confusing and misleading piece of documentation here: https://docs.celeryproject.org/en/latest/reference/celery.app.task.html#celery.app.task.Task.AsyncResult * Support configuring schema of a PostgreSQL database (#5910) * Support configuring schema of a PostgreSQL database * Add unit test * Remove blank line * Fix raise issue to make exception message more friendly (#5912) Signed-off-by: Chenyang Yan * Add progress for retry connections (#5915) This will show the current retry progress, so it clears up confusion about how many retries will be attempted when connecting to the broker.
Closes #4556 * chg: change xrange to range (#5926) * update docs for json serializer and add note for int keys serialization (#5932) * fix indentation for note block in calling.rst (#5933) * Added links to other issue trackers. (#5939) * Add labels automatically for issues. (#5938) * Run pyupgrade. Co-authored-by: Michal Čihař Co-authored-by: ptitpoulpe Co-authored-by: Didi Bar-Zev Co-authored-by: Santos Solorzano Co-authored-by: manlix Co-authored-by: Jimmy <54828848+sckhg1367@users.noreply.github.com> Co-authored-by: Борис Верховский Co-authored-by: Asif Saif Uddin Co-authored-by: Jainal Gosaliya Co-authored-by: gsfish Co-authored-by: Dipankar Achinta Co-authored-by: Pengjie Song (宋鹏捷) Co-authored-by: Chris Griffin Co-authored-by: Muhammad Hewedy Co-authored-by: Blaine Bublitz Co-authored-by: Tamu Co-authored-by: Erik Tews Co-authored-by: abhinav nilaratna Co-authored-by: Wyatt Paul Co-authored-by: gal cohen Co-authored-by: as Co-authored-by: Param Kapur Co-authored-by: Sven Ulland Co-authored-by: Safwan Rahman Co-authored-by: Aissaoui Anouar Co-authored-by: Neal Wang Co-authored-by: Alireza Amouzadeh Co-authored-by: Marcos Moyano Co-authored-by: Stepan Henek Co-authored-by: Andrew Sklyarov Co-authored-by: Michael Fladischer Co-authored-by: Dejan Lekic Co-authored-by: Yannick Schuchmann Co-authored-by: Matt Davis Co-authored-by: Karthikeyan Singaravelan Co-authored-by: Bernd Wechner Co-authored-by: Sören Oldag Co-authored-by: uddmorningsun Co-authored-by: Amar Fadil <34912365+marfgold1@users.noreply.github.com> Co-authored-by: woodenrobot Co-authored-by: Sardorbek Imomaliev * Remove fallback code for Python 2 support marked with TODOs. (#5953) Co-authored-by: Asif Saif Uddin * Remove PY3 conditionals (#5954)
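The #5869 entry above ("Task class definitions can have retry attributes") makes autoretry_for, retry_kwargs, retry_backoff, retry_backoff_max and retry_jitter definable as class attributes. A minimal sketch under those names; the base class, broker URL and task body are illustrative:

    import random
    from celery import Celery, Task

    app = Celery('proj', broker='amqp://guest:guest@localhost')  # illustrative

    class BaseTaskWithRetry(Task):
        autoretry_for = (ConnectionError,)  # retry automatically on these
        retry_kwargs = {'max_retries': 5}
        retry_backoff = True                # exponential backoff between retries
        retry_backoff_max = 600             # cap the backoff at 10 minutes
        retry_jitter = True                 # randomize intervals to avoid bursts

    @app.task(base=BaseTaskWithRetry, bind=True)
    def flaky_fetch(self, url):
        # Illustrative body: any ConnectionError here triggers an autoretry.
        if random.random() < 0.5:
            raise ConnectionError('transient failure')
        return url

Per the entry, any of these can still be overridden per task from the @task decorator.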
* remove redundant raise from docstring (#5941) `throw` is True by default so the Retry exception will already get raised by calling `self.retry(countdown=60 * 5, exc=exc)` (sketch below) * Run pyupgrade. * Fix typo (#5943) * Remove fallback code for Python 2 support. * docs: fixes Rabbits and Warrens link in routing userguide (#4007) (#5949) * Fix labels on GitHub issue templates. (#5955) Use quotation marks to escape labels on GitHub issue templates. This prevents the colon from breaking the template.
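As the docstring entry above notes, `throw` defaults to True, so calling self.retry() is itself enough to raise Retry. A minimal sketch; the task, helper and exception type are illustrative:

    from celery import Celery

    app = Celery('proj')

    def unreliable_send(payload):
        raise TimeoutError('downstream timed out')  # hypothetical transport stub

    @app.task(bind=True, max_retries=3)
    def deliver(self, payload):
        try:
            unreliable_send(payload)
        except TimeoutError as exc:
            # throw=True is the default: self.retry() raises Retry itself,
            # so prefixing it with `raise` is redundant (hence the doc fix).
            self.retry(countdown=60 * 5, exc=exc)

The eager-retry fix further down relies on the same flag in the other direction: use `return self.retry(..., throw=False)` when you want the retry's eventual value instead of the exception.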
* added retry_on_timeout and socket_keepalive to config and doc (#5952) * Fixed event capture from building infinite list (#5870) * Fix error propagation example (#5966) * update range (#5971) * update setup.cfg * bump billiard to 3.6.3.0 * Update __init__.py (#5951) * Update __init__.py Fixed issue for object with result_backend=True (decode fails on multiple None request) * Update __init__.py suggested changes * Update __init__.py * Use configured db schema also for sequences (#5972) * Added a default value for retries in worker.strategy. (#5945) * Added a default value for retries in worker.strategy. I was facing an issue when adding tasks directly to rabbitmq using pika instead of calling task.apply_async. The issue was that the self.retry mechanism was failing. In app/tasks.py the line `retries = request.retries + 1` was causing the issue. On further tracing I figured out that it was because the default .get value (None) was getting passed through this function and was raising TypeError: unsupported operand type(s) for +: 'NoneType' and 'int' * Add test cases for default and custom retries value * pypy 7.3 (#5980) * Pass `interval` to `get_many` (#5931) * Pass `interval` to `get_many` * Fix: Syntax error for py2.7 * Fix: Syntax error for py2.7 * Fixed problem with conflicting autoretry_for task parameter and Task.replace() (#5934) * Fix #5917 (#5918) * Fix changelog (#5881) * merge in place the app's beat schedule in the default Schedule class. (#5908) * Handle Redis connection errors in result consumer (#5921) * Handle Redis connection errors in result consumer * Closes #5919. * Use context manager for Redis consumer reconnect * Log error when result backend reconnection fails * Fix inspect_command documentation (#5983) * Use gevent and eventlet wait() functions to remove busy-wait (#5974) * Use gevent and eventlet wait() functions to remove busy-wait Fixes issue #4999. Calling AsyncResult.get() in a gevent context would cause the async Drainer to repeatedly call wait_for until the result was completed. I've updated the code to have a specific implementation for gevent and eventlet that will cause wait_for to only return every "timeout" # of seconds, rather than repeatedly returning. Some things I'd like some feedback on: * Where's the best place to add test coverage for this? It doesn't look like there are any tests that directly exercised the Drainer yet so I would probably look to add some of these to the backends/ unit tests. * The way I did this for the Eventlet interface was to rely on the private _exit_event member of the GreenThread instance; to do this without relying on a private member would require some additional changes to the backend Drainer interface so that we could wait for an eventlet-specific event in wait_for(). I can do this, just wanted to get some feedback first. * Add unit tests for Drainer classes In order for this to work without monkeypatching in the tests, I needed to call sleep(0) to let the gevent/eventlet greenlets yield control back to the calling thread. I also made the check interval configurable in the drainer so that we didn't need to sleep multiples of 1 second in the tests. * Weaken asserts since they don't pass on CI * Fix eventlet auto-patching DNS resolver module on import By default it looks like "import eventlet" imports the greendns module unless the environment variable EVENTLET_NO_GREENDNS is set to true. This broke a pymongo test.
* Add tests ensuring that the greenlet loop isn't blocked These tests make sure that other gevent/eventlet concurrency can run while drain_events_until is running. * Clean up tests and make sure they wait for all the threads to stop * Fix chords with chained groups (#5947) * kombu 4.6.8 * update setup * updated version 4.4.1 * Fix: Accept and swallow `kwargs` to handle unexpected keyword arguments * Allow boto to look for credentials in S3Backend * add reference to Rusty Celery * Update document of revoke method in Control class * Fix copy-paste error in result_compression docs * Make 'socket_keepalive' optional variable (#6000) * update connection params - socket_keepalive is optional now * update readme - added versionadded 4.4.1 and fixed `redis_socket_keepalive` * added check of socket_keepalive in arguments for UnixSocketConnect * Fixed incorrect setting name in documentation (#6002) * updated version 4.4.2 * Fix backend utf-8 encoding in s3 backend Celery backend uses utf-8 to deserialize results, which would fail for some serializations like pickle. * Fix typo in celery.bin.multi document * Upgraded pycurl to the latest version that supports wheel. * pytest 5.3.5 max * Add uptime to the stats inspect command * Doc tweaks: mostly grammar and punctuation (#6016) * Fix a bunch of comma splices in the docs * Remove some unnecessary words from next-steps doc * Tweak awkward wording; fix bad em-dash * Fix a bunch more comma splices in next-steps doc * Miscellaneous grammar/punctuation/wording fixes * Link to task options in task decorator docs * Fixing issue #6019: unable to use mysql SSL parameters when getting mysql engine (#6020) * Fixing issue #6019: unable to use mysql SSL parameters in create_engine() * adding test for get_engine when self.forked is False and engine args are passed in for create_engine() * Clean TraceBack to reduce memory leaks for exception task (#6024) * Clean TraceBack to reduce memory leaks * add unit test * add unit test * reject unittest * Patch For Python 2.7 compatibility * update unittest * Register to the garbage collector by explicitly referring to f_locals. * needs more checks * update code coverage * update missing unit test * 3.4 -> 3.5 Co-authored-by: heedong.jung * exceptions: NotRegistered: fix up language Minor fix to the language. * Note about autodiscover_tasks and periodic tasks This is particularly important for Django projects that put periodic tasks into each app's `tasks.py` and want to use one as a periodic task. By the time `autodiscover_tasks()` loads those tasks, the `on_after_configure` Signal has already come and gone, so anything decorated with `@app.on_after_finalize.connect` will never be called. If there's other documentation on this subject, I could not find it. * Avoid PyModulelevel, deprecated in Sphinx 4 Use `PyFunction` instead of `PyModulelevel` to avoid this deprecation warning: RemovedInSphinx40Warning: PyModulelevel is deprecated. Please check the implementation of This replacement is one of the options listed in the Sphinx docs (https://www.sphinx-doc.org/en/master/extdev/deprecated.html). * Give up sending a worker-offline message if transport is not connected (#6039) * If worker-offline event fails to send, give up and die peacefully * Add test for retry= and msgs in heartbeat * Fix the build and all documentation warnings. I finally upgraded our theme to 2.0. As a result we've upgraded Sphinx to 2.0. Work to upgrade Sphinx to 3.0 will proceed in a different PR.
This upgrade also fixes our build issues caused by #6032. We don't support Sphinx 1.x as a result of that patch. I've also included the missing 4.3 changelog in our history. * Support both Sphinx 2 and 3. * Add Task to __all__ in celery.__init__.py * Add missing parenthesis to example in docs * Ensure a single chain object in a chain does not raise MaximumRecursionError. Previously chain([chain(sig)]) would crash. We now ensure it doesn't. Fixes #5973. * update setup.py * fix typo: missing quote at the end of line * Fix a typo in monitoring doc * update travis * update ubuntu to focal fossa 20.04 LTS * Fix autoscale when prefetch_multiplier is 1 * Allow start_worker to function without ping task * Update celeryd.conf Move to the directory of the program before executing the command/script * Add documentation for "predefined_queue_urls" * [Fix #6074]: Add missing documentation for MongoDB as result backend. * update funding * 🐛 Correctly handle configuring the serializer for always_eager mode. (#6079) * 🐛 Correctly handle configuring the serializer for always_eager mode. options['serializer'] will always exist, because it is initialized from an mattrgetter. Even if unset, it will be present in the options with a value of None. * 🐛 Add a test for new always_eager + task_serializer behavior. * ✏️ Whoops missed a : * Remove doubling of prefetch_count increase when prefetch_multiplier gt 1 (#6081) * try ubuntu focal (#6088) * Fix eager function not returning result after retries. Using the apply function does not return correct results after at least one retry, because the return value of the successive call does not go back to the original caller. * return retry result if not throw and is_eager If throw is false, we are interested in the result of the retry, not the current result (which will be an exception). This way it does not break the logic of `raise self.retry`. This should be used like `return self.retry(..., throw=False)` in an except statement. * revert formatting change * Add tests for eager retry without throw * update predefined-queues documentation The suggested version of the configuration does not work. Additionally, I'd like to mention that `access_key_id` and `secret_access_key` are mandatory fields that don't allow you to fall back to the default AWS_* environment variables. I can contribute a change to make these variables optional. Also, I'm not sure if the security token will apply; could you please advise how to do it?
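For the predefined-queues entry above, a hedged sketch of the broker transport options as documented for the SQS transport; the queue name, URL and credentials are placeholders, and per the entry the key id and secret are mandatory:

    # celeryconfig.py -- illustrative SQS predefined-queues setup
    broker_url = 'sqs://'
    broker_transport_options = {
        'predefined_queues': {
            'my-queue': {
                'url': 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',
                'access_key_id': 'AKIA...',        # mandatory, per the entry above
                'secret_access_key': 'secret...',  # mandatory, per the entry above
            },
        },
    }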
* Fix couchbase version < 3.0.0 as API changed * Remove reference to -O fair in optimizations -O fair was made the default in Celery 4.0 https://docs.celeryproject.org/en/stable/history/whatsnew-4.0.html#ofair-is-now-the-default-scheduling-strategy * pytest ranges * pypy3 * revert to bionic * do not load docs.txt requirements for python 2.7, as it requires Sphinx >= 2.0.0 and there is no such version compatible with Python 2.7 * update cassandra travis integration test configuration cassandra:latest docker image changed location of cqlsh program * pin cassandra-driver CI gets stuck after the cassandra integration tests * Fix all flake8 lint errors * Fix all pydocstyle lint errors * Fix all configcheck lint errors * Always requeue while worker lost regardless of the redelivered flag (#6103) * #5598 fix: always redeliver on WorkerLostError * fix: change the requeue flag so the task will remain PENDING * Allow relative paths in the filesystem backend (#6070) * Allow relative paths in the filesystem backend * fix order of if statements * [Fixed Issue #6017] --> Added Multi default logfiles and pidfiles paths [Description]: --> Changed the default paths for log files & pid files to be '/var/log/celery' and '/var/run/celery' --> Handled by creating the respective paths if they don't exist. --> Used os.makedirs(path, exist_ok=True) [Unit Test Added]: --> .travis.yml - config updated with 'before install'. --> t/unit/apps/test_multi.py - Changed the default log files & pid files paths wherever required. * Avoid race condition due to task duplication. In some circumstances, like a network partitioning, some tasks might be duplicated. Sometimes this leads to a race condition where a lost task overwrites the result of the last successful task in the backend. In order to avoid this race condition we prevent updating the result if it's already in a successful state. This fix has been done for KV backends only and therefore won't work with other backends. * adding tests * Exceptions must be old-style classes or derived from BaseException, but here self.result may not be a subclass of BaseException. * update fund link * Fix windows build (#6104) * do not load memcache nor couchbase lib during windows build Those libraries depend on the native libraries libcouchbase and libmemcached, which are not installed on Appveyor. As only unit tests run on Appveyor, it should be fine * Add python 3.8 workaround for app trap * skip file_descriptor_safety tests on windows AsyncPool is not supported on Windows, so Pool does not have the _fileno_to_outq attribute, making the test fail * Fix cross-platform log and pid files in multi mode it relates to #6017 * Use tox to build and test on windows * remove tox_install_command * drop python 2.7 from windows build * Add encode to meta task in base.py (#5894) * Add encode to base.py meta result Fix a bug where None could not be loaded from task meta * Add tests for None. Remove excess encode. * Update base.py Add return payload if None * Update time.py to solve the microsecond issues (#5199) When `relative` is set to True, the day, hour, minute and second will be rounded to the nearest one; however, the original program does not update (reset) the microseconds. As a result, the run-time offset in the microseconds accumulates. For example, given an interval of 15s and relative set to True: 1. 2018-11-27T15:01:30.123236+08:00 2. 2018-11-27T15:01:45.372687+08:00 3. 2018-11-27T15:02:00.712601+08:00 4. 2018-11-27T15:02:15.987720+08:00 5. 2018-11-27T15:02:31.023670+08:00 * Change backend _ensure_not_eager error to warning * Add priority support for 'celery.chord_unlock' task (#5766) * Change eager retry behaviour Even with raise self.retry, it should return the eventual value or MaxRetriesExceededError. If the return value of an eager apply is a Retry exception, eagerly retry the task signature * Order supported Python versions * Avoid race condition in elasticsearch backend If a task is retried, the retry may run concurrently with the current task. store_result may arrive out of order. It may cause a non-ready state (Retry) to override a ready state (Success, Failure). If this happens, any chord depending on this task will block indefinitely. This change makes document updates safe for concurrent writes. https://www.elastic.co/guide/en/elasticsearch/reference/current/optimistic-concurrency-control.html * backends base get_many pass READY_STATES arg * test backends base get_many pass READY_STATES arg * Add integration tests for Elasticsearch and fix _update * Revert "revert to bionic" This reverts commit 6e091573f2ab0d0989b8d7c26b677c80377c1721. * remove jython check * feat(backend): Adds cleanup to ArangoDB backend * Delete Document Known Issue with CONN_MAX_AGE in 4.3 * issue 6108 fix filesystem backend cannot be serialized by pickle (#6120) * issue 6108 fix filesystem backend cannot be serialized by pickle https://github.com/celery/celery/issues/6108 * issue-6108 fix unit test failure * issue-6108 fix flake8 warning Co-authored-by: Murphy Meng * kombu==4.6.9 (#6133) * changelog for 4.4.3 * v 4.4.3 * remove unsupported classifier * Fix autoretry_for with explicit retry (#6138) * Add tests for eager task retry * Fixes #6135 If autoretry_for is set too broadly on Exception, then autoretry may get a Retry; if that's the case, rethrow directly instead of wrapping it in another Retry, to avoid losing the new args * Use Django DB max age connection setting (fixes #4116) * Add retry on recoverable exception for the backend (#6122) * Add state to KeyValueStoreBackend.set method This way, a backend implementation is able to take decisions based on the current state when storing meta in case of failures. * Add retry on recoverable exception for the backend acks_late makes celery acknowledge messages only after processing and storing the result on the backend. However, if the backend is unreachable, it will shadow a Retry exception, mark the task as failed in the backend, not retry the task, and acknowledge it on the broker. With this new result_backend_always_retry setting, if the backend exception is recoverable (to be defined per backend implementation), it will retry the backend operation with exponential backoff. * Make elasticsearch backward compatible with 6.x * Make ES retry storing updates in a better way If the existing value in the backend is success, then do nothing. If it is a ready status, then update it only if the new value is a ready status as well. Else update it. This way, a SUCCESS cannot be overridden, so that we do not lose results, but any ready state other than success (FAILURE, REVOKED) can be overridden by another ready status (i.e. a SUCCESS) * Add test for value not found in ES backend * Fix random distribution of jitter for exponential backoff random.randrange should be called with the actual capped value so that all numbers have equal probability; otherwise the maximum value has a much higher probability of occurring.
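A minimal sketch of the uniform jitter described in the last entry, assuming the cap is applied before drawing; the function name and defaults are illustrative:

    import random

    def exponential_backoff_with_jitter(retries, base=1, maximum=300):
        # Cap the interval first, then draw uniformly over the *actual* capped
        # value; randrange(n + 1) returns 0..n inclusive, so every interval is
        # equally likely.  Jittering before capping would pile probability
        # mass onto `maximum`, which is the skew the entry above describes.
        countdown = min(maximum, base * (2 ** retries))
        return random.randrange(countdown + 1)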
* fix unit test if extra modules are not present * ElasticSearch: add setting to save meta as json * fix #6136. celery 4.4.3 always trying to create /var/run/celery directory (#6142) * fix #6136. celery 4.4.3 always trying to create /var/run/celery directory, even if it's not needed. * fix #6136. cleanup * Add task_internal_error signal (#6049) * Add internal_error signal There is no special signal for an out-of-body error, which can be the result of a bad result backend. * Fix syntax error. * Document the task_internal_error signal. Co-authored-by: Laurentiu Dragan * changelog for v4.4.4 * kombu 4.6.10 (#6144) * v4.4.4 * Add missing dependency on future (#6146) Fixes #6145 * ElasticSearch: Retry index if document was deleted between index and update (#6140) * ElasticSearch: Retry index if document was deleted between index and update * Elasticsearch increase coverage to 100% * Fix pydocstyle * Specify minimum version of Sphinx for Celery extension (#6150) The Sphinx extension requires Sphinx 2 or later due to #6032. * fix windows build * fix flake8 error * fix multi tests in local Mock os.mkdir and os.makedirs to avoid creating /var/run/celery and /var/log/celery during unit tests if run without root privileges * Customize the retry interval of chord_unlock tasks * changelog v4.4.5 * v4.4.5 * Fix typo in comment. * Remove autoscale force_scale methods (#6085) * Remove autoscale force_scale methods * Remove unused variable in test * Pass ping destination to request The destination argument worked fine from CLI but didn't get used when calling ping from Python. * Fix autoscale test * chord: merge init options with run options * put back KeyValueStoreBackend.set method without state It turns out it was breaking some other projects. Wrapping the set method with _set_with_state means it will not break existing backends while enabling this feature for others. Currently, only ElasticsearchBackend supports this feature. It prevents concurrent updates from corrupting state in the backend. An existing success cannot be overridden, nor a ready state by a non-ready state; i.e. a Retry state cannot override a Success or Failure. As a result, the chord_unlock task will not loop forever due to a missing ready state on the backend. * added --range-prefix option to `celery multi` (#6180) * added --range-prefix option to `celery multi` Added an option for overriding the default range prefix when running multiple workers, providing a range with the `celery multi` command. * covered multi --range-prefix with tests * fixed --range-prefix test * Added as_list function to AsyncResult class (#6179) * Add as_list method to return task IDs as a list * Add a test for as_list method * Add docstring for as_list method * Fix CassandraBackend error in threads or gevent pool (#6147) * Fix CassandraBackend error in threads or gevent pool * remove CassandraBackend.process_cleanup * Add test case * Add test case * Add comments test_as_uri Co-authored-by: baixue * changelog for v4.4.6 * v4.4.6 * Update Wiki link in "resources" In the page linked below, the link to the wiki is outdated. Fixed that. https://docs.celeryproject.org/en/stable/getting-started/resources.html * test_canvas: Add test for chord-in-chain Add test case for the issue where a chord in a chain does not work when using .apply(). This works fine with .apply_async(). * Trying to fix flaky tests in ci * fix pydocstyle errors * fix pydocstyle * Drainer tests, put a lower constraint on number of intervals liveness should iterate 10 times per interval while drain_events only once.
However, as it may use threads that may be scheduled out of order, we may end up in a situation where liveness and drain_events were called the same number of times. Lowering the constraint from < to <= avoids failing the tests. * pyupgrade. * Fix merge error. Co-authored-by: Борис Верховский Co-authored-by: Asif Saif Uddin Co-authored-by: Jainal Gosaliya Co-authored-by: gsfish Co-authored-by: Dipankar Achinta Co-authored-by: spengjie Co-authored-by: Chris Griffin Co-authored-by: Muhammad Hewedy Co-authored-by: Blaine Bublitz Co-authored-by: Tamu Co-authored-by: Erik Tews Co-authored-by: abhinav nilaratna Co-authored-by: Wyatt Paul Co-authored-by: gal cohen Co-authored-by: whuji Co-authored-by: Param Kapur Co-authored-by: Sven Ulland Co-authored-by: Safwan Rahman Co-authored-by: Aissaoui Anouar Co-authored-by: Neal Wang Co-authored-by: Alireza Amouzadeh Co-authored-by: Marcos Moyano Co-authored-by: Stepan Henek Co-authored-by: Andrew Sklyarov Co-authored-by: Michael Fladischer Co-authored-by: Dejan Lekic Co-authored-by: Yannick Schuchmann Co-authored-by: Matt Davis Co-authored-by: Xtreak Co-authored-by: Bernd Wechner Co-authored-by: Sören Oldag Co-authored-by: uddmorningsun Co-authored-by: Amar Fadil <34912365+marfgold1@users.noreply.github.com> Co-authored-by: woodenrobot Co-authored-by: Sardorbek Imomaliev Co-authored-by: Alex Riina Co-authored-by: Joon Hwan 김준환 Co-authored-by: Prabakaran Kumaresshan Co-authored-by: Martey Dodoo Co-authored-by: Konstantin Seleznev <4374093+Seleznev-nvkz@users.noreply.github.com> Co-authored-by: Prodge Co-authored-by: Abdelhadi Dyouri Co-authored-by: Ixiodor Co-authored-by: abhishekakamai <47558404+abhishekakamai@users.noreply.github.com> Co-authored-by: Allan Lei Co-authored-by: M1ha Shvn Co-authored-by: Salih Caglar Ispirli Co-authored-by: Micha Moskovic Co-authored-by: Chris Burr Co-authored-by: Dave King Co-authored-by: Dmitry Nikulin Co-authored-by: Michael Gaddis Co-authored-by: epwalsh Co-authored-by: TalRoni Co-authored-by: Leo Singer Co-authored-by: Stephen Tomkinson Co-authored-by: Abhishek Co-authored-by: theirix Co-authored-by: yukihira1992 Co-authored-by: jpays Co-authored-by: Greg Ward Co-authored-by: Alexa Griffith Co-authored-by: heedong <63043496+heedong-jung@users.noreply.github.com> Co-authored-by: heedong.jung Co-authored-by: Shreyansh Khajanchi Co-authored-by: Sam Thompson Co-authored-by: Alphadelta14 Co-authored-by: Azimjon Pulatov Co-authored-by: ysde Co-authored-by: AmirMohammad Ziaei Co-authored-by: Ben Nadler Co-authored-by: Harald Nezbeda Co-authored-by: Chris Frisina Co-authored-by: Adam Eijdenberg Co-authored-by: rafaelreuber Co-authored-by: Noah Kantrowitz Co-authored-by: Ben Nadler Co-authored-by: Clement Michaud Co-authored-by: Mathieu Chataigner Co-authored-by: eugeneyalansky <65346459+eugeneyalansky@users.noreply.github.com> Co-authored-by: Leonard Lu Co-authored-by: XinYang Co-authored-by: Ingolf Becker Co-authored-by: Anuj Chauhan Co-authored-by: shaoziwei Co-authored-by: Mathieu Chataigner Co-authored-by: Anakael Co-authored-by: Danny Chan Co-authored-by: Sebastiaan ten Pas Co-authored-by: David TILLOY Co-authored-by: Anthony N.
Simon Co-authored-by: lironhl Co-authored-by: Raphael Cohen Co-authored-by: JaeyoungHeo Co-authored-by: singlaive Co-authored-by: Murphy Meng Co-authored-by: Wu Haotian Co-authored-by: Kwist Co-authored-by: Laurentiu Dragan Co-authored-by: Michal Čihař Co-authored-by: Radim Sückr Co-authored-by: Artem Vasilyev Co-authored-by: kakakikikeke-fork Co-authored-by: Pysaoke Co-authored-by: baixue Co-authored-by: Prashant Sinha Co-authored-by: AbdealiJK * Remove Python 2 compatibility code from Celery (#6221) * Remove five from celery/__init__.py * Remove five from celery/beat.py * Remove five from celery/bootsteps.py * Remove five from celery/exceptions.py * Remove five from celery/local.py * Remove five from celery/platforms.py * Remove five from celery/result.py * Remove five from celery/schedules.py * Remove five from celery/app/amqp.py * Remove five from celery/app/annotations.py * Remove five from celery/app/backends.py * Remove five from celery/app/base.py * Remove five from celery/app/control.py * Remove five from celery/app/defaults.py * Remove five from celery/app/log.py * Remove five from celery/app/registry.py * Remove five from celery/app/routes.py * Remove five from celery/app/task.py * Remove five from celery/app/trace.py * Remove five from celery/app/utils.py * Remove five from celery/apps/beat.py * Remove five from celery/apps/multi.py * Remove five from celery/apps/worker.py * Remove five from celery/backends/database/__init__.py * Remove five from celery/backends/amqp.py * Remove five from celery/backends/asynchronous.py * Remove five from celery/backends/base.py * Remove five from celery/backends/dynamodb.py * Remove five from celery/backends/elasticsearch.py * Remove five from celery/backends/mongodb.py * Remove five from celery/backends/redis.py * Remove five from celery/backends/rpc.py * Remove five from celery/concurrency/asynpool.py * Remove five from celery/concurrency/base.py * Remove five from celery/concurrency/prefork.py * Remove five from celery/contrib/testing/manager.py * Remove five from celery/contrib/migrate.py * Remove five from celery/contrib/rdb.py * Remove five from celery/events/cursesmon.py * Remove five from celery/events/dispatcher.py * Remove five from celery/events/state.py * Remove five from celery/loaders/base.py * Remove five from celery/security/certificate.py * Remove five from celery/security/utils.py * Remove five from celery/task/base.py * Remove five from celery/utils/dispatch/signal.py * Remove five from celery/utils/abstract.py * Remove five from celery/utils/collections.py * Remove five from celery/utils/debug.py * Remove five from celery/utils/functional.py * Remove five from celery/utils/graph.py * Remove five from celery/utils/imports.py * Remove five from celery/utils/log.py * Remove five from celery/utils/saferepr.py * Remove five from celery/utils/serialization.py * Remove five from celery/utils/term.py * Remove five from celery/utils/text.py * Remove five from celery/utils/threads.py * Remove five from celery/utils/time.py * Remove five from celery/utils/timer2.py * Remove five from celery/consumer/consumer.py * Remove five from celery/consumer/gossip.py * Remove five from celery/consumer/mingle.py * Remove five from celery/worker/autoscale.py * Remove five from celery/worker/components.py * Remove five from celery/worker/control.py * Remove five from celery/worker/request.py * Remove five from celery/worker/state.py * Remove five from celery/worker/worker.py * Remove five from celery/t/benchmarks/bench_worker.py * Remove five from 
celery/t/integration/test_canvas.py * Remove five from celery/t/unit/app * Remove five from celery/t/unit/backends * Remove five from celery/t/unit/compat_modules * Remove five from celery/t/unit/concurrency * Remove five from celery/t/unit/contrib * Remove five from celery/t/unit/events * Remove five from celery/t/unit/security * Remove five from celery/t/unit/tasks * Remove five from celery/t/unit/utils * Remove five from celery/t/unit/worker * Sort imports. * Comment out PyPy for now. * Remove flakeplus. * Happify linter. * Fix merge problems. * Delete backport. * Remove unused import. * Remove logic that notifies user that the Python version isn't supported from setup.py. pip already does that for us. * Add a trove classifier to indicate Celery only supports Python 3. * Restore usage of `reraise` for consistency with the kombu port. * Drop Python 2 compatibility code from our Sphinx extension. * Remove mention of flakeplus from tox.ini. * Remove mention of flakeplus from our CONTRIBUTING guide. * Bump Sphinx requirement. * Remove Python 2 compatibility code from our custom Sphinx extension. * Resolve Sphinx warning due to removed section in 32ff7b45aa3d78aedca61b6554a9db39122924fd. * Remove pydocstyle from build matrix as it was removed from master. See #6278. * Bump version: 4.4.7 → 5.0.0-alpha1 * Final touches. * Fix README. * Bump Kombu to 5.0.0. * Bump version: 5.0.0-alpha1 → 5.0.0a2 * Fix wrong version. * Remove autodoc for removed module. * Remove documentation for removed methods. * Remove the riak backend since riak is no longer maintained. * Remove riak backend since riak is no longer maintained. * Start fresh. * Added all arguments for the celery worker command. Still needs more documentation and improvements... * Load the application and execute a worker. * Added the rest of the global options. If an app is not specified we now use the default app. In addition, we now exit with the correct status code. * Extract validation into parameter types. * Restructure and document. * Allow passing worker configuration options from the command line. * Implement the beat command. * Allow configuring celery options through the CLI. * Implement the `celery call` command. * Implement the `celery list bindings` command. * Implement the `celery purge` command. * Implement the `celery result` command. * Implement the `celery migrate` task. * Implemented the `celery status` command (sample output: celery@thedrow: OK 1 node online.). * Take --no-color into consideration when outputting to stdout. * Ensure `celery worker` takes `--no-color` into consideration. * Use the preformatted OK string. * Adopt the NO_COLOR standard. See https://no-color.org/ for details. * Split commands into separate files. * Added 'did you mean' messages. * Implement the `celery events` command. * Text style should take --no-color into consideration as well. * Implement the basic `celery inspect` command. * Improve UI. * Organize the code. * Implement the `celery graph bootsteps` command. * Implement the `celery graph workers` command. * Implement the `celery upgrade settings` command. * Implement the `celery report` command. * Delete former unit tests. * Implement the `celery logtool` command. * Pass the quiet argument to the CLI context. * Limit inspect to existing actions. * Implement the `celery control` command. * Basic scaffold for the `celery amqp` shell command. * Start implementing the shell commands. * Implement basic.publish and basic.get. * Echo OK after acknowledgement. * Reformat Code. * Implement the exchange.declare command.
* Split commands into separate files.
* Added 'did you mean' messages.
* Implement the `celery events` command.
* Text style should take --no-color into consideration as well.
* Implement the basic `celery inspect` command.
* Improve UI.
* Organize the code.
* Implement the `celery graph bootsteps` command.
* Implement the `celery graph workers` command.
* Implement the `celery upgrade settings` command.
* Implement the `celery report` command.
* Delete former unit tests.
* Implement the `celery logtool` command.
* Pass the quiet argument to the CLI context.
* Limit inspect to existing actions.
* Implement the `celery control` command.
* Basic scaffold for the `celery amqp` shell command.
* Start implementing the shell commands.
* Implement basic.publish and basic.get.
* Echo OK after acknowledgement.
* Reformat Code.
* Implement the exchange.declare command.
* Implement the exchange.delete command.
* Implement the queue.bind command.
* Implement the queue.declare command.
* Implement the queue.delete command.
* Echo queue.declare result to screen.
* Echo queue.delete result to screen.
* Implement the queue.purge command.
* Fix color support for error().
* Report errors and continue.
* Handle connection errors and reconnect on error.
* Refactor.
* Implement the `celery shell` command.
* Isort.
* Add documentation.
* Correct argument types.
* Implement detach for `celery worker`.
* Documentation.
* Implement detach for `celery beat`.
* Documentation.
* Implement the `celery multi` command.
* Documentation.
* Implement user options.
* Collect command actions from the correct registry.
* Isort.
* Fix access to app.
* Match arguments for control.
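A minimal, illustrative sketch of the Click-based command style these commits introduce, with validation moved into a custom parameter type. This is not Celery's actual code; the command, option names, and parameter type are simplified assumptions:

    import click

    class CommaSeparatedList(click.ParamType):
        """Hypothetical parameter type: parses 'a,b,c' into a list,
        standing in for the validation extracted into parameter types."""
        name = 'comma separated list'

        def convert(self, value, param, ctx):
            if isinstance(value, list):  # already converted
                return value
            return [part.strip() for part in value.split(',') if part.strip()]

    @click.command()
    @click.option('-Q', '--queues', type=CommaSeparatedList(), default='celery',
                  help='Comma separated list of queues to consume from.')
    @click.option('--no-color', is_flag=True, default=False,
                  help='Disable colored output.')
    def worker(queues, no_color):
        """Toy stand-in for the `celery worker` command."""
        msg = 'consuming from: {}'.format(', '.join(queues))
        # Style output only when color is allowed (cf. the --no-color commits).
        click.echo(msg if no_color else click.style(msg, fg='green'))

    if __name__ == '__main__':
        worker()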
* added --range-prefix option to `celery multi` (#6180)
* added --range-prefix option to `celery multi`. Added an option for overriding the default range prefix when providing a range of workers with the `celery multi` command (see the usage sketch after this list).
* covered multi --range-prefix with tests
* fixed --range-prefix test
* multi: fixed handling unknown options, fixed doc example
* removed debug print
* Fix click.style usage.
* Remove app.start() and app.worker_main() since they are never used.
* autopep8.
* Record new bandit profile.
* Fix pep8 and docstyle errors.
* Happify flake8.
* Happify linters.
* Remove typo.
* Added the documentation for the CLI.
* There's no return value so there's no point returning it.
* Remove redundant assignment.
* Use pformat and echo with click.
* Finishing touches for the CLI.
* More finishing touches.
* Happify linters.
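To illustrate the --range-prefix option above, a hypothetical session (host name, app module, and exact banner output are assumptions; consult `celery multi --help` for the authoritative interface):

    $ celery multi start 3 -A proj --range-prefix=client
    > Starting nodes...
        > client1@example.com: OK
        > client2@example.com: OK
        > client3@example.com: OK

Without --range-prefix the three nodes would get the default prefix, i.e. celery1@example.com through celery3@example.com.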
Co-authored-by: tothegump
Co-authored-by: Asif Saif Uddin
Co-authored-by: Michal Čihař
Co-authored-by: ptitpoulpe
Co-authored-by: Didi Bar-Zev
Co-authored-by: Santos Solorzano
Co-authored-by: manlix
Co-authored-by: Jimmy <54828848+sckhg1367@users.noreply.github.com>
Co-authored-by: Борис Верховский
Co-authored-by: Jainal Gosaliya
Co-authored-by: gsfish
Co-authored-by: Dipankar Achinta
Co-authored-by: Pengjie Song (宋鹏捷)
Co-authored-by: Chris Griffin
Co-authored-by: Muhammad Hewedy
Co-authored-by: Blaine Bublitz
Co-authored-by: Tamu
Co-authored-by: Erik Tews
Co-authored-by: abhinav nilaratna
Co-authored-by: Wyatt Paul
Co-authored-by: gal cohen
Co-authored-by: as
Co-authored-by: Param Kapur
Co-authored-by: Sven Ulland
Co-authored-by: Safwan Rahman
Co-authored-by: Aissaoui Anouar
Co-authored-by: Neal Wang
Co-authored-by: Alireza Amouzadeh
Co-authored-by: Marcos Moyano
Co-authored-by: Stepan Henek
Co-authored-by: Andrew Sklyarov
Co-authored-by: Michael Fladischer
Co-authored-by: Dejan Lekic
Co-authored-by: Yannick Schuchmann
Co-authored-by: Matt Davis
Co-authored-by: Karthikeyan Singaravelan
Co-authored-by: Bernd Wechner
Co-authored-by: Sören Oldag
Co-authored-by: uddmorningsun
Co-authored-by: Amar Fadil <34912365+marfgold1@users.noreply.github.com>
Co-authored-by: woodenrobot
Co-authored-by: Sardorbek Imomaliev
Co-authored-by: gsfish
Co-authored-by: Alex Riina
Co-authored-by: Joon Hwan 김준환
Co-authored-by: Prabakaran Kumaresshan
Co-authored-by: Martey Dodoo
Co-authored-by: Konstantin Seleznev <4374093+Seleznev-nvkz@users.noreply.github.com>
Co-authored-by: Prodge
Co-authored-by: Abdelhadi Dyouri
Co-authored-by: Ixiodor
Co-authored-by: abhishekakamai <47558404+abhishekakamai@users.noreply.github.com>
Co-authored-by: Allan Lei
Co-authored-by: M1ha Shvn
Co-authored-by: Salih Caglar Ispirli
Co-authored-by: Micha Moskovic
Co-authored-by: Chris Burr
Co-authored-by: Dave King
Co-authored-by: Dmitry Nikulin
Co-authored-by: Michael Gaddis
Co-authored-by: epwalsh
Co-authored-by: TalRoni
Co-authored-by: Leo Singer
Co-authored-by: Stephen Tomkinson
Co-authored-by: Abhishek
Co-authored-by: theirix
Co-authored-by: yukihira1992
Co-authored-by: jpays
Co-authored-by: Greg Ward
Co-authored-by: Alexa Griffith
Co-authored-by: heedong <63043496+heedong-jung@users.noreply.github.com>
Co-authored-by: heedong.jung
Co-authored-by: Shreyansh Khajanchi
Co-authored-by: Sam Thompson
Co-authored-by: Alphadelta14
Co-authored-by: Azimjon Pulatov
Co-authored-by: ysde
Co-authored-by: AmirMohammad Ziaei
Co-authored-by: Ben Nadler
Co-authored-by: Harald Nezbeda
Co-authored-by: Chris Frisina
Co-authored-by: Adam Eijdenberg
Co-authored-by: rafaelreuber
Co-authored-by: Noah Kantrowitz
Co-authored-by: Ben Nadler
Co-authored-by: Clement Michaud
Co-authored-by: Mathieu Chataigner
Co-authored-by: eugeneyalansky <65346459+eugeneyalansky@users.noreply.github.com>
Co-authored-by: Leonard Lu
Co-authored-by: XinYang
Co-authored-by: Ingolf Becker
Co-authored-by: Anuj Chauhan
Co-authored-by: shaoziwei
Co-authored-by: Mathieu Chataigner
Co-authored-by: Anakael
Co-authored-by: Danny Chan
Co-authored-by: Sebastiaan ten Pas
Co-authored-by: David TILLOY
Co-authored-by: Anthony N. Simon
Co-authored-by: lironhl
Co-authored-by: Raphael Cohen
Co-authored-by: JaeyoungHeo
Co-authored-by: singlaive
Co-authored-by: Murphy Meng
Co-authored-by: Wu Haotian
Co-authored-by: Kwist
Co-authored-by: Laurentiu Dragan
Co-authored-by: Radim Sückr
Co-authored-by: Artem Vasilyev
Co-authored-by: kakakikikeke-fork
Co-authored-by: Pysaoke
Co-authored-by: baixue
Co-authored-by: Prashant Sinha
Co-authored-by: AbdealiJK
---
 bandit.json | 687 ++++++++++++-------------
 celery/__main__.py | 8 +-
 celery/app/base.py | 21 +-
 celery/bin/__init__.py | 3 -
 celery/bin/amqp.py | 614 ++++++++++-------------
 celery/bin/base.py | 803 +++++++-----------------
 celery/bin/beat.py | 189 +++----
 celery/bin/call.py | 143 +++---
 celery/bin/celery.py | 657 +++++-------------------
 celery/bin/celeryd_detach.py | 136 -----
 celery/bin/control.py | 401 +++++++--------
 celery/bin/events.py | 242 +++------
 celery/bin/graph.py | 376 +++++++-------
 celery/bin/list.py | 44 +-
 celery/bin/logtool.py | 90 ++--
 celery/bin/migrate.py | 113 ++---
 celery/bin/multi.py | 38 +-
 celery/bin/purge.py | 108 ++--
 celery/bin/result.py | 67 ++-
 celery/bin/shell.py | 295 +++++------
 celery/bin/upgrade.py | 149 +++---
 celery/bin/worker.py | 640 +++++++++++-------------
 docs/conf.py | 1 +
 docs/reference/cli.rst | 7 +
 docs/reference/index.rst | 1 +
 requirements/default.txt | 3 +
 requirements/docs.txt | 1 +
 t/unit/app/test_app.py | 33 +-
 t/unit/bin/test_amqp.py | 142 ------
 t/unit/bin/test_base.py | 374 --------------
 t/unit/bin/test_beat.py | 144 ------
 t/unit/bin/test_call.py | 41 --
 t/unit/bin/test_celery.py | 295 -----------
 t/unit/bin/test_celeryd_detach.py | 126 -----
 t/unit/bin/test_celeryevdump.py | 63 ---
 t/unit/bin/test_control.py | 125 -----
 t/unit/bin/test_events.py | 89 ----
 t/unit/bin/test_list.py | 26 -
 t/unit/bin/test_migrate.py | 25 -
 t/unit/bin/test_multi.py | 407 ---------------
 t/unit/bin/test_purge.py | 26 -
 t/unit/bin/test_report.py | 27 -
 t/unit/bin/test_result.py | 30 --
 t/unit/bin/test_upgrade.py | 20 -
 t/unit/bin/test_worker.py | 695 --------------------------
 45 files changed, 2278 insertions(+), 6247 deletions(-)
 delete mode 100644 celery/bin/celeryd_detach.py
 create mode 100644 docs/reference/cli.rst
 delete mode 100644 t/unit/bin/test_amqp.py
 delete mode 100644 t/unit/bin/test_base.py
 delete mode 100644 t/unit/bin/test_beat.py
 delete mode 100644 t/unit/bin/test_call.py
 delete mode 100644 t/unit/bin/test_celery.py
 delete mode 100644 t/unit/bin/test_celeryd_detach.py
 delete mode 100644 t/unit/bin/test_celeryevdump.py
 delete mode 100644 t/unit/bin/test_control.py
 delete mode 100644 t/unit/bin/test_events.py
 delete mode 100644 t/unit/bin/test_list.py
 delete mode 100644 t/unit/bin/test_migrate.py
 delete mode 100644 t/unit/bin/test_purge.py
 delete mode 100644 t/unit/bin/test_report.py
 delete mode 100644 t/unit/bin/test_result.py
 delete mode 100644 t/unit/bin/test_upgrade.py
 delete mode 100644 t/unit/bin/test_worker.py
diff --git a/bandit.json b/bandit.json
index be58e134a5c..95a9201f312 100644
--- a/bandit.json
+++ b/bandit.json
@@ -1,17 +1,17 @@
 {
"errors": [], - "generated_at": "2018-08-19T14:29:46Z", + "generated_at": "2020-08-06T14:09:58Z", "metrics": { "_totals": { - "CONFIDENCE.HIGH": 41.0, + "CONFIDENCE.HIGH": 38.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 2.0, "CONFIDENCE.UNDEFINED": 0.0, - "SEVERITY.HIGH": 1.0, - "SEVERITY.LOW": 40.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 38.0, "SEVERITY.MEDIUM": 2.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 28612, + "loc": 29309, "nosec": 0 }, "celery/__init__.py": { @@ -23,7 +23,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 132, + "loc": 129, "nosec": 0 }, "celery/__main__.py": { @@ -35,7 +35,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 13, + "loc": 9, "nosec": 0 }, "celery/_state.py": { @@ -47,7 +47,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 120, + "loc": 119, "nosec": 0 }, "celery/app/__init__.py": { @@ -59,7 +59,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 59, + "loc": 56, "nosec": 0 }, "celery/app/amqp.py": { @@ -71,7 +71,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 521, + "loc": 528, "nosec": 0 }, "celery/app/annotations.py": { @@ -83,7 +83,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 41, + "loc": 39, + "nosec": 0 + }, + "celery/app/autoretry.py": { + "CONFIDENCE.HIGH": 0.0, + "CONFIDENCE.LOW": 0.0, + "CONFIDENCE.MEDIUM": 0.0, + "CONFIDENCE.UNDEFINED": 0.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 0.0, + "SEVERITY.MEDIUM": 0.0, + "SEVERITY.UNDEFINED": 0.0, + "loc": 43, "nosec": 0 }, "celery/app/backends.py": { @@ -95,7 +107,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 60, + "loc": 62, "nosec": 0 }, "celery/app/base.py": { @@ -107,7 +119,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 983, + "loc": 964, "nosec": 0 }, "celery/app/builtins.py": { @@ -119,7 +131,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 150, + "loc": 153, "nosec": 0 }, "celery/app/control.py": { @@ -131,7 +143,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 361, + "loc": 383, "nosec": 0 }, "celery/app/defaults.py": { @@ -143,7 +155,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 324, + "loc": 365, "nosec": 0 }, "celery/app/events.py": { @@ -155,7 +167,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 30, + "loc": 29, "nosec": 0 }, "celery/app/log.py": { @@ -167,7 +179,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 199, + "loc": 197, "nosec": 0 }, "celery/app/registry.py": { @@ -179,7 +191,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 48, + "loc": 49, "nosec": 0 }, "celery/app/routes.py": { @@ -203,7 +215,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 718, + "loc": 740, "nosec": 0 }, "celery/app/trace.py": { @@ -215,7 +227,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 482, + "loc": 535, "nosec": 0 }, "celery/app/utils.py": { @@ -227,7 +239,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 295, + "loc": 300, "nosec": 0 }, "celery/apps/__init__.py": { @@ -251,7 +263,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 130, + 
"loc": 128, "nosec": 0 }, "celery/apps/multi.py": { @@ -263,7 +275,7 @@ "SEVERITY.LOW": 2.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 406, + "loc": 409, "nosec": 0 }, "celery/apps/worker.py": { @@ -275,7 +287,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 288, + "loc": 291, "nosec": 0 }, "celery/backends/__init__.py": { @@ -287,7 +299,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 18, + "loc": 17, "nosec": 0 }, "celery/backends/amqp.py": { @@ -299,7 +311,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 257, + "loc": 265, + "nosec": 0 + }, + "celery/backends/arangodb.py": { + "CONFIDENCE.HIGH": 0.0, + "CONFIDENCE.LOW": 0.0, + "CONFIDENCE.MEDIUM": 0.0, + "CONFIDENCE.UNDEFINED": 0.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 0.0, + "SEVERITY.MEDIUM": 0.0, + "SEVERITY.UNDEFINED": 0.0, + "loc": 199, "nosec": 0 }, "celery/backends/asynchronous.py": { @@ -311,7 +335,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 231, + "loc": 243, + "nosec": 0 + }, + "celery/backends/azureblockblob.py": { + "CONFIDENCE.HIGH": 0.0, + "CONFIDENCE.LOW": 0.0, + "CONFIDENCE.MEDIUM": 0.0, + "CONFIDENCE.UNDEFINED": 0.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 0.0, + "SEVERITY.MEDIUM": 0.0, + "SEVERITY.UNDEFINED": 0.0, + "loc": 107, "nosec": 0 }, "celery/backends/base.py": { @@ -323,7 +359,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 631, + "loc": 773, "nosec": 0 }, "celery/backends/cache.py": { @@ -335,7 +371,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 119, + "loc": 117, "nosec": 0 }, "celery/backends/cassandra.py": { @@ -347,7 +383,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 176, + "loc": 178, "nosec": 0 }, "celery/backends/consul.py": { @@ -359,7 +395,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 75, + "loc": 74, + "nosec": 0 + }, + "celery/backends/cosmosdbsql.py": { + "CONFIDENCE.HIGH": 0.0, + "CONFIDENCE.LOW": 0.0, + "CONFIDENCE.MEDIUM": 0.0, + "CONFIDENCE.UNDEFINED": 0.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 0.0, + "SEVERITY.MEDIUM": 0.0, + "SEVERITY.UNDEFINED": 0.0, + "loc": 169, "nosec": 0 }, "celery/backends/couchbase.py": { @@ -371,7 +419,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 87, + "loc": 85, "nosec": 0 }, "celery/backends/couchdb.py": { @@ -383,7 +431,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 80, + "loc": 76, "nosec": 0 }, "celery/backends/database/__init__.py": { @@ -395,7 +443,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 153, + "loc": 176, "nosec": 0 }, "celery/backends/database/models.py": { @@ -407,7 +455,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 56, + "loc": 83, "nosec": 0 }, "celery/backends/database/session.py": { @@ -431,7 +479,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 227, + "loc": 380, "nosec": 0 }, "celery/backends/elasticsearch.py": { @@ -443,7 +491,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 113, + "loc": 192, "nosec": 0 }, "celery/backends/filesystem.py": { @@ -455,7 +503,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 67, + "loc": 76, "nosec": 0 }, 
"celery/backends/mongodb.py": { @@ -467,7 +515,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 233, + "loc": 241, "nosec": 0 }, "celery/backends/redis.py": { @@ -479,7 +527,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 379, + "loc": 448, "nosec": 0 }, "celery/backends/riak.py": { @@ -491,7 +539,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 99, + "loc": 105, "nosec": 0 }, "celery/backends/rpc.py": { @@ -503,10 +551,10 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 252, + "loc": 251, "nosec": 0 }, - "celery/beat.py": { + "celery/backends/s3.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, @@ -515,10 +563,10 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 522, + "loc": 65, "nosec": 0 }, - "celery/bin/__init__.py": { + "celery/beat.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, @@ -527,10 +575,10 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 3, + "loc": 553, "nosec": 0 }, - "celery/bin/amqp.py": { + "celery/bin/__init__.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, @@ -539,22 +587,22 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 290, + "loc": 0, "nosec": 0 }, - "celery/bin/base.py": { - "CONFIDENCE.HIGH": 2.0, + "celery/bin/amqp.py": { + "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, "CONFIDENCE.UNDEFINED": 0.0, - "SEVERITY.HIGH": 1.0, - "SEVERITY.LOW": 1.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 501, + "loc": 268, "nosec": 0 }, - "celery/bin/beat.py": { + "celery/bin/base.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, @@ -563,10 +611,10 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 86, + "loc": 180, "nosec": 0 }, - "celery/bin/call.py": { + "celery/bin/beat.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, @@ -575,10 +623,10 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 64, + "loc": 58, "nosec": 0 }, - "celery/bin/celery.py": { + "celery/bin/call.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, @@ -587,19 +635,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 370, + "loc": 66, "nosec": 0 }, - "celery/bin/celeryd_detach.py": { + "celery/bin/celery.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, - "CONFIDENCE.MEDIUM": 1.0, + "CONFIDENCE.MEDIUM": 0.0, "CONFIDENCE.UNDEFINED": 0.0, "SEVERITY.HIGH": 0.0, - "SEVERITY.LOW": 1.0, + "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 113, + "loc": 127, "nosec": 0 }, "celery/bin/control.py": { @@ -611,7 +659,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 195, + "loc": 164, "nosec": 0 }, "celery/bin/events.py": { @@ -623,7 +671,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 120, + "loc": 76, "nosec": 0 }, "celery/bin/graph.py": { @@ -635,7 +683,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 167, + "loc": 157, "nosec": 0 }, "celery/bin/list.py": { @@ -647,7 +695,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 36, + 
"loc": 25, "nosec": 0 }, "celery/bin/logtool.py": { @@ -659,7 +707,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 133, + "loc": 122, "nosec": 0 }, "celery/bin/migrate.py": { @@ -683,7 +731,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 356, + "loc": 372, "nosec": 0 }, "celery/bin/purge.py": { @@ -695,7 +743,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 59, + "loc": 55, "nosec": 0 }, "celery/bin/result.py": { @@ -707,7 +755,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 33, + "loc": 22, "nosec": 0 }, "celery/bin/shell.py": { @@ -719,7 +767,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 137, + "loc": 143, "nosec": 0 }, "celery/bin/upgrade.py": { @@ -731,19 +779,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 78, + "loc": 69, "nosec": 0 }, "celery/bin/worker.py": { "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, - "CONFIDENCE.MEDIUM": 0.0, + "CONFIDENCE.MEDIUM": 1.0, "CONFIDENCE.UNDEFINED": 0.0, "SEVERITY.HIGH": 0.0, - "SEVERITY.LOW": 0.0, + "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 256, + "loc": 300, "nosec": 0 }, "celery/bootsteps.py": { @@ -755,7 +803,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 311, + "loc": 308, "nosec": 0 }, "celery/canvas.py": { @@ -767,7 +815,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 1052, + "loc": 1113, "nosec": 0 }, "celery/concurrency/__init__.py": { @@ -779,7 +827,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 14, + "loc": 19, "nosec": 0 }, "celery/concurrency/asynpool.py": { @@ -791,7 +839,7 @@ "SEVERITY.LOW": 17.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 984, + "loc": 1019, "nosec": 0 }, "celery/concurrency/base.py": { @@ -803,7 +851,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 126, + "loc": 128, "nosec": 0 }, "celery/concurrency/eventlet.py": { @@ -839,7 +887,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 128, + "loc": 131, "nosec": 0 }, "celery/concurrency/solo.py": { @@ -851,7 +899,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 20, + "loc": 21, + "nosec": 0 + }, + "celery/concurrency/thread.py": { + "CONFIDENCE.HIGH": 0.0, + "CONFIDENCE.LOW": 0.0, + "CONFIDENCE.MEDIUM": 0.0, + "CONFIDENCE.UNDEFINED": 0.0, + "SEVERITY.HIGH": 0.0, + "SEVERITY.LOW": 0.0, + "SEVERITY.MEDIUM": 0.0, + "SEVERITY.UNDEFINED": 0.0, + "loc": 33, "nosec": 0 }, "celery/contrib/__init__.py": { @@ -875,7 +935,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 115, + "loc": 114, "nosec": 0 }, "celery/contrib/migrate.py": { @@ -887,7 +947,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 324, + "loc": 323, "nosec": 0 }, "celery/contrib/pytest.py": { @@ -899,7 +959,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 132, + "loc": 146, "nosec": 0 }, "celery/contrib/rdb.py": { @@ -911,7 +971,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 144, + "loc": 142, "nosec": 0 }, "celery/contrib/sphinx.py": { @@ -923,7 +983,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 75, + "loc": 69, "nosec": 0 
}, "celery/contrib/testing/__init__.py": { @@ -947,7 +1007,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 82, + "loc": 84, "nosec": 0 }, "celery/contrib/testing/manager.py": { @@ -959,7 +1019,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 165, + "loc": 175, "nosec": 0 }, "celery/contrib/testing/mocks.py": { @@ -971,7 +1031,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 82, + "loc": 101, "nosec": 0 }, "celery/contrib/testing/tasks.py": { @@ -983,7 +1043,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 7, + "loc": 6, "nosec": 0 }, "celery/contrib/testing/worker.py": { @@ -995,7 +1055,7 @@ "SEVERITY.LOW": 2.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 126, + "loc": 130, "nosec": 0 }, "celery/events/__init__.py": { @@ -1007,7 +1067,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 13, + "loc": 12, "nosec": 0 }, "celery/events/cursesmon.py": { @@ -1019,7 +1079,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 449, + "loc": 446, "nosec": 0 }, "celery/events/dispatcher.py": { @@ -1031,7 +1091,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 195, + "loc": 194, "nosec": 0 }, "celery/events/dumper.py": { @@ -1043,7 +1103,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 87, + "loc": 82, "nosec": 0 }, "celery/events/event.py": { @@ -1055,7 +1115,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 42, + "loc": 45, "nosec": 0 }, "celery/events/receiver.py": { @@ -1067,7 +1127,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 111, + "loc": 112, "nosec": 0 }, "celery/events/snapshot.py": { @@ -1079,7 +1139,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 89, + "loc": 87, "nosec": 0 }, "celery/events/state.py": { @@ -1091,7 +1151,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 570, + "loc": 569, "nosec": 0 }, "celery/exceptions.py": { @@ -1103,7 +1163,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 159, + "loc": 186, "nosec": 0 }, "celery/five.py": { @@ -1115,7 +1175,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 5, + "loc": 4, "nosec": 0 }, "celery/fixups/__init__.py": { @@ -1139,7 +1199,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 144, + "loc": 146, "nosec": 0 }, "celery/loaders/__init__.py": { @@ -1151,7 +1211,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 15, + "loc": 13, "nosec": 0 }, "celery/loaders/app.py": { @@ -1163,7 +1223,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 6, + "loc": 5, "nosec": 0 }, "celery/loaders/base.py": { @@ -1175,7 +1235,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 195, + "loc": 202, "nosec": 0 }, "celery/loaders/default.py": { @@ -1187,7 +1247,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 32, + "loc": 31, "nosec": 0 }, "celery/local.py": { @@ -1199,7 +1259,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 438, + "loc": 426, "nosec": 0 }, "celery/platforms.py": { @@ -1211,7 +1271,7 @@ "SEVERITY.LOW": 1.0, 
"SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 606, + "loc": 623, "nosec": 0 }, "celery/result.py": { @@ -1223,7 +1283,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 837, + "loc": 866, "nosec": 0 }, "celery/schedules.py": { @@ -1235,7 +1295,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 678, + "loc": 674, "nosec": 0 }, "celery/security/__init__.py": { @@ -1247,19 +1307,19 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 46, + "loc": 54, "nosec": 0 }, "celery/security/certificate.py": { - "CONFIDENCE.HIGH": 1.0, + "CONFIDENCE.HIGH": 0.0, "CONFIDENCE.LOW": 0.0, "CONFIDENCE.MEDIUM": 0.0, "CONFIDENCE.UNDEFINED": 0.0, "SEVERITY.HIGH": 0.0, - "SEVERITY.LOW": 1.0, + "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 64, + "loc": 73, "nosec": 0 }, "celery/security/key.py": { @@ -1271,7 +1331,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 14, + "loc": 24, "nosec": 0 }, "celery/security/serialization.py": { @@ -1283,7 +1343,7 @@ "SEVERITY.LOW": 3.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 76, + "loc": 78, "nosec": 0 }, "celery/security/utils.py": { @@ -1295,7 +1355,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 22, + "loc": 21, "nosec": 0 }, "celery/signals.py": { @@ -1307,7 +1367,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 121, + "loc": 131, "nosec": 0 }, "celery/states.py": { @@ -1319,7 +1379,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 96, + "loc": 95, "nosec": 0 }, "celery/task/__init__.py": { @@ -1343,7 +1403,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 189, + "loc": 184, "nosec": 0 }, "celery/utils/__init__.py": { @@ -1355,7 +1415,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 18, + "loc": 31, "nosec": 0 }, "celery/utils/abstract.py": { @@ -1367,7 +1427,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 100, + "loc": 109, "nosec": 0 }, "celery/utils/collections.py": { @@ -1379,7 +1439,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 623, + "loc": 611, "nosec": 0 }, "celery/utils/debug.py": { @@ -1391,7 +1451,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 151, + "loc": 148, "nosec": 0 }, "celery/utils/deprecated.py": { @@ -1403,7 +1463,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 91, + "loc": 90, "nosec": 0 }, "celery/utils/dispatch/__init__.py": { @@ -1415,7 +1475,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 4, + "loc": 3, "nosec": 0 }, "celery/utils/dispatch/signal.py": { @@ -1427,19 +1487,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 272, - "nosec": 0 - }, - "celery/utils/dispatch/weakref_backports.py": { - "CONFIDENCE.HIGH": 0.0, - "CONFIDENCE.LOW": 0.0, - "CONFIDENCE.MEDIUM": 0.0, - "CONFIDENCE.UNDEFINED": 0.0, - "SEVERITY.HIGH": 0.0, - "SEVERITY.LOW": 0.0, - "SEVERITY.MEDIUM": 0.0, - "SEVERITY.UNDEFINED": 0.0, - "loc": 54, + "loc": 262, "nosec": 0 }, "celery/utils/encoding.py": { @@ -1451,7 +1499,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 6, + "loc": 5, "nosec": 0 }, "celery/utils/functional.py": { @@ -1475,7 
+1523,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 247, + "loc": 244, "nosec": 0 }, "celery/utils/imports.py": { @@ -1487,7 +1535,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 121, + "loc": 122, "nosec": 0 }, "celery/utils/iso8601.py": { @@ -1499,7 +1547,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 63, + "loc": 62, "nosec": 0 }, "celery/utils/log.py": { @@ -1511,7 +1559,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 214, + "loc": 210, "nosec": 0 }, "celery/utils/nodenames.py": { @@ -1523,7 +1571,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 72, + "loc": 71, "nosec": 0 }, "celery/utils/objects.py": { @@ -1535,7 +1583,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 106, + "loc": 107, "nosec": 0 }, "celery/utils/saferepr.py": { @@ -1547,7 +1595,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 191, + "loc": 188, "nosec": 0 }, "celery/utils/serialization.py": { @@ -1559,7 +1607,7 @@ "SEVERITY.LOW": 4.0, "SEVERITY.MEDIUM": 1.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 228, + "loc": 210, "nosec": 0 }, "celery/utils/static/__init__.py": { @@ -1571,7 +1619,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 9, + "loc": 8, "nosec": 0 }, "celery/utils/sysinfo.py": { @@ -1583,7 +1631,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 33, + "loc": 32, "nosec": 0 }, "celery/utils/term.py": { @@ -1595,7 +1643,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 131, + "loc": 128, "nosec": 0 }, "celery/utils/text.py": { @@ -1607,7 +1655,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 127, + "loc": 135, "nosec": 0 }, "celery/utils/threads.py": { @@ -1619,7 +1667,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 258, + "loc": 256, "nosec": 0 }, "celery/utils/time.py": { @@ -1631,7 +1679,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 304, + "loc": 293, "nosec": 0 }, "celery/utils/timer2.py": { @@ -1643,7 +1691,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 119, + "loc": 118, "nosec": 0 }, "celery/worker/__init__.py": { @@ -1655,7 +1703,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 4, + "loc": 3, "nosec": 0 }, "celery/worker/autoscale.py": { @@ -1667,7 +1715,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 132, + "loc": 123, "nosec": 0 }, "celery/worker/components.py": { @@ -1679,7 +1727,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 190, + "loc": 188, "nosec": 0 }, "celery/worker/consumer/__init__.py": { @@ -1691,7 +1739,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 15, + "loc": 14, "nosec": 0 }, "celery/worker/consumer/agent.py": { @@ -1703,7 +1751,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 15, + "loc": 14, "nosec": 0 }, "celery/worker/consumer/connection.py": { @@ -1715,7 +1763,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 26, + "loc": 25, "nosec": 0 }, "celery/worker/consumer/consumer.py": { @@ -1727,7 +1775,7 @@ "SEVERITY.LOW": 1.0, "SEVERITY.MEDIUM": 
0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 469, + "loc": 470, "nosec": 0 }, "celery/worker/consumer/control.py": { @@ -1739,7 +1787,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 24, + "loc": 23, "nosec": 0 }, "celery/worker/consumer/events.py": { @@ -1763,7 +1811,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 171, + "loc": 173, "nosec": 0 }, "celery/worker/consumer/heart.py": { @@ -1775,7 +1823,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 27, + "loc": 26, "nosec": 0 }, "celery/worker/consumer/mingle.py": { @@ -1787,7 +1835,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 60, + "loc": 58, "nosec": 0 }, "celery/worker/consumer/tasks.py": { @@ -1799,7 +1847,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 46, + "loc": 45, "nosec": 0 }, "celery/worker/control.py": { @@ -1811,7 +1859,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 425, + "loc": 423, "nosec": 0 }, "celery/worker/heartbeat.py": { @@ -1835,7 +1883,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 86, + "loc": 79, "nosec": 0 }, "celery/worker/pidbox.py": { @@ -1847,7 +1895,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 97, + "loc": 96, "nosec": 0 }, "celery/worker/request.py": { @@ -1859,7 +1907,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 444, + "loc": 536, "nosec": 0 }, "celery/worker/state.py": { @@ -1871,7 +1919,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 199, + "loc": 200, "nosec": 0 }, "celery/worker/strategy.py": { @@ -1883,7 +1931,7 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 169, + "loc": 166, "nosec": 0 }, "celery/worker/worker.py": { @@ -1895,345 +1943,317 @@ "SEVERITY.LOW": 0.0, "SEVERITY.MEDIUM": 0.0, "SEVERITY.UNDEFINED": 0.0, - "loc": 337, + "loc": 338, "nosec": 0 } }, "results": [ { - "code": "10 from functools import partial\n11 from subprocess import Popen\n12 from time import sleep\n", + "code": "8 from functools import partial\n9 from subprocess import Popen\n10 from time import sleep\n", "filename": "celery/apps/multi.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Consider possible security implications associated with Popen module.", - "line_number": 11, + "line_number": 9, "line_range": [ - 11 + 9 ], "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html#b404-import-subprocess", "test_id": "B404", "test_name": "blacklist" }, { - "code": "195 maybe_call(on_spawn, self, argstr=' '.join(argstr), env=env)\n196 pipe = Popen(argstr, env=env)\n197 return self.handle_process_exit(\n", + "code": "196 maybe_call(on_spawn, self, argstr=' '.join(argstr), env=env)\n197 pipe = Popen(argstr, env=env)\n198 return self.handle_process_exit(\n", "filename": "celery/apps/multi.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "subprocess call - check for execution of untrusted input.", - "line_number": 196, + "line_number": 197, "line_range": [ - 196 + 197 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b603_subprocess_without_shell_equals_true.html", "test_id": "B603", "test_name": "subprocess_without_shell_equals_true" }, { - "code": "320 ])\n321 os.execv(sys.executable, [sys.executable] + sys.argv)\n322 \n", + 
"code": "322 ])\n323 os.execv(sys.executable, [sys.executable] + sys.argv)\n324 \n", "filename": "celery/apps/worker.py", "issue_confidence": "MEDIUM", "issue_severity": "LOW", "issue_text": "Starting a process without a shell.", - "line_number": 321, + "line_number": 323, "line_range": [ - 321 + 323 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b606_start_process_with_no_shell.html", "test_id": "B606", "test_name": "start_process_with_no_shell" }, { - "code": "66 self.set(key, b'test value')\n67 assert self.get(key) == b'test value'\n68 self.delete(key)\n", + "code": "74 self.set(key, b'test value')\n75 assert self.get(key) == b'test value'\n76 self.delete(key)\n", "filename": "celery/backends/filesystem.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 67, + "line_number": 75, "line_range": [ - 67 + 75 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "342 while 1:\n343 val = input(p).lower()\n344 if val in choices:\n", - "filename": "celery/bin/base.py", - "issue_confidence": "HIGH", - "issue_severity": "HIGH", - "issue_text": "The input method in Python 2 will read from standard input, evaluate and run the resulting string as python source code. This is similar, though in many ways worse, then using eval. On Python 2, use raw_input instead, input is safe in Python 3.", - "line_number": 343, - "line_range": [ - 343 - ], - "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b322-input", - "test_id": "B322", - "test_name": "blacklist" - }, - { - "code": "540 in_option = m.groups()[0].strip()\n541 assert in_option, 'missing long opt'\n542 elif in_option and line.startswith(' ' * 4):\n", - "filename": "celery/bin/base.py", - "issue_confidence": "HIGH", - "issue_severity": "LOW", - "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 541, - "line_range": [ - 541 - ], - "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", - "test_id": "B101", - "test_name": "assert_used" - }, - { - "code": "38 path = executable\n39 os.execv(path, [path] + argv)\n40 except Exception: # pylint: disable=broad-except\n", - "filename": "celery/bin/celeryd_detach.py", + "code": "89 path = executable\n90 os.execv(path, [path] + argv)\n91 except Exception: # pylint: disable=broad-except\n", + "filename": "celery/bin/worker.py", "issue_confidence": "MEDIUM", "issue_severity": "LOW", "issue_text": "Starting a process without a shell.", - "line_number": 39, + "line_number": 90, "line_range": [ - 39 + 90 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b606_start_process_with_no_shell.html", "test_id": "B606", "test_name": "start_process_with_no_shell" }, { - "code": "28 from numbers import Integral\n29 from pickle import HIGHEST_PROTOCOL\n30 from time import sleep\n", + "code": "23 from numbers import Integral\n24 from pickle import HIGHEST_PROTOCOL\n25 from time import sleep\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Consider possible security implications associated with HIGHEST_PROTOCOL module.", - "line_number": 29, + "line_number": 24, "line_range": [ - 29 + 24 ], "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html#b403-import-pickle", "test_id": "B403", "test_name": "blacklist" }, { - "code": "574 proc in waiting_to_start):\n575 assert proc.outqR_fd in fileno_to_outq\n576 assert fileno_to_outq[proc.outqR_fd] is proc\n", + "code": "613 proc in waiting_to_start):\n614 assert proc.outqR_fd in fileno_to_outq\n615 assert fileno_to_outq[proc.outqR_fd] is proc\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 575, + "line_number": 614, "line_range": [ - 575 + 614 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "575 assert proc.outqR_fd in fileno_to_outq\n576 assert fileno_to_outq[proc.outqR_fd] is proc\n577 assert proc.outqR_fd in hub.readers\n", + "code": "614 assert proc.outqR_fd in fileno_to_outq\n615 assert fileno_to_outq[proc.outqR_fd] is proc\n616 assert proc.outqR_fd in hub.readers\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 576, + "line_number": 615, "line_range": [ - 576 + 615 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "576 assert fileno_to_outq[proc.outqR_fd] is proc\n577 assert proc.outqR_fd in hub.readers\n578 error('Timed out waiting for UP message from %r', proc)\n", + "code": "615 assert fileno_to_outq[proc.outqR_fd] is proc\n616 assert proc.outqR_fd in hub.readers\n617 error('Timed out waiting for UP message from %r', proc)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 577, + "line_number": 616, "line_range": [ - 577 + 616 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "597 \n598 assert not isblocking(proc.outq._reader)\n599 \n600 # handle_result_event is called when the processes outqueue is\n601 # readable.\n602 add_reader(proc.outqR_fd, handle_result_event, proc.outqR_fd)\n", + "code": "636 \n637 assert not isblocking(proc.outq._reader)\n638 \n639 # handle_result_event is called when the processes outqueue is\n640 # readable.\n641 add_reader(proc.outqR_fd, handle_result_event, proc.outqR_fd)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 598, + "line_number": 637, "line_range": [ - 598, - 599, - 600, - 601 + 637, + 638, + 639, + 640 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1048 synq = None\n1049 assert isblocking(inq._reader)\n1050 assert not isblocking(inq._writer)\n", + "code": "1090 synq = None\n1091 assert isblocking(inq._reader)\n1092 assert not isblocking(inq._writer)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1049, + "line_number": 1091, "line_range": [ - 1049 + 1091 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1049 assert isblocking(inq._reader)\n1050 assert not isblocking(inq._writer)\n1051 assert not isblocking(outq._reader)\n", + "code": "1091 assert isblocking(inq._reader)\n1092 assert not isblocking(inq._writer)\n1093 assert not isblocking(outq._reader)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1050, + "line_number": 1092, "line_range": [ - 1050 + 1092 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1050 assert not isblocking(inq._writer)\n1051 assert not isblocking(outq._reader)\n1052 assert isblocking(outq._writer)\n", + "code": "1092 assert not isblocking(inq._writer)\n1093 assert not isblocking(outq._reader)\n1094 assert isblocking(outq._writer)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1051, + "line_number": 1093, "line_range": [ - 1051 + 1093 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1051 assert not isblocking(outq._reader)\n1052 assert isblocking(outq._writer)\n1053 if self.synack:\n", + "code": "1093 assert not isblocking(outq._reader)\n1094 assert isblocking(outq._writer)\n1095 if self.synack:\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1052, + "line_number": 1094, "line_range": [ - 1052 + 1094 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1054 synq = _SimpleQueue(wnonblock=True)\n1055 assert isblocking(synq._reader)\n1056 assert not isblocking(synq._writer)\n", + "code": "1096 synq = _SimpleQueue(wnonblock=True)\n1097 assert isblocking(synq._reader)\n1098 assert not isblocking(synq._writer)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1055, + "line_number": 1097, "line_range": [ - 1055 + 1097 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1055 assert isblocking(synq._reader)\n1056 assert not isblocking(synq._writer)\n1057 return inq, outq, synq\n", + "code": "1097 assert isblocking(synq._reader)\n1098 assert not isblocking(synq._writer)\n1099 return inq, outq, synq\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1056, + "line_number": 1098, "line_range": [ - 1056 + 1098 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1067 return logger.warning('process with pid=%s already exited', pid)\n1068 assert proc.inqW_fd not in self._fileno_to_inq\n1069 assert proc.inqW_fd not in self._all_inqueues\n", + "code": "1109 return logger.warning('process with pid=%s already exited', pid)\n1110 assert proc.inqW_fd not in self._fileno_to_inq\n1111 assert proc.inqW_fd not in self._all_inqueues\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1068, + "line_number": 1110, "line_range": [ - 1068 + 1110 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1068 assert proc.inqW_fd not in self._fileno_to_inq\n1069 assert proc.inqW_fd not in self._all_inqueues\n1070 self._waiting_to_start.discard(proc)\n", + "code": "1110 assert proc.inqW_fd not in self._fileno_to_inq\n1111 assert proc.inqW_fd not in self._all_inqueues\n1112 self._waiting_to_start.discard(proc)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1069, + "line_number": 1111, "line_range": [ - 1069 + 1111 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1147 \"\"\"Mark new ownership for ``queues`` to update fileno indices.\"\"\"\n1148 assert queues in self._queues\n1149 b = len(self._queues)\n", + "code": "1189 \"\"\"Mark new ownership for ``queues`` to update fileno indices.\"\"\"\n1190 assert queues in self._queues\n1191 b = len(self._queues)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1148, + "line_number": 1190, "line_range": [ - 1148 + 1190 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1150 self._queues[queues] = proc\n1151 assert b == len(self._queues)\n1152 \n", + "code": "1192 self._queues[queues] = proc\n1193 assert b == len(self._queues)\n1194 \n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1151, + "line_number": 1193, "line_range": [ - 1151 + 1193 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1230 pass\n1231 assert len(self._queues) == before\n1232 \n", + "code": "1272 pass\n1273 assert len(self._queues) == before\n1274 \n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1231, + "line_number": 1273, "line_range": [ - 1231 + 1273 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "1237 \"\"\"\n1238 assert not proc._is_alive()\n1239 self._waiting_to_start.discard(proc)\n", + "code": "1279 \"\"\"\n1280 assert not proc._is_alive()\n1281 self._waiting_to_start.discard(proc)\n", "filename": "celery/concurrency/asynpool.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 1238, + "line_number": 1280, "line_range": [ - 1238 + 1280 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", @@ -2254,253 +2274,238 @@ "test_name": "assert_used" }, { - "code": "102 setup_app_for_worker(app, loglevel, logfile)\n103 assert 'celery.ping' in app.tasks\n104 # Make sure we can connect to the broker\n105 with app.connection(hostname=os.environ.get('TEST_BROKER')) as conn:\n", + "code": "104 if perform_ping_check:\n105 assert 'celery.ping' in app.tasks\n106 # Make sure we can connect to the broker\n", "filename": "celery/contrib/testing/worker.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 103, + "line_number": 105, "line_range": [ - 103, - 104 + 105 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "173 return self.win.getkey().upper()\n174 except Exception: # pylint: disable=broad-except\n175 pass\n", + "code": "169 return self.win.getkey().upper()\n170 except Exception: # pylint: disable=broad-except\n171 pass\n", "filename": "celery/events/cursesmon.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Try, Except, Pass detected.", - "line_number": 174, + "line_number": 170, "line_range": [ - 174 + 170 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b110_try_except_pass.html", "test_id": "B110", "test_name": "try_except_pass" }, { - "code": "479 max_groups = os.sysconf('SC_NGROUPS_MAX')\n480 except Exception: # pylint: disable=broad-except\n481 pass\n", + "code": "481 max_groups = os.sysconf('SC_NGROUPS_MAX')\n482 except Exception: # pylint: disable=broad-except\n483 pass\n", "filename": "celery/platforms.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Try, Except, Pass detected.", - "line_number": 480, + "line_number": 482, "line_range": [ - 480 + 482 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b110_try_except_pass.html", "test_id": "B110", "test_name": "try_except_pass" }, { - "code": "21 def __init__(self, cert):\n22 assert crypto is not None\n23 with reraise_errors('Invalid certificate: {0!r}'):\n", - "filename": "celery/security/certificate.py", - "issue_confidence": "HIGH", - "issue_severity": "LOW", - "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 22, - "line_range": [ - 22 - ], - "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", - "test_id": "B101", - "test_name": "assert_used" - }, - { - "code": "30 \"\"\"Serialize data structure into string.\"\"\"\n31 assert self._key is not None\n32 assert self._cert is not None\n", + "code": "27 \"\"\"Serialize data structure into string.\"\"\"\n28 assert self._key is not None\n29 assert self._cert is not None\n", "filename": "celery/security/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 31, + "line_number": 28, "line_range": [ - 31 + 28 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "31 assert self._key is not None\n32 assert self._cert is not None\n33 with reraise_errors('Unable to serialize: {0!r}', (Exception,)):\n", + "code": "28 assert self._key is not None\n29 assert self._cert is not None\n30 with reraise_errors('Unable to serialize: {0!r}', (Exception,)):\n", "filename": "celery/security/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 32, + "line_number": 29, "line_range": [ - 32 + 29 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "46 \"\"\"Deserialize data structure from string.\"\"\"\n47 assert self._cert_store is not None\n48 with reraise_errors('Unable to deserialize: {0!r}', (Exception,)):\n", + "code": "43 \"\"\"Deserialize data structure from string.\"\"\"\n44 assert self._cert_store is not None\n45 with reraise_errors('Unable to deserialize: {0!r}', (Exception,)):\n", "filename": "celery/security/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 47, + "line_number": 44, "line_range": [ - 47 + 44 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "21 \"\"\"Context reraising crypto errors as :exc:`SecurityError`.\"\"\"\n22 assert crypto is not None\n23 errors = (crypto.Error,) if errors is None else errors\n", + "code": "14 \"\"\"Convert string to hash object of cryptography library.\"\"\"\n15 assert digest is not None\n16 return getattr(hashes, digest.upper())()\n", "filename": "celery/security/utils.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 22, + "line_number": 15, "line_range": [ - 22 + 15 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "193 def _connect_signal(self, receiver, sender, weak, dispatch_uid):\n194 assert callable(receiver), 'Signal receivers must be callable'\n195 if not fun_accepts_kwargs(receiver):\n", + "code": "184 def _connect_signal(self, receiver, sender, weak, dispatch_uid):\n185 assert callable(receiver), 'Signal receivers must be callable'\n186 if not fun_accepts_kwargs(receiver):\n", "filename": "celery/utils/dispatch/signal.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. 
The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 194, + "line_number": 185, "line_range": [ - 194 + 185 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "280 # Tasks are rarely, if ever, created at runtime - exec here is fine.\n281 exec(definition, namespace)\n282 result = namespace[name]\n", + "code": "277 # Tasks are rarely, if ever, created at runtime - exec here is fine.\n278 exec(definition, namespace)\n279 result = namespace[name]\n", "filename": "celery/utils/functional.py", "issue_confidence": "HIGH", "issue_severity": "MEDIUM", "issue_text": "Use of exec detected.", - "line_number": 281, + "line_number": 278, "line_range": [ - 281 + 278 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b102_exec_used.html", "test_id": "B102", "test_name": "exec_used" }, { - "code": "21 try:\n22 import cPickle as pickle\n23 except ImportError:\n", + "code": "15 try:\n16 import cPickle as pickle\n17 except ImportError:\n", "filename": "celery/utils/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Consider possible security implications associated with cPickle module.", - "line_number": 22, + "line_number": 16, "line_range": [ - 22 + 16 ], "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html#b403-import-pickle", "test_id": "B403", "test_name": "blacklist" }, { - "code": "23 except ImportError:\n24 import pickle # noqa\n25 \n", + "code": "17 except ImportError:\n18 import pickle # noqa\n19 \n", "filename": "celery/utils/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Consider possible security implications associated with pickle module.", - "line_number": 24, + "line_number": 18, "line_range": [ - 24 + 18 ], "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html#b403-import-pickle", "test_id": "B403", "test_name": "blacklist" }, { - "code": "71 loads(dumps(superexc))\n72 except Exception: # pylint: disable=broad-except\n73 pass\n", + "code": "64 loads(dumps(superexc))\n65 except Exception: # pylint: disable=broad-except\n66 pass\n", "filename": "celery/utils/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Try, Except, Pass detected.", - "line_number": 72, + "line_number": 65, "line_range": [ - 72 + 65 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b110_try_except_pass.html", "test_id": "B110", "test_name": "try_except_pass" }, { - "code": "165 try:\n166 pickle.loads(pickle.dumps(exc))\n167 except Exception: # pylint: disable=broad-except\n", + "code": "158 try:\n159 pickle.loads(pickle.dumps(exc))\n160 except Exception: # pylint: disable=broad-except\n", "filename": "celery/utils/serialization.py", "issue_confidence": "HIGH", "issue_severity": "MEDIUM", "issue_text": "Pickle and modules that wrap it can be unsafe when used to deserialize untrusted data, possible security issue.", - "line_number": 166, + "line_number": 159, "line_range": [ - 166 + 159 ], "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b301-pickle", "test_id": "B301", "test_name": "blacklist" }, { - "code": "166 pickle.loads(pickle.dumps(exc))\n167 except Exception: # pylint: disable=broad-except\n168 pass\n", + "code": "159 pickle.loads(pickle.dumps(exc))\n160 except Exception: # pylint: disable=broad-except\n161 pass\n", "filename": 
"celery/utils/serialization.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Try, Except, Pass detected.", - "line_number": 167, + "line_number": 160, "line_range": [ - 167 + 160 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b110_try_except_pass.html", "test_id": "B110", "test_name": "try_except_pass" }, { - "code": "403 if full_jitter:\n404 countdown = random.randrange(countdown + 1)\n405 # Adjust according to maximum wait time and account for negative values.\n", + "code": "385 if full_jitter:\n386 countdown = random.randrange(countdown + 1)\n387 # Adjust according to maximum wait time and account for negative values.\n", "filename": "celery/utils/time.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Standard pseudo-random generators are not suitable for security/cryptographic purposes.", - "line_number": 404, + "line_number": 386, "line_range": [ - 404 + 386 ], "more_info": "https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b311-random", "test_id": "B311", "test_name": "blacklist" }, { - "code": "79 \n80 assert self.keepalive, 'cannot scale down too fast.'\n81 \n", + "code": "75 \n76 assert self.keepalive, 'cannot scale down too fast.'\n77 \n", "filename": "celery/worker/autoscale.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.", - "line_number": 80, + "line_number": 76, "line_range": [ - 80 + 76 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html", "test_id": "B101", "test_name": "assert_used" }, { - "code": "341 self.connection.collect()\n342 except Exception: # pylint: disable=broad-except\n343 pass\n", + "code": "335 self.connection.collect()\n336 except Exception: # pylint: disable=broad-except\n337 pass\n", "filename": "celery/worker/consumer/consumer.py", "issue_confidence": "HIGH", "issue_severity": "LOW", "issue_text": "Try, Except, Pass detected.", - "line_number": 342, + "line_number": 336, "line_range": [ - 342 + 336 ], "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b110_try_except_pass.html", "test_id": "B110", diff --git a/celery/__main__.py b/celery/__main__.py index b1e5c42fcb5..b0557b18548 100644 --- a/celery/__main__.py +++ b/celery/__main__.py @@ -2,17 +2,17 @@ import sys -from . import maybe_patch_concurrency +# from . import maybe_patch_concurrency __all__ = ('main',) def main(): """Entrypoint to the ``celery`` umbrella command.""" - if 'multi' not in sys.argv: - maybe_patch_concurrency() + # if 'multi' not in sys.argv: + # maybe_patch_concurrency() from celery.bin.celery import main as _main - _main() + sys.exit(_main()) if __name__ == '__main__': # pragma: no cover diff --git a/celery/app/base.py b/celery/app/base.py index 250ad6f23ee..c4657ce39f6 100644 --- a/celery/app/base.py +++ b/celery/app/base.py @@ -31,10 +31,9 @@ from celery.utils.log import get_logger from celery.utils.objects import FallbackContext, mro_lookup from celery.utils.time import timezone, to_utc - +from . import backends # Load all builtin tasks from . import builtins # noqa -from . import backends from .annotations import prepare as prepare_annotations from .autoretry import add_autoretry_behaviour from .defaults import DEFAULT_SECURITY_DIGEST, find_deprecated_settings @@ -342,24 +341,6 @@ def close(self): self._pool = None _deregister_app(self) - def start(self, argv=None): - """Run :program:`celery` using `argv`. 
- - Uses :data:`sys.argv` if `argv` is not specified. - """ - return instantiate( - 'celery.bin.celery:CeleryCommand', app=self - ).execute_from_commandline(argv) - - def worker_main(self, argv=None): - """Run :program:`celery worker` using `argv`. - - Uses :data:`sys.argv` if `argv` is not specified. - """ - return instantiate( - 'celery.bin.worker:worker', app=self - ).execute_from_commandline(argv) - def task(self, *args, **opts): """Decorator to create a task class out of any callable. diff --git a/celery/bin/__init__.py b/celery/bin/__init__.py index e682e2dc318..e69de29bb2d 100644 --- a/celery/bin/__init__.py +++ b/celery/bin/__init__.py @@ -1,3 +0,0 @@ -from .base import Option - -__all__ = ('Option',) diff --git a/celery/bin/amqp.py b/celery/bin/amqp.py index 2543e854402..8b3dea87c71 100644 --- a/celery/bin/amqp.py +++ b/celery/bin/amqp.py @@ -1,97 +1,12 @@ -"""The :program:`celery amqp` command. +"""AMQP 0.9.1 REPL.""" -.. program:: celery amqp -""" -import cmd as _cmd import pprint -import shlex -import sys -from functools import partial -from itertools import count -from kombu.utils.encoding import safe_str +import click +from amqp import Connection, Message +from click_repl import register_repl -from celery.bin.base import Command -from celery.five import string_t -from celery.utils.functional import padlist -from celery.utils.serialization import strtobool - -__all__ = ('AMQPAdmin', 'AMQShell', 'Spec', 'amqp') - -# Map to coerce strings to other types. -COERCE = {bool: strtobool} - -HELP_HEADER = """ -Commands --------- -""".rstrip() - -EXAMPLE_TEXT = """ -Example: - -> queue.delete myqueue yes no -""" - -say = partial(print, file=sys.stderr) - - -class Spec: - """AMQP Command specification. - - Used to convert arguments to Python values and display various help - and tool-tips. - - Arguments: - args (Sequence): see :attr:`args`. - returns (str): see :attr:`returns`. - """ - - #: List of arguments this command takes. - #: Should contain ``(argument_name, argument_type)`` tuples. - args = None - - #: Helpful human string representation of what this command returns. - #: May be :const:`None`, to signify the return type is unknown. - returns = None - - def __init__(self, *args, **kwargs): - self.args = args - self.returns = kwargs.get('returns') - - def coerce(self, index, value): - """Coerce value for argument at index.""" - arg_info = self.args[index] - arg_type = arg_info[1] - # Might be a custom way to coerce the string value, - # so look in the coercion map. - return COERCE.get(arg_type, arg_type)(value) - - def str_args_to_python(self, arglist): - """Process list of string arguments to values according to spec. - - Example: - >>> spec = Spec([('queue', str), ('if_unused', bool)]) - >>> spec.str_args_to_python('pobox', 'true') - ('pobox', True) - """ - return tuple( - self.coerce(index, value) for index, value in enumerate(arglist)) - - def format_response(self, response): - """Format the return value of this command in a human-friendly way.""" - if not self.returns: - return 'ok.' 
if response is None else response - if callable(self.returns): - return self.returns(response) - return self.returns.format(response) - - def format_arg(self, name, type, default_value=None): - if default_value is not None: - return f'{name}:{default_value}' - return name - - def format_signature(self): - return ' '.join(self.format_arg(*padlist(list(arg), 3)) - for arg in self.args) +__all__ = ('amqp',) def dump_message(message): @@ -102,268 +17,289 @@ def dump_message(message): 'delivery_info': message.delivery_info} -def format_declare_queue(ret): - return 'ok. queue:{} messages:{} consumers:{}.'.format(*ret) +class AMQPContext: + def __init__(self, cli_context): + self.cli_context = cli_context + self.connection = self.cli_context.app.connection() + self.channel = None + self.reconnect() + def respond(self, retval): + if isinstance(retval, str): + self.cli_context.echo(retval) + else: + self.cli_context.echo(pprint.pformat(retval)) -class AMQShell(_cmd.Cmd): - """AMQP API Shell. + def echo_error(self, exception): + self.cli_context.error(f'{self.cli_context.ERROR}: {exception}') - Arguments: - connect (Callable): Function used to connect to the server. - Must return :class:`kombu.Connection` object. - silent (bool): If enabled, the commands won't have annoying - output not relevant when running in non-shell mode. - """ + def echo_ok(self): + self.cli_context.echo(self.cli_context.OK) - conn = None - chan = None - prompt_fmt = '{self.counter}> ' - identchars = _cmd.IDENTCHARS = '.' - needs_reconnect = False - counter = 1 - inc_counter = count(2) - - #: Map of built-in command names -> method names - builtins = { - 'EOF': 'do_exit', - 'exit': 'do_exit', - 'help': 'do_help', - } - - #: Map of AMQP API commands and their :class:`Spec`. - amqp = { - 'exchange.declare': Spec(('exchange', str), - ('type', str), - ('passive', bool, 'no'), - ('durable', bool, 'no'), - ('auto_delete', bool, 'no'), - ('internal', bool, 'no')), - 'exchange.delete': Spec(('exchange', str), - ('if_unused', bool)), - 'queue.bind': Spec(('queue', str), - ('exchange', str), - ('routing_key', str)), - 'queue.declare': Spec(('queue', str), - ('passive', bool, 'no'), - ('durable', bool, 'no'), - ('exclusive', bool, 'no'), - ('auto_delete', bool, 'no'), - returns=format_declare_queue), - 'queue.delete': Spec(('queue', str), - ('if_unused', bool, 'no'), - ('if_empty', bool, 'no'), - returns='ok. {0} messages deleted.'), - 'queue.purge': Spec(('queue', str), - returns='ok. {0} messages deleted.'), - 'basic.get': Spec(('queue', str), - ('no_ack', bool, 'off'), - returns=dump_message), - 'basic.publish': Spec(('msg', str), - ('exchange', str), - ('routing_key', str), - ('mandatory', bool, 'no'), - ('immediate', bool, 'no')), - 'basic.ack': Spec(('delivery_tag', int)), - } - - def _prepare_spec(self, conn): - # XXX Hack to fix Issue #2013 - from amqp import Connection, Message - if isinstance(conn.connection, Connection): - self.amqp['basic.publish'] = Spec(('msg', Message), - ('exchange', str), - ('routing_key', str), - ('mandatory', bool, 'no'), - ('immediate', bool, 'no')) - - def __init__(self, *args, **kwargs): - self.connect = kwargs.pop('connect') - self.silent = kwargs.pop('silent', False) - self.out = kwargs.pop('out', sys.stderr) - _cmd.Cmd.__init__(self, *args, **kwargs) - self._reconnect() - - def note(self, m): - """Say something to the user. 
Disabled if :attr:`silent`.""" - if not self.silent: - say(m, file=self.out) - - def say(self, m): - say(m, file=self.out) - - def get_amqp_api_command(self, cmd, arglist): - """Get AMQP command wrapper. - - With a command name and a list of arguments, convert the arguments - to Python values and find the corresponding method on the AMQP channel - object. - - Returns: - Tuple: of `(method, processed_args)` pairs. - """ - spec = self.amqp[cmd] - args = spec.str_args_to_python(arglist) - attr_name = cmd.replace('.', '_') - if self.needs_reconnect: - self._reconnect() - return getattr(self.chan, attr_name), args, spec.format_response - - def do_exit(self, *args): - """The `'exit'` command.""" - self.note("\n-> please, don't leave!") - sys.exit(0) - - def display_command_help(self, cmd, short=False): - spec = self.amqp[cmd] - self.say('{} {}'.format(cmd, spec.format_signature())) - - def do_help(self, *args): - if not args: - self.say(HELP_HEADER) - for cmd_name in self.amqp: - self.display_command_help(cmd_name, short=True) - self.say(EXAMPLE_TEXT) + def reconnect(self): + if self.connection: + self.connection.close() else: - self.display_command_help(args[0]) - - def default(self, line): - self.say(f"unknown syntax: {line!r}. how about some 'help'?") - - def get_names(self): - return set(self.builtins) | set(self.amqp) - - def completenames(self, text, *ignored): - """Return all commands starting with `text`, for tab-completion.""" - names = self.get_names() - first = [cmd for cmd in names - if cmd.startswith(text.replace('_', '.'))] - if first: - return first - return [cmd for cmd in names - if cmd.partition('.')[2].startswith(text)] - - def dispatch(self, cmd, arglist): - """Dispatch and execute the command. - - Look-up order is: :attr:`builtins` -> :attr:`amqp`. - """ - if isinstance(arglist, string_t): - arglist = shlex.split(safe_str(arglist)) - if cmd in self.builtins: - return getattr(self, self.builtins[cmd])(*arglist) - fun, args, formatter = self.get_amqp_api_command(cmd, arglist) - return formatter(fun(*args)) - - def parseline(self, parts): - """Parse input line. 
- - Returns: - Tuple: of three items: - `(command_name, arglist, original_line)` - """ - if parts: - return parts[0], parts[1:], ' '.join(parts) - return '', '', '' - - def onecmd(self, line): - """Parse line and execute command.""" - if isinstance(line, string_t): - line = shlex.split(safe_str(line)) - cmd, arg, line = self.parseline(line) - if not line: - return self.emptyline() - self.lastcmd = line - self.counter = next(self.inc_counter) - try: - self.respond(self.dispatch(cmd, arg)) - except (AttributeError, KeyError): - self.default(line) - except Exception as exc: # pylint: disable=broad-except - self.say(exc) - self.needs_reconnect = True + self.connection = self.cli_context.app.connection() - def respond(self, retval): - """What to do with the return value of a command.""" - if retval is not None: - if isinstance(retval, string_t): - self.say(retval) - else: - self.say(pprint.pformat(retval)) - - def _reconnect(self): - """Re-establish connection to the AMQP server.""" - self.conn = self.connect(self.conn) - self._prepare_spec(self.conn) - self.chan = self.conn.default_channel - self.needs_reconnect = False - - @property - def prompt(self): - return self.prompt_fmt.format(self=self) - - -class AMQPAdmin: - """The celery :program:`celery amqp` utility.""" - - Shell = AMQShell - - def __init__(self, *args, **kwargs): - self.app = kwargs['app'] - self.out = kwargs.setdefault('out', sys.stderr) - self.silent = kwargs.get('silent') - self.args = args - - def connect(self, conn=None): - if conn: - conn.close() - conn = self.app.connection() - self.note('-> connecting to {}.'.format(conn.as_uri())) - conn.connect() - self.note('-> connected.') - return conn - - def run(self): - shell = self.Shell(connect=self.connect, out=self.out) - if self.args: - return shell.onecmd(self.args) + self.cli_context.echo(f'-> connecting to {self.connection.as_uri()}.') try: - return shell.cmdloop() - except KeyboardInterrupt: - self.note('(bibi)') - - def note(self, m): - if not self.silent: - say(m, file=self.out) + self.connection.connect() + except (ConnectionRefusedError, ConnectionResetError) as e: + self.echo_error(e) + else: + self.cli_context.secho('-> connected.', fg='green', bold=True) + self.channel = self.connection.default_channel -class amqp(Command): +@click.group(invoke_without_command=True) +@click.pass_context +def amqp(ctx): """AMQP Administration Shell. Also works for non-AMQP transports (but not ones that store declarations in memory). - - Examples: - .. code-block:: console - - $ # start shell mode - $ celery amqp - $ # show list of commands - $ celery amqp help - - $ celery amqp exchange.delete name - $ celery amqp queue.delete queue - $ celery amqp queue.delete queue yes yes """ - - def run(self, *args, **options): - options['app'] = self.app - return AMQPAdmin(*args, **options).run() - - -def main(): - amqp().execute_from_commandline() + if not isinstance(ctx.obj, AMQPContext): + ctx.obj = AMQPContext(ctx.obj) + + +@amqp.command(name='exchange.declare') +@click.argument('exchange', + type=str) +@click.argument('type', + type=str) +@click.argument('passive', + type=bool, + default=False) +@click.argument('durable', + type=bool, + default=False) +@click.argument('auto_delete', + type=bool, + default=False) +@click.pass_obj +def exchange_declare(amqp_context, exchange, type, passive, durable, + auto_delete): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. 
Please retry...') + amqp_context.reconnect() + else: + try: + amqp_context.channel.exchange_declare(exchange=exchange, + type=type, + passive=passive, + durable=durable, + auto_delete=auto_delete) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.echo_ok() + + +@amqp.command(name='exchange.delete') +@click.argument('exchange', + type=str) +@click.argument('if_unused', + type=bool) +@click.pass_obj +def exchange_delete(amqp_context, exchange, if_unused): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + try: + amqp_context.channel.exchange_delete(exchange=exchange, + if_unused=if_unused) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.echo_ok() + + +@amqp.command(name='queue.bind') +@click.argument('queue', + type=str) +@click.argument('exchange', + type=str) +@click.argument('routing_key', + type=str) +@click.pass_obj +def queue_bind(amqp_context, queue, exchange, routing_key): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + try: + amqp_context.channel.queue_bind(queue=queue, + exchange=exchange, + routing_key=routing_key) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.echo_ok() + + +@amqp.command(name='queue.declare') +@click.argument('queue', + type=str) +@click.argument('passive', + type=bool, + default=False) +@click.argument('durable', + type=bool, + default=False) +@click.argument('auto_delete', + type=bool, + default=False) +@click.pass_obj +def queue_declare(amqp_context, queue, passive, durable, auto_delete): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + try: + retval = amqp_context.channel.queue_declare(queue=queue, + passive=passive, + durable=durable, + auto_delete=auto_delete) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.cli_context.secho( + 'queue:{0} messages:{1} consumers:{2}'.format(*retval), + fg='cyan', bold=True) + amqp_context.echo_ok() + + +@amqp.command(name='queue.delete') +@click.argument('queue', + type=str) +@click.argument('if_unused', + type=bool, + default=False) +@click.argument('if_empty', + type=bool, + default=False) +@click.pass_obj +def queue_delete(amqp_context, queue, if_unused, if_empty): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + try: + retval = amqp_context.channel.queue_delete(queue=queue, + if_unused=if_unused, + if_empty=if_empty) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.cli_context.secho( + f'{retval} messages deleted.', + fg='cyan', bold=True) + amqp_context.echo_ok() + + +@amqp.command(name='queue.purge') +@click.argument('queue', + type=str) +@click.pass_obj +def queue_purge(amqp_context, queue): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. 
Please retry...') + amqp_context.reconnect() + else: + try: + retval = amqp_context.channel.queue_purge(queue=queue) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.cli_context.secho( + f'{retval} messages deleted.', + fg='cyan', bold=True) + amqp_context.echo_ok() + + +@amqp.command(name='basic.get') +@click.argument('queue', + type=str) +@click.argument('no_ack', + type=bool, + default=False) +@click.pass_obj +def basic_get(amqp_context, queue, no_ack): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + try: + message = amqp_context.channel.basic_get(queue, no_ack=no_ack) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.respond(dump_message(message)) + amqp_context.echo_ok() + + +@amqp.command(name='basic.publish') +@click.argument('msg', + type=str) +@click.argument('exchange', + type=str) +@click.argument('routing_key', + type=str) +@click.argument('mandatory', + type=bool, + default=False) +@click.argument('immediate', + type=bool, + default=False) +@click.pass_obj +def basic_publish(amqp_context, msg, exchange, routing_key, mandatory, + immediate): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + # XXX Hack to fix Issue #2013 + if isinstance(amqp_context.connection.connection, Connection): + msg = Message(msg) + try: + amqp_context.channel.basic_publish(msg, + exchange=exchange, + routing_key=routing_key, + mandatory=mandatory, + immediate=immediate) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.echo_ok() + + +@amqp.command(name='basic.ack') +@click.argument('delivery_tag', + type=int) +@click.pass_obj +def basic_ack(amqp_context, delivery_tag): + if amqp_context.channel is None: + amqp_context.echo_error('Not connected to broker. Please retry...') + amqp_context.reconnect() + else: + try: + amqp_context.channel.basic_ack(delivery_tag) + except Exception as e: + amqp_context.echo_error(e) + amqp_context.reconnect() + else: + amqp_context.echo_ok() -if __name__ == '__main__': # pragma: no cover - main() +repl = register_repl(amqp) diff --git a/celery/bin/base.py b/celery/bin/base.py index 3e852a2f187..b11ebecade8 100644 --- a/celery/bin/base.py +++ b/celery/bin/base.py @@ -1,675 +1,232 @@ -"""Base command-line interface.""" -import argparse +"""Click customizations for Celery.""" import json -import os -import random -import re -import sys -import warnings -from collections import defaultdict -from heapq import heappush +from collections import OrderedDict from pprint import pformat -from celery import VERSION_BANNER, Celery, maybe_patch_concurrency, signals -from celery.exceptions import CDeprecationWarning, CPendingDeprecationWarning -from celery.five import (getfullargspec, items, long_t, string, string_t, - text_t) -from celery.platforms import EX_FAILURE, EX_OK, EX_USAGE, isatty -from celery.utils import imports, term, text -from celery.utils.functional import dictfilter -from celery.utils.nodenames import host_format, node_format -from celery.utils.objects import Bunch - -# Option is here for backwards compatibility, as third-party commands -# may import it from here. 
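The ``register_repl(amqp)`` call above is what makes the rewritten ``celery amqp`` group double as an interactive shell. A minimal sketch of the same pattern, with a hypothetical command name, assuming :pypi:`click-repl` is installed:

.. code-block:: python

    import click
    from click_repl import register_repl

    @click.group(invoke_without_command=True)
    def shell():
        """Toy group mirroring the structure of the amqp group above."""

    @shell.command(name='queue.purge')
    @click.argument('queue', type=str)
    def queue_purge(queue):
        click.echo(f'purged {queue}')

    # Adds a `repl` subcommand; running it reads lines from the prompt
    # and dispatches each one to a subcommand, e.g. `queue.purge myqueue`.
    register_repl(shell)

    if __name__ == '__main__':
        shell()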
-try: - from optparse import Option # pylint: disable=deprecated-module -except ImportError: # pragma: no cover - Option = None # noqa - -try: - input = raw_input -except NameError: # pragma: no cover - pass - -__all__ = ( - 'Error', 'UsageError', 'Extensions', 'Command', 'Option', 'daemon_options', -) - -# always enable DeprecationWarnings, so our users can see them. -for warning in (CDeprecationWarning, CPendingDeprecationWarning): - warnings.simplefilter('once', warning, 0) - -# TODO: Remove this once we drop support for Python < 3.6 -if sys.version_info < (3, 6): - ModuleNotFoundError = ImportError - -ARGV_DISABLED = """ -Unrecognized command-line arguments: {0} - -Try --help? -""" - -UNABLE_TO_LOAD_APP_MODULE_NOT_FOUND = """ -Unable to load celery application. -The module {0} was not found. -""" - -UNABLE_TO_LOAD_APP_APP_MISSING = """ -Unable to load celery application. -{0} -""" - -find_long_opt = re.compile(r'.+?(--.+?)(?:\s|,|$)') -find_rst_ref = re.compile(r':\w+:`(.+?)`') -find_rst_decl = re.compile(r'^\s*\.\. .+?::.+$') - - -def _optparse_callback_to_type(option, callback): - parser = Bunch(values=Bunch()) - - def _on_arg(value): - callback(option, None, value, parser) - return getattr(parser.values, option.dest) - return _on_arg - - -def _add_optparse_argument(parser, opt, typemap=None): - typemap = { - 'string': text_t, - 'int': int, - 'long': long_t, - 'float': float, - 'complex': complex, - 'choice': None} if not typemap else typemap - if opt.callback: - opt.type = _optparse_callback_to_type(opt, opt.type) - # argparse checks for existence of this kwarg - if opt.action == 'callback': - opt.action = None - # store_true sets value to "('NO', 'DEFAULT')" for some - # crazy reason, so not to set a sane default here. - if opt.action == 'store_true' and opt.default is None: - opt.default = False - parser.add_argument( - *opt._long_opts + opt._short_opts, - **dictfilter({ - 'action': opt.action, - 'type': typemap.get(opt.type, opt.type), - 'dest': opt.dest, - 'nargs': opt.nargs, - 'choices': opt.choices, - 'help': opt.help, - 'metavar': opt.metavar, - 'default': opt.default})) - - -def _add_compat_options(parser, options): - for option in options or (): - if callable(option): - option(parser) - else: - _add_optparse_argument(parser, option) +import click +from click import ParamType +from kombu.utils.objects import cached_property +from celery._state import get_current_app +from celery.utils import text +from celery.utils.log import mlevel +from celery.utils.time import maybe_iso8601 -class Error(Exception): - """Exception raised by commands.""" - - status = EX_FAILURE +try: + from pygments import highlight + from pygments.lexers import PythonLexer + from pygments.formatters import Terminal256Formatter +except ImportError: + def highlight(s, *args, **kwargs): + """Place holder function in case pygments is missing.""" + return s + LEXER = None + FORMATTER = None +else: + LEXER = PythonLexer() + FORMATTER = Terminal256Formatter() + + +class CLIContext: + """Context Object for the CLI.""" + + def __init__(self, app, no_color, workdir, quiet=False): + """Initialize the CLI context.""" + self.app = app or get_current_app() + self.no_color = no_color + self.quiet = quiet + self.workdir = workdir - def __init__(self, reason, status=None): - self.reason = reason - self.status = status if status is not None else self.status - super().__init__(reason, status) + @cached_property + def OK(self): + return self.style("OK", fg="green", bold=True) \ - def __str__(self): - return self.reason + 
@cached_property + def ERROR(self): + return self.style("ERROR", fg="red", bold=True) -class UsageError(Error): - """Exception raised for malformed arguments.""" + def style(self, message=None, **kwargs): + if self.no_color: + return message + else: + return click.style(message, **kwargs) - status = EX_USAGE + def secho(self, message=None, **kwargs): + if self.no_color: + kwargs['color'] = False + click.echo(message, **kwargs) + else: + click.secho(message, **kwargs) + def echo(self, message=None, **kwargs): + if self.no_color: + kwargs['color'] = False + click.echo(message, **kwargs) + else: + click.echo(message, **kwargs) -class Extensions: - """Loads extensions from setuptools entrypoints.""" + def error(self, message=None, **kwargs): + kwargs['err'] = True + if self.no_color: + kwargs['color'] = False + click.echo(message, **kwargs) + else: + click.echo(message, **kwargs) - def __init__(self, namespace, register): - self.names = [] - self.namespace = namespace - self.register = register + def pretty(self, n): + if isinstance(n, list): + return self.OK, self.pretty_list(n) + if isinstance(n, dict): + if 'ok' in n or 'error' in n: + return self.pretty_dict_ok_error(n) + else: + s = json.dumps(n, sort_keys=True, indent=4) + if not self.no_color: + s = highlight(s, LEXER, FORMATTER) + return self.OK, s + if isinstance(n, str): + return self.OK, n + return self.OK, pformat(n) - def add(self, cls, name): - heappush(self.names, name) - self.register(cls, name=name) + def pretty_list(self, n): + if not n: + return '- empty -' + return '\n'.join( + f'{self.style("*", fg="white")} {item}' for item in n + ) - def load(self): - for name, cls in imports.load_extension_classes(self.namespace): - self.add(cls, name) - return self.names + def pretty_dict_ok_error(self, n): + try: + return (self.OK, + text.indent(self.pretty(n['ok'])[1], 4)) + except KeyError: + pass + return (self.ERROR, + text.indent(self.pretty(n['error'])[1], 4)) + def say_chat(self, direction, title, body='', show_body=False): + if direction == '<-' and self.quiet: + return + dirstr = not self.quiet and f'{self.style(direction, fg="white", bold=True)} ' or '' + self.echo(f'{dirstr} {title}') + if body and show_body: + self.echo(body) -class Command: - """Base class for command-line applications. - Arguments: - app (Celery): The app to use. - get_app (Callable): Fucntion returning the current app - when no app provided. - """ +class CeleryOption(click.Option): + """Customized option for Celery.""" - Error = Error - UsageError = UsageError - Parser = argparse.ArgumentParser + def get_default(self, ctx): + if self.default_value_from_context: + self.default = ctx.obj[self.default_value_from_context] + return super(CeleryOption, self).get_default(ctx) - #: Arg list used in help. - args = '' + def __init__(self, *args, **kwargs): + """Initialize a Celery option.""" + self.help_group = kwargs.pop('help_group', None) + self.default_value_from_context = kwargs.pop('default_value_from_context', None) + super(CeleryOption, self).__init__(*args, **kwargs) - #: Application version. - version = VERSION_BANNER - #: If false the parser will raise an exception if positional - #: args are provided. - supports_args = True +class CeleryCommand(click.Command): + """Customized command for Celery.""" - #: List of options (without preload options). 
- option_list = None + def format_options(self, ctx, formatter): + """Write all the options into the formatter if they exist.""" + opts = OrderedDict() + for param in self.get_params(ctx): + rv = param.get_help_record(ctx) + if rv is not None: + if hasattr(param, 'help_group') and param.help_group: + opts.setdefault(str(param.help_group), []).append(rv) + else: + opts.setdefault('Options', []).append(rv) - # module Rst documentation to parse help from (if any) - doc = None + for name, opts_group in opts.items(): + with formatter.section(name): + formatter.write_dl(opts_group) - # Some programs (multi) does not want to load the app specified - # (Issue #1008). - respects_app_option = True - #: Enable if the application should support config from the cmdline. - enable_config_from_cmdline = False +class CeleryDaemonCommand(CeleryCommand): + """Daemon commands.""" - #: Default configuration name-space. - namespace = None + def __init__(self, *args, **kwargs): + """Initialize a Celery command with common daemon options.""" + super().__init__(*args, **kwargs) + self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options")) + self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options")) + self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options")) + self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options")) + self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options")) + self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options")) - #: Text to print at end of --help - epilog = None - #: Text to print in --help before option list. - description = '' +class CommaSeparatedList(ParamType): + """Comma separated list argument.""" - #: Set to true if this command doesn't have sub-commands - leaf = True + name = "comma separated list" - # used by :meth:`say_remote_command_reply`. - show_body = True - # used by :meth:`say_chat`. - show_reply = True + def convert(self, value, param, ctx): + return set(text.str_to_list(value)) - prog_name = 'celery' + - #: Name of argparse option used for parsing positional args. - args_name = 'args' +class Json(ParamType): + """JSON formatted argument.""" - def __init__(self, app=None, get_app=None, no_color=False, - stdout=None, stderr=None, quiet=False, on_error=None, - on_usage_error=None): - self.app = app - self.get_app = get_app or self._get_default_app - self.stdout = stdout or sys.stdout - self.stderr = stderr or sys.stderr - self._colored = None - self._no_color = no_color - self.quiet = quiet - if not self.description: - self.description = self._strip_restructeredtext(self.__doc__) - if on_error: - self.on_error = on_error - if on_usage_error: - self.on_usage_error = on_usage_error - - def run(self, *args, **options): - raise NotImplementedError('subclass responsibility') - - def on_error(self, exc): - # pylint: disable=method-hidden - # on_error argument to __init__ may override this method. - self.error(self.colored.red(f'Error: {exc}')) - - def on_usage_error(self, exc): - # pylint: disable=method-hidden - # on_usage_error argument to __init__ may override this method. - self.handle_error(exc) - - def on_concurrency_setup(self): - pass - - def __call__(self, *args, **kwargs): - random.seed() # maybe we were forked.
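A short usage sketch for the two classes above: any option declared with ``cls=CeleryOption`` and a ``help_group`` is rendered under its own section heading by ``CeleryCommand.format_options``. The command below is hypothetical; it only assumes the module is importable as ``celery.bin.base``:

.. code-block:: python

    import click

    from celery.bin.base import CeleryCommand, CeleryOption

    @click.command(cls=CeleryCommand)
    @click.option('--schedule', cls=CeleryOption,
                  help_group='Beat Options',
                  help='Path to the schedule database.')
    @click.option('--detach', cls=CeleryOption, is_flag=True,
                  help_group='Daemonization Options',
                  help='Detach and run in the background.')
    def demo(schedule, detach):
        """`demo --help` groups each option under its help_group heading."""

    if __name__ == '__main__':
        demo()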
- self.verify_args(args) - try: - ret = self.run(*args, **kwargs) - return ret if ret is not None else EX_OK - except self.UsageError as exc: - self.on_usage_error(exc) - return exc.status - except self.Error as exc: - self.on_error(exc) - return exc.status - - def verify_args(self, given, _index=0): - S = getfullargspec(self.run) - _index = 1 if S.args and S.args[0] == 'self' else _index - required = S.args[_index:-len(S.defaults) if S.defaults else None] - missing = required[len(given):] - if missing: - raise self.UsageError('Missing required {}: {}'.format( - text.pluralize(len(missing), 'argument'), - ', '.join(missing) - )) - - def execute_from_commandline(self, argv=None): - """Execute application from command-line. - - Arguments: - argv (List[str]): The list of command-line arguments. - Defaults to ``sys.argv``. - """ - if argv is None: - argv = list(sys.argv) - # Should we load any special concurrency environment? - self.maybe_patch_concurrency(argv) - self.on_concurrency_setup() - - # Dump version and exit if '--version' arg set. - self.early_version(argv) - try: - argv = self.setup_app_from_commandline(argv) - except ModuleNotFoundError as e: - package_name = e.name - self.on_error(UNABLE_TO_LOAD_APP_MODULE_NOT_FOUND.format(package_name)) - return EX_FAILURE - except AttributeError as e: - msg = e.args[0].capitalize() - self.on_error(UNABLE_TO_LOAD_APP_APP_MISSING.format(msg)) - return EX_FAILURE - - self.prog_name = os.path.basename(argv[0]) - return self.handle_argv(self.prog_name, argv[1:]) - - def run_from_argv(self, prog_name, argv=None, command=None): - return self.handle_argv(prog_name, - sys.argv if argv is None else argv, command) - - def maybe_patch_concurrency(self, argv=None): - argv = argv or sys.argv - pool_option = self.with_pool_option(argv) - if pool_option: - maybe_patch_concurrency(argv, *pool_option) - - def usage(self, command): - return f'%(prog)s {command} [options] {self.args}' - - def add_arguments(self, parser): - pass - - def get_options(self): - # This is for optparse options, please use add_arguments. - return self.option_list - - def add_preload_arguments(self, parser): - group = parser.add_argument_group('Global Options') - group.add_argument('-A', '--app', default=None) - group.add_argument('-b', '--broker', default=None) - group.add_argument('--result-backend', default=None) - group.add_argument('--loader', default=None) - group.add_argument('--config', default=None) - group.add_argument('--workdir', default=None) - group.add_argument( - '--no-color', '-C', action='store_true', default=None) - group.add_argument('--quiet', '-q', action='store_true') - - def _add_version_argument(self, parser): - parser.add_argument( - '--version', action='version', version=self.version, - ) + name = "json" - def prepare_arguments(self, parser): - pass - - def expanduser(self, value): - if isinstance(value, string_t): - return os.path.expanduser(value) - return value - - def ask(self, q, choices, default=None): - """Prompt user to choose from a tuple of string values. - - If a default is not specified the question will be repeated - until the user gives a valid choice. - - Matching is case insensitive. - - Arguments: - q (str): the question to ask (don't include questionark) - choice (Tuple[str]): tuple of possible choices, must be lowercase. - default (Any): Default value if any. - """ - schoices = choices - if default is not None: - schoices = [c.upper() if c == default else c.lower() - for c in choices] - schoices = '/'.join(schoices) - - p = '{} ({})? 
'.format(q.capitalize(), schoices) - while 1: - val = input(p).lower() - if val in choices: - return val - elif default is not None: - break - return default - - def handle_argv(self, prog_name, argv, command=None): - """Parse arguments from argv and dispatch to :meth:`run`. - - Warning: - Exits with an error message if :attr:`supports_args` is disabled - and ``argv`` contains positional arguments. - - Arguments: - prog_name (str): The program name (``argv[0]``). - argv (List[str]): Rest of command-line arguments. - """ - options, args = self.prepare_args( - *self.parse_options(prog_name, argv, command)) - return self(*args, **options) - - def prepare_args(self, options, args): - if options: - options = { - k: self.expanduser(v) - for k, v in items(options) if not k.startswith('_') - } - args = [self.expanduser(arg) for arg in args] - self.check_args(args) - return options, args - - def check_args(self, args): - if not self.supports_args and args: - self.die(ARGV_DISABLED.format(', '.join(args)), EX_USAGE) - - def error(self, s): - self.out(s, fh=self.stderr) - - def out(self, s, fh=None): - print(s, file=fh or self.stdout) - - def die(self, msg, status=EX_FAILURE): - self.error(msg) - sys.exit(status) - - def early_version(self, argv): - if '--version' in argv: - print(self.version, file=self.stdout) - sys.exit(0) - - def parse_options(self, prog_name, arguments, command=None): - """Parse the available options.""" - # Don't want to load configuration to just print the version, - # so we handle --version manually here. - self.parser = self.create_parser(prog_name, command) - options = vars(self.parser.parse_args(arguments)) - return options, options.pop(self.args_name, None) or [] - - def create_parser(self, prog_name, command=None): - # for compatibility with optparse usage. - usage = self.usage(command).replace('%prog', '%(prog)s') - parser = self.Parser( - prog=prog_name, - usage=usage, - epilog=self._format_epilog(self.epilog), - formatter_class=argparse.RawDescriptionHelpFormatter, - description=self._format_description(self.description), - ) - self._add_version_argument(parser) - self.add_preload_arguments(parser) - self.add_arguments(parser) - self.add_compat_options(parser, self.get_options()) - self.add_compat_options(parser, self.app.user_options['preload']) - - if self.supports_args: - # for backward compatibility with optparse, we automatically - # add arbitrary positional args. 
- parser.add_argument(self.args_name, nargs='*') - return self.prepare_parser(parser) - - def _format_epilog(self, epilog): - if epilog: - return f'\n{epilog}\n\n' - return '' - - def _format_description(self, description): - width = argparse.HelpFormatter('prog')._width - return text.ensure_newlines( - text.fill_paragraphs(text.dedent(description), width)) - - def add_compat_options(self, parser, options): - _add_compat_options(parser, options) - - def prepare_parser(self, parser): - docs = [self.parse_doc(doc) for doc in (self.doc, __doc__) if doc] - for doc in docs: - for long_opt, help in items(doc): - option = parser._option_string_actions[long_opt] - if option is not None: - option.help = ' '.join(help).format(default=option.default) - return parser - - def setup_app_from_commandline(self, argv): - preload_options, remaining_options = self.parse_preload_options(argv) - quiet = preload_options.get('quiet') - if quiet is not None: - self.quiet = quiet + def convert(self, value, param, ctx): try: - self.no_color = preload_options['no_color'] - except KeyError: - pass - workdir = preload_options.get('workdir') - if workdir: - os.chdir(workdir) - app = (preload_options.get('app') or - os.environ.get('CELERY_APP') or - self.app) - preload_loader = preload_options.get('loader') - if preload_loader: - # Default app takes loader from this env (Issue #1066). - os.environ['CELERY_LOADER'] = preload_loader - loader = (preload_loader, - os.environ.get('CELERY_LOADER') or - 'default') - broker = preload_options.get('broker', None) - if broker: - os.environ['CELERY_BROKER_URL'] = broker - result_backend = preload_options.get('result_backend', None) - if result_backend: - os.environ['CELERY_RESULT_BACKEND'] = result_backend - config = preload_options.get('config') - if config: - os.environ['CELERY_CONFIG_MODULE'] = config - if self.respects_app_option: - if app: - self.app = self.find_app(app) - elif self.app is None: - self.app = self.get_app(loader=loader) - if self.enable_config_from_cmdline: - remaining_options = self.process_cmdline_config(remaining_options) - else: - self.app = Celery(fixups=[]) - - self._handle_user_preload_options(argv) - - return remaining_options + return json.loads(value) + except ValueError as e: + self.fail(str(e)) - def _handle_user_preload_options(self, argv): - user_preload = tuple(self.app.user_options['preload'] or ()) - if user_preload: - user_options, _ = self._parse_preload_options(argv, user_preload) - signals.user_preload_options.send( - sender=self, app=self.app, options=user_options, - ) - def find_app(self, app): - from celery.app.utils import find_app - return find_app(app, symbol_by_name=self.symbol_by_name) +class ISO8601DateTime(ParamType): + """ISO 8601 Date Time argument.""" - def symbol_by_name(self, name, imp=imports.import_from_cwd): - return imports.symbol_by_name(name, imp=imp) - get_cls_by_name = symbol_by_name # XXX compat + name = "iso-86091" - def process_cmdline_config(self, argv): + def convert(self, value, param, ctx): try: - cargs_start = argv.index('--') - except ValueError: - return argv - argv, cargs = argv[:cargs_start], argv[cargs_start + 1:] - self.app.config_from_cmdline(cargs, namespace=self.namespace) - return argv - - def parse_preload_options(self, args): - return self._parse_preload_options(args, [self.add_preload_arguments]) - - def _parse_preload_options(self, args, options): - args = [arg for arg in args if arg not in ('-h', '--help')] - parser = self.Parser() - self.add_compat_options(parser, options) - namespace, 
unknown_args = parser.parse_known_args(args) - return vars(namespace), unknown_args - - def add_append_opt(self, acc, opt, value): - default = opt.default or [] - - if opt.dest not in acc: - acc[opt.dest] = default - - acc[opt.dest].append(value) - - def parse_doc(self, doc): - options, in_option = defaultdict(list), None - for line in doc.splitlines(): - if line.startswith('.. cmdoption::'): - m = find_long_opt.match(line) - if m: - in_option = m.groups()[0].strip() - assert in_option, 'missing long opt' - elif in_option and line.startswith(' ' * 4): - if not find_rst_decl.match(line): - options[in_option].append( - find_rst_ref.sub( - r'\1', line.strip()).replace('`', '')) - return options - - def _strip_restructeredtext(self, s): - return '\n'.join( - find_rst_ref.sub(r'\1', line.replace('`', '')) - for line in (s or '').splitlines() - if not find_rst_decl.match(line) - ) + return maybe_iso8601(value) + except (TypeError, ValueError) as e: + self.fail(e) - def with_pool_option(self, argv): - """Return tuple of ``(short_opts, long_opts)``. - Returns only if the command - supports a pool argument, and used to monkey patch eventlet/gevent - environments as early as possible. +class ISO8601DateTimeOrFloat(ParamType): + """ISO 8601 Date Time or float argument.""" - Example: - >>> has_pool_option = (['-P'], ['--pool']) - """ + name = "iso-86091 or float" - def node_format(self, s, nodename, **extra): - return node_format(s, nodename, **extra) + def convert(self, value, param, ctx): + try: + return float(value) + except (TypeError, ValueError): + pass - def host_format(self, s, **extra): - return host_format(s, **extra) + try: + return maybe_iso8601(value) + except (TypeError, ValueError) as e: + self.fail(e) - def _get_default_app(self, *args, **kwargs): - from celery._state import get_current_app - return get_current_app() # omit proxy - def pretty_list(self, n): - c = self.colored - if not n: - return '- empty -' - return '\n'.join( - str(c.reset(c.white('*'), f' {item}')) for item in n - ) +class LogLevel(click.Choice): + """Log level option.""" - def pretty_dict_ok_error(self, n): - c = self.colored - try: - return (c.green('OK'), - text.indent(self.pretty(n['ok'])[1], 4)) - except KeyError: - pass - return (c.red('ERROR'), - text.indent(self.pretty(n['error'])[1], 4)) + def __init__(self): + """Initialize the log level option with the relevant choices.""" + super().__init__(('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL')) - def say_remote_command_reply(self, replies): - c = self.colored - node = next(iter(replies)) # <-- take first. 
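All of these param types follow the same recipe: subclass ``click.ParamType``, return the parsed value from ``convert()``, and report bad input through ``self.fail()``. A compact, self-contained mirror of ``ISO8601DateTimeOrFloat`` (using ``datetime.fromisoformat`` as a stand-in for ``maybe_iso8601``):

.. code-block:: python

    from datetime import datetime

    import click
    from click import ParamType

    class FloatOrISO8601(ParamType):
        """Accept a float, falling back to an ISO 8601 datetime."""

        name = 'iso-8601 or float'

        def convert(self, value, param, ctx):
            try:
                return float(value)
            except (TypeError, ValueError):
                pass
            try:
                return datetime.fromisoformat(value)
            except ValueError as e:
                self.fail(str(e))

    @click.command()
    @click.option('--expires', type=FloatOrISO8601())
    def demo(expires):
        click.echo(repr(expires))  # e.g. 10.0, or a datetime instance

    if __name__ == '__main__':
        demo()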
- reply = replies[node] - status, preply = self.pretty(reply) - self.say_chat('->', c.cyan(node, ': ') + status, - text.indent(preply, 4) if self.show_reply else '') + def convert(self, value, param, ctx): + value = super().convert(value, param, ctx) + return mlevel(value) - def pretty(self, n): - OK = str(self.colored.green('OK')) - if isinstance(n, list): - return OK, self.pretty_list(n) - if isinstance(n, dict): - if 'ok' in n or 'error' in n: - return self.pretty_dict_ok_error(n) - else: - return OK, json.dumps(n, sort_keys=True, indent=4) - if isinstance(n, string_t): - return OK, string(n) - return OK, pformat(n) - def say_chat(self, direction, title, body=''): - c = self.colored - if direction == '<-' and self.quiet: - return - dirstr = not self.quiet and c.bold(c.white(direction), ' ') or '' - self.out(c.reset(dirstr, title)) - if body and self.show_body: - self.out(body) - - @property - def colored(self): - if self._colored is None: - self._colored = term.colored( - enabled=isatty(self.stdout) and not self.no_color) - return self._colored - - @colored.setter - def colored(self, obj): - self._colored = obj - - @property - def no_color(self): - return self._no_color - - @no_color.setter - def no_color(self, value): - self._no_color = value - if self._colored is not None: - self._colored.enabled = not self._no_color - - -def daemon_options(parser, default_pidfile=None, default_logfile=None): - """Add daemon options to argparse parser.""" - group = parser.add_argument_group('Daemonization Options') - group.add_argument('-f', '--logfile', default=default_logfile), - group.add_argument('--pidfile', default=default_pidfile), - group.add_argument('--uid', default=None), - group.add_argument('--gid', default=None), - group.add_argument('--umask', default=None), - group.add_argument('--executable', default=None), +JSON = Json() +ISO8601 = ISO8601DateTime() +ISO8601_OR_FLOAT = ISO8601DateTimeOrFloat() +LOG_LEVEL = LogLevel() +COMMA_SEPARATED_LIST = CommaSeparatedList() diff --git a/celery/bin/beat.py b/celery/bin/beat.py index 40959568e68..54a74c14c7e 100644 --- a/celery/bin/beat.py +++ b/celery/bin/beat.py @@ -1,131 +1,70 @@ -"""The :program:`celery beat` command. - -.. program:: celery beat - -.. seealso:: - - See :ref:`preload-options` and :ref:`daemon-options`. - -.. cmdoption:: --detach - - Detach and run in the background as a daemon. - -.. cmdoption:: -s, --schedule - - Path to the schedule database. Defaults to `celerybeat-schedule`. - The extension '.db' may be appended to the filename. - Default is {default}. - -.. cmdoption:: -S, --scheduler - - Scheduler class to use. - Default is :class:`{default}`. - -.. cmdoption:: --max-interval - - Max seconds to sleep between schedule iterations. - -.. cmdoption:: -f, --logfile - - Path to log file. If no logfile is specified, `stderr` is used. - -.. cmdoption:: -l, --loglevel - - Logging level, choose between `DEBUG`, `INFO`, `WARNING`, - `ERROR`, `CRITICAL`, or `FATAL`. - -.. cmdoption:: --pidfile - - File used to store the process pid. Defaults to `celerybeat.pid`. - - The program won't start if this file already exists - and the pid is still alive. - -.. cmdoption:: --uid - - User id, or user name of the user to run as after detaching. - -.. cmdoption:: --gid - - Group id, or group name of the main group to change to after - detaching. - -.. cmdoption:: --umask - - Effective umask (in octal) of the process after detaching. Inherits - the umask of the parent process by default. - -.. 
cmdoption:: --workdir - - Optional directory to change to after detaching. - -.. cmdoption:: --executable - - Executable to use for the detached process. -""" +"""The :program:`celery beat` command.""" from functools import partial -from celery.bin.base import Command, daemon_options -from celery.platforms import detached, maybe_drop_privileges - -__all__ = ('beat',) - -HELP = __doc__ - - -class beat(Command): - """Start the beat periodic task scheduler. +import click - Examples: - .. code-block:: console - - $ celery beat -l info - $ celery beat -s /var/run/celery/beat-schedule --detach - $ celery beat -S django - - The last example requires the :pypi:`django-celery-beat` extension - package found on PyPI. - """ - - doc = HELP - enable_config_from_cmdline = True - supports_args = False +from celery.bin.base import LOG_LEVEL, CeleryDaemonCommand, CeleryOption +from celery.platforms import detached, maybe_drop_privileges - def run(self, detach=False, logfile=None, pidfile=None, uid=None, - gid=None, umask=None, workdir=None, **kwargs): - if not detach: - maybe_drop_privileges(uid=uid, gid=gid) - kwargs.pop('app', None) - beat = partial(self.app.Beat, - logfile=logfile, pidfile=pidfile, **kwargs) - if detach: - with detached(logfile, pidfile, uid, gid, umask, workdir): - return beat().run() - else: +@click.command(cls=CeleryDaemonCommand, context_settings={ + 'allow_extra_args': True +}) +@click.option('--detach', + cls=CeleryOption, + is_flag=True, + default=False, + help_group="Beat Options", + help="Detach and run in the background as a daemon.") +@click.option('-s', + '--schedule', + cls=CeleryOption, + callback=lambda ctx, _, value: value or ctx.obj.app.conf.beat_schedule_filename, + help_group="Beat Options", + help="Path to the schedule database." + " Defaults to `celerybeat-schedule`." 
+ "The extension '.db' may be appended to the filename.") +@click.option('-S', + '--scheduler', + cls=CeleryOption, + callback=lambda ctx, _, value: value or ctx.obj.app.conf.beat_scheduler, + help_group="Beat Options", + help="Scheduler class to use.") +@click.option('--max-interval', + cls=CeleryOption, + type=int, + help_group="Beat Options", + help="Max seconds to sleep between schedule iterations.") +@click.option('-l', + '--loglevel', + default='WARNING', + cls=CeleryOption, + type=LOG_LEVEL, + help_group="Beat Options", + help="Logging level.") +@click.pass_context +def beat(ctx, detach=False, logfile=None, pidfile=None, uid=None, + gid=None, umask=None, workdir=None, **kwargs): + """Start the beat periodic task scheduler.""" + app = ctx.obj.app + + if ctx.args: + try: + app.config_from_cmdline(ctx.args) + except (KeyError, ValueError) as e: + # TODO: Improve the error messages + raise click.UsageError("Unable to parse extra configuration" + " from command line.\n" + f"Reason: {e}", ctx=ctx) + + if not detach: + maybe_drop_privileges(uid=uid, gid=gid) + + beat = partial(app.Beat, + logfile=logfile, pidfile=pidfile, **kwargs) + + if detach: + with detached(logfile, pidfile, uid, gid, umask, workdir): return beat().run() - - def add_arguments(self, parser): - c = self.app.conf - bopts = parser.add_argument_group('Beat Options') - bopts.add_argument('--detach', action='store_true', default=False) - bopts.add_argument( - '-s', '--schedule', default=c.beat_schedule_filename) - bopts.add_argument('--max-interval', type=float) - bopts.add_argument('-S', '--scheduler', default=c.beat_scheduler) - bopts.add_argument('-l', '--loglevel', default='WARN') - - daemon_options(parser, default_pidfile='celerybeat.pid') - - user_options = self.app.user_options['beat'] - if user_options: - uopts = parser.add_argument_group('User Options') - self.add_compat_options(uopts, user_options) - - -def main(app=None): - beat(app=app).execute_from_commandline() - - -if __name__ == '__main__': # pragma: no cover - main() + else: + return beat().run() diff --git a/celery/bin/call.py b/celery/bin/call.py index 1cf123c693e..c2744a4cd28 100644 --- a/celery/bin/call.py +++ b/celery/bin/call.py @@ -1,81 +1,70 @@ """The ``celery call`` program used to send tasks from the command-line.""" -from kombu.utils.json import loads +import click -from celery.bin.base import Command -from celery.five import string_t -from celery.utils.time import maybe_iso8601 +from celery.bin.base import (ISO8601, ISO8601_OR_FLOAT, JSON, CeleryCommand, + CeleryOption) -class call(Command): - """Call a task by name. - - Examples: - .. code-block:: console - - $ celery call tasks.add --args='[2, 2]' - $ celery call tasks.add --args='[2, 2]' --countdown=10 - """ - - args = '' - - # since we have an argument --args, we need to name this differently. 
- args_name = 'posargs' - - def add_arguments(self, parser): - group = parser.add_argument_group('Calling Options') - group.add_argument('--args', '-a', - help='positional arguments (json).') - group.add_argument('--kwargs', '-k', - help='keyword arguments (json).') - group.add_argument('--eta', - help='scheduled time (ISO-8601).') - group.add_argument( - '--countdown', type=float, - help='eta in seconds from now (float/int).', - ) - group.add_argument( - '--expires', - help='expiry time (ISO-8601/float/int).', - ), - group.add_argument( - '--serializer', default='json', - help='defaults to json.'), - - ropts = parser.add_argument_group('Routing Options') - ropts.add_argument('--queue', help='custom queue name.') - ropts.add_argument('--exchange', help='custom exchange name.') - ropts.add_argument('--routing-key', help='custom routing key.') - - def run(self, name, *_, **kwargs): - self._send_task(name, **kwargs) - - def _send_task(self, name, args=None, kwargs=None, - countdown=None, serializer=None, - queue=None, exchange=None, routing_key=None, - eta=None, expires=None, **_): - # arguments - args = loads(args) if isinstance(args, string_t) else args - kwargs = loads(kwargs) if isinstance(kwargs, string_t) else kwargs - - # Expires can be int/float. - try: - expires = float(expires) - except (TypeError, ValueError): - # or a string describing an ISO 8601 datetime. - try: - expires = maybe_iso8601(expires) - except (TypeError, ValueError): - raise - - # send the task and print the id. - self.out(self.app.send_task( - name, - args=args or (), kwargs=kwargs or {}, - countdown=countdown, - serializer=serializer, - queue=queue, - exchange=exchange, - routing_key=routing_key, - eta=maybe_iso8601(eta), - expires=expires, - ).id) +@click.argument('name') +@click.option('-a', + '--args', + cls=CeleryOption, + type=JSON, + default='[]', + help_group="Calling Options", + help="Positional arguments.") +@click.option('-k', + '--kwargs', + cls=CeleryOption, + type=JSON, + default='{}', + help_group="Calling Options", + help="Keyword arguments.") +@click.option('--eta', + cls=CeleryOption, + type=ISO8601, + help_group="Calling Options", + help="scheduled time.") +@click.option('--countdown', + cls=CeleryOption, + type=float, + help_group="Calling Options", + help="eta in seconds from now.") +@click.option('--expires', + cls=CeleryOption, + type=ISO8601_OR_FLOAT, + help_group="Calling Options", + help="expiry time.") +@click.option('--serializer', + cls=CeleryOption, + default='json', + help_group="Calling Options", + help="task serializer.") +@click.option('--queue', + cls=CeleryOption, + help_group="Routing Options", + help="custom queue name.") +@click.option('--exchange', + cls=CeleryOption, + help_group="Routing Options", + help="custom exchange name.") +@click.option('--routing-key', + cls=CeleryOption, + help_group="Routing Options", + help="custom routing key.") +@click.command(cls=CeleryCommand) +@click.pass_context +def call(ctx, name, args, kwargs, eta, countdown, expires, serializer, queue, exchange, routing_key): + """Call a task by name.""" + task_id = ctx.obj.app.send_task( + name, + args=args, kwargs=kwargs, + countdown=countdown, + serializer=serializer, + queue=queue, + exchange=exchange, + routing_key=routing_key, + eta=eta, + expires=expires + ).id + ctx.obj.echo(task_id) diff --git a/celery/bin/celery.py b/celery/bin/celery.py index 62c609c7aff..4f7c95d065c 100644 --- a/celery/bin/celery.py +++ b/celery/bin/celery.py @@ -1,549 +1,150 @@ -"""The :program:`celery` umbrella command. 
+"""Celery Command Line Interface.""" +import os -.. program:: celery +import click +from click.types import ParamType +from click_didyoumean import DYMGroup -.. _preload-options: - -Preload Options ---------------- - -These options are supported by all commands, -and usually parsed before command-specific arguments. - -.. cmdoption:: -A, --app - - app instance to use (e.g., ``module.attr_name``) - -.. cmdoption:: -b, --broker - - URL to broker. default is ``amqp://guest@localhost//`` - -.. cmdoption:: --loader - - name of custom loader class to use. - -.. cmdoption:: --config - - Name of the configuration module - -.. cmdoption:: -C, --no-color - - Disable colors in output. - -.. cmdoption:: -q, --quiet - - Give less verbose output (behavior depends on the sub command). - -.. cmdoption:: --help - - Show help and exit. - -.. _daemon-options: - -Daemon Options --------------- - -These options are supported by commands that can detach -into the background (daemon). They will be present -in any command that also has a `--detach` option. - -.. cmdoption:: -f, --logfile - - Path to log file. If no logfile is specified, `stderr` is used. - -.. cmdoption:: --pidfile - - Optional file used to store the process pid. - - The program won't start if this file already exists - and the pid is still alive. - -.. cmdoption:: --uid - - User id, or user name of the user to run as after detaching. - -.. cmdoption:: --gid - - Group id, or group name of the main group to change to after - detaching. - -.. cmdoption:: --umask - - Effective umask (in octal) of the process after detaching. Inherits - the umask of the parent process by default. - -.. cmdoption:: --workdir - - Optional directory to change to after detaching. - -.. cmdoption:: --executable - - Executable to use for the detached process. - -``celery inspect`` ------------------- - -.. program:: celery inspect - -.. cmdoption:: -t, --timeout - - Timeout in seconds (float) waiting for reply - -.. cmdoption:: -d, --destination - - Comma separated list of destination node names. - -.. cmdoption:: -j, --json - - Use json as output format. - -``celery control`` ------------------- - -.. program:: celery control - -.. cmdoption:: -t, --timeout - - Timeout in seconds (float) waiting for reply - -.. cmdoption:: -d, --destination - - Comma separated list of destination node names. - -.. cmdoption:: -j, --json - - Use json as output format. - -``celery migrate`` ------------------- - -.. program:: celery migrate - -.. cmdoption:: -n, --limit - - Number of tasks to consume (int). - -.. cmdoption:: -t, -timeout - - Timeout in seconds (float) waiting for tasks. - -.. cmdoption:: -a, --ack-messages - - Ack messages from source broker. - -.. cmdoption:: -T, --tasks - - List of task names to filter on. - -.. cmdoption:: -Q, --queues - - List of queues to migrate. - -.. cmdoption:: -F, --forever - - Continually migrate tasks until killed. - -``celery upgrade`` ------------------- - -.. program:: celery upgrade - -.. cmdoption:: --django - - Upgrade a Django project. - -.. cmdoption:: --compat - - Maintain backwards compatibility. - -.. cmdoption:: --no-backup - - Don't backup original files. - -``celery shell`` ----------------- - -.. program:: celery shell - -.. cmdoption:: -I, --ipython - - Force :pypi:`iPython` implementation. - -.. cmdoption:: -B, --bpython - - Force :pypi:`bpython` implementation. - -.. cmdoption:: -P, --python - - Force default Python shell. - -.. cmdoption:: -T, --without-tasks - - Don't add tasks to locals. - -.. 
cmdoption:: --eventlet - - Use :pypi:`eventlet` monkey patches. - -.. cmdoption:: --gevent - - Use :pypi:`gevent` monkey patches. - -``celery result`` ------------------ - -.. program:: celery result - -.. cmdoption:: -t, --task - - Name of task (if custom backend). - -.. cmdoption:: --traceback - - Show traceback if any. - -``celery purge`` ----------------- - -.. program:: celery purge - -.. cmdoption:: -f, --force - - Don't prompt for verification before deleting messages (DANGEROUS) - -``celery call`` ---------------- - -.. program:: celery call - -.. cmdoption:: -a, --args - - Positional arguments (json format). - -.. cmdoption:: -k, --kwargs - - Keyword arguments (json format). - -.. cmdoption:: --eta - - Scheduled time in ISO-8601 format. - -.. cmdoption:: --countdown - - ETA in seconds from now (float/int). - -.. cmdoption:: --expires - - Expiry time in float/int seconds, or a ISO-8601 date. - -.. cmdoption:: --serializer - - Specify serializer to use (default is json). - -.. cmdoption:: --queue - - Destination queue. - -.. cmdoption:: --exchange - - Destination exchange (defaults to the queue exchange). - -.. cmdoption:: --routing-key - - Destination routing key (defaults to the queue routing key). -""" -import numbers -import sys -from functools import partial - -# Import commands from other modules +from celery import VERSION_BANNER +from celery.app.utils import find_app from celery.bin.amqp import amqp -# Cannot use relative imports here due to a Windows issue (#1111). -from celery.bin.base import Command, Extensions +from celery.bin.base import CeleryCommand, CeleryOption, CLIContext from celery.bin.beat import beat from celery.bin.call import call -from celery.bin.control import _RemoteControl # noqa from celery.bin.control import control, inspect, status from celery.bin.events import events from celery.bin.graph import graph from celery.bin.list import list_ from celery.bin.logtool import logtool from celery.bin.migrate import migrate +from celery.bin.multi import multi from celery.bin.purge import purge from celery.bin.result import result from celery.bin.shell import shell from celery.bin.upgrade import upgrade from celery.bin.worker import worker -from celery.platforms import EX_FAILURE, EX_OK, EX_USAGE -from celery.utils import term, text - -__all__ = ('CeleryCommand', 'main') - -HELP = """ ----- -- - - ---- Commands- -------------- --- ------------ - -{commands} ----- -- - - --------- -- - -------------- --- ------------ -Type '{prog_name} --help' for help using a specific command. -""" -command_classes = [ - ('Main', ['worker', 'events', 'beat', 'shell', 'multi', 'amqp'], 'green'), - ('Remote Control', ['status', 'inspect', 'control'], 'blue'), - ('Utils', - ['purge', 'list', 'call', 'result', 'migrate', 'graph', 'upgrade'], - None), - ('Debugging', ['report', 'logtool'], 'red'), -] +class App(ParamType): + """Application option.""" + name = "application" -def determine_exit_status(ret): - if isinstance(ret, numbers.Integral): - return ret - return EX_OK if ret else EX_FAILURE - - -def main(argv=None): - """Start celery umbrella command.""" - # Fix for setuptools generated scripts, so that it will - # work with multiprocessing fork emulation. 
- # (see multiprocessing.forking.get_preparation_data()) - try: - if __name__ != '__main__': # pragma: no cover - sys.modules['__main__'] = sys.modules[__name__] - cmd = CeleryCommand() - cmd.maybe_patch_concurrency() - from billiard import freeze_support - freeze_support() - cmd.execute_from_commandline(argv) - except KeyboardInterrupt: - pass - - -class multi(Command): - """Start multiple worker instances.""" - - respects_app_option = False - - def run_from_argv(self, prog_name, argv, command=None): - from celery.bin.multi import MultiTool - cmd = MultiTool(quiet=self.quiet, no_color=self.no_color) - return cmd.execute_from_commandline([command] + argv) - - -class help(Command): - """Show help screen and exit.""" - - def usage(self, command): - return f'%(prog)s [options] {self.args}' - - def run(self, *args, **kwargs): - self.parser.print_help() - self.out(HELP.format( - prog_name=self.prog_name, - commands=CeleryCommand.list_commands( - colored=self.colored, app=self.app), - )) - - return EX_USAGE - - -class report(Command): - """Shows information useful to include in bug-reports.""" - - def __init__(self, *args, **kwargs): - """Custom initialization for report command. - - We need this custom initialization to make sure that - everything is loaded when running a report. - There has been some issues when printing Django's - settings because Django is not properly setup when - running the report. - """ - super().__init__(*args, **kwargs) - self.app.loader.import_default_modules() - - def run(self, *args, **kwargs): - self.out(self.app.bugreport()) - return EX_OK - - -class CeleryCommand(Command): - """Base class for commands.""" - - commands = { - 'amqp': amqp, - 'beat': beat, - 'call': call, - 'control': control, - 'events': events, - 'graph': graph, - 'help': help, - 'inspect': inspect, - 'list': list_, - 'logtool': logtool, - 'migrate': migrate, - 'multi': multi, - 'purge': purge, - 'report': report, - 'result': result, - 'shell': shell, - 'status': status, - 'upgrade': upgrade, - 'worker': worker, - } - ext_fmt = '{self.namespace}.commands' - enable_config_from_cmdline = True - prog_name = 'celery' - namespace = 'celery' - - @classmethod - def register_command(cls, fun, name=None): - cls.commands[name or fun.__name__] = fun - return fun - - def execute(self, command, argv=None): - try: - cls = self.commands[command] - except KeyError: - cls, argv = self.commands['help'], ['help'] - try: - return cls( - app=self.app, on_error=self.on_error, - no_color=self.no_color, quiet=self.quiet, - on_usage_error=partial(self.on_usage_error, command=command), - ).run_from_argv(self.prog_name, argv[1:], command=argv[0]) - except self.UsageError as exc: - self.on_usage_error(exc) - return exc.status - except self.Error as exc: - self.on_error(exc) - return exc.status - - def on_usage_error(self, exc, command=None): - if command: - helps = '{self.prog_name} {command} --help' - else: - helps = '{self.prog_name} --help' - self.error(self.colored.magenta(f'Error: {exc}')) - self.error("""Please try '{}'""".format(helps.format( - self=self, command=command, - ))) - - def _relocate_args_from_start(self, argv, index=0): - """Move options to the end of args. - - This rewrites: - -l debug worker -c 3 - to: - worker -c 3 -l debug - """ - if argv: - rest = [] - while index < len(argv): - value = argv[index] - if value.startswith('--'): - rest.append(value) - elif value.startswith('-'): - # we eat the next argument even though we don't know - # if this option takes an argument or not. 
- # instead we'll assume what's the command name in the - # return statements below. - try: - nxt = argv[index + 1] - if nxt.startswith('-'): - # is another option - rest.append(value) - else: - # is (maybe) a value for this option - rest.extend([value, nxt]) - index += 1 - except IndexError: # pragma: no cover - rest.append(value) - break - else: - break - index += 1 - if argv[index:]: # pragma: no cover - # if there are more arguments left then divide and swap - # we assume the first argument in argv[i:] is the command - # name. - return argv[index:] + rest - return [] - - def prepare_prog_name(self, name): - if name == '__main__.py': - return sys.modules['__main__'].__file__ - return name - - def handle_argv(self, prog_name, argv, **kwargs): - self.prog_name = self.prepare_prog_name(prog_name) - argv = self._relocate_args_from_start(argv) - _, argv = self.prepare_args(None, argv) + def convert(self, value, param, ctx): try: - command = argv[0] - except IndexError: - command, argv = 'help', ['help'] - return self.execute(command, argv) - - def execute_from_commandline(self, argv=None): - argv = sys.argv if argv is None else argv - if 'multi' in argv[1:3]: # Issue 1008 - self.respects_app_option = False - try: - sys.exit(determine_exit_status( - super().execute_from_commandline(argv))) - except KeyboardInterrupt: - sys.exit(EX_FAILURE) - - @classmethod - def get_command_info(cls, command, indent=0, - color=None, colored=None, app=None): - colored = term.colored() if colored is None else colored - colored = colored.names[color] if color else lambda x: x - obj = cls.commands[command] - cmd = 'celery {}'.format(colored(command)) - if obj.leaf: - return '|' + text.indent(cmd, indent) - return text.join([ - ' ', - '|' + text.indent(f'{cmd} --help', indent), - obj.list_commands(indent, f'celery {command}', colored, - app=app), - ]) - - @classmethod - def list_commands(cls, indent=0, colored=None, app=None): - colored = term.colored() if colored is None else colored - white = colored.white - ret = [] - for command_cls, commands, color in command_classes: - ret.extend([ - text.indent('+ {}: '.format(white(command_cls)), indent), - '\n'.join( - cls.get_command_info( - command, indent + 4, color, colored, app=app) - for command in commands), - '' - ]) - return '\n'.join(ret).strip() - - def with_pool_option(self, argv): - if len(argv) > 1 and 'worker' in argv[0:3]: - # this command supports custom pools - # that may have to be loaded as early as possible. 
- return (['-P'], ['--pool']) - - def on_concurrency_setup(self): - self.load_extension_commands() - - def load_extension_commands(self): - names = Extensions(self.ext_fmt.format(self=self), - self.register_command).load() - if names: - command_classes.append(('Extensions', names, 'magenta')) - - -if __name__ == '__main__': # pragma: no cover - main() + return find_app(value) + except (ModuleNotFoundError, AttributeError) as e: + self.fail(str(e)) + + +APP = App() + + +@click.group(cls=DYMGroup, invoke_without_command=True) +@click.option('-A', + '--app', + envvar='APP', + cls=CeleryOption, + type=APP, + help_group="Global Options") +@click.option('-b', + '--broker', + envvar='BROKER_URL', + cls=CeleryOption, + help_group="Global Options") +@click.option('--result-backend', + envvar='RESULT_BACKEND', + cls=CeleryOption, + help_group="Global Options") +@click.option('--loader', + envvar='LOADER', + cls=CeleryOption, + help_group="Global Options") +@click.option('--config', + envvar='CONFIG_MODULE', + cls=CeleryOption, + help_group="Global Options") +@click.option('--workdir', + cls=CeleryOption, + help_group="Global Options") +@click.option('-C', + '--no-color', + envvar='NO_COLOR', + is_flag=True, + cls=CeleryOption, + help_group="Global Options") +@click.option('-q', + '--quiet', + is_flag=True, + cls=CeleryOption, + help_group="Global Options") +@click.option('--version', + cls=CeleryOption, + is_flag=True, + help_group="Global Options") +@click.pass_context +def celery(ctx, app, broker, result_backend, loader, config, workdir, + no_color, quiet, version): + """Celery command entrypoint.""" + if version: + click.echo(VERSION_BANNER) + ctx.exit() + elif ctx.invoked_subcommand is None: + click.echo(ctx.get_help()) + ctx.exit() + + if workdir: + os.chdir(workdir) + if loader: + # Default app takes loader from this env (Issue #1066). + os.environ['CELERY_LOADER'] = loader + if broker: + os.environ['CELERY_BROKER_URL'] = broker + if result_backend: + os.environ['CELERY_RESULT_BACKEND'] = result_backend + if config: + os.environ['CELERY_CONFIG_MODULE'] = config + ctx.obj = CLIContext(app=app, no_color=no_color, workdir=workdir, quiet=quiet) + + # User options + worker.params.extend(ctx.obj.app.user_options.get('worker', [])) + beat.params.extend(ctx.obj.app.user_options.get('beat', [])) + events.params.extend(ctx.obj.app.user_options.get('events', [])) + + +@celery.command(cls=CeleryCommand) +@click.pass_context +def report(ctx): + """Shows information useful to include in bug-reports.""" + app = ctx.obj.app + app.loader.import_default_modules() + ctx.obj.echo(app.bugreport()) + + +celery.add_command(purge) +celery.add_command(call) +celery.add_command(beat) +celery.add_command(list_) +celery.add_command(result) +celery.add_command(migrate) +celery.add_command(status) +celery.add_command(worker) +celery.add_command(events) +celery.add_command(inspect) +celery.add_command(control) +celery.add_command(graph) +celery.add_command(upgrade) +celery.add_command(logtool) +celery.add_command(amqp) +celery.add_command(shell) +celery.add_command(multi) + + +def main() -> int: + """Start celery umbrella command. + + This function is the main entrypoint for the CLI. + + :return: The exit code of the CLI. + """ + return celery(auto_envvar_prefix="CELERY") diff --git a/celery/bin/celeryd_detach.py b/celery/bin/celeryd_detach.py deleted file mode 100644 index 724f466554c..00000000000 --- a/celery/bin/celeryd_detach.py +++ /dev/null @@ -1,136 +0,0 @@ -"""Program used to daemonize the worker. 
- -Using :func:`os.execv` as forking and multiprocessing -leads to weird issues (it was a long time ago now, but it -could have something to do with the threading mutex bug) -""" -import argparse -import os -import sys - -import celery -from celery.bin.base import daemon_options -from celery.platforms import EX_FAILURE, detached -from celery.utils.log import get_logger -from celery.utils.nodenames import default_nodename, node_format - -__all__ = ('detached_celeryd', 'detach') - -logger = get_logger(__name__) -C_FAKEFORK = os.environ.get('C_FAKEFORK') - - -def detach(path, argv, logfile=None, pidfile=None, uid=None, - gid=None, umask=None, workdir=None, fake=False, app=None, - executable=None, hostname=None): - """Detach program by argv'.""" - hostname = default_nodename(hostname) - logfile = node_format(logfile, hostname) - pidfile = node_format(pidfile, hostname) - fake = 1 if C_FAKEFORK else fake - with detached(logfile, pidfile, uid, gid, umask, workdir, fake, - after_forkers=False): - try: - if executable is not None: - path = executable - os.execv(path, [path] + argv) - except Exception: # pylint: disable=broad-except - if app is None: - from celery import current_app - app = current_app - app.log.setup_logging_subsystem( - 'ERROR', logfile, hostname=hostname) - logger.critical("Can't exec %r", ' '.join([path] + argv), - exc_info=True) - return EX_FAILURE - - -class detached_celeryd: - """Daemonize the celery worker process.""" - - usage = '%(prog)s [options] [celeryd options]' - version = celery.VERSION_BANNER - description = ('Detaches Celery worker nodes. See `celery worker --help` ' - 'for the list of supported worker arguments.') - command = sys.executable - execv_path = sys.executable - execv_argv = ['-m', 'celery', 'worker'] - - def __init__(self, app=None): - self.app = app - - def create_parser(self, prog_name): - parser = argparse.ArgumentParser( - prog=prog_name, - usage=self.usage, - description=self.description, - ) - self._add_version_argument(parser) - self.add_arguments(parser) - return parser - - def _add_version_argument(self, parser): - parser.add_argument( - '--version', action='version', version=self.version, - ) - - def parse_options(self, prog_name, argv): - parser = self.create_parser(prog_name) - options, leftovers = parser.parse_known_args(argv) - if options.logfile: - leftovers.append(f'--logfile={options.logfile}') - if options.pidfile: - leftovers.append(f'--pidfile={options.pidfile}') - if options.hostname: - leftovers.append(f'--hostname={options.hostname}') - return options, leftovers - - def execute_from_commandline(self, argv=None): - argv = sys.argv if argv is None else argv - prog_name = os.path.basename(argv[0]) - config, argv = self._split_command_line_config(argv) - options, leftovers = self.parse_options(prog_name, argv[1:]) - sys.exit(detach( - app=self.app, path=self.execv_path, - argv=self.execv_argv + leftovers + config, - **vars(options) - )) - - def _split_command_line_config(self, argv): - config = list(self._extract_command_line_config(argv)) - try: - argv = argv[:argv.index('--')] - except ValueError: - pass - return config, argv - - def _extract_command_line_config(self, argv): - # Extracts command-line config appearing after '--': - # celery worker -l info -- worker.prefetch_multiplier=10 - # This to make sure argparse doesn't gobble it up. 
- seen_cargs = 0 - for arg in argv: - if seen_cargs: - yield arg - else: - if arg == '--': - seen_cargs = 1 - yield arg - - def add_arguments(self, parser): - daemon_options(parser, default_pidfile='celeryd.pid') - parser.add_argument('--workdir', default=None) - parser.add_argument('-n', '--hostname') - parser.add_argument( - '--fake', - action='store_true', default=False, - help="Don't fork (for debugging purposes)", - ) - - -def main(app=None): - detached_celeryd(app).execute_from_commandline() - - -if __name__ == '__main__': # pragma: no cover - main() diff --git a/celery/bin/control.py b/celery/bin/control.py index 32f36915b18..fd6e8cbde2b 100644 --- a/celery/bin/control.py +++ b/celery/bin/control.py @@ -1,238 +1,187 @@ """The ``celery control``, ``. inspect`` and ``. status`` programs.""" +from functools import partial + +import click from kombu.utils.json import dumps -from kombu.utils.objects import cached_property -from celery.bin.base import Command -from celery.five import items, string_t -from celery.platforms import EX_UNAVAILABLE, EX_USAGE +from celery.bin.base import COMMA_SEPARATED_LIST, CeleryCommand, CeleryOption +from celery.platforms import EX_UNAVAILABLE from celery.utils import text - - -class _RemoteControl(Command): - - name = None - leaf = False - control_group = None - - def __init__(self, *args, **kwargs): - self.show_body = kwargs.pop('show_body', True) - self.show_reply = kwargs.pop('show_reply', True) - super().__init__(*args, **kwargs) - - def add_arguments(self, parser): - group = parser.add_argument_group('Remote Control Options') - group.add_argument( - '--timeout', '-t', type=float, - help='Timeout in seconds (float) waiting for reply', - ) - group.add_argument( - '--destination', '-d', - help='Comma separated list of destination node names.') - group.add_argument( - '--json', '-j', action='store_true', default=False, - help='Use json as output format.', - ) - - @classmethod - def get_command_info(cls, command, - indent=0, prefix='', color=None, - help=False, app=None, choices=None): - if choices is None: - choices = cls._choices_by_group(app) - meta = choices[command] - if help: - help = '|' + text.indent(meta.help, indent + 4) - else: - help = None - return text.join([ - '|' + text.indent('{}{} {}'.format( - prefix, color(command), meta.signature or ''), indent), - help, - ]) - - @classmethod - def list_commands(cls, indent=0, prefix='', - color=None, help=False, app=None): - choices = cls._choices_by_group(app) - color = color if color else lambda x: x - prefix = prefix + ' ' if prefix else '' - return '\n'.join( - cls.get_command_info(c, indent, prefix, color, help, - app=app, choices=choices) - for c in sorted(choices)) - - def usage(self, command): - return '%(prog)s {} [options] {} [arg1 .. argN]'.format( - command, self.args) - - def call(self, *args, **kwargs): - raise NotImplementedError('call') - - def run(self, *args, **kwargs): - if not args: - raise self.UsageError( - f'Missing {self.name} method. 
See --help') - return self.do_call_method(args, **kwargs) - - def _ensure_fanout_supported(self): - with self.app.connection_for_write() as conn: - if not conn.supports_exchange_type('fanout'): - raise self.Error( - 'Broadcast not supported by transport {!r}'.format( - conn.info()['transport'])) - - def do_call_method(self, args, - timeout=None, destination=None, json=False, **kwargs): - method = args[0] - if method == 'help': - raise self.Error(f"Did you mean '{self.name} --help'?") - try: - meta = self.choices[method] - except KeyError: - raise self.UsageError( - f'Unknown {self.name} method {method}') - - self._ensure_fanout_supported() - - timeout = timeout or meta.default_timeout - if destination and isinstance(destination, string_t): - destination = [dest.strip() for dest in destination.split(',')] - - replies = self.call( - method, - arguments=self.compile_arguments(meta, method, args[1:]), - timeout=timeout, - destination=destination, - callback=None if json else self.say_remote_command_reply, - ) - if not replies: - raise self.Error('No nodes replied within time constraint.', - status=EX_UNAVAILABLE) - if json: - self.out(dumps(replies)) - return replies - - def compile_arguments(self, meta, method, args): - args = list(args) - kw = {} - if meta.args: - kw.update({ - k: v for k, v in self._consume_args(meta, method, args) - }) - if meta.variadic: - kw.update({meta.variadic: args}) - if not kw and args: - raise self.Error( - f'Command {method!r} takes no arguments.', - status=EX_USAGE) - return kw or {} - - def _consume_args(self, meta, method, args): - i = 0 - try: - for i, arg in enumerate(args): - try: - name, typ = meta.args[i] - except IndexError: - if meta.variadic: - break - raise self.Error( - 'Command {!r} takes arguments: {}'.format( - method, meta.signature), - status=EX_USAGE) - else: - yield name, typ(arg) if typ is not None else arg - finally: - args[:] = args[i:] - - @classmethod - def _choices_by_group(cls, app): - from celery.worker.control import Panel - - # need to import task modules for custom user-remote control commands. - app.loader.import_default_modules() - - return { - name: info for name, info in items(Panel.meta) - if info.type == cls.control_group and info.visible - } - - @cached_property - def choices(self): - return self._choices_by_group(self.app) - - @property - def epilog(self): - return '\n'.join([ - '[Commands]', - self.list_commands(indent=4, help=True, app=self.app) - ]) - - -class inspect(_RemoteControl): +from celery.worker.control import Panel + + +def _say_remote_command_reply(ctx, replies, show_reply=False): + node = next(iter(replies)) # <-- take first. 
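+    # `replies` maps node name -> reply payload; the callback receives
+    # one reply at a time, so the first (only) key is the node name.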
+ reply = replies[node] + node = ctx.obj.style(f'{node}: ', fg='cyan', bold=True) + status, preply = ctx.obj.pretty(reply) + ctx.obj.say_chat('->', f'{node}{status}', + text.indent(preply, 4) if show_reply else '', + show_body=show_reply) + + +def _consume_arguments(meta, method, args): + i = 0 + try: + for i, arg in enumerate(args): + try: + name, typ = meta.args[i] + except IndexError: + if meta.variadic: + break + raise click.UsageError( + 'Command {0!r} takes arguments: {1}'.format( + method, meta.signature)) + else: + yield name, typ(arg) if typ is not None else arg + finally: + args[:] = args[i:] + + +def _compile_arguments(action, args): + meta = Panel.meta[action] + arguments = {} + if meta.args: + arguments.update({ + k: v for k, v in _consume_arguments(meta, action, args) + }) + if meta.variadic: + arguments.update({meta.variadic: args}) + return arguments + + +@click.command(cls=CeleryCommand) +@click.option('-t', + '--timeout', + cls=CeleryOption, + type=float, + default=1.0, + help_group='Remote Control Options', + help='Timeout in seconds waiting for reply.') +@click.option('-d', + '--destination', + cls=CeleryOption, + type=COMMA_SEPARATED_LIST, + help_group='Remote Control Options', + help='Comma separated list of destination node names.') +@click.option('-j', + '--json', + cls=CeleryOption, + is_flag=True, + help_group='Remote Control Options', + help='Use json as output format.') +@click.pass_context +def status(ctx, timeout, destination, json, **kwargs): + """Show list of workers that are online.""" + callback = None if json else partial(_say_remote_command_reply, ctx) + replies = ctx.obj.app.control.inspect(timeout=timeout, + destination=destination, + callback=callback).ping() + + if not replies: + ctx.obj.echo('No nodes replied within time constraint') + return EX_UNAVAILABLE + + if json: + ctx.obj.echo(dumps(replies)) + nodecount = len(replies) + if not kwargs.get('quiet', False): + ctx.obj.echo('\n{0} {1} online.'.format( + nodecount, text.pluralize(nodecount, 'node'))) + + +@click.command(cls=CeleryCommand) +@click.argument("action", type=click.Choice([ + name for name, info in Panel.meta.items() + if info.type == 'inspect' and info.visible +])) +@click.option('-t', + '--timeout', + cls=CeleryOption, + type=float, + default=1.0, + help_group='Remote Control Options', + help='Timeout in seconds waiting for reply.') +@click.option('-d', + '--destination', + cls=CeleryOption, + type=COMMA_SEPARATED_LIST, + help_group='Remote Control Options', + help='Comma separated list of destination node names.') +@click.option('-j', + '--json', + cls=CeleryOption, + is_flag=True, + help_group='Remote Control Options', + help='Use json as output format.') +@click.pass_context +def inspect(ctx, action, timeout, destination, json, **kwargs): """Inspect the worker at runtime. Availability: RabbitMQ (AMQP) and Redis transports. - - Examples: - .. 
code-block:: console - - $ celery inspect active --timeout=5 - $ celery inspect scheduled -d worker1@example.com - $ celery inspect revoked -d w1@e.com,w2@e.com """ - - name = 'inspect' - control_group = 'inspect' - - def call(self, method, arguments, **options): - return self.app.control.inspect(**options)._request( - method, **arguments) - - -class control(_RemoteControl): + callback = None if json else partial(_say_remote_command_reply, ctx, + show_reply=True) + replies = ctx.obj.app.control.inspect(timeout=timeout, + destination=destination, + callback=callback)._request(action) + + if not replies: + ctx.obj.echo('No nodes replied within time constraint') + return EX_UNAVAILABLE + + if json: + ctx.obj.echo(dumps(replies)) + nodecount = len(replies) + if not ctx.obj.quiet: + ctx.obj.echo('\n{0} {1} online.'.format( + nodecount, text.pluralize(nodecount, 'node'))) + + +@click.command(cls=CeleryCommand, + context_settings={'allow_extra_args': True}) +@click.argument("action", type=click.Choice([ + name for name, info in Panel.meta.items() + if info.type == 'control' and info.visible +])) +@click.option('-t', + '--timeout', + cls=CeleryOption, + type=float, + default=1.0, + help_group='Remote Control Options', + help='Timeout in seconds waiting for reply.') +@click.option('-d', + '--destination', + cls=CeleryOption, + type=COMMA_SEPARATED_LIST, + help_group='Remote Control Options', + help='Comma separated list of destination node names.') +@click.option('-j', + '--json', + cls=CeleryOption, + is_flag=True, + help_group='Remote Control Options', + help='Use json as output format.') +@click.pass_context +def control(ctx, action, timeout, destination, json): """Workers remote control. Availability: RabbitMQ (AMQP), Redis, and MongoDB transports. - - Examples: - .. 
code-block:: console - - $ celery control enable_events --timeout=5 - $ celery control -d worker1@example.com enable_events - $ celery control -d w1.e.com,w2.e.com enable_events - - $ celery control -d w1.e.com add_consumer queue_name - $ celery control -d w1.e.com cancel_consumer queue_name - - $ celery control add_consumer queue exchange direct rkey """ - - name = 'control' - control_group = 'control' - - def call(self, method, arguments, **options): - return self.app.control.broadcast( - method, arguments=arguments, reply=True, **options) - - -class status(Command): - """Show list of workers that are online.""" - - option_list = inspect.option_list - - def run(self, *args, **kwargs): - I = inspect( - app=self.app, - no_color=kwargs.get('no_color', False), - stdout=self.stdout, stderr=self.stderr, - show_reply=False, show_body=False, quiet=True, - ) - replies = I.run('ping', **kwargs) - if not replies: - raise self.Error('No nodes replied within time constraint', - status=EX_UNAVAILABLE) - nodecount = len(replies) - if not kwargs.get('quiet', False): - self.out('\n{} {} online.'.format( - nodecount, text.pluralize(nodecount, 'node'))) + callback = None if json else partial(_say_remote_command_reply, ctx, + show_reply=True) + args = ctx.args + arguments = _compile_arguments(action, args) + replies = ctx.obj.app.control.broadcast(action, timeout=timeout, + destination=destination, + callback=callback, + reply=True, + arguments=arguments) + + if not replies: + ctx.obj.echo('No nodes replied within time constraint') + return EX_UNAVAILABLE + + if json: + ctx.obj.echo(dumps(replies)) diff --git a/celery/bin/events.py b/celery/bin/events.py index 104ba48e007..a9978a1a0fe 100644 --- a/celery/bin/events.py +++ b/celery/bin/events.py @@ -1,177 +1,93 @@ -"""The :program:`celery events` command. - -.. program:: celery events - -.. seealso:: - - See :ref:`preload-options` and :ref:`daemon-options`. - -.. cmdoption:: -d, --dump - - Dump events to stdout. - -.. cmdoption:: -c, --camera - - Take snapshots of events using this camera. - -.. cmdoption:: --detach - - Camera: Detach and run in the background as a daemon. - -.. cmdoption:: -F, --freq, --frequency - - Camera: Shutter frequency. Default is every 1.0 seconds. - -.. cmdoption:: -r, --maxrate - - Camera: Optional shutter rate limit (e.g., 10/m). - -.. cmdoption:: -l, --loglevel - - Logging level, choose between `DEBUG`, `INFO`, `WARNING`, - `ERROR`, `CRITICAL`, or `FATAL`. Default is INFO. - -.. cmdoption:: -f, --logfile - - Path to log file. If no logfile is specified, `stderr` is used. - -.. cmdoption:: --pidfile - - Optional file used to store the process pid. - - The program won't start if this file already exists - and the pid is still alive. - -.. cmdoption:: --uid - - User id, or user name of the user to run as after detaching. - -.. cmdoption:: --gid - - Group id, or group name of the main group to change to after - detaching. - -.. cmdoption:: --umask - - Effective umask (in octal) of the process after detaching. Inherits - the umask of the parent process by default. - -.. cmdoption:: --workdir - - Optional directory to change to after detaching. - -.. cmdoption:: --executable - - Executable to use for the detached process. -""" +"""The ``celery events`` program.""" import sys from functools import partial -from celery.bin.base import Command, daemon_options -from celery.platforms import detached, set_process_title, strargv - -__all__ = ('events',) - -HELP = __doc__ - - -class events(Command): - """Event-stream utilities. 
- - Notes: - .. code-block:: console +import click - # - Start graphical monitor (requires curses) - $ celery events --app=proj - $ celery events -d --app=proj - # - Dump events to screen. - $ celery events -b amqp:// - # - Run snapshot camera. - $ celery events -c [options] - - Examples: - .. code-block:: console - - $ celery events - $ celery events -d - $ celery events -c mod.attr -F 1.0 --detach --maxrate=100/m -l info - """ +from celery.bin.base import LOG_LEVEL, CeleryDaemonCommand, CeleryOption +from celery.platforms import detached, set_process_title, strargv - doc = HELP - supports_args = False - def run(self, dump=False, camera=None, frequency=1.0, maxrate=None, - loglevel='INFO', logfile=None, prog_name='celery events', - pidfile=None, uid=None, gid=None, umask=None, - workdir=None, detach=False, **kwargs): - self.prog_name = prog_name +def _set_process_status(prog, info=''): + prog = '{0}:{1}'.format('celery events', prog) + info = '{0} {1}'.format(info, strargv(sys.argv)) + return set_process_title(prog, info=info) - if dump: - return self.run_evdump() - if camera: - return self.run_evcam(camera, freq=frequency, maxrate=maxrate, - loglevel=loglevel, logfile=logfile, - pidfile=pidfile, uid=uid, gid=gid, - umask=umask, - workdir=workdir, - detach=detach) - return self.run_evtop() - def run_evdump(self): - from celery.events.dumper import evdump - self.set_process_status('dump') - return evdump(app=self.app) +def _run_evdump(app): + from celery.events.dumper import evdump + _set_process_status('dump') + return evdump(app=app) - def run_evtop(self): - from celery.events.cursesmon import evtop - self.set_process_status('top') - return evtop(app=self.app) - def run_evcam(self, camera, logfile=None, pidfile=None, uid=None, - gid=None, umask=None, workdir=None, - detach=False, **kwargs): - from celery.events.snapshot import evcam - self.set_process_status('cam') - kwargs['app'] = self.app - cam = partial(evcam, camera, - logfile=logfile, pidfile=pidfile, **kwargs) +def _run_evcam(camera, app, logfile=None, pidfile=None, uid=None, + gid=None, umask=None, workdir=None, + detach=False, **kwargs): + from celery.events.snapshot import evcam + _set_process_status('cam') + kwargs['app'] = app + cam = partial(evcam, camera, + logfile=logfile, pidfile=pidfile, **kwargs) - if detach: - with detached(logfile, pidfile, uid, gid, umask, workdir): - return cam() - else: + if detach: + with detached(logfile, pidfile, uid, gid, umask, workdir): return cam() + else: + return cam() - def set_process_status(self, prog, info=''): - prog = f'{self.prog_name}:{prog}' - info = '{} {}'.format(info, strargv(sys.argv)) - return set_process_title(prog, info=info) - - def add_arguments(self, parser): - dopts = parser.add_argument_group('Dumper') - dopts.add_argument('-d', '--dump', action='store_true', default=False) - - copts = parser.add_argument_group('Snapshot') - copts.add_argument('-c', '--camera') - copts.add_argument('--detach', action='store_true', default=False) - copts.add_argument('-F', '--frequency', '--freq', - type=float, default=1.0) - copts.add_argument('-r', '--maxrate') - copts.add_argument('-l', '--loglevel', default='INFO') - daemon_options(parser, default_pidfile='celeryev.pid') - - user_options = self.app.user_options['events'] - if user_options: - self.add_compat_options( - parser.add_argument_group('User Options'), - user_options) - - -def main(): - ev = events() - ev.execute_from_commandline() - - -if __name__ == '__main__': # pragma: no cover - main() +def _run_evtop(app): + try: + 
from celery.events.cursesmon import evtop
+        _set_process_status('top')
+        return evtop(app=app)
+    except ModuleNotFoundError as e:
+        if e.name == '_curses':
+            # TODO: Improve this error message
+            raise click.UsageError("The curses module is required for this command.")
+        # Any other missing module is unexpected; don't swallow it.
+        raise
+
+
+@click.command(cls=CeleryDaemonCommand)
+@click.option('-d',
+              '--dump',
+              cls=CeleryOption,
+              is_flag=True,
+              help_group='Dumper')
+@click.option('-c',
+              '--camera',
+              cls=CeleryOption,
+              help_group='Snapshot')
+@click.option('--detach',
+              cls=CeleryOption,
+              is_flag=True,
+              help_group='Snapshot')
+@click.option('-F', '--frequency', '--freq',
+              type=float,
+              default=1.0,
+              cls=CeleryOption,
+              help_group='Snapshot')
+@click.option('-r', '--maxrate',
+              cls=CeleryOption,
+              help_group='Snapshot')
+@click.option('-l',
+              '--loglevel',
+              default='WARNING',
+              cls=CeleryOption,
+              type=LOG_LEVEL,
+              help_group="Snapshot",
+              help="Logging level.")
+@click.pass_context
+def events(ctx, dump, camera, detach, frequency, maxrate, loglevel, **kwargs):
+    """Event-stream utilities."""
+    app = ctx.obj.app
+    if dump:
+        return _run_evdump(app)
+
+    if camera:
+        return _run_evcam(camera, app=app, freq=frequency, maxrate=maxrate,
+                          loglevel=loglevel,
+                          detach=detach,
+                          **kwargs)
+
+    return _run_evtop(app)
diff --git a/celery/bin/graph.py b/celery/bin/graph.py
index 9b44088779b..1cdbc25f5e4 100644
--- a/celery/bin/graph.py
+++ b/celery/bin/graph.py
@@ -1,203 +1,195 @@
-"""The :program:`celery graph` command.
-
-.. program:: celery graph
-"""
+"""The ``celery graph`` command."""
+import sys
 from operator import itemgetter
 
-from celery.five import items
+import click
+
+from celery.bin.base import CeleryCommand
 from celery.utils.graph import DependencyGraph, GraphFormatter
 
-from .base import Command
-
-__all__ = ('graph',)
+
+@click.group()
+def graph():
+    """The ``celery graph`` command."""
+
+
+@graph.command(cls=CeleryCommand, context_settings={'allow_extra_args': True})
+@click.pass_context
+def bootsteps(ctx):
+    """Display bootsteps graph."""
+    worker = ctx.obj.app.WorkController()
+    include = {arg.lower() for arg in ctx.args or ['worker', 'consumer']}
+    if 'worker' in include:
+        worker_graph = worker.blueprint.graph
+        if 'consumer' in include:
+            worker.blueprint.connect_with(worker.consumer.blueprint)
+    else:
+        worker_graph = worker.consumer.blueprint.graph
+    worker_graph.to_dot(sys.stdout)
+
+
+@graph.command(cls=CeleryCommand, context_settings={'allow_extra_args': True})
+@click.pass_context
+def workers(ctx):
+    """Display workers graph."""
+    def simplearg(arg):
+        return maybe_list(itemgetter(0, 2)(arg.partition(':')))
+
+    def maybe_list(l, sep=','):
+        return l[0], l[1].split(sep) if sep in l[1] else l[1]
+
+    args = dict(simplearg(arg) for arg in ctx.args)
+    generic = 'generic' in args
+
+    def generic_label(node):
+        return '{0} ({1}://)'.format(type(node).__name__,
+                                     node._label.split('://')[0])
+
+    class Node(object):
+        force_label = None
+        scheme = {}
+
+        def __init__(self, label, pos=None):
+            self._label = label
+            self.pos = pos
+
+        def label(self):
+            return self._label
+
+        def __str__(self):
+            return self.label()
+
+    class Thread(Node):
+        scheme = {
+            'fillcolor': 'lightcyan4',
+            'fontcolor': 'yellow',
+            'shape': 'oval',
+            'fontsize': 10,
+            'width': 0.3,
+            'color': 'black',
+        }
+
+        def __init__(self, label, **kwargs):
+            self.real_label = label
+            super(Thread, self).__init__(
+                label='thr-{0}'.format(next(tids)),
+                pos=0,
+            )
 
-class graph(Command):
+    class Formatter(GraphFormatter):
 
-    args = """
[arguments] - ..... bootsteps [worker] [consumer] - ..... workers [enumerate] - """ - - def run(self, what=None, *args, **kwargs): - map = {'bootsteps': self.bootsteps, 'workers': self.workers} - if not what: - raise self.UsageError('missing type') - elif what not in map: - raise self.Error('no graph {} in {}'.format(what, '|'.join(map))) - return map[what](*args, **kwargs) - - def bootsteps(self, *args, **kwargs): - worker = self.app.WorkController() - include = {arg.lower() for arg in args or ['worker', 'consumer']} - if 'worker' in include: - worker_graph = worker.blueprint.graph - if 'consumer' in include: - worker.blueprint.connect_with(worker.consumer.blueprint) - else: - worker_graph = worker.consumer.blueprint.graph - worker_graph.to_dot(self.stdout) - - def workers(self, *args, **kwargs): - - def simplearg(arg): - return maybe_list(itemgetter(0, 2)(arg.partition(':'))) - - def maybe_list(l, sep=','): - return (l[0], l[1].split(sep) if sep in l[1] else l[1]) - - args = dict(simplearg(arg) for arg in args) - generic = 'generic' in args - - def generic_label(node): - return '{} ({}://)'.format(type(node).__name__, - node._label.split('://')[0]) - - class Node: - force_label = None - scheme = {} - - def __init__(self, label, pos=None): - self._label = label - self.pos = pos - - def label(self): - return self._label - - def __str__(self): - return self.label() - - class Thread(Node): - scheme = { - 'fillcolor': 'lightcyan4', - 'fontcolor': 'yellow', - 'shape': 'oval', - 'fontsize': 10, - 'width': 0.3, - 'color': 'black', - } - - def __init__(self, label, **kwargs): - self.real_label = label - super().__init__( - label='thr-{}'.format(next(tids)), - pos=0, - ) - - class Formatter(GraphFormatter): - - def label(self, obj): - return obj and obj.label() - - def node(self, obj): - scheme = dict(obj.scheme) if obj.pos else obj.scheme - if isinstance(obj, Thread): - scheme['label'] = obj.real_label - return self.draw_node( - obj, dict(self.node_scheme, **scheme), - ) - - def terminal_node(self, obj): - return self.draw_node( - obj, dict(self.term_scheme, **obj.scheme), - ) - - def edge(self, a, b, **attrs): - if isinstance(a, Thread): - attrs.update(arrowhead='none', arrowtail='tee') - return self.draw_edge(a, b, self.edge_scheme, attrs) - - def subscript(n): - S = {'0': '₀', '1': '₁', '2': '₂', '3': '₃', '4': '₄', - '5': '₅', '6': '₆', '7': '₇', '8': '₈', '9': '₉'} - return ''.join([S[i] for i in str(n)]) - - class Worker(Node): - pass - - class Backend(Node): - scheme = { - 'shape': 'folder', - 'width': 2, - 'height': 1, - 'color': 'black', - 'fillcolor': 'peachpuff3', - } - - def label(self): - return generic_label(self) if generic else self._label - - class Broker(Node): - scheme = { - 'shape': 'circle', - 'fillcolor': 'cadetblue3', - 'color': 'cadetblue4', - 'height': 1, - } - - def label(self): - return generic_label(self) if generic else self._label - - from itertools import count - tids = count(1) - Wmax = int(args.get('wmax', 4) or 0) - Tmax = int(args.get('tmax', 3) or 0) - - def maybe_abbr(l, name, max=Wmax): - size = len(l) - abbr = max and size > max - if 'enumerate' in args: - l = ['{}{}'.format(name, subscript(i + 1)) - for i, obj in enumerate(l)] - if abbr: - l = l[0:max - 1] + [l[size - 1]] - l[max - 2] = '{}⎨…{}⎬'.format( - name[0], subscript(size - (max - 1))) - return l - - try: - workers = args['nodes'] - threads = args.get('threads') or [] - except KeyError: - replies = self.app.control.inspect().stats() or {} - workers, threads = [], [] - for worker, reply in 
items(replies): - workers.append(worker) - threads.append(reply['pool']['max-concurrency']) - - wlen = len(workers) - backend = args.get('backend', self.app.conf.result_backend) - threads_for = {} - workers = maybe_abbr(workers, 'Worker') - if Wmax and wlen > Wmax: - threads = threads[0:3] + [threads[-1]] - for i, threads in enumerate(threads): - threads_for[workers[i]] = maybe_abbr( - list(range(int(threads))), 'P', Tmax, + def label(self, obj): + return obj and obj.label() + + def node(self, obj): + scheme = dict(obj.scheme) if obj.pos else obj.scheme + if isinstance(obj, Thread): + scheme['label'] = obj.real_label + return self.draw_node( + obj, dict(self.node_scheme, **scheme), + ) + + def terminal_node(self, obj): + return self.draw_node( + obj, dict(self.term_scheme, **obj.scheme), ) - broker = Broker(args.get( - 'broker', self.app.connection_for_read().as_uri())) - backend = Backend(backend) if backend else None - deps = DependencyGraph(formatter=Formatter()) - deps.add_arc(broker) + def edge(self, a, b, **attrs): + if isinstance(a, Thread): + attrs.update(arrowhead='none', arrowtail='tee') + return self.draw_edge(a, b, self.edge_scheme, attrs) + + def subscript(n): + S = {'0': '₀', '1': '₁', '2': '₂', '3': '₃', '4': '₄', + '5': '₅', '6': '₆', '7': '₇', '8': '₈', '9': '₉'} + return ''.join([S[i] for i in str(n)]) + + class Worker(Node): + pass + + class Backend(Node): + scheme = { + 'shape': 'folder', + 'width': 2, + 'height': 1, + 'color': 'black', + 'fillcolor': 'peachpuff3', + } + + def label(self): + return generic_label(self) if generic else self._label + + class Broker(Node): + scheme = { + 'shape': 'circle', + 'fillcolor': 'cadetblue3', + 'color': 'cadetblue4', + 'height': 1, + } + + def label(self): + return generic_label(self) if generic else self._label + + from itertools import count + tids = count(1) + Wmax = int(args.get('wmax', 4) or 0) + Tmax = int(args.get('tmax', 3) or 0) + + def maybe_abbr(l, name, max=Wmax): + size = len(l) + abbr = max and size > max + if 'enumerate' in args: + l = ['{0}{1}'.format(name, subscript(i + 1)) + for i, obj in enumerate(l)] + if abbr: + l = l[0:max - 1] + [l[size - 1]] + l[max - 2] = '{0}⎨…{1}⎬'.format( + name[0], subscript(size - (max - 1))) + return l + + app = ctx.obj.app + try: + workers = args['nodes'] + threads = args.get('threads') or [] + except KeyError: + replies = app.control.inspect().stats() or {} + workers, threads = [], [] + for worker, reply in replies.items(): + workers.append(worker) + threads.append(reply['pool']['max-concurrency']) + + wlen = len(workers) + backend = args.get('backend', app.conf.result_backend) + threads_for = {} + workers = maybe_abbr(workers, 'Worker') + if Wmax and wlen > Wmax: + threads = threads[0:3] + [threads[-1]] + for i, threads in enumerate(threads): + threads_for[workers[i]] = maybe_abbr( + list(range(int(threads))), 'P', Tmax, + ) + + broker = Broker(args.get( + 'broker', app.connection_for_read().as_uri())) + backend = Backend(backend) if backend else None + deps = DependencyGraph(formatter=Formatter()) + deps.add_arc(broker) + if backend: + deps.add_arc(backend) + curworker = [0] + for i, worker in enumerate(workers): + worker = Worker(worker, pos=i) + deps.add_arc(worker) + deps.add_edge(worker, broker) if backend: - deps.add_arc(backend) - curworker = [0] - for i, worker in enumerate(workers): - worker = Worker(worker, pos=i) - deps.add_arc(worker) - deps.add_edge(worker, broker) - if backend: - deps.add_edge(worker, backend) - threads = threads_for.get(worker._label) - if threads: - 
for thread in threads: - thread = Thread(thread) - deps.add_arc(thread) - deps.add_edge(thread, worker) - - curworker[0] += 1 - - deps.to_dot(self.stdout) + deps.add_edge(worker, backend) + threads = threads_for.get(worker._label) + if threads: + for thread in threads: + thread = Thread(thread) + deps.add_arc(thread) + deps.add_edge(thread, worker) + + curworker[0] += 1 + + deps.to_dot(sys.stdout) diff --git a/celery/bin/list.py b/celery/bin/list.py index 00bc96455f2..47d71045fd0 100644 --- a/celery/bin/list.py +++ b/celery/bin/list.py @@ -1,44 +1,36 @@ """The ``celery list bindings`` command, used to inspect queue bindings.""" -from celery.bin.base import Command +import click +from celery.bin.base import CeleryCommand -class list_(Command): + +@click.group(name="list") +def list_(): """Get info from broker. Note: - For RabbitMQ the management plugin is required. - - Example: - .. code-block:: console - $ celery list bindings + For RabbitMQ the management plugin is required. """ - args = '[bindings]' - def list_bindings(self, management): +@list_.command(cls=CeleryCommand) +@click.pass_context +def bindings(ctx): + """Inspect queue bindings.""" + # TODO: Consider using a table formatter for this command. + app = ctx.obj.app + with app.connection() as conn: + app.amqp.TaskConsumer(conn).declare() + try: - bindings = management.get_bindings() + bindings = conn.manager.get_bindings() except NotImplementedError: - raise self.Error('Your transport cannot list bindings.') + raise click.UsageError('Your transport cannot list bindings.') def fmt(q, e, r): - return self.out(f'{q:<28} {e:<28} {r}') + ctx.obj.echo('{0:<28} {1:<28} {2}'.format(q, e, r)) fmt('Queue', 'Exchange', 'Routing Key') fmt('-' * 16, '-' * 16, '-' * 16) for b in bindings: fmt(b['destination'], b['source'], b['routing_key']) - - def run(self, what=None, *_, **kw): - topics = {'bindings': self.list_bindings} - available = ', '.join(topics) - if not what: - raise self.UsageError( - f'Missing argument, specify one of: {available}') - if what not in topics: - raise self.UsageError( - 'unknown topic {!r} (choose one of: {})'.format( - what, available)) - with self.app.connection() as conn: - self.app.amqp.TaskConsumer(conn).declare() - topics[what](conn.manager) diff --git a/celery/bin/logtool.py b/celery/bin/logtool.py index 48e0ac2dd4a..6430aad964e 100644 --- a/celery/bin/logtool.py +++ b/celery/bin/logtool.py @@ -1,12 +1,11 @@ -"""The :program:`celery logtool` command. - -.. program:: celery logtool -""" +"""The ``celery logtool`` command.""" import re from collections import Counter from fileinput import FileInput -from .base import Command +import click + +from celery.bin.base import CeleryCommand __all__ = ('logtool',) @@ -19,12 +18,10 @@ REPORT_FORMAT = """ Report ====== - Task total: {task[total]} Task errors: {task[errors]} Task success: {task[succeeded]} Task completed: {task[completed]} - Tasks ===== {task[types].format} @@ -35,7 +32,7 @@ class _task_counts(list): @property def format(self): - return '\n'.join('{}: {}'.format(*i) for i in self) + return '\n'.join('{0}: {1}'.format(*i) for i in self) def task_info(line): @@ -43,7 +40,7 @@ def task_info(line): return m.groups() -class Audit: +class Audit(object): def __init__(self, on_task_error=None, on_trace=None, on_debug=None): self.ids = set() @@ -113,53 +110,46 @@ def report(self): } -class logtool(Command): +@click.group() +def logtool(): """The ``celery logtool`` command.""" - args = """ [arguments] - ..... stats [file1|- [file2 [...]]] - ..... 
traces [file1|- [file2 [...]]] - ..... errors [file1|- [file2 [...]]] - ..... incomplete [file1|- [file2 [...]]] - ..... debug [file1|- [file2 [...]]] - """ - - def run(self, what=None, *files, **kwargs): - map = { - 'stats': self.stats, - 'traces': self.traces, - 'errors': self.errors, - 'incomplete': self.incomplete, - 'debug': self.debug, - } - if not what: - raise self.UsageError('missing action') - elif what not in map: - raise self.Error( - 'action {} not in {}'.format(what, '|'.join(map)), - ) - return map[what](files) +@logtool.command(cls=CeleryCommand) +@click.argument('files', nargs=-1) +@click.pass_context +def stats(ctx, files): + ctx.obj.echo(REPORT_FORMAT.format( + **Audit().run(files).report() + )) + + +@logtool.command(cls=CeleryCommand) +@click.argument('files', nargs=-1) +@click.pass_context +def traces(ctx, files): + Audit(on_trace=ctx.obj.echo).run(files) - def stats(self, files): - self.out(REPORT_FORMAT.format( - **Audit().run(files).report() - )) - def traces(self, files): - Audit(on_trace=self.out).run(files) +@logtool.command(cls=CeleryCommand) +@click.argument('files', nargs=-1) +@click.pass_context +def errors(ctx, files): + Audit(on_task_error=lambda line, *_: ctx.obj.echo(line)).run(files) - def errors(self, files): - Audit(on_task_error=self.say1).run(files) - def incomplete(self, files): - audit = Audit() - audit.run(files) - for task_id in audit.incomplete_tasks(): - self.error(f'Did not complete: {task_id!r}') +@logtool.command(cls=CeleryCommand) +@click.argument('files', nargs=-1) +@click.pass_context +def incomplete(ctx, files): + audit = Audit() + audit.run(files) + for task_id in audit.incomplete_tasks(): + ctx.obj.echo(f'Did not complete: {task_id}') - def debug(self, files): - Audit(on_debug=self.out).run(files) - def say1(self, line, *_): - self.out(line) +@logtool.command(cls=CeleryCommand) +@click.argument('files', nargs=-1) +@click.pass_context +def debug(ctx, files): + Audit(on_debug=ctx.obj.echo).run(files) diff --git a/celery/bin/migrate.py b/celery/bin/migrate.py index 5fdd4aa6e3f..c5ba9b33c43 100644 --- a/celery/bin/migrate.py +++ b/celery/bin/migrate.py @@ -1,65 +1,62 @@ """The ``celery migrate`` command, used to filter and move messages.""" -from celery.bin.base import Command - -MIGRATE_PROGRESS_FMT = """\ -Migrating task {state.count}/{state.strtotal}: \ -{body[task]}[{body[id]}]\ -""" - - -class migrate(Command): +import click +from kombu import Connection + +from celery.bin.base import CeleryCommand, CeleryOption +from celery.contrib.migrate import migrate_tasks + + +@click.command(cls=CeleryCommand) +@click.argument('source') +@click.argument('destination') +@click.option('-n', + '--limit', + cls=CeleryOption, + type=int, + help_group='Migration Options', + help='Number of tasks to consume.') +@click.option('-t', + '--timeout', + cls=CeleryOption, + type=float, + help_group='Migration Options', + help='Timeout in seconds waiting for tasks.') +@click.option('-a', + '--ack-messages', + cls=CeleryOption, + is_flag=True, + help_group='Migration Options', + help='Ack messages from source broker.') +@click.option('-T', + '--tasks', + cls=CeleryOption, + help_group='Migration Options', + help='List of task names to filter on.') +@click.option('-Q', + '--queues', + cls=CeleryOption, + help_group='Migration Options', + help='List of queues to migrate.') +@click.option('-F', + '--forever', + cls=CeleryOption, + is_flag=True, + help_group='Migration Options', + help='Continually migrate tasks until killed.') +@click.pass_context +def 
migrate(ctx, source, destination, **kwargs):
     """Migrate tasks from one broker to another.
 
     Warning:
+
         This command is experimental, make sure you have a backup of
         the tasks before you continue.
-
-    Example:
-        .. code-block:: console
-
-            $ celery migrate amqp://A.example.com amqp://guest@B.example.com//
-            $ celery migrate redis://localhost amqp://guest@localhost//
     """
-
-    args = '<source_url> <destination_url>'
-    progress_fmt = MIGRATE_PROGRESS_FMT
-
-    def add_arguments(self, parser):
-        group = parser.add_argument_group('Migration Options')
-        group.add_argument(
-            '--limit', '-n', type=int,
-            help='Number of tasks to consume (int)',
-        )
-        group.add_argument(
-            '--timeout', '-t', type=float, default=1.0,
-            help='Timeout in seconds (float) waiting for tasks',
-        )
-        group.add_argument(
-            '--ack-messages', '-a', action='store_true', default=False,
-            help='Ack messages from source broker.',
-        )
-        group.add_argument(
-            '--tasks', '-T',
-            help='List of task names to filter on.',
-        )
-        group.add_argument(
-            '--queues', '-Q',
-            help='List of queues to migrate.',
-        )
-        group.add_argument(
-            '--forever', '-F', action='store_true', default=False,
-            help='Continually migrate tasks until killed.',
-        )
-
-    def on_migrate_task(self, state, body, message):
-        self.out(self.progress_fmt.format(state=state, body=body))
-
-    def run(self, source, destination, **kwargs):
-        from kombu import Connection
-
-        from celery.contrib.migrate import migrate_tasks
-
-        migrate_tasks(Connection(source),
-                      Connection(destination),
-                      callback=self.on_migrate_task,
-                      **kwargs)
+    # TODO: Use a progress bar
+    def on_migrate_task(state, body, message):
+        ctx.obj.echo(f"Migrating task {state.count}/{state.strtotal}: {body}")
+
+    migrate_tasks(Connection(source),
+                  Connection(destination),
+                  callback=on_migrate_task,
+                  **kwargs)
diff --git a/celery/bin/multi.py b/celery/bin/multi.py
index a0f7c0c9734..d25325df1ba 100644
--- a/celery/bin/multi.py
+++ b/celery/bin/multi.py
@@ -67,7 +67,7 @@
     $ celery multi show 10 -l INFO -Q:1-3 images,video -Q:4,5 data
         -Q default -L:4,5 DEBUG
 
-    $ # Additional options are added to each celery worker' command,
+    $ # Additional options are added to each celery worker's command,
     $ # but you can also modify the options for ranges of, or specific workers
 
     $ # 3 workers: Two with 3 processes, and one with 10 processes.
@@ -103,10 +103,12 @@
 import sys
 from functools import wraps
 
+import click
 from kombu.utils.objects import cached_property
 
 from celery import VERSION_BANNER
 from celery.apps.multi import Cluster, MultiParser, NamespacedOptionParser
+from celery.bin.base import CeleryCommand
 from celery.platforms import EX_FAILURE, EX_OK, signals
 from celery.utils import term
 from celery.utils.text import pluralize
@@ -165,7 +167,7 @@ def _inner(self, *argv, **kwargs):
         return _inner
 
 
-class TermLogger:
+class TermLogger(object):
     splash_text = 'celery multi v{version}'
     splash_context = {'version': VERSION_BANNER}
@@ -275,7 +277,7 @@ def call_command(self, command, argv):
         try:
             return self.commands[command](*argv) or EX_OK
         except KeyError:
-            return self.error(f'Invalid command: {command}')
+            return self.error('Invalid command: {0}'.format(command))
 
     def _handle_reserved_options(self, argv):
         argv = list(argv)  # don't modify callers argv.
@@ -400,7 +402,7 @@ def on_still_waiting_for(self, nodes):
         num_left = len(nodes)
         if num_left:
             self.note(self.colored.blue(
-                '> Waiting for {} {} -> {}...'.format(
+                '> Waiting for {0} {1} -> {2}...'.format(
                     num_left, pluralize(num_left, 'node'),
                     ', '.join(str(node.pid) for node in nodes)),
             ), newline=False)
@@ -417,17 +419,17 @@ def on_node_signal_dead(self, node):
             node))
 
     def on_node_start(self, node):
-        self.note(f'\t> {node.name}: ', newline=False)
+        self.note('\t> {0.name}: '.format(node), newline=False)
 
     def on_node_restart(self, node):
         self.note(self.colored.blue(
-            f'> Restarting node {node.name}: '), newline=False)
+            '> Restarting node {0.name}: '.format(node)), newline=False)
 
     def on_node_down(self, node):
-        self.note(f'> {node.name}: {self.DOWN}')
+        self.note('> {0.name}: {1.DOWN}'.format(node, self))
 
     def on_node_shutdown_ok(self, node):
-        self.note(f'\n\t> {node.name}: {self.OK}')
+        self.note('\n\t> {0.name}: {1.OK}'.format(node, self))
 
     def on_node_status(self, node, retval):
         self.note(retval and self.FAILED or self.OK)
@@ -437,13 +439,13 @@ def on_node_signal(self, node, sig):
             node, sig=sig))
 
     def on_child_spawn(self, node, argstr, env):
-        self.info(f'  {argstr}')
+        self.info('  {0}'.format(argstr))
 
     def on_child_signalled(self, node, signum):
-        self.note(f'* Child was terminated by signal {signum}')
+        self.note('* Child was terminated by signal {0}'.format(signum))
 
     def on_child_failure(self, node, retcode):
-        self.note(f'* Child terminated with exit code {retcode}')
+        self.note('* Child terminated with exit code {0}'.format(retcode))
 
     @cached_property
     def OK(self):
@@ -458,5 +460,15 @@ def DOWN(self):
         return str(self.colored.magenta('DOWN'))
 
 
-if __name__ == '__main__':  # pragma: no cover
-    main()
+@click.command(
+    cls=CeleryCommand,
+    context_settings={
+        'allow_extra_args': True,
+        'ignore_unknown_options': True
+    }
+)
+@click.pass_context
+def multi(ctx):
+    """Start multiple worker instances."""
+    cmd = MultiTool(quiet=ctx.obj.quiet, no_color=ctx.obj.no_color)
+    return cmd.execute_from_commandline([''] + ctx.args)
diff --git a/celery/bin/purge.py b/celery/bin/purge.py
index a09acc771a7..38245d02ff0 100644
--- a/celery/bin/purge.py
+++ b/celery/bin/purge.py
@@ -1,67 +1,67 @@
 """The ``celery purge`` program, used to delete messages from queues."""
-from celery.bin.base import Command
-from celery.five import keys
+import click
+
+from celery.bin.base import COMMA_SEPARATED_LIST, CeleryCommand, CeleryOption
 from celery.utils import text
 
 
-class purge(Command):
+@click.command(cls=CeleryCommand)
+@click.option('-f',
+              '--force',
+              cls=CeleryOption,
+              is_flag=True,
+              help_group='Purging Options',
+              help="Don't prompt for verification.")
+@click.option('-Q',
+              '--queues',
+              cls=CeleryOption,
+              type=COMMA_SEPARATED_LIST,
+              help_group='Purging Options',
+              help="Comma separated list of queue names to purge.")
+@click.option('-X',
+              '--exclude-queues',
+              cls=CeleryOption,
+              type=COMMA_SEPARATED_LIST,
+              help_group='Purging Options',
+              help="Comma separated list of queue names not to purge.")
+@click.pass_context
+def purge(ctx, force, queues, exclude_queues):
     """Erase all messages from all known task queues.
 
     Warning:
+
         There's no undo operation for this command.
""" + queues = queues or set() + exclude_queues = exclude_queues or set() + app = ctx.obj.app + names = (queues or set(app.amqp.queues.keys())) - exclude_queues + qnum = len(names) - warn_prelude = ( - '{warning}: This will remove all tasks from {queues}: {names}.\n' - ' There is no undo for this operation!\n\n' - '(to skip this prompt use the -f option)\n' - ) - warn_prompt = 'Are you sure you want to delete all tasks' - - fmt_purged = 'Purged {mnum} {messages} from {qnum} known task {queues}.' - fmt_empty = 'No messages purged from {qnum} {queues}' - - def add_arguments(self, parser): - group = parser.add_argument_group('Purging Options') - group.add_argument( - '--force', '-f', action='store_true', default=False, - help="Don't prompt for verification", - ) - group.add_argument( - '--queues', '-Q', default=[], - help='Comma separated list of queue names to purge.', - ) - group.add_argument( - '--exclude-queues', '-X', default=[], - help='Comma separated list of queues names not to purge.', - ) + if names: + queues_headline = text.pluralize(qnum, 'queue') + if not force: + queue_names = ', '.join(sorted(names)) + click.confirm(f"{ctx.obj.style('WARNING', fg='red')}:" + "This will remove all tasks from " + f"{queues_headline}: {queue_names}.\n" + " There is no undo for this operation!\n\n" + "(to skip this prompt use the -f option)\n" + "Are you sure you want to delete all tasks?", + abort=True) - def run(self, force=False, queues=None, exclude_queues=None, **kwargs): - queues = set(text.str_to_list(queues or [])) - exclude = set(text.str_to_list(exclude_queues or [])) - names = (queues or set(keys(self.app.amqp.queues))) - exclude - qnum = len(names) + def _purge(conn, queue): + try: + return conn.default_channel.queue_purge(queue) or 0 + except conn.channel_errors: + return 0 - messages = None - if names: - if not force: - self.out(self.warn_prelude.format( - warning=self.colored.red('WARNING'), - queues=text.pluralize(qnum, 'queue'), - names=', '.join(sorted(names)), - )) - if self.ask(self.warn_prompt, ('yes', 'no'), 'no') != 'yes': - return - with self.app.connection_for_write() as conn: - messages = sum(self._purge(conn, queue) for queue in names) - fmt = self.fmt_purged if messages else self.fmt_empty - self.out(fmt.format( - mnum=messages, qnum=qnum, - messages=text.pluralize(messages, 'message'), - queues=text.pluralize(qnum, 'queue'))) + with app.connection_for_write() as conn: + messages = sum(_purge(conn, queue) for queue in names) - def _purge(self, conn, queue): - try: - return conn.default_channel.queue_purge(queue) or 0 - except conn.channel_errors: - return 0 + if messages: + messages_headline = text.pluralize(messages, 'message') + ctx.obj.echo(f"Purged {messages} {messages_headline} from " + f"{qnum} known task {queues_headline}.") + else: + ctx.obj.echo(f"No messages purged from {qnum} {queues_headline}.") diff --git a/celery/bin/result.py b/celery/bin/result.py index 21131b928d9..d90421c4cde 100644 --- a/celery/bin/result.py +++ b/celery/bin/result.py @@ -1,40 +1,29 @@ """The ``celery result`` program, used to inspect task results.""" -from celery.bin.base import Command - - -class result(Command): - """Gives the return value for a given task id. - - Examples: - .. 
code-block:: console - - $ celery result 8f511516-e2f5-4da4-9d2f-0fb83a86e500 - $ celery result 8f511516-e2f5-4da4-9d2f-0fb83a86e500 -t tasks.add - $ celery result 8f511516-e2f5-4da4-9d2f-0fb83a86e500 --traceback - """ - - args = '' - - def add_arguments(self, parser): - group = parser.add_argument_group('Result Options') - group.add_argument( - '--task', '-t', help='name of task (if custom backend)', - ) - group.add_argument( - '--traceback', action='store_true', default=False, - help='show traceback instead', - ) - - def run(self, task_id, *args, **kwargs): - result_cls = self.app.AsyncResult - task = kwargs.get('task') - traceback = kwargs.get('traceback', False) - - if task: - result_cls = self.app.tasks[task].AsyncResult - task_result = result_cls(task_id) - if traceback: - value = task_result.traceback - else: - value = task_result.get() - self.out(self.pretty(value)[1]) +import click + +from celery.bin.base import CeleryCommand, CeleryOption + + +@click.command(cls=CeleryCommand) +@click.argument('task_id') +@click.option('-t', + '--task', + cls=CeleryOption, + help_group='Result Options', + help="Name of task (if custom backend).") +@click.option('--traceback', + cls=CeleryOption, + is_flag=True, + help_group='Result Options', + help="Show traceback instead.") +@click.pass_context +def result(ctx, task_id, task, traceback): + """Print the return value for a given task id.""" + app = ctx.obj.app + + result_cls = app.tasks[task].AsyncResult if task else app.AsyncResult + task_result = result_cls(task_id) + value = task_result.traceback if traceback else task_result.get() + + # TODO: Prettify result + ctx.obj.echo(value) diff --git a/celery/bin/shell.py b/celery/bin/shell.py index 4ed7f5bfb3d..966773c5d11 100644 --- a/celery/bin/shell.py +++ b/celery/bin/shell.py @@ -1,157 +1,170 @@ """The ``celery shell`` program, used to start a REPL.""" + import os import sys from importlib import import_module -from celery.bin.base import Command -from celery.five import values +import click +from celery.bin.base import CeleryCommand, CeleryOption -class shell(Command): # pragma: no cover - """Start shell session with convenient access to celery symbols. - The following symbols will be added to the main globals: +def _invoke_fallback_shell(locals): + import code + try: + import readline + except ImportError: + pass + else: + import rlcompleter + readline.set_completer( + rlcompleter.Completer(locals).complete) + readline.parse_and_bind('tab:complete') + code.interact(local=locals) - - ``celery``: the current application. - - ``chord``, ``group``, ``chain``, ``chunks``, - ``xmap``, ``xstarmap`` ``subtask``, ``Task`` - - all registered tasks. 
- """ - def add_arguments(self, parser): - group = parser.add_argument_group('Shell Options') - group.add_argument( - '--ipython', '-I', - action='store_true', help='force iPython.', default=False, - ) - group.add_argument( - '--bpython', '-B', - action='store_true', help='force bpython.', default=False, - ) - group.add_argument( - '--python', - action='store_true', default=False, - help='force default Python shell.', - ) - group.add_argument( - '--without-tasks', '-T', - action='store_true', default=False, - help="don't add tasks to locals.", - ) - group.add_argument( - '--eventlet', - action='store_true', default=False, - help='use eventlet.', - ) - group.add_argument( - '--gevent', action='store_true', default=False, - help='use gevent.', - ) - - def run(self, *args, **kwargs): - if args: - raise self.UsageError( - f'shell command does not take arguments: {args}') - return self._run(**kwargs) - - def _run(self, ipython=False, bpython=False, - python=False, without_tasks=False, eventlet=False, - gevent=False, **kwargs): - sys.path.insert(0, os.getcwd()) - if eventlet: - import_module('celery.concurrency.eventlet') - if gevent: - import_module('celery.concurrency.gevent') - import celery - import celery.task.base - self.app.loader.import_default_modules() - - # pylint: disable=attribute-defined-outside-init - self.locals = { - 'app': self.app, - 'celery': self.app, - 'Task': celery.Task, - 'chord': celery.chord, - 'group': celery.group, - 'chain': celery.chain, - 'chunks': celery.chunks, - 'xmap': celery.xmap, - 'xstarmap': celery.xstarmap, - 'subtask': celery.subtask, - 'signature': celery.signature, - } - - if not without_tasks: - self.locals.update({ - task.__name__: task for task in values(self.app.tasks) - if not task.name.startswith('celery.') - }) - - if python: - return self.invoke_fallback_shell() - elif bpython: - return self.invoke_bpython_shell() - elif ipython: - return self.invoke_ipython_shell() - return self.invoke_default_shell() - - def invoke_default_shell(self): +def _invoke_bpython_shell(locals): + import bpython + bpython.embed(locals) + + +def _invoke_ipython_shell(locals): + for ip in (_ipython, _ipython_pre_10, + _ipython_terminal, _ipython_010, + _no_ipython): try: - import IPython # noqa + return ip(locals) except ImportError: - try: - import bpython # noqa - except ImportError: - return self.invoke_fallback_shell() - else: - return self.invoke_bpython_shell() - else: - return self.invoke_ipython_shell() + pass + + +def _ipython(locals): + from IPython import start_ipython + start_ipython(argv=[], user_ns=locals) + + +def _ipython_pre_10(locals): # pragma: no cover + from IPython.frontend.terminal.ipapp import TerminalIPythonApp + app = TerminalIPythonApp.instance() + app.initialize(argv=[]) + app.shell.user_ns.update(locals) + app.start() + + +def _ipython_terminal(locals): # pragma: no cover + from IPython.terminal import embed + embed.TerminalInteractiveShell(user_ns=locals).mainloop() - def invoke_fallback_shell(self): - import code + +def _ipython_010(locals): # pragma: no cover + from IPython.Shell import IPShell + IPShell(argv=[], user_ns=locals).mainloop() + + +def _no_ipython(self): # pragma: no cover + raise ImportError('no suitable ipython found') + + +def _invoke_default_shell(locals): + try: + import IPython # noqa + except ImportError: try: - import readline + import bpython # noqa except ImportError: - pass + _invoke_fallback_shell(locals) else: - import rlcompleter - readline.set_completer( - rlcompleter.Completer(self.locals).complete) - 
readline.parse_and_bind('tab:complete')
-            code.interact(local=self.locals)
-
-    def invoke_ipython_shell(self):
-        for ip in (self._ipython, self._ipython_pre_10,
-                   self._ipython_terminal, self._ipython_010,
-                   self._no_ipython):
-            try:
-                return ip()
-            except ImportError:
-                pass
-
-    def _ipython(self):
-        from IPython import start_ipython
-        start_ipython(argv=[], user_ns=self.locals)
-
-    def _ipython_pre_10(self):  # pragma: no cover
-        from IPython.frontend.terminal.ipapp import TerminalIPythonApp
-        app = TerminalIPythonApp.instance()
-        app.initialize(argv=[])
-        app.shell.user_ns.update(self.locals)
-        app.start()
-
-    def _ipython_terminal(self):  # pragma: no cover
-        from IPython.terminal import embed
-        embed.TerminalInteractiveShell(user_ns=self.locals).mainloop()
-
-    def _ipython_010(self):  # pragma: no cover
-        from IPython.Shell import IPShell
-        IPShell(argv=[], user_ns=self.locals).mainloop()
-
-    def _no_ipython(self):  # pragma: no cover
-        raise ImportError('no suitable ipython found')
-
-    def invoke_bpython_shell(self):
-        import bpython
-        bpython.embed(self.locals)
+
+
+@click.command(cls=CeleryCommand)
+@click.option('-I',
+              '--ipython',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Shell Options",
+              help="Force IPython.")
+@click.option('-B',
+              '--bpython',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Shell Options",
+              help="Force bpython.")
+@click.option('--python',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Shell Options",
+              help="Force default Python shell.")
+@click.option('-T',
+              '--without-tasks',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Shell Options",
+              help="Don't add tasks to locals.")
+@click.option('--eventlet',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Shell Options",
+              help="Use eventlet.")
+@click.option('--gevent',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Shell Options",
+              help="Use gevent.")
+@click.pass_context
+def shell(ctx, ipython=False, bpython=False,
+          python=False, without_tasks=False, eventlet=False,
+          gevent=False):
+    """Start shell session with convenient access to celery symbols.
+
+    The following symbols will be added to the main globals:
+    - ``celery``: the current application.
+    - ``chord``, ``group``, ``chain``, ``chunks``,
+      ``xmap``, ``xstarmap``, ``subtask``, ``Task``
+    - all registered tasks.
+    """
+    sys.path.insert(0, os.getcwd())
+    if eventlet:
+        import_module('celery.concurrency.eventlet')
+    if gevent:
+        import_module('celery.concurrency.gevent')
+    import celery.task.base
+    app = ctx.obj.app
+    app.loader.import_default_modules()
+
+    # pylint: disable=attribute-defined-outside-init
+    locals = {
+        'app': app,
+        'celery': app,
+        'Task': celery.Task,
+        'chord': celery.chord,
+        'group': celery.group,
+        'chain': celery.chain,
+        'chunks': celery.chunks,
+        'xmap': celery.xmap,
+        'xstarmap': celery.xstarmap,
+        'subtask': celery.subtask,
+        'signature': celery.signature,
+    }
+
+    if not without_tasks:
+        locals.update({
+            task.__name__: task for task in app.tasks.values()
+            if not task.name.startswith('celery.')
+        })
+
+    if python:
+        _invoke_fallback_shell(locals)
+    elif bpython:
+        try:
+            _invoke_bpython_shell(locals)
+        except ImportError:
+            ctx.obj.echo(f'{ctx.obj.ERROR}: bpython is not installed')
+    elif ipython:
+        try:
+            _invoke_ipython_shell(locals)
+        except ImportError as e:
+            ctx.obj.echo(f'{ctx.obj.ERROR}: {e}')
+    else:
+        _invoke_default_shell(locals)
diff --git a/celery/bin/upgrade.py b/celery/bin/upgrade.py
index 4515dd803b6..fbad503e1f0 100644
--- a/celery/bin/upgrade.py
+++ b/celery/bin/upgrade.py
@@ -1,96 +1,89 @@
 """The ``celery upgrade`` command, used to upgrade from previous versions."""
 import codecs
+import sys
+
+import click
 
 from celery.app import defaults
-from celery.bin.base import Command
+from celery.bin.base import CeleryCommand, CeleryOption
 from celery.utils.functional import pass1
 
 
-class upgrade(Command):
+@click.group()
+def upgrade():
     """Perform upgrade between versions."""
-    choices = {'settings'}
-
-    def add_arguments(self, parser):
-        group = parser.add_argument_group('Upgrading Options')
-        group.add_argument(
-            '--django', action='store_true', default=False,
-            help='Upgrade Django project',
-        )
-        group.add_argument(
-            '--compat', action='store_true', default=False,
-            help='Maintain backwards compatibility',
-        )
-        group.add_argument(
-            '--no-backup', action='store_true', default=False,
-            help='Dont backup original files',
-        )
 
-    def usage(self, command):
-        return '%(prog)s settings [filename] [options]'
+def _slurp(filename):
+    # TODO: Handle case when file does not exist
+    with codecs.open(filename, 'r', 'utf-8') as read_fh:
+        return [line for line in read_fh]
 
-    def run(self, *args, **kwargs):
-        try:
-            command = args[0]
-        except IndexError:
-            raise self.UsageError(
-                'missing upgrade type: try `celery upgrade settings` ?')
-        if command not in self.choices:
-            raise self.UsageError(f'unknown upgrade type: {command}')
-        return getattr(self, command)(*args, **kwargs)
 
-    def settings(self, command, filename=None,
-                 no_backup=False, django=False, compat=False, **kwargs):
+def _compat_key(key, namespace='CELERY'):
+    key = key.upper()
+    if not key.startswith(namespace):
+        key = '_'.join([namespace, key])
+    return key
 
-        if filename is None:
-            raise self.UsageError('missing settings filename to upgrade')
-        lines = self._slurp(filename)
-        keyfilter = self._compat_key if django or compat else pass1
-        print(f'processing {filename}...', file=self.stderr)
-        # gives list of tuples: ``(did_change, line_contents)``
-        new_lines = [
-            self._to_new_key(line, keyfilter) for line in lines
-        ]
-        if any(n[0] for n in new_lines):  # did have changes
-            if not no_backup:
-                self._backup(filename)
-            with codecs.open(filename, 'w', 'utf-8') as write_fh:
-                for _, line in new_lines:
-                    write_fh.write(line)
-            print('Changes to your setting have been made!',
-                  file=self.stdout)
-        else:
-            print('Does not seem to require any changes :-)',
-                  file=self.stdout)
 
+def _backup(filename, suffix='.orig'):
+    lines = []
+    backup_filename = ''.join([filename, suffix])
+    print('writing backup to {0}...'.format(backup_filename),
+          file=sys.stderr)
+    with codecs.open(filename, 'r', 'utf-8') as read_fh:
+        with codecs.open(backup_filename, 'w', 'utf-8') as backup_fh:
+            for line in read_fh:
+                backup_fh.write(line)
+                lines.append(line)
+    return lines
 
-    def _slurp(self, filename):
-        with codecs.open(filename, 'r', 'utf-8') as read_fh:
-            return [line for line in read_fh]
 
-    def _backup(self, filename, suffix='.orig'):
-        lines = []
-        backup_filename = ''.join([filename, suffix])
-        print(f'writing backup to {backup_filename}...',
-              file=self.stderr)
-        with codecs.open(filename, 'r', 'utf-8') as read_fh:
-            with codecs.open(backup_filename, 'w', 'utf-8') as backup_fh:
-                for line in read_fh:
-                    backup_fh.write(line)
-                    lines.append(line)
-        return lines
+def _to_new_key(line, keyfilter=pass1, source=defaults._TO_NEW_KEY):
+    # sort by length to avoid, for example, broker_transport overriding
+    # broker_transport_options.
+    for old_key in reversed(sorted(source, key=lambda x: len(x))):
+        new_line = line.replace(old_key, keyfilter(source[old_key]))
+        if line != new_line and 'CELERY_CELERY' not in new_line:
+            return 1, new_line  # only one match per line.
+    return 0, line
 
-    def _to_new_key(self, line, keyfilter=pass1, source=defaults._TO_NEW_KEY):
-        # sort by length to avoid, for example, broker_transport overriding
-        # broker_transport_options.
-        for old_key in reversed(sorted(source, key=lambda x: len(x))):
-            new_line = line.replace(old_key, keyfilter(source[old_key]))
-            if line != new_line and 'CELERY_CELERY' not in new_line:
-                return 1, new_line  # only one match per line.
-        return 0, line
 
-    def _compat_key(self, key, namespace='CELERY'):
-        key = key.upper()
-        if not key.startswith(namespace):
-            key = '_'.join([namespace, key])
-        return key
+@upgrade.command(cls=CeleryCommand)
+@click.argument('filename')
+@click.option('--django',
+              cls=CeleryOption,
+              is_flag=True,
+              help_group='Upgrading Options',
+              help='Upgrade Django project.')
+@click.option('--compat',
+              cls=CeleryOption,
+              is_flag=True,
+              help_group='Upgrading Options',
+              help='Maintain backwards compatibility.')
+@click.option('--no-backup',
+              cls=CeleryOption,
+              is_flag=True,
+              help_group='Upgrading Options',
+              help="Don't backup original files.")
+def settings(filename, django, compat, no_backup):
+    """Migrate settings from Celery 3.x to Celery 4.x."""
+    lines = _slurp(filename)
+    keyfilter = _compat_key if django or compat else pass1
+    print('processing {0}...'.format(filename), file=sys.stderr)
+    # gives list of tuples: ``(did_change, line_contents)``
+    new_lines = [
+        _to_new_key(line, keyfilter) for line in lines
+    ]
+    if any(n[0] for n in new_lines):  # did have changes
+        if not no_backup:
+            _backup(filename)
+        with codecs.open(filename, 'w', 'utf-8') as write_fh:
+            for _, line in new_lines:
+                write_fh.write(line)
+        print('Changes to your settings have been made!',
+              file=sys.stdout)
+    else:
+        print('Does not seem to require any changes :-)',
+              file=sys.stdout)
diff --git a/celery/bin/worker.py b/celery/bin/worker.py
index 3612f183a6f..da35f665728 100644
--- a/celery/bin/worker.py
+++ b/celery/bin/worker.py
@@ -1,365 +1,331 @@
-"""Program used to start a Celery worker instance.
+"""Program used to start a Celery worker instance."""
-
-The :program:`celery worker` command (previously known as ``celeryd``)
-
-.. program:: celery worker
-
-.. 
seealso:: - - See :ref:`preload-options`. - -.. cmdoption:: -c, --concurrency - - Number of child processes processing the queue. The default - is the number of CPUs available on your system. - -.. cmdoption:: -P, --pool - - Pool implementation: - - prefork (default), eventlet, gevent, threads or solo. - -.. cmdoption:: -n, --hostname - - Set custom hostname (e.g., 'w1@%%h'). Expands: %%h (hostname), - %%n (name) and %%d, (domain). - -.. cmdoption:: -B, --beat - - Also run the `celery beat` periodic task scheduler. Please note that - there must only be one instance of this service. - - .. note:: - - ``-B`` is meant to be used for development purposes. For production - environment, you need to start :program:`celery beat` separately. - -.. cmdoption:: -Q, --queues - - List of queues to enable for this worker, separated by comma. - By default all configured queues are enabled. - Example: `-Q video,image` - -.. cmdoption:: -X, --exclude-queues - - List of queues to disable for this worker, separated by comma. - By default all configured queues are enabled. - Example: `-X video,image`. - -.. cmdoption:: -I, --include - - Comma separated list of additional modules to import. - Example: -I foo.tasks,bar.tasks - -.. cmdoption:: -s, --schedule - - Path to the schedule database if running with the `-B` option. - Defaults to `celerybeat-schedule`. The extension ".db" may be - appended to the filename. - -.. cmdoption:: -O - - Apply optimization profile. Supported: default, fair - -.. cmdoption:: --prefetch-multiplier - - Set custom prefetch multiplier value for this worker instance. - -.. cmdoption:: --scheduler - - Scheduler class to use. Default is - :class:`celery.beat.PersistentScheduler` - -.. cmdoption:: -S, --statedb - - Path to the state database. The extension '.db' may - be appended to the filename. Default: {default} - -.. cmdoption:: -E, --task-events - - Send task-related events that can be captured by monitors like - :program:`celery events`, `celerymon`, and others. - -.. cmdoption:: --without-gossip - - Don't subscribe to other workers events. - -.. cmdoption:: --without-mingle - - Don't synchronize with other workers at start-up. - -.. cmdoption:: --without-heartbeat - - Don't send event heartbeats. - -.. cmdoption:: --heartbeat-interval - - Interval in seconds at which to send worker heartbeat - -.. cmdoption:: --purge - - Purges all waiting tasks before the daemon is started. - **WARNING**: This is unrecoverable, and the tasks will be - deleted from the messaging server. - -.. cmdoption:: --time-limit - - Enables a hard time limit (in seconds int/float) for tasks. - -.. cmdoption:: --soft-time-limit - - Enables a soft time limit (in seconds int/float) for tasks. - -.. cmdoption:: --max-tasks-per-child - - Maximum number of tasks a pool worker can execute before it's - terminated and replaced by a new worker. - -.. cmdoption:: --max-memory-per-child - - Maximum amount of resident memory, in KiB, that may be consumed by a - child process before it will be replaced by a new one. If a single - task causes a child process to exceed this limit, the task will be - completed and the child process will be replaced afterwards. - Default: no limit. - -.. cmdoption:: --autoscale - - Enable autoscaling by providing - max_concurrency, min_concurrency. Example:: - - --autoscale=10,3 - - (always keep 3 processes, but grow to 10 if necessary) - -.. cmdoption:: --detach +import os +import sys - Start worker as a background process. 
+import click
+from click import ParamType
+from click.types import StringParamType
-
-.. cmdoption:: -f, --logfile
+from celery import concurrency
+from celery.bin.base import (COMMA_SEPARATED_LIST, LOG_LEVEL,
+                             CeleryDaemonCommand, CeleryOption)
+from celery.platforms import EX_FAILURE, detached, maybe_drop_privileges
+from celery.utils.log import get_logger
+from celery.utils.nodenames import default_nodename, host_format, node_format
-
-    Path to log file. If no logfile is specified, `stderr` is used.
+logger = get_logger(__name__)
-
-.. cmdoption:: -l, --loglevel
-
-    Logging level, choose between `DEBUG`, `INFO`, `WARNING`,
-    `ERROR`, `CRITICAL`, or `FATAL`.
+
+class CeleryBeat(ParamType):
+    """Celery Beat flag."""
-
-.. cmdoption:: --pidfile
+    name = "beat"
-
-    Optional file used to store the process pid.
+    def convert(self, value, param, ctx):
+        if ctx.obj.app.IS_WINDOWS and value:
+            self.fail('-B option does not work on Windows. '
+                      'Please run celery beat as a separate service.')
-
-    The program won't start if this file already exists
-    and the pid is still alive.
+        return value
-
-.. cmdoption:: --uid
-
-    User id, or user name of the user to run as after detaching.
+
+class WorkersPool(click.Choice):
+    """Workers pool option."""
-
-.. cmdoption:: --gid
+    name = "pool"
-
-    Group id, or group name of the main group to change to after
-    detaching.
+    def __init__(self):
+        """Initialize the workers pool option with the relevant choices."""
+        super().__init__(('prefork', 'eventlet', 'gevent', 'solo'))
-
-.. cmdoption:: --umask
+    def convert(self, value, param, ctx):
+        # Pools like eventlet/gevent needs to patch libs as early
+        # as possible.
+        return concurrency.get_implementation(
+            value) or ctx.obj.app.conf.worker_pool
-
-    Effective :manpage:`umask(1)` (in octal) of the process after detaching.
-    Inherits the :manpage:`umask(1)` of the parent process by default.
-
-.. cmdoption:: --workdir
+
+class Hostname(StringParamType):
+    """Hostname option."""
-
-    Optional directory to change to after detaching.
+    name = "hostname"
-
-.. cmdoption:: --executable
+    def convert(self, value, param, ctx):
+        return host_format(default_nodename(value))
-
-    Executable to use for the detached process.
-"""
-import sys
-
-from celery import concurrency
-from celery.bin.base import Command, daemon_options
-from celery.bin.celeryd_detach import detached_celeryd
-from celery.five import string_t
-from celery.platforms import maybe_drop_privileges
-from celery.utils.log import LOG_LEVELS, mlevel
-from celery.utils.nodenames import default_nodename
+
+class Autoscale(ParamType):
+    """Autoscaling parameter."""
-
-__all__ = ('worker', 'main')
+    name = "<min workers>, <max workers>"
-
-HELP = __doc__
+    def convert(self, value, param, ctx):
-
-class worker(Command):
+        value = value.split(',')
+        if len(value) > 2:
+            self.fail("Expected two comma separated integers or one integer. "
+                      f"Got {len(value)} instead.")
+
+        if len(value) == 1:
+            try:
+                value = (int(value[0]), 0)
+            except ValueError:
+                self.fail(f"Expected an integer. Got {value} instead.")
+
+        try:
+            return tuple(reversed(sorted(map(int, value))))
+        except ValueError:
+            self.fail("Expected two comma separated integers. "
+                      f"Got {','.join(value)} instead.")
+
+
+CELERY_BEAT = CeleryBeat()
+WORKERS_POOL = WorkersPool()
+HOSTNAME = Hostname()
+AUTOSCALE = Autoscale()
+
+C_FAKEFORK = os.environ.get('C_FAKEFORK')
+
+
+def detach(path, argv, logfile=None, pidfile=None, uid=None,
+           gid=None, umask=None, workdir=None, fake=False, app=None,
+           executable=None, hostname=None):
+    """Detach program by argv."""
+    fake = 1 if C_FAKEFORK else fake
+    with detached(logfile, pidfile, uid, gid, umask, workdir, fake,
+                  after_forkers=False):
+        try:
+            if executable is not None:
+                path = executable
+            os.execv(path, [path] + argv)
+        except Exception:  # pylint: disable=broad-except
+            if app is None:
+                from celery import current_app
+                app = current_app
+            app.log.setup_logging_subsystem(
+                'ERROR', logfile, hostname=hostname)
+            logger.critical("Can't exec %r", ' '.join([path] + argv),
+                            exc_info=True)
+        return EX_FAILURE
+
+
+@click.command(cls=CeleryDaemonCommand,
+               context_settings={'allow_extra_args': True})
+@click.option('-n',
+              '--hostname',
+              default=host_format(default_nodename(None)),
+              cls=CeleryOption,
+              type=HOSTNAME,
+              help_group="Worker Options",
+              help="Set custom hostname (e.g., 'w1@%%h'). "
+                   "Expands: %%h (hostname), %%n (name) and %%d, (domain).")
+@click.option('-D',
+              '--detach',
+              cls=CeleryOption,
+              is_flag=True,
+              default=False,
+              help_group="Worker Options",
+              help="Start worker as a background process.")
+@click.option('-S',
+              '--statedb',
+              cls=CeleryOption,
+              type=click.Path(),
+              callback=lambda ctx, _, value: value or ctx.obj.app.conf.worker_state_db,
+              help_group="Worker Options",
+              help="Path to the state database. The extension '.db' may be "
+                   "appended to the filename.")
+@click.option('-l',
+              '--loglevel',
+              default='WARNING',
+              cls=CeleryOption,
+              type=LOG_LEVEL,
+              help_group="Worker Options",
+              help="Logging level.")
+@click.option('optimization',
+              '-O',
+              default='default',
+              cls=CeleryOption,
+              type=click.Choice(('default', 'fair')),
+              help_group="Worker Options",
+              help="Apply optimization profile.")
+@click.option('--prefetch-multiplier',
+              type=int,
+              metavar="<prefetch multiplier>",
+              callback=lambda ctx, _, value: value or ctx.obj.app.conf.worker_prefetch_multiplier,
+              cls=CeleryOption,
+              help_group="Worker Options",
+              help="Set custom prefetch multiplier value "
+                   "for this worker instance.")
+@click.option('-c',
+              '--concurrency',
+              type=int,
+              metavar="<concurrency>",
+              callback=lambda ctx, _, value: value or ctx.obj.app.conf.worker_concurrency,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Number of child processes processing the queue. "
+                   "The default is the number of CPUs available "
+                   "on your system.")
+@click.option('-P',
+              '--pool',
+              default='prefork',
+              type=WORKERS_POOL,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Pool implementation.")
+@click.option('-E',
+              '--task-events',
+              '--events',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Send task-related events that can be captured by monitors"
+                   " like celery events, celerymon, and others.")
+@click.option('--time-limit',
+              type=float,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Enables a hard time limit "
+                   "(in seconds int/float) for tasks.")
+@click.option('--soft-time-limit',
+              type=float,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Enables a soft time limit "
+                   "(in seconds int/float) for tasks.")
+@click.option('--max-tasks-per-child',
+              type=int,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Maximum number of tasks a pool worker can execute before "
+                   "it's terminated and replaced by a new worker.")
+@click.option('--max-memory-per-child',
+              type=int,
+              cls=CeleryOption,
+              help_group="Pool Options",
+              help="Maximum amount of resident memory, in KiB, that may be "
+                   "consumed by a child process before it will be replaced "
+                   "by a new one. If a single task causes a child process "
+                   "to exceed this limit, the task will be completed and "
+                   "the child process will be replaced afterwards.\n"
+                   "Default: no limit.")
+@click.option('--purge',
+              '--discard',
+              is_flag=True,
+              cls=CeleryOption,
+              help_group="Queue Options")
+@click.option('--queues',
+              '-Q',
+              type=COMMA_SEPARATED_LIST,
+              cls=CeleryOption,
+              help_group="Queue Options")
+@click.option('--exclude-queues',
+              '-X',
+              type=COMMA_SEPARATED_LIST,
+              cls=CeleryOption,
+              help_group="Queue Options")
+@click.option('--include',
+              '-I',
+              type=COMMA_SEPARATED_LIST,
+              cls=CeleryOption,
+              help_group="Queue Options")
+@click.option('--without-gossip',
+              is_flag=True,
+              default=False,
+              cls=CeleryOption,
+              help_group="Features")
+@click.option('--without-mingle',
+              is_flag=True,
+              default=False,
+              cls=CeleryOption,
+              help_group="Features")
+@click.option('--without-heartbeat',
+              is_flag=True,
+              default=False,
+              cls=CeleryOption,
+              help_group="Features")
+@click.option('--heartbeat-interval',
+              type=int,
+              cls=CeleryOption,
+              help_group="Features")
+@click.option('--autoscale',
+              type=AUTOSCALE,
+              cls=CeleryOption,
+              help_group="Features")
+@click.option('-B',
+              '--beat',
+              type=CELERY_BEAT,
+              cls=CeleryOption,
+              is_flag=True,
+              help_group="Embedded Beat Options")
+@click.option('-s',
+              '--schedule-filename',
+              '--schedule',
+              callback=lambda ctx, _, value: value or ctx.obj.app.conf.beat_schedule_filename,
+              cls=CeleryOption,
+              help_group="Embedded Beat Options")
+@click.option('--scheduler',
+              cls=CeleryOption,
+              help_group="Embedded Beat Options")
+@click.pass_context
+def worker(ctx, hostname=None, pool_cls=None, app=None, uid=None, gid=None,
+           loglevel=None, logfile=None, pidfile=None, statedb=None,
+           **kwargs):
     """Start worker instance.
 
-    Examples:
-        .. 
code-block:: console - - $ celery worker --app=proj -l info - $ celery worker -A proj -l info -Q hipri,lopri + Examples + -------- + $ celery worker --app=proj -l info + $ celery worker -A proj -l info -Q hipri,lopri + $ celery worker -A proj --concurrency=4 + $ celery worker -A proj --concurrency=1000 -P eventlet + $ celery worker --autoscale=10,0 - $ celery worker -A proj --concurrency=4 - $ celery worker -A proj --concurrency=1000 -P eventlet - $ celery worker --autoscale=10,0 """ - - doc = HELP # parse help from this too - namespace = 'worker' - enable_config_from_cmdline = True - supports_args = False - removed_flags = {'--no-execv', '--force-execv'} - - def run_from_argv(self, prog_name, argv=None, command=None): - argv = [x for x in argv if x not in self.removed_flags] - command = sys.argv[0] if command is None else command - argv = sys.argv[1:] if argv is None else argv - # parse options before detaching so errors can be handled. - options, args = self.prepare_args( - *self.parse_options(prog_name, argv, command)) - self.maybe_detach([command] + argv) - return self(*args, **options) - - def maybe_detach(self, argv, dopts=None): - dopts = ['-D', '--detach'] if not dopts else dopts - if any(arg in argv for arg in dopts): - argv = [v for v in argv if v not in dopts] - # will never return - detached_celeryd(self.app).execute_from_commandline(argv) - raise SystemExit(0) - - def run(self, hostname=None, pool_cls=None, app=None, uid=None, gid=None, - loglevel=None, logfile=None, pidfile=None, statedb=None, - **kwargs): - maybe_drop_privileges(uid=uid, gid=gid) - # Pools like eventlet/gevent needs to patch libs as early - # as possible. - pool_cls = (concurrency.get_implementation(pool_cls) or - self.app.conf.worker_pool) - if self.app.IS_WINDOWS and kwargs.get('beat'): - self.die('-B option does not work on Windows. ' - 'Please run celery beat as a separate service.') - hostname = self.host_format(default_nodename(hostname)) - if loglevel: - try: - loglevel = mlevel(loglevel) - except KeyError: # pragma: no cover - self.die('Unknown level {!r}. Please use one of {}.'.format( - loglevel, '|'.join( - l for l in LOG_LEVELS if isinstance(l, string_t)))) - - worker = self.app.Worker( - hostname=hostname, pool_cls=pool_cls, loglevel=loglevel, - logfile=logfile, # node format handled by celery.app.log.setup - pidfile=self.node_format(pidfile, hostname), - statedb=self.node_format(statedb, hostname), - **kwargs) - worker.start() - return worker.exitcode - - def with_pool_option(self, argv): - # this command support custom pools - # that may have to be loaded as early as possible. 
- return (['-P'], ['--pool']) - - def add_arguments(self, parser): - conf = self.app.conf - - wopts = parser.add_argument_group('Worker Options') - wopts.add_argument('-n', '--hostname') - wopts.add_argument( - '-D', '--detach', - action='store_true', default=False, - ) - wopts.add_argument( - '-S', '--statedb', - default=conf.worker_state_db, - ) - wopts.add_argument('-l', '--loglevel', default='WARN') - wopts.add_argument('-O', dest='optimization') - wopts.add_argument( - '--prefetch-multiplier', - type=int, default=conf.worker_prefetch_multiplier, - ) - - topts = parser.add_argument_group('Pool Options') - topts.add_argument( - '-c', '--concurrency', - default=conf.worker_concurrency, type=int, - ) - topts.add_argument( - '-P', '--pool', - default=conf.worker_pool, - ) - topts.add_argument( - '-E', '--task-events', '--events', - action='store_true', default=conf.worker_send_task_events, - ) - topts.add_argument( - '--time-limit', - type=float, default=conf.task_time_limit, - ) - topts.add_argument( - '--soft-time-limit', - type=float, default=conf.task_soft_time_limit, - ) - topts.add_argument( - '--max-tasks-per-child', '--maxtasksperchild', - type=int, default=conf.worker_max_tasks_per_child, - ) - topts.add_argument( - '--max-memory-per-child', '--maxmemperchild', - type=int, default=conf.worker_max_memory_per_child, - ) - - qopts = parser.add_argument_group('Queue Options') - qopts.add_argument( - '--purge', '--discard', - action='store_true', default=False, - ) - qopts.add_argument('--queues', '-Q', default=[]) - qopts.add_argument('--exclude-queues', '-X', default=[]) - qopts.add_argument('--include', '-I', default=[]) - - fopts = parser.add_argument_group('Features') - fopts.add_argument( - '--without-gossip', action='store_true', default=False, - ) - fopts.add_argument( - '--without-mingle', action='store_true', default=False, - ) - fopts.add_argument( - '--without-heartbeat', action='store_true', default=False, - ) - fopts.add_argument('--heartbeat-interval', type=int) - fopts.add_argument('--autoscale') - - daemon_options(parser) - - bopts = parser.add_argument_group('Embedded Beat Options') - bopts.add_argument('-B', '--beat', action='store_true', default=False) - bopts.add_argument( - '-s', '--schedule-filename', '--schedule', - default=conf.beat_schedule_filename, - ) - bopts.add_argument('--scheduler') - - user_options = self.app.user_options['worker'] - if user_options: - uopts = parser.add_argument_group('User Options') - self.add_compat_options(uopts, user_options) - - -def main(app=None): - """Start worker.""" - # Fix for setuptools generated scripts, so that it will - # work with multiprocessing fork emulation. 
-    # (see multiprocessing.forking.get_preparation_data())
-    if __name__ != '__main__':  # pragma: no cover
-        sys.modules['__main__'] = sys.modules[__name__]
-    from billiard import freeze_support
-    freeze_support()
-    worker(app=app).execute_from_commandline()
-
-
-if __name__ == '__main__':  # pragma: no cover
-    main()
+    app = ctx.obj.app
+    if ctx.args:
+        try:
+            app.config_from_cmdline(ctx.args, namespace='worker')
+        except (KeyError, ValueError) as e:
+            # TODO: Improve the error messages
+            raise click.UsageError(
+                "Unable to parse extra configuration from command line.\n"
+                f"Reason: {e}", ctx=ctx)
+    if kwargs.get('detach', False):
+        params = ctx.params.copy()
+        params.pop('detach')
+        params.pop('logfile')
+        params.pop('pidfile')
+        params.pop('uid')
+        params.pop('gid')
+        umask = params.pop('umask')
+        workdir = ctx.obj.workdir
+        params.pop('hostname')
+        executable = params.pop('executable')
+        argv = ['-m', 'celery', 'worker']
+        for arg, value in params.items():
+            if isinstance(value, bool) and value:
+                argv.append(f'--{arg}')
+            else:
+                if value is not None:
+                    argv.append(f'--{arg}')
+                    argv.append(str(value))
+        return detach(sys.executable,
+                      argv,
+                      logfile=logfile,
+                      pidfile=pidfile,
+                      uid=uid, gid=gid,
+                      umask=umask,
+                      workdir=workdir,
+                      app=app,
+                      executable=executable,
+                      hostname=hostname)
+    maybe_drop_privileges(uid=uid, gid=gid)
+    worker = app.Worker(
+        hostname=hostname, pool_cls=pool_cls, loglevel=loglevel,
+        logfile=logfile,  # node format handled by celery.app.log.setup
+        pidfile=node_format(pidfile, hostname),
+        statedb=node_format(statedb, hostname),
+        no_color=ctx.obj.no_color,
+        **kwargs)
+    worker.start()
+    return worker.exitcode
diff --git a/docs/conf.py b/docs/conf.py
index 85b3607a395..4b6750ae83a 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -16,6 +16,7 @@
     html_favicon='images/favicon.ico',
     html_prepend_sidebars=['sidebardonations.html'],
     extra_extensions=[
+        'sphinx_click',
         'sphinx.ext.napoleon',
         'celery.contrib.sphinx',
         'celerydocs',
diff --git a/docs/reference/cli.rst b/docs/reference/cli.rst
new file mode 100644
index 00000000000..cff2291d4ed
--- /dev/null
+++ b/docs/reference/cli.rst
@@ -0,0 +1,7 @@
+=======================
+ Command Line Interface
+=======================
+
+.. click:: celery.bin.celery:celery
+   :prog: celery
+   :show-nested:
diff --git a/docs/reference/index.rst b/docs/reference/index.rst
index 36d3b7c5ed9..19208fa22d0 100644
--- a/docs/reference/index.rst
+++ b/docs/reference/index.rst
@@ -10,6 +10,7 @@
 .. 
toctree:: :maxdepth: 1 + cli celery celery.app celery.app.task diff --git a/requirements/default.txt b/requirements/default.txt index 7a6004ab422..de7bc9c14b0 100644 --- a/requirements/default.txt +++ b/requirements/default.txt @@ -2,3 +2,6 @@ pytz>dev billiard>=3.6.3.0,<4.0 kombu>=5.0.0,<6.0 vine==1.3.0 +click>=7.0 +click-didyoumean>=0.0.3 +click-repl>=0.1.6 diff --git a/requirements/docs.txt b/requirements/docs.txt index 2f20930a9ee..69d31dffcce 100644 --- a/requirements/docs.txt +++ b/requirements/docs.txt @@ -1,6 +1,7 @@ sphinx_celery==2.0.0 Sphinx>=3.0.0 sphinx-testing==0.7.2 +sphinx-click==2.5.0 -r extras/sqlalchemy.txt -r test.txt -r deps/mock.txt diff --git a/t/unit/app/test_app.py b/t/unit/app/test_app.py index 41718312cfe..884f563d1a0 100644 --- a/t/unit/app/test_app.py +++ b/t/unit/app/test_app.py @@ -555,20 +555,20 @@ def test_pickle_app(self): for key, value in changes.items(): assert restored.conf[key] == value - def test_worker_main(self): - from celery.bin import worker as worker_bin - - class worker(worker_bin.worker): - - def execute_from_commandline(self, argv): - return argv - - prev, worker_bin.worker = worker_bin.worker, worker - try: - ret = self.app.worker_main(argv=['--version']) - assert ret == ['--version'] - finally: - worker_bin.worker = prev + # def test_worker_main(self): + # from celery.bin import worker as worker_bin + # + # class worker(worker_bin.worker): + # + # def execute_from_commandline(self, argv): + # return argv + # + # prev, worker_bin.worker = worker_bin.worker, worker + # try: + # ret = self.app.worker_main(argv=['--version']) + # assert ret == ['--version'] + # finally: + # worker_bin.worker = prev def test_config_from_envvar(self): os.environ['CELERYTEST_CONFIG_OBJECT'] = 't.unit.app.test_app' @@ -751,11 +751,6 @@ def test_config_from_envvar_more(self, key='CELERY_HARNESS_CFG1'): assert self.app.conf['FOO'] == 10 assert self.app.conf['BAR'] == 20 - @patch('celery.bin.celery.CeleryCommand.execute_from_commandline') - def test_start(self, execute): - self.app.start() - execute.assert_called() - @pytest.mark.parametrize('url,expected_fields', [ ('pyamqp://', { 'hostname': 'localhost', diff --git a/t/unit/bin/test_amqp.py b/t/unit/bin/test_amqp.py deleted file mode 100644 index 8235a3351ee..00000000000 --- a/t/unit/bin/test_amqp.py +++ /dev/null @@ -1,142 +0,0 @@ -import pytest -from case import Mock, patch - -from celery.bin.amqp import AMQPAdmin, AMQShell, amqp, dump_message, main -from celery.five import WhateverIO - - -class test_AMQShell: - - def setup(self): - self.fh = WhateverIO() - self.adm = self.create_adm() - self.shell = AMQShell(connect=self.adm.connect, out=self.fh) - - def create_adm(self, *args, **kwargs): - return AMQPAdmin(app=self.app, out=self.fh, *args, **kwargs) - - def test_queue_declare(self): - self.shell.onecmd('queue.declare foo') - assert 'ok' in self.fh.getvalue() - - def test_missing_command(self): - self.shell.onecmd('foo foo') - assert 'unknown syntax' in self.fh.getvalue() - - def RV(self): - raise Exception(self.fh.getvalue()) - - def test_spec_format_response(self): - spec = self.shell.amqp['exchange.declare'] - assert spec.format_response(None) == 'ok.' 
- assert spec.format_response('NO') == 'NO' - - def test_missing_namespace(self): - self.shell.onecmd('ns.cmd arg') - assert 'unknown syntax' in self.fh.getvalue() - - def test_help(self): - self.shell.onecmd('help') - assert 'Example:' in self.fh.getvalue() - - def test_help_command(self): - self.shell.onecmd('help queue.declare') - assert 'passive:no' in self.fh.getvalue() - - def test_help_unknown_command(self): - self.shell.onecmd('help foo.baz') - assert 'unknown syntax' in self.fh.getvalue() - - def test_onecmd_error(self): - self.shell.dispatch = Mock() - self.shell.dispatch.side_effect = MemoryError() - self.shell.say = Mock() - assert not self.shell.needs_reconnect - self.shell.onecmd('hello') - self.shell.say.assert_called() - assert self.shell.needs_reconnect - - def test_exit(self): - with pytest.raises(SystemExit): - self.shell.onecmd('exit') - assert "don't leave!" in self.fh.getvalue() - - def test_note_silent(self): - self.shell.silent = True - self.shell.note('foo bar') - assert 'foo bar' not in self.fh.getvalue() - - def test_reconnect(self): - self.shell.onecmd('queue.declare foo') - self.shell.needs_reconnect = True - self.shell.onecmd('queue.delete foo') - - def test_completenames(self): - assert self.shell.completenames('queue.dec') == ['queue.declare'] - assert (sorted(self.shell.completenames('declare')) == - sorted(['queue.declare', 'exchange.declare'])) - - def test_empty_line(self): - self.shell.emptyline = Mock() - self.shell.default = Mock() - self.shell.onecmd('') - self.shell.emptyline.assert_called_with() - self.shell.onecmd('foo') - self.shell.default.assert_called_with('foo') - - def test_respond(self): - self.shell.respond({'foo': 'bar'}) - assert 'foo' in self.fh.getvalue() - - def test_prompt(self): - assert self.shell.prompt - - def test_no_returns(self): - self.shell.onecmd('queue.declare foo') - self.shell.onecmd('exchange.declare bar direct yes') - self.shell.onecmd('queue.bind foo bar baz') - self.shell.onecmd('basic.ack 1') - - def test_dump_message(self): - m = Mock() - m.body = 'the quick brown fox' - m.properties = {'a': 1} - m.delivery_info = {'exchange': 'bar'} - assert dump_message(m) - - def test_dump_message_no_message(self): - assert 'No messages in queue' in dump_message(None) - - def test_note(self): - self.adm.silent = True - self.adm.note('FOO') - assert 'FOO' not in self.fh.getvalue() - - def test_run(self): - a = self.create_adm('queue.declare', 'foo') - a.run() - assert 'ok' in self.fh.getvalue() - - def test_run_loop(self): - a = self.create_adm() - a.Shell = Mock() - shell = a.Shell.return_value = Mock() - shell.cmdloop = Mock() - a.run() - shell.cmdloop.assert_called_with() - - shell.cmdloop.side_effect = KeyboardInterrupt() - a.run() - assert 'bibi' in self.fh.getvalue() - - @patch('celery.bin.amqp.amqp') - def test_main(self, Command): - c = Command.return_value = Mock() - main() - c.execute_from_commandline.assert_called_with() - - @patch('celery.bin.amqp.AMQPAdmin') - def test_command(self, cls): - x = amqp(app=self.app) - x.run() - assert cls.call_args[1]['app'] is self.app diff --git a/t/unit/bin/test_base.py b/t/unit/bin/test_base.py deleted file mode 100644 index 0f3a1008bfc..00000000000 --- a/t/unit/bin/test_base.py +++ /dev/null @@ -1,374 +0,0 @@ -import os - -import pytest -from case import Mock, mock, patch - -from celery.bin.base import Command, Extensions, Option -from celery.five import bytes_if_py2 - - -class MyApp: - user_options = {'preload': None} - - -APP = MyApp() # <-- Used by test_with_custom_app - - 
-class MockCommand(Command): - mock_args = ('arg1', 'arg2', 'arg3') - - def parse_options(self, prog_name, arguments, command=None): - options = {'foo': 'bar', 'prog_name': prog_name} - return options, self.mock_args - - def run(self, *args, **kwargs): - return args, kwargs - - -class test_Extensions: - - def test_load(self): - with patch('pkg_resources.iter_entry_points') as iterep: - with patch('celery.utils.imports.symbol_by_name') as symbyname: - ep = Mock() - ep.name = 'ep' - ep.module_name = 'foo' - ep.attrs = ['bar', 'baz'] - iterep.return_value = [ep] - cls = symbyname.return_value = Mock() - register = Mock() - e = Extensions('unit', register) - e.load() - symbyname.assert_called_with('foo:bar') - register.assert_called_with(cls, name='ep') - - with patch('celery.utils.imports.symbol_by_name') as symbyname: - symbyname.side_effect = SyntaxError() - with patch('warnings.warn') as warn: - e.load() - warn.assert_called() - - with patch('celery.utils.imports.symbol_by_name') as symbyname: - symbyname.side_effect = KeyError('foo') - with pytest.raises(KeyError): - e.load() - - -class test_Command: - - def test_get_options(self): - cmd = Command() - cmd.option_list = (1, 2, 3) - assert cmd.get_options() == (1, 2, 3) - - def test_custom_description(self): - - class C(Command): - description = 'foo' - - c = C() - assert c.description == 'foo' - - def test_format_epilog(self): - assert Command()._format_epilog('hello') - assert not Command()._format_epilog('') - - def test_format_description(self): - assert Command()._format_description('hello') - - def test_register_callbacks(self): - c = Command(on_error=8, on_usage_error=9) - assert c.on_error == 8 - assert c.on_usage_error == 9 - - def test_run_raises_UsageError(self): - cb = Mock() - c = Command(on_usage_error=cb) - c.verify_args = Mock() - c.run = Mock() - exc = c.run.side_effect = c.UsageError('foo', status=3) - - assert c() == exc.status - cb.assert_called_with(exc) - c.verify_args.assert_called_with(()) - - def test_default_on_usage_error(self): - cmd = Command() - cmd.handle_error = Mock() - exc = Exception() - cmd.on_usage_error(exc) - cmd.handle_error.assert_called_with(exc) - - def test_verify_args_missing(self): - c = Command() - - def run(a, b, c): - pass - c.run = run - - with pytest.raises(c.UsageError): - c.verify_args((1,)) - c.verify_args((1, 2, 3)) - - def test_run_interface(self): - with pytest.raises(NotImplementedError): - Command().run() - - @patch('sys.stdout') - def test_early_version(self, stdout): - cmd = Command() - with pytest.raises(SystemExit): - cmd.early_version(['--version']) - - def test_execute_from_commandline(self, app): - cmd = MockCommand(app=app) - args1, kwargs1 = cmd.execute_from_commandline() # sys.argv - assert args1 == cmd.mock_args - assert kwargs1['foo'] == 'bar' - assert kwargs1.get('prog_name') - args2, kwargs2 = cmd.execute_from_commandline(['foo']) # pass list - assert args2 == cmd.mock_args - assert kwargs2['foo'] == 'bar' - assert kwargs2['prog_name'] == 'foo' - - def test_with_bogus_args(self, app): - with mock.stdouts() as (_, stderr): - cmd = MockCommand(app=app) - cmd.supports_args = False - with pytest.raises(SystemExit): - cmd.execute_from_commandline(argv=['--bogus']) - assert stderr.getvalue() - assert 'Unrecognized' in stderr.getvalue() - - def test_with_custom_config_module(self, app): - prev = os.environ.pop('CELERY_CONFIG_MODULE', None) - try: - cmd = MockCommand(app=app) - cmd.setup_app_from_commandline(['--config=foo.bar.baz']) - assert 
os.environ.get('CELERY_CONFIG_MODULE') == 'foo.bar.baz' - finally: - if prev: - os.environ['CELERY_CONFIG_MODULE'] = prev - else: - os.environ.pop('CELERY_CONFIG_MODULE', None) - - def test_with_custom_broker(self, app): - prev = os.environ.pop('CELERY_BROKER_URL', None) - try: - cmd = MockCommand(app=app) - cmd.setup_app_from_commandline(['--broker=xyzza://']) - assert os.environ.get('CELERY_BROKER_URL') == 'xyzza://' - finally: - if prev: - os.environ['CELERY_BROKER_URL'] = prev - else: - os.environ.pop('CELERY_BROKER_URL', None) - - def test_with_custom_result_backend(self, app): - prev = os.environ.pop('CELERY_RESULT_BACKEND', None) - try: - cmd = MockCommand(app=app) - cmd.setup_app_from_commandline(['--result-backend=xyzza://']) - assert os.environ.get('CELERY_RESULT_BACKEND') == 'xyzza://' - finally: - if prev: - os.environ['CELERY_RESULT_BACKEND'] = prev - else: - os.environ.pop('CELERY_RESULT_BACKEND', None) - - def test_with_custom_app(self, app): - cmd = MockCommand(app=app) - appstr = '.'.join([__name__, 'APP']) - cmd.setup_app_from_commandline([f'--app={appstr}', - '--loglevel=INFO']) - assert cmd.app is APP - cmd.setup_app_from_commandline(['-A', appstr, - '--loglevel=INFO']) - assert cmd.app is APP - - def test_setup_app_sets_quiet(self, app): - cmd = MockCommand(app=app) - cmd.setup_app_from_commandline(['-q']) - assert cmd.quiet - cmd2 = MockCommand(app=app) - cmd2.setup_app_from_commandline(['--quiet']) - assert cmd2.quiet - - def test_setup_app_sets_chdir(self, app): - with patch('os.chdir') as chdir: - cmd = MockCommand(app=app) - cmd.setup_app_from_commandline(['--workdir=/opt']) - chdir.assert_called_with('/opt') - - def test_setup_app_sets_loader(self, app): - prev = os.environ.get('CELERY_LOADER') - try: - cmd = MockCommand(app=app) - cmd.setup_app_from_commandline(['--loader=X.Y:Z']) - assert os.environ['CELERY_LOADER'] == 'X.Y:Z' - finally: - if prev is not None: - os.environ['CELERY_LOADER'] = prev - else: - del(os.environ['CELERY_LOADER']) - - def test_setup_app_no_respect(self, app): - cmd = MockCommand(app=app) - cmd.respects_app_option = False - with patch('celery.bin.base.Celery') as cp: - cmd.setup_app_from_commandline(['--app=x.y:z']) - cp.assert_called() - - def test_setup_app_custom_app(self, app): - cmd = MockCommand(app=app) - app = cmd.app = Mock() - app.user_options = {'preload': None} - cmd.setup_app_from_commandline([]) - assert cmd.app == app - - def test_find_app_suspects(self, app): - cmd = MockCommand(app=app) - assert cmd.find_app('t.unit.bin.proj.app') - assert cmd.find_app('t.unit.bin.proj') - assert cmd.find_app('t.unit.bin.proj:hello') - assert cmd.find_app('t.unit.bin.proj.hello') - assert cmd.find_app('t.unit.bin.proj.app:app') - assert cmd.find_app('t.unit.bin.proj.app.app') - with pytest.raises(AttributeError, match='is the celery module'): - cmd.find_app('t.unit.bin.proj.app2') - with pytest.raises(AttributeError): - cmd.find_app('t.unit.bin') - - with pytest.raises(AttributeError): - cmd.find_app(__name__) - - def test_ask(self, app, patching): - try: - input = patching('celery.bin.base.input') - except AttributeError: - input = patching('builtins.input') - cmd = MockCommand(app=app) - input.return_value = 'yes' - assert cmd.ask('q', ('yes', 'no'), 'no') == 'yes' - input.return_value = 'nop' - assert cmd.ask('q', ('yes', 'no'), 'no') == 'no' - - def test_host_format(self, app): - cmd = MockCommand(app=app) - with patch('celery.utils.nodenames.gethostname') as hn: - hn.return_value = 'blacktron.example.com' - assert 
cmd.host_format('') == '' - assert (cmd.host_format('celery@%h') == - 'celery@blacktron.example.com') - assert cmd.host_format('celery@%d') == 'celery@example.com' - assert cmd.host_format('celery@%n') == 'celery@blacktron' - - def test_say_chat_quiet(self, app): - cmd = MockCommand(app=app) - cmd.quiet = True - assert cmd.say_chat('<-', 'foo', 'foo') is None - - def test_say_chat_show_body(self, app): - cmd = MockCommand(app=app) - cmd.out = Mock() - cmd.show_body = True - cmd.say_chat('->', 'foo', 'body') - cmd.out.assert_called_with('body') - - def test_say_chat_no_body(self, app): - cmd = MockCommand(app=app) - cmd.out = Mock() - cmd.show_body = False - cmd.say_chat('->', 'foo', 'body') - - @pytest.mark.usefixtures('depends_on_current_app') - def test_with_cmdline_config(self, app): - cmd = MockCommand(app=app) - cmd.enable_config_from_cmdline = True - cmd.namespace = 'worker' - rest = cmd.setup_app_from_commandline(argv=[ - '--loglevel=INFO', '--', - 'result.backend=redis://backend.example.com', - 'broker.url=amqp://broker.example.com', - '.prefetch_multiplier=100']) - assert cmd.app.conf.result_backend == 'redis://backend.example.com' - assert cmd.app.conf.broker_url == 'amqp://broker.example.com' - assert cmd.app.conf.worker_prefetch_multiplier == 100 - assert rest == ['--loglevel=INFO'] - - cmd.app = None - cmd.get_app = Mock(name='get_app') - cmd.get_app.return_value = app - app.user_options['preload'] = [ - Option('--foo', action='store_true'), - ] - cmd.setup_app_from_commandline(argv=[ - '--foo', '--loglevel=INFO', '--', - 'broker.url=amqp://broker.example.com', - '.prefetch_multiplier=100']) - assert cmd.app is cmd.get_app() - - def test_get_default_app(self, app, patching): - patching('celery._state.get_current_app') - cmd = MockCommand(app=app) - from celery._state import get_current_app - assert cmd._get_default_app() is get_current_app() - - def test_set_colored(self, app): - cmd = MockCommand(app=app) - cmd.colored = 'foo' - assert cmd.colored == 'foo' - - def test_set_no_color(self, app): - cmd = MockCommand(app=app) - cmd.no_color = False - _ = cmd.colored # noqa - cmd.no_color = True - assert not cmd.colored.enabled - - def test_find_app(self, app): - cmd = MockCommand(app=app) - with patch('celery.utils.imports.symbol_by_name') as sbn: - from types import ModuleType - x = ModuleType(bytes_if_py2('proj')) - - def on_sbn(*args, **kwargs): - - def after(*args, **kwargs): - x.app = 'quick brown fox' - x.__path__ = None - return x - sbn.side_effect = after - return x - sbn.side_effect = on_sbn - x.__path__ = [True] - assert cmd.find_app('proj') == 'quick brown fox' - - def test_parse_preload_options_shortopt(self): - - class TestCommand(Command): - - def add_preload_arguments(self, parser): - parser.add_argument('-s', action='store', dest='silent') - cmd = TestCommand() - acc, _ = cmd.parse_preload_options(['-s', 'yes']) - assert acc.get('silent') == 'yes' - - def test_parse_preload_options_with_equals_and_append(self): - - class TestCommand(Command): - - def add_preload_arguments(self, parser): - parser.add_argument('--zoom', action='append', default=[]) - cmd = TestCommand() - acc, _ = cmd.parse_preload_options(['--zoom=1', '--zoom=2']) - - assert acc == {'zoom': ['1', '2']} - - def test_parse_preload_options_without_equals_and_append(self): - cmd = Command() - opt = Option('--zoom', action='append', default=[]) - cmd.preload_options = (opt,) - acc, _ = cmd.parse_preload_options(['--zoom', '1', '--zoom', '2']) - - assert acc == {'zoom': ['1', '2']} diff --git 
a/t/unit/bin/test_beat.py b/t/unit/bin/test_beat.py deleted file mode 100644 index 4e51afbb9b3..00000000000 --- a/t/unit/bin/test_beat.py +++ /dev/null @@ -1,144 +0,0 @@ -import logging -import sys - -import pytest -from case import Mock, mock, patch - -from celery import beat, platforms -from celery.apps import beat as beatapp -from celery.bin import beat as beat_bin - - -def MockBeat(*args, **kwargs): - class _Beat(beatapp.Beat): - Service = Mock( - name='MockBeat.Service', - return_value=Mock(name='MockBeat()', max_interval=3.3), - ) - b = _Beat(*args, **kwargs) - sched = b.Service.return_value.get_scheduler = Mock() - sched.return_value.max_interval = 3.3 - return b - - -class test_Beat: - - def test_loglevel_string(self): - b = beatapp.Beat(app=self.app, loglevel='DEBUG', - redirect_stdouts=False) - assert b.loglevel == logging.DEBUG - - b2 = beatapp.Beat(app=self.app, loglevel=logging.DEBUG, - redirect_stdouts=False) - assert b2.loglevel == logging.DEBUG - - def test_colorize(self): - self.app.log.setup = Mock() - b = beatapp.Beat(app=self.app, no_color=True, - redirect_stdouts=False) - b.setup_logging() - self.app.log.setup.assert_called() - assert not self.app.log.setup.call_args[1]['colorize'] - - def test_init_loader(self): - b = beatapp.Beat(app=self.app, redirect_stdouts=False) - b.init_loader() - - def test_process_title(self): - b = beatapp.Beat(app=self.app, redirect_stdouts=False) - b.set_process_title() - - def test_run(self): - b = MockBeat(app=self.app, redirect_stdouts=False) - b.install_sync_handler = Mock(name='beat.install_sync_handler') - b.Service.return_value.max_interval = 3.0 - b.run() - b.Service().start.assert_called_with() - - def psig(self, fun, *args, **kwargs): - handlers = {} - - class Signals(platforms.Signals): - - def __setitem__(self, sig, handler): - handlers[sig] = handler - - p, platforms.signals = platforms.signals, Signals() - try: - fun(*args, **kwargs) - return handlers - finally: - platforms.signals = p - - def test_install_sync_handler(self): - b = beatapp.Beat(app=self.app, redirect_stdouts=False) - clock = beat.Service(app=self.app) - clock.start = Mock(name='beat.Service().start') - clock.sync = Mock(name='beat.Service().sync') - handlers = self.psig(b.install_sync_handler, clock) - with pytest.raises(SystemExit): - handlers['SIGINT']('SIGINT', object()) - clock.sync.assert_called_with() - - @mock.restore_logging() - def test_setup_logging(self): - try: - # py3k - delattr(sys.stdout, 'logger') - except AttributeError: - pass - b = beatapp.Beat(app=self.app, redirect_stdouts=False) - b.redirect_stdouts = False - b.app.log.already_setup = False - b.setup_logging() - with pytest.raises(AttributeError): - sys.stdout.logger - - import sys - orig_stdout = sys.__stdout__ - - @patch('celery.apps.beat.logger') - def test_logs_errors(self, logger): - b = MockBeat( - app=self.app, redirect_stdouts=False, socket_timeout=None, - ) - b.install_sync_handler = Mock('beat.install_sync_handler') - b.install_sync_handler.side_effect = RuntimeError('xxx') - with mock.restore_logging(): - with pytest.raises(RuntimeError): - b.start_scheduler() - logger.critical.assert_called() - - @patch('celery.platforms.create_pidlock') - def test_using_pidfile(self, create_pidlock): - b = MockBeat(app=self.app, pidfile='pidfilelockfilepid', - socket_timeout=None, redirect_stdouts=False) - b.install_sync_handler = Mock(name='beat.install_sync_handler') - with mock.stdouts(): - b.start_scheduler() - create_pidlock.assert_called() - - -class test_div: - - def setup(self): - 
self.Beat = self.app.Beat = self.patching('celery.apps.beat.Beat') - self.detached = self.patching('celery.bin.beat.detached') - self.Beat.__name__ = 'Beat' - - def test_main(self): - sys.argv = [sys.argv[0], '-s', 'foo'] - beat_bin.main(app=self.app) - self.Beat().run.assert_called_with() - - def test_detach(self): - cmd = beat_bin.beat() - cmd.app = self.app - cmd.run(detach=True) - self.detached.assert_called() - - def test_parse_options(self): - cmd = beat_bin.beat() - cmd.app = self.app - options, args = cmd.parse_options('celery beat', ['-s', 'foo']) - assert options['schedule'] == 'foo' diff --git a/t/unit/bin/test_call.py b/t/unit/bin/test_call.py deleted file mode 100644 index 58f50fa11b8..00000000000 --- a/t/unit/bin/test_call.py +++ /dev/null @@ -1,41 +0,0 @@ -from datetime import datetime - -import pytest -from case import patch -from kombu.utils.json import dumps - -from celery.bin.call import call -from celery.five import WhateverIO - - -class test_call: - - def setup(self): - - @self.app.task(shared=False) - def add(x, y): - return x + y - self.add = add - - @patch('celery.app.base.Celery.send_task') - def test_run(self, send_task): - a = call(app=self.app, stderr=WhateverIO(), stdout=WhateverIO()) - a.run(self.add.name) - send_task.assert_called() - - a.run(self.add.name, - args=dumps([4, 4]), - kwargs=dumps({'x': 2, 'y': 2})) - assert send_task.call_args[1]['args'], [4 == 4] - assert send_task.call_args[1]['kwargs'] == {'x': 2, 'y': 2} - - a.run(self.add.name, expires=10, countdown=10) - assert send_task.call_args[1]['expires'] == 10 - assert send_task.call_args[1]['countdown'] == 10 - - now = datetime.now() - iso = now.isoformat() - a.run(self.add.name, expires=iso) - assert send_task.call_args[1]['expires'] == now - with pytest.raises(ValueError): - a.run(self.add.name, expires='foobaribazibar') diff --git a/t/unit/bin/test_celery.py b/t/unit/bin/test_celery.py deleted file mode 100644 index c36efde27ab..00000000000 --- a/t/unit/bin/test_celery.py +++ /dev/null @@ -1,295 +0,0 @@ -import sys - -import pytest -from case import Mock, patch - -from celery import __main__ -from celery.bin import celery as mod -from celery.bin.base import Error -from celery.bin.celery import (CeleryCommand, Command, determine_exit_status, - help) -from celery.bin.celery import main as mainfun -from celery.bin.celery import multi, report -from celery.five import WhateverIO -from celery.platforms import EX_FAILURE, EX_OK, EX_USAGE - - -class MyApp(object): - user_options = {'preload': None} - - -APP = MyApp() # <-- Used by test_short_and_long_arguments_be_the_same - - -class test__main__: - - def test_main(self): - with patch('celery.__main__.maybe_patch_concurrency') as mpc: - with patch('celery.bin.celery.main') as main: - __main__.main() - mpc.assert_called_with() - main.assert_called_with() - - def test_main__multi(self): - with patch('celery.__main__.maybe_patch_concurrency') as mpc: - with patch('celery.bin.celery.main') as main: - prev, sys.argv = sys.argv, ['foo', 'multi'] - try: - __main__.main() - mpc.assert_not_called() - main.assert_called_with() - finally: - sys.argv = prev - - -class test_Command: - - def test_Error_repr(self): - x = Error('something happened') - assert x.status is not None - assert x.reason - assert str(x) - - def setup(self): - self.out = WhateverIO() - self.err = WhateverIO() - self.cmd = Command(self.app, stdout=self.out, stderr=self.err) - - def test_error(self): - self.cmd.out = Mock() - self.cmd.error('FOO') - self.cmd.out.assert_called() - - def 
test_out(self): - f = Mock() - self.cmd.out('foo', f) - - def test_call(self): - - def ok_run(): - pass - - self.cmd.run = ok_run - assert self.cmd() == EX_OK - - def error_run(): - raise Error('error', EX_FAILURE) - self.cmd.run = error_run - assert self.cmd() == EX_FAILURE - - def test_run_from_argv(self): - with pytest.raises(NotImplementedError): - self.cmd.run_from_argv('prog', ['foo', 'bar']) - - def test_pretty_list(self): - assert self.cmd.pretty([])[1] == '- empty -' - assert 'bar', self.cmd.pretty(['foo' in 'bar'][1]) - - def test_pretty_dict(self, text='the quick brown fox'): - assert 'OK' in str(self.cmd.pretty({'ok': text})[0]) - assert 'ERROR' in str(self.cmd.pretty({'error': text})[0]) - - def test_pretty(self): - assert 'OK' in str(self.cmd.pretty('the quick brown')) - assert 'OK' in str(self.cmd.pretty(object())) - assert 'OK' in str(self.cmd.pretty({'foo': 'bar'})) - - -class test_report: - - def test_run(self): - out = WhateverIO() - r = report(app=self.app, stdout=out) - assert r.run() == EX_OK - assert out.getvalue() - - -class test_help: - - def test_run(self): - out = WhateverIO() - h = help(app=self.app, stdout=out) - h.parser = Mock() - assert h.run() == EX_USAGE - assert out.getvalue() - assert h.usage('help') - h.parser.print_help.assert_called_with() - - -class test_CeleryCommand: - - def test_execute_from_commandline(self): - x = CeleryCommand(app=self.app) - x.handle_argv = Mock() - x.handle_argv.return_value = 1 - with pytest.raises(SystemExit): - x.execute_from_commandline() - - x.handle_argv.return_value = True - with pytest.raises(SystemExit): - x.execute_from_commandline() - - x.handle_argv.side_effect = KeyboardInterrupt() - with pytest.raises(SystemExit): - x.execute_from_commandline() - - x.respects_app_option = True - with pytest.raises(SystemExit): - x.execute_from_commandline(['celery', 'multi']) - assert not x.respects_app_option - x.respects_app_option = True - with pytest.raises(SystemExit): - x.execute_from_commandline(['manage.py', 'celery', 'multi']) - assert not x.respects_app_option - - def test_with_pool_option(self): - x = CeleryCommand(app=self.app) - assert x.with_pool_option(['celery', 'events']) is None - assert x.with_pool_option(['celery', 'worker']) - assert x.with_pool_option(['manage.py', 'celery', 'worker']) - - def test_load_extensions_no_commands(self): - with patch('celery.bin.celery.Extensions') as Ext: - ext = Ext.return_value = Mock(name='Extension') - ext.load.return_value = None - x = CeleryCommand(app=self.app) - x.load_extension_commands() - - def test_load_extensions_commands(self): - with patch('celery.bin.celery.Extensions') as Ext: - prev, mod.command_classes = list(mod.command_classes), Mock() - try: - ext = Ext.return_value = Mock(name='Extension') - ext.load.return_value = ['foo', 'bar'] - x = CeleryCommand(app=self.app) - x.load_extension_commands() - mod.command_classes.append.assert_called_with( - ('Extensions', ['foo', 'bar'], 'magenta'), - ) - finally: - mod.command_classes = prev - - def test_determine_exit_status(self): - assert determine_exit_status('true') == EX_OK - assert determine_exit_status('') == EX_FAILURE - - def test_relocate_args_from_start(self): - x = CeleryCommand(app=self.app) - assert x._relocate_args_from_start(None) == [] - relargs1 = x._relocate_args_from_start([ - '-l', 'debug', 'worker', '-c', '3', '--foo', - ]) - assert relargs1 == ['worker', '-c', '3', '--foo', '-l', 'debug'] - relargs2 = x._relocate_args_from_start([ - '--pool=gevent', '-l', 'debug', 'worker', '--foo', '-c', '3', - 
]) - assert relargs2 == [ - 'worker', '--foo', '-c', '3', - '--pool=gevent', '-l', 'debug', - ] - assert x._relocate_args_from_start(['foo', '--foo=1']) == [ - 'foo', '--foo=1', - ] - - def test_register_command(self): - prev, CeleryCommand.commands = dict(CeleryCommand.commands), {} - try: - fun = Mock(name='fun') - CeleryCommand.register_command(fun, name='foo') - assert CeleryCommand.commands['foo'] is fun - finally: - CeleryCommand.commands = prev - - def test_handle_argv(self): - x = CeleryCommand(app=self.app) - x.execute = Mock() - x.handle_argv('celery', []) - x.execute.assert_called_with('help', ['help']) - - x.handle_argv('celery', ['start', 'foo']) - x.execute.assert_called_with('start', ['start', 'foo']) - - def test_short_and_long_arguments_be_the_same(self): - for arg in "--app", "-A": - appstr = '.'.join([__name__, 'APP']) - x = CeleryCommand(app=self.app) - x.execute = Mock() - with pytest.raises(SystemExit): - x.execute_from_commandline(['celery', arg, appstr, 'worker']) - assert x.execute.called - assert x.execute.call_args[0] - assert x.execute.call_args[0][0] == "worker" - - def test_execute(self): - x = CeleryCommand(app=self.app) - Help = x.commands['help'] = Mock() - help = Help.return_value = Mock() - x.execute('fooox', ['a']) - help.run_from_argv.assert_called_with(x.prog_name, [], command='help') - help.reset() - x.execute('help', ['help']) - help.run_from_argv.assert_called_with(x.prog_name, [], command='help') - - Dummy = x.commands['dummy'] = Mock() - dummy = Dummy.return_value = Mock() - exc = dummy.run_from_argv.side_effect = Error( - 'foo', status='EX_FAILURE', - ) - x.on_error = Mock(name='on_error') - help.reset() - x.execute('dummy', ['dummy']) - x.on_error.assert_called_with(exc) - dummy.run_from_argv.assert_called_with( - x.prog_name, [], command='dummy', - ) - help.run_from_argv.assert_called_with( - x.prog_name, [], command='help', - ) - - exc = dummy.run_from_argv.side_effect = x.UsageError('foo') - x.on_usage_error = Mock() - x.execute('dummy', ['dummy']) - x.on_usage_error.assert_called_with(exc) - - def test_on_usage_error(self): - x = CeleryCommand(app=self.app) - x.error = Mock() - x.on_usage_error(x.UsageError('foo'), command=None) - x.error.assert_called() - x.on_usage_error(x.UsageError('foo'), command='dummy') - - def test_prepare_prog_name(self): - x = CeleryCommand(app=self.app) - main = Mock(name='__main__') - main.__file__ = '/opt/foo.py' - with patch.dict(sys.modules, __main__=main): - assert x.prepare_prog_name('__main__.py') == '/opt/foo.py' - assert x.prepare_prog_name('celery') == 'celery' - - -class test_multi: - - def test_get_options(self): - assert multi(app=self.app).get_options() is None - - def test_run_from_argv(self): - with patch('celery.bin.multi.MultiTool') as MultiTool: - m = MultiTool.return_value = Mock() - multi(self.app).run_from_argv('celery', ['arg'], command='multi') - m.execute_from_commandline.assert_called_with(['multi', 'arg']) - - -class test_main: - - @patch('celery.bin.celery.CeleryCommand') - def test_main(self, Command): - cmd = Command.return_value = Mock() - mainfun() - cmd.execute_from_commandline.assert_called_with(None) - - @patch('celery.bin.celery.CeleryCommand') - def test_main_KeyboardInterrupt(self, Command): - cmd = Command.return_value = Mock() - cmd.execute_from_commandline.side_effect = KeyboardInterrupt() - mainfun() - cmd.execute_from_commandline.assert_called_with(None) diff --git a/t/unit/bin/test_celeryd_detach.py b/t/unit/bin/test_celeryd_detach.py deleted file mode 100644 index 
08c55cc5b62..00000000000 --- a/t/unit/bin/test_celeryd_detach.py +++ /dev/null @@ -1,126 +0,0 @@ -import pytest -from case import Mock, mock, patch - -from celery.bin.celeryd_detach import detach, detached_celeryd, main -from celery.platforms import IS_WINDOWS - -if not IS_WINDOWS: - class test_detached: - - @patch('celery.bin.celeryd_detach.detached') - @patch('os.execv') - @patch('celery.bin.celeryd_detach.logger') - @patch('celery.app.log.Logging.setup_logging_subsystem') - def test_execs(self, setup_logs, logger, execv, detached): - context = detached.return_value = Mock() - context.__enter__ = Mock() - context.__exit__ = Mock() - - detach('/bin/boo', ['a', 'b', 'c'], logfile='/var/log', - pidfile='/var/pid', hostname='foo@example.com') - detached.assert_called_with( - '/var/log', '/var/pid', None, None, None, None, False, - after_forkers=False, - ) - execv.assert_called_with('/bin/boo', ['/bin/boo', 'a', 'b', 'c']) - - r = detach('/bin/boo', ['a', 'b', 'c'], - logfile='/var/log', pidfile='/var/pid', - executable='/bin/foo', app=self.app) - execv.assert_called_with('/bin/foo', ['/bin/foo', 'a', 'b', 'c']) - - execv.side_effect = Exception('foo') - r = detach( - '/bin/boo', ['a', 'b', 'c'], - logfile='/var/log', pidfile='/var/pid', - hostname='foo@example.com', app=self.app) - context.__enter__.assert_called_with() - logger.critical.assert_called() - setup_logs.assert_called_with( - 'ERROR', '/var/log', hostname='foo@example.com') - assert r == 1 - - self.patching('celery.current_app') - from celery import current_app - r = detach( - '/bin/boo', ['a', 'b', 'c'], - logfile='/var/log', pidfile='/var/pid', - hostname='foo@example.com', app=None) - current_app.log.setup_logging_subsystem.assert_called_with( - 'ERROR', '/var/log', hostname='foo@example.com', - ) - - -class test_PartialOptionParser: - - def test_parser(self): - x = detached_celeryd(self.app) - p = x.create_parser('celeryd_detach') - options, leftovers = p.parse_known_args([ - '--logfile=foo', '--fake', '--enable', - 'a', 'b', '-c1', '-d', '2', - ]) - assert options.logfile == 'foo' - assert leftovers, ['--enable', '-c1', '-d' == '2'] - options, leftovers = p.parse_known_args([ - '--fake', '--enable', - '--pidfile=/var/pid/foo.pid', - 'a', 'b', '-c1', '-d', '2', - ]) - assert options.pidfile == '/var/pid/foo.pid' - - with mock.stdouts(): - with pytest.raises(SystemExit): - p.parse_args(['--logfile']) - p._option_string_actions['--logfile'].nargs = 2 - with pytest.raises(SystemExit): - p.parse_args(['--logfile=a']) - with pytest.raises(SystemExit): - p.parse_args(['--fake=abc']) - - assert p._option_string_actions['--logfile'].nargs == 2 - p.parse_args(['--logfile', 'a', 'b']) - - -class test_Command: - argv = [ - '--foobar=10,2', '-c', '1', - '--logfile=/var/log', '-lDEBUG', - '--', '.disable_rate_limits=1', - ] - - def test_parse_options(self): - x = detached_celeryd(app=self.app) - _, argv = x._split_command_line_config(self.argv) - o, l = x.parse_options('cd', argv) - assert o.logfile == '/var/log' - assert l == [ - '--foobar=10,2', '-c', '1', - '-lDEBUG', '--logfile=/var/log', - '--pidfile=celeryd.pid', - ] - x.parse_options('cd', []) # no args - - @patch('sys.exit') - @patch('celery.bin.celeryd_detach.detach') - def test_execute_from_commandline(self, detach, exit): - x = detached_celeryd(app=self.app) - x.execute_from_commandline(self.argv) - exit.assert_called() - detach.assert_called_with( - path=x.execv_path, uid=None, gid=None, - umask=None, fake=False, logfile='/var/log', pidfile='celeryd.pid', - workdir=None, 
executable=None, hostname=None, - argv=x.execv_argv + [ - '-c', '1', '-lDEBUG', - '--logfile=/var/log', '--pidfile=celeryd.pid', - '--', '.disable_rate_limits=1' - ], - app=self.app, - ) - - @patch('celery.bin.celeryd_detach.detached_celeryd') - def test_main(self, command): - c = command.return_value = Mock() - main(self.app) - c.execute_from_commandline.assert_called_with() diff --git a/t/unit/bin/test_celeryevdump.py b/t/unit/bin/test_celeryevdump.py deleted file mode 100644 index b142889cb8e..00000000000 --- a/t/unit/bin/test_celeryevdump.py +++ /dev/null @@ -1,63 +0,0 @@ -from time import time - -from case import Mock, patch - -from celery.events.dumper import Dumper, evdump, humanize_type -from celery.five import WhateverIO - - -class test_Dumper: - - def setup(self): - self.out = WhateverIO() - self.dumper = Dumper(out=self.out) - - def test_humanize_type(self): - assert humanize_type('worker-offline') == 'shutdown' - assert humanize_type('task-started') == 'task started' - - def test_format_task_event(self): - self.dumper.format_task_event( - 'worker@example.com', time(), 'task-started', 'tasks.add', {}) - assert self.out.getvalue() - - def test_on_event(self): - event = { - 'hostname': 'worker@example.com', - 'timestamp': time(), - 'uuid': '1ef', - 'name': 'tasks.add', - 'args': '(2, 2)', - 'kwargs': '{}', - } - self.dumper.on_event(dict(event, type='task-received')) - assert self.out.getvalue() - self.dumper.on_event(dict(event, type='task-revoked')) - self.dumper.on_event(dict(event, type='worker-online')) - - @patch('celery.events.EventReceiver.capture') - def test_evdump(self, capture): - capture.side_effect = KeyboardInterrupt() - evdump(app=self.app) - - def test_evdump_error_handler(self): - app = Mock(name='app') - with patch('celery.events.dumper.Dumper') as Dumper: - Dumper.return_value = Mock(name='dumper') - recv = app.events.Receiver.return_value = Mock() - - def se(*_a, **_k): - recv.capture.side_effect = SystemExit() - raise KeyError() - recv.capture.side_effect = se - - Conn = app.connection_for_read.return_value = Mock(name='conn') - conn = Conn.clone.return_value = Mock(name='cloned_conn') - conn.connection_errors = (KeyError,) - conn.channel_errors = () - - evdump(app) - conn.ensure_connection.assert_called() - errback = conn.ensure_connection.call_args[0][0] - errback(KeyError(), 1) - conn.as_uri.assert_called() diff --git a/t/unit/bin/test_control.py b/t/unit/bin/test_control.py deleted file mode 100644 index 8494da6cf68..00000000000 --- a/t/unit/bin/test_control.py +++ /dev/null @@ -1,125 +0,0 @@ -import pytest -from case import Mock, patch - -from celery.bin.base import Error -from celery.bin.control import _RemoteControl, control, inspect, status -from celery.five import WhateverIO - - -class test_RemoteControl: - - def test_call_interface(self): - with pytest.raises(NotImplementedError): - _RemoteControl(app=self.app).call() - - -class test_inspect: - - def test_usage(self): - assert inspect(app=self.app).usage('foo') - - def test_command_info(self): - i = inspect(app=self.app) - assert i.get_command_info( - 'ping', help=True, color=i.colored.red, app=self.app, - ) - - def test_list_commands_color(self): - i = inspect(app=self.app) - assert i.list_commands(help=True, color=i.colored.red, app=self.app) - assert i.list_commands(help=False, color=None, app=self.app) - - def test_epilog(self): - assert inspect(app=self.app).epilog - - def test_do_call_method_sql_transport_type(self): - self.app.connection = Mock() - conn = self.app.connection.return_value = 
Mock(name='Connection') - conn.transport.driver_type = 'sql' - i = inspect(app=self.app) - with pytest.raises(i.Error): - i.do_call_method(['ping']) - - def test_say_directions(self): - i = inspect(self.app) - i.out = Mock() - i.quiet = True - i.say_chat('<-', 'hello out') - i.out.assert_not_called() - - i.say_chat('->', 'hello in') - i.out.assert_called() - - i.quiet = False - i.out.reset_mock() - i.say_chat('<-', 'hello out', 'body') - i.out.assert_called() - - @patch('celery.app.control.Control.inspect') - def test_run(self, real): - out = WhateverIO() - i = inspect(app=self.app, stdout=out) - with pytest.raises(Error): - i.run() - with pytest.raises(Error): - i.run('help') - with pytest.raises(Error): - i.run('xyzzybaz') - - i.run('ping') - real.assert_called() - i.run('ping', destination='foo,bar') - assert real.call_args[1]['destination'], ['foo' == 'bar'] - assert real.call_args[1]['timeout'] == 0.2 - callback = real.call_args[1]['callback'] - - callback({'foo': {'ok': 'pong'}}) - assert 'OK' in out.getvalue() - - with patch('celery.bin.control.dumps') as dumps: - i.run('ping', json=True) - dumps.assert_called() - - instance = real.return_value = Mock() - instance._request.return_value = None - with pytest.raises(Error): - i.run('ping') - - out.seek(0) - out.truncate() - i.quiet = True - i.say_chat('<-', 'hello') - assert not out.getvalue() - - -class test_control: - - def control(self, patch_call, *args, **kwargs): - kwargs.setdefault('app', Mock(name='app')) - c = control(*args, **kwargs) - if patch_call: - c.call = Mock(name='control.call') - return c - - def test_call(self): - i = self.control(False) - i.call('foo', arguments={'kw': 2}) - i.app.control.broadcast.assert_called_with( - 'foo', arguments={'kw': 2}, reply=True) - - -class test_status: - - @patch('celery.bin.control.inspect') - def test_run(self, inspect_): - out, err = WhateverIO(), WhateverIO() - ins = inspect_.return_value = Mock() - ins.run.return_value = [] - s = status(self.app, stdout=out, stderr=err) - with pytest.raises(Error): - s.run() - - ins.run.return_value = ['a', 'b', 'c'] - s.run() - assert '3 nodes online' in out.getvalue() - s.run(quiet=True) diff --git a/t/unit/bin/test_events.py b/t/unit/bin/test_events.py deleted file mode 100644 index dd79a5311b9..00000000000 --- a/t/unit/bin/test_events.py +++ /dev/null @@ -1,89 +0,0 @@ -import importlib -from functools import wraps - -from case import patch, skip - -from celery.bin import events - - -def _old_patch(module, name, mocked): - module = importlib.import_module(module) - - def _patch(fun): - - @wraps(fun) - def __patched(*args, **kwargs): - prev = getattr(module, name) - setattr(module, name, mocked) - try: - return fun(*args, **kwargs) - finally: - setattr(module, name, prev) - return __patched - return _patch - - -class MockCommand: - executed = [] - - def execute_from_commandline(self, **kwargs): - self.executed.append(True) - - -def proctitle(prog, info=None): - proctitle.last = (prog, info) - - -proctitle.last = () # noqa: E305 - - -class test_events: - - def setup(self): - self.ev = events.events(app=self.app) - - @_old_patch('celery.events.dumper', 'evdump', - lambda **kw: 'me dumper, you?') - @_old_patch('celery.bin.events', 'set_process_title', proctitle) - def test_run_dump(self): - assert self.ev.run(dump=True), 'me dumper == you?' 
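
Note for reviewers: a recurring pitfall in several of the tests removed here (other instances appear in test_call.py, test_celery.py, test_celeryd_detach.py and test_control.py in the hunks above) is the two-argument assert form, where the second operand is only the assertion message: `assert expr, msg` passes whenever `expr` is truthy and never compares the two values. The assertion immediately above illustrates it; the patched `evdump` returns `'me dumper, you?'`, so the intended check was an equality (test_run_dump continues below):

    # As written: passes for any truthy return value; the "comparison"
    # string is only the assertion message and is never evaluated against
    # the result.
    assert self.ev.run(dump=True), 'me dumper == you?'

    # Intended: compare against the value the patched evdump returns.
    assert self.ev.run(dump=True) == 'me dumper, you?'
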
- assert 'celery events:dump' in proctitle.last[0] - - @skip.unless_module('curses', import_errors=(ImportError, OSError)) - def test_run_top(self): - @_old_patch('celery.events.cursesmon', 'evtop', - lambda **kw: 'me top, you?') - @_old_patch('celery.bin.events', 'set_process_title', proctitle) - def _inner(): - assert self.ev.run(), 'me top == you?' - assert 'celery events:top' in proctitle.last[0] - return _inner() - - @_old_patch('celery.events.snapshot', 'evcam', - lambda *a, **k: (a, k)) - @_old_patch('celery.bin.events', 'set_process_title', proctitle) - def test_run_cam(self): - a, kw = self.ev.run(camera='foo.bar.baz', logfile='logfile') - assert a[0] == 'foo.bar.baz' - assert kw['freq'] == 1.0 - assert kw['maxrate'] is None - assert kw['loglevel'] == 'INFO' - assert kw['logfile'] == 'logfile' - assert 'celery events:cam' in proctitle.last[0] - - @patch('celery.events.snapshot.evcam') - @patch('celery.bin.events.detached') - def test_run_cam_detached(self, detached, evcam): - self.ev.prog_name = 'celery events' - self.ev.run_evcam('myapp.Camera', detach=True) - detached.assert_called() - evcam.assert_called() - - def test_get_options(self): - assert not self.ev.get_options() - - @_old_patch('celery.bin.events', 'events', MockCommand) - def test_main(self): - MockCommand.executed = [] - events.main() - assert MockCommand.executed diff --git a/t/unit/bin/test_list.py b/t/unit/bin/test_list.py deleted file mode 100644 index 361ac3fe9b5..00000000000 --- a/t/unit/bin/test_list.py +++ /dev/null @@ -1,26 +0,0 @@ -import pytest -from case import Mock - -from celery.bin.base import Error -from celery.bin.list import list_ -from celery.utils.text import WhateverIO - - -class test_list: - - def test_list_bindings_no_support(self): - l = list_(app=self.app, stderr=WhateverIO()) - management = Mock() - management.get_bindings.side_effect = NotImplementedError() - with pytest.raises(Error): - l.list_bindings(management) - - def test_run(self): - l = list_(app=self.app, stderr=WhateverIO()) - l.run('bindings') - - with pytest.raises(Error): - l.run(None) - - with pytest.raises(Error): - l.run('foo') diff --git a/t/unit/bin/test_migrate.py b/t/unit/bin/test_migrate.py deleted file mode 100644 index a25e6539516..00000000000 --- a/t/unit/bin/test_migrate.py +++ /dev/null @@ -1,25 +0,0 @@ -import pytest -from case import Mock, patch - -from celery.bin.migrate import migrate -from celery.five import WhateverIO - - -class test_migrate: - - @patch('celery.contrib.migrate.migrate_tasks') - def test_run(self, migrate_tasks): - out = WhateverIO() - m = migrate(app=self.app, stdout=out, stderr=WhateverIO()) - with pytest.raises(TypeError): - m.run() - migrate_tasks.assert_not_called() - - m.run('memory://foo', 'memory://bar') - migrate_tasks.assert_called() - - state = Mock() - state.count = 10 - state.strtotal = 30 - m.on_migrate_task(state, {'task': 'tasks.add', 'id': 'ID'}, None) - assert '10/30' in out.getvalue() diff --git a/t/unit/bin/test_multi.py b/t/unit/bin/test_multi.py index d56a17eaa54..e69de29bb2d 100644 --- a/t/unit/bin/test_multi.py +++ b/t/unit/bin/test_multi.py @@ -1,407 +0,0 @@ -import signal -import sys - -import pytest -from case import Mock, patch - -from celery.bin.multi import MultiTool -from celery.bin.multi import __doc__ as doc -from celery.bin.multi import main -from celery.five import WhateverIO - - -class test_MultiTool: - - def setup(self): - self.fh = WhateverIO() - self.env = {} - self.t = MultiTool(env=self.env, fh=self.fh) - self.t.cluster_from_argv = 
Mock(name='cluster_from_argv') - self.t._cluster_from_argv = Mock(name='cluster_from_argv') - self.t.Cluster = Mock(name='Cluster') - self.t.carp = Mock(name='.carp') - self.t.usage = Mock(name='.usage') - self.t.splash = Mock(name='.splash') - self.t.say = Mock(name='.say') - self.t.ok = Mock(name='.ok') - self.cluster = self.t.Cluster.return_value - - def _cluster_from_argv(argv): - p = self.t.OptionParser(argv) - p.parse() - return p, self.cluster - self.t.cluster_from_argv.return_value = self.cluster - self.t._cluster_from_argv.side_effect = _cluster_from_argv - - def test_findsig(self): - self.assert_sig_argument(['a', 'b', 'c', '-1'], 1) - self.assert_sig_argument(['--foo=1', '-9'], 9) - self.assert_sig_argument(['-INT'], signal.SIGINT) - self.assert_sig_argument([], signal.SIGTERM) - self.assert_sig_argument(['-s'], signal.SIGTERM) - self.assert_sig_argument(['-log'], signal.SIGTERM) - - def assert_sig_argument(self, args, expected): - p = self.t.OptionParser(args) - p.parse() - assert self.t._find_sig_argument(p) == expected - - def test_execute_from_commandline(self): - self.t.call_command = Mock(name='call_command') - self.t.execute_from_commandline( - 'multi start --verbose 10 --foo'.split(), - cmd='X', - ) - assert self.t.cmd == 'X' - assert self.t.prog_name == 'multi' - self.t.call_command.assert_called_with('start', ['10', '--foo']) - - def test_execute_from_commandline__arguments(self): - assert self.t.execute_from_commandline('multi'.split()) - assert self.t.execute_from_commandline('multi -bar'.split()) - - def test_call_command(self): - cmd = self.t.commands['foo'] = Mock(name='foo') - self.t.retcode = 303 - assert (self.t.call_command('foo', ['1', '2', '--foo=3']) is - cmd.return_value) - cmd.assert_called_with('1', '2', '--foo=3') - - def test_call_command__error(self): - assert self.t.call_command('asdqwewqe', ['1', '2']) == 1 - self.t.carp.assert_called() - - def test_handle_reserved_options(self): - assert self.t._handle_reserved_options( - ['a', '-q', 'b', '--no-color', 'c']) == ['a', 'b', 'c'] - - @patch('celery.apps.multi.os.mkdir', new=Mock()) - def test_range_prefix(self): - m = MultiTool() - range_prefix = 'worker' - workers_count = 2 - _opt_parser, nodes = m._nodes_from_argv([ - '{}'.format(workers_count), - '--range-prefix={}'.format(range_prefix)]) - for i, node in enumerate(nodes, start=1): - assert node.name.startswith(range_prefix + str(i)) - - @patch('celery.apps.multi.os.mkdir', new=Mock()) - def test_range_prefix_not_set(self): - m = MultiTool() - default_prefix = 'celery' - workers_count = 2 - _opt_parser, nodes = m._nodes_from_argv([ - '{}'.format(workers_count)]) - for i, node in enumerate(nodes, start=1): - assert node.name.startswith(default_prefix + str(i)) - - @patch('celery.apps.multi.os.mkdir', new=Mock()) - def test_range_prefix_not_used_in_named_range(self): - m = MultiTool() - range_prefix = 'worker' - _opt_parser, nodes = m._nodes_from_argv([ - 'a b c', - '--range-prefix={}'.format(range_prefix)]) - for i, node in enumerate(nodes, start=1): - assert not node.name.startswith(range_prefix) - - def test_start(self): - self.cluster.start.return_value = [0, 0, 1, 0] - assert self.t.start('10', '-A', 'proj') - self.t.splash.assert_called_with() - self.t.cluster_from_argv.assert_called_with(('10', '-A', 'proj')) - self.cluster.start.assert_called_with() - - def test_start__exitcodes(self): - self.cluster.start.return_value = [0, 0, 0] - assert not self.t.start('foo', 'bar', 'baz') - self.cluster.start.assert_called_with() - - 
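
The test_start and test_start__exitcodes cases (the latter continues just below) pin down the exit-code aggregation rule these removed tests expected from `multi start`: the command reports failure as soon as any node returns a non-zero exit code, and success only when every node returns 0. A one-line sketch of that rule, not the actual MultiTool implementation:

    def cluster_failed(node_retcodes):
        # [0, 0, 0] -> False (all nodes started); [0, 0, 1, 0] -> True
        return any(retcode != 0 for retcode in node_retcodes)
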
self.cluster.start.return_value = [0, 1, 0] - assert self.t.start('foo', 'bar', 'baz') - - def test_stop(self): - self.t.stop('10', '-A', 'proj', retry=3) - self.t.splash.assert_called_with() - self.t._cluster_from_argv.assert_called_with(('10', '-A', 'proj')) - self.cluster.stop.assert_called_with(retry=3, sig=signal.SIGTERM) - - def test_stopwait(self): - self.t.stopwait('10', '-A', 'proj', retry=3) - self.t.splash.assert_called_with() - self.t._cluster_from_argv.assert_called_with(('10', '-A', 'proj')) - self.cluster.stopwait.assert_called_with(retry=3, sig=signal.SIGTERM) - - def test_restart(self): - self.cluster.restart.return_value = [0, 0, 1, 0] - self.t.restart('10', '-A', 'proj') - self.t.splash.assert_called_with() - self.t._cluster_from_argv.assert_called_with(('10', '-A', 'proj')) - self.cluster.restart.assert_called_with(sig=signal.SIGTERM) - - def test_names(self): - self.t.cluster_from_argv.return_value = [Mock(), Mock()] - self.t.cluster_from_argv.return_value[0].name = 'x' - self.t.cluster_from_argv.return_value[1].name = 'y' - self.t.names('10', '-A', 'proj') - self.t.say.assert_called() - - def test_get(self): - node = self.cluster.find.return_value = Mock(name='node') - node.argv = ['A', 'B', 'C'] - assert (self.t.get('wanted', '10', '-A', 'proj') is - self.t.ok.return_value) - self.cluster.find.assert_called_with('wanted') - self.t.cluster_from_argv.assert_called_with(('10', '-A', 'proj')) - self.t.ok.assert_called_with(' '.join(node.argv)) - - def test_get__KeyError(self): - self.cluster.find.side_effect = KeyError() - assert self.t.get('wanted', '10', '-A', 'proj') - - def test_show(self): - nodes = self.t.cluster_from_argv.return_value = [ - Mock(name='n1'), - Mock(name='n2'), - ] - nodes[0].argv_with_executable = ['python', 'foo', 'bar'] - nodes[1].argv_with_executable = ['python', 'xuzzy', 'baz'] - - assert self.t.show('10', '-A', 'proj') is self.t.ok.return_value - self.t.ok.assert_called_with( - '\n'.join(' '.join(node.argv_with_executable) for node in nodes)) - - def test_kill(self): - self.t.kill('10', '-A', 'proj') - self.t.splash.assert_called_with() - self.t.cluster_from_argv.assert_called_with(('10', '-A', 'proj')) - self.cluster.kill.assert_called_with() - - def test_expand(self): - node1 = Mock(name='n1') - node2 = Mock(name='n2') - node1.expander.return_value = 'A' - node2.expander.return_value = 'B' - nodes = self.t.cluster_from_argv.return_value = [node1, node2] - assert self.t.expand('%p', '10') is self.t.ok.return_value - self.t.cluster_from_argv.assert_called_with(('10',)) - for node in nodes: - node.expander.assert_called_with('%p') - self.t.ok.assert_called_with('A\nB') - - def test_note(self): - self.t.quiet = True - self.t.note('foo') - self.t.say.assert_not_called() - self.t.quiet = False - self.t.note('foo') - self.t.say.assert_called_with('foo', newline=True) - - def test_splash(self): - x = MultiTool() - x.note = Mock() - x.nosplash = True - x.splash() - x.note.assert_not_called() - x.nosplash = False - x.splash() - x.note.assert_called() - - @patch('celery.apps.multi.os.mkdir') - def test_Cluster(self, mkdir_mock): - m = MultiTool() - c = m.cluster_from_argv(['A', 'B', 'C']) - assert c.env is m.env - assert c.cmd == 'celery worker' - assert c.on_stopping_preamble == m.on_stopping_preamble - assert c.on_send_signal == m.on_send_signal - assert c.on_still_waiting_for == m.on_still_waiting_for - assert c.on_still_waiting_progress == m.on_still_waiting_progress - assert c.on_still_waiting_end == m.on_still_waiting_end - assert c.on_node_start 
== m.on_node_start - assert c.on_node_restart == m.on_node_restart - assert c.on_node_shutdown_ok == m.on_node_shutdown_ok - assert c.on_node_status == m.on_node_status - assert c.on_node_signal_dead == m.on_node_signal_dead - assert c.on_node_signal == m.on_node_signal - assert c.on_node_down == m.on_node_down - assert c.on_child_spawn == m.on_child_spawn - assert c.on_child_signalled == m.on_child_signalled - assert c.on_child_failure == m.on_child_failure - - def test_on_stopping_preamble(self): - self.t.on_stopping_preamble([]) - - def test_on_send_signal(self): - self.t.on_send_signal(Mock(), Mock()) - - def test_on_still_waiting_for(self): - self.t.on_still_waiting_for([Mock(), Mock()]) - - def test_on_still_waiting_for__empty(self): - self.t.on_still_waiting_for([]) - - def test_on_still_waiting_progress(self): - self.t.on_still_waiting_progress([]) - - def test_on_still_waiting_end(self): - self.t.on_still_waiting_end() - - def test_on_node_signal_dead(self): - self.t.on_node_signal_dead(Mock()) - - def test_on_node_start(self): - self.t.on_node_start(Mock()) - - def test_on_node_restart(self): - self.t.on_node_restart(Mock()) - - def test_on_node_down(self): - self.t.on_node_down(Mock()) - - def test_on_node_shutdown_ok(self): - self.t.on_node_shutdown_ok(Mock()) - - def test_on_node_status__FAIL(self): - self.t.on_node_status(Mock(), 1) - self.t.say.assert_called_with(self.t.FAILED, newline=True) - - def test_on_node_status__OK(self): - self.t.on_node_status(Mock(), 0) - self.t.say.assert_called_with(self.t.OK, newline=True) - - def test_on_node_signal(self): - self.t.on_node_signal(Mock(), Mock()) - - def test_on_child_spawn(self): - self.t.on_child_spawn(Mock(), Mock(), Mock()) - - def test_on_child_signalled(self): - self.t.on_child_signalled(Mock(), Mock()) - - def test_on_child_failure(self): - self.t.on_child_failure(Mock(), Mock()) - - def test_constant_strings(self): - assert self.t.OK - assert self.t.DOWN - assert self.t.FAILED - - -class test_MultiTool_functional: - - def setup(self): - self.fh = WhateverIO() - self.env = {} - with patch('celery.apps.multi.os.mkdir'): - self.t = MultiTool(env=self.env, fh=self.fh) - - def test_note(self): - self.t.note('hello world') - assert self.fh.getvalue() == 'hello world\n' - - def test_note_quiet(self): - self.t.quiet = True - self.t.note('hello world') - assert not self.fh.getvalue() - - def test_carp(self): - self.t.say = Mock() - self.t.carp('foo') - self.t.say.assert_called_with('foo', True, self.t.stderr) - - def test_info(self): - self.t.verbose = True - self.t.info('hello info') - assert self.fh.getvalue() == 'hello info\n' - - def test_info_not_verbose(self): - self.t.verbose = False - self.t.info('hello info') - assert not self.fh.getvalue() - - def test_error(self): - self.t.carp = Mock() - self.t.usage = Mock() - assert self.t.error('foo') == 1 - self.t.carp.assert_called_with('foo') - self.t.usage.assert_called_with() - - self.t.carp = Mock() - assert self.t.error() == 1 - self.t.carp.assert_not_called() - - def test_nosplash(self): - self.t.nosplash = True - self.t.splash() - assert not self.fh.getvalue() - - def test_splash(self): - self.t.nosplash = False - self.t.splash() - assert 'celery multi' in self.fh.getvalue() - - def test_usage(self): - self.t.usage() - assert self.fh.getvalue() - - def test_help(self): - self.t.help([]) - assert doc in self.fh.getvalue() - - @patch('celery.apps.multi.os.makedirs') - def test_expand(self, makedirs_mock): - self.t.expand('foo%n', 'ask', 'klask', 'dask') - assert 
self.fh.getvalue() == 'fooask\nfooklask\nfoodask\n' - - @patch('celery.apps.multi.os.makedirs') - @patch('celery.apps.multi.gethostname') - def test_get(self, gethostname, makedirs_mock): - gethostname.return_value = 'e.com' - self.t.get('xuzzy@e.com', 'foo', 'bar', 'baz') - assert not self.fh.getvalue() - self.t.get('foo@e.com', 'foo', 'bar', 'baz') - assert self.fh.getvalue() - - @patch('celery.apps.multi.os.makedirs') - @patch('celery.apps.multi.gethostname') - def test_names(self, gethostname, makedirs_mock): - gethostname.return_value = 'e.com' - self.t.names('foo', 'bar', 'baz') - assert 'foo@e.com\nbar@e.com\nbaz@e.com' in self.fh.getvalue() - - def test_execute_from_commandline(self): - start = self.t.commands['start'] = Mock() - self.t.error = Mock() - self.t.execute_from_commandline(['multi', 'start', 'foo', 'bar']) - self.t.error.assert_not_called() - start.assert_called_with('foo', 'bar') - - self.t.error = Mock() - self.t.execute_from_commandline(['multi', 'frob', 'foo', 'bar']) - self.t.error.assert_called_with('Invalid command: frob') - - self.t.error = Mock() - self.t.execute_from_commandline(['multi']) - self.t.error.assert_called_with() - - self.t.error = Mock() - self.t.execute_from_commandline(['multi', '-foo']) - self.t.error.assert_called_with() - - self.t.execute_from_commandline( - ['multi', 'start', 'foo', - '--nosplash', '--quiet', '-q', '--verbose', '--no-color'], - ) - assert self.t.nosplash - assert self.t.quiet - assert self.t.verbose - assert self.t.no_color - - @patch('celery.bin.multi.MultiTool') - def test_main(self, MultiTool): - m = MultiTool.return_value = Mock() - with pytest.raises(SystemExit): - main() - m.execute_from_commandline.assert_called_with(sys.argv) diff --git a/t/unit/bin/test_purge.py b/t/unit/bin/test_purge.py deleted file mode 100644 index 974fca0ded3..00000000000 --- a/t/unit/bin/test_purge.py +++ /dev/null @@ -1,26 +0,0 @@ -from case import Mock - -from celery.bin.purge import purge -from celery.five import WhateverIO - - -class test_purge: - - def test_run(self): - out = WhateverIO() - a = purge(app=self.app, stdout=out) - a._purge = Mock(name='_purge') - a._purge.return_value = 0 - a.run(force=True) - assert 'No messages purged' in out.getvalue() - - a._purge.return_value = 100 - a.run(force=True) - assert '100 messages' in out.getvalue() - - a.out = Mock(name='out') - a.ask = Mock(name='ask') - a.run(force=False) - a.ask.assert_called_with(a.warn_prompt, ('yes', 'no'), 'no') - a.ask.return_value = 'yes' - a.run(force=False) diff --git a/t/unit/bin/test_report.py b/t/unit/bin/test_report.py deleted file mode 100644 index 9967e63e2af..00000000000 --- a/t/unit/bin/test_report.py +++ /dev/null @@ -1,27 +0,0 @@ -"""Tests for ``celery report`` command.""" - -from case import Mock, call, patch - -from celery.bin.celery import report -from celery.five import WhateverIO - - -class test_report: - """Test report command class.""" - - def test_run(self): - out = WhateverIO() - with patch( - 'celery.loaders.base.BaseLoader.import_default_modules' - ) as import_default_modules: - with patch( - 'celery.app.base.Celery.bugreport' - ) as bugreport: - # Method call order mock obj - mco = Mock() - mco.attach_mock(import_default_modules, 'idm') - mco.attach_mock(bugreport, 'br') - a = report(app=self.app, stdout=out) - a.run() - calls = [call.idm(), call.br()] - mco.assert_has_calls(calls) diff --git a/t/unit/bin/test_result.py b/t/unit/bin/test_result.py deleted file mode 100644 index 7612fca33b3..00000000000 --- a/t/unit/bin/test_result.py +++ 
/dev/null @@ -1,30 +0,0 @@ -from case import patch - -from celery.bin.result import result -from celery.five import WhateverIO - - -class test_result: - - def setup(self): - - @self.app.task(shared=False) - def add(x, y): - return x + y - self.add = add - - def test_run(self): - with patch('celery.result.AsyncResult.get') as get: - out = WhateverIO() - r = result(app=self.app, stdout=out) - get.return_value = 'Jerry' - r.run('id') - assert 'Jerry' in out.getvalue() - - get.return_value = 'Elaine' - r.run('id', task=self.add.name) - assert 'Elaine' in out.getvalue() - - with patch('celery.result.AsyncResult.traceback') as tb: - r.run('id', task=self.add.name, traceback=True) - assert str(tb) in out.getvalue() diff --git a/t/unit/bin/test_upgrade.py b/t/unit/bin/test_upgrade.py deleted file mode 100644 index d521c56c82d..00000000000 --- a/t/unit/bin/test_upgrade.py +++ /dev/null @@ -1,20 +0,0 @@ -"""Tests for ``celery upgrade`` command.""" - -import pytest - -from celery.bin.celery import upgrade -from celery.five import WhateverIO - - -class test_upgrade: - """Test upgrade command class.""" - - def test_run(self): - out = WhateverIO() - a = upgrade(app=self.app, stdout=out) - - with pytest.raises(a.UsageError, match=r'missing upgrade type'): - a.run() - - with pytest.raises(a.UsageError, match=r'missing settings filename'): - a.run('settings') diff --git a/t/unit/bin/test_worker.py b/t/unit/bin/test_worker.py deleted file mode 100644 index e4aea6d3358..00000000000 --- a/t/unit/bin/test_worker.py +++ /dev/null @@ -1,695 +0,0 @@ -import logging -import os -import signal -import sys - -import pytest -from billiard.process import current_process -from case import Mock, mock, patch, skip -from kombu import Exchange, Queue - -from celery import platforms, signals -from celery.app import trace -from celery.apps import worker as cd -from celery.bin.worker import main as worker_main -from celery.bin.worker import worker -from celery.exceptions import (ImproperlyConfigured, WorkerShutdown, - WorkerTerminate) -from celery.five import reload as reload_module -from celery.platforms import EX_FAILURE, EX_OK -from celery.worker import state - - -@pytest.fixture(autouse=True) -def reset_worker_optimizations(): - yield - trace.reset_worker_optimizations() - - -class Worker(cd.Worker): - redirect_stdouts = False - - def start(self, *args, **kwargs): - self.on_start() - - -class test_Worker: - Worker = Worker - - def test_queues_string(self): - with mock.stdouts(): - w = self.app.Worker() - w.setup_queues('foo,bar,baz') - assert 'foo' in self.app.amqp.queues - - def test_cpu_count(self): - with mock.stdouts(): - with patch('celery.worker.worker.cpu_count') as cpu_count: - cpu_count.side_effect = NotImplementedError() - w = self.app.Worker(concurrency=None) - assert w.concurrency == 2 - w = self.app.Worker(concurrency=5) - assert w.concurrency == 5 - - def test_windows_B_option(self): - with mock.stdouts(): - self.app.IS_WINDOWS = True - with pytest.raises(SystemExit): - worker(app=self.app).run(beat=True) - - def test_setup_concurrency_very_early(self): - x = worker() - x.run = Mock() - with pytest.raises(ImportError): - x.execute_from_commandline(['worker', '-P', 'xyzybox']) - - def test_run_from_argv_basic(self): - x = worker(app=self.app) - x.run = Mock() - x.maybe_detach = Mock() - - def run(*args, **kwargs): - pass - - x.run = run - x.run_from_argv('celery', []) - x.maybe_detach.assert_called() - - def test_maybe_detach(self): - x = worker(app=self.app) - with patch('celery.bin.worker.detached_celeryd') 
as detached: - x.maybe_detach([]) - detached.assert_not_called() - with pytest.raises(SystemExit): - x.maybe_detach(['--detach']) - detached.assert_called() - - def test_invalid_loglevel_gives_error(self): - with mock.stdouts(): - x = worker(app=self.app) - with pytest.raises(SystemExit): - x.run(loglevel='GRIM_REAPER') - - def test_no_loglevel(self): - self.app.Worker = Mock() - worker(app=self.app).run(loglevel=None) - - def test_tasklist(self): - worker = self.app.Worker() - assert worker.app.tasks - assert worker.app.finalized - assert worker.tasklist(include_builtins=True) - worker.tasklist(include_builtins=False) - - def test_extra_info(self): - worker = self.app.Worker() - worker.loglevel = logging.WARNING - assert not worker.extra_info() - worker.loglevel = logging.INFO - assert worker.extra_info() - - def test_loglevel_string(self): - with mock.stdouts(): - worker = self.Worker(app=self.app, loglevel='INFO') - assert worker.loglevel == logging.INFO - - def test_run_worker(self, patching): - handlers = {} - - class Signals(platforms.Signals): - - def __setitem__(self, sig, handler): - handlers[sig] = handler - - patching.setattr('celery.platforms.signals', Signals()) - with mock.stdouts(): - w = self.Worker(app=self.app) - w._isatty = False - w.on_start() - for sig in 'SIGINT', 'SIGHUP', 'SIGTERM': - assert sig in handlers - - handlers.clear() - w = self.Worker(app=self.app) - w._isatty = True - w.on_start() - for sig in 'SIGINT', 'SIGTERM': - assert sig in handlers - assert 'SIGHUP' not in handlers - - def test_startup_info(self): - with mock.stdouts(): - worker = self.Worker(app=self.app) - worker.on_start() - assert worker.startup_info() - worker.loglevel = logging.DEBUG - assert worker.startup_info() - worker.loglevel = logging.INFO - assert worker.startup_info() - worker.autoscale = 13, 10 - assert worker.startup_info() - - prev_loader = self.app.loader - worker = self.Worker( - app=self.app, - queues='foo,bar,baz,xuzzy,do,re,mi', - ) - with patch('celery.apps.worker.qualname') as qualname: - qualname.return_value = 'acme.backed_beans.Loader' - assert worker.startup_info() - - with patch('celery.apps.worker.qualname') as qualname: - qualname.return_value = 'celery.loaders.Loader' - assert worker.startup_info() - - from celery.loaders.app import AppLoader - self.app.loader = AppLoader(app=self.app) - assert worker.startup_info() - - self.app.loader = prev_loader - worker.task_events = True - assert worker.startup_info() - - # test when there are too few output lines - # to draft the ascii art onto - prev, cd.ARTLINES = cd.ARTLINES, ['the quick brown fox'] - try: - assert worker.startup_info() - finally: - cd.ARTLINES = prev - - def test_run(self): - with mock.stdouts(): - self.Worker(app=self.app).on_start() - self.Worker(app=self.app, purge=True).on_start() - worker = self.Worker(app=self.app) - worker.on_start() - - def test_purge_messages(self): - with mock.stdouts(): - self.Worker(app=self.app).purge_messages() - - def test_init_queues(self): - with mock.stdouts(): - app = self.app - c = app.conf - app.amqp.queues = app.amqp.Queues({ - 'celery': { - 'exchange': 'celery', - 'routing_key': 'celery', - }, - 'video': { - 'exchange': 'video', - 'routing_key': 'video', - }, - }) - worker = self.Worker(app=self.app) - worker.setup_queues(['video']) - assert 'video' in app.amqp.queues - assert 'video' in app.amqp.queues.consume_from - assert 'celery' in app.amqp.queues - assert 'celery' not in app.amqp.queues.consume_from - - c.task_create_missing_queues = False - del 
(app.amqp.queues)
-        with pytest.raises(ImproperlyConfigured):
-            self.Worker(app=self.app).setup_queues(['image'])
-        del (app.amqp.queues)
-        c.task_create_missing_queues = True
-        worker = self.Worker(app=self.app)
-        worker.setup_queues(['image'])
-        assert 'image' in app.amqp.queues.consume_from
-        assert app.amqp.queues['image'] == Queue(
-            'image', Exchange('image'),
-            routing_key='image',
-        )
-
-    def test_autoscale_argument(self):
-        with mock.stdouts():
-            worker1 = self.Worker(app=self.app, autoscale='10,3')
-            assert worker1.autoscale == [10, 3]
-            worker2 = self.Worker(app=self.app, autoscale='10')
-            assert worker2.autoscale == [10, 0]
-
-    def test_include_argument(self):
-        worker1 = self.Worker(app=self.app, include='os')
-        assert worker1.include == ['os']
-        worker2 = self.Worker(app=self.app,
-                              include='os,sys')
-        assert worker2.include == ['os', 'sys']
-        self.Worker(app=self.app, include=['os', 'sys'])
-
-    def test_unknown_loglevel(self):
-        with mock.stdouts():
-            with pytest.raises(SystemExit):
-                worker(app=self.app).run(loglevel='ALIEN')
-            worker1 = self.Worker(app=self.app, loglevel=0xFFFF)
-            assert worker1.loglevel == 0xFFFF
-
-    @patch('os._exit')
-    @skip.if_win32()
-    def test_warns_if_running_as_privileged_user(self, _exit, patching):
-        getuid = patching('os.getuid')
-
-        with mock.stdouts() as (_, stderr):
-            getuid.return_value = 0
-            self.app.conf.accept_content = ['pickle']
-            worker = self.Worker(app=self.app)
-            worker.on_start()
-            _exit.assert_called_with(1)
-            patching.setattr('celery.platforms.C_FORCE_ROOT', True)
-            worker = self.Worker(app=self.app)
-            worker.on_start()
-            assert 'a very bad idea' in stderr.getvalue()
-            patching.setattr('celery.platforms.C_FORCE_ROOT', False)
-            self.app.conf.accept_content = ['json']
-            worker = self.Worker(app=self.app)
-            worker.on_start()
-            assert 'superuser' in stderr.getvalue()
-
-    def test_redirect_stdouts(self):
-        with mock.stdouts():
-            self.Worker(app=self.app, redirect_stdouts=False)
-            with pytest.raises(AttributeError):
-                sys.stdout.logger
-
-    def test_on_start_custom_logging(self):
-        with mock.stdouts():
-            self.app.log.redirect_stdouts = Mock()
-            worker = self.Worker(app=self.app, redirect_stoutds=True)
-            worker._custom_logging = True
-            worker.on_start()
-            self.app.log.redirect_stdouts.assert_not_called()
-
-    def test_setup_logging_no_color(self):
-        worker = self.Worker(
-            app=self.app, redirect_stdouts=False, no_color=True,
-        )
-        prev, self.app.log.setup = self.app.log.setup, Mock()
-        try:
-            worker.setup_logging()
-            assert not self.app.log.setup.call_args[1]['colorize']
-        finally:
-            self.app.log.setup = prev
-
-    def test_startup_info_pool_is_str(self):
-        with mock.stdouts():
-            worker = self.Worker(app=self.app, redirect_stdouts=False)
-            worker.pool_cls = 'foo'
-            worker.startup_info()
-
-    def test_redirect_stdouts_already_handled(self):
-        logging_setup = [False]
-
-        @signals.setup_logging.connect
-        def on_logging_setup(**kwargs):
-            logging_setup[0] = True
-
-        try:
-            worker = self.Worker(app=self.app, redirect_stdouts=False)
-            worker.app.log.already_setup = False
-            worker.setup_logging()
-            assert logging_setup[0]
-            with pytest.raises(AttributeError):
-                sys.stdout.logger
-        finally:
-            signals.setup_logging.disconnect(on_logging_setup)
-
-    def test_platform_tweaks_macOS(self):
-
-        class macOSWorker(Worker):
-            proxy_workaround_installed = False
-
-            def macOS_proxy_detection_workaround(self):
-                self.proxy_workaround_installed = True
-
-        with mock.stdouts():
-            worker = macOSWorker(app=self.app, redirect_stdouts=False)
-
-            def install_HUP_nosupport(controller):
-                controller.hup_not_supported_installed = True
-
-            class Controller:
-                pass
-
-            prev = cd.install_HUP_not_supported_handler
-            cd.install_HUP_not_supported_handler = install_HUP_nosupport
-            try:
-                worker.app.IS_macOS = True
-                controller = Controller()
-                worker.install_platform_tweaks(controller)
-                assert controller.hup_not_supported_installed
-                assert worker.proxy_workaround_installed
-            finally:
-                cd.install_HUP_not_supported_handler = prev
-
-    def test_general_platform_tweaks(self):
-
-        restart_worker_handler_installed = [False]
-
-        def install_worker_restart_handler(worker):
-            restart_worker_handler_installed[0] = True
-
-        class Controller:
-            pass
-
-        with mock.stdouts():
-            prev = cd.install_worker_restart_handler
-            cd.install_worker_restart_handler = install_worker_restart_handler
-            try:
-                worker = self.Worker(app=self.app)
-                worker.app.IS_macOS = False
-                worker.install_platform_tweaks(Controller())
-                assert restart_worker_handler_installed[0]
-            finally:
-                cd.install_worker_restart_handler = prev
-
-    def test_on_consumer_ready(self):
-        worker_ready_sent = [False]
-
-        @signals.worker_ready.connect
-        def on_worker_ready(**kwargs):
-            worker_ready_sent[0] = True
-
-        with mock.stdouts():
-            self.Worker(app=self.app).on_consumer_ready(object())
-            assert worker_ready_sent[0]
-
-    def test_disable_task_events(self):
-        worker = self.Worker(app=self.app, task_events=False,
-                             without_gossip=True,
-                             without_heartbeat=True)
-        consumer_steps = worker.blueprint.steps['celery.worker.components.Consumer'].obj.steps
-        assert not any(True for step in consumer_steps
-                       if step.alias == 'Events')
-
-    def test_enable_task_events(self):
-        worker = self.Worker(app=self.app, task_events=True)
-        consumer_steps = worker.blueprint.steps['celery.worker.components.Consumer'].obj.steps
-        assert any(True for step in consumer_steps
-                   if step.alias == 'Events')
-
-
-@mock.stdouts
-class test_funs:
-
-    def test_active_thread_count(self):
-        assert cd.active_thread_count()
-
-    @skip.unless_module('setproctitle')
-    def test_set_process_status(self):
-        worker = Worker(app=self.app, hostname='xyzza')
-        prev1, sys.argv = sys.argv, ['Arg0']
-        try:
-            st = worker.set_process_status('Running')
-            assert 'celeryd' in st
-            assert 'xyzza' in st
-            assert 'Running' in st
-            prev2, sys.argv = sys.argv, ['Arg0', 'Arg1']
-            try:
-                st = worker.set_process_status('Running')
-                assert 'celeryd' in st
-                assert 'xyzza' in st
-                assert 'Running' in st
-                assert 'Arg1' in st
-            finally:
-                sys.argv = prev2
-        finally:
-            sys.argv = prev1
-
-    def test_parse_options(self):
-        cmd = worker()
-        cmd.app = self.app
-        opts, args = cmd.parse_options('worker', ['--concurrency=512',
-                                                  '--heartbeat-interval=10'])
-        assert opts['concurrency'] == 512
-        assert opts['heartbeat_interval'] == 10
-
-    def test_main(self):
-        p, cd.Worker = cd.Worker, Worker
-        s, sys.argv = sys.argv, ['worker', '--discard']
-        try:
-            worker_main(app=self.app)
-        finally:
-            cd.Worker = p
-            sys.argv = s
-
-
-@mock.stdouts
-class test_signal_handlers:
-    class _Worker:
-        hostname = 'foo'
-        stopped = False
-        terminated = False
-
-        def stop(self, in_sighandler=False):
-            self.stopped = True
-
-        def terminate(self, in_sighandler=False):
-            self.terminated = True
-
-    def psig(self, fun, *args, **kwargs):
-        handlers = {}
-
-        class Signals(platforms.Signals):
-            def __setitem__(self, sig, handler):
-                handlers[sig] = handler
-
-        p, platforms.signals = platforms.signals, Signals()
-        try:
-            fun(*args, **kwargs)
-            return handlers
-        finally:
-            platforms.signals = p
-
-    def test_worker_int_handler(self):
-        worker = self._Worker()
-        handlers = self.psig(cd.install_worker_int_handler, worker)
-        next_handlers = {}
-        state.should_stop = None
-        state.should_terminate = None
-
-        class Signals(platforms.Signals):
-
-            def __setitem__(self, sig, handler):
-                next_handlers[sig] = handler
-
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 3
-            p, platforms.signals = platforms.signals, Signals()
-            try:
-                handlers['SIGINT']('SIGINT', object())
-                assert state.should_stop
-                assert state.should_stop == EX_FAILURE
-            finally:
-                platforms.signals = p
-                state.should_stop = None
-
-        try:
-            next_handlers['SIGINT']('SIGINT', object())
-            assert state.should_terminate
-            assert state.should_terminate == EX_FAILURE
-        finally:
-            state.should_terminate = None
-
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 1
-            p, platforms.signals = platforms.signals, Signals()
-            try:
-                with pytest.raises(WorkerShutdown):
-                    handlers['SIGINT']('SIGINT', object())
-            finally:
-                platforms.signals = p
-
-            with pytest.raises(WorkerTerminate):
-                next_handlers['SIGINT']('SIGINT', object())
-
-    @skip.unless_module('multiprocessing')
-    def test_worker_int_handler_only_stop_MainProcess(self):
-        process = current_process()
-        name, process.name = process.name, 'OtherProcess'
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 3
-            try:
-                worker = self._Worker()
-                handlers = self.psig(cd.install_worker_int_handler, worker)
-                handlers['SIGINT']('SIGINT', object())
-                assert state.should_stop
-            finally:
-                process.name = name
-                state.should_stop = None
-
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 1
-            try:
-                worker = self._Worker()
-                handlers = self.psig(cd.install_worker_int_handler, worker)
-                with pytest.raises(WorkerShutdown):
-                    handlers['SIGINT']('SIGINT', object())
-            finally:
-                process.name = name
-                state.should_stop = None
-
-    def test_install_HUP_not_supported_handler(self):
-        worker = self._Worker()
-        handlers = self.psig(cd.install_HUP_not_supported_handler, worker)
-        handlers['SIGHUP']('SIGHUP', object())
-
-    @skip.unless_module('multiprocessing')
-    def test_worker_term_hard_handler_only_stop_MainProcess(self):
-        process = current_process()
-        name, process.name = process.name, 'OtherProcess'
-        try:
-            with patch('celery.apps.worker.active_thread_count') as c:
-                c.return_value = 3
-                worker = self._Worker()
-                handlers = self.psig(
-                    cd.install_worker_term_hard_handler, worker)
-                try:
-                    handlers['SIGQUIT']('SIGQUIT', object())
-                    assert state.should_terminate
-                finally:
-                    state.should_terminate = None
-            with patch('celery.apps.worker.active_thread_count') as c:
-                c.return_value = 1
-                worker = self._Worker()
-                handlers = self.psig(
-                    cd.install_worker_term_hard_handler, worker)
-                try:
-                    with pytest.raises(WorkerTerminate):
-                        handlers['SIGQUIT']('SIGQUIT', object())
-                finally:
-                    state.should_terminate = None
-        finally:
-            process.name = name
-
-    def test_worker_term_handler_when_threads(self):
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 3
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_term_handler, worker)
-            try:
-                handlers['SIGTERM']('SIGTERM', object())
-                assert state.should_stop == EX_OK
-            finally:
-                state.should_stop = None
-
-    def test_worker_term_handler_when_single_thread(self):
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 1
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_term_handler, worker)
-            try:
-                with pytest.raises(WorkerShutdown):
-                    handlers['SIGTERM']('SIGTERM', object())
-            finally:
-                state.should_stop = None
-
-    @patch('sys.__stderr__')
-    @skip.if_pypy()
-    @skip.if_jython()
-    def test_worker_cry_handler(self, stderr):
-        handlers = self.psig(cd.install_cry_handler)
-        assert handlers['SIGUSR1']('SIGUSR1', object()) is None
-        stderr.write.assert_called()
-
-    @skip.unless_module('multiprocessing')
-    def test_worker_term_handler_only_stop_MainProcess(self):
-        process = current_process()
-        name, process.name = process.name, 'OtherProcess'
-        try:
-            with patch('celery.apps.worker.active_thread_count') as c:
-                c.return_value = 3
-                worker = self._Worker()
-                handlers = self.psig(cd.install_worker_term_handler, worker)
-                handlers['SIGTERM']('SIGTERM', object())
-                assert state.should_stop == EX_OK
-            with patch('celery.apps.worker.active_thread_count') as c:
-                c.return_value = 1
-                worker = self._Worker()
-                handlers = self.psig(cd.install_worker_term_handler, worker)
-                with pytest.raises(WorkerShutdown):
-                    handlers['SIGTERM']('SIGTERM', object())
-        finally:
-            process.name = name
-            state.should_stop = None
-
-    @skip.unless_symbol('os.execv')
-    @patch('celery.platforms.close_open_fds')
-    @patch('atexit.register')
-    @patch('os.close')
-    def test_worker_restart_handler(self, _close, register, close_open):
-        argv = []
-
-        def _execv(*args):
-            argv.extend(args)
-
-        execv, os.execv = os.execv, _execv
-        try:
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_restart_handler, worker)
-            handlers['SIGHUP']('SIGHUP', object())
-            assert state.should_stop == EX_OK
-            register.assert_called()
-            callback = register.call_args[0][0]
-            callback()
-            assert argv
-        finally:
-            os.execv = execv
-            state.should_stop = None
-
-    def test_worker_term_hard_handler_when_threaded(self):
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 3
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_term_hard_handler, worker)
-            try:
-                handlers['SIGQUIT']('SIGQUIT', object())
-                assert state.should_terminate
-            finally:
-                state.should_terminate = None
-
-    def test_worker_term_hard_handler_when_single_threaded(self):
-        with patch('celery.apps.worker.active_thread_count') as c:
-            c.return_value = 1
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_term_hard_handler, worker)
-            with pytest.raises(WorkerTerminate):
-                handlers['SIGQUIT']('SIGQUIT', object())
-
-    def test_send_worker_shutting_down_signal(self):
-        with patch('celery.apps.worker.signals.worker_shutting_down') as wsd:
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_term_handler, worker)
-            try:
-                with pytest.raises(WorkerShutdown):
-                    handlers['SIGTERM']('SIGTERM', object())
-            finally:
-                state.should_stop = None
-            wsd.send.assert_called_with(
-                sender='foo', sig='SIGTERM', how='Warm', exitcode=0,
-            )
-
-    @pytest.mark.xfail(
-        not hasattr(signal, "SIGQUIT"),
-        reason="Windows does not support SIGQUIT",
-        raises=AttributeError,
-    )
-    @patch.dict(os.environ, {"REMAP_SIGTERM": "SIGQUIT"})
-    def test_send_worker_shutting_down_signal_with_remap_sigquit(self):
-        with patch('celery.apps.worker.signals.worker_shutting_down') as wsd:
-            from billiard import common
-
-            reload_module(common)
-            reload_module(cd)
-
-            worker = self._Worker()
-            handlers = self.psig(cd.install_worker_term_handler, worker)
-            try:
-                with pytest.raises(WorkerTerminate):
-                    handlers['SIGTERM']('SIGTERM', object())
-            finally:
-                state.should_stop = None
-            wsd.send.assert_called_with(
-                sender='foo', sig='SIGTERM', how='Cold', exitcode=1,
-            )