When running autogpt in pytest, we shouldn't wait for the logs to be written #4199
Comments
We need help with this one; it's a high-value item.
Related: #4437
One possible solution to your issue could be to make use of Python's asynchronous features. In essence, we could leverage asyncio to write the logs asynchronously, so the test doesn't have to wait for the log writing operation to complete. Here's how you could do it:

log_cycle.py

```python
import json
import os
import asyncio
from typing import Any, Dict, Union

from autogpt.logs import logger

# ... rest of your existing code ...


class LogCycleHandler:
    # ... rest of your existing code ...

    async def log_cycle(
        self,
        ai_name: str,
        created_at: str,
        cycle_count: int,
        data: Union[Dict[str, Any], Any],
        file_name: str,
    ) -> None:
        """
        Log cycle data to a JSON file.

        Args:
            data (Any): The data to be logged.
            file_name (str): The name of the file to save the logged data.
        """
        nested_folder_path = self.create_nested_directory(
            ai_name, created_at, cycle_count
        )

        json_data = json.dumps(data, ensure_ascii=False, indent=4)
        log_file_path = os.path.join(
            nested_folder_path, f"{self.log_count_within_cycle}_{file_name}"
        )

        # logger.log_json is a regular blocking call, so run it in a worker
        # thread and schedule it on the event loop instead of waiting for it
        asyncio.create_task(
            asyncio.to_thread(logger.log_json, json_data, log_file_path)
        )

        self.log_count_within_cycle += 1
```

Note that this is a basic solution, and you might need to handle exceptions and ensure the asyncio task is completed if your program exits before the task is finished. Please keep in mind that modifying the logging code in this way may affect other parts of the codebase that call it. Once you have made the changes, you can create a new branch and commit your changes. Then you can create a Pull Request for the changes to be reviewed and possibly merged into the main codebase. Let me know if this helps or if you have further questions!

This response was generated by Git-Aid and may not be accurate or appropriate. The author of this repository and the creator of the AI model assume no responsibility or liability for any consequences arising from the use of the information provided in this response. 🤖
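For what it's worth, here is a minimal sketch of the "make sure the scheduled task finishes" part mentioned above. The `schedule_log_write` and `flush_pending_logs` helpers are hypothetical names used for illustration, not existing AutoGPT functions; the idea is simply to keep references to the scheduled tasks and await them once before shutdown.

```python
import asyncio

# Keep references to scheduled log-write tasks so they can be awaited at exit.
# (Hypothetical helpers, not part of AutoGPT.)
_pending_log_tasks: set = set()


def schedule_log_write(write_func, *args) -> None:
    """Run a blocking log write in a worker thread without waiting for it."""
    task = asyncio.create_task(asyncio.to_thread(write_func, *args))
    _pending_log_tasks.add(task)
    task.add_done_callback(_pending_log_tasks.discard)


async def flush_pending_logs() -> None:
    """Await all outstanding log writes; call once during shutdown."""
    if _pending_log_tasks:
        await asyncio.gather(*_pending_log_tasks, return_exceptions=True)
```

Whatever owns the event loop would need to await `flush_pending_logs()` before closing, otherwise writes scheduled near the end of a run could be dropped.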
Do we need the logs in the test? If not, you could monkeypatch logging to do nothing with the output.
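For reference, a minimal sketch of that monkeypatch idea as an autouse pytest fixture. It assumes the slow path is the cycle-file writing in log_cycle.py discussed above and that the handler lives at `autogpt.log_cycle.log_cycle`; if the exact module path differs, or the time is actually spent elsewhere in the logging stack (e.g. console output), the patch target would need to change accordingly.

```python
# tests/conftest.py (sketch)
import pytest

from autogpt.log_cycle.log_cycle import LogCycleHandler


@pytest.fixture(autouse=True)
def disable_log_cycle(monkeypatch):
    """Replace cycle logging with a no-op so tests don't wait on file I/O."""
    monkeypatch.setattr(
        LogCycleHandler, "log_cycle", lambda self, *args, **kwargs: None
    )
```

Placed in tests/conftest.py, the fixture applies to every test automatically and needs no changes to the production code.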
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity. |
Duplicates
Summary 💡
If you run `python -m pytest -s tests/integration/goal_oriented/test_write_file.py`
you will see the test takes 6 seconds, although everything is recorded with cassettes.
So why is it taking 6 seconds?
Because we wait for the logs.
Please find a trick to solve that; the fewer lines of code needed, the better.
And please make a PR once you find it.
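If it helps to confirm the timing before and after a fix, pytest's built-in `--durations` flag reports the slowest tests along with their setup and teardown phases:

```
python -m pytest -s tests/integration/goal_oriented/test_write_file.py --durations=10
```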
Examples 🌈
No response
Motivation 🔦
No response