
When running autogpt in pytest, we shouldn't wait for the logs to be written #4199

Closed
1 task done
waynehamadi opened this issue May 14, 2023 · 7 comments
Labels
enhancement (New feature or request), Stale, testing

Comments

@waynehamadi
Contributor

Duplicates

  • I have searched the existing issues

Summary 💡

If you run python -m pytest -s tests/integration/goal_oriented/test_write_file.py,
you will see the test takes 6 seconds, although everything is recorded with cassettes.
So why is it taking 6 seconds?
Because we wait for the logs to be written.

Please find a trick to solve this; the fewer lines of code needed, the better.
And please open a PR once you find it.

Examples 🌈

No response

Motivation 🔦

No response

@waynehamadi
Contributor Author

We need help with this one; it's a high-value item.

@k-boikov
Contributor

Related #4437

@Torantulino
Member

One possible solution to your issue could be to make use of Python's asynchronous features. In essence, we could leverage asyncio to write the logs asynchronously, so the test doesn't have to wait for the log writing operation to complete.

Here's how you could do it:

log_cycle.py

import json
import os
import asyncio
from typing import Any, Dict, Union

from autogpt.logs import logger

# ... rest of your existing code ...

class LogCycleHandler:
    # ... rest of your existing code ...

    async def log_cycle(
        self,
        ai_name: str,
        created_at: str,
        cycle_count: int,
        data: Union[Dict[str, Any], Any],
        file_name: str,
    ) -> None:
        """
        Log cycle data to a JSON file.

        Args:
            data (Any): The data to be logged.
            file_name (str): The name of the file to save the logged data.
        """
        nested_folder_path = self.create_nested_directory(
            ai_name, created_at, cycle_count
        )

        json_data = json.dumps(data, ensure_ascii=False, indent=4)
        log_file_path = os.path.join(
            nested_folder_path, f"{self.log_count_within_cycle}_{file_name}"
        )

        # Schedule logger.log_json as a background task so this call returns
        # immediately (assumes logger.log_json is made a coroutine function).
        asyncio.create_task(logger.log_json(json_data, log_file_path))
        self.log_count_within_cycle += 1

Note that this is a basic solution, and you might need to handle exceptions and ensure the asyncio task is completed if your program exits before the task is finished.

Please keep in mind that you may need to modify the logger.log_json function to support asynchronous operations. Also, this approach should be used carefully, as it could cause issues with file I/O if you write to the same file concurrently. In your case, since a separate log file is created for each entry, this should not be a problem.
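For illustration, here is a minimal sketch of one way logger.log_json could be made awaitable, assuming it currently does a plain blocking file write of the JSON string to the given path (the function below is a hypothetical stand-in, not the actual implementation). asyncio.to_thread (Python 3.9+) runs the blocking write in a worker thread so the event loop is not held up by disk I/O:

import asyncio


async def log_json(json_data: str, file_path: str) -> None:
    """Write the JSON payload to disk without blocking the event loop."""

    def _write() -> None:
        # The original, blocking file write (assumed behaviour).
        with open(file_path, "w", encoding="utf-8") as f:
            f.write(json_data)

    # Offload the blocking write to a worker thread.
    await asyncio.to_thread(_write)

With log_json defined as a coroutine function, the asyncio.create_task call above can schedule the write in the background; create_task raises a TypeError if it is handed the return value of a plain synchronous function instead of a coroutine.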

Once you have made the changes, you can create a new branch and commit your changes. Then you can create a Pull Request for the changes to be reviewed and possibly merged into the main codebase.

Let me know if this helps or if you have further questions!


This response was generated by Git-Aid and may not be accurate or appropriate. The author of this repository and the creator of the AI model assume no responsibility or liability for any consequences arising from the use of the information provided in this response. 🤖

@collijk
Contributor

collijk commented May 30, 2023

Do we need the logs in the test? If not, you could monkeypatch logging to do nothing with the output.
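As a rough sketch of that idea, a conftest.py fixture could replace the JSON log writer with a no-op; the fixture name is illustrative, and this assumes the slow path goes through logger.log_json as in the snippet above:

# conftest.py (sketch)
import pytest

from autogpt.logs import logger


@pytest.fixture(autouse=True)
def no_log_writes(monkeypatch):
    # Swap the log writer for a no-op so the test doesn't wait on log output.
    monkeypatch.setattr(logger, "log_json", lambda *args, **kwargs: None)

If profiling shows the test is actually waiting on a different logger method, that method can be stubbed out the same way.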

@waynehamadi
Contributor Author

#4486

@github-actions
Contributor

github-actions bot commented Sep 6, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions bot added the Stale label Sep 6, 2023
@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

github-actions bot closed this as not planned Sep 18, 2023