
Challenges should be using the automatic AI Config generation #3883

Closed · 1 task done
waynehamadi opened this issue May 6, 2023 · 3 comments

waynehamadi (Contributor) commented May 6, 2023

Duplicates

  • I have searched the existing issues

Summary 💡

Our challenges should use the default Auto-GPT mode. In the default mode, we generate the AI Config automatically.
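For reference, here is a minimal sketch of the automatic mode in isolation, assuming the generate_aiconfig_automatic helper lives in autogpt/setup.py and that AIConfig exposes ai_name, ai_role, and ai_goals (the exact import path may differ from the current codebase):

from autogpt.setup import generate_aiconfig_automatic

# Hypothetical prompt; in a real test this is the user_desire discussed below.
user_desire = "browse a website and report back one specific piece of information"

# Makes an LLM call and returns a fully populated AIConfig.
ai_config = generate_aiconfig_automatic(user_desire)

print(ai_config.ai_name)   # generated agent name
print(ai_config.ai_role)   # generated role description
print(ai_config.ai_goals)  # generated list of goals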

Pick one test, for example test_browse_website, and attempt to generate the AI Config automatically at the beginning of the test. You will need to define a prompt that makes the test pass.
Replace the browser_agent fixture located in tests/integration/agent_factory.py with this:

# Imports below assume the Auto-GPT module layout at the time of this issue;
# match them to what tests/integration/agent_factory.py already imports.
import pytest

from autogpt.agent import Agent
from autogpt.commands.command import CommandRegistry
from autogpt.memory import NoMemory
from autogpt.prompts.prompt import DEFAULT_TRIGGERING_PROMPT
from autogpt.setup import generate_aiconfig_automatic
from autogpt.workspace import Workspace


@pytest.fixture
def browser_agent(agent_test_config, memory_none: NoMemory, workspace: Workspace):
    command_registry = CommandRegistry()
    command_registry.import_commands("autogpt.commands.file_operations")
    command_registry.import_commands("autogpt.commands.web_selenium")
    command_registry.import_commands("autogpt.app")
    command_registry.import_commands("autogpt.commands.task_statuses")

    # user_desire is the natural-language prompt fed to the automatic AI Config
    # generation; pick one that makes the test pass (see below).
    user_desire = "..."  # TODO: define the prompt for this test
    ai_config = generate_aiconfig_automatic(user_desire)
    ai_config.command_registry = command_registry

    system_prompt = ai_config.construct_full_prompt()

    agent = Agent(
        ai_name=ai_config.ai_name,  # use the auto-generated name
        memory=memory_none,
        full_message_history=[],
        command_registry=command_registry,
        config=ai_config,
        next_action_count=0,
        system_prompt=system_prompt,
        triggering_prompt=DEFAULT_TRIGGERING_PROMPT,
        workspace_directory=workspace.root,
    )

    return agent

Pick a user_desire that makes the test pass.
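As a purely hypothetical illustration (the real wording has to be tuned against the assertions in test_browse_website and is not taken from the repository), the prompt could look something like:

user_desire = (
    "Browse the website given in the test, find the specific piece of "
    "information the assertions check for, and write it to a file in the workspace."
)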

Apply the same logic to all the tests located in the goal_oriented and challenges folders. Create a PR for each test so it's easier to review.

anonhostpi commented:

Related: #3907

github-actions bot commented Sep 6, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions bot added the Stale label on Sep 6, 2023
github-actions bot commented:

This issue was closed automatically because it has been stale for 10 days with no activity.

github-actions bot closed this as not planned on Sep 17, 2023