image prompts - Entrypoint prompt - additional CLI argument #1077
Merged
Commits (52; the diff below shows changes from 45 of them)

4abcb66  accept image prompts on cli (TheoMcCabe)
49c4008  add token usage tracking for image prompts (TheoMcCabe)
c2ad358  add token usage tests (TheoMcCabe)
833eafb  fix broken tests 1 (TheoMcCabe)
43fe516  almost all tests passing. Vision model has some issues with the promp… (TheoMcCabe)
9324158  pre commit (TheoMcCabe)
bb7b3c0  import pillow (TheoMcCabe)
a5173fc  update poetry lock (TheoMcCabe)
7ac31ee  log exception (TheoMcCabe)
8b682da  remove print (TheoMcCabe)
8a20266  load env (TheoMcCabe)
ac24948  pre commit (TheoMcCabe)
db29ed3  update ai cache (TheoMcCabe)
eefbb87  revert caching ai change (TheoMcCabe)
56639eb  update poetry lock (TheoMcCabe)
6ca459a  precommit and snake case (TheoMcCabe)
a2808da  ai cache (TheoMcCabe)
847a5e4  try recreating cache from scratch (TheoMcCabe)
fcd7cfe  print missing key on build server (TheoMcCabe)
de16424  fix message collapse (TheoMcCabe)
bc14d84  pre commit (TheoMcCabe)
4d806a4  add more cache (TheoMcCabe)
7353d6f  poetry lock (TheoMcCabe)
f32a610  Fixing tests for now (ATheorell)
21cfc94  Now specifying image and prompt with dedicated arguments (ATheorell)
34acbd2  cleaning up problems from rebase (ATheorell)
06f0bed  updating cache and lock (ATheorell)
d39f766  linting with ruff (ATheorell)
9945ff1  removing falsely added prompt file (ATheorell)
d4f0aa2  self-heal mechanism using improve instead of rewriting code base (ATheorell)
d64ea18  Extended gen_entrypoint to take a prompt from the user (ATheorell)
c2dfec1  possible to pass prompt to make entrypoint (ATheorell)
f8eb4f8  small parsing adjustment (ATheorell)
ea42905  before implementing better self-heal (ATheorell)
c71f7b4  Fixing failing test caused by --entrypoint_prompt (ATheorell)
e68c246  made paths relative for load prompts flow (ATheorell)
60278a2  committing to binary search langchain error (ATheorell)
47ccb46  Fixes to self-heal printing (ATheorell)
3c2fcd3  Removed double improve (ATheorell)
376d402  fixing tests (ATheorell)
23c5e4e  Added cache option and fixed ruff errors (ATheorell)
93b8476  added more docstrings (ATheorell)
42d08b5  after rebasing (ATheorell)
b5ef722  ruff linting (ATheorell)
5263a2c  added missing use_cache argument in the minimized main callable (ATheorell)
9f0d49a  prompt file argument (TheoMcCabe)
8065a8b  remove print (TheoMcCabe)
48d991b  add vision example (TheoMcCabe)
9cd72ca  only collapse if not in vision mode (TheoMcCabe)
a7ebb48  update readme (TheoMcCabe)
eb84a28  pre commit (TheoMcCabe)
eb057ee  re insert fix to non vision ai use case (TheoMcCabe)
Diff view
@@ -29,6 +29,8 @@
 import typer

 from dotenv import load_dotenv
+from langchain.cache import SQLiteCache
+from langchain.globals import set_llm_cache

 from gpt_engineer.applications.cli.cli_agent import CliAgent
 from gpt_engineer.applications.cli.collect import collect_and_send_human_review
@@ -52,6 +54,7 @@
     stage_files,
 )
 from gpt_engineer.core.preprompts_holder import PrepromptsHolder
+from gpt_engineer.core.prompt import Prompt
 from gpt_engineer.tools.custom_steps import clarified_gen, lite_gen, self_heal

 app = typer.Typer()  # creates a CLI app
@@ -73,7 +76,25 @@
 openai.api_key = os.getenv("OPENAI_API_KEY")


-def load_prompt(input_repo: DiskMemory, improve_mode):
+def concatenate_paths(base_path, sub_path):
+    # Compute the relative path from base_path to sub_path
+    relative_path = os.path.relpath(sub_path, base_path)
+
+    # If the relative path is not in the parent directory, use the original sub_path
+    if not relative_path.startswith(".."):
+        return sub_path
+
+    # Otherwise, concatenate base_path and sub_path
+    return os.path.normpath(os.path.join(base_path, sub_path))
+
+
+def load_prompt(
+    input_repo: DiskMemory,
+    improve_mode: bool,
+    prompt_file: str,
+    image_directory: str,
+    entrypoint_prompt_file: str = "",
+) -> Prompt:
     """
     Load or request a prompt from the user based on the mode.

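For readers skimming the diff, here is a minimal, illustrative sketch of what the new `concatenate_paths` helper does. The example paths are hypothetical and not part of this PR.

```python
import os


def concatenate_paths(base_path, sub_path):
    # Same logic as the helper added in this diff.
    relative_path = os.path.relpath(sub_path, base_path)
    if not relative_path.startswith(".."):
        return sub_path
    return os.path.normpath(os.path.join(base_path, sub_path))


# A path already inside the base directory is returned unchanged:
print(concatenate_paths("projects/example", "projects/example/images"))
# projects/example/images

# A bare or CWD-relative path is resolved against the base directory instead:
print(concatenate_paths("projects/example", "prompt"))
# projects/example/prompt
```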
@@ -89,16 +110,47 @@
     str
         The loaded or inputted prompt.
     """
-    if input_repo.get("prompt"):
-        return input_repo.get("prompt")
-
-    if not improve_mode:
-        input_repo["prompt"] = input(
-            "\nWhat application do you want gpt-engineer to generate?\n"
-        )
-    else:
-        input_repo["prompt"] = input("\nHow do you want to improve the application?\n")
-    return input_repo.get("prompt")
+    if os.path.isdir(prompt_file):
+        raise ValueError(
+            f"The path to the prompt, {prompt_file}, already exists as a directory. No prompt can be read from it. Please specify a prompt file using --prompt"
+        )
+    prompt_str = input_repo.get(prompt_file)
+    if not prompt_str:
+        if not improve_mode:
+            prompt_str = input(
+                "\nWhat application do you want gpt-engineer to generate?\n"
+            )
+        else:
+            prompt_str = input("\nHow do you want to improve the application?\n")
+
+    if entrypoint_prompt_file == "":
+        entrypoint_prompt = ""
+    else:
+        full_entrypoint_prompt_file = concatenate_paths(
+            input_repo.path, entrypoint_prompt_file
+        )
+        if os.path.isfile(full_entrypoint_prompt_file):
+            entrypoint_prompt = input_repo.get(full_entrypoint_prompt_file)
+        else:
+            raise ValueError("The provided file at --entrypoint-prompt does not exist")
+
+    if image_directory == "":
+        return Prompt(prompt_str, entrypoint_prompt=entrypoint_prompt)
+
+    full_image_directory = concatenate_paths(input_repo.path, image_directory)
+    if os.path.isdir(full_image_directory):
+        if len(os.listdir(full_image_directory)) == 0:
+            raise ValueError("The provided --image_directory is empty.")
+        image_repo = DiskMemory(full_image_directory)
+        return Prompt(
+            prompt_str,
+            image_repo.get(".").to_dict(),
+            entrypoint_prompt=entrypoint_prompt,
+        )
+    else:
+        raise ValueError("The provided --image_directory is not a directory.")


 def get_preprompts_path(use_custom_preprompts: bool, input_path: Path) -> Path:
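To make the new `load_prompt` flow concrete, the sketch below sets up a project layout it could read from. All names are illustrative assumptions, not mandated by the PR; only "prompt" matches the default value of `--prompt_file` shown further down in this diff.

```python
from pathlib import Path

# Hypothetical project layout for the new prompt-loading flow.
project = Path("projects/example-vision")  # illustrative path
(project / "images").mkdir(parents=True, exist_ok=True)

# Read via --prompt_file (defaults to "prompt" inside the project directory).
(project / "prompt").write_text("Build a snake game with pygame")

# Optional requirements for the generated entrypoint, read via --entrypoint_prompt.
(project / "entrypoint_prompt").write_text("Start the game with `python main.py`")

# Any files in the folder passed via --image_directory are attached to the
# Prompt as images when the selected model supports vision.
(project / "images" / "mockup.png").touch()  # placeholder image file
```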
@@ -140,7 +192,7 @@
 @app.command()
 def main(
     project_path: str = typer.Argument("projects/example", help="path"),
-    model: str = typer.Argument("gpt-4-1106-preview", help="model id string"),
+    model: str = typer.Argument("gpt-4-0125-preview", help="model id string"),
     temperature: float = 0.1,
     improve_mode: bool = typer.Option(
         False,
@@ -184,6 +236,26 @@
         "--llm-via-clipboard",
         help="Use the clipboard to communicate with the AI.",
     ),
+    prompt_file: str = typer.Option(
+        "prompt",
+        "--prompt_file",
+        help="Relative path to a text file containing a prompt.",
+    ),
+    entrypoint_prompt_file: str = typer.Option(
+        "",
+        "--entrypoint_prompt",
+        help="Relative path to a text file containing a file that specifies requirements for your entrypoint.",
+    ),
+    image_directory: str = typer.Option(
+        "",
+        "--image_directory",
+        help="Relative path to a folder containing images.",
+    ),
+    use_cache: bool = typer.Option(
+        False,
+        "--use_cache",
+        help="Speeds up computations and saves tokens when running the same prompt multiple times by caching the LLM response.",
+    ),
     verbose: bool = typer.Option(False, "--verbose", "-v"),
 ):
     """
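As a rough usage sketch of the new options, the snippet below drives the Typer app with `typer.testing.CliRunner` and the project layout shown earlier. The import path `gpt_engineer.applications.cli.main` and the project/model values are assumptions (the file name is not visible in this view), and actually running it would still require a configured OPENAI_API_KEY.

```python
from typer.testing import CliRunner

# Assumed import path for the CLI app defined in this file (not shown in the diff view).
from gpt_engineer.applications.cli.main import app

runner = CliRunner()
result = runner.invoke(
    app,
    [
        "projects/example-vision",   # project_path argument (illustrative)
        "gpt-4-vision-preview",      # model argument (illustrative)
        "--prompt_file", "prompt",   # paths are relative to the project
        "--image_directory", "images",
        "--entrypoint_prompt", "entrypoint_prompt",
        "--use_cache",
    ],
)
print(result.exit_code)
```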
@@ -213,6 +285,14 @@
         The endpoint for Azure OpenAI services.
     use_custom_preprompts : bool
         Flag indicating whether to use custom preprompts.
+    prompt_file : str
+        Relative path to a text file containing a prompt.
+    entrypoint_prompt_file: str
+        Relative path to a text file containing a file that specifies requirements for your entrypoint.
+    image_directory: str
+        Relative path to a folder containing images.
+    use_cache: bool
+        Speeds up computations and saves tokens when running the same prompt multiple times by caching the LLM response.
     verbose : bool
         Flag indicating whether to enable verbose logging.

Review comment on the entrypoint_prompt_file docstring line: great
@@ -223,6 +303,8 @@

     logging.basicConfig(level=logging.DEBUG if verbose else logging.INFO)

+    if use_cache:
+        set_llm_cache(SQLiteCache(database_path=".langchain.db"))
     if improve_mode:
         assert not (
             clarify_mode or lite_mode
@@ -248,7 +330,17 @@
         print("Initializing an empty git repository")
         init_git_repo(path)

-    prompt = load_prompt(DiskMemory(path), improve_mode)
+    prompt = load_prompt(
+        DiskMemory(path),
+        improve_mode,
+        prompt_file,
+        image_directory,
+        entrypoint_prompt_file,
+    )
+
+    # todo: if ai.vision is false and not llm_via_clipboard - ask if they would like to use gpt-4-vision-preview instead? If so recreate AI
+    if not ai.vision:
+        prompt.image_urls = None

     # configure generation function
     if clarify_mode:
Review comment: Are we sure we want commented code here? It's always in the git history if we want to bring it back later, or are we expecting this to be commented in and out regularly?