
fix typos #1254

Merged: 7 commits, Jun 1, 2024
2 changes: 1 addition & 1 deletion docs/CONTRIBUTING.md
@@ -14,7 +14,7 @@ If you encounter a bug or have a feature in mind, don't hesitate to [open a new

## Philosophy

- This is a minimalist, **tighly scoped** project that places a premium on simplicity. We're skeptical of new extensions, integrations, and extra features. We would rather not extend the system if it adds nonessential complexity.
+ This is a minimalist, **tightly scoped** project that places a premium on simplicity. We're skeptical of new extensions, integrations, and extra features. We would rather not extend the system if it adds nonessential complexity.

# Contribution Guidelines

4 changes: 2 additions & 2 deletions docs/ROADMAP.md
@@ -48,14 +48,14 @@

# What's in our scope?

- Open Interpreter contains two projects which support eachother, whose scopes are as follows:
+ Open Interpreter contains two projects which support each other, whose scopes are as follows:

1. `core`, which is dedicated to figuring out how to get LLMs to safely control a computer. Right now, this means creating a real-time code execution environment that language models can operate.
2. `terminal_interface`, a text-only way for users to direct the code-running LLM running inside `core`. This includes functions for connecting the `core` to various local and hosted LLMs (which the `core` itself should not know about).

# What's not in our scope?

- Our guiding philosphy is minimalism, so we have also decided to explicitly consider the following as **out of scope**:
+ Our guiding philosophy is minimalism, so we have also decided to explicitly consider the following as **out of scope**:

1. Additional functions in `core` beyond running code.
2. More complex interactions with the LLM in `terminal_interface` beyond text (but file paths to more complex inputs, like images or video, can be included in that text).
4 changes: 2 additions & 2 deletions docs/getting-started/setup.mdx
@@ -4,7 +4,7 @@ title: Setup

## Experimental one-line installers

- To try our experimental installers, open your Terminal with admin priviledges [(click here to learn how)](https://chat.openai.com/share/66672c0f-0935-4c16-ac96-75c1afe14fe3), then paste the following commands:
+ To try our experimental installers, open your Terminal with admin privileges [(click here to learn how)](https://chat.openai.com/share/66672c0f-0935-4c16-ac96-75c1afe14fe3), then paste the following commands:

<CodeGroup>

@@ -57,7 +57,7 @@ from interpreter import interpreter
interpreter.chat()
```

- You can also pass messages to `interpreter` programatically:
+ You can also pass messages to `interpreter` programmatically:

```python
interpreter.chat("Get the last 5 BBC news headlines.")
2 changes: 1 addition & 1 deletion docs/guides/advanced-terminal-usage.mdx
@@ -10,7 +10,7 @@ Magic commands can be used to control the interpreter's behavior in interactive
- `%undo`: Remove previous messages and its response from the message history.
- `%save_message [path]`: Saves messages to a specified JSON path. If no path is provided, it defaults to 'messages.json'.
- `%load_message [path]`: Loads messages from a specified JSON path. If no path is provided, it defaults to 'messages.json'.
- - `%tokens [prompt]`: EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calulate the tokens used by that prompt and the total amount of tokens that will be sent with the next request.
+ - `%tokens [prompt]`: EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calculate the tokens used by that prompt and the total amount of tokens that will be sent with the next request.
- `%info`: Show system and interpreter information.
- `%help`: Show this help message.
- `%jupyter`: Export the current session to a Jupyter notebook file (.ipynb) to the Downloads folder.
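The `%`-prefixed magic commands listed above are plain string commands dispatched by the terminal interface. As a rough illustration only (this is not Open Interpreter's actual `magic_commands.py` logic; the helper and its state dict are hypothetical), a dispatcher for a few of them might look like:

```python
# Minimal sketch of a magic-command dispatcher like the `%` commands
# documented above. Names and behavior are illustrative only.

def handle_magic(command_line, state):
    """Parse a `%command [argument]` string and dispatch it."""
    parts = command_line.strip().split(maxsplit=1)
    name = parts[0].lstrip("%")
    arg = parts[1] if len(parts) > 1 else None

    if name == "undo":
        # Drop the last user message and the reply that followed it.
        state["messages"] = state["messages"][:-2]
        return "Removed last exchange."
    elif name == "save_message":
        path = arg or "messages.json"  # default path, as documented above
        return f"Saved messages to {path}."
    elif name == "help":
        return "Available: %undo, %save_message [path], %help"
    return f"Unknown command: %{name}"

state = {"messages": [{"role": "user", "content": "hi"},
                      {"role": "assistant", "content": "hello"}]}
print(handle_magic("%undo", state))  # Removed last exchange.
print(len(state["messages"]))        # 0
```

The real implementation also touches the interpreter object (saving to disk, counting tokens, and so on); this sketch only shows the parse-then-dispatch shape.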
2 changes: 1 addition & 1 deletion docs/guides/demos.mdx
@@ -42,7 +42,7 @@ OS mode using Logic Pro X to record a piano song and play it back:

#### Generating images in Everart.ai

- Open Interpreter descibing pictures it wants to make, then creating them using OS mode:
+ Open Interpreter describing pictures it wants to make, then creating them using OS mode:

<iframe src="data:text/html;charset=utf-8,%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3EThis%20is%20wild.%20I%20gave%20OS%20control%20to%20GPT-4%20via%20the%20latest%20update%20of%20Open%20Interpreter%20and%20now%20it%27s%20generating%20pictures%20it%20wants%20to%20see%20in%20%40everartai%20%F0%9F%A4%AF%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20GPT%20is%20controlling%20the%20mouse%20and%20adding%20text%20in%20the%20fields%2C%20I%20am%20not%20doing%20anything.%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/hGgML9epEc%22%3Epic.twitter.com/hGgML9epEc%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C/p%3E%26mdash%3B%20Pietro%20Schirano%20%28%40skirano%29%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/skirano/status/1747670816437735836%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%2017%2C%202024%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C/blockquote%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A%20%20%20%20%20%20%20%20" width="100%" height="500"></iframe>

2 changes: 1 addition & 1 deletion docs/guides/multiple-instances.mdx
@@ -8,7 +8,7 @@ To create multiple instances, use the base class, `OpenInterpreter`:
from interpreter import OpenInterpreter

agent_1 = OpenInterpreter()
- agent_1.system_message = "This is a seperate instance."
+ agent_1.system_message = "This is a separate instance."

agent_2 = OpenInterpreter()
agent_2.system_message = "This is yet another instance."
4 changes: 2 additions & 2 deletions docs/language-models/hosted-models/aws-sagemaker.mdx
@@ -65,6 +65,6 @@ Set the following environment variables [(click here to learn how)](https://chat

| Environment Variable | Description | Where to Find |
| ----------------------- | ----------------------------------------------- | ----------------------------------------------------------------------------------- |
- | `AWS_ACCESS_KEY_ID` | The API access key for your AWS account. | [AWS Account Overview -> Security Credintials](https://console.aws.amazon.com/) |
- | `AWS_SECRET_ACCESS_KEY` | The API secret access key for your AWS account. | [AWS Account Overview -> Security Credintials](https://console.aws.amazon.com/) |
+ | `AWS_ACCESS_KEY_ID` | The API access key for your AWS account. | [AWS Account Overview -> Security Credentials](https://console.aws.amazon.com/) |
+ | `AWS_SECRET_ACCESS_KEY` | The API secret access key for your AWS account. | [AWS Account Overview -> Security Credentials](https://console.aws.amazon.com/) |
| `AWS_REGION_NAME` | The AWS region you want to use | [AWS Account Overview -> Navigation bar -> Region](https://console.aws.amazon.com/) |
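The three variables in the table above are read from the environment before a SageMaker client can be constructed. As an illustrative sketch (the `load_aws_config` helper is hypothetical, not part of Open Interpreter; only the variable names come from the docs), validating them up front might look like:

```python
# Sketch: reading the AWS credentials documented above from the
# environment and failing fast if any are missing.
import os

REQUIRED = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION_NAME"]

def load_aws_config(env=os.environ):
    """Return the required AWS settings, raising if any are missing."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise EnvironmentError(
            f"Missing environment variables: {', '.join(missing)}"
        )
    return {name: env[name] for name in REQUIRED}

cfg = load_aws_config({"AWS_ACCESS_KEY_ID": "AKIA-example",
                       "AWS_SECRET_ACCESS_KEY": "example-secret",
                       "AWS_REGION_NAME": "us-east-1"})
print(cfg["AWS_REGION_NAME"])  # us-east-1
```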
2 changes: 1 addition & 1 deletion docs/safety/introduction.mdx
@@ -10,7 +10,7 @@ Safety is a top priority for us at Open Interpreter. Running LLM generated code

- Requiring confirmation with the user before the code is actually run. This is a simple measure that can prevent a lot of accidents. It exists as another layer of protection, but can be disabled with the `--auto-run` flag if you wish.

- - Sandboxing code execution. Open Interpreter can be run in a sandboxed envirnoment using [Docker](/integrations/docker). This is a great way to run code without worrying about it affecting your system. Docker support is currently experimental, but we are working on making it a core feature of Open Interpreter. Another option for sandboxing is [E2B](https://e2b.dev/), which overrides the default python language with a sandboxed, hosted version of python through E2B. Follow [this guide](/integrations/e2b) to set it up.
+ - Sandboxing code execution. Open Interpreter can be run in a sandboxed environment using [Docker](/integrations/docker). This is a great way to run code without worrying about it affecting your system. Docker support is currently experimental, but we are working on making it a core feature of Open Interpreter. Another option for sandboxing is [E2B](https://e2b.dev/), which overrides the default python language with a sandboxed, hosted version of python through E2B. Follow [this guide](/integrations/e2b) to set it up.

## Notice

4 changes: 2 additions & 2 deletions docs/usage/python/multiple-instances.mdx
@@ -4,13 +4,13 @@ To create multiple instances, use the base class, `OpenInterpreter`:
from interpreter import OpenInterpreter

agent_1 = OpenInterpreter()
- agent_1.system_message = "This is a seperate instance."
+ agent_1.system_message = "This is a separate instance."

agent_2 = OpenInterpreter()
agent_2.system_message = "This is yet another instance."
```
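The page goes on to suggest making the two instances talk to each other via a `swap_roles` helper, but the hunk is truncated here. A self-contained sketch of what such a role-flipping helper could look like (hypothetical, not the project's actual code):

```python
# Sketch of the role-swapping idea: flip user/assistant roles so that
# one agent's output becomes the other agent's input.

def swap_roles(messages):
    """Return a copy of `messages` with user and assistant roles flipped."""
    flip = {"user": "assistant", "assistant": "user"}
    return [{**m, "role": flip.get(m["role"], m["role"])} for m in messages]

history = [{"role": "user", "content": "Hello"},
           {"role": "assistant", "content": "Hi! How can I help?"}]
print(swap_roles(history)[0]["role"])  # assistant
```

With a helper like this, each turn of a two-agent loop would swap the history before handing it to the other instance's chat method.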

- For fun, you could make these instances talk to eachother:
+ For fun, you could make these instances talk to each other:

```python
def swap_roles(messages):
2 changes: 1 addition & 1 deletion docs/usage/terminal/magic-commands.mdx
@@ -10,6 +10,6 @@ Magic commands can be used to control the interpreter's behavior in interactive
- `%undo`: Remove previous messages and its response from the message history.
- `%save_message [path]`: Saves messages to a specified JSON path. If no path is provided, it defaults to 'messages.json'.
- `%load_message [path]`: Loads messages from a specified JSON path. If no path is provided, it defaults to 'messages.json'.
- - `%tokens [prompt]`: EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calulate the tokens used by that prompt and the total amount of tokens that will be sent with the next request.
+ - `%tokens [prompt]`: EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calculate the tokens used by that prompt and the total amount of tokens that will be sent with the next request.
- `%info`: Show system and interpreter information.
- `%help`: Show this help message.
2 changes: 1 addition & 1 deletion interpreter/core/computer/calendar/calendar.py
@@ -23,7 +23,7 @@
class Calendar:
def __init__(self, computer):
self.computer = computer
- # In the future, we might consider a way to use a different calender app. For now its Calendar
+ # In the future, we might consider a way to use a different calendar app. For now its Calendar
self.calendar_app = "Calendar"

def get_events(self, start_date=datetime.date.today(), end_date=None):
4 changes: 2 additions & 2 deletions interpreter/core/computer/display/display.py
@@ -65,7 +65,7 @@ def center(self):

def info(self):
"""
- Returns a list of all connected monitor/displays and thir information
+ Returns a list of all connected monitor/displays and their information
"""
return get_displays()

@@ -145,7 +145,7 @@ def screenshot(
screen=screen, combine_screens=combine_screens
) # this function uses pyautogui.screenshot which works fine for all OS (mac, linux and windows)
message = format_to_recipient(
- "Taking a screenshot of the entire screen. This is not recommended. You (the language model assistant) will recieve it with low resolution.\n\nTo maximize performance, use computer.display.view(active_app_only=True). This will produce an ultra high quality image of the active application.",
+ "Taking a screenshot of the entire screen. This is not recommended. You (the language model assistant) will receive it with low resolution.\n\nTo maximize performance, use computer.display.view(active_app_only=True). This will produce an ultra high quality image of the active application.",
"assistant",
)
print(message)
2 changes: 1 addition & 1 deletion interpreter/core/computer/mouse/mouse.py
@@ -136,7 +136,7 @@ def move(self, *args, x=None, y=None, icon=None, text=None, screenshot=None):
elif x is not None and y is not None:
print(
format_to_recipient(
- "Unless you have just recieved these EXACT coordinates from a computer.mouse.move or computer.mouse.click command, PLEASE take a screenshot with computer.display.view() to find TEXT OR ICONS to click, then use computer.mouse.click(text) or computer.mouse.click(icon=description_of_icon) if at all possible. This is **significantly** more accurate than using coordinates. Specifying (x=x, y=y) is highly likely to fail. Specifying ('text to click') is highly likely to succeed.",
+ "Unless you have just received these EXACT coordinates from a computer.mouse.move or computer.mouse.click command, PLEASE take a screenshot with computer.display.view() to find TEXT OR ICONS to click, then use computer.mouse.click(text) or computer.mouse.click(icon=description_of_icon) if at all possible. This is **significantly** more accurate than using coordinates. Specifying (x=x, y=y) is highly likely to fail. Specifying ('text to click') is highly likely to succeed.",
"assistant",
)
)
@@ -119,7 +119,7 @@ def iopub_message_listener():

if DEBUG_MODE:
print("-----------" * 10)
- print("Message recieved:", msg["content"])
+ print("Message received:", msg["content"])
print("-----------" * 10)

if (
2 changes: 1 addition & 1 deletion interpreter/core/computer/terminal/languages/python.py
@@ -2,7 +2,7 @@

from .jupyter_language import JupyterLanguage

- # Supresses a weird debugging error
+ # Suppresses a weird debugging error
os.environ["PYDEVD_DISABLE_FILE_VALIDATION"] = "1"
# turn off colors in "terminal"
os.environ["ANSI_COLORS_DISABLED"] = "1"
2 changes: 1 addition & 1 deletion interpreter/core/core.py
@@ -364,7 +364,7 @@ def is_active_line_chunk(chunk):

last_flag_base = {"role": chunk["role"], "type": chunk["type"]}

- # Don't add format to type: "console" flags, to accomodate active_line AND output formats
+ # Don't add format to type: "console" flags, to accommodate active_line AND output formats
if "format" in chunk and chunk["type"] != "console":
last_flag_base["format"] = chunk["format"]

2 changes: 1 addition & 1 deletion interpreter/core/respond.py
@@ -59,7 +59,7 @@ def respond(interpreter):
"content": force_task_completion_message,
}
)
- # Yield two newlines to seperate the LLMs reply from previous messages.
+ # Yield two newlines to separate the LLMs reply from previous messages.
yield {"role": "assistant", "type": "message", "content": "\n\n"}
insert_force_task_completion_message = False

2 changes: 1 addition & 1 deletion interpreter/core/utils/scan_code.py
@@ -30,7 +30,7 @@ def scan_code(code, language, interpreter):
# Run semgrep
try:
# HACK: we need to give the subprocess shell access so that the semgrep from our pyproject.toml is available
- # the global namespace might have semgrep from guarddog installed, but guarddog is currenlty
+ # the global namespace might have semgrep from guarddog installed, but guarddog is currently
# pinned to an old semgrep version that has issues with reading the semgrep registry
# while scanning a single file like the temporary one we generate
# if guarddog solves [#249](https://github.com/DataDog/guarddog/issues/249) we can change this approach a bit
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/magic_commands.py
@@ -54,7 +54,7 @@ def handle_help(self, arguments):
"%undo": "Remove previous messages and its response from the message history.",
"%save_message [path]": "Saves messages to a specified JSON path. If no path is provided, it defaults to 'messages.json'.",
"%load_message [path]": "Loads messages from a specified JSON path. If no path is provided, it defaults to 'messages.json'.",
- "%tokens [prompt]": "EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calulate the tokens used by that prompt and the total amount of tokens that will be sent with the next request",
+ "%tokens [prompt]": "EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calculate the tokens used by that prompt and the total amount of tokens that will be sent with the next request",
"%help": "Show this help message.",
"%info": "Show system and interpreter information",
"%jupyter": "Export the conversation to a Jupyter notebook file",
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/profiles/defaults/01.py
@@ -90,7 +90,7 @@
You may use the `computer` module to control the user's keyboard and mouse, if the task **requires** it:

```python
- computer.display.info() # Returns a list of connected monitors/Displays and their info (x and y cordinates, width, height, width_mm, height_mm, name). Use this to verify the monitors connected before using computer.display.view() when neccessary
+ computer.display.info() # Returns a list of connected monitors/Displays and their info (x and y coordinates, width, height, width_mm, height_mm, name). Use this to verify the monitors connected before using computer.display.view() when necessary
computer.display.view() # Shows you what's on the screen (primary display by default), returns a `pil_image` `in case you need it (rarely). To get a specific display, use the parameter screen=DISPLAY_NUMBER (0 for primary monitor 1 and above for secondary monitors). **You almost always want to do this first!**
computer.keyboard.hotkey(" ", "command") # Opens spotlight
computer.keyboard.write("hello")
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/profiles/defaults/os.py
@@ -36,7 +36,7 @@
```python
computer.browser.search(query) # Silently searches Google for the query, returns result. The user's browser is unaffected. (does not open a browser!)

- computer.display.info() # Returns a list of connected monitors/Displays and their info (x and y cordinates, width, height, width_mm, height_mm, name). Use this to verify the monitors connected before using computer.display.view() when neccessary
+ computer.display.info() # Returns a list of connected monitors/Displays and their info (x and y coordinates, width, height, width_mm, height_mm, name). Use this to verify the monitors connected before using computer.display.view() when necessary
computer.display.view() # Shows you what's on the screen (primary display by default), returns a `pil_image` `in case you need it (rarely). To get a specific display, use the parameter screen=DISPLAY_NUMBER (0 for primary monitor 1 and above for secondary monitors). **You almost always want to do this first!**

computer.keyboard.hotkey(" ", "command") # Opens spotlight (very useful)
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/start_terminal_interface.py
@@ -75,7 +75,7 @@ def start_terminal_interface(interpreter):
{
"name": "llm_supports_vision",
"nickname": "lsv",
- "help_text": "inform OI that your model supports vision, and can recieve vision inputs",
+ "help_text": "inform OI that your model supports vision, and can receive vision inputs",
"type": bool,
"action": argparse.BooleanOptionalAction,
"attribute": {"object": interpreter.llm, "attr_name": "supports_vision"},
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/terminal_interface.py
@@ -45,7 +45,7 @@


def terminal_interface(interpreter, message):
- # Auto run and offline (this.. this isnt right) don't display messages.
+ # Auto run and offline (this.. this isn't right) don't display messages.
# Probably worth abstracting this to something like "debug_cli" at some point.
if not interpreter.auto_run and not interpreter.offline:
interpreter_intro_message = [
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/utils/display_output.py
@@ -30,7 +30,7 @@ def display_output(output):

# Return a message for the LLM.
# We should make this specific to what happened in the future,
- # like saying WHAT temporary file we made, ect. Keep the LLM informed.
+ # like saying WHAT temporary file we made, etc. Keep the LLM informed.
return "Displayed on the user's machine."

