
Reopen pull request for conflict resolution on notebook beta feature #553

Closed

Conversation

blujus
Contributor

@blujus blujus commented Sep 28, 2023

Describe the changes you have made:

Sorry, I accidentally closed the original request. I have resolved the new conflicts and re-opened.

Reference any relevant issue (Fixes #000)

  • I have performed a self-review of my code:

I have tested the code on the following OS:

  • Windows
  • MacOS
  • Linux

AI Language Model (if applicable)

  • GPT4
  • GPT3
  • Llama 7B
  • Llama 13B
  • Llama 34B
  • Huggingface model (Please specify which one)

@ericrallen
Collaborator

I think this is a great idea, but it has some challenges with consistent internal state that we might need to address, because the code runs in the Open Interpreter subprocess and not in the Jupyter Notebook.

It looks like the model assumes it can use code as it would in a Jupyter Notebook (for example, `import pandas as pd` in one cell making `pd` available in another), so you run into a lot of failed and repeated code execution when you ask for a task that is slightly more complex than just "Draw a graph."

OpenInterpreter-NotebookMode.mp4
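The failure mode described above can be reproduced outside Open Interpreter with a minimal sketch. Plain subprocesses stand in for the executor here, and the two code blocks are hypothetical; the point is only that each block runs in a fresh process, so nothing defined in one survives into the next:

```python
import subprocess
import sys

# Two "cells" that a model might emit, assuming Jupyter-style state.
blocks = [
    "x = 41 + 1",   # cell 1: defines x
    "print(x)",     # cell 2: fails with NameError, x lived in another process
]

for code in blocks:
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print("ok:", result.stdout.strip())
    else:
        print("error:", result.stderr.strip().splitlines()[-1])
```

In a real Jupyter kernel both cells share one live namespace, so the second cell would print `42`; here the second run dies with a `NameError`, which is exactly the repeated-failure loop seen in the video.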

@blujus
Contributor Author

blujus commented Oct 3, 2023 via email

@ericrallen
Collaborator

I was going to try a really hacky, temporary solution and add instructions to the model to include code from each step in the next step to make sure each code block is fully encapsulated.

Some other thoughts that might be totally off base:

I also wondered if we could leverage some more advanced Jupyter functionality to actually create and execute dynamic cells.

Maybe there’s some combination of programmatic cell execution and programmatic cell creation that could work, but I’m not sure if we’d need to or be able to pause and resume the main interpreter thread for that, though. It looks like there could be a way to run it in the background with multiprocessing, but I don’t think we can get intermediate outputs from it.
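For what it's worth, the Jupyter-style persistence being discussed boils down to keeping one long-lived namespace across executions. This is a minimal stand-in for that idea (not the `jupyter_client` API, and not what Open Interpreter does today, just the concept):

```python
class MiniKernel:
    """A toy persistent executor: one namespace shared across runs,
    the way a Jupyter kernel shares state across cells."""

    def __init__(self):
        self.namespace = {}

    def run(self, code: str) -> None:
        # exec against the same dict every time, so names persist
        exec(code, self.namespace)

kernel = MiniKernel()
kernel.run("x = 41 + 1")      # "cell 1"
kernel.run("y = x * 2")       # "cell 2" can see x
print(kernel.namespace["y"])  # 84
```

A real version of this would be a long-lived subprocess (or an actual Jupyter kernel driven programmatically) rather than in-process `exec`, but the state-sharing property is the same.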

If any of my tests prove to be useful, I’ll report back with my findings.

Just super excited about this functionality. Thanks so much for tackling it.

@blujus
Contributor Author

blujus commented Oct 3, 2023 via email

@blujus
Contributor Author

blujus commented Oct 5, 2023

Have a working example using the JSON serialisation built into Plotly, so Plotly is now the graphing library. Much nicer regardless; Matplotlib feels ancient and its API is hard to understand :-(. The approach: serialise the graph to JSON, hand it back out, and have a handler intercept it and call show() in the hosting notebook environment. Will get it pushed after some tests.

@blujus
Contributor Author

blujus commented Oct 5, 2023

Right, I have committed this with support for serialisation of Plotly graphs. It now executes in the subprocess, as per the main architecture, and attempts to pass the JSON graph back out to the hosting notebook.
