# Coach

I work at Replicate. It's an API for running models.

One of the cool models is llava, an open-source alternative to GPT-4 Vision. What can you do with llava? 🤔

*prompt: "what's in this fridge?" (llava example image)*

Well, I procrastinate a lot. So I procrastinated by making a thing that helps me stop procrastinating!

*(demo screenshot)*

## How does it work?

First, give coach your goal:

```shell
python coach.py --goal "work on a coding project"
```

### Take screenshots every 2s

*(recorder.mp4)*
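The screenshot loop might look roughly like this (a sketch, assuming macOS's built-in `screencapture` CLI and a `screenshots/` output directory; the interval, paths, and helper names are illustrative, not the project's actual code):

```python
import subprocess
import time
from datetime import datetime
from pathlib import Path

SCREENSHOT_DIR = Path("screenshots")
INTERVAL_SECONDS = 2

def screenshot_path(now: datetime) -> Path:
    """Build a timestamped path like screenshots/2024-02-05_09-58-33.png."""
    return SCREENSHOT_DIR / f"{now:%Y-%m-%d_%H-%M-%S}.png"

def capture_loop() -> None:
    """Capture the screen every few seconds, forever."""
    SCREENSHOT_DIR.mkdir(exist_ok=True)
    while True:
        path = screenshot_path(datetime.now())
        # -x suppresses the shutter sound (macOS screencapture flag)
        subprocess.run(["screencapture", "-x", str(path)], check=True)
        time.sleep(INTERVAL_SECONDS)
```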

### Ask llava what it sees

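Calling llava through Replicate could be sketched like this (assumptions: the `replicate` Python client, the community `yorickvp/llava-13b` model with its version pin omitted for brevity, a `REPLICATE_API_TOKEN` in the environment, and an illustrative prompt):

```python
PROMPT = "Describe what the user is doing on this computer screen."

def llava_input(image, prompt: str = PROMPT) -> dict:
    """Build the input payload for the llava model."""
    return {"image": image, "prompt": prompt}

def describe_screenshot(image_path: str) -> str:
    import replicate  # third-party client; imported lazily

    with open(image_path, "rb") as image:
        # llava streams its answer in chunks; join them into one string
        output = replicate.run("yorickvp/llava-13b", input=llava_input(image))
        return "".join(output)
```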

### Ask macOS which app is focused

```shell
osascript -e 'tell application "System Events" to get the name of the first process whose frontmost is true'
```
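From Python, that AppleScript one-liner can be wrapped with `subprocess` (a macOS-only sketch; the helper name is illustrative):

```python
import subprocess

FRONTMOST_SCRIPT = (
    'tell application "System Events" to '
    "get the name of the first process whose frontmost is true"
)

def frontmost_app() -> str:
    """Return the name of the currently focused application."""
    result = subprocess.run(
        ["osascript", "-e", FRONTMOST_SCRIPT],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```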

### Track activities in a JSON file

Each activity is saved in this format:

```python
class Activity(BaseModel):
    datetime: datetime
    application: str
    activity: str
    image_path: str
    model: str
    prompt: str
    goal: str = None
    is_productive: bool = None
    explanation: str = None
    iteration_duration: float = None
```
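Appending each record to the JSON file could look like this (a sketch using a plain dataclass as a stand-in for the Activity model above; the `activities.json` file name and helper are illustrative):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Activity:
    # dataclass stand-in, mirroring the field names shown above
    datetime: str
    application: str
    activity: str
    image_path: str
    model: str
    prompt: str
    goal: str = None
    is_productive: bool = None
    explanation: str = None
    iteration_duration: float = None

def append_activity(activity: Activity, path: str = "activities.json") -> None:
    """Load existing records (if any), append the new one, write back."""
    try:
        with open(path) as f:
            records = json.load(f)
    except FileNotFoundError:
        records = []
    records.append(asdict(activity))
    with open(path, "w") as f:
        json.dump(records, f, indent=2)
```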


You can already do interesting things with this data:

*(chart)*

### Use a language model to decide whether the current activity is productive

```shell
$ python test_coach.py \
      --image_description "The computer screen displays a code editor with a file open, showing a Python script." \
      --goal "work on a coding project"

productive=True explanation='Based on the information provided, it appears that you have a code editor open and are viewing a Python script, which aligns with your goal of working on a coding project. Therefore, your current activity is considered productive.'

$ python test_coach.py \
      --image_description "The computer screen displays a web browser with YouTube open" \
      --goal "work on a coding project"

productive=False explanation='Watching videos on YouTube is not helping you work on your coding project. Try closing the YouTube tab and opening your coding project instead.'
```

How do I guarantee that the output is JSON? Mixtral doesn't support function calling yet, so I just ask it nicely to give me JSON. I then use the instructor library to retry whenever the output fails to validate.

```python
model = "ollama/mixtral"
messages = [
    {
        "role": "system",
        "content": """You are a JSON extractor. Please extract the following JSON, No Talking at all. Just output JSON based on the description. NO TALKING AT ALL!!""",
    },
    {
        "role": "user",
        "content": f"""You are a productivity coach. You are helping me accomplish my goal of {goal}. Let me know if you think the description of my current activity is in line with my goals.

RULES: You must respond in JSON format. DO NOT RESPOND WITH ANY TALKING.

## Current status:
Goal: {goal}
Current activity: {description}

## Result:""",
    },
]

record = completion(
    model=model,
    response_model=GoalExtract,
    max_retries=5,
    messages=messages,
)
```
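The retry idea in miniature (a stdlib-only sketch of the parse-and-retry loop; in the project, instructor handles this by wrapping the `completion` call and validating against the response model):

```python
import json

def extract_json(text: str) -> dict:
    """Parse the model's raw reply, raising if it isn't valid JSON."""
    return json.loads(text)

def ask_with_retries(ask, max_retries: int = 5) -> dict:
    """Call ask() (a function returning the model's raw reply) until the
    reply parses as JSON, up to max_retries attempts."""
    last_error = None
    for _ in range(max_retries):
        try:
            return extract_json(ask())
        except json.JSONDecodeError as err:
            last_error = err  # model chatted instead of emitting JSON; ask again
    raise last_error
```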

## See it live!

```shell
python coach.py --goal 'work on a coding project' --cloud
```

OR remove the cloud flag to run locally on Ollama:

```shell
python coach.py --goal 'work on a coding project'
```

Optionally, activate hard mode:

```shell
python coach.py --goal 'work on a coding project' --cloud
```

Demo video:

*(coach-demo-compressed.mp4)*

## Future ideas


What happens if you embed the text on your screen and see how far it is from distracting keywords?

```shell
python ocr.py
```

https://github.com/straussmaximilian/ocrmac
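One hedged sketch of that idea: embed the OCR'd screen text and a list of distracting keywords, then flag the screen when any keyword embedding is too close (this assumes the `sentence-transformers` package; the model name, keyword list, and threshold are all illustrative, not part of the project):

```python
import math

DISTRACTING = ["youtube", "twitter", "reddit", "hacker news"]

def cosine_similarity(a, b) -> float:
    """Plain cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def looks_distracting(screen_text: str, threshold: float = 0.5) -> bool:
    from sentence_transformers import SentenceTransformer  # heavy; lazy import

    model = SentenceTransformer("all-MiniLM-L6-v2")
    text_vec = model.encode(screen_text)
    keyword_vecs = model.encode(DISTRACTING)
    # flag if the screen text is close to any distracting keyword
    return any(cosine_similarity(text_vec, kv) > threshold for kv in keyword_vecs)
```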

