39 changes: 39 additions & 0 deletions .github/workflows/check_code_snippets.yml
@@ -0,0 +1,39 @@
name: Test W&B Doc Code Snippets
on: [pull_request]

jobs:
  generate-matrix:
    runs-on: ubuntu-latest
    outputs:
      scripts: ${{ steps.list-scripts.outputs.scripts }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Find Python scripts
        id: list-scripts
        run: |
          SCRIPTS=$(find code_examples/source -name "*.py" | jq -R -s -c 'split("\n")[:-1]')

Reviewer comment (Contributor):
This is a bit greedy. At a minimum I'd add -type f so you don't find directories that end in .py. I'm also confused why you have to take something that is going to be multi-line input with one result per line, join it up, and then split it again right away?

Suggested change:
- SCRIPTS=$(find code_examples/source -name "*.py" | jq -R -s -c 'split("\n")[:-1]')
+ SCRIPTS=$(find 'code_examples/source' -type f -name "*.py" | jq -R -s -c 'split("\n")[:-1]')

echo "scripts=$SCRIPTS" >> $GITHUB_ENV
echo "::set-output name=scripts::$SCRIPTS"

  test:
    needs: generate-matrix
    runs-on: ubuntu-latest
    strategy:
      matrix:
        script: ${{ fromJson(needs.generate-matrix.outputs.scripts) }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Install and check dependencies
        run: bash code_examples/check_dependencies.sh

      - name: Run W&B script
        run: python3 ${{ matrix.script }}
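
As a side note for local testing, the following is a minimal Python sketch of what this matrix does — discover every example script and run it — under the assumption that the scripts live in `code_examples/source` and can run with the current interpreter; it is not part of the workflow above.

```python
# Minimal local sketch: run every example script, mirroring the generate-matrix/test jobs.
import pathlib
import subprocess
import sys

scripts = sorted(pathlib.Path("code_examples/source").rglob("*.py"))
failed = []
for script in scripts:
    print(f"Running {script}")
    if subprocess.run([sys.executable, str(script)]).returncode != 0:
        failed.append(str(script))

if failed:
    print(f"Failed scripts: {failed}")
    sys.exit(1)
print("All example scripts ran successfully.")
```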
12 changes: 11 additions & 1 deletion .github/workflows/gpt-editor-ci-version.yml
@@ -345,12 +345,22 @@ jobs:
repo = os.environ.get('GITHUB_REPOSITORY')
action_url = f"https://github.com/{repo}/actions/runs/{run_id}"

# Validate that the line numbers make sense
if start_line < 1:
    print(f"Warning: Invalid start line {start_line}, adjusting to 1")
    start_line = 1

# Format the suggestion with link to the action run
body = f"```suggestion\n{chr(10).join(improved_section)}\n```\n\n*[Generated by GPT Editor]({action_url})*"

pr_number = os.environ.get('PR_NUMBER')
github_token = os.environ.get('GITHUB_TOKEN')

# Debug output to help track line mappings
print(f"Suggesting change for {file_path} lines {start_line}-{end_line}")
print(f"Original ({len(original_section)} lines): {original_section[:1]}...")
print(f"Improved ({len(improved_section)} lines): {improved_section[:1]}...")

# Create a review comment using GitHub API
try:
    # First get the latest commit SHA on the PR
@@ -546,4 +556,4 @@ jobs:
-H "Authorization: token $GITHUB_TOKEN" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/$GITHUB_REPOSITORY/issues/$PR_NUMBER/comments \
-d "$COMMENT_BODY"
-d "$COMMENT_BODY"
5 changes: 5 additions & 0 deletions code_examples/check_dependencies.sh
@@ -0,0 +1,5 @@
#!/bin/bash
REQUIREMENTS_PATH="code_examples/requirements.txt"
pip install -r "$REQUIREMENTS_PATH" && echo "All packages installed successfully!" || echo "Installation failed."

python -c "import pkgutil; import sys; packages = [pkg.strip() for pkg in open('$REQUIREMENTS_PATH')]; missing = [p for p in packages if not pkgutil.find_loader(p)]; sys.exit(1) if missing else print('✅ All packages are installed.')"
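
The `python -c` one-liner checks requirement names as import names with `pkgutil`, which only works while `requirements.txt` holds bare package names. A more defensive variant — a sketch, assuming requirement names map to distribution names once version pins are stripped — could use `importlib.metadata`:

```python
# Sketch: verify each requirement is installed by querying installed distribution metadata.
import sys
from importlib import metadata

REQUIREMENTS_PATH = "code_examples/requirements.txt"

with open(REQUIREMENTS_PATH) as f:
    packages = [
        line.split("==")[0].split(">=")[0].strip()
        for line in f
        if line.strip() and not line.startswith("#")
    ]

missing = []
for pkg in packages:
    try:
        metadata.version(pkg)
    except metadata.PackageNotFoundError:
        missing.append(pkg)

if missing:
    print(f"❌ Missing packages: {missing}")
    sys.exit(1)
print("✅ All packages are installed.")
```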
1 change: 1 addition & 0 deletions code_examples/requirements.txt
@@ -0,0 +1 @@
wandb
26 changes: 26 additions & 0 deletions code_examples/snippets/quickstart.snippet.all.py
@@ -0,0 +1,26 @@
import wandb
import random

wandb.login()

epochs = 10
lr = 0.01

run = wandb.init(
project="my-awesome-project", # Specify your project
config={ # Track hyperparameters and metadata
"learning_rate": lr,
"epochs": epochs,
},
)

offset = random.random() / 5
print(f"lr: {lr}")

# Simulate a training run
for epoch in range(2, epochs):
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset
    print(f"epoch={epoch}, accuracy={acc}, loss={loss}")
    wandb.log({"accuracy": acc, "loss": loss})
run.finish()
2 changes: 2 additions & 0 deletions code_examples/snippets/quickstart.snippet.import.py
@@ -0,0 +1,2 @@
import wandb
import random
7 changes: 7 additions & 0 deletions code_examples/snippets/quickstart.snippet.init.py
@@ -0,0 +1,7 @@
run = wandb.init(
project="my-awesome-project", # Specify your project
config={ # Track hyperparameters and metadata
"learning_rate": lr,
"epochs": epochs,
},
)
1 change: 1 addition & 0 deletions code_examples/snippets/quickstart.snippet.login.py
@@ -0,0 +1 @@
wandb.login()
41 changes: 41 additions & 0 deletions code_examples/source/quickstart.py
@@ -0,0 +1,41 @@
#!/usr/bin/env python3

try:
    # :snippet-start: all
    # :snippet-start: import
    import wandb
    import random
    # :snippet-end: import

    # :snippet-start: login
    wandb.login()
    # :snippet-end: login

    epochs = 10
    lr = 0.01

    # :snippet-start: init
    run = wandb.init(
        project="my-awesome-project",  # Specify your project
        config={  # Track hyperparameters and metadata
            "learning_rate": lr,
            "epochs": epochs,
        },
    )
    # :snippet-end: init

    offset = random.random() / 5
    print(f"lr: {lr}")

    # Simulate a training run
    for epoch in range(2, epochs):
        acc = 1 - 2**-epoch - random.random() / epoch - offset
        loss = 2**-epoch + random.random() / epoch + offset
        print(f"epoch={epoch}, accuracy={acc}, loss={loss}")
        wandb.log({"accuracy": acc, "loss": loss})
    run.finish()
    # :snippet-end: all
    exit(0)
except Exception as e:
    print(f"Error: {e}")
    exit(1)
49 changes: 10 additions & 39 deletions content/en/guides/quickstart.md
@@ -40,14 +40,18 @@ To authenticate your machine with W&B, generate an API key from your user profile

{{% tab header="Python" value="python" %}}

Navigate to your terminal and run the following command:
```bash
pip install wandb
```
```python
import wandb

wandb.login()
```
Within your Python script, import the `wandb` library:

{{< code language="python" source="/code_examples/snippets/quickstart.snippet.import.py" >}}

Next, log in to W&B using the `wandb.login()` method:

{{< code language="python" source="/code_examples/snippets/quickstart.snippet.login.py" >}}

{{% /tab %}}

@@ -66,48 +70,15 @@ wandb.login()

In your Python script or notebook, initialize a W&B run object with [`wandb.init()`]({{< relref "/ref/python/sdk/classes/run.md" >}}). Use a dictionary for the `config` parameter to specify hyperparameter names and values.

```python
run = wandb.init(
project="my-awesome-project", # Specify your project
config={ # Track hyperparameters and metadata
"learning_rate": 0.01,
"epochs": 10,
},
)
```
{{< code language="python" source="/code_examples/snippets/quickstart.snippet.init.py" >}}

A [run]({{< relref "/guides/models/track/runs/" >}}) serves as the core element of W&B, used to [track metrics]({{< relref "/guides/models/track/" >}}), [create logs]({{< relref "/guides/models/track/log/" >}}), and more.

## Assemble the components

This mock training script logs simulated accuracy and loss metrics to W&B:

```python
import wandb
import random

wandb.login()

# Project that the run is recorded to
project = "my-awesome-project"

# Dictionary with hyperparameters
config = {
    'epochs' : 10,
    'lr' : 0.01
}

with wandb.init(project=project, config=config) as run:
    offset = random.random() / 5
    print(f"lr: {config['lr']}")

    # Simulate a training run
    for epoch in range(2, config['epochs']):
        acc = 1 - 2**-config['epochs'] - random.random() / config['epochs'] - offset
        loss = 2**-config['epochs'] + random.random() / config['epochs'] + offset
        print(f"epoch={config['epochs']}, accuracy={acc}, loss={loss}")
        run.log({"accuracy": acc, "loss": loss})
```
{{< code language="python" source="/code_examples/snippets/quickstart.snippet.all.py" >}}

Visit [wandb.ai/home](https://wandb.ai/home) to view recorded metrics such as accuracy and loss and how they changed during each training step. The following image shows the loss and accuracy tracked from each run. Each run object appears in the **Runs** column with generated names.

7 changes: 7 additions & 0 deletions layouts/shortcodes/code.html
@@ -0,0 +1,7 @@
{{ $language := .Get "language" }}
{{ $source := .Get "source" }}
{{ $options := .Get "options" }}

{{ with $source | readFile }}
{{ highlight (trim . "\n\r") $language $options }}
{{ end }}