Merged
3 changes: 0 additions & 3 deletions .github/workflows/integration_test_8gpu.yaml
@@ -38,8 +38,5 @@ jobs:
 
         python -m pip install --force-reinstall --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124
 
-        # install torchtitan to test the files in ./scripts
-        python -m pip install -e .
-
         mkdir artifacts-to-be-uploaded
         python ./tests/integration_tests.py artifacts-to-be-uploaded --ngpu 8
1 change: 0 additions & 1 deletion README.md
@@ -76,7 +76,6 @@ cd torchtitan
 pip install -r requirements.txt
 pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124 --force-reinstall
 [For AMD GPU] pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/rocm6.3 --force-reinstall
-pip install -e .
 ```
 
 ### Downloading a tokenizer
2 changes: 1 addition & 1 deletion docs/checkpoint.md
@@ -5,7 +5,7 @@ An example script for converting the original Llama3 checkpoints into the expect
 
 The script expects a path to the original checkpoint files, and a path to an output directory:
 ```bash
-python3 scripts/convert_llama_to_dcp.py <input_dir> <output_dir>
+python -m scripts.convert_llama_to_dcp <input_dir> <output_dir>
 ```
 
 
2 changes: 1 addition & 1 deletion run_train.sh
@@ -25,4 +25,4 @@ PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" \
 TORCHFT_LIGHTHOUSE=${TORCHFT_LIGHTHOUSE} \
 torchrun --nproc_per_node=${NGPU} --rdzv_backend c10d --rdzv_endpoint="localhost:0" \
 --local-ranks-filter ${LOG_RANK} --role rank --tee 3 \
-torchtitan/train.py --job.config_file ${CONFIG_FILE} $overrides
+-m torchtitan.train --job.config_file ${CONFIG_FILE} $overrides
2 changes: 1 addition & 1 deletion scripts/estimate/run_memory_estimation.sh
@@ -23,4 +23,4 @@ fi
 # Export WORLD_SIZE and LOCAL_RANK
 export WORLD_SIZE=$((NGPU * NNODES))
 export LOCAL_RANK=0
-python scripts/estimate/estimation.py --job.config_file ${CONFIG_FILE} --memory_estimation.enabled $overrides
+python -m scripts.estimate.estimation --job.config_file ${CONFIG_FILE} --memory_estimation.enabled $overrides
2 changes: 1 addition & 1 deletion scripts/generate/README.md
@@ -33,5 +33,5 @@ PROMPT="What is the meaning of life?" \
 #### View Available Arguments
 
 ```bash
-> python ./scripts/generate/test_generate.py --help
+> python -m scripts.generate.test_generate --help
 ```
2 changes: 1 addition & 1 deletion scripts/generate/run_llama_generate.sh
@@ -37,7 +37,7 @@ set -x
 torchrun --standalone \
     --nproc_per_node="${NGPU}" \
     --local-ranks-filter="${LOG_RANK}" \
-    scripts/generate/test_generate.py \
+    -m scripts.generate.test_generate \
     --config="${CONFIG_FILE}" \
     --checkpoint="${CHECKPOINT_DIR}" \
     --prompt="${PROMPT}" \