Thank you for your interest in our system! We have recently been upgrading the system with new features, and it seems that this bug was caused by recent commits. We will fix it and reply as soon as possible!
I ran scripts/search_dist.sh for the llama_hf model on a single A800 node with 8 GPUs:
```bash
export NUM_NODES=1
export NUM_GPUS_PER_NODE=8
MODEL_SIZE="llama-13b"
MEMORY=75

MODEL_ARGS="
--model_size ${MODEL_SIZE}
--set_model_config_manually 0
--set_layernum_manually 0
--vocab_size 32000
--hidden_size 5120
--num_hidden_layers 40
--num_attention_heads 40
--seq_length 2048"

BSZ_ARGS="
--min_bsz 64
--max_bsz 64
--bsz_scale 16
--settle_bsz -1
--recommend_min_bsz 0
"

SEARCH_SPACE_ARGS="
--search_space full
--disable_dp 0
--disable_tp 0
--disable_pp 0
--disable_sdp 1
--disable_ckpt 0
--disable_tp_consec 0
--max_tp_deg 8
--max_pp_deg 8
"

SEARCH_ARGS="
${BSZ_ARGS}
${SEARCH_SPACE_ARGS}
${MODEL_ARGS}
--num_nodes ${NUM_NODES}
--num_gpus_per_node ${NUM_GPUS_PER_NODE}
--memory_constraint ${MEMORY}
--mixed_precision bf16
--pipeline_type pipedream_flush
--default_dp_type ddp
--embed_sdp 0
"

BACKGROUND=1
if [ $BACKGROUND -eq 1 ]; then
    echo "Search in background..."
    OUTPUT_FILE="Search_${MODEL_SIZE}${MEMORY}GB${NUM_NODES}Nodes_${NUM_GPUS_PER_NODE}GPUs_per_node_bsz64.log"
    nohup python3 search_dist.py ${SEARCH_ARGS} 1> ${OUTPUT_FILE} 2>&1 &
else
    echo "Search in foreground..."
    python3 search_dist.py ${SEARCH_ARGS}
fi
```
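For reference, with 1 node of 8 GPUs and both `--max_tp_deg` and `--max_pp_deg` set to 8, the search space includes parallelism degree combinations (dp, tp, pp) whose product equals the GPU count. This sketch only enumerates homogeneous degree combinations for illustration; the actual planner also searches micro-batch sizes, checkpointing, and per-layer strategies, so this is not the tool's real enumeration logic:

```shell
# Enumerate (dp, tp, pp) degree combinations for 8 GPUs.
# Illustrative only: the real search space is much richer.
NGPUS=8
for tp in 1 2 4 8; do
  for pp in 1 2 4 8; do
    if [ $(( NGPUS % (tp * pp) )) -eq 0 ]; then
      dp=$(( NGPUS / (tp * pp) ))
      echo "dp=${dp} tp=${tp} pp=${pp}"
    fi
  done
done
```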
The final search result reports a max throughput of 1.8973885326702808 samples/s:
![image](https://private-user-images.githubusercontent.com/30850807/340974310-83134fe7-31a5-46af-bcd2-0848d79aff68.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkzODA0MDYsIm5iZiI6MTcxOTM4MDEwNiwicGF0aCI6Ii8zMDg1MDgwNy8zNDA5NzQzMTAtODMxMzRmZTctMzFhNS00NmFmLWJjZDItMDg0OGQ3OWFmZjY4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjI2VDA1MzUwNlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTM1NjZhNzZiMWQ3NWNhY2Q2MmY4ZTc2NTlmMTRjZTA2Y2NkZDkxNzVmMGU2MTI4NDUxMmJjNTU3ODFhYzc0NTkmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.ESOP5zNt87YqMwV2TyImaRKDDt_VfKUlheL0w8SmZI0)
Then I ran scripts/train_dist.sh with the searched config file. The real execution time is about 6.7 s per iteration, which corresponds to a throughput of about 9.55 samples/s (the global batch size is 64). The prediction error seems large; is there anything wrong?
![image](https://private-user-images.githubusercontent.com/30850807/340974507-f1bfeb4a-4b71-49e7-90e9-f6b871b5a82d.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTkzODA0MDYsIm5iZiI6MTcxOTM4MDEwNiwicGF0aCI6Ii8zMDg1MDgwNy8zNDA5NzQ1MDctZjFiZmViNGEtNGI3MS00OWU3LTkwZTktZjZiODcxYjVhODJkLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjI2VDA1MzUwNlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTRjODNhYTQ2MjAwOWE3NmU5MWY2NjQ2ZjVmODU0MmRjZDkzYjMzNmRiMGNjNTAyNDYyZTZmYjJhNDkzZTk4ZDgmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.N8QFEPMlACU1eCdmNXrH1W9qwfqG6MjheA6AQO7Qhtc)
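The gap can be quantified directly from the numbers above (global batch size 64, measured step time roughly 6.7 s, predicted throughput 1.8973885326702808 samples/s):

```shell
# Compare observed vs. predicted throughput.
GLOBAL_BSZ=64
STEP_TIME=6.7                     # seconds per iteration, from the training log
PREDICTED=1.8973885326702808      # samples/s, from the search log

awk -v b=$GLOBAL_BSZ -v t=$STEP_TIME -v p=$PREDICTED 'BEGIN {
    obs = b / t
    printf "observed  = %.2f samples/s\n", obs   # about 9.55
    printf "predicted = %.2f samples/s\n", p
    printf "ratio     = %.1fx\n", obs / p        # about 5x faster than predicted
}'
```

So the measured run is roughly 5x faster than the cost model predicted, which is well outside normal modeling noise.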