I was interested in your work at PPoPP'22; thank you for making the code open source. I tried to run the LSTM AN4 code, but I cannot achieve the results claimed in the paper (WER = 0.309 or 0.368); I only reach 0.46. I know I'm using a different environment. Perhaps you can give me some suggestions to improve the WER?
Here is the environment I use:
8× A100 GPUs within one server
Horovod 0.22.1
Here are the parameters I use:

horovodrun -np 8 python horovod_trainer.py --dnn lstman4 --dataset an4 --max-epochs 1000 --batch-size 2 --nworkers 8 --data-dir ./audio_data --lr 0.001 --nwpernode 8 --nsteps-update 1
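For context on these flags: assuming synchronous data-parallel training (the usual Horovod setup), each worker processes its own mini-batch per step, so the effective global batch size is the per-worker batch size times the number of workers times the accumulation steps. The variable names below are illustrative, not taken from the repository:

```python
# Illustrative arithmetic for the command-line flags above; names are
# hypothetical, not identifiers from horovod_trainer.py.

per_worker_batch_size = 2   # --batch-size 2
nworkers = 8                # --nworkers 8
nsteps_update = 1           # --nsteps-update 1 (no gradient accumulation)

# With synchronous data parallelism, gradients are averaged across
# workers, so one optimizer step sees this many samples in total:
effective_batch_size = per_worker_batch_size * nworkers * nsteps_update
print(effective_batch_size)  # 16
```

So even with --batch-size 2, the optimizer effectively sees 16 samples per update, which may matter when comparing against single-GPU results from the paper.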
In addition, I also adjusted the learning rate decay rate (in dl_trainer.py, _Adjust_Learning_Rate_LSTMan4()). The original value of 1.01 may not be suitable for my environment, so I changed it to 1.005.
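To illustrate why this single constant matters so much: assuming _Adjust_Learning_Rate_LSTMan4() divides the learning rate by the annealing factor once per epoch (a common pattern in deepspeech.pytorch-style trainers; I have not verified the exact form in dl_trainer.py), the two factors diverge enormously over 1000 epochs:

```python
# Hypothetical sketch of per-epoch learning-rate annealing by a
# constant factor; the real _Adjust_Learning_Rate_LSTMan4() may differ.

def lr_at_epoch(base_lr, anneal_factor, epoch):
    """Learning rate after dividing by anneal_factor once per epoch."""
    return base_lr / (anneal_factor ** epoch)

base_lr = 0.001  # matches --lr 0.001
print(lr_at_epoch(base_lr, 1.01, 1000))   # ~4.8e-8  (decayed ~21,000x)
print(lr_at_epoch(base_lr, 1.005, 1000))  # ~6.8e-6  (decayed ~150x)
```

Under this assumption, 1.005 keeps the learning rate roughly two orders of magnitude higher at epoch 1000 than 1.01 does, so the choice of factor effectively sets how early training "freezes".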
Thank you for reading this. Do you have any suggestions?
Thank you for your reply. I tried batch sizes 4 and 8, but the results were even worse (WER = 0.73 and 0.92, respectively). I don't know why, but this model seems to prefer the smaller batch size.