## Description

I'm quite frustrated because training is very, very slow on an RTX 3080. I am training on 500 CSV files. Any help would be appreciated.
## To Reproduce
```python
import os

import pandas as pd

from gluonts.dataset.common import ListDataset
from gluonts.torch import DeepAREstimator

# Base directory where the monthly folders are located
base_dir = '/media/cvpr/CM_1/coremax_cpu_usage/coremax_cpu/rnd'
# List of folder names
folders = ['2013-7', '2013-8', '2013-9']

# Initialize an empty DataFrame to collect all files
all_data = pd.DataFrame()

# Iterate over each folder and read each CSV file
for folder in folders:
    folder_path = os.path.join(base_dir, folder)
    for file in os.listdir(folder_path):
        if file.endswith('.csv'):
            file_path = os.path.join(folder_path, file)
            temp_df = pd.read_csv(file_path, delimiter=';')
            temp_df.columns = temp_df.columns.str.strip()  # Strip whitespace from column names
            all_data = pd.concat([all_data, temp_df], ignore_index=True)

print(all_data)

# Convert the millisecond timestamp to datetime and set it as the index
all_data['Timestamp'] = pd.to_datetime(all_data['Timestamp [ms]'], unit='ms')
all_data.set_index('Timestamp', inplace=True)

# Prepare the dataset for GluonTS
training_data = ListDataset([{
    "start": all_data.index[0],
    "target": all_data['CPU usage [MHZ]'].values,
    "feat_dynamic_real": all_data[
        ['CPU cores', 'Memory usage [KB]',
         'Disk read throughput [KB/s]', 'Disk write throughput [KB/s]',
         'Network received throughput [KB/s]', 'Network transmitted throughput [KB/s]']
    ].values.T,
}], freq="1min")  # Change "1min" to the actual frequency of your data

# Define the DeepAR estimator and train it once
estimator = DeepAREstimator(
    freq="1min",               # Change to your data's frequency
    prediction_length=12,      # Adjust based on how far you want to predict
    context_length=24,         # Should be at least as long as the prediction length
    num_feat_dynamic_real=6,   # Must match feat_dynamic_real, otherwise those features are ignored
    batch_size=64,
    trainer_kwargs={"max_epochs": 1, "accelerator": "gpu"},
)
predictor = estimator.train(training_data=training_data)
```
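One thing worth noting: the loop above concatenates all files into a single DataFrame, so the resulting `ListDataset` contains one long series whose timestamps are assumed to be contiguous at 1-minute frequency. If the 500 CSV files are actually independent series (e.g. one per machine), building one dataset entry per file may be what was intended. A minimal sketch, assuming each file has the same columns as above (the `feature_cols` and `entries` names are just for illustration):

```python
# Hypothetical helper names; columns are assumed identical across files.
feature_cols = [
    'CPU cores', 'Memory usage [KB]',
    'Disk read throughput [KB/s]', 'Disk write throughput [KB/s]',
    'Network received throughput [KB/s]', 'Network transmitted throughput [KB/s]',
]

entries = []
for folder in folders:
    folder_path = os.path.join(base_dir, folder)
    for file in os.listdir(folder_path):
        if not file.endswith('.csv'):
            continue
        df = pd.read_csv(os.path.join(folder_path, file), delimiter=';')
        df.columns = df.columns.str.strip()
        df['Timestamp'] = pd.to_datetime(df['Timestamp [ms]'], unit='ms')
        df.set_index('Timestamp', inplace=True)
        # One entry per file instead of one concatenated series
        entries.append({
            "start": df.index[0],
            "target": df['CPU usage [MHZ]'].values,
            "feat_dynamic_real": df[feature_cols].values.T,
        })

training_data = ListDataset(entries, freq="1min")
```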
## Error message or code output
No error is raised; training simply crawls. The progress bar reports roughly 0.01 batches per second:

Epoch 0: | | 3/? [08:49<00:00, 0.01it/s, v_num=22]
## Environment
- Operating system: Ubuntu 20.04
- Python version: 3.8.18
- GluonTS version: 0.14.3
- MXNet version: N/A (using the PyTorch backend)
(Add as much information about your environment as possible, e.g. dependencies versions.)
@khawar-islam what is the performance when running on CPU?
I'm not sure you can expect great performance from a DeepAR model (at least with default hyperparameters), since it is based on a recurrent neural network: its operations are sequential across time steps and cannot be parallelized, so GPU utilization will be extremely low.
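Given that the RNN itself is the bottleneck, the levers that usually help throughput are batch size and the data pipeline rather than anything CUDA-side. A minimal sketch, reusing the `training_data` built above and assuming `num_batches_per_epoch` and `cache_data` are available in your GluonTS release:

```python
from gluonts.torch import DeepAREstimator

estimator = DeepAREstimator(
    freq="1min",
    prediction_length=12,
    context_length=24,
    batch_size=256,             # larger batches keep the GPU busier per step
    num_batches_per_epoch=100,  # how many batches make up one "epoch"
    trainer_kwargs={"max_epochs": 1, "accelerator": "gpu"},
)

# cache_data=True keeps the transformed dataset in memory, so the pandas
# feature pipeline is not re-run on every pass (assumed available here).
predictor = estimator.train(training_data=training_data, cache_data=True)
```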