Description
Currently tested on master: 78b0def
- `requirements-dev.txt` is missing the entry `-r requirements.txt` to install streamlit with the appropriate version:

```
-r requirements.txt
# dev
pytorch-ignite
torch
torchvision
jinja2
requests
# test
pytest
hypothesis
```
- Change text: "Those in the parenthesis are used in the generated code." -> "Names in the parentheses are variable names in the generated code." or something similar.
- Let's explicitly create the trainer in the CIFAR10 example to show how to write `training_step`.
- Let's add an AMP option.
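A rough sketch of how these two points could fit together (`use_amp` is a hypothetical config flag here, and `model`, `optimizer`, `loss_fn`, `device` are assumed to come from `initialize`):

```python
import torch
from ignite.engine import Engine

scaler = torch.cuda.amp.GradScaler(enabled=config.use_amp)

def training_step(engine, batch):
    model.train()
    x, y = batch[0].to(device), batch[1].to(device)
    optimizer.zero_grad()
    # forward pass runs in mixed precision only when AMP is enabled
    with torch.cuda.amp.autocast(enabled=config.use_amp):
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
    # GradScaler is a no-op when enabled=False, so one code path covers both cases
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

trainer = Engine(training_step)
```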
- Let's add an Error metric (to show how we can do metric arithmetic):

```python
accuracy_metric = Accuracy(device=device)
metrics = {
    'eval_accuracy': accuracy_metric,
    'eval_loss': Loss(loss_fn=loss_fn, device=device),
    'eval_error': (1.0 - accuracy_metric) * 100
}
```
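As a side note, the result of that arithmetic is itself a metric, so the whole dict attaches like usual (assuming the `eval_engine` mentioned below):

```python
for name, metric in metrics.items():
    metric.attach(eval_engine, name)
```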
- Let's change the output of `initialize` and also set up a LR scheduler:
```diff
- device, model, optimizer, loss_fn = initialize(config)
+ device = idist.device()
+ model, optimizer, loss_fn, lr_scheduler = initialize(config)
```
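To make the new signature concrete, a minimal sketch of `initialize` (the `Net` class, `config.lr`, and the SGD/StepLR choices are placeholders, not a proposal):

```python
import torch
import ignite.distributed as idist

def initialize(config):
    # idist.auto_* adapt the model/optimizer to the current distributed setup
    model = idist.auto_model(Net())
    optimizer = idist.auto_optim(
        torch.optim.SGD(model.parameters(), lr=config.lr, momentum=0.9)
    )
    loss_fn = torch.nn.CrossEntropyLoss().to(idist.device())
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
    return model, optimizer, loss_fn, lr_scheduler
```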
- Distributed option if used as a multiprocessing scheme: `python main.py` -> multiple child processes have/had a known issue with dataloaders: the first iteration of each epoch is very slow. To avoid that, let's prefer to tell the user to launch things with `torch.distributed.launch`.
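For reference, the recommended launch would then be `python -m torch.distributed.launch --nproc_per_node=<N> main.py` (with `<N>` the number of processes; the script also has to pick up the local rank the launcher passes, either via a `--local_rank` argument or, with `--use_env`, from the environment).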
- I think this code is useless to add to `main.py` if `exp_logger` is None:

```python
# --------------------------------
# setup common experiment loggers
# --------------------------------
exp_logger = setup_exp_logging(
    config=config,
    eval_engine=eval_engine,
    train_engine=train_engine,
    optimizer=optimizer,
    name=name
)
```
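What I'd suggest instead, as a minimal sketch (the `exp_logger` config attribute is hypothetical, just to illustrate the guard):

```python
exp_logger = None
if getattr(config, "exp_logger", None) is not None:
    # only wire up an experiment logger when the user actually configured one
    exp_logger = setup_exp_logging(
        config=config,
        eval_engine=eval_engine,
        train_engine=train_engine,
        optimizer=optimizer,
        name=name
    )
```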
- I'm a bit confused about this option: `eval_max_epochs` and its value of 2. It is something I've never seen before. I think we should follow standard practice and by default run once over the validation dataloader. Thoughts?
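For comparison, the standard pattern I have in mind (assuming the `train_engine`/`eval_engine` pair from above and a `val_dataloader`):

```python
from ignite.engine import Events

@train_engine.on(Events.EPOCH_COMPLETED)
def run_validation():
    # a single pass over the validation data; Engine.run defaults to max_epochs=1
    eval_engine.run(val_dataloader)
```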
- If possible, make the sidebar resizable from a minimum to a maximum width.