## Learn Template Tricks

### Override any config parameter from command line

```bash
python train.py trainer.max_epochs=20 model.optimizer.lr=1e-4
```

**Note**: You can also add new parameters with the `+` sign:

```bash
python train.py +model.new_param="owo"
```
### Train on CPU, GPU, multi-GPU and TPU

```bash
# train on CPU
python train.py trainer=cpu

# train on 1 GPU
python train.py trainer=gpu

# train on TPU
python train.py +trainer.tpu_cores=8

# train with DDP (Distributed Data Parallel) (4 GPUs)
python train.py trainer=ddp trainer.devices=4

# train with DDP (Distributed Data Parallel) (8 GPUs, 2 nodes)
python train.py trainer=ddp trainer.devices=4 trainer.num_nodes=2

# simulate DDP on CPU processes
python train.py trainer=ddp_sim trainer.devices=2

# accelerate training on Mac (Apple MPS)
python train.py trainer=mps
```
### Train with mixed precision

```bash
# train with PyTorch-native automatic mixed precision (AMP)
python train.py trainer=gpu +trainer.precision=16
```
### Train model with any logger available in PyTorch Lightning, like W&B or TensorBoard

**Note**: Lightning provides convenient integrations with the most popular logging frameworks. Learn more [here](#experiment-tracking).

**Note**: Using wandb requires you to [set up an account](https://www.wandb.com/) first. After that, just complete the config as below:

```yaml
# set project and entity names in `configs/logger/wandb`
wandb:
  project: "your_project_name"
  entity: "your_wandb_team_name"
```

```bash
# train model with Weights&Biases (link to wandb dashboard should appear in the terminal)
python train.py logger=wandb
```

**Note**: Click [here](https://wandb.ai/hobglob/template-dashboard/) to see an example wandb dashboard generated with this template.
### Train model with chosen experiment config

```bash
python train.py experiment=example
```

**Note**: Experiment configs are placed in [configs/experiment/](configs/experiment/).
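For illustration, a minimal experiment config could look like the sketch below. The config group names (`data`, `model`, `trainer`) and parameter values are assumptions about the project layout, not the template's exact `example.yaml`:

```yaml
# configs/experiment/example.yaml (hypothetical sketch)
# @package _global_

# start from these config groups, then override selected parameters below
defaults:
  - override /data: mnist
  - override /model: mnist
  - override /trainer: default

tags: ["mnist", "example"]

trainer:
  max_epochs: 10

model:
  optimizer:
    lr: 0.002
```

Running `python train.py experiment=example` then composes this file on top of the main config.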
### Attach some callbacks to run

```bash
python train.py callbacks=default
```

**Note**: Callbacks can be used for things such as model checkpointing, early stopping and [many more](https://pytorch-lightning.readthedocs.io/en/latest/extensions/callbacks.html#built-in-callbacks).

**Note**: Callbacks configs are placed in [configs/callbacks/](configs/callbacks/).
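As a rough sketch (the monitored metric, paths and parameter values are illustrative assumptions, not the template's exact defaults), a callbacks config wiring up Lightning's built-in `ModelCheckpoint` and `EarlyStopping` could look like:

```yaml
# configs/callbacks/default.yaml (hypothetical sketch)
model_checkpoint:
  _target_: pytorch_lightning.callbacks.ModelCheckpoint
  dirpath: checkpoints/   # where to save checkpoint files
  monitor: "val/acc"      # metric logged by the LightningModule
  mode: "max"             # higher metric value is better
  save_top_k: 1           # keep only the best checkpoint

early_stopping:
  _target_: pytorch_lightning.callbacks.EarlyStopping
  monitor: "val/acc"
  mode: "max"
  patience: 10            # epochs to wait for improvement before stopping
```

Each top-level entry is instantiated via its `_target_` and passed to the `Trainer`'s `callbacks` argument.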
### Use different tricks available in PyTorch Lightning

```bash
# gradient clipping may be enabled to avoid exploding gradients
python train.py +trainer.gradient_clip_val=0.5

# run validation loop 4 times during a training epoch
python train.py +trainer.val_check_interval=0.25

# accumulate gradients over 10 batches
python train.py +trainer.accumulate_grad_batches=10

# terminate training after 12 hours
python train.py +trainer.max_time="00:12:00:00"
```

**Note**: PyTorch Lightning provides [40+ useful trainer flags](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags).
### Easily debug

```bash
# runs 1 epoch in default debugging mode
# changes logging directory to `logs/debugs/...`
# sets level of all command line loggers to 'DEBUG'
# enforces debug-friendly configuration
python train.py debug=default

# runs 1 train, val and test loop, using only 1 batch
python train.py debug=fdr

# print execution time profiling
python train.py debug=profiler

# try overfitting to 1 batch
python train.py debug=overfit

# raise exception if there are any numerical anomalies in tensors, like NaN or +/-inf
python train.py +trainer.detect_anomaly=true

# use only 20% of the data
python train.py +trainer.limit_train_batches=0.2 \
  +trainer.limit_val_batches=0.2 +trainer.limit_test_batches=0.2
```

**Note**: Visit [configs/debug/](configs/debug/) for different debugging configs.
### Resume training from checkpoint

```bash
python train.py ckpt_path="/path/to/ckpt/name.ckpt"
```

**Note**: Checkpoint can be either a path or a URL.

**Note**: Currently loading a ckpt doesn't resume the logger experiment, but it will be supported in a future Lightning release.
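Under the hood this maps onto Lightning's standard checkpoint restore. A minimal sketch of a Hydra entrypoint passing the value through (assuming each config group defines a `_target_`; this is not the template's exact `train.py`):

```python
import hydra
from omegaconf import DictConfig


@hydra.main(version_base="1.3", config_path="configs", config_name="train")
def main(cfg: DictConfig) -> None:
    # instantiate objects declared via `_target_` keys in the config
    datamodule = hydra.utils.instantiate(cfg.data)
    model = hydra.utils.instantiate(cfg.model)
    trainer = hydra.utils.instantiate(cfg.trainer)

    # when `ckpt_path` is set, Lightning restores model weights, optimizer
    # state and the epoch/step counters before training continues
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))


if __name__ == "__main__":
    main()
```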
### Evaluate checkpoint on test dataset

```bash
python eval.py ckpt_path="/path/to/ckpt/name.ckpt"
```

**Note**: Checkpoint can be either a path or a URL.
### Create a sweep over hyperparameters

```bash
# this will run 6 experiments one after the other,
# each with a different combination of batch size and learning rate
python train.py -m data.batch_size=32,64,128 model.lr=0.001,0.0005
```

**Note**: Hydra composes configs lazily at job launch time. If you change code or configs after launching a job/sweep, the final composed configs might be impacted.
### Create a sweep over hyperparameters with Optuna

```bash
# this will run the hyperparameter search defined in `configs/hparams_search/mnist_optuna.yaml`
# over the chosen experiment config
python train.py -m hparams_search=mnist_optuna experiment=example
```

**Note**: Using the [Optuna Sweeper](https://hydra.cc/docs/next/plugins/optuna_sweeper) doesn't require you to add any boilerplate to your code; everything is defined in a [single config file](configs/hparams_search/mnist_optuna.yaml).

**Warning**: Optuna sweeps are not failure-resistant (if one job crashes, the whole sweep crashes).
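To give a feel for that single config file, a stripped-down version might look like the sketch below, following the Hydra Optuna Sweeper plugin's schema (the metric name and search space here are illustrative, not the template's exact file):

```yaml
# configs/hparams_search/mnist_optuna.yaml (abridged sketch)
# @package _global_

defaults:
  - override /hydra/sweeper: optuna

# metric to optimize; it must be logged during training
optimized_metric: "val/acc_best"

hydra:
  sweeper:
    direction: maximize
    n_trials: 20
    sampler:
      _target_: optuna.samplers.TPESampler
      seed: 1234
    # search space: each key is a config parameter to sweep over
    params:
      model.optimizer.lr: interval(0.0001, 0.1)
      data.batch_size: choice(32, 64, 128)
```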
### Execute all experiments from folder

```bash
python train.py -m 'experiment=glob(*)'
```

**Note**: Hydra provides special syntax for controlling the behavior of multiruns. Learn more [here](https://hydra.cc/docs/next/tutorials/basic/running_your_app/multi-run). The command above executes all experiments from [configs/experiment/](configs/experiment/).
### Execute run for multiple different seeds

```bash
python train.py -m seed=1,2,3,4,5 trainer.deterministic=True logger=csv tags=["benchmark"]
```

**Note**: `trainer.deterministic=True` makes PyTorch more deterministic but can impact performance.
### Use Hydra tab completion

**Note**: Hydra allows you to autocomplete config argument overrides in the shell as you write them, by pressing the `tab` key. Read the [docs](https://hydra.cc/docs/tutorials/basic/running_your_app/tab_completion).
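Per the Hydra docs, completion is installed per shell session; for bash it looks like this (zsh and fish are supported analogously):

```bash
# register Hydra tab completion for train.py in the current bash session
eval "$(python train.py -sc install=bash)"
```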
### Run tests

```bash
# run all tests
pytest

# run tests from specific file
pytest tests/test_train.py

# run all tests except the ones marked as slow
pytest -k "not slow"
```
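The `slow` keyword refers to a pytest marker on the tests themselves; a minimal sketch of marking a test this way (the test name and body are illustrative):

```python
import pytest


@pytest.mark.slow
def test_full_training_run():
    """Long-running test that `pytest -k "not slow"` deselects."""
    # a real test would launch a full training loop here
    assert True
```

To avoid unknown-marker warnings, custom markers like `slow` should be registered, e.g. in `pyproject.toml` under `[tool.pytest.ini_options]`.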
### Use tags

Each experiment should be tagged in order to easily filter them across files or in logger UI:

```bash
python train.py tags=["mnist","experiment_X"]
```

**Note**: You might need to escape the bracket characters in your shell: `python train.py tags=\["mnist","experiment_X"\]`.

If no tags are provided, you will be asked to input them from the command line:

```
>> python train.py tags=[]
[2022-07-11 15:40:09,358][src.utils.utils][INFO] - Enforcing tags! <cfg.extras.enforce_tags=True>
[2022-07-11 15:40:09,359][src.utils.rich_utils][WARNING] - No tags provided in config. Prompting user to input tags...
Enter a list of comma separated tags (dev):
```

If no tags are provided for a multirun, an error will be raised:

```
>> python train.py -m +x=1,2,3 tags=[]
ValueError: Specify tags before launching a multirun!
```