diff --git a/docs/en/user_guides/config.md b/docs/en/user_guides/config.md
index 46ac1e6a9..5ebe9ccbb 100644
--- a/docs/en/user_guides/config.md
+++ b/docs/en/user_guides/config.md
@@ -1 +1,710 @@
 # Config
+
+MMOCR mainly uses Python files as configuration files. The design of its configuration file system integrates the ideas of modularity and inheritance to facilitate various experiments.
+
+## Common Usage
+
+```{note}
+This section is recommended to be read together with the primary usage in {external+mmengine:doc}`MMEngine: Config `.
+```
+
+The three most common operations in MMOCR are: inheriting configuration files, referencing `_base_` variables, and modifying `_base_` variables. MMEngine's Config provides two syntaxes for inheriting and modifying `_base_`: one that works for Python, JSON, and YAML alike, and one that is available only in Python configuration files. In MMOCR, we **prefer the Python-only syntax**, so it is used as the basis for the rest of this section.
+
+`configs/textdet/dbnet/dbnet_resnet18_fpnc_1200e_icdar2015.py` is used below as an example to illustrate these three operations.
+
+```Python
+_base_ = [
+    '_base_dbnet_resnet18_fpnc.py',
+    '../_base_/datasets/icdar2015.py',
+    '../_base_/default_runtime.py',
+    '../_base_/schedules/schedule_sgd_1200e.py',
+]
+
+# dataset settings
+ic15_det_train = _base_.ic15_det_train
+ic15_det_train.pipeline = _base_.train_pipeline
+ic15_det_test = _base_.ic15_det_test
+ic15_det_test.pipeline = _base_.test_pipeline
+
+train_dataloader = dict(
+    batch_size=16,
+    num_workers=8,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    dataset=ic15_det_train)
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=4,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=ic15_det_test)
+```
+
+### Configuration Inheritance
+
+Configuration files support an inheritance mechanism: a configuration file A can use another configuration file B as its base and directly inherit all of its fields, which avoids a lot of copy-pasting.
+
+In `dbnet_resnet18_fpnc_1200e_icdar2015.py`, you can see that:
+
+```Python
+_base_ = [
+    '_base_dbnet_resnet18_fpnc.py',
+    '../_base_/datasets/icdar2015.py',
+    '../_base_/default_runtime.py',
+    '../_base_/schedules/schedule_sgd_1200e.py',
+]
+```
+
+The above statement reads all the base configuration files in the list, and all of their fields are loaded into `dbnet_resnet18_fpnc_1200e_icdar2015.py`. We can inspect the structure of the parsed configuration by running the following statements in a Python interpreter:
+
+```Python
+from mmengine import Config
+db_config = Config.fromfile('configs/textdet/dbnet/dbnet_resnet18_fpnc_1200e_icdar2015.py')
+print(db_config)
+```
+
+As the output shows, the parsed configuration contains all the fields and information from the base configurations.
+
+```{note}
+Variables with the same name must not exist in different `_base_` configuration files.
+```
+
+### `_base_` Variable References
+
+Sometimes we need to reference fields in the `_base_` configuration directly to avoid duplicate definitions. Suppose we want to access the variable `pseudo` defined in the `_base_` configuration; we can do so directly via `_base_.pseudo`.
+
+This syntax is used extensively in MMOCR's configurations: the dataset and pipeline configurations for each model are referenced from the `_base_` configuration. For example,
+
+```Python
+ic15_det_train = _base_.ic15_det_train
+# ...
+train_dataloader = dict(
+    # ...
+    dataset=ic15_det_train)
+```
+
+### `_base_` Variable Modification
+
+In MMOCR, different algorithms usually apply different pipelines to different datasets, so modifying the `pipeline` of a dataset is a frequent scenario. There are also many scenarios where you need to modify other variables in the `_base_` configuration, for example, modifying the training strategy of an algorithm or replacing some of its modules (backbone, etc.). Users can directly modify the referenced `_base_` variables using Python syntax. For dictionaries, we also provide class-attribute-style access to modify their contents directly.
+
+1. Dictionary
+
+   Here is an example of modifying the `pipeline` in a dataset.
+
+   The dictionary can be modified using Python syntax:
+
+   ```Python
+   # Get the dataset in _base_
+   ic15_det_train = _base_.ic15_det_train
+   # You can modify the variables directly with Python's update
+   ic15_det_train.update(pipeline=_base_.train_pipeline)
+   ```
+
+   It can also be modified in the same way as a Python class attribute:
+
+   ```Python
+   # Get the dataset in _base_
+   ic15_det_train = _base_.ic15_det_train
+   # Modify it like a class attribute
+   ic15_det_train.pipeline = _base_.train_pipeline
+   ```
+
+2. List
+
+   Suppose the variable `pseudo = [1, 2, 3]` in the `_base_` configuration needs to be modified to `[1, 2, 4]`:
+
+   ```Python
+   # pseudo.py
+   pseudo = [1, 2, 3]
+   ```
+
+   It can be directly overridden as:
+
+   ```Python
+   _base_ = ['pseudo.py']
+   pseudo = [1, 2, 4]
+   ```
+
+   Or the list can be modified using Python syntax:
+
+   ```Python
+   _base_ = ['pseudo.py']
+   pseudo = _base_.pseudo
+   pseudo[2] = 4
+   ```
+
+### Command Line Modification
+
+Sometimes we only want to modify part of the configuration without changing the configuration file itself. For example, if you want to change the learning rate during an experiment but do not want to write a new configuration file, you can pass parameters on the command line to override the relevant settings.
+
+We can pass `--cfg-options` on the command line and modify the corresponding fields directly with the arguments that follow it. For example, we can run the following command to temporarily change the learning rate for this training session:
+
+```Shell
+python tools/train.py example.py --cfg-options optim_wrapper.optimizer.lr=1
+```
+
+For more detailed usage, refer to {external+mmengine:doc}`MMEngine: Command Line Modification `.
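+
+`--cfg-options` accepts any number of `key=value` pairs, where nested fields are addressed with dots. As a hedged sketch (the values below are purely illustrative), several fields can be overridden in a single invocation:
+
+```Shell
+python tools/train.py example.py --cfg-options \
+    optim_wrapper.optimizer.lr=0.001 \
+    train_dataloader.batch_size=8
+```
+
+## Configuration Content
+
+With config files and the Registry, MMOCR can modify the training parameters as well as the model configuration without touching the code. Specifically, users can customize the following modules in the configuration file: environment configuration, hook configuration, log configuration, training strategy configuration, data-related configuration, model-related configuration, evaluation configuration, and visualization configuration.
+
+This document will take the text detection algorithm `DBNet` and the text recognition algorithm `CRNN` as examples to introduce the contents of Config in detail.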
+
+### Environment Configuration
+
+```Python
+default_scope = 'mmocr'
+env_cfg = dict(
+    cudnn_benchmark=True,
+    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
+    dist_cfg=dict(backend='nccl'))
+randomness = dict(seed=None)
+```
+
+There are three main components:
+
+- Set the default `scope` of all registries to `mmocr`, ensuring that all modules are searched first in the `MMOCR` codebase. If a module does not exist there, the search continues in the upstream libraries `MMEngine` and `MMCV`; see {external+mmengine:doc}`MMEngine: Registry ` for more details.
+
+- `env_cfg` configures the distributed environment; see {external+mmengine:doc}`MMEngine: Runner ` for more details.
+
+- `randomness`: settings that make the experiment as reproducible as possible, such as the random seed and deterministic options. See {external+mmengine:doc}`MMEngine: Runner ` for more details.
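+
+For instance, a minimal sketch of pinning randomness for a repeatable run; `seed` and `deterministic` are the commonly used fields here, and the seed value is illustrative:
+
+```Python
+randomness = dict(
+    seed=42,             # fix the random seed
+    deterministic=True)  # use deterministic cuDNN kernels, trading speed for reproducibility
+```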
+
+### Hook Configuration
+
+Hooks are divided into two main parts: default hooks, which are required for all tasks to run, and custom hooks, which generally serve specific algorithms or tasks (there are no custom hooks in MMOCR so far).
+
+```Python
+default_hooks = dict(
+    timer=dict(type='IterTimerHook'), # Time recording, including data time as well as model inference time
+    logger=dict(type='LoggerHook', interval=1), # Collect logs from different components
+    param_scheduler=dict(type='ParamSchedulerHook'), # Update some hyper-parameters of the optimizer
+    checkpoint=dict(type='CheckpointHook', interval=1), # Save checkpoints; `interval` controls the saving interval
+    sampler_seed=dict(type='DistSamplerSeedHook'), # Set the data-loading sampler seed for distributed training
+    sync_buffer=dict(type='SyncBuffersHook'), # Synchronize buffers in distributed training
+    visualization=dict( # Visualize the results of val and test
+        type='VisualizationHook',
+        interval=1,
+        enable=False,
+        show=False,
+        draw_gt=False,
+        draw_pred=False))
+custom_hooks = []
+```
+
+Here is a brief description of a few hooks whose parameters are changed frequently. For general modification methods, refer to the modification approaches described above.
+
+- `LoggerHook`: Used to configure the behavior of the logger. For example, by modifying `interval` you can control the log printing interval, so that the log is printed once every `interval` iterations. For more settings refer to the [LoggerHook API](mmengine.hooks.LoggerHook).
+
+- `CheckpointHook`: Used to configure checkpoint-related behavior, such as saving the optimal and/or latest weights. You can also modify `interval` to control the checkpoint saving interval. More settings can be found in the [CheckpointHook API](mmengine.hooks.CheckpointHook).
+
+- `VisualizationHook`: Used to configure visualization-related behavior, such as visualizing predicted results during validation or testing. **Default is off**. This hook also depends on the [Visualization Configuration](#visualization-configuration). You can refer to [Visualizer](visualization.md) for more details. For more configuration, you can refer to the [VisualizationHook API](mmocr.engine.hooks.VisualizationHook).
+
+If you want to learn more about the configuration of the default hooks and their functions, you can refer to {external+mmengine:doc}`MMEngine: Hooks `.
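+
+As a sketch, overriding a default hook only requires specifying the fields of interest; `save_best` and `max_keep_ckpts` are standard `CheckpointHook` options, and the values below are illustrative:
+
+```Python
+default_hooks = dict(
+    logger=dict(type='LoggerHook', interval=100),  # print logs every 100 iterations
+    checkpoint=dict(
+        type='CheckpointHook',
+        interval=1,           # save a checkpoint every epoch
+        save_best='auto',     # additionally keep the best checkpoint
+        max_keep_ckpts=3))    # keep at most the 3 latest checkpoints on disk
+```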
+
+### Log Configuration
+
+This section is mainly used to configure the log level and the log processor.
+
+```Python
+log_level = 'INFO' # Logging level
+log_processor = dict(type='LogProcessor',
+                     window_size=10,
+                     by_epoch=True)
+```
+
+- The logging severity level is the same as that of {external+python:doc}`Python: logging `.
+
+- The log processor is mainly used to control the output format; its detailed functionality can be found in {external+mmengine:doc}`MMEngine: logging `.
+
+  - `by_epoch=True` indicates that logs are output by epoch, which needs to be consistent with the `type='EpochBasedTrainLoop'` setting in `train_cfg`. For example, if you want to output logs by iteration, you need to set `by_epoch=False` in `log_processor` and `type='IterBasedTrainLoop'` in `train_cfg`.
+
+  - `window_size` indicates the smoothing window of the losses, i.e. each reported loss is averaged over the last `window_size` iterations. The loss values finally printed in the logger are these averaged losses.
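+
+As a sketch, an iteration-based setup pairs the two settings like this (the iteration counts are illustrative):
+
+```Python
+log_processor = dict(type='LogProcessor', window_size=10, by_epoch=False)
+train_cfg = dict(type='IterBasedTrainLoop', max_iters=100000, val_interval=1000)
+```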
+
+### Training Strategy Configuration
+
+This section mainly contains the optimizer settings, the learning rate schedule, and the `Loop` settings.
+
+Training strategies usually differ between tasks (text detection, text recognition, key information extraction). Here we explain the example configuration of `CRNN`, a text recognition model.
+
+```Python
+# optimizer
+optim_wrapper = dict(
+    type='OptimWrapper', optimizer=dict(type='Adadelta', lr=1.0))
+param_scheduler = [dict(type='ConstantLR', factor=1.0)]
+train_cfg = dict(type='EpochBasedTrainLoop',
+                 max_epochs=5, # train epochs
+                 val_interval=1) # val interval
+val_cfg = dict(type='ValLoop')
+test_cfg = dict(type='TestLoop')
+```
+
+- `optim_wrapper`: contains two main parts, the optimizer wrapper (OptimWrapper) and the optimizer (Optimizer). Detailed usage information can be found in {external+mmengine:doc}`MMEngine: Optimizer Wrapper `.
+
+  - The optimizer wrapper supports different training strategies, including mixed-precision training (AMP), gradient accumulation, and gradient clipping; see the sketch after this list.
+
+  - All PyTorch optimizers are supported in the optimizer settings. All supported optimizers are listed in the {external+torch:ref}`PyTorch Optimizer List `.
+
+- `param_scheduler`: the learning rate tuning strategy. It supports most of the learning rate schedulers in PyTorch, such as `ExponentialLR`, `LinearLR`, `StepLR`, `MultiStepLR`, etc., and they are used in much the same way; see the [scheduler interface](mmengine.optim.scheduler), and find more features in the {external+mmengine:doc}`MMEngine: Optimizer Parameter Tuning Strategy `.
+
+- `train/val/test_cfg`: the execution flow of the task. MMEngine provides four kinds of flows: `EpochBasedTrainLoop`, `IterBasedTrainLoop`, `ValLoop`, and `TestLoop`. More can be found in the {external+mmengine:doc}`MMEngine: loop controller `.
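+
+As a hedged sketch of these optimizer wrapper strategies combined (`AmpOptimWrapper`, `accumulative_counts`, and `clip_grad` are the corresponding MMEngine options; the values are illustrative):
+
+```Python
+optim_wrapper = dict(
+    type='AmpOptimWrapper',  # enables mixed-precision (AMP) training
+    optimizer=dict(type='Adadelta', lr=1.0),
+    accumulative_counts=4,   # accumulate gradients over 4 iterations before stepping
+    clip_grad=dict(max_norm=5, norm_type=2))  # clip gradients by norm
+```
+
+### Data-related Configuration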
+
+#### Dataset Configuration
+
+It mainly covers two parts:
+
+- The location of the dataset(s), including images and annotation files.
+
+- Data-augmentation-related configurations. In the OCR domain, data augmentation is usually strongly associated with the model.
+
+More parameter configurations can be found in [Data Base Class](#TODO).
+
+The naming convention for dataset fields in MMOCR is:
+
+```Python
+{dataset}_{task}_{train/val/test} = dict(...)
+```
+
+- dataset: see [dataset abbreviations](#TODO)
+
+- task: `det` (text detection), `rec` (text recognition), `kie` (key information extraction)
+
+- train/val/test: the dataset split.
+
+For example, for text recognition tasks, Syn90k is used as the training set, while icdar2013 and icdar2015 serve as the test sets. They are configured as follows:
+
+```Python
+# text recognition dataset configuration
+mj_rec_train = dict(
+    type='OCRDataset',
+    data_root='data/rec/Syn90k/',
+    data_prefix=dict(img_path='mnt/ramdisk/max/90kDICT32px'),
+    ann_file='train_labels.json',
+    test_mode=False,
+    pipeline=None)
+
+ic13_rec_test = dict(
+    type='OCRDataset',
+    data_root='data/rec/icdar_2013/',
+    data_prefix=dict(img_path='Challenge2_Test_Task3_Images/'),
+    ann_file='test_labels.json',
+    test_mode=True,
+    pipeline=None)
+
+ic15_rec_test = dict(
+    type='OCRDataset',
+    data_root='data/rec/icdar_2015/',
+    data_prefix=dict(img_path='ch4_test_word_images_gt/'),
+    ann_file='test_labels.json',
+    test_mode=True,
+    pipeline=None)
+```
+
+#### Data Pipeline Configuration
+
+In MMOCR, dataset construction and data preparation are decoupled from each other. In other words, dataset classes such as `OCRDataset` are responsible for reading and parsing annotation files, while data transforms further implement data loading, data augmentation, data formatting, and other related functions.
+
+In general, different augmentation strategies are used for training and testing, so there is usually a training pipeline (`train_pipeline`) and a testing pipeline (`test_pipeline`). More information can be found in [Data Transforms](../basic_concepts/transforms.md).
+
+- The data augmentation flow of the training pipeline is usually: data loading (LoadImageFromFile) -> annotation loading (LoadXXXAnnotations) -> data augmentation -> data formatting (PackXXXInputs).
+
+- The data augmentation flow of the testing pipeline is usually: data loading (LoadImageFromFile) -> data augmentation -> annotation loading (LoadXXXAnnotations) -> data formatting (PackXXXInputs).
+
+Due to the specificity of the OCR task, different models use different data augmentation techniques, and even the same model can use different data augmentation strategies on different datasets. Take `CRNN` as an example:
+
+```Python
+# Data Augmentation
+file_client_args = dict(backend='disk')
+train_pipeline = [
+    dict(
+        type='LoadImageFromFile',
+        color_type='grayscale',
+        file_client_args=file_client_args,
+        ignore_empty=True,
+        min_size=5),
+    dict(type='LoadOCRAnnotations', with_text=True),
+    dict(type='Resize', scale=(100, 32), keep_ratio=False),
+    dict(
+        type='PackTextRecogInputs',
+        meta_keys=('img_path', 'ori_shape', 'img_shape', 'valid_ratio'))
+]
+test_pipeline = [
+    dict(
+        type='LoadImageFromFile',
+        color_type='grayscale',
+        file_client_args=file_client_args),
+    dict(
+        type='RescaleToHeight',
+        height=32,
+        min_width=32,
+        max_width=None,
+        width_divisor=16),
+    dict(type='LoadOCRAnnotations', with_text=True),
+    dict(
+        type='PackTextRecogInputs',
+        meta_keys=('img_path', 'ori_shape', 'img_shape', 'valid_ratio'))
+]
+```
+
+#### Dataloader Configuration
+
+This section mainly provides the configuration needed to construct the dataloader; see {external+torch:doc}`PyTorch DataLoader ` for more tutorials.
+
+```Python
+# Dataloader
+train_dataloader = dict(
+    batch_size=64,
+    num_workers=8,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    dataset=dict(
+        type='ConcatDataset',
+        datasets=[mj_rec_train],
+        pipeline=train_pipeline))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=4,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type='ConcatDataset',
+        datasets=[ic13_rec_test, ic15_rec_test],
+        pipeline=test_pipeline))
+test_dataloader = val_dataloader
+```
+
+### Model-related Configuration
+
+#### Network Configuration
+
+This section configures the network architecture. Different algorithmic tasks use different network architectures. More information about network architectures can be found in [structures](../basic_concepts/structures.md).
+
+##### Text Detection
+
+A text detection model consists of several parts:
+
+- `data_preprocessor`: [data_preprocessor](mmocr.models.textdet.data_preprocessors.TextDetDataPreprocessor)
+- `backbone`: backbone network configuration
+- `neck`: neck network configuration
+- `det_head`: detection head network configuration
+  - `module_loss`: module loss configuration
+  - `postprocessor`: postprocessor configuration
+
+We present the model configuration of text detection using DBNet as an example:
+
+```Python
+model = dict(
+    type='DBNet',
+    data_preprocessor=dict(
+        type='TextDetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='mmdet.ResNet',
+        depth=18,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=-1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'),
+        norm_eval=False,
+        style='caffe'),
+    neck=dict(
+        type='FPNC', in_channels=[64, 128, 256, 512], lateral_channels=256),
+    det_head=dict(
+        type='DBHead',
+        in_channels=256,
+        module_loss=dict(type='DBModuleLoss'),
+        postprocessor=dict(type='DBPostprocessor', text_repr_type='quad')))
+```
+
+##### Text Recognition
+
+A text recognition model mainly contains:
+
+- `data_preprocessor`: [data preprocessor configuration](mmocr.models.textrecog.data_preprocessors.TextRecogDataPreprocessor)
+- `preprocessor`: network preprocessor configuration, e.g. TPS
+- `backbone`: backbone configuration
+- `encoder`: encoder configuration
+- `decoder`: decoder configuration
+  - `module_loss`: decoder module loss configuration
+  - `postprocessor`: decoder postprocessor configuration
+  - `dictionary`: dictionary configuration
+
+Using CRNN as an example:
+
+```Python
+# model
+model = dict(
+    type='CRNN',
+    data_preprocessor=dict(
+        type='TextRecogDataPreprocessor', mean=[127], std=[127]),
+    preprocessor=None,
+    backbone=dict(type='VeryDeepVgg', leaky_relu=False, input_channels=1),
+    encoder=None,
+    decoder=dict(
+        type='CRNNDecoder',
+        in_channels=512,
+        rnn_flag=True,
+        module_loss=dict(type='CTCModuleLoss', letter_case='lower'),
+        postprocessor=dict(type='CTCPostProcessor'),
+        dictionary=dict(
+            type='Dictionary',
+            dict_file='dicts/lower_english_digits.txt',
+            with_padding=True)))
+```
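+
+Thanks to the inheritance mechanism described earlier, swapping a module only requires overriding the fields that change. As a hedged sketch (the override values below are assumptions for illustration, not a released MMOCR config), a DBNet variant with a ResNet-50 backbone could be written as:
+
+```Python
+_base_ = ['_base_dbnet_resnet18_fpnc.py']
+
+model = dict(
+    backbone=dict(
+        depth=50,
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    # ResNet-50 stages output more channels than ResNet-18, so the
+    # neck input channels must be updated accordingly.
+    neck=dict(in_channels=[256, 512, 1024, 2048]))
+```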
+
+#### Checkpoint Loading Configuration
+
+The model weights in a checkpoint file can be loaded via the `load_from` field: simply set it to the path of the checkpoint file.
+
+You can also resume training by setting `resume=True`, which loads the training status information from the checkpoint. When both `load_from` and `resume=True` are set, MMEngine loads the training state from the checkpoint file at the `load_from` path.
+
+If only `resume=True` is set, the runner tries to find and read the latest checkpoint file in the `work_dir` folder.
+
+```Python
+load_from = None # Path of the checkpoint to load
+resume = False # Whether to resume training
+```
+
+More can be found in {external+mmengine:doc}`MMEngine: Load Weights or Recover Training ` and [OCR Advanced Tips - Resume Training from Checkpoints](train_test.md#resume-training-from-a-checkpoint).
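+
+For example, two common setups (the checkpoint path is illustrative):
+
+```Python
+# Fine-tune from existing weights without restoring training state:
+load_from = 'work_dirs/dbnet/epoch_1200.pth'
+resume = False
+
+# Or resume an interrupted run, restoring optimizer state and epoch count
+# from the latest checkpoint found under `work_dir`:
+# load_from = None
+# resume = True
+```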
+
+### Evaluation Configuration
+
+In model validation and model testing, quantitative measurement of model accuracy is often required. MMOCR performs this function by means of `Metric` and `Evaluator`. For more information, please refer to {external+mmengine:doc}`MMEngine: Evaluation ` and [Evaluation](../basic_concepts/evaluation.md).
+
+#### Evaluator
+
+The evaluator is mainly used to manage multiple datasets and multiple `Metric`s. For the single- and multi-dataset cases, there are single-dataset and multi-dataset evaluators, both of which can manage multiple `Metric`s.
+
+The single-dataset evaluator is configured as follows:
+
+```Python
+# Single Dataset, Single Metric
+val_evaluator = dict(
+    type='Evaluator',
+    metrics=dict())
+
+# Single Dataset, Multiple Metrics
+val_evaluator = dict(
+    type='Evaluator',
+    metrics=[...])
+```
+
+`MultiDatasetsEvaluator` differs from the single-dataset evaluator in two aspects: `type` and `dataset_prefixes`. The evaluator type must be `MultiDatasetsEvaluator` and cannot be omitted. `dataset_prefixes` is mainly used to distinguish the results of different datasets evaluated with the same metrics; see [MultiDatasetsEvaluation](../basic_concepts/evaluation.md).
+
+Assuming that we need to test accuracy on the IC13 and IC15 datasets, the configuration is as follows:
+
+```Python
+# Multiple datasets, single Metric
+val_evaluator = dict(
+    type='MultiDatasetsEvaluator',
+    metrics=dict(),
+    dataset_prefixes=['IC13', 'IC15'])
+
+# Multiple datasets, multiple Metrics
+val_evaluator = dict(
+    type='MultiDatasetsEvaluator',
+    metrics=[...],
+    dataset_prefixes=['IC13', 'IC15'])
+```
+
+#### Metric
+
+A metric evaluates a model's performance from a specific perspective. While there is no single metric that fits all tasks, MMOCR provides enough flexibility that multiple metrics serving the same task can be used simultaneously. Here we list the task-specific metrics for reference.
+
+Text detection: [`HmeanIOUMetric`](mmocr.evaluation.metrics.HmeanIOUMetric)
+
+Text recognition: [`WordMetric`](mmocr.evaluation.metrics.WordMetric), [`CharMetric`](mmocr.evaluation.metrics.CharMetric), [`OneMinusNEDMetric`](mmocr.evaluation.metrics.OneMinusNEDMetric)
+
+Key information extraction: [`F1Metric`](mmocr.evaluation.metrics.F1Metric)
+
+Taking text detection as an example, a single `Metric` is used in the case of single-dataset evaluation:
+
+```Python
+val_evaluator = dict(type='HmeanIOUMetric')
+```
+
+Taking text recognition as an example, multiple datasets (`IC13` and `IC15`) are evaluated using multiple `Metric`s (`WordMetric` and `CharMetric`):
+
+```Python
+val_evaluator = dict(
+    type='MultiDatasetsEvaluator',
+    metrics=[
+        dict(
+            type='WordMetric',
+            mode=['exact', 'ignore_case', 'ignore_case_symbol']),
+        dict(type='CharMetric')
+    ],
+    dataset_prefixes=['IC13', 'IC15'])
+test_evaluator = val_evaluator
+```
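+
+For completeness, a filled-in sketch of the single-dataset, multi-metric template above; when the evaluator `type` is omitted, MMEngine wraps a plain list of metric configs in the default evaluator:
+
+```Python
+val_evaluator = [
+    dict(type='WordMetric', mode=['exact', 'ignore_case', 'ignore_case_symbol']),
+    dict(type='CharMetric')
+]
+test_evaluator = val_evaluator
+```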
+
+### Visualization Configuration
+
+Each task is bound to a task-specific visualizer, which is mainly used to visualize or store intermediate results of the model and to visualize val and test prediction results. The results can also be stored in different backends, such as WandB and TensorBoard, through the corresponding visualization backends. Common modifications can be found in [visualization](visualization.md).
+
+The default visualization configuration for text detection is as follows:
+
+```Python
+vis_backends = [dict(type='LocalVisBackend')]
+visualizer = dict(
+    type='TextDetLocalVisualizer', # Different visualizers for different tasks
+    vis_backends=vis_backends,
+    name='visualizer')
+```
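+
+For example, a sketch that stores the results both locally and in TensorBoard; `TensorboardVisBackend` is MMEngine's TensorBoard backend:
+
+```Python
+vis_backends = [
+    dict(type='LocalVisBackend'),        # write visualizations to disk
+    dict(type='TensorboardVisBackend'),  # also log them to TensorBoard
+]
+visualizer = dict(
+    type='TextDetLocalVisualizer',
+    vis_backends=vis_backends,
+    name='visualizer')
+```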
+
+## Directory Structure
+
+All configuration files of `MMOCR` are placed under the `configs` folder. To prevent config files from becoming too long, and to improve their reusability and clarity, MMOCR takes advantage of the inheritance mechanism and splits the configuration content into eight sections. Since each section is closely related to the task type, MMOCR provides a task folder for each task in `configs/`, namely `textdet` (text detection), `textrecog` (text recognition), and `kie` (key information extraction). Each folder is further divided into two parts: the `_base_` folder and the algorithm configuration folders.
+
+1. The `_base_` folder stores general config files unrelated to specific algorithms, organized by directory into datasets, training strategies, and runtime configurations.
+
+2. The algorithm configuration folders store config files that are strongly related to the algorithm. They contain two kinds of config files:
+
+   1. Config files starting with `_base_`: they configure the model and data pipeline of an algorithm. In the OCR domain, data augmentation strategies are generally strongly related to the algorithm, so the model and data pipeline are usually placed in the same config file.
+
+   2. Other config files, i.e. the algorithm-specific configurations on specific dataset(s): these are the full config files that further configure training and testing settings, aggregating `_base_` configurations that are scattered in different locations. Some fields of the `_base_` configs may also be modified here, such as the data pipeline, the training strategy, etc.
+
+All these config files are distributed in different folders according to their contents as follows:
+
+| Task | Folder / File | Example Files | Configuration Sections |
+| :--- | :--- | :--- | :--- |
+| `textdet` | `_base_/datasets` | `icdar_datasets.py`<br>`ctw1500.py`<br>... | Dataset Configuration |
+| | `_base_/schedules` | `schedule_adam_600e.py`<br>... | Training Strategy Configuration |
+| | `_base_/default_runtime.py` | - | Environment Configuration<br>Hook Configuration<br>Log Configuration<br>Checkpoint Loading Configuration<br>Evaluation Configuration<br>Visualization Configuration |
+| | `dbnet/_base_dbnet_resnet18_fpnc.py` | - | Network Configuration<br>Data Pipeline Configuration |
+| | `dbnet/dbnet_resnet18_fpnc_1200e_icdar2015.py` | - | Dataloader Configuration<br>Data Pipeline Configuration (Optional) |
+
+The final directory structure is as follows:
+
+```Python
+configs
+├── textdet
+│   ├── _base_
+│   │   ├── datasets
+│   │   │   ├── icdar2015.py
+│   │   │   ├── icdar2017.py
+│   │   │   └── totaltext.py
+│   │   ├── schedules
+│   │   │   └── schedule_adam_600e.py
+│   │   └── default_runtime.py
+│   └── dbnet
+│       ├── _base_dbnet_resnet18_fpnc.py
+│       └── dbnet_resnet18_fpnc_1200e_icdar2015.py
+├── textrecog
+│   ├── _base_
+│   │   ├── datasets
+│   │   │   ├── icdar2015.py
+│   │   │   ├── icdar2017.py
+│   │   │   └── totaltext.py
+│   │   ├── schedules
+│   │   │   └── schedule_adam_base.py
+│   │   └── default_runtime.py
+│   └── crnn
+│       ├── _base_crnn_mini-vgg.py
+│       └── crnn_mini-vgg_5e_mj.py
+└── kie
+    ├── _base_
+    │   ├── datasets
+    │   └── default_runtime.py
+    └── sdmgr
+        └── sdmgr_novisual_60e_wildreceipt_openset.py
+```
+
+## Naming Conventions
+
+MMOCR has a convention for naming config files, and contributors to the codebase need to follow the same naming rules. The file name is divided into four sections: algorithm information, module information, training information, and data information. Words that logically belong to different sections are connected by an underscore `'_'`, and multiple words in the same section are connected by a hyphen `'-'`.
+
+```Python
+{{algorithm info}}_{{module info}}_{{training info}}_{{data info}}.py
+```
+
+Taking `dbnet_resnet18_fpnc_1200e_icdar2015.py` as an example: `dbnet` is the algorithm info, `resnet18_fpnc` the module info (backbone and neck), `1200e` the training info (1200 training epochs), and `icdar2015` the data info.
+
+- algorithm info: the name of the algorithm, such as dbnet, crnn, etc.
+
+- module info: lists some intermediate modules in the order of the data flow. Its content depends on the algorithm, and some modules strongly related to the model are omitted to avoid an overly long name. For example:
+
+  - For the text detection task and the key information extraction task:
+
+    ```Python
+    {{algorithm info}}_{{backbone}}_{{neck}}_{{head}}_{{training info}}_{{data info}}.py
+    ```
+
+    `{head}` is usually omitted since it is algorithm-specific.
+
+  - For the text recognition task:
+
+    ```Python
+    {{algorithm info}}_{{backbone}}_{{encoder}}_{{decoder}}_{{training info}}_{{data info}}.py
+    ```
+
+    Since the encoder and decoder are generally bound to the algorithm, they are usually omitted.
+
+- training info: some settings of the training strategy, including batch size, schedule, etc.
+
+- data info: dataset name, modality, input size, etc., such as icdar2015 and synthtext.
diff --git a/docs/zh_cn/user_guides/config.md b/docs/zh_cn/user_guides/config.md
index 8c4f63256..fcec3fa67 100644
--- a/docs/zh_cn/user_guides/config.md
+++ b/docs/zh_cn/user_guides/config.md
@@ -5,7 +5,7 @@ MMOCR 主要使用 Python 文件作为配置文件。其配置文件系统的设
 ## 常见用法
 
 ```{note}
-本小节建议结合 [配置(Config)](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/config.md) 中的初级用法共同阅读。
+本小节建议结合 {external+mmengine:doc}`MMEngine: 配置(Config) ` 中的初级用法共同阅读。
 ```
 
 MMOCR 最常用的操作为三种:配置文件的继承,对 `_base_` 变量的引用以及对 `_base_` 变量的修改。对于 `_base_` 的继承与修改, MMEngine.Config 提供了两种语法,一种是针对 Python,Json, Yaml 均可使用的操作;另一种则仅适用于 Python 配置文件。在 MMOCR 中,我们**更推荐使用只针对Python的语法**,因此下文将以此为基础作进一步介绍。
@@ -144,7 +144,7 @@ train_dataloader = dict(
 python tools/train.py example.py --cfg-options optim_wrapper.optimizer.lr=1
 ```
 
-更多详细用法参考[命令行修改配置](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/config.md#%E5%91%BD%E4%BB%A4%E8%A1%8C%E4%BF%AE%E6%94%B9%E9%85%8D%E7%BD%AE)
+更多详细用法参考 {external+mmengine:ref}`MMEngine: 命令行修改配置 <命令行修改配置>`。
 
 ## 配置内容
 
@@ -162,16 +162,16 @@ env_cfg = dict(
     cudnn_benchmark=True,
     mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
     dist_cfg=dict(backend='nccl'))
-random_cfg = dict(seed=None)
+randomness = dict(seed=None)
 ```
 
 主要包含三个部分:
 
-- 设置所有注册器的默认 `scope` 为 `mmocr`, 保证所有的模块首先从 `MMOCR` 代码库中进行搜索。若果该模块不存在,则继续从上游算法库 `MMEngine` 和 `MMCV` 中进行搜索(详见[注册器](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/registry.md)。
+- 设置所有注册器的默认 `scope` 为 `mmocr`, 保证所有的模块首先从 `MMOCR` 代码库中进行搜索。如果该模块不存在,则继续从上游算法库 `MMEngine` 和 `MMCV` 中进行搜索,详见 {external+mmengine:doc}`MMEngine: 注册器 `。
 
-- `env_cfg` 设置分布式环境配置, 更多配置可以详见 [MMEngine Runner](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/runner.md)
+- `env_cfg` 设置分布式环境配置, 更多配置详见 {external+mmengine:doc}`MMEngine: Runner `。
 
-- `random_cfg` 设置 numpy, torch,cudnn 等随机种子,更多配置详见 [Runner](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/runner.md)
+- `randomness` 设置 numpy, torch,cudnn 等随机种子,更多配置详见 {external+mmengine:doc}`MMEngine: Runner `。
@@ -183,11 +183,11 @@ Hook 主要分为两个部分,默认 hook 以及自定义 hook。默认 hook
 default_hooks = dict(
     timer=dict(type='IterTimerHook'), # 时间记录,包括数据增强时间以及模型推理时间
     logger=dict(type='LoggerHook', interval=1), # 日志打印间隔
-    param_scheduler=dict(type='ParamSchedulerHook'), # 与param_scheduler 更新学习率等超参
+    param_scheduler=dict(type='ParamSchedulerHook'), # 更新学习率等超参
     checkpoint=dict(type='CheckpointHook', interval=1),# 保存 checkpoint, interval控制保存间隔
     sampler_seed=dict(type='DistSamplerSeedHook'), # 多机情况下设置种子
-    sync_buffer=dict(type='SyncBuffersHook'), # 同步多卡情况下,buffer
-    visualization=dict( # 用户可视化val 和 test 的结果
+    sync_buffer=dict(type='SyncBuffersHook'), # 多卡情况下,同步buffer
+    visualization=dict( # 可视化val 和 test 的结果
         type='VisualizationHook',
         interval=1,
         enable=False,
@@ -203,9 +203,9 @@ default_hooks = dict(
 
 - `CheckpointHook`:用于配置模型断点保存相关的行为,如保存最优权重,保存最新权重等。同样可以修改 `interval` 控制保存 checkpoint 的间隔。更多设置可参考 [CheckpointHook API](mmengine.hooks.CheckpointHook)
 
-- `VisualizationHook`:用于配置可视化相关行为,例如在验证或测试时可视化预测结果,默认为关。同时该 Hook 依赖[可视化配置](#TODO)。想要了解详细功能可以参考 [Visualizer](visualization.md)。更多配置可以参考 [VisualizationHook API](mmocr.engine.hooks.VisualizationHook)。
+- `VisualizationHook`:用于配置可视化相关行为,例如在验证或测试时可视化预测结果,**默认为关**。同时该 Hook 依赖[可视化配置](#可视化配置)。想要了解详细功能可以参考 [Visualizer](visualization.md)。更多配置可以参考 [VisualizationHook API](mmocr.engine.hooks.VisualizationHook)。
 
-如果想进一步了解默认 hook 的配置以及功能,可以参考[钩子(Hook)](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/hook.md)。
+如果想进一步了解默认 hook 的配置以及功能,可以参考 {external+mmengine:doc}`MMEngine: 钩子(Hook) `。
@@ -220,13 +220,13 @@ log_processor = dict(type='LogProcessor',
                      by_epoch=True)
 ```
 
-- 日志配置等级与 [logging](https://docs.python.org/3/library/logging.html) 的配置一致,
+- 日志配置等级与 {external+python:doc}`Python: logging ` 的配置一致,
 
-- 日志处理器主要用来控制输出的格式,详细功能可参考[记录日志](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/advanced_tutorials/logging.md):
+- 日志处理器主要用来控制输出的格式,详细功能可参考 {external+mmengine:doc}`MMEngine: 记录日志 `:
 
   - `by_epoch=True` 表示按照epoch输出日志,日志格式需要和 `train_cfg` 中的 `type='EpochBasedTrainLoop'` 参数保持一致。例如想按迭代次数输出日志,就需要令 `log_processor` 中的 ` by_epoch=False` 的同时 `train_cfg` 中的 `type = 'IterBasedTrainLoop'`。
 
-  - `window_size` 表示损失的平滑窗口,即最近 `window_size` 次迭代的各种损失的均值。logger 中最终打印的 loss 值为经过各种损失的平均值。
+  - `window_size` 表示损失的平滑窗口,即最近 `window_size` 次迭代的各种损失的均值。logger 中最终打印的 loss 值为各种损失的平均值。
@@ -248,15 +248,15 @@ val_cfg = dict(type='ValLoop')
 test_cfg = dict(type='TestLoop')
 ```
 
-- `optim_wrapper` : 主要包含两个部分,优化器封装 (OptimWrapper) 以及优化器 (Optimizer)。详情使用信息可见 [MMEngine 优化器封装](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/optim_wrapper.md)
+- `optim_wrapper` : 主要包含两个部分,优化器封装 (OptimWrapper) 以及优化器 (Optimizer)。详情使用信息可见 {external+mmengine:doc}`MMEngine: 优化器封装 `
 
   - 优化器封装支持不同的训练策略,包括混合精度训练(AMP)、梯度累加和梯度截断。
 
-  - 优化器设置中支持了 PyTorch 所有的优化器,所有支持的优化器见 [PyTorch 优化器列表](torch.optim.algorithms)。
+  - 优化器设置中支持了 PyTorch 所有的优化器,所有支持的优化器见 {external+torch:ref}`PyTorch 优化器列表 `。
 
-- `param_scheduler` : 学习率调整策略,支持大部分 PyTorch 中的学习率调度器,例如 `ExponentialLR`,`LinearLR`,`StepLR`,`MultiStepLR` 等,使用方式也基本一致,所有支持的调度器见[调度器接口文档](mmengine.optim.scheduler), 更多功能可以[参考优化器参数调整策略](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/param_scheduler.md)
+- `param_scheduler` : 学习率调整策略,支持大部分 PyTorch 中的学习率调度器,例如 `ExponentialLR`,`LinearLR`,`StepLR`,`MultiStepLR` 等,使用方式也基本一致,所有支持的调度器见[调度器接口文档](mmengine.optim.scheduler), 更多功能可以参考 {external+mmengine:doc}`MMEngine: 优化器参数调整策略 `。
 
-- `train/test/val_cfg` : 任务的执行流程,MMEngine 提供了四种流程:`EpochBasedTrainLoop`, `IterBasedTrainLoop`, `ValLoop`, `TestLoop` 更多可以参考[循环控制器](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/runner.md)。
+- `train/test/val_cfg` : 任务的执行流程,MMEngine 提供了四种流程:`EpochBasedTrainLoop`, `IterBasedTrainLoop`, `ValLoop`, `TestLoop` 更多可以参考 {external+mmengine:doc}`MMEngine: 循环控制器 `。
 
 ### 数据相关配置
 
@@ -275,14 +275,14 @@ test_cfg = dict(type='TestLoop')
 数据集字段的命名规则在 MMOCR 中为:
 
 ```Python
-{数据集名称缩写}_{算法任务}_{训练/测试} = dict(...)
+{数据集名称缩写}_{算法任务}_{训练/测试/验证} = dict(...)
 ```
 
 - 数据集缩写:见 [数据集名称对应表](#TODO)
 
 - 算法任务:文本检测-det,文字识别-rec,关键信息提取-kie
 
-- 训练/测试:数据集用于训练还是测试
+- 训练/测试/验证:数据集用于训练,测试还是验证
 
 以识别为例,使用 Syn90k 作为训练集,以 icdar2013 和 icdar2015 作为测试集配置如下:
 
@@ -319,13 +319,11 @@ ic15_rec_test = dict(
 
 MMOCR 中,数据集的构建与数据准备是相互解耦的。也就是说,`OCRDataset` 等数据集构建类负责完成标注文件的读取与解析功能;而数据变换方法(Data Transforms)则进一步实现了数据读取、数据增强、数据格式化等相关功能。
 
-同时一般情况下训练和测试会存在不同的增强策略,因此一般会存在训练流水线(train_pipeline)和测试流水线(test_pipeline)。
+同时一般情况下训练和测试会存在不同的增强策略,因此一般会存在训练流水线(train_pipeline)和测试流水线(test_pipeline)。更多信息可以参考[数据流水线](../basic_concepts/transforms.md)。
 
-训练流水线的数据增强流程通常为:数据读取(LoadImageFromFile)->标注信息读取(LoadXXXAnntation)->数据增强->数据格式化(PackXXXInputs)。
+- 训练流水线的数据增强流程通常为:数据读取(LoadImageFromFile)->标注信息读取(LoadXXXAnnotations)->数据增强->数据格式化(PackXXXInputs)。
 
-测试流水线的数据增强流程通常为:数据读取(LoadImageFromFile)->数据增强->标注信息读取(LoadXXXAnntation)->数据格式化(PackXXXInputs)。
-
-更多信息可以参考[数据流水线](../basic_concepts/transforms.md)
+- 测试流水线的数据增强流程通常为:数据读取(LoadImageFromFile)->数据增强->标注信息读取(LoadXXXAnnotations)->数据格式化(PackXXXInputs)。
 
 由于 OCR 任务的特殊性,一般情况下不同模型有不同数据增强的方式,相同模型在不同数据集一般也会有不同的数据增强方式。以 CRNN 为例:
 
@@ -367,7 +365,7 @@ test_pipeline = [
 
 #### Dataloader 配置
 
-主要为构造数据集加载器(dataloader)所需的配置信息,更多教程看参考[PyTorch 数据加载器](torch.data)。
+主要为构造数据集加载器(dataloader)所需的配置信息,更多教程可参考 {external+torch:doc}`PyTorch 数据加载器 `。
 
 ```Python
 # Dataloader 部分
@@ -388,7 +386,7 @@ val_dataloader = dict(
     sampler=dict(type='DefaultSampler', shuffle=False),
     dataset=dict(
         type='ConcatDataset',
-        datasets=[ic13_rec_test,ic15_rec_test],
+        datasets=[ic13_rec_test, ic15_rec_test],
         pipeline=test_pipeline))
 test_dataloader = val_dataloader
 ```
@@ -399,7 +397,7 @@ test_dataloader = val_dataloader
 
 #### 网络配置
 
-用于配置模型的网络结构,不同的算法任务有不同的网络结构,
+用于配置模型的网络结构,不同的算法任务有不同的网络结构。更多信息可以参考[网络结构](../basic_concepts/structures.md)
 
 ##### 文本检测
 
@@ -493,13 +491,13 @@ load_from = None # 加载checkpoint的路径
 resume = False # 是否 resume
 ```
 
-更多可以参考[加载权重或恢复训练](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/runner.md)与[OCR进阶技巧-断点恢复训练](https://mmocr.readthedocs.io/zh_CN/dev-1.x/user_guides/train_test.html#id11)。
+更多可以参考 {external+mmengine:ref}`MMEngine: 加载权重或恢复训练 <加载权重或恢复训练>` 与 [OCR 进阶技巧-断点恢复训练](train_test.md#从断点恢复训练)。
 
 ### 评测配置
 
-在模型验证和模型测试中,通常需要对模型精度做定量评测。MMOCR 通过评测指标(Metric)和评测器(Evaluator)来完成这一功能。更多可以参考[评测指标(Metric)和评测器(Evaluator)](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/evaluation.md)
+在模型验证和模型测试中,通常需要对模型精度做定量评测。MMOCR 通过评测指标(Metric)和评测器(Evaluator)来完成这一功能。更多可以参考 {external+mmengine:doc}`MMEngine: 评测指标(Metric)和评测器(Evaluator) ` 和 [评测器](../basic_concepts/evaluation.md)
 
 评测部分包含两个部分,评测器和评测指标。接下来我们分部分展开讲解。
 
@@ -551,13 +549,13 @@ val_evaluator = dict(
 
 #### 评测指标
 
-评测指标指不同度量精度的方法,同时可以多个评测指标共同使用,更多评测指标原理参考[评测指标](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/evaluation.md),在 MMOCR 中不同算法任务有不同的评测指标。
+评测指标指不同度量精度的方法,同时可以多个评测指标共同使用,更多评测指标原理参考 {external+mmengine:doc}`MMEngine: 评测指标 `,在 MMOCR 中不同算法任务有不同的评测指标。
 
 更多 OCR 相关的评测指标可以参考 [评测指标](../basic_concepts/evaluation.md)。
 
-文字检测: `HmeanIOU`
+文字检测: [`HmeanIOUMetric`](mmocr.evaluation.metrics.HmeanIOUMetric)
 
-文字识别: `WordMetric`,`CharMetric`, `OneMinusNEDMetric`
+文字识别: [`WordMetric`](mmocr.evaluation.metrics.WordMetric),[`CharMetric`](mmocr.evaluation.metrics.CharMetric), [`OneMinusNEDMetric`](mmocr.evaluation.metrics.OneMinusNEDMetric)
 
-关键信息提取: `F1Metric`
+关键信息提取: [`F1Metric`](mmocr.evaluation.metrics.F1Metric)
 
 以文本检测为例说明,在单数据集评测情况下,使用单个 `Metric`:
 
@@ -565,7 +563,7 @@ val_evaluator = dict(type='HmeanIOUMetric')
 ```
 
-以文本识别为例,多数据集使用多个 `Metric` 评测:
+以文本识别为例,对多个数据集(IC13 和 IC15)用多个 `Metric` (`WordMetric` 和 `CharMetric`)进行评测:
 
 ```Python
 # 评测部分
@@ -585,7 +583,7 @@ test_evaluator = val_evaluator
 
 ### 可视化配置
 
-每个任务配置该任务对应的可视化器。可视化器主要用于用户模型中间结果的可视化或存储,及 val 和 test 预测结果的可视化。同时可视化的结果可以通过可视化后端储存到不同的后端,比如 Wandb,TensorBoard 等。常用修改操作可见[可视化](visualization.md)。
+每个任务配置该任务对应的可视化器。可视化器主要用于用户模型中间结果的可视化或存储,及 val 和 test 预测结果的可视化。同时可视化的结果可以通过可视化后端储存到不同的后端,比如 WandB,TensorBoard 等。常用修改操作可见[可视化](visualization.md)。
 
 文本检测的可视化默认配置如下:
 
@@ -599,7 +597,7 @@ visualizer = dict(
 
 ## 目录结构
 
-`MMOCR` 所有配置文件都放置在 `configs` 文件夹下。为了避免配置文件过长,同时提高配置文件的可复用性以及清晰性,MMOCR 利用 Config 文件的继承特性,将配置内容的八个部分做了拆分。因为每部分均与算法任务相关,因此 MMOCR 对每个任务在 Config 中提供了一个任务文件夹,即 `textdet` (文字检测任务)、`textrec` (文字识别任务)、`kie` (关键信息提取)。同时各个任务算法配置文件夹下进一步划分为两个部分:`_base_` 文件夹与诸多算法文件夹:
+`MMOCR` 所有配置文件都放置在 `configs` 文件夹下。为了避免配置文件过长,同时提高配置文件的可复用性以及清晰性,MMOCR 利用 Config 文件的继承特性,将配置内容的八个部分做了拆分。因为每部分均与算法任务相关,因此 MMOCR 对每个任务在 Config 中提供了一个任务文件夹,即 `textdet` (文字检测任务)、`textrecog` (文字识别任务)、`kie` (关键信息提取)。同时各个任务算法配置文件夹下进一步划分为两个部分:`_base_` 文件夹与诸多算法文件夹:
 
 1. `_base_` 文件夹下主要存放与具体算法无关的一些通用配置文件,各部分依目录分为常用的数据集、常用的训练策略以及通用的运行配置。
 
 2. 算法配置文件夹中存放与算法强相关的配置项。算法配置文件夹主要分为两个部分:
 
    1. 算法的模型与数据流水线:OCR 领域中一般情况下数据增强策略与算法强相关,因此模型与数据流水线通常置于统一位置。
 
-   2. 算法在制定数据集上的特定配置:用于训练和测试的配置,将分散在不同位置的配置汇总。同时修改或配置一些在该数据集特有的配置比如batch size以及一些可能修改如数据流水线,训练策略等
+   2. 算法在指定数据集上的特定配置:用于训练和测试的配置,将分散在不同位置的 *base* 配置汇总。同时可能会修改一些`_base_`中的变量,如 batch size, 数据流水线,训练策略等
 
 最后的将配置内容中的各个模块分布在不同配置文件中,最终各配置文件内容如下:
 
@@ -632,12 +630,12 @@ visualizer = dict(
       <td>数据集配置</td>
     </tr>
     <tr>
-      <td>schedulers</td>
+      <td>schedules</td>
       <td>schedule_adam_600e.py<br>...</td>
       <td>训练策略配置</td>
     </tr>
     <tr>
-      <td>defaults_runtime.py</td>
+      <td>default_runtime.py</td>
       <td>-</td>
       <td>环境配置<br>默认hook配置<br>日志配置<br>权重加载配置<br>评测配置<br>可视化配置</td>
     </tr>
@@ -658,7 +656,7 @@ visualizer = dict(
 最终目录结构如下:
 
 ```Python
-config
+configs
 ├── textdet
 │   ├── _base_
 │   │   ├── datasets
@@ -699,7 +697,7 @@ MMOCR 按照以下风格进行配置文件命名,代码库的贡献者需要
 {{算法信息}}_{{模块信息}}_{{训练信息}}_{{数据信息}}.py
 ```
 
-- 算法信息(algorithm info):算法名称,如 DBNet,CRNN 等
+- 算法信息(algorithm info):算法名称,如 dbnet, crnn 等
 
 - 模块信息(module info):按照数据流的顺序列举一些中间的模块,其内容依赖于算法任务,同时为了避免Config过长,会省略一些与模型强相关的模块。下面举例说明:
 
   - 对于文本检测任务和关键信息提取任务:
 
    ```Python
   {{算法信息}}_{{backbone}}_{{neck}}_{{head}}_{{训练信息}}_{{数据信息}}.py
    ```
 
    其中 `{head}` 部分一般为算法专有,因此一般省略。
 
  - 文本识别任务中:
 
@@ -717,7 +715,7 @@ MMOCR 按照以下风格进行配置文件命名,代码库的贡献者需要
 {{算法信息}}_{{backbone}}_{{encoder}}_{{decoder}}_{{训练信息}}_{{数据信息}}.py
 ```
 
-  一般情况下 encode 和 decoder 位置一般为算法专有,因此一般省略。
+  一般情况下 encoder 和 decoder 位置一般为算法专有,因此一般省略。
 
 - 训练信息(training info):训练策略的一些设置,包括 batch size,schedule 等