Bump version to 0.6.3 (#1511)
* Bump version to 0.6.3

* update readme
gaotongxiao committed Nov 3, 2022
1 parent b90d672 commit 26bc471
Showing 6 changed files with 67 additions and 21 deletions.
25 changes: 15 additions & 10 deletions README.md
@@ -71,30 +71,35 @@ The main branch works with **PyTorch 1.6+**.

## What's New

- While the stable version (0.6.2) and the preview version (1.0.0) are being maintained concurrently now, the former version will be deprecated by the end of 2022. Therefore, we recommend users upgrade to [MMOCR 1.0](https://github.com/open-mmlab/mmocr/tree/1.x) to enjoy the fruitful new features and better performance brought by the new architecture. Check out our [maintenance plan](https://mmocr.readthedocs.io/en/dev-1.x/migration/overview.html) for how we will maintain them in the future.
+ While the stable version (0.6.3) and the preview version (1.0.0) are being maintained concurrently now, the former version will be deprecated by the end of 2022. Therefore, we recommend users upgrade to [MMOCR 1.0](https://github.com/open-mmlab/mmocr/tree/1.x) to enjoy the fruitful new features and better performance brought by the new architecture. Check out our [maintenance plan](https://mmocr.readthedocs.io/en/dev-1.x/migration/overview.html) for how we will maintain them in the future.

### 💎 Stable version

- v0.6.2 was released on 2022-10-14.
+ v0.6.3 was released on 2022-11-03.

- 1. It's now possible to train/test models through the Python interface.
- 2. ResizeOCR now fully supports all the parameters of mmcv.impad.
+ This release enhances the inference script and fixes a bug that could cause failures on TorchServe.

Read [Changelog](https://mmocr.readthedocs.io/en/latest/changelog.html) for more details!

### 🌟 Preview of 1.x version

- A brand-new version of **MMOCR v1.0.0rc2** was released on 2022-10-14:
+ A brand-new version of **MMOCR v1.0.0rc3** was released on 2022-11-03:

- 1. **New engines**. MMOCR 1.x is based on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.
+ 1. We release several pretrained models using [oCLIP-ResNet](https://github.com/open-mmlab/mmocr/blob/1.x/configs/backbone/oclip/README.md) as the backbone, which is a ResNet variant trained with [oCLIP](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136880282.pdf) and can significantly boost the performance of text detection models.

- 2. **Unified interfaces**. As part of the OpenMMLab 2.0 projects, MMOCR 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All OpenMMLab 2.0 projects share the same design in these interfaces and logic to allow the emergence of multi-task/modality algorithms.
+ 2. Preparing datasets is troublesome and tedious, especially in the OCR domain, where multiple datasets are usually required. To free our users from this laborious work, we designed a [Dataset Preparer](https://mmocr.readthedocs.io/en/dev-1.x/user_guides/data_prepare/dataset_preparer.html) that gets a batch of datasets ready for use with just **one command**! The Dataset Preparer is also built from a series of reusable modules, each responsible for one standardized phase of the preparation process, shortening the development cycle for supporting new datasets.

- 3. **Cross-project calling**. Benefiting from the unified design, you can use models implemented in other OpenMMLab projects, such as MMDet. We provide an example of how to use MMDetection's Mask R-CNN through `MMDetWrapper`. Check our documentation for more details. More wrappers will be released in the future.
+ 3. **New engines**. MMOCR 1.x is based on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.

- 4. **Stronger visualization**. We provide a series of useful tools, mostly based on brand-new visualizers, which make it much more convenient for users to explore models and datasets.
+ 4. **Unified interfaces**. As part of the OpenMMLab 2.0 projects, MMOCR 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All OpenMMLab 2.0 projects share the same design in these interfaces and logic to allow the emergence of multi-task/modality algorithms.

- 5. **More documentation and tutorials**. We have added plenty of documentation and tutorials to help users get started more smoothly. Read them [here](https://mmocr.readthedocs.io/en/dev-1.x/).
+ 5. **Cross-project calling**. Benefiting from the unified design, you can use models implemented in other OpenMMLab projects, such as MMDet. We provide an example of how to use MMDetection's Mask R-CNN through `MMDetWrapper`. Check our documentation for more details. More wrappers will be released in the future.

+ 6. **Stronger visualization**. We provide a series of useful tools, mostly based on brand-new visualizers, which make it much more convenient for users to explore models and datasets.

+ 7. **More documentation and tutorials**. We have added plenty of documentation and tutorials to help users get started more smoothly. Read them [here](https://mmocr.readthedocs.io/en/dev-1.x/).

+ 8. **One-stop Dataset Preparation**. Multiple datasets are instantly ready with a single command via our [Dataset Preparer](https://mmocr.readthedocs.io/en/dev-1.x/user_guides/data_prepare/dataset_preparer.html).

Find more new features in [1.x branch](https://github.com/open-mmlab/mmocr/tree/1.x). Issues and PRs are welcome!
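The cross-project calling feature mentioned above typically operates at the config level. The fragment below is a rough, hypothetical sketch of what wrapping MMDetection's Mask R-CNN might look like; the specific keys (`text_repr_type`, the `mmdet::` base path) are illustrative assumptions modeled on OpenMMLab/MMEngine config conventions, not a verified 1.x recipe — consult the official documentation for the real one:

```python
# Hypothetical MMOCR 1.x config sketch — key names and the base path are
# illustrative assumptions, not verified against the released 1.x configs.
# MMEngine-style cross-project inheritance pulls Mask R-CNN from MMDetection.
_base_ = ['mmdet::mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py']

model = dict(
    type='MMDetWrapper',    # wrapper that delegates to the inherited MMDet model
    text_repr_type='quad',  # convert instance masks into quadrilateral text boxes
)
```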

25 changes: 15 additions & 10 deletions README_zh-CN.md
@@ -72,30 +72,35 @@ MMOCR's modular design allows users to define their own optimizers, data preprocessing

## 最新进展

- We are currently maintaining the stable version (0.6.2) and the preview version (1.0.0) of MMOCR in parallel, but maintenance of the stable version will gradually wind down starting at the end of 2022. We recommend that users upgrade to [MMOCR 1.0](https://github.com/open-mmlab/mmocr/tree/1.x) as early as possible to enjoy the many new features and better performance brought by the new architecture. Read our [maintenance plan](https://mmocr.readthedocs.io/zh_CN/dev-1.x/migration/overview.html) to learn more.
+ We are currently maintaining the stable version (0.6.3) and the preview version (1.0.0) of MMOCR in parallel, but maintenance of the stable version will gradually wind down starting at the end of 2022. We recommend that users upgrade to [MMOCR 1.0](https://github.com/open-mmlab/mmocr/tree/1.x) as early as possible to enjoy the many new features and better performance brought by the new architecture. Read our [maintenance plan](https://mmocr.readthedocs.io/zh_CN/dev-1.x/migration/overview.html) to learn more.

### 💎 Stable version

- The latest monthly release, v0.6.2, was published on 2022.10.14.
+ The latest monthly release, v0.6.3, was published on 2022.11.03.

- 1. Models can now be trained and tested directly from Python.
- 2. ResizeOCR now supports all the parameters of mmcv.impad.
+ This release improves the robustness of the inference script and fixes an issue that could cause TorchServe to fail.

Read the [Changelog](https://mmocr.readthedocs.io/en/latest/changelog.html) for more details.

### 🌟 Preview of 1.x

- The brand-new **v1.0.0rc2** was released on 2022.10.14:
+ The brand-new **v1.0.0rc3** was released on 2022.11.03:

- 1. Architecture upgrade: MMOCR 1.x is built on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner, allows more flexible customization, and offers a unified entry point for training and testing.
+ 1. We release several pretrained models using [oCLIP-ResNet](https://github.com/open-mmlab/mmocr/blob/1.x/configs/backbone/oclip/README.md) as the backbone, a ResNet variant trained with the [oCLIP](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136880282.pdf) technique that can significantly boost the performance of text detection models.

- 2. Unified interfaces: MMOCR 1.x unifies the interfaces and internal logic of datasets, models, evaluation, and visualization, enabling greater extensibility.
+ 2. Preparing datasets is usually tedious, and especially so in OCR. We introduce the new [Dataset Preparer](https://mmocr.readthedocs.io/en/dev-1.x/user_guides/data_prepare/dataset_preparer.html) to free everyone from laborious manual work: a single command automatically prepares multiple commonly used OCR datasets. Thanks to its modular design, it also greatly reduces the effort of supporting new datasets in the future.

- 3. Cross-project calling: Benefiting from the unified design, you can use models implemented in other OpenMMLab projects, such as MMDet. We provide an example of how to use MMDetection's Mask R-CNN through `MMDetWrapper`. See our documentation for more details. More wrappers will be released in the future.
+ 3. Architecture upgrade: MMOCR 1.x is built on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner, allows more flexible customization, and offers a unified entry point for training and testing.

- 4. Stronger visualization: We provide a series of visualization tools; users can now visualize data much more conveniently.
+ 4. Unified interfaces: MMOCR 1.x unifies the interfaces and internal logic of datasets, models, evaluation, and visualization, enabling greater extensibility.

- 5. More documentation and tutorials: We have added more tutorials to lower the learning curve for users. See the [tutorials](https://mmocr.readthedocs.io/zh_CN/dev-1.x/).
+ 5. Cross-project calling: Benefiting from the unified design, you can use models implemented in other OpenMMLab projects, such as MMDet. We provide an example of how to use MMDetection's Mask R-CNN through `MMDetWrapper`. See our documentation for more details. More wrappers will be released in the future.

+ 6. Stronger visualization: We provide a series of visualization tools; users can now visualize data much more conveniently.

+ 7. More documentation and tutorials: We have added more tutorials to lower the learning curve for users. See the [tutorials](https://mmocr.readthedocs.io/zh_CN/dev-1.x/).

+ 8. One-stop data preparation: Preparing datasets is no longer a chore. With our [Dataset Preparer](https://mmocr.readthedocs.io/zh_CN/dev-1.x/user_guides/data_prepare/dataset_preparer.html), a single command gets multiple datasets ready for use.

More new features are available on the [1.x branch](https://github.com/open-mmlab/mmocr/tree/1.x). You are welcome to try it out and leave feedback.

34 changes: 34 additions & 0 deletions docs/en/changelog.md
@@ -1,5 +1,39 @@
# Changelog

## 0.6.3 (03/11/2022)

### Highlights

This release enhances the inference script and fixes a bug that could cause failures on TorchServe.

Besides, a new backbone, oCLIP-ResNet, and a dataset preparation tool, Dataset Preparer, have been released in
MMOCR 1.0.0rc3 ([1.x branch](https://github.com/open-mmlab/mmocr/tree/1.x)). Check out the [changelog](https://mmocr.readthedocs.io/en/dev-1.x/notes/changelog.html) for more information about the features, and [maintenance plan](https://mmocr.readthedocs.io/en/dev-1.x/migration/overview.html) for how we will maintain MMOCR in the future.

### New Features & Enhancements

- Convert numpy.float32 type to python built-in float type by @JunYao1020 in https://github.com/open-mmlab/mmocr/pull/1462
- When '.' char not in output string, output is also considered to be a… by @JunYao1020 in https://github.com/open-mmlab/mmocr/pull/1457
- Refactor issue template by @Harold-lkk in https://github.com/open-mmlab/mmocr/pull/1449
- issue template by @Harold-lkk in https://github.com/open-mmlab/mmocr/pull/1489
- Update maintainers by @gaotongxiao in https://github.com/open-mmlab/mmocr/pull/1504
- Support MMCV \< 1.8.0 by @gaotongxiao in https://github.com/open-mmlab/mmocr/pull/1508
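One of the bullets above converts `numpy.float32` values to the built-in `float` type. As a minimal illustration of why that kind of conversion matters (this is generic Python, not MMOCR's actual code): numpy scalars are rejected by `json.dumps`, so results passed through inference scripts or serving handlers need converting first.

```python
import json

import numpy as np

score = np.float32(0.875)  # model confidences often arrive as numpy scalars

# json.dumps rejects numpy scalar types with a TypeError...
try:
    json.dumps({"score": score})
    serializable = True
except TypeError:
    serializable = False
print(serializable)  # False

# ...while the built-in float produced by float() serializes cleanly.
print(json.dumps({"score": float(score)}))  # {"score": 0.875}
```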

### Bug Fixes

- fix ci by @Harold-lkk in https://github.com/open-mmlab/mmocr/pull/1491
- \[CI\] Fix CI by @gaotongxiao in https://github.com/open-mmlab/mmocr/pull/1463

### Docs

- \[DOCs\] Add MMYOLO in Readme. by @ysh329 in https://github.com/open-mmlab/mmocr/pull/1475
- \[Docs\] Update contributing.md by @gaotongxiao in https://github.com/open-mmlab/mmocr/pull/1490

### New Contributors

- @ysh329 made their first contribution in https://github.com/open-mmlab/mmocr/pull/1475

**Full Changelog**: https://github.com/open-mmlab/mmocr/compare/v0.6.2...v0.6.3

## 0.6.2 (14/10/2022)

### Highlights
1 change: 1 addition & 0 deletions docs/en/install.md
@@ -215,6 +215,7 @@ MMOCR has different version requirements on MMCV and MMDetection at each release
| MMOCR | MMCV | MMDetection |
| ------------ | ------------------------ | --------------------------- |
| main | 1.3.8 \<= mmcv \< 1.8.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.6.3 | 1.3.8 \<= mmcv \< 1.8.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.6.1, 0.6.2 | 1.3.8 \<= mmcv \<= 1.7.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.6.0 | 1.3.8 \<= mmcv \<= 1.6.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.5.0 | 1.3.8 \<= mmcv \<= 1.5.0 | 2.14.0 \<= mmdet \<= 3.0.0 |
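The constraints in the 0.6.3 row above can be expressed as a tiny standalone checker. This is only a sketch of the comparison logic (MMOCR performs its own dependency checks internally), and it assumes plain numeric version strings like `1.7.0`:

```python
def parse_version(version: str) -> tuple:
    """Turn '1.7.0' into (1, 7, 0); assumes purely numeric components (no 'rc' suffixes)."""
    return tuple(int(part) for part in version.split('.'))

def mmcv_compatible_with_063(mmcv_version: str) -> bool:
    """Check the 0.6.3 row of the table: 1.3.8 <= mmcv < 1.8.0."""
    return parse_version('1.3.8') <= parse_version(mmcv_version) < parse_version('1.8.0')

print(mmcv_compatible_with_063('1.7.0'))  # True: within [1.3.8, 1.8.0)
print(mmcv_compatible_with_063('1.8.0'))  # False: the upper bound is exclusive
```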
1 change: 1 addition & 0 deletions docs/zh_cn/install.md
@@ -216,6 +216,7 @@ docker run --gpus all --shm-size=8g -it -v {actual data directory}:/mmocr/data mmoc
| MMOCR | MMCV | MMDetection |
| ------------ | ------------------------ | --------------------------- |
| main | 1.3.8 \<= mmcv \<= 1.7.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.6.3 | 1.3.8 \<= mmcv \<= 1.7.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.6.1, 0.6.2 | 1.3.8 \<= mmcv \<= 1.7.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.6.0 | 1.3.8 \<= mmcv \<= 1.6.0 | 2.21.0 \<= mmdet \<= 3.0.0 |
| 0.5.0 | 1.3.8 \<= mmcv \<= 1.5.0 | 2.14.0 \<= mmdet \<= 3.0.0 |
2 changes: 1 addition & 1 deletion mmocr/version.py
@@ -1,4 +1,4 @@
# Copyright (c) Open-MMLab. All rights reserved.

- __version__ = '0.6.2'
+ __version__ = '0.6.3'
short_version = __version__
