
Commit
Merge pull request #58 from leegang/master
Translate the doc for driving policy training.
thias15 committed Sep 24, 2020
2 parents d7bce8c + bf9e2d2 commit b0dd032
Showing 4 changed files with 74 additions and 0 deletions.
4 changes: 4 additions & 0 deletions README_CN.md
@@ -28,3 +28,7 @@ OpenBot leverages smartphones as brains for low-cost robots. We have designed a
## Video

[![OpenBot Video](https://img.youtube.com/vi/qc8hFLyWDOM/0.jpg)](https://www.youtube.com/watch?v=qc8hFLyWDOM)

## Contact Us
- Join the [Slack](https://join.slack.com/t/openbot-community/shared_invite/zt-hpso8cfl-JO9OVhVMdUWvR4vDXwcMGA) channel to connect with the OpenBot community.
- Send us an [Email](mailto:openbot.team@gmail.com).
1 change: 1 addition & 0 deletions body/README_CN.md
@@ -90,6 +90,7 @@
- Quantity: 2
- Price: ¥1.98
- [Buy on Taobao](https://s.click.taobao.com/rjXJ4xu)
- Resistors (two 150<span>&#8486;</span> for the LEDs, one 20k<span>&#8486;</span> and one 10k<span>&#8486;</span> for the voltage divider).

## Build Instructions

1 change: 1 addition & 0 deletions policy/README.md
@@ -1,5 +1,6 @@

# Driving Policy (Advanced)
[简体中文](README_CN.md)
WARNING: Obtaining a good driving policy for your custom dataset will require some patience. The process is not straightforward: it involves data collection, hyperparameter tuning, etc. If you have never trained machine learning models before, it will be challenging and may even get frustrating.

In order to train an autonomous driving policy, you will first need to collect a dataset. The more data you collect, the better the resulting driving policy. For the experiments in our paper, we collected about 30 minutes worth of data. Note that the network will imitate your driving behaviour. The better and more consistent you drive, the better the network will learn to drive.
68 changes: 68 additions & 0 deletions policy/README_CN.md
@@ -0,0 +1,68 @@

# Driving Policy (Advanced)
WARNING: Obtaining a good driving policy for your custom dataset will require some patience. The process is not straightforward: it involves data collection, hyperparameter tuning, etc. If you have never trained machine learning models before, it will be challenging and may even get frustrating.

In order to train an autonomous driving policy, you will first need to collect a dataset. The more data you collect, the better the resulting driving policy. For the experiments in our paper, we collected about 30 minutes worth of data. Note that the network will imitate your driving behaviour. The better and more consistently you drive, the better the network will learn to drive.
## Data Collection
1. Connect a Bluetooth game controller to the phone (e.g. a PS4 controller).
2. Select the AUTOPILOT_F network in the app.
3. Now drive the car with the game controller and record a dataset. On a PS4 controller, recording can be toggled with the **X** button.

You will now find a folder named *Openbot* on the internal storage of your smartphone. For each recording, there will be a zip file. The name of the zip file has the format *yyyymmdd_hhmmss.zip*, corresponding to the timestamp when the recording started.
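The recording's start time can be recovered directly from that file name. A minimal sketch (the helper name is ours, not part of the app):

```python
from datetime import datetime

def recording_timestamp(zip_name: str) -> datetime:
    """Parse the start time from a recording name like 'yyyymmdd_hhmmss.zip'."""
    stem = zip_name.rsplit(".", 1)[0]  # drop the .zip extension
    return datetime.strptime(stem, "%Y%m%d_%H%M%S")

print(recording_timestamp("20200924_153045.zip"))  # → 2020-09-24 15:30:45
```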

Your dataset files should be stored in the following structure:
```
dataset
├── train_data
│   ├── my_openbot_1
│   │   ├── recording_1
│   │   ├── recording_2
│   │   └── ...
│   └── my_openbot_2
│       └── ...
└── test_data
    └── my_openbot_3
        ├── recording_1
        ├── recording_2
        └── ...
```
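The skeleton of this layout can be created programmatically. A minimal sketch; the `my_openbot_*` folder names are just the examples from the tree above, only the `train_data`/`test_data` split matters:

```python
import os

def make_dataset_skeleton(root: str) -> None:
    """Create the expected train/test folder layout under `root`."""
    layout = [("train_data", ["my_openbot_1", "my_openbot_2"]),
              ("test_data", ["my_openbot_3"])]
    for split, bots in layout:
        for bot in bots:
            os.makedirs(os.path.join(root, split, bot), exist_ok=True)

make_dataset_skeleton("dataset")
```

Unzipped recordings then go inside the per-robot folders.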

Each recording corresponds to an unzipped zip file exported from the *Openbot* folder on the phone.

## Policy Training
You first need to set up your training environment.


### Dependencies

We recommend creating a conda environment for OpenBot. Instructions for installing conda can be found [here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/).


If you do not have a dedicated GPU (e.g. when using your laptop), you can create a new environment with the following command:

```
conda create -n openbot python tensorflow notebook matplotlib pillow
```
Note that training will be very slow in this case, so if you have access to a computer with a dedicated GPU, we strongly recommend using it. You will then need TensorFlow with GPU support; run the following command to set up the conda environment:

```
conda create -n openbot python tensorflow-gpu notebook matplotlib pillow
```

If you prefer to set up your environment manually, here is a list of the dependencies:
- Tensorflow
- Jupyter Notebook
- Matplotlib
- Numpy
- PIL
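Before launching the notebook, you can quickly check that the dependencies are importable. The helper below is our own sketch; the names passed in are the *import* names (e.g. `PIL` for Pillow, `notebook` for Jupyter Notebook):

```python
import importlib.util

def missing_deps(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Import names for the dependency list above.
print(missing_deps(["tensorflow", "notebook", "matplotlib", "numpy", "PIL"]))
```

An empty list means the environment is ready.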

### Jupyter Notebook

We provide a [Jupyter Notebook](policy_learning.ipynb) that guides you through the steps of training an autonomous driving policy. The notebook generates two `tflite` files, corresponding to the best checkpoint according to the validation metric and to the latest checkpoint. Pick one, rename it to `autopilot_float.tflite`, and replace the existing model at
```
app
└── assets
└── networks
└── autopilot_float.tflite
```
and recompile the Android app.
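The copy-and-rename step can be scripted. A sketch assuming the notebook wrote a file such as `best.tflite` (that source name is illustrative; only the destination path under `app/assets/networks` comes from the text above):

```python
import os
import shutil

def install_model(src_tflite: str, app_root: str = "app") -> str:
    """Copy a trained model into the app's assets as autopilot_float.tflite."""
    dst_dir = os.path.join(app_root, "assets", "networks")
    os.makedirs(dst_dir, exist_ok=True)
    dst = os.path.join(dst_dir, "autopilot_float.tflite")
    shutil.copyfile(src_tflite, dst)
    return dst
```

After this, rebuild the app so the new model is bundled.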
