Update FAQ.md.
Delete outdated questions and answers.

Pull request leela-zero#196.
LL145 authored and gcp committed Oct 29, 2018
1 parent 40260b0 commit a0baa60
Showing 1 changed file with 0 additions and 32 deletions: FAQ.md
@@ -8,17 +8,6 @@

AZ also had this behavior; besides, we're testing our approach right now. Please be patient.

## 为什么现在训练的是5/6 block网络,而AZ用的是20block ##
## Why is the network size only 6 blocks, compared to AZ's 20 blocks ##

在项目起步阶段,较小的网络可以在短时间内得到结果,也可以尽早发现/解决问题,

目前的主要目的是为了测试系统的可行性,这对今后的完整重现十分重要(为将来的大网络打好基础)。

This is effectively a testing run to see if the system works, and which things are important for doing a full run. I expected 10 to 100 people to run the client, not 600.

Even so, the 20-block version is 13 times more computationally expensive, and is expected to make SLOWER progress early on. I think it's unwise to do such a run unless it's proven that the setup works, because you are going to be in for a very long haul.
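
The 13x figure can be sanity-checked with back-of-the-envelope arithmetic. A rough sketch, under the assumption (not stated here) that the current network is 6 blocks of 128 filters and an AZ-sized one is 20 blocks of 256 filters; the cost of the residual tower scales roughly with blocks times filters squared:

```python
# Rough compute comparison between the small net and an AZ-sized one.
# Assumed shapes (not stated in the FAQ): 6 blocks x 128 filters vs
# 20 blocks x 256 filters. Per-block convolution cost grows ~filters^2.
small_blocks, small_filters = 6, 128
big_blocks, big_filters = 20, 256

ratio = (big_blocks / small_blocks) * (big_filters / small_filters) ** 2
print(f"~{ratio:.1f}x more expensive per evaluation")  # ~13.3x
```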

## 为什么比较两个网络强弱时经常下十几盘就不下了 ##
## Why are only dozens of games played when comparing two networks ##

@@ -33,27 +22,6 @@ We use SPRT to decide if a newly trained network is better. A better network is
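
As an illustration of why a match can stop after only a few dozen games, here is a minimal SPRT sketch. The Elo bounds and confidence levels below are illustrative assumptions, not necessarily the exact values the leela-zero match server uses:

```python
import math

def elo_to_winrate(elo_diff):
    """Expected win rate for a given Elo advantage."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def sprt(wins, losses, elo0=0.0, elo1=35.0, alpha=0.05, beta=0.05):
    """Sequential probability ratio test on match results (illustrative bounds).

    H0: the new network is no stronger (elo0); H1: it is at least elo1 stronger.
    Returns "accept" (promote), "reject" (discard), or "continue" (keep playing).
    """
    p0, p1 = elo_to_winrate(elo0), elo_to_winrate(elo1)
    # Log-likelihood ratio of the observed results under H1 versus H0.
    llr = wins * math.log(p1 / p0) + losses * math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    if llr >= upper:
        return "accept"
    if llr <= lower:
        return "reject"
    return "continue"

# A lopsided result is decisive after only a few dozen games:
print(sprt(wins=40, losses=5))  # "accept"
```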

The number of MCTS playouts in self-play games is only 3200, and noise is added (for randomness in each move, so the training has something to learn from). If you load Leela Zero with Sabaki, you'll probably find it is actually not that weak.
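
Part of that randomness comes from AlphaZero-style Dirichlet noise mixed into the root move priors. A minimal sketch, assuming the AlphaGo Zero paper's constants (epsilon = 0.25, alpha = 0.03) rather than Leela Zero's exact settings:

```python
import numpy as np

def add_root_noise(priors, epsilon=0.25, alpha=0.03):
    """Mix Dirichlet noise into the root priors (AlphaZero-style sketch).

    `priors` holds the policy probabilities over legal moves. The epsilon and
    alpha values are the AlphaGo Zero paper's constants and are assumptions
    here, not necessarily what Leela Zero uses.
    """
    noise = np.random.dirichlet([alpha] * len(priors))
    return (1.0 - epsilon) * np.asarray(priors) + epsilon * noise

# A sharp prior gets softened, so rarely-considered moves still get explored.
print(add_root_noise([0.7, 0.2, 0.05, 0.05]))
```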

## 自对弈为什么使用3200的模拟次数,而不是AZ的1600 ##
## For self-play, why use 3200 visits instead of 1600 playouts as in AZ ##

没人知道AZ的1600是怎么得到的。这里的3200是基于下面几点估计得到的:

1. 对于某一个选点,MCTS需要模拟几次才能得出概率结果。在开始阶段,每个选点的概率不会差太多,所以开始的360次模拟大概会覆盖整个棋盘。所以如果要让某些选点可以做几次模拟的话,大概需要2到3 x 360次的模拟。

2. 在computer-go上有人跑过7x7的实验,看到模拟次数从1000到2000的时候性能有提高。所以如果我们观察到瓶颈的时候,可能是可以考虑增加模拟次数。

3. 模拟次数太多会影响得到数据的速度。

Nobody knows. The Zero paper doesn't mention how they arrived at this number, and I know of no sound basis for estimating the optimum. I chose it based on some observations:

a) For the MCTS to feed back search probabilities to the learning, it must be able to achieve a reasonable amount of look-ahead on at least a few variations. In the beginning, when the network is untrained, the move probabilities are not very extreme, and this means that the first ~360 simulations will be spent expanding every candidate move at the root. So if we want to look deeper into at least a few moves, we probably need 2 to 3 x 360 playouts.

b) One person on computer-go, who ran a similar experiment on 7x7, reported that near the end of the learning, he observed increased performance from increasing the number from 1000 to 2000. So maybe this is worthwhile to try when the learning speed starts to decrease or flatten out. But it almost certainly isn't needed early on.

c) Obviously, the speed of acquiring data is inversely proportional to this setting: doubling the playouts roughly halves the rate at which self-play games are produced.

So, the current number is a best guess based on these observations (a rough sketch of the arithmetic behind point a) is shown below). To know for sure what the best value is, one would have to rerun this experiment several times.
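
In the standard AlphaZero recipe, which Leela Zero broadly follows, the training target is the normalized distribution of root visit counts. The sketch below (the temperature parameter and visit numbers are illustrative) shows why ~361 playouts on an empty 19x19 board give a nearly uniform, uninformative target, while a few multiples of that start to carry real signal:

```python
import numpy as np

def policy_target(visit_counts, temperature=1.0):
    """Turn root visit counts into a search-probability training target.

    Standard AlphaZero-style target: pi(a) is proportional to N(a)^(1/T).
    """
    counts = np.asarray(visit_counts, dtype=np.float64) ** (1.0 / temperature)
    return counts / counts.sum()

# ~361 playouts spent expanding every root move once: nearly uniform target.
print(policy_target([1] * 361)[:5])                     # ~0.0028 each
# ~1000 playouts with some concentration: an informative target.
print(policy_target([300, 200, 140] + [1] * 358)[:3])   # ~0.30, 0.20, 0.14
```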

## 有些自对弈对局非常短 ##
## Very short self-play games end with White winning?! ##

