Reinforcement Learning #55

Open
srini1948 opened this issue Jul 5, 2017 · 49 comments

@srini1948

RL has been added to the original ConvNetJS.
Will you be adding it here too?
Any plans for LSTM?

Thanks

@MarcoMeter

+1

It shouldn't be that tough to implement DQN. Maybe I can contribute that in about 8 weeks, though I haven't checked yet whether ConvNetSharp is suitable, performance-wise, for my implementation in Unity.

@srini1948
Author

srini1948 commented Jul 5, 2017 via email

@cbovar
Owner

cbovar commented Jul 6, 2017

For DQN you can check out this repo. It should be easy to adapt it to a newer version of ConvNetSharp.

I have worked on LSTM. I will eventually release a 'Shakespeare' demo. I have only worked on the GPU versions.

@cbovar
Owner

cbovar commented Jul 6, 2017

I also see a DQN using WPF for display in this fork

@MarcoMeter

I've worked with Deep-QLearning-Demo over the past few weeks, but it lacks performance (it's single-threaded) and the code is hard to read and maintain. Then again, it was adapted almost completely from the ConvNetJS version, which uses that unusual coding convention.

@srini1948
Author

srini1948 commented Jul 6, 2017 via email

@srini1948
Author

srini1948 commented Jul 6, 2017 via email

@srini1948
Author

srini1948 commented Jul 7, 2017 via email

@MarcoMeter

I'm applying DQN to my game BRO ( https://www.youtube.com/watch?v=_mZaGTGn96Y ) right now.
Within the next few months, I'll release BRO as open source on GitHub. BRO features an AI framework and a match sequence editor for match automation. The game is built with Unity.

Right now I need a much faster DQN implementation. The DQN demo mentioned above falls short there: training takes 30 minutes. That's why I'm considering contributing DQN to this repo.

And this is a video about the AI framework and the match sequence editor: https://www.youtube.com/watch?v=EE7EqoaOL34

@srini1948
Author

srini1948 commented Jul 7, 2017 via email

@MarcoMeter

Hey,

if anybody has ideas for testing the DQN algorithm once it's implemented, please let me know.

So far I've got these ideas for integration testing:

  • ConvNetJS's apples and poison example (Windows Forms), just like the already mentioned C# port (Deep-QLearning-Demo-csharp)
  • a slot machine (just a console application)
  • a moving target that has to be shot by the agent (maybe in the Unity game engine)
  • an agent that has to move a basket to catch fruit and avoid items like poison (maybe in the Unity game engine)

I'll probably find more through research.

After that, I'll start on the DQN implementation. I'll probably start from the "Deep-QLearning-Demo-csharp" implementation and then compare it to the Python implementation done by DeepMind for the Atari games.

@srini1948
Author

srini1948 commented Jul 24, 2017 via email

@cbovar
Owner

cbovar commented Jul 24, 2017

Maybe you could also try a very simple task: reproduce the input:

  • 0 -> 0
  • 1 -> 1

It may fit in a unit test.
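For illustration, here is a minimal sketch of such a "reproduce the input" environment that a unit test could drive. The class and member names are hypothetical, not part of ConvNetSharp:

using System;

// Hypothetical "copy the input" environment: the state is 0 or 1, there are two
// actions, and the reward is 1 when the chosen action equals the state, else 0.
public class CopyInputEnvironment
{
    private readonly Random _random = new Random(0); // fixed seed keeps the test deterministic

    public int State { get; private set; }

    public int Reset()
    {
        this.State = _random.Next(2); // sample a new input, either 0 or 1
        return this.State;
    }

    public double Step(int action)
    {
        return action == this.State ? 1.0 : 0.0; // reward only for reproducing the input
    }
}

A test could then train the learner on this environment for a fixed number of steps and assert that the greedy action equals the input for both states.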

@MarcoMeter

MarcoMeter commented Jul 25, 2017

[Screenshot: slot machine console app]
I wrote a simple slot machine (console app) with 3 reels. Just hold down space to start the slot machine and then stop each reel one by one.

New items for the reels' slots are sampled from a specific probability distribution.

In the end, the agent has to decide when to stop the first reel, the second reel and finally the third one.
(I should consider letting the AI decide which reel to stop first, to add a few more dimensions to the outputs.)

SlotMachine.zip

Given this slot machine example, I'm going to approach the DQN implementation now.
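As an aside, sampling a reel item from a specific probability distribution can be done with a simple cumulative-weight lookup. A minimal sketch; the item names and weights are made up for illustration:

using System;

public static class ReelSampler
{
    private static readonly string[] Items = { "Cherry", "Bell", "Seven" };
    private static readonly double[] Weights = { 0.6, 0.3, 0.1 }; // must sum to 1
    private static readonly Random Rng = new Random();

    public static string Sample()
    {
        var roll = Rng.NextDouble();
        var cumulative = 0.0;

        // Walk the cumulative distribution until the roll falls inside a slot.
        for (var i = 0; i < Items.Length; i++)
        {
            cumulative += Weights[i];
            if (roll < cumulative)
            {
                return Items[i];
            }
        }

        return Items[Items.Length - 1]; // guard against floating-point rounding
    }
}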

@srini1948
Author

srini1948 commented Jul 25, 2017 via email

@MarcoMeter

Creating an interface between Python and C# might end up consuming too much time. I know there is the so-called IronPython (http://ironpython.net/) library, which allows using Python from C#, but I haven't really looked into it.

@srini1948
Author

srini1948 commented Jul 25, 2017 via email

@MarcoMeter

Here is an update on the progress referring to a commit on the DQN branch of my fork:

Added major chunks of DeepQLearner.cs [WIP]
A few TODOs are left before testing and verification:

  • TODO: Overload or modify RetrievePolicy() to make use of Volumes, return output Volume from the net as well
  • TODO: Overload or modify GetNetInput() to make use of Volumes
  • TODO: Compute loss
  • TODO: Verify the consistency of the composed neural net upon initializing the DeepQLearner

https://github.com/MarcoMeter/ConvNetSharp/commit/5711468362d6f3551f82bad1e24d784e31f59a4b

@MarcoMeter

And there is one more major thing on the list:

Adding a regression layer. I guess there is no regression layer implemented yet, right?
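For reference, a minimal sketch of what an L2 regression layer typically computes; this is illustrative only and not ConvNetSharp's actual implementation:

// Illustrative L2 regression loss: the forward pass is the identity, the loss is
// 0.5 * sum((y - t)^2), and the gradient with respect to the outputs is (y - t).
public static class RegressionLoss
{
    public static double Loss(double[] outputs, double[] targets)
    {
        var loss = 0.0;
        for (var i = 0; i < outputs.Length; i++)
        {
            var diff = outputs[i] - targets[i];
            loss += 0.5 * diff * diff;
        }
        return loss;
    }

    public static double[] Gradient(double[] outputs, double[] targets)
    {
        var grad = new double[outputs.Length];
        for (var i = 0; i < outputs.Length; i++)
        {
            grad[i] = outputs[i] - targets[i]; // d(0.5 * (y - t)^2) / dy = y - t
        }
        return grad;
    }
}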

@cbovar
Owner

cbovar commented Jul 27, 2017

It seems that RegressionLayer disappeared at some point (from tag 0.3.2). I will try to reintroduce it this weekend.

@MarcoMeter

Maybe this is related to this commit, because the file 'F:\Repositories\ConvNetSharp\src\ConvNetSharp\Layers\RegressionLayer.cs' got removed:

Commit: 56fec45 [56fec45]
Parents: 5a47e2e, 37cdfbf
Author: Augustin Juricic ajuricic@neogov.net
Date: Tuesday, 28 March 2017, 11:18:18
Committer: Augustin Juricic
Merge remote-tracking branch 'github/master' into develop

@cbovar
Owner

cbovar commented Jul 28, 2017

I think I have never implemented RegressionLayer since ConvNetSharp started handling batches.

@cbovar
Owner

cbovar commented Jul 29, 2017

RegressionLayer committed

@MarcoMeter

Great, thanks. I'll move on soon.

@MarcoMeter

As of now, I'm struggling with an issue where the computed action values grow exponentially towards positive or negative infinity.

@cbovar
Owner

cbovar commented Jul 31, 2017

Have you tried a lower learning rate, e.g. 0.001?

@MarcoMeter

The learning rate only slightly delays this outcome.

Nevertheless, I expect the output values to be less than 2, simply because the maximum reward for the slot machine example is 1, which is probably handed out after making at least 3 decisions.

@MarcoMeter

I'm still trying to figure out the issue. Maybe I'm misusing the Volume class, or maybe I don't have enough experience with the actual implementation of neural nets (like understanding every single detail of the regression layer implementation). So I'm dropping some more information here.

Here is some pseudocode (Matiisen, 2015) featuring the core pieces of the algorithm:

initialize replay memory D
initialize action-value function Q with random weights
observe initial state s
repeat
    select an action a
        with probability ε select a random action
        otherwise select a = argmax_a' Q(s, a')
    carry out action a
    observe reward r and new state s'
    store experience <s, a, r, s'> in replay memory D

    sample random transitions <ss, aa, rr, ss'> from replay memory D
    calculate target for each minibatch transition
        if ss' is terminal state then tt = rr
        otherwise tt = rr + γ max_a' Q(ss', aa')
    train the Q network using (tt - Q(ss, aa))^2 as loss

    s = s'
until terminated

And this is the stated loss function for training:

L = 1/2 [r + γ max_a' Q(s', a') - Q(s, a)]^2

In Karpathy's DQN implementation, this loss function does not seem to be present explicitly. The regression layer implementations look similar (comparing Karpathy's and this repo's). For the rest, everything is implemented accordingly (i.e. sampling experiences for computing new Q-values).

Using Deep-QLearning-Demo-csharp, the output values for the slot machine stay below 0.02.
SlotMachine.zip
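To make the training step from the pseudocode above concrete, here is a minimal C# sketch of the minibatch target computation. It is written against a hypothetical IQNetwork interface and Experience class, not ConvNetSharp's actual API:

using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for the Q-network: Predict returns Q(s, ·) for all actions,
// Train regresses the outputs for 'state' towards 'target'.
public interface IQNetwork
{
    double[] Predict(double[] state);
    void Train(double[] state, double[] target);
}

public class Experience
{
    public double[] InitialState;
    public int Action;
    public double Reward;
    public double[] FinalState;
    public bool IsTerminal;
}

public static class DqnTraining
{
    public static void TrainOnMinibatch(IQNetwork net, IList<Experience> minibatch, double gamma)
    {
        foreach (var e in minibatch)
        {
            // Start the target from the prediction for the *initial* state so that
            // only the taken action's dimension carries a non-zero regression error.
            var target = net.Predict(e.InitialState);

            var tt = e.IsTerminal
                ? e.Reward
                : e.Reward + gamma * net.Predict(e.FinalState).Max(); // r + γ max_a' Q(s', a')

            target[e.Action] = tt;
            net.Train(e.InitialState, target);
        }
    }
}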

@MarcoMeter

MarcoMeter commented Aug 2, 2017

And this is a flow chart of the Q-learning part of the implementation:
[Flow chart: brain forward/backward pass]

@cbovar
Owner

cbovar commented Aug 3, 2017

I haven't had time to look at the code yet. But you could maybe make the problem even simpler (like this) to make it easier to debug.

@MarcoMeter

I could implement an example for contextual bandits along the lines of the Bandit Dungeon Demo (an example from the same author as the link you provided).

I just fear that the bandit examples are not complex enough to call for a policy network. At least it could be observed whether the Q-values grow to infinity or not.

@srini1948
Author

srini1948 commented Aug 3, 2017 via email

@MarcoMeter

The only news I have is that I'm working on a different example (made with Unity). This example is about controlling a basket to catch rewarding items and avoid punishing ones.

[Screenshot: basket-catch example in Unity]

Concerning the DQN implementation, I'm still stuck. I hope that Cedric can find some time to check the usage of Volumes.

@cbovar
Owner

cbovar commented Aug 10, 2017

Sorry guys. I have been very busy with my new job. I'll try to look at this soon.

@MarcoMeter

I just tested the implementation on the apples & poison example. The issue of exploding output values shows up there as well.

I didn't add the example to version control, since the code is not well written, though functional (I took the known implementation and just substituted the DQN parts).

ApplesPoisonDQNDemo.zip

@MarcoMeter

MarcoMeter commented Aug 18, 2017

Just a quick update:

I created a UI for displaying the training progress. The red graph plots the average reward and the blue one the average loss. I also resolved a bug in the epsilon exploration strategy (epsilon was always equal to 1 due to an integer division; see the sketch below).
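For reference, a sketch of that pitfall with hypothetical names: dividing two ints truncates towards zero, so the annealed epsilon stays at 1 until one operand is cast to double.

using System;

public static class Exploration
{
    // 'age' is the number of training steps taken so far,
    // 'decaySteps' is the number of steps over which epsilon anneals.
    public static double Epsilon(int age, int decaySteps, double minEpsilon)
    {
        // Buggy variant: 'age / decaySteps' is integer division and yields 0 while
        // age < decaySteps, so epsilon would always come out as 1.0.
        // var epsilon = 1.0 - age / decaySteps;

        // Fixed variant: cast one operand to double before dividing, then clamp.
        var epsilon = 1.0 - (double)age / decaySteps;
        return Math.Max(minEpsilon, epsilon);
    }
}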

[Screenshot: training progress UI]

As Cedric fixed a bug in the regression layer, the outputs do not explode anymore. Regardless, I have not achieved good behavior for the slot machine yet. Though I did come up with a new reward function, which signals rewards based on the result of each stopped reel. The first stop rewards the agent with the value of the item in its slot (e.g. 0.5 for a cherry or 1 for a 7). Stopping the second or third reel rewards the agent with 1 for a matching item. For a failure, the agent is punished with -0.5. Waiting neither punishes nor rewards the agent. Most of the time the agent learns to wait; it seems that this way any punishments are avoided.

I'll probably focus on the Apples and Poison demo now, because suitable hyperparameters are already known. One drawback is the performance: the referenced demo performs much better, so I'll have to find the bottleneck.

@cbovar
Owner

cbovar commented Aug 18, 2017

I think you should focus on getting correct results first. As for performance, we can look at it later (using a batch size > 1 and the GPU will help).

@MarcoMeter

MarcoMeter commented Aug 19, 2017

Still, it surprises me that the Apples and Poison demo is much, much slower than Deep-QLearning-Demo-csharp.

[Screenshot: performance profile]

Edit 1: If I enable GPU support by changing the namespaces, I get a BadImageFormatException because it cannot load ConvNetSharp.Volume.GPU, even though it is added to the references of all project dependencies.

Edit 2: The Apples and Poison demo will probably take a whole day of training. It progresses at about 4 fps.

Edit 3: 240,000 learning steps (DeepQLearner.Backward) take 27 hours. In comparison, 50,000 learning steps take less than 9 minutes with Deep-QLearning-Demo-csharp.

@cbovar
Owner

cbovar commented Aug 21, 2017

You probably get the BadImageFormatException because you are building for 32-bit. The GPU version only works in 64-bit.

@MarcoMeter

MarcoMeter commented Aug 21, 2017

Thanks, this solved the BadImageFormatException.

And now it's a CudaException thrown at CudaHostMemoryRegion.cs:25, triggered by:
var chosenAction = _brain.Forward(new ConvNetSharp.Volume.GPU.Double.Volume(GatherInput(), new Shape(GatherInput().Length)));

One question:
Is there any way to avoid specifying the full path to the Volume object, as seen above? VS complains that Volume is a namespace even though the namespace is imported. The ConvNetSharp.Volume namespace is required for the Shape class, so I guess that's the conflict.
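For reference, one standard C# workaround is a using-alias, which sidesteps the clash between the ConvNetSharp.Volume namespace and the GPU Volume class. This is just a sketch reusing the names from the snippet above:

using ConvNetSharp.Volume;                               // needed for Shape
using GpuVolume = ConvNetSharp.Volume.GPU.Double.Volume; // alias for the GPU volume type

// ...
var input = GatherInput();                               // call it once and reuse the array
var chosenAction = _brain.Forward(new GpuVolume(input, new Shape(input.Length)));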

@cbovar
Owner

cbovar commented Aug 26, 2017

I have fixed the loss computation in the regression layer.

I think there is an issue here. You get the output for the FinalState and update the reward related to the current Action. However, you should get the output related to the InitialState.

In ConvNetJS, it only regresses on the current Action's dimension here.

You could do something like that:

// Create desired output volume
var desiredOutputVolume = _trainingOptions.Net.Forward(experience.InitialState).Clone();
desiredOutputVolume.Set(actionPolicy.Action, newActionValue);

I applied this modification on this branch: https://github.com/cbovar/ConvNetSharp/tree/DQN

@MarcoMeter

It looks like you are right about that. I missed that detail inside the train function.

@cbovar
Owner

cbovar commented Aug 26, 2017

As for the exception on the GPU (the CudaException thrown at CudaHostMemoryRegion.cs:25), it turns out it's a multi-threading issue: some volume allocation is done on the worker thread whereas the GPU context was acquired on the main thread.
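For reference, a general pattern for this kind of problem (not ConvNetSharp-specific, just a sketch) is to funnel every GPU-touching call through one dedicated thread, so that volumes are always allocated on the thread that acquired the context:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Every action enqueued here runs on a single background thread, which should also be
// the thread that acquires the GPU context and allocates the volumes.
public sealed class SingleThreadWorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread _thread;

    public SingleThreadWorkQueue()
    {
        _thread = new Thread(() =>
        {
            foreach (var action in _work.GetConsumingEnumerable())
            {
                action();
            }
        }) { IsBackground = true };
        _thread.Start();
    }

    public void Enqueue(Action gpuWork)
    {
        _work.Add(gpuWork);
    }

    public void Dispose()
    {
        _work.CompleteAdding();
        _thread.Join();
    }
}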

@masatoyamada1973

desiredOutputVolume.Set(actionPolicy.Action, newActionValue)

should presumably be

desiredOutputVolume.Set(experience.Action, newActionValue)

@MarcoMeter

Hey,
I wanted to let you guys know that I stopped working on this.
I've switched to working with Python and the just-released Unity ML-Agents.

@GospodinNoob

@MarcoMeter Hello. The link (https://github.com/MarcoMeter/Basket-Catch-Deep-Reinforcement-Learning) is broken. Is there any way to download the source code of this Unity implementation (the Unity project)? Thanks.

@MarcoMeter

@GospodinNoob

@MarcoMeter Thanks

@GospodinNoob

GospodinNoob commented Dec 21, 2017

@MarcoMeter Maybe you have a repo with Unity and your DQN? I am trying to add it, but I still have some misunderstandings about this system. Of course, only if it's not too much trouble for you :)
