
DRAFT The easiest and fastest guide to building your game super AI in JavaScript with Liquid Carrot!


SUPER WORK IN PROGRESS

The final bot code

For those who prefer to learn from a complete example, here is the full code for the bot (without comments):

<insert final code for the bot (not the simulation) without the comments, to emphasize length of code>

The guide

First, make sure you have access to Carrot's API. If you're running your bot in a website, just include this script tag in the <head> section:
<script src="https://liquidcarrot.io/carrot/cdn/0.3.0/carrot.js"></script>

In this example, we are going to make the bot play a very simple game: avoiding the ball. There are two rules in this game:

  1. The user cannot leave the playing area.
  2. The user cannot touch the ball that is following the mouse.

Here is the game in action, with the trained bot.

In this guide, we are going to assume that the game has been coded already. If you're interested in the code for the game, it is provided at the end of the guide.
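The simulation code further down also calls a few helpers that belong to the game rather than to Carrot: getPlayerPosition, getBallPosition, distance, and playerIsOutOfBounds. So that the later snippets read on their own, here is a minimal sketch of what those helpers could look like (the real game code may differ):

// positions are [x, y] pairs, with both coordinates between -1 and 1
function distance([x1, y1], [x2, y2]) {
  return Math.hypot(x1 - x2, y1 - y2);
}

function playerIsOutOfBounds([x, y]) {
  return x < -1 || x > 1 || y < -1 || y > 1;
}

// getPlayerPosition() and getBallPosition() simply return the game's
// current [x, y] for the player and for the ball, respectively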

Building the bot

First, you need to get a bot object. In this case, to train the enemy bot, we are going to create a second bot that behaves like the player. The enemy bot will train by playing against this player bot.

const enemy_bot = Carrot.newBot();
const player_bot = Carrot.newBot();

The bots require at least one example input and output for each kind of input and output they will handle. Each bot will tell us in what direction to move itself, and each bot will receive its own position and the position of the other bot.

The position will be encoded like this: [x_position, y_position]. The coordinates will be between -1 and 1, where -1 and 1 are the borders of the screen. The direction will be encoded like this: [x_velocity, y_velocity]. To keep the movement realistic, the velocity along each axis will be capped at 5 for the player and at 2 for the enemy.

const enemy_input_example = { player_position: [0.7, 0.2], position: [-0.5, 0.6] };
const enemy_output_example = { velocity: [-2, 2] };
const enemy_example = { state: enemy_input_example, output: enemy_output_example };

const player_input_example = { enemy_position: [-0.5, 0.6], position: [0.7, 0.2] };
const player_output_example = { velocity: [-4, 3] };
const player_example = { state: player_input_example, output: player_output_example };

enemy_bot.addExample(enemy_example);
player_bot.addExample(player_example);
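One example per kind of input and output is the minimum; you can add more hand-made examples the same way to give the bots a better starting point (the values below are made up for illustration):

player_bot.addExample({
  state: { enemy_position: [0.9, -0.3], position: [-0.2, -0.8] },
  output: { velocity: [3, -1] }
});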

We want the bot to be working well by the time the player plays against it (or whenever the user has to do anything with the bot). So we want to train the bot beforehand. To do that, we make a simulation in which the bot will learn how it should behave. We are going to use the function bot.deploySimulation(startSimulation, options) which returns a bot simulation interface. More on that soon.

This function requires telling the bot which function starts the simulation. In turn, startSimulation receives a callback that your simulation code must call when the simulation finishes.

This is the simulation function. There are many comments explaining the code. Further explanation is provided afterward.

// the simulation requires that both the player bot and the enemy bot are playing
let number_of_bots_ready = 0;
let bot_simulation_over_callbacks = [];
function startSimulation(simulationOverCallback) {
  number_of_bots_ready++;
  if (number_of_bots_ready > 2) throw new Error('More than two bots attempted to start the simulation - something is wrong');
  bot_simulation_over_callbacks.push(simulationOverCallback);
  if (number_of_bots_ready < 2) return;
  
  const update = function () {
    let player_position = getPlayerPosition();
    let enemy_position = getBallPosition();
    if (distance(player_position, enemy_position) < 0.04 || playerIsOutOfBounds(player_position)) {
      // the enemy caught the player
      player_bot_simulation_interface.stimulus(-10);
      enemy_bot_simulation_interface.stimulus(10);
      
      // tell the bots that the simulation has finished
      bot_simulation_over_callbacks.forEach(callback => {
        setTimeout(callback, 0);
      });
      // reset so the bots can start a fresh simulation
      number_of_bots_ready = 0;
      bot_simulation_over_callbacks = [];
      return;
    }

    // We're also going to guide the enemy bot a bit by rewarding it for being close to the player
    const distance_based_stimulus = 0.0001 / distance(player_position, enemy_position);
    enemy_bot_simulation_interface.stimulus(distance_based_stimulus);

    // we want the enemy bot to act fast, instead of being lazy and 
    // getting rewarded for being close to the player
    // so we will penalize based on time alive
    enemy_bot_simulation_interface.stimulus(-0.001);
    player_bot_simulation_interface.stimulus(0.001); // reward player for staying alive
    
    // now tell each bot the position of the other bot and their position
    player_bot_simulation_interface.updateState({ enemy_position, position: player_position });
    enemy_bot_simulation_interface.updateState({ player_position, position: enemy_position });

    // now request a decision
    const player_velocity = player_bot_simulation_interface.think();
    const enemy_velocity = enemy_bot_simulation_interface.think();
    
    // now move according to the decision
    player_position = player_position.map((coordinate, index) => coordinate + player_velocity[index]);
    enemy_position = enemy_position.map((coordinate, index) => coordinate + enemy_velocity[index]);

    // now call update again in the future
    setTimeout(update, 0);
  }

  // finally, start the simulation update chain
  setTimeout(update, 0);
}
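One more detail: the velocities coming out of think() are supposed to respect the caps from earlier (5 per axis for the player, 2 for the enemy). This guide does not say whether the library enforces that, so a small defensive clamp on the game side is a reasonable assumption:

// clamp each velocity component to [-max_speed, max_speed]
function clampVelocity(velocity, max_speed) {
  return velocity.map(v => Math.max(-max_speed, Math.min(max_speed, v)));
}

// e.g. right after the think() calls inside update():
//   const clamped_player_velocity = clampVelocity(player_velocity, 5);
//   const clamped_enemy_velocity = clampVelocity(enemy_velocity, 2);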

Okay, enough code! Here's what's going on.

You tell the bot whether its behavior is good or bad by providing stimuli (plural of stimulus) via bot.stimulus(Number). Positive numbers reward the behavior, negative numbers penalize it.

For the bot to make a decision it requires an input and a state; either one can be empty. You get a decision by telling the bot to think with bot.think(input). You set the state and update it with the same function, bot.updateState(state_update).

This function updates only the fields you pass; it does not replace the entire state. So, if the time changed but everything else stayed the same, you would call bot.updateState({ time: new_time }).
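As a concrete illustration of that merge behavior (a hedged sketch; the field names here are made up):

bot.updateState({ position: [0.1, 0.2], time: 0 }); // state is now { position: [0.1, 0.2], time: 0 }
bot.updateState({ time: 1 });                       // state is now { position: [0.1, 0.2], time: 1 }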

Having an input that is independent of the state enables a number of features. Among the most important is the ability to query. For example, you can teach the bot that the reply to bot.think({ color: true }) is { color: 'blue' }, while the reply to bot.think({ resources: 0 }) is { velocity: 0, turn_light: true }. Of course, you would still be using the state to provide information about what's going on.

You could also forget about the state and exclusively use the input for requesting output. Just add some examples beforehand so the bot knows what to expect. In the current game, this would be done by not using bot.updateState(), and instead passing the state to the think function (also update the examples accordingly). It looks like this: player_bot_simulation_interface.think({ enemy_position, position: player_position }).
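For this game, the updated example could look something like this (a hedged sketch, assuming addExample accepts an input field in place of state):

const player_input_only_example = {
  input: { enemy_position: [-0.5, 0.6], position: [0.7, 0.2] },
  output: { velocity: [-4, 3] }
};
player_bot.addExample(player_input_only_example);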

Finally, you would start all the action by doing this:

const player_bot_simulation_interface = player_bot.deploySimulation(startSimulation);
const enemy_bot_simulation_interface = enemy_bot.deploySimulation(startSimulation);

Deploying to the final game

Once your bots are trained, simply call

const importing_information = enemy_bot.export();

This will give you a string. Copy the string somewhere and then, in your game (or in the cloud), you can use your bot by calling

enemy_bot.import(importing_information);
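For example, in a browser you could stash the exported string in localStorage and load it back inside the game (localStorage is just one option for storage, not something the library requires):

// after training, in the simulation page:
localStorage.setItem('enemy_bot_export', importing_information);

// later, in the final game:
const enemy_bot = Carrot.newBot();
enemy_bot.import(localStorage.getItem('enemy_bot_export'));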

You have to deploy your bot in the final application. You do this by calling

const enemy_bot_interface = enemy_bot.deployReal();

Using deployReal instead of deploySimulation is important. Internally, the bots behave differently depending on the environment: real deployments are optimized for performance, and they accept a different set of options via deployReal(options). Bots can also keep training during a real deployment (again through stimuli), but that is more advanced; links with more information will be added soon.
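For instance, assuming the real interface accepts stimuli the same way the simulation interface does (the exact call may differ), you could keep rewarding the enemy bot during real play:

// reward the enemy bot whenever it catches the player in the real game
if (distance(player_position, enemy_position) < 0.04) {
  enemy_bot_interface.stimulus(10);
}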

Using the bot in the final app is very similar to the simulation. For example:

enemy_bot_interface.updateState({ player_position, position });
const ball_velocity = enemy_bot_interface.think();

You can provide options by using think(undefined, options). Obviously, the first parameter can also be an input for the bot. Some options that can be passed include { think_time: xxx, allow_exploration: true }.
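For example (the think_time value and its unit here are placeholders, just to show the shape of the call):

const cautious_velocity = enemy_bot_interface.think(undefined, { think_time: 50, allow_exploration: true });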

That's it for this guide! I hope you had fun and you're ready to build some super AI using Liquid Carrot!