Updated style again
DaRavenox committed Dec 19, 2018
1 parent b2fd682 commit e5a5411
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions book/AIDO/70_developers/50_define.md
@@ -7,7 +7,7 @@ The main parts of the challenge are: the evaluation folder and the ```challenge.

#### challenge.yaml

The `challenge.yaml` file defines the challenge name and metadata about the challenge (tags, the title to display on the server, the dates during which the challenge should be accessible, and a description of the challenge). This file also defines the associated score and the steps the challenge runs through. These can be defined as follows:

```
scoring:
@@ -35,7 +35,7 @@ steps:
disk_available_mb: 100
```

The top-level key, steps, indicates that we are going to define the steps of the challenge here. The next level contains the steps themselves. In our example we have only a single step, namely the step which generates the scores for our submission. One level down we define the title and description of the step, the timeout, the parameters to be used for evaluation, and the features required. In `evaluation_parameters` we define the services: the evaluator and the solution. The simplest definition for these, and the one we will use, is to build the evaluator from the Docker image we will define in the evaluation folder and to use the submission image as the solution image. Finally, features_required defines how much RAM and disk space we need for our evaluation (in MB).
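
For orientation, a single step of the shape described above might look roughly like the following sketch. Only `disk_available_mb` appears in the excerpt; the step name, the `build`/`image` entries, the `SUBMISSION_CONTAINER` placeholder, `ram_available_mb`, and the timeout value are assumptions based on the description and may differ from the actual file.

```
steps:
  # Hypothetical single scoring step; names and values are illustrative.
  step1-scoring:
    title: Scoring step
    description: Runs the submission and computes the score.
    timeout: 600                       # assumed timeout for this step
    evaluation_parameters:
      services:
        evaluator:
          build:
            context: ./evaluation      # assumed: evaluator built from the evaluation folder
        solution:
          image: SUBMISSION_CONTAINER  # assumed placeholder for the submission image
    features_required:
      ram_available_mb: 100            # assumed companion to the disk requirement
      disk_available_mb: 100           # matches the excerpt above
```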

The next part is the state machine that defines how we move from step to step. In our case it is defined as follows.
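
A transitions list of this kind typically reads as a set of edges leading from the START state, through the step, to the terminal states. The following is only a rough sketch; the step name and the exact state labels are assumptions, not necessarily what this challenge uses.

```
transitions:
  # Hypothetical transitions for a single-step challenge; names are illustrative.
  - [START, success, step1-scoring]
  - [step1-scoring, success, SUCCESS]
  - [step1-scoring, failed, FAILED]
  - [step1-scoring, error, ERROR]
```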

@@ -53,7 +53,7 @@ If you have only one step, as the simple regression test here, then you don't ha

#### The evaluation

The evaluation is found in the `evaluation` folder. The important, non-boilerplate file is `eval.py`. An example of the contents of this file can be found below.

```
#!/usr/bin/env python
@@ -86,5 +86,5 @@ if __name__ == '__main__':
wrap_evaluator(Evaluator())
```

What we care about here are the prepare and score functions. In the prepare function we define the input we want to give to the user's submission code. The simplest way to do this is to define a dictionary containing fields for all the input data and then pass this dictionary to the `cie.set_challenge_parameters` function. This dictionary can then be accessed in the solution code by calling `cie.get_challenge_parameters`. The second part is the score function. This function takes the contents you put in the `solution_output_dict` in your submission and uses them to generate a numerical score. The numerical score is then passed to the system by calling `cie.set_score` with the corresponding score field (since we only have one score, we called it score1 in the `challenge.yaml` file), along with the actual numerical score we want to give the submission and a short text blurb related to the score.
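
To make this concrete, a minimal `eval.py` along these lines might look like the sketch below. Only `wrap_evaluator`, `Evaluator`, `cie.set_challenge_parameters`, `cie.set_score`, and the score1 name come from the text above; the import path, the prepare/score signatures, the `get_solution_output_dict` helper, and the dictionary fields are assumptions.

```
#!/usr/bin/env python
# A minimal sketch of an evaluator; the import path, method signatures, and
# get_solution_output_dict are assumptions, not confirmed by the excerpt above.
from duckietown_challenges import wrap_evaluator


class Evaluator(object):

    def prepare(self, cie):
        # Input handed to the user's submission code; the submission reads it
        # back with cie.get_challenge_parameters().
        parameters = {'samples': [1, 2, 3]}  # hypothetical input data
        cie.set_challenge_parameters(parameters)

    def score(self, cie):
        # Assumed helper: read what the submission stored in solution_output_dict.
        output = cie.get_solution_output_dict()
        result = float(output.get('result', 0.0))  # hypothetical field name
        # 'score1' is the score name declared in challenge.yaml; the last argument
        # is the text blurb mentioned above.
        cie.set_score('score1', result, 'score computed from the submission output')


if __name__ == '__main__':
    wrap_evaluator(Evaluator())
```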
