# Model Runs and Review Tab
Note: development of this feature has been on the shelf for a while. Some of the information on this page is outdated, but it should still be possible to try out the feature using the instructions in the How to run it section.
The NetLogo development team at Northwestern is making model runs into artifacts that can be recorded, replayed, saved to disk, emailed, etc. This is for possible inclusion in some future release of NetLogo.
This support exists both in headless mode and in the NetLogo GUI.
It isn't connected to BehaviorSpace (for now), and (for now) it only records visible state, not the complete world state.
On this page you can learn how to run it, how to record your runs, about the review tab interface, and about the runs extension.
The page Model Runs: To Do and Code Overview gives more information about the state of the project, including incomplete tasks and an overview of the code.
## How to run it

(Please note: the information on this page describes functionality that does not yet exist in any official release of NetLogo; the things it says you can "now" do refer to functionality that lives on the `6.x` branch, which was intended to become NetLogo 6.0.)

Development work is currently stopped. The most recent version of the code lives at the `6.x-archive` tag.
Here's how to build and run it:

```shell
git clone git@github.com:NetLogo/NetLogo.git
cd NetLogo/
git checkout 6.x-archive
git submodule update --init
./sbt all
./sbt run
```
(Note that you'll need a Java 7 JDK, which might be tricky to install nowadays since Oracle support for it has reached end of life.)
The last command should launch the GUI. It's also the only command you need to repeat in order to launch it again.
To turn on recording, enable the Review tab from the Tools menu.
Having trouble? Ask questions on netlogo-devel.
## Recording your runs

NetLogo now allows you to record, replay, annotate, and save specific runs of a model.
To make use of this feature, you need to bring up the Review tab by selecting it from the Tools menu or by pressing Ctrl+Shift+R.
Notice the Recording check box in the toolbar. It has to be checked for recording to take place. It is checked by default when you first activate the Review tab.
We will come back shortly to the other elements of the Review tab interface, but first, let's see how to record your runs.
The Review feature is designed to work with models using tick-based view updates. Most of the models in the NetLogo models library use tick-based updates, but the default setting of a new NetLogo model is still "continuous" view updates. Nothing gets recorded if you use continuous updates, so you should be careful to switch to tick-based updates if you want to use the Review feature for a model that you created yourself.
The recording of a model run starts at the moment the tick counter is reset (i.e., at the moment `reset-ticks` is called). This typically happens during the execution of a procedure named `setup`, but it doesn't have to: `reset-ticks` would have the same effect if called from somewhere else.

If a model run is already being recorded at the moment `reset-ticks` is called, recording for that run stops and a new run starts being recorded.
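As a minimal sketch (the procedure bodies here are illustrative, not taken from any particular model), a model structured in the usual `setup`/`go` style starts a fresh recording every time the setup button is pressed:

```netlogo
to setup
  clear-all
  create-turtles 10
  reset-ticks        ;; recording of a new run starts here
end

to go
  ask turtles [ forward 1 ]
  tick               ;; asks for a display update, which records a new frame
end
```

Pressing setup a second time while go is still running would end the current recording and begin a new one.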
We will come back later to the question of recording multiple runs and actually reviewing them, but there is an important thing to understand before moving on to this: the concept of a frame, and how it differs from ticks.
A model run gets recorded as a sequence of frames. A frame represents the visual state of a model at a specific point in time. There is usually one frame per tick, but that is not always the case. A new frame is recorded every time a model asks for a display update. The `tick` command is the most common way of asking for a display update, but it is not the only one. There is also `display`. In the Fireworks model from the library, for example, `tick` is only called at the end of a round of fireworks, but `display` is called each time the `go` procedure runs, i.e., multiple times per tick. If you record a run of the Fireworks model, you will see that multiple frames are recorded per tick.
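A `go` procedure in the style of Fireworks might look roughly like this (a simplified sketch; `fall` and `launch-new-round` are hypothetical stand-ins, not the actual model code):

```netlogo
to go
  ask turtles [ fall ]    ;; stand-in for the model's per-step movement code
  display                 ;; records a frame without advancing the tick counter
  if not any? turtles [
    launch-new-round      ;; stand-in for starting the next round of fireworks
    tick                  ;; one tick per round; this also records a frame
  ]
end
```

Here every call to `go` records a frame via `display`, but the tick counter only advances once per round, so frames outnumber ticks.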
Another case where ticks and frames can be out of sync is if you start recording after a run has already been started. In this case, your frames will start at 1 just like they always do, but your ticks will start at some higher number. Yet another case is if you pause recording (by temporarily unchecking the Recording checkbox) during a simulation, which would create a gap between ticks and frames.
Let us start by noting that, currently, only the visual elements of a model get recorded in a run. This means everything that appears on the screen: the content of the view, the state of the different interface elements (i.e., widgets), and the plots. This saves some memory and allows NetLogo to record new frames faster, but it also means that not all information about the state of a model can be reviewed from a recorded run. You cannot, e.g., bring up agent monitors and inspect the breed variables of a turtle. Basically: if you don't see it on the screen, it's not recorded.
Each frame actually stores only the differences from the previous frame. This is what allows NetLogo to record whole runs of a model: if it tried to record the full state of the model at each frame (even with just the visual elements), memory would quickly fill up. Recording only differences saves a lot of memory, though it is still expensive for models with many agents that change often. It also means that, when you scroll through a recorded run, NetLogo has to reconstruct each frame by re-applying these differences to a previous state. We have tried to optimize this process as best we could, but you may still notice a bit of lag sometimes.
## The Review tab interface

Now that we have covered the basic concepts of model run recording, let us turn our attention to the actual Review tab and how to interact with it.
We will use the classic Wolf Sheep Predation model as an example. You can load it from the Models library (File / Models Library in the menu, or Ctrl+M). Next, make sure that the Review tab is activated (Tools / Show Review Tab in the menu, or Ctrl+Shift+R) and that the Recording check box in the Review tab is checked.
If you look at the Code tab, you can see that the `setup` procedure ends with `reset-ticks` and that the `go` procedure issues the `tick` command towards its end. Running `setup` should thus start recording a new run, and a new frame should be created each time `go` is executed. Let us try this from the interface by first clicking the setup button, then clicking the go button and letting the model run for a while (or until all the wolves and sheep die out, as sometimes happens). After stopping the model, switch to the Review tab.
The first thing you should notice is that the interface of the model is also shown in the Review tab. You can't, however, interact with it in the same way as you can in the regular Interface tab. The setup and go buttons are not clickable and other widgets can't be interacted with either.
When you navigate through the run, though, you will see the interface panel being updated for each frame as the run is replayed for you.
Navigation is done through what we call the "scrubber panel," which is located right below the model's interface:
The first thing indicated is the current frame. Again, do not confuse frames with ticks: ticks are shown at the top of the view widget, just like in the Interface tab.
The middle part of the scrubber panel is occupied by the scrubber itself. The scrubber looks similar to a NetLogo slider. You can drag it back and forth to change which frame you are looking at (which can be quite fun).
To the right are six buttons allowing you to perform the following actions:
| Button | Action |
| --- | --- |
| | Go to first frame |
| | Go back 5 frames |
| | Go back 1 frame |
| | Go forward 1 frame |
| | Go forward 5 frames |
| | Go to last frame |
The left of the window is occupied by the list of loaded runs. If you have followed the instructions we have provided so far, you should only see one run, and it should be called "* Wolf Sheep Predation" (the asterisk in front of the name indicates that it hasn't been saved yet).
Let us go back to the interface tab and record a few more runs of Wolf Sheep Predation. Click on setup, and then go. Let the model run for a little bit, and then click on setup again. You don't have to unpress go: you can just leave it running and click setup a few more times. When you think you have enough runs, you can unpress go and switch back to the Review tab. The run list should now look something like this:
You can see that NetLogo added numbers in parentheses to differentiate the runs from one another. As we will see later, you can always rename a run if you don't like that.
By clicking on the different runs on the list, or by using ↑ and ↓ on your keyboard when the list is selected, you can go from one run to another and examine them, compare them, etc.
You are not limited to multiple runs of the same model, however: you also can have runs from different models loaded in memory.
To see how this works, go back to the Models Library and open the Ants (Perspective Demo) model in the Code Examples / Perspective Demos section.
In the Interface tab, click setup and then go. While the model is running, you can successively click the watch one-of turtles and follow one-of turtles buttons. Then stop the model and switch back to the Review tab. The run list should now look like this:
Your Ants run is loaded in the Review tab. And if you scrub back and forth, you will see that the different perspectives you tried during the run are taken into account.
Now what if you want to switch back to a Wolf Sheep Predation run? You can (just click on it), but there is one important thing to be aware of: switching to a run loads the corresponding model into the Interface tab. This also means that the currently loaded model (Ants (Perspective Demo), in this case) needs to be closed first. Don't worry, though: if you have made any changes to it, NetLogo will offer to save them before closing the model.
You should also keep in mind that, from the point of view of NetLogo, any change that you make to the code of a model turns it into a different model, even if it has the same name and general behavior. Let's say you have Wolf Sheep Predation loaded, for example, and that you first record a regular run. You then make some change to the code. It can be something as insignificant as wanting your wolves to look bigger and changing `set size 2` to `set size 3` in the `setup` procedure. Record a run with your bigger wolves and play around with it a bit in the Review tab. Great. Now click back on a previous Wolf Sheep Predation run. NetLogo will ask you if you want to save your changes, because it now wants to load the regular, unmodified Wolf Sheep Predation model.
We haven't said anything about it yet, but you probably have noticed that the bottom section of the Review tab is occupied by two sub-tabs named Indexed notes and General notes:
As you might have guessed, these allow you to annotate your runs. Annotations are saved with a run, so you will get them back when you load the run again later.
Indexed notes are associated with a particular frame of the current run. To add a new note, scrub to the frame of interest and click the Add note button. This will add a new note and take your cursor inside the Notes column of the table shown below:
The first two columns indicate the Frame and Ticks that the note is associated with. In most cases, these numbers will be equal, but as we have seen earlier with the Fireworks example, they can sometimes be different.
The Notes column doesn't give you a lot of space for editing, but if you want to enter more extensive notes, you can click the little pen button to the right. This will bring up a separate edit window:
Just click OK to save your changes (or Cancel if you don't want to save them).
If you want to delete a note altogether, you can click the little trashcan button to the right of it.
One last thing to note about indexed notes is that clicking on one of them will automatically scrub to the corresponding frame in your run. Thus, they can act as some sort of bookmarks, allowing you to quickly jump to points of interest in your runs.
There is not much to say about general notes. They are what they are:
At the top of the Review tab, there is a toolbar:
We have already seen what the Recording check box is for (i.e., turning recording On or Off). The button names should be pretty self-explanatory, but here are some short explanations nonetheless:
| Button | Action | Description |
| --- | --- | --- |
| | Save the current run | NetLogo will bring up the regular "Save as..." dialog box and give you a chance to save your run file. The default extension is `.nlrun`, but you can name it however you like. A copy of the model is stored inside the run file. |
| | Load a new run | Brings up a dialog for you to choose the run file that you want to load in memory. The model that the run was generated from is loaded as well. |
| | Rename the current run | Brings up a simple input dialog allowing you to enter a new name for the run. |
| | Close the current run | If the run is not saved yet, you will be offered the chance to save it. |
| | Close all loaded runs | For each unsaved run, NetLogo will ask you if you want to save it. |
## The runs extension

There is an extension for interacting with model runs through NetLogo code. It currently provides only the `runs:add-annotation` primitive, but should eventually do more.
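A hypothetical usage sketch follows; the exact syntax of the primitive (in particular, whether it takes the note text as a single string argument) is an assumption, so check the extension's source before relying on it:

```netlogo
extensions [ runs ]

to go
  ask turtles [ forward 1 ]
  tick
  if not any? turtles [
    ;; attach an indexed note to the frame that was just recorded
    runs:add-annotation "all turtles are gone"
  ]
end
```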