==================================================
TODO list, not at all sorted by priority
==================================================
* doc: mvlc and jumbo frames
* trigger_io gui
- show static connections when hovering over units.
- highlight possible connections when hovering a pin
- allow making connections by dragging from pin to pin or by clicking two
pins.
- maybe allow inline editing of names.
- elide long names. maybe add logic to make the block wider when longer
names are used or increase the width in general.
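A rough character-based sketch of middle-eliding (the helper name is hypothetical; Qt's QFontMetrics::elidedText would do this pixel-accurately and is the better fit for the widget):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Keep the head and tail of a long name, replacing the middle with "...".
// Purely illustrative; real eliding should be based on rendered width.
std::string elideMiddle(const std::string &name, size_t maxLen)
{
    if (name.size() <= maxLen || maxLen < 5)
        return name;

    const size_t keep = maxLen - 3; // room for the "..."
    const size_t head = keep / 2 + keep % 2;
    const size_t tail = keep - head;

    return name.substr(0, head) + "..." + name.substr(name.size() - tail);
}
```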
* mvlc locking: use a separate lock for pipe stats. this should fix gui stalls
at low rates because the stats lock would only be taken for the short time
the stats update requires, not for the length of the read operation (which
might take the full timeout interval).
http://www.sympnp.org/proceedings/57/G29.pdf
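A minimal sketch of the split-lock idea (PipeCounters/Pipe are placeholder names, not the actual mvlc classes): the stats mutex is held only for the cheap counter update or copy, never across a blocking read, so the GUI never waits out the read timeout.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

struct PipeCounters { size_t bytesRead = 0; size_t reads = 0; };

class Pipe
{
public:
    // Called by the readout thread after each read completes.
    void recordRead(size_t bytes)
    {
        std::lock_guard<std::mutex> guard(countersMutex_); // held briefly
        counters_.bytesRead += bytes;
        ++counters_.reads;
    }

    // Called by the GUI thread; never contends with a blocking read.
    PipeCounters copyCounters() const
    {
        std::lock_guard<std::mutex> guard(countersMutex_);
        return counters_;
    }

private:
    mutable std::mutex countersMutex_; // protects counters_ only
    std::mutex readMutex_;             // serializes the actual pipe reads
    PipeCounters counters_;
};
```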
* load and save button in the trigger ui editor are hidden now.
implement the functionality!
what should happen if the script is not a trigger io script? meaning the
correct meta block doesn't exist in it?
* generate text file of input and output names used in the setup. this could be
printed and used at experiment time to figure out what's what.
* when no data arrives from the mvlc_eth while the debug gui is open the ui
thread gets very sluggish. this is because the debug gui polls the pipe queue
sizes which means it has to acquire the locks. at the same time the readout
thread will read till timeout so the lock is held for the timeout period.
I think the lock might be taken to get the pipe receive stats shown in the
daqstats widget.
* when modifying the mvlc trigger io via the gui the vme config is not marked
as modified. I'm not sure why because the underlying script does get
modified. maybe i'm ignoring the modification on purpose?
* daqcontrol listfile filename is not updated when using the vmusb
* Event Server component only mentioned in the daq_control section of the docs.
This needs to have its own section once it performs better.
* Docs: info about basic controls. right-click context menus, shift and control
click selections, double clicks :D
* Help overlay for the analysis. Big fat arrows indicating data flow and arrows
pointing at histograms.
* add a vme script library for manual scripts and display the available scripts
in the 'add script' context menu. populate this with mvlc counter readouts
and soft trigger activation scripts.
* Fix the mvlc_daq setup_trigger_io reset stuff
Without doing a reset the stack will still be executed after having removed
the corresponding event from the vme config. What happens is that the daq
init code resets all stack offsets which means stack number two points to the
same data as the direct exec stack0. This then means that every time the
trigger for the now removed event is activated, for example via a manually
setup timer, the current command stack will be executed. What's weird is
that I do not see any errors or stat counts confirming this in the GUI at
all. Probably the data is polled and maybe even counted somewhere but not
shown.
* LUT editor minimize might not be correct. Test this more. Mostly I confuse
myself when creating functions. Input bit order also is not documented (LSBit
is the rightmost).
* LUT editor: make the top section smaller when resizing the window. ideally it
would only just show all 6 input bits and nothing more. the bottom area can
then be used for the output bits.
https://stackoverflow.com/questions/8766633/how-to-determine-the-correct-size-of-a-qtablewidget
* add readout side buffer sniffing similar to what's done on the analysis side.
* after finishing a run/replay with pause enabled the next run will always
start paused even after having toggled pause/resume
* analysis pause does not work when replaying from mvlc(_usb) listfile
* when creating a new counter difference rate monitor the x-axis label says
'not set'. also the y-axis label has no effect. toggling the top-right
'combined' checkbox fixes this.
MVLC mac address for 0007: 04:85:46:d2:00:07
Mon 09 Dec 2019 04:21:00 PM CET Performance Tests with the MVLC
---------------------------------------------------------------
* Linux, USB3, MTDC, Pulser 7 (event len 26-30), 100 max_xfer -> 127 MB/s
TODO: vary max_xfer for the above test
* Linux, USB2 -> 38 MB/s (USB2 limit)
* Linux, ETH, MTDC, Pulser 7 (event len 26-30), 10 max_xfer -> TODO
* vme scripts cannot be renamed while they're being edited
* when removing an analysis object the eventwidgets are recreated and thus lose
their node expansion state. fix this at some point
* make all name input dialogs only accept valid C identifiers
* MVLC issues
======================
* the controller stays 'connected' even though command or data pipe
operations lead to a connection error. for example when turning off the
mvlc_usb the error poller will get FT_DEVICE_NOT_CONNECTED but the
MVLC_VMEController will not be set to disconnected.
* still deadlocks and short blocking of the ui especially when having the dev gui open.
* Check the whole UI for things that are dead/obsolete when using the mvlc
* Documentation needs updates
* After a boot the firmware and hardware registers yield 0. Only after starting
and stopping a DAQ those registers are populated. Try to narrow this down.
* Running a VMEScript should completely disable any polling being done on the
cmd pipe. Failing to do so will slow down script execution dramatically due
to the time spent waiting for the lock whenever the polling code is waiting
for a read to finish.
-> RAII polling inhibitor
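A possible shape for that inhibitor, assuming a shared atomic counter the poller checks before acquiring the cmd pipe lock (all names hypothetical, not existing mvme classes):

```cpp
#include <atomic>
#include <cassert>

// While at least one PollInhibitor instance is alive, the poller skips
// taking the cmd pipe lock entirely.
class PollInhibitor
{
public:
    explicit PollInhibitor(std::atomic<int> &counter)
        : counter_(counter)
    {
        ++counter_; // inhibit polling for our lifetime
    }

    ~PollInhibitor() { --counter_; }

    PollInhibitor(const PollInhibitor &) = delete;
    PollInhibitor &operator=(const PollInhibitor &) = delete;

private:
    std::atomic<int> &counter_;
};

std::atomic<int> g_pollInhibitCount{0};

// The poller calls this before attempting the cmd pipe lock.
bool pollingAllowed() { return g_pollInhibitCount.load() == 0; }
```

Running a VMEScript would then construct a PollInhibitor on the stack for the duration of the script execution.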
* ETH under windows: when running a vme script while a high data rate readout
is in progress some of the vmeSingleWrite calls will return a timeout error.
Figure out if this happens when reading or writing to the socket or in both
cases. This could just be "natural" UDP packet loss and would have to be
handled by retrying the operation.
* ETH under windows: after an error like the one above mvme is in an odd state.
The controller lost the connection and reconnects, no data is received
anymore but the DAQ still has to be stopped via the GUI.
Also at least one time stopping the DAQ did not work.
-> When using the ETH MVLC retry (VME) register writes a couple of times if
they result in a timeout error. Do not immediately disconnect the MVLC.
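One way the retry could look, with placeholder ErrorCode values standing in for the real MVLC error handling (only a Timeout result is retried; other errors still propagate immediately):

```cpp
#include <cassert>
#include <functional>

enum class ErrorCode { Success, Timeout, Fatal };

// Sketch only: retry a register write a few times on timeout instead of
// immediately flagging the MVLC as disconnected.
ErrorCode writeRegisterWithRetry(const std::function<ErrorCode ()> &writeOnce,
                                 int maxAttempts = 3)
{
    ErrorCode ec = ErrorCode::Timeout;

    for (int attempt = 0; attempt < maxAttempts; ++attempt)
    {
        ec = writeOnce();

        if (ec != ErrorCode::Timeout)
            break; // success or a non-retryable error: stop retrying
    }

    return ec;
}
```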
* Trigger I/O Editor:
- Add ability to reserve units. Used for Timer and StackStart units when
periodic events are present.
- Add ability to generate a script containing only selected/modified units
* NSIS installer: make sure there's no "release" component when packaging the
release branch. This is just because I noticed a 'dev' component in the dev
builds.
* NSIS Installer: make the zadig installer less prominent and annoying.
* failing host lookups block the gui. especially noticeable when loading a
replay and starting it. the gui will feel unresponsive.
* 'QuaZipFile::close(): file isn't open' warnings when recording with the MVLC_ETH
Configurable Button Grid for VME Scripts
------------------------------------------
Buttons in a grid. Can do single column and single row layouts.
Other layout options? No. Just allow having fewer buttons than (rows * cols).
Also do not immediately discard data when resizing the grid. Instead use it in
case the number of buttons increases again.
Edit mode:
Set Number of rows
Set Number of columns
assign objects or actions to buttons
Set checked/unchecked action/object
Set checked/unchecked icon
set button label
Override icon
Feature: Checkable buttons where one script is executed on check and one on un-check.
Have different labels for the states.
-> Implement this externally from the grid. By passing the bool 'checked' around.
Note: passing the checkstate to the outside means that QSignalMapper
cannot be directly used. Instead two mappings, one for the checked
and one for the unchecked case can be used.
Feature: Repeat mode
Set repeat interval.
Add extra checkboxes next to each button. If checked the action will
be executed periodically.
Checkable buttons will toggle between states as if clicked manually.
Note: there's no way to have different intervals for the checked and
unchecked states.
* Analysis:
- Connecting to arrays of size 1 is not great: the index[0] is always used
when generating names for objects
- Play around with connecting to things. Some cases are worse than they were
in previous mvme versions.
-> Difference: first input connects to an array of size 1. Now the 2nd
input only accepts values instead of full arrays even if those have size 1.
- Check that histo under/overflow is working with active subranges.
- Figure out how the entrycount for histos works right now. It does not get
reset when the histo is reset, only when a new run starts.
* mvme_root_client: on new run detect if the incoming data is compatible in
case things are already loaded. use streaminfo, maybe generate code in memory
and compare to on disk versions
* fix the EventServer "disconnect on listfile" load issue
-> Managing Qt thread affinity and parent/child things are the real problems
here. Maybe a non-qt rewrite of the server component would be a good idea?
* how to get run completion info when interrupting a replay in mvme?
buffers_processed can be equal to buffers_read. The issue is that not all of
the listfile was read. Maybe use a SystemEvent::End
* void MVMEContext::prepareStart() starting mvme stream worker
Qt has caught an exception thrown from an event handler. Throwing
exceptions from an event handler is not supported in Qt.
You must not let any exception whatsoever propagate through Qt code.
If that is not possible, in Qt 5 you must at least reimplement
QCoreApplication::notify() and catch all exceptions there.
terminate called after throwing an instance of 'vme_script::ParseError'
Thread 6 "analysis" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffe37fe700 (LWP 18067)]
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff55c3535 in __GI_abort () at abort.c:79
#2 0x00007ffff5ae24f5 in __gnu_cxx::__verbose_terminate_handler() () from /home/florian/src/mvme2/3rdparty/ftdi-d3xx-linux-x86_64/libftd3xx.so
#3 0x00007ffff5adfc56 in __cxxabiv1::__terminate(void (*)()) () from /home/florian/src/mvme2/3rdparty/ftdi-d3xx-linux-x86_64/libftd3xx.so
#4 0x00007ffff5adfc83 in std::terminate() () from /home/florian/src/mvme2/3rdparty/ftdi-d3xx-linux-x86_64/libftd3xx.so
#5 0x00007ffff6180dd3 in qTerminate() () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#6 0x00007ffff6183182 in ?? () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#7 0x00007ffff7451fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#8 0x00007ffff569a4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
* when writing a zip listfile change the names of the analysis and message log
files to contain the run id. this also needs changes to the loading code when
searching both of these files.
* cleanup threaded listfile writing code. Also come up with a way to handle write
errors.
* analysis refactoring:
- Get rid of most of data_filter.h in ~/src. Implement what's needed in terms
of the a2 data filter. I think what's needed is something to contain the
original strings and forward everything to the a2 implementation of the
filter.
- Get rid of Parameter::value but add a name field instead. The high level
parameter vector pipes are not needed to hold data and valid/invalid state
anymore. Instead a feature for naming individual array elements should be
added. The name could be stored directly in the parameter.
- Get rid of runtime data from the higher level Analysis layer. Refactor
Analysis to only contain config data and output size calculations where
needed. Maybe move runtime information and the a2 adapter into a separate
state object. This way Analysis would not be responsible for config
storage, connection logic, building and running.
- Rethink the connection logic, who owns things and when they are destroyed.
Also redesign how threading and the worker startup logic is done. Try to
use a copy of the analysis inside the worker. Make ui access to the runtime
data safe.
* Protect access to the globals DAQStats object with a mutex.
* histogram data is wrong after zooming and applying a subrange
* check out 'pigz' by Mark Adler for how to compress data in parallel.
* Maybe: slow mode for replay so that users can see what's going on.
* Nicer format for analysis rates (top side): less decimals, monospace font
* When clearing histos the entry counts in the analysis tree do not reflect the
new counts (bottom row of analysis ui).
* revive the event_server client code. treewriter_client2 needs some work.
Then document the whole thing and the idea behind it.
* fix the windows warning about the winsock2.h before windows.h include order somehow
* MVLC_ReadoutWorker: If no data was read within some interval that fact should be logged.
* readout code and timetick sections in the listfile:
if multiple timeticks are generated by
TimetickGenerator::generateElapsedSeconds() in one of the readout workers
and then written to the listfile via
m_listfileHelper->writeTimetickSection() then the text contents of the
timetick sections in the listfile will all contain the formatted datetime
value of the point in time where writeTimetickSection() is called. This means
multiple timeticks will have the same formatted time string instead of each
being different by roughly 1s.
When does timetick gen generate more than one second?
-> If the readout loop blocks for >= 2s. Otherwise if the time difference is
>= 1s only one tick will be generated and the remainder is stored for the
next tick.
This is not a problem anymore with the MVLC because 64-bit unix timestamp
values are stored in the contents of the timetick system event.
* use TicketMutex in histograms
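For reference, a minimal ticket mutex (FIFO lock handover, so a fast-spinning fill loop cannot starve the GUI thread); this is a sketch of the general technique in plain C++, not mvme's actual TicketMutex:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

class TicketMutex
{
public:
    void lock()
    {
        // Take a ticket, then wait until it is being served.
        const unsigned myTicket = nextTicket_.fetch_add(1);
        while (nowServing_.load() != myTicket)
            std::this_thread::yield();
    }

    void unlock() { nowServing_.fetch_add(1); }

private:
    std::atomic<unsigned> nextTicket_{0};
    std::atomic<unsigned> nowServing_{0};
};
```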
* Design and implement an interface to make VMEScript extendable. This way
controller specific functions do not need to be implemented directly in the
VMEScript implementation but could be added dynamically when switching VME
controllers.
Also and probably more important is that libmvme_core would not have to
depend on VMUSB anymore.
* Find a better way to access global actions than the method used in
VMEConfigTreeWidget::setupActions(). Maybe use a global action factory thingy
returning QAction "singletons".
* remote control: issuing a start followed by a stop can crash the application.
There are missing checks if the system is transitioning. Set a flag
immediately before calling startDAQ and unset it either on error or when the
start succeeded. While the flag is set do not allow calls to startDAQ/stopDAQ
but return "Starting"/"Stopping", etc
* JSON RPC Remote Control and/or DAQ Start/Stop handling is still buggy!
Quickly issuing startDAQ/stopDAQ commands (including multiple starts/stops in
sequence) can end up in a broken state and eventually lead to a crash:
DAQ: Idle, Analysis State: Running is shown in the gui
Crash in the json rpc code but I believe the heap is corrupted before that.
In MVMEContext::startDAQReadout(): introduce a definitive DAQ/System state
inside MVMEContext that is modified from within the GUI thread only and
that reflects what the state the system is in.
Set this flag to "Starting" right before issuing invokeMethod calls on the
workers. This way multiple calls to startDAQ in quick succession would not
cause an issue as the 2nd call would immediately see the "Starting" state.
Also in stopDAQ check that the current state is "Running" only then issue
the stop request to the worker threads.
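The state handling described above could be sketched like this (class and method names are illustrative; the real implementation would live in MVMEContext and be touched from the GUI thread only):

```cpp
#include <cassert>

enum class DaqState { Idle, Starting, Running, Stopping };

class DaqStateGuard
{
public:
    // Returns false if a start is already in progress or the DAQ runs,
    // so a second quick startDAQ call is rejected immediately.
    bool tryBeginStart()
    {
        if (state_ != DaqState::Idle)
            return false;
        state_ = DaqState::Starting;
        return true;
    }

    void startSucceeded() { state_ = DaqState::Running; }
    void startFailed()    { state_ = DaqState::Idle; }

    // Only forward a stop request to the workers if actually running.
    bool tryBeginStop()
    {
        if (state_ != DaqState::Running)
            return false;
        state_ = DaqState::Stopping;
        return true;
    }

    void stopped() { state_ = DaqState::Idle; }

private:
    DaqState state_ = DaqState::Idle;
};
```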
* when trying to open a workspace but the directory does not actually contain a
workspace (ini missing is the only hard criterion I think) then ask the user
if she wants to create a workspace instead of just creating stuff.
* during a replay extract timestamps from timetick text and pass this on to
data consumers.
think about storing a 64bit seconds-since-epoch value instead to avoid having
to parse string data.
On the other hand I do like the string data.
In any case timezone information is not stored right now but I think
durations are more interesting to the user than absolute points in time.
data transfer client lib
- no libmvme dependency
- depends on headers shipped with mvme at compile time
- header only for now
root data transfer client
- uses data transfer client lib
high level overview:
- connect to server
- receive run description
- generate code for exported objects, Experiment, Events, Modules, data members inside the modules.
- at this point the base class interfaces are known to the compiler, but not the
concrete, newly generated specific classes
- the generated code is built and loaded using TSystem::Load()
- the specific Experiment subclass can be instantiated via
auto exp = reinterpret_cast<TheExperimentClassName *>(
ProcessLineSync("new TheExperimentClassName()"));
-> We now have a working Experiment instance that can generate its event trees
and that can be queried for events and modules.
- open output file now and generate the event trees using the virtual exp->MakeTrees()
During the run switch on the incoming event number and fill the corresponding
event and module data structures. Figure out how to safely fill the arrays
without crashing in case of size mismatches, etc.
Then call fill on the event tree to commit the data to ROOT.
1) At this point more code could be loaded, e.g. from a user lib.
What's needed: begin and end hooks and hooks for specific events.
Also info on whether this is a replay or an online run.
NOTE: Now there are multiple ways to replay data: from within mvme doing the
steps outlined above and from an existing output ROOT tree.
/* State of the analysis UI and future plans
*
* Finding nodes for objects
* Can use QTreeWidgetItem::treeWidget() to get the containing tree. Right now
* trees are recreated when updating (repopulate) and even with smarter updates
* trees and nodes will get added and removed so the mapping has to be updated
* constantly.
* Can easily visit nodes given a container of analysis objects and create
* chains of node handling objects. These objects could be configured by giving
* them different sets of trees and modes. Also the responsibility chains could
* be modified on the fly.
*
*/
--------------------------------------------------
BUGS
--------------------------------------------------
* runid handling: currently a chicken egg problem:
the readout worker thread generates a new run filename and tries to open that file.
errors can happen and thus the run cannot proceed properly. also the runid
information cannot be sent to e.g. clients connected to the data server.
Fix: move output file creation, opening and error handling from the readout worker
to the main thread and pass the open file handle to the readout worker.
* When having a single h1d bin filled the calculated and displayed fwhm of the
gauss is 1, but I was told it should be 0. Figure this out and fix it.
* Recheck/Test ConditionFilter
* Analysis single step button can hit an assertion when clicked in quick
succession
* blt count reads do not work properly. This needs fixing for both controllers.
Being able to run those commands by uploading a command list and executing
that directly would be very helpful for debugging this.
- VMUSB: data is offset by 16 bits and this looks mangled. The parser code errors out.
- SIS: figure out what's happening. It looks like the number read from the
register is put into the data stream (verify).
* analysis processing trees: After renaming an object resort the tree it is contained in.
This will conflict with the planned manual sorting of DataSources. Do not auto sort
the data extraction tree.
* Add a refresh button to the ui. (same as repop right now, just with a nicer icon :)
* SIS: when not connected and doing an Init Module mvme hangs in recvfrom()
* Editing a H2D name via 'F2' is not really possible while a run is active. The
node's text will be updated periodically which interferes with editing.
* When changing VME event trigger conditions the readout cycle end script can
become empty. How? What's happening? Is this true?
* Add BlockReadFlags to VMEController::blockRead() and implement support for
the controller side wordswap in the sis library.
--------------------------------------------------
Testing setup for ESS
--------------------------------------------------
* Use RPC interface to start/stop the DAQ
* Enable Jumbo Frames!
* Do this repeatedly with different amounts of on/off time.
* Check the results of the RPC calls.
* Poll stats periodically and make sure they are valid.
* Try with listfile writing on and off.
* Cycle 100 times or more
According to Anton things get unstable with Jumbos enabled (he cannot enable
the pulser which looks like register writes are failing). Also mvme crashes
sometimes when starting a new DAQ run.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WISHLIST
--------------------------------------------------
* unify runinfo into a single variantmap
* Think about file I/O for the expression operator
* Add new data sources yielding:
- the current timetick value (1s granularity, paramvector of size 1)
- the current event #
* Doc: add ListFilterExtractor to Data Sources
* DAQ Start and then immediately pressing "Run script" in the debug window
causes a crash.
* Debug CBUS functionality: MDPP-16 doesn't get a CBUS result after leaving DAQ
mode. It needs a write to the reset register to make it work again. Why?
* if loading a config file fails when opening a filename from the
mvmeworkspace.ini file show an explicit error message about the filename in
the ini possibly being wrong.
* void Histo1DWidgetPrivate::calibApply()
void Histo1DWidgetPrivate::calibResetToFilter()
Change them to make use of the improved analysis rebuild mechanism
* JSON RPC: add info about SIS hostname/IP and firmware revision and VMUSB usb
port and firmware.
* Try to get rid of the module assignment dialog. Instead use the "unassigned"
category for data sources, the usual "input is not connected -> draw in red"
way of communicating issues with the operator to the user and drag/drop to
fix module assignments.
Problems: events may not match up at all. Right now there's no way to display
stuff that can't be assigned to a particular VME event. Also cross-event
drag/drop is not possible in any way right now.
* workspace and vme controller settings and such:
Robert's use case is: open listfile, try to locally reproduce problems a
customer has. This means he will in most/all cases have to change the sis
address or even the controller type to match his local setup. This is
annoying. Even more of a problem is that after making changes he has to
change the SIS back to the customer settings if he wants to send them the
listfile.
Instead when loading a listfile keep the local controller settings and/or use
a "dummy" controller.
My use case: see exactly what the customer is doing: hostname or ip-address,
jumbo frames, additional controller settings. I sometimes really need this
info and do not want it to be hidden behind some magic.
* New Feature: add pause/resume markers and/or absolute timestamps to the listfile.
This would allow for example splitting the run on pause/resume if the user
changed settings in-between.
* RPC: limit max request size to something reasonable
* RPC: fork jcon-cpp on github and push local changes there
* Improve the loadAnalysisConfig() and similar callchains. Right now they end
up in gui_read_json_file() which displays a message box with a generic "could
not read from" error but nothing specific.
Return value can be tested by checking for isNull() on the returned document.
* Multievent: figure out how and if the stream processor correctly handles
modules that are disabled in the vme config.
* FIXME: why does multi event for the whole event get disabled if one of
the modules is disabled? This does seem unnecessary.
* in openWorkspace(): figure out if the special handling of the listfile output
dir is really needed.
* try to apply DRY to analysis and vme config serialization. grep for
"DAQConfig" and "AnalysisNG" extract that code into functions and update
callers
* multi output arraymap
* constant generator operator:
multiple outputs
stepped each time the event has data (a2_end_event)
fixed rank of 0 so that these are always stepped first, thus consumers can
count on having the data be active on the output pipes
* DAQ readout performance: try to make better use of the data buffers.
Currently one processed controller buffer ends up in the data buffer, which
means for sis it's only around 1500 bytes.
Do the following: put events into the same buffer until less than a user-set
limit is free in the buffer or a specific time interval has passed. Then
write the buffer to disk and in case of a non-local buffer put it into the
"full" queue.
Make sure the condition where an event should still go into a certain buffer
but the space inside the buffer is not enough is reported properly. In this
case the user-set limit has to be adjusted.
Reallocations of the buffer could be done but the case where a buffer is
enlarged again and again must be avoided.
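The flush condition could be factored out roughly like this (OutputBuffer and shouldFlush are illustrative names, not existing mvme code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct OutputBuffer
{
    std::vector<unsigned char> data;
    size_t capacity = 0;

    size_t free() const { return capacity - data.size(); }
};

// Returns true if the buffer should be written out before the next event.
// minFree is the user-set limit; an event larger than the remaining free
// space forces a flush and should additionally be reported, as it means
// the limit needs adjusting.
bool shouldFlush(const OutputBuffer &buf, size_t nextEventSize, size_t minFree)
{
    if (nextEventSize > buf.free())
        return true; // event does not fit at all

    return buf.free() < minFree;
}
```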
* a2::Operator_KeepPrevious does not seem to set output limits. Is this bugged?
Fix the test case if needed.
--------------------------------------------------
ExpressionOperator future design plan
--------------------------------------------------
Additional/Advanced Features:
* user defined functions using the exprtk syntax. SECTION 15 -> function_compositor
Limited to returning a single scalar value and up to six input parameters
- function name
- args and their names
- code
// define function koo1(x,y,z) { ... }
compositor
    .add(function_t()
        .name("koo1")
        .var("x").var("y").var("z")
        .expression("1 + cos(x * y) / z;"));
-> one function compositor per operator. This also allows defining recursive
functions and functions that call other "dynamic" functions.
Also check out the add_auxiliary_symtab() method of the compositor. This
would allow having access to variables from arbitrary symbol tables in the
function body definition. -> use the a2 runtime library symbol table with this method.
* persistent / static variables
Could be used to keep some accumulator or the last N events or the last
result around in the step script.
statics would be (re)initialized at compile time and then kept during calls to step.
To make this work the following is needed:
for scalars: name and initial value
for arrays: name and size.
Then the expression operator can create the variables in the symbol table
instance and initialize them.
These variables will only be available in the step expression.
The size definition of arrays must be done via an expression as well as that's
the only way to dynamically get to the input sizes. This script will have the
same symbol table as the begin expression.
Alternatively could use a return statement returning pairs of (name,
variable) and then loop through the variables and create and register the
appropriate types.
Which solution is better?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* 13:22:12: SIS3153 Warning: (buffer #109373) (partialEvent) Unexpected packetStatus: 0x89, expected 0x132. Skipping buffer. (suppressed 105 earlier messages)
packetStatus 0x132 is not valid! This should wrap.
* SIS readout: received count and Timestamp histo count have a small delta of 1
up to 3 counts. Figure out where this is coming from.
* listfile writing, MVMEStreamWriterHelper: can writing of the final EndMarker
be moved into closeEventSection? I was surprised that this had to be done
manually.
* efficiency: when the efficiency is slightly below 100% the display will still
show a 1.00 due to rounding. maybe color the number if it's not 100%.
* New Export/ListfileFilter Idea: use an operator to decide whether a full, raw
VME data event should be copied to an output listfile or be suppressed.
This is basically a copy and filter operation for listfile with the decision
being made at an arbitrary point in the analysis for the specific event.
This needs access to the raw input data.
* Transport more state of the DAQ to the analysis side. Right now the stream
processor always creates a session auto-save even if the DAQ did not start up
successfully (e.g. because the controller already was in DAQ mode). Need to
set some flags in the stream worker so that it can decide on which actions to
perform and which to skip.
* zachary use case: splitting data by time: 1st, 2nd and 3rd hour. good system
to solve these kinds of problems.
* add example of how to extract an extended timestamp (48 bits) from our modules to the documentation
* Tell Tino about SIS packet loss detection not being accurately possible
because of the sequence numbers only appearing before events, which has the
effect that with partial and/or multievent packets, the number of packets and
the sequence number do not match 1:1 anymore.
In the original SIS3153 format this wasn't an issue as there was no buffering
or partials.
A real packet number prefix would be ideal.
* Maybe add a session file browser/explorer thingy. Could even allow multiple
open sessions at once. Make sure histos and other objects clearly show which
session they belong to.
* Change analysis input selection handling to not modify the involved operators
directly. Instead create fake input pipes and connect the target operator to
those. Even better would be to create a clone of the original operator to be
edited and work on that with fake inputs.
This way no pausing and/or clearing would have to be done if the user opens
the editor, clicks around and then cancels.
Also if only the name or unit info is edited nothing actually would need to
be rebuilt (NOTE: some operators might pull copies of their input unit labels
or prepare some calculation depending on the limits so be careful with
this!).
* Fix assertion when building a2 (might lead to corruption or crashes in a
release build!) in the following scenario:
- H1DSink is connected to a DataFilter by a direct index connection.
- The address bits of the filter are edited by the user. The new number of
address bits is less than the old number.
- The H1DSink is now connected to an index that doesn't exist anymore.
- The a2 adapter will assert because no histo is found for the index.
This also affects other operators. Fix this!
* Rate Monitor Widget:
- make sure rate values are scaled to the "middle" of the y axis
- improve time labels: don't show hours and minutes if time scale is less than a minute
-> This has the problem that if you look at a window of the last 60
seconds, in a run that's over an hour long you won't see the hour number. You
lose the information that you're in hour 1 plus x in the run.
I could check if the scale is past a threshold and then switch the formatting.
if (elapsed_time > 1 hour)
set scale for intervals below the hour threshold to show hours
else
use a scale that hides the hours
Does the same problem happen for other intervals?
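The threshold check above, sketched as a small formatting helper (purely illustrative; the real widget would format via the plot library's axis scale draw):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Once the elapsed run time passes one hour, keep showing the hour
// component even for tick intervals below an hour, so the "hour 1 plus x"
// context is not lost when zoomed into the last 60 seconds.
std::string formatAxisTime(long elapsedRunSeconds, long tickSeconds)
{
    char buf[32];
    const long h = tickSeconds / 3600;
    const long m = (tickSeconds % 3600) / 60;
    const long s = tickSeconds % 60;

    if (elapsedRunSeconds >= 3600)
        std::snprintf(buf, sizeof(buf), "%ld:%02ld:%02ld", h, m, s);
    else
        std::snprintf(buf, sizeof(buf), "%02ld:%02ld", m, s);

    return buf;
}
```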
* All filters: make sure address bits are <= 16. This is a reasonable limit and
avoids crashes due to wrong inputs into the filter fields.
* VME Script "Run" button: this still blocks the GUI.
--------------------------------------------------------------------------------
Data Export Implementation - ExportSink
--------------------------------------------------------------------------------
More ideas from Robert:
* Make it possible to export multiple events into the same file. This will need
a way to select data from different events. Also the file needs to contain
the event number for each record.
* Add a "system" event containing the current timetick count and systems stats,
rates, etc.
* The export sink needs to log which output file is being written! Right now
it's all very mysterious.
* (maybe) byte order marking. The bin files are not meant as permanent storage
but to be consumed by a user program which then stores the data in e.g. ROOT
format.
* (maybe) file magic number
* (maybe) version number
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* add build date to info window thingy
* Andi has a setup where the "compression" combo box has an empty value. no/fast compression can still be selected but the initial value is empty.
* filters: add option to disable adding the random value (done for listfilters)
* Projection of combined H2D does not update axis scales after the source H1D
has been recalibrated.
* Changing H1D calibration via its widget clears all histograms. This is not
needed as only the interpretation of the data in the histo changes. Improve
this case so that it keeps existing histo data.
-> This needs fixing in Histo1DSink::beginRun(): currently the histo
mem is cleared no matter what changed.
Also the case where the analysis is not running has to be taken care of: the
AnalysisPauser won't invoke resumeAnalysis() and thus it won't be rebuilt.
* add sis3153 firmware update utility to 'extras' directory. Also package and
install sis library code (maybe). Ask struck if this is allowed.
* Add element wise array multiplication operator and/or a calibration operator
using slope and offset values.
* add 64 bit DataFilter
* phase out data_filter.h (DataFilter and MultiWordDataFilter). stick to the a2 stuff
* Make mvme internal "system" rates available to the analysis.
Might split this into VME and Analysis rates. Think about how to feed those
to the analysis system.
Make this efficient and easily integrated.
* RateHistory needs to only be kept if the specific rate should be plotted.
Otherwise if the rate should only be displayed it's enough to keep the last
(most recent) rate value that was calculated. The GUI should be able to pick
that up.
Totals are also interesting and should be displayed alongside rates.
System design:
List / Tree of available counters. Must be able to deal with arrays of known
size, like ModuleCounters[E][M]. On the other hand at runtime it is known how
many events and modules there are so only parts of the arrays have to be
exposed.
Hierarchy is good. Names are good. Serialization of the information to restore
a counter display and the required history buffers is needed.
Then be able to find a RateHistoryBuffer via a "hierarchy path".
readout.bytes
readout.buffers
streamProc.bytes
streamProc.event[0] .. event[N] with N either being MaxVMEEvents or the actual number of events in the VMEConfig
streamProc.event[0].module[0] with limits being the number of events and the number of modules in that event
accessor function to get the current counter value
accessor function to get the previous counter value
-> delta calc possible
accessor to get calibration variables and unit labels and additional options
accessor to get the RateHistoryBuffer for the specific counter
-> filling the buffer is possible
With a system like this different counters could be updated in different
places, e.g. some counters could be handled entirely in the GUI, others might
be updated iff it's a replay and a timetick section is processed.
The update code needs an efficient way to query the rate monitoring system for
counters it needs to process:
void foo(double sample_value)
{
    // constant during the course of the run
    auto byteRateHandle = get_handle_for("daqtime.streamProc.bytes");
    enter_sample(byteRateHandle, sample_value);
}

// FIXME: where does dt come from? does each "sampler" have to pass its own dt?
void enter_sample(RateHandle handle, double value, double dt_s)
{
    prev  = value(handle);
    delta = calc_delta0(value, prev);
    setValue(handle, value);
    setDelta(handle, delta);
    rate = delta / dt_s;
    rate = calibrate_rate(handle, rate);

    if (auto historyBuffer = get_rate_history_buffer(handle))
    {
        historyBuffer->push_back(rate);
    }
}

void handle_replay_timetick() // somewhere in the MVMEStreamProcessor
{
    for (int ei = 0; ei < MaxVMEEvents; ei++)
    {
        for (int mi = 0; mi < MaxVMEModules; mi++)
        {
            auto handle = get_handle_for(ei, mi);

            if (rate_monitor_enabled_for(handle))
                enter_sample(handle, current_counter_value(ei, mi), dt_s);
        }
    }
}
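The accessor/handle idea above could be made concrete roughly like this (a sketch only; all names are assumptions derived from these notes, not actual mvme code):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Per-counter state, addressed via a "hierarchy path" like
// "streamProc.bytes" or "streamProc.event[0].module[0]".
struct RateEntry
{
    double value = 0.0;
    double delta = 0.0;
    double rate  = 0.0;
    std::vector<double> history; // the RateHistoryBuffer
};

struct RateRegistry
{
    std::unordered_map<std::string, RateEntry> entries;

    // Handle lookup; constant during the course of a run, so samplers
    // can cache the returned pointer.
    RateEntry *handleFor(const std::string &path) { return &entries[path]; }

    // dt_s is passed in by the sampler, matching the FIXME above.
    void enterSample(RateEntry *e, double value, double dt_s)
    {
        e->delta = value - e->value;     // counter assumed monotonic here
        e->value = value;
        e->rate  = (dt_s > 0.0) ? e->delta / dt_s : 0.0;
        e->history.push_back(e->rate);   // only if plotting is wanted
    }
};
```

Keeping the history vector optional per entry would cover the "only display the most recent value" case from above.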
Rate Monitoring in the analysis
-------------------------------
Experiment time rates are what's interesting, not the rates achieved during a
replay! This means sampling should happen based on replay timeticks, or, for
periodic events, every time the event is triggered.
* Data sources and types
- Values from periodically reading out the counter value of a scaler module. Rate
calculation and then sampling has to be done using two successive scaler values. The
result can then be recorded in a RateHistoryBuffer.
-> Right now something roughly similar can be achieved using Calib, KeepPrevious and
A-B to calculate a rate.
-> This functionality could be combined into a RateCalculation operator which could then
be used as the input for a RateHistorySink.
This approach would also allow using the calculated and calibrated rate as the
input for further operators.
* How this case plays out:
VME readout is triggered, data is read and transferred to the analysis. The
corresponding analysis event is stepped. DataSources consume the readout data and
yield output values (processModuleData). Operators are stepped in the EndEvent step,
where they can consume the data source output values.
Storing the calculated rate could happen every time the event is triggered. This would
mean it's synchronous to the analysis stepping process. Also the rate recording
interval would be the same as the interval at which the event is triggered.
-> Doing synchronous sampling of the readout values only really makes sense for
periodic events as otherwise the recording interval could differ immensely.
- Readout of a module that yields a precalculated rate. No calculation is required, only
recording of the rate values.
Calibration might still be desired.
-> Calibration plus RateSink would meet the requirements. Additionally the remarks for
the first case above apply: recording can be done synchronously to processing the
event in the analysis.
- Extraction filter rates
This type of rate does not use data values from the readout itself but is sort of a
meta rate. Calculation can be done using the HitCounts array that's currently part of
every data source in the analysis.
-> The calculated rates can serve as input for a RateSink.
To get a consistent timebase, sampling and recording should be based on analysis
timeticks. I think this means that a resolution better than the timetick one cannot
really be achieved.
FIXME: The case where the fraction of two rates should be monitored, as in
Wanpeng's tool, has not been taken care of.
Input could be a pair of rate samplers. The fraction of the two samplers would
then be treated as a rate itself and accumulated in the history buffer in the
same way as a normal rate.
Be careful with the different rate types as they are updated at different times:
- System rates are currently updated in the gui
- Analysis rate values (Precalculated and CounterDifference) are updated each
time the event is triggered.
- FlowRates are updated on each timetick which can be wallclock time for DAQ or
experiment time for a replay.
--------------------------------------------------
Single Stepping and Stream Processing
--------------------------------------------------
* Show if multievent processing is active for an event
* Display buffer context in case of a parse error
* Distinction/difference between a stream and a buffer?
As a buffer can contain multiple sections or even a complete listfile, it is an MVMEStream.
But an MVMEStream will mostly consist of multiple buffers as generated by the
readout or from reading a listfile.
* Info to parse an mvme stream and report errors:
- is multievent enabled for eventIndex N?
- module header filter strings for (eventIndex, moduleIndex)
- listfile format version
* State:
BufferIterator
EventIterator
* Result information for each step:
- error?
- error offset
- data offsets: event section header, module section header, module data begin, module data end
[number of this event section], [number of the module sections in the event section]
Available info from the above: section type, event index
* Approaches to handling a stream:
- Iterator style
Create a StreamBufferIterator supplying all required information to parse
the buffer.
Call next() on the buffer until the result is atEnd.
- Callback based
The user passes in a structure containing callbacks for each section type
and for module data. Not all callbacks have to be implemented as they can be
set to null.
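A rough C++ sketch of the callback-based approach (the section layout, header encoding and names are made up for illustration, not the actual mvme stream format):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Callback set for stream parsing. Any callback left empty is skipped.
struct StreamCallbacks
{
    std::function<void(uint32_t header)> sectionStart;
    std::function<void(const uint32_t *data, size_t size)> sectionData;
    std::function<void(size_t offset, const char *msg)> parseError;
};

// Minimal driver: walks a flat buffer of (header, payload...) records,
// assuming the low 16 header bits hold the payload word count.
void parse_stream(const std::vector<uint32_t> &buffer, const StreamCallbacks &cb)
{
    size_t i = 0;

    while (i < buffer.size())
    {
        uint32_t header = buffer[i++];
        size_t size = header & 0xffff;

        if (i + size > buffer.size())
        {
            if (cb.parseError) cb.parseError(i, "section exceeds buffer");
            return;
        }

        if (cb.sectionStart) cb.sectionStart(header);
        if (cb.sectionData)  cb.sectionData(buffer.data() + i, size);
        i += size;
    }
}
```

The error callback is the natural place to report the result information listed above (error offset, section type, event index).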
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------
* Remove now unused run-time code from the Analysis classes. Make it clear that
they're for serialization, internal connectivity and a2 generation only.
* cmake: Fix the issue with FindROOT adding to the global cflags. Might be
fixed by moving everything ROOT related into a subdirectory.
* Get rid of automoc as it mocs stuff that doesn't need mocing.
* Fix sphinx doc build when using the cmake ninja generator.
* Setup controller outputs (NIM, etc) to be active when DAQ is active and maybe
more. Do this for both controllers and document and/or log what's being setup.
* Crash in a2_adapter when BinSumDiff inputs don't have the same length
* sis3153 implementation
* On connect read the stacklist control register to figure out if the sis is
in autonomous mode. If it is and a controller option "Reset SIS3153 on
connect" is not set then consider the connection attempt to have failed.
Reasoning: currently when accidentally connecting to a sis that's in use,
a reset command will be sent and thus the running DAQ will be stopped. This
can be quite bad if it happens during a real experiment.
* Running "Init Module" when disconnected blocks the UI
* Make the mblt count read commands work with the default data_len_format
setting. Use the leftshift value of the sis register for this.
Document the behaviour, maybe change the command to not include the mask
anymore, etc...
* Think about the sis3153 counted block read commands. Maybe add a "dialect" to
vmescript so that the fact that sis3153 does not have a mask operation is
clear.
* Fix socket error messages under windows. Use and test the WSA stuff.
* check the case where listfile creation throws an exception. right now this
results in a communication error with the sis controller when trying to start
another run after a failed one.
* Fix analysis object selection color under win7. It's horrible and the text is
not readable anymore.
* Improve VMEScriptEditor search:
Make it get focus under windows and unity
Make it get focus when hitting ctrl+f again
Bind "find next" to F3 or something.
Maybe add "highlight" all.
* Analysis: fix names generated by "generate default filters"
* Analysis: getDisplayName() and getShortName() are not that great. For the
AggregateOps operator I'd like to display "Aggregate Operations" in the "add
operator" context menu instead of "sum". Where else is getDisplayName() used?
-> The above works. What I need now is to change the suggested operator name
when selecting a different aggregate op. Right now it always appends ".sum".
* Resolution change for 2D combined histograms?
-> Would have to implement this in a different way: Instead of just looking
up the data in the corresponding H1D instances I'd have to create a new H2D
with the desired resolution and insert the source values into that histogram.
* Make a RCbus example including a delay between switching NIM to cbus mode!
* Implement adding of module specific vme init scripts via the GUI.
The user should be able to modify the order of init scripts by
dragging/dropping, remove existing scripts and add new ones.
Optionally copying of an init script to the same or a different module could
be implemented.
* Maybe reload the analysis on opening a listfile.
Without the reload the analysis window can become empty if the listfile
contains different module uuids than the ones in the analysis. A reload could
trigger a simple 1->1 import of the module and all would be good again.
The reload could also trigger opening the import dialog if it requires user
intervention.
* Logspam still makes the app hang! Fix this!!!
* Histo1D and Histo1DWidget
* Bug: H1D does not work when unitMin > unitMax
* Histo1DWidget: zoom out after setHistogram()
But not when the usage is the one in x- and y-projections where the histo is
replaced with a similar one.
* Read up on binning error calculations. See how ROOT does it.
* Analysis and AnalysisUI
* Open new histograms immediately. Do this when creating a sub-histogram.
Maybe only if DAQ/Replay is running?
* Copy/Paste
- Having the ability to clone operators would allow making editing of
operators much safer.
Right now if the user picks another input or clears any input slots the
analysis has to be paused and restarted and dependent operators
immediately notice the change. This means even if the user cancels the
dialog the analysis can have unexpected modifications.
With cloning a copy of the operator would be created. Selecting and
clearing inputs would only affect the cloned version of the operator
(input pipes would not need to be fully connected so the input sources
do not need to change at all during editing -> half connected state!)
- Add error reporting to VMEConfig and Analysis version conversion and read routines
- Refactor gui_write_json_file() to write_json_file() and use exceptions to report errors.
Meaning: implement functionality using exceptions and not caring about if
it's used from the gui or not. Then build and use gui wrappers which show
messageboxes where needed.
- Add warnings to template loading if there are duplicate module types
- Fix diagnosis window hanging and eating system memory when EOE check is
active and a lot of errors are occurring
* Check for off-by-one errors in the 2D projection code
* lastlog.log:
Write every logged message to lastlog.log. Do this in the main thread. Flush
to disk frequently.
At startup move lastlog.log to lastlog.1.log.
Keep 5 logfiles around. 4 -> 5, 3 -> 4, 2 -> 3, 1 -> 2, 0 -> 1
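The rotation scheme above could be sketched like this (filenames follow the TODO entry; the directory handling and error behaviour are assumptions):

```cpp
#include <cstdio>
#include <string>

// Shift lastlog.4.log -> lastlog.5.log, ..., lastlog.1.log -> lastlog.2.log,
// then move the current lastlog.log into slot 1. Keeps `keep` old files.
void rotate_logs(const std::string &dir, int keep = 5)
{
    for (int i = keep - 1; i >= 1; --i)
    {
        std::string from = dir + "/lastlog." + std::to_string(i) + ".log";
        std::string to   = dir + "/lastlog." + std::to_string(i + 1) + ".log";
        std::remove(to.c_str());             // ok if it doesn't exist
        std::rename(from.c_str(), to.c_str()); // ok if `from` doesn't exist
    }

    std::rename((dir + "/lastlog.log").c_str(),
                (dir + "/lastlog.1.log").c_str());
}
```

Called once at startup, before the new lastlog.log is opened for writing.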
* select input UI: disable other tree highlights while it's active.
* ZIP file support:
On close:
- write some user description text file to the zip. "Run description", "Run notes", "DAQ notes"
- save/saveas/load error reporting
* VMEConfig
* Allow Adding of VME-Init scripts to modules via the gui!
* EventWidget and AnalysisWidget repopulate:
- Restore open PipeDisplays:
Store (ObjectId, OutputIndex) to get the output pipe and (pos, size) to restore the widget.
- Restore splitter states when recreating event widgets.
- Restore node expansion state
Could store QUuids for expanded object nodes and after repop rebuild the
VoidStar set of expanded nodes by getting the pointers via the ids.
* Analysis structure:
SourceEntry does not really need the eventId as the moduleId implicitly
specifies the eventId.
* H1D: Take the median of the visible left and right edges. Subtract that value
from the calculated statistics values. This is supposed to remove the noise
to the left and right of a peak.
* HistoViewer: Ability to save filled Histograms to disk and reload them later.
This will allow using histogram tools (fits, etc.) on loaded histos. It also
allows comparing histos from different runs etc.
* Calculated listfile size in stats display and size shown by windows explorer
are not the same at all. Why?
* // FIXME: when using subranges the getBinUnchecked() calculation often yields negative bins. why?
Verify this does in fact happen.
* DAQStats:
- add pause/resume
- figure out why buffers/s never has a fractional part
-> part of the system. addBuffersRead() and addBytesRead() each call maybeUpdateIntervalCounters().
If we're reading less than 1 buffer per second the resulting
bytesPerSecond will be 0 for that interval.
How to make this better?
- License info needs Apache License for PCG. Maybe others. Check what is needed here!
Also use info from resources/README
- VMUSB: if there are connect/reconnect errors write them to the log view
Do this when porting to libusb-1.0 as that should also provide better error messages.
- VMUSB: make the mutex non-recursive!
* GUI
* add versioning to the GUI state saved via QSettings. If the version changes
use the default layout instead of restoring a possibly incompatible state.
* add a "reset gui state" button somewhere
* add ability to add notes to VME/analysis config (maybe other objects too)
* VMEScript: add print command
* Histos:
* add histogram cuts
* add histogram fitting
* Mesytec advertising ^^ background images, watermarks, etc. (CSS theme?)
* threading and vme commands
* run vme scripts in a separate thread when invoked from the gui
* save and restore the zoom and pan of histograms
* save and restore open histograms
* Fix diagnostics window blocking when there are a lot of error messages being generated.
-> use the leaky bucket throttle for this
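A minimal leaky bucket sketch for the message throttle (capacity, leak rate and names are assumptions, not existing mvme code):

```cpp
#include <cassert>

// Throttle for log/error messages: up to `capacity` messages pass,
// then further ones are suppressed until tick() drains the bucket.
class LeakyBucket
{
public:
    LeakyBucket(int capacity, int leakPerTick)
        : m_capacity(capacity), m_leakPerTick(leakPerTick) {}

    // Returns true if the message may pass, false if it is suppressed.
    bool tryPass()
    {
        if (m_level >= m_capacity)
            return false;
        ++m_level;
        return true;
    }

    // Call periodically, e.g. from a QTimer, to drain the bucket.
    void tick()
    {
        m_level = (m_level > m_leakPerTick) ? m_level - m_leakPerTick : 0;
    }

private:
    int m_capacity;
    int m_leakPerTick;
    int m_level = 0;
};
```

The suppressed-message count could additionally be recorded and logged once per tick ("dropped N messages").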