Take stock of the capsulengine + populse_db 2 merge with mia #180
~~_Change of config:_~~
> issues to solve: Are #172 and populse/capsul#167 still causing problems? (I have worked on them)
Yes, I removed the empty files creation. We can of course discuss it. My motivations were that 1) in a distributed execution model, files are not necessarily available locally, so there is no point in creating them on the local filesystem, and 2) empty files are not valid and cannot be used in later processing, so they should ideally not exist: they would introduce much confusion in the database and we would not know the state of the data. If some processes need them to exist, it's probably a bug in them and we may rather fix/patch them instead.
---
As I mentioned at the last meeting, this dogma of creating empty files at initialisation time comes from the early work on mia, and I don't remember all the reasons in detail. As far as I can remember, it was mainly because nipype required these files to exist as input for the next brick in a pipeline. This is a bit dated, and today it seems that this constraint was more a bug than a real need. I fully understand and agree with your arguments for not creating the empty files, or adding them to the database, at initialisation time.

However, since we have integrated this notion (that's why I write "dogma" at the beginning of this post) from the beginning of our work on mia and mia_processes, I find it difficult to evaluate the impact in terms of code changes if we remove this way of doing things. Certainly small for mia, but not negligible for mia_processes. That the code needs to change to reach a more rational behaviour is not a valid reason not to do it! I'll just need some time to evaluate what this will entail in terms of code changes (I'm looking forward to moving on to higher releases in populse, so this evaluation should be done quickly, say this week).

The second small problem I see with not adding data at initialisation time is that there are "Init" metadata (Done or not Done) and "Init Time" metadata that are created for the data (in the Bricks field). I think this is an interesting piece of information; we should find a way to keep it and add it at run time, if everything is done at that point. In short, I'm not against it, but we need to think about a few things.
In my opinion, there must be two things in the database:
- A record of the process with all its parameters, stored at init time. This is where the required output file names would be stored (whether the files exist or not).
- An entry for each piece of data created by the process. This has to be created in the post-execution method Denis talked about.
I also think we should clearly identify and isolate (possibly in a specific method) the initialization steps that may require user interaction. If possible, when a question is asked of the user, the answer should be stored in the configuration. That way, with a properly configured environment, it will always be possible to run pipelines without user interaction.
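The idea of storing answers so that pipelines can run without interaction could look something like this minimal sketch (the function and key names are hypothetical, not actual Mia/Capsul API):

```python
def resolve_question(question_id, config, interactive_prompt=None):
    """Return a stored answer if present; otherwise ask once and store it."""
    if question_id in config:
        return config[question_id]
    if interactive_prompt is None:
        raise RuntimeError(
            "no stored answer for %r and no interaction allowed" % question_id)
    answer = interactive_prompt(question_id)
    config[question_id] = answer  # persisted: later runs are non-interactive
    return answer

config = {}
# first run: the user is asked (here simulated by a lambda)
first = resolve_question("inherit_tags_from", config,
                         interactive_prompt=lambda q: "input_1")
# later runs reuse the stored answer, no prompt needed
second = resolve_question("inherit_tags_from", config)
```

With a fully populated config, every call succeeds without a prompt, which is exactly the headless-pipeline property described above.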
---
For tag inheritance, I agree that MIA itself cannot know how to do it in a general way. But a process, or a pipeline, may have this knowledge when it is designed for a specific application (although there are also some generic processes). Thus it should not be the responsibility of the user to decide on that (moreover, human users are error-prone, and lazy...). More precisely, this information should be brought alongside processes or pipelines, in an optional or maybe context-specific way. And I think Capsul already has a system for that in the completion engine, which (as far as I remember) allows assigning attributes (or tags) to each parameter (input or output), then building filenames for these parameters using the tags.
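As a rough illustration of that attribute-to-filename mechanism (this is not the actual completion-engine API, just a sketch of the principle; template and attribute names are invented):

```python
# Attributes (tags) attached to a parameter are substituted into a path
# template to build the parameter's filename.
def complete_filename(template, attributes):
    return template.format(**attributes)

attrs = {"subject": "sub-01", "modality": "T1w", "process": "smooth"}
path = complete_filename(
    "derivatives/{process}/{subject}_{modality}.nii", attrs)
# path == "derivatives/smooth/sub-01_T1w.nii"
```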
Meta information goes far beyond image characteristics: it can describe the typology of the data (it's an image, it's an MR image, it's a 3D T1 image, etc.), the subject of the data (species, identification code, etc.), the acquisition context, etc. To keep it simple, it is possible to define a few layers that may produce metadata for an output after a process execution. For instance:
1. The process producing the output
1. The pipeline in which the process is included
1. The execution environment
1. The destination database environment
In my experience with BrainVISA, which has handled user-defined ontologies for years, I am convinced that all metadata-related logic must be defined separately from the computing part. This is non-intuitive because it is highly related to the process itself. But doing this is the only way to handle, in the long run, the variety of existing data organizations as well as their changes over time. If metadata management is hardcoded in processes, it imposes a de facto ontology on the data that we will never have time to define properly, and any change in data organization will be a mess that involves modifying many (if not all) processes.
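The four layers listed above can be pictured as successive contributors to an output's metadata. A minimal sketch with plain dicts (the tag names are invented for the example):

```python
def merge_metadata(*layers):
    """Merge metadata layers in order; later layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

process_layer = {"type": "image", "operation": "smooth"}   # 1. process
pipeline_layer = {"pipeline": "preprocessing"}             # 2. pipeline
environment_layer = {"software": "SPM12"}                  # 3. execution env
database_layer = {"project": "study_A"}                    # 4. destination DB

tags = merge_metadata(process_layer, pipeline_layer,
                      environment_layer, database_layer)
```

Because each layer is a separate object, a change in data organization only touches the relevant layer rather than the processes themselves, which is the point of the argument above.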
Another topic that we must talk about.
…On Mon, Nov 16, 2020 at 3:08 PM Eric Condamine ***@***.***> wrote:
- concerning the questions that can be asked of the user:

Given the centrality of the database in mia, it is important that all fields (tags, metadata, etc.; there are several terms for the same thing) are filled in for each piece of data in the database. (This brings up the problem of adding data to the database by hand, i.e. without using mri_conv as input, because in that case all the tags have an empty value; but this is another problem, which we will have to discuss one day, because in release 2, if I understand correctly, there is clearly the possibility of using the data browser via the controller without using mri_conv...)

So during initialisation, if there is only one input image (for example a smooth), the output inherits all the tags of the input image. To do this strictly, we should modify the tags changed by the brick (for example, the tag corresponding to the resolution must be changed to match the smoothing done). This is not the case yet; it's something I'll have to do for all the mia_processes bricks, but I confess that for the moment I haven't had time to do it. It's not very complicated.

On the other hand, if there are several inputs (e.g. a co-registration), mia has no way of knowing which image the output will inherit from. This management is done in the mia_processes bricks, which must define the class attribute inheritance_dict, defining exactly which input the output brick (or bricks) should inherit from. This is in my opinion the right place to manage inheritance, because mia has no way to know what happens in the brick. However, for bricks not coming from mia_processes this inheritance is not managed. In this case, if there are two or more input images there is "uncertainty", and mia opens a popup to ask which one the output(s) should inherit from (if I remember well - I currently only use mia_processes bricks - we set a skip option if the user doesn't want to choose... but it's a bad idea because the tags could then be wrong...).

All these questions are really not trivial. If we delete the creation in the database at initialisation time, we will have to do it at run time (or after). Otherwise, I believe that we will produce a real regression of mia (in my opinion, the innovative feature of mia is the ubiquitous nature of the database in the pipeline manager).
---
concerning:
In fact, unless I'm wrong, this is a bit like what we currently do during initialisation, but not in "process form". Currently, during initialisation, an entry is created in the database for each output image. As said previously, this entry contains all the tags of the image from which it inherits, and the tags modified by the brick (process) should be modified accordingly (this remains to be done; it is easy to do, just a lack of time). In addition, there is a special "Bricks" tag which, during initialisation, records the name of the brick (process). If we click on this brick, it gives access to all the input and output parameters of the brick, as well as the "init" and "init_time" tags, etc. It is quite close to what you describe. The best would be to see it in practice; that is better than a description.
I completely agree with you that it is the responsibility of the process (brick) to manage the inheritance, and as I wrote previously this is what is done for the bricks of mia_processes. But as Mia allows using bricks other than those of mia_processes (e.g. nipype), there may be cases where the inheritance is not managed in the brick (process)... And as said previously, the popup asking which image should pass its tags to an output is triggered only when there are several inputs and the inheritance is not managed in the brick. I do not see how to do otherwise; we would lose the utility of the database, which gives access to a whole bunch of metadata used for the automatic launch of pipelines.
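The decision logic described in these comments could be summarised as follows (a sketch only; the real mia_processes inheritance_dict mechanism is richer than this):

```python
def resolve_inheritance(output, inputs, inheritance_dict=None):
    """Decide which input's tags an output inherits."""
    if inheritance_dict and output in inheritance_dict:
        return inheritance_dict[output]   # the brick manages it itself
    if len(inputs) == 1:
        return inputs[0]                  # single input: unambiguous
    return None                           # ambiguous: Mia must ask the user

single = resolve_inheritance("out", ["a.nii"])
managed = resolve_inheritance("out", ["a.nii", "b.nii"], {"out": "b.nii"})
ambiguous = resolve_inheritance("out", ["a.nii", "b.nii"])  # popup case
```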
I'm still putting together the different posts about upgrading to the next version.
In a nutshell (as described in #169, which I close now to avoid duplicates), Mia already allows, in File > Package library manager, selecting the libraries or sub-libraries that will be visible in the pipeline manager. This simplifies a lot the "packages" part of the pipeline manager. From here there are two options: either all parts are unfolded each time you open mia (old view mode), or they are all folded (new view mode). Everyone will prefer one way or the other. Maybe an intermediate situation would be optimal for everyone: the configuration chosen by the user is saved in mia's preferences (properties/process_config.yml?) so that the user finds his preferred environment at the next start of mia.
New tests: fields visualised in the Data Browser (their number should impact the working time):
I) Test of an atomic process (a brick)
~~- Test new release (capsul master branch):~~
~~r4 => The new release does not yet work with the master branch. Still need of 4PR165onMIA branch. What is planned? merge of 4PR165onMIA (PR)? other options?~~
in the stdout for each brick at the time of the run.
~~Job error reported above: there is something wrong in the configuration: Python config is not good.~~
~~(with other paths, of course)~~
~~- config:~~
---
~~Then the config is correctly detected and set up, but not passed to the job, or it has been reset before starting the job, because the config in the error log does not match what is detected at startup.~~
~~config:~~
~~Yes, the config has been reset somewhere. Are your sources up-to-date? (populse_mia branch populse_db2_capsulengine, changeset efafb90a081d3f2399fc6be6ab5f9c3c?)~~
~~Yes all sources are up to date. Ok I will check again...~~
~~Just checked, I use fresh sources...~~
~~I have pushed code in soma-base, capsul (right now), and populse_mia, to fix a few issues in configs. I don't think this will solve your problem, but as I don't understand where it comes from, nobody knows... ;)~~
In a few words: CapsulEngine has a settings section (a Settings object) which holds several configurations, separated by "environment" (computing resource) and by module (spm, matlab, etc.). Settings are stored in a populse_db database (v2 preferably, for thread-safety). It's a bit tricky in there because there is code to synchronize these settings with the former StudyConfig object. Never mind.
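A rough mental model of that layout, as a plain nested dict (the real Settings object is backed by populse_db; the environment names, paths, and fallback rule here are only illustrative):

```python
settings = {
    "global": {
        "spm": {"directory": "/usr/local/spm12", "standalone": True},
        "matlab": {"executable": "/usr/local/bin/matlab"},
    },
    "cluster": {
        "spm": {"directory": "/opt/spm12", "standalone": True},
    },
}

def get_config(settings, environment, module):
    """Look up a module config for an environment, falling back to global."""
    env = settings.get(environment, {})
    return env.get(module, settings.get("global", {}).get(module))

cluster_spm = get_config(settings, "cluster", "spm")        # cluster-specific
cluster_matlab = get_config(settings, "cluster", "matlab")  # global fallback
```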
~~No it's still not working with fresh sources...~~
~~ok thanks for the few words on CapsulEngine!~~
~~> In other hand, there is something I don't understand:~~
~~If I put~~
~~I observe a crash of mia at startup:~~
~~For me pc object is not propagated.~~
~~ok I have a start of explanation... I'm digging ;-)~~
~~So it seems like the config is not properly saved?~~
The nipypeprocess2 branch style finally causes some problems with the traits, which will ultimately make it impossible for the user to save new pipelines (an important feature of mia).
Oh! And with nipypeprocess1 we do not observe all the unnecessary exception messages in the stdout... So I close the PR opened for that. nipypeprocess2 is currently definitely not robust!
The new controller for V2 is beautiful. Ok... but currently it doesn't allow sending a list directly. Another thing that I haven't explored yet, but that might limit the bricks I already have in my personal library (and that I haven't had time to test yet with V2!).
I am still trying to make V2 work as closely as possible to V1. An interesting feature is the display, on the standard output, of the name of the node (and associated process class) at initialisation time. This can be done quite easily in the MIAProcessCompletionEngine.complete_parameters() method. It works fine for instances of ProcessMIA. However, for NipypeProcess instances the result is not perfect for me: we have the node in question, but I haven't found a way to get the path to the base class (here it should be nipype.interfaces.spm.Smooth).
I'm back from holidays. Well, there are a lot of questions/remarks; it will be a bit difficult to answer them all in a clear way...
Yes, that's a difficult problem to solve: nipype interfaces get some invalid values by default at init time, so calling
I had not noticed this point, thanks for pointing it out. I have no clear idea of where/when the popup message was issued; I need to check. However, we have to be really, really careful about when/where to issue graphical popups, and more specifically how many times we do:
Impossible to save? Do you have an example? In any case this has to be fixed: we should always be able to save the pipeline.
Yes, the controller GUI is not clear here. The GUI uses generic widgets that are embedded recursively: a File will display a filter button (for itself), a List will display several elements of its content type, so a List of File will display one filter button for each of its elements. Moreover, I have slightly specialized "List of File" to display a filter button at the list level, but then there are several buttons at several levels... We may try to find a way to remove the element buttons in "List of ", I think.
They do not actually refer to the same thing, so they must not display the same title. The green arrow is for the list, allows multiple selection, and is actually for the parameter "in_files". But here the red arrow is only for item 0 of the list: one file. A more proper title would maybe be "in_files[0]", if I find a way to force it in the recursive GUI, which is not very easy...
If the pipeline is reasonably small, yes... If it's thousands of iterations, maybe not...
Yes, the thing is that
I wanted to start working on the tools chapter of mia_processes for V2, but from the beginning I noticed that there is no more access to the database filter in the controller. Is there something that I miss with V2 for this very simple type of brick, which doesn't overload an already existing class? E.g. Auto_Filter_List in mia_processes.tools, on branch 2.0.
Oh, wait a minute... I observe that this issue seems to exist only with Auto_Filter_List... Everything seems ok with all the other classes of mia_processes.tools... I am investigating...
Ok, I understand... In V2, to have access to the filter in the controller, and therefore to the database, it is now necessary that the trait of the plug is a file (traits.File)! This was not the case in V1, which allowed all the plugs to have access to the database even when it made no sense (it can be considered absurd to give access to the database to retrieve a piece of data that can't be there, like for example a float; indeed, until now the DataBrowser in mia uses only files for each document). Ok, it's clearly an improvement in V2; it's just a matter of changing the coding habits for the bricks a little bit, and being careful with the traits that will need access to the database!
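The V2 rule ("only File plugs get the database filter") boils down to a type check. Here is a stand-in illustration that mimics the idea without depending on the traits package (the real check would involve traits.File; these classes are invented for the example):

```python
# Stand-in trait classes: placeholders for the real traits types.
class Trait:
    pass

class File(Trait):
    pass

class Float(Trait):
    pass

def has_database_filter(trait):
    # only file-typed plugs can point at a document in the database
    return isinstance(trait, File)

file_plug = has_database_filter(File())    # filter button shown
float_plug = has_database_filter(Float())  # no filter: not a document
```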
Thank you for your answer @denisri. I didn't comment on it for lack of time, but also because these are general questions that can perhaps be dealt with a little later. However, some of them are neither trivial nor secondary, such as the possibility of iteration, which would be a regression if it were not present in V2 (in a simple, ergonomic, automatic version; in short, using the database). There are other points that worked rather well in V1 and seem to work less well in V2. However, I prefer to focus on mia_processes V2 for now. I hope to finish this point soon enough. Shall we then have a video meeting to discuss the last points to be improved in order to finalise V2? PS: it seems that the fork for the PR (denisri:populse_db2_capsulengine) is not synchronised with the
This morning it seems impossible to use mia. Importing a node in the pipeline manager takes so long that mia is very difficult to use. I already made a remark about this in a previous post in this ticket. The problem seemed to be solved, as if by magic, without knowing why... This morning it reached new heights... Do you observe this problem on your side? I opened a ticket at nipype.
What is curious is that before, when I observed the problem, it was only during the first import (as is always the case with the interpreter). Now, every time you touch a brick there is such a long waiting time that it is almost impossible to use mia.
So the problem is within nipype itself...
---
Yes, nipype has been using detection tools, at least via etelemetry, for a little while (maybe the problem comes from something else, but I've already had slowness problems because of this one). I'm going to investigate to see from which version of nipype the problem comes (according to what you write, the problem is not present with nipype 1.1.9). Beyond that, it raises the question of including nipype by default in mia... Maybe we should reconsider this position... Thank you for your answer @denisri, and no problem, come back whenever you want and can ;-)!
For me too, nipype 1.1.9 is fine...
So: With traits 4.6 (it seems that there are problems with nipype 1.3.0 and 1.4.0 with traits>4.6):
With last version of traits (6.1.1) :
So I suggest not going over nipype 1.4.2 currently. To keep in mind:
This suggests that the problem clearly comes from the nipype check_latest_version, which is done automatically. This issue is not within our control, but we should find a way to work around it when things go wrong, as they do now.
Ok, the problem is indeed linked to etelemetry and to an issue with the server, which is currently down. As proposed here, a workaround is to define a NO_ET environment variable, which bypasses the update checks. It works on my laptop, and we can use the latest version of nipype again with this environment variable. As we are trying to ban environment variables in mia as much as possible, I will add it to os.environ as soon as possible. Is this a problem for anyone?
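The workaround amounts to setting the variable before nipype is imported, since the etelemetry version check runs at import time; e.g.:

```python
import os

# Must happen before "import nipype": the version check runs at import time.
os.environ["NO_ET"] = "1"  # bypass nipype's etelemetry update check

# import nipype  # safe now: no network call to the (currently down) server
```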
OK I see. |
Done!
Always with the NewSegment brick:
The V2 version is now functional and all the bricks of mia_processes or nipype work correctly. 😄 🎉
To simplify the work, the goal is to close this now very long ticket. To not waste the time invested in it, I will try to summarise the important points still unanswered or without action in new tickets that are easier to read.
I will try to summarise all the issues that are already open, because at the moment it is a bit scattered and difficult to deal with... When an issue is resumed here, I will indicate its number (so that it can be consulted if necessary, because there is some interesting information in it) and I will close it.
Let's go:
Config used:
Important issues (t)/ remarks (r):
~~- Run: not launched (as described in "matlab config is not working in jobs #172" and "give the sys.path to the process_cmdline for dev mode https://github.com/populse/capsul/pull/167"): stdout display:~~