
Variables and Observables #603

Closed · wants to merge 41 commits

Conversation

@lfarv (Contributor) commented May 23, 2023

This PR introduces some classes used for matching and response matrix computation, but which may also be used for other purposes (plotting…).

Basically, Observables are anything that can be looked at in a lattice (element attributes, optics data…), possibly while varying Variables (anything that can be tuned). They are subdivided into TrajectoryObservable, OrbitObservable, LocalOpticsObservable, GlobalOpticsObservable, EmittanceObservable and LatticeObservable, each resulting from a different computation. The idea is to propose a single interface for these different things. Of course, new ones can be added.

For evaluation, Observables must be grouped in an ObservableList so that the necessary computations are optimised.
Similarly, Variables may be grouped in a VariableList for matching.
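The grouping idea can be sketched in a few lines of plain Python (hypothetical names and signatures, not the actual classes of this branch): every observable exposes the same evaluation interface, and the list runs them all against data that is computed only once.

```python
class Observable:
    """Anything that can be looked at: a name plus an evaluation function."""
    def __init__(self, name, fun):
        self.name = name
        self._fun = fun          # extracts one value from pre-computed data
        self.value = None

    def evaluate(self, data):
        self.value = self._fun(data)
        return self.value

class ObservableList(list):
    """Groups observables so the expensive computation is shared."""
    def evaluate(self, data):
        return [obs.evaluate(data) for obs in self]

# The expensive optics computation is done once; each observable only
# extracts its value from the result (numbers here are arbitrary).
optics = {"tunes": (0.38, 0.85), "beta": (17.3, 5.2)}
obs = ObservableList([
    Observable("tune_x", lambda d: d["tunes"][0]),
    Observable("beta_x", lambda d: d["beta"][0]),
])
values = obs.evaluate(optics)
```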

The documentation for this new latticetools package is here (updated on each commit in this branch).
Though perfectly usable as of now, this is a work in progress: suggestions are welcome for modifications, additions, documentation…

Warning: as it stands, the new classes conflict with class definitions in the matching.py module. This is because I preferred to keep the "Variable" and "ElementVariable" class names to refer to the new objects. I modified the match() function in the matching.py module to use the new classes (it's much simpler now!).
If one wants to keep the existing module, one should find another name for the two conflicting classes, and also keep two different matching functions (not very nice, but can be done). To be discussed.

@lfarv lfarv added enhancement Python For python AT code labels May 23, 2023
@lfarv lfarv marked this pull request as draft May 23, 2023 09:20
@lfarv lfarv added the WIP work in progress label May 23, 2023
@lfarv lfarv mentioned this pull request May 23, 2023
@swhite2401 (Contributor)

This will take a long time to review!! I can already tell you that the modification of the matching is fine for me; we should keep the code as simple as possible and this is a clear improvement.

@simoneliuzzo simoneliuzzo removed their request for review May 25, 2023 07:43
@lfarv (Contributor, Author) commented May 25, 2023

@swhite2401: Well, there is no pressure at the moment. I have no development waiting for this. If it stays for long as a separate branch while you look at it, I will rebase it on master at each significant upgrade, so that it does not fall too far behind master.

If you agree, you could close the issue #603 and try to keep the list short!

@simoneliuzzo: I do not expect you to spend time on reviewing python code, but since this was a burning topic at the meeting on errors and corrections, it would be nice if you can just have a look at the global idea here. Though it's not implemented yet, a BpmObservable would be a clean way to introduce BPM errors, for instance.
And all future correction strategies should be based on Variables and Observables.

@swhite2401 (Contributor)

I closed the issue and will go through it slowly.

Concerning errors and correction, the question is not as hot as it used to be, since pySC is coming online and handles all of this, and @simoneliuzzo also has something that is working very well. I still think that we should integrate something in AT, but I am also not satisfied with the present proposal; this is not what we would have liked. I have been too busy to look into alternatives… but I will at some point. Since there is no emergency, I would prefer that we evaluate all options before merging anything.

For the moment, the main application that I can see would be a control system interface: there is more and more interest in having a 'python Middle Layer', and I think these features would be extremely useful to implement it!

@lfarv (Contributor, Author) commented May 25, 2023

I still think that we should integrate something in AT but I am also not satisfied with the present proposal

Nothing here concerns errors and corrections, except that Observables (modified if needed) should be the targets of corrections (as they are for matching).

@simoneliuzzo (Contributor)

Dear @lfarv and @swhite2401 ,

thank you for these developments.

Is the matching faster?

Looking at the test code, it is not shorter to write a matching, but it looks easier to read. I doubt this was the real target of the development.

I think the existing matching should be kept working, independently, for backward compatibility.

I use rather often geometry/survey matching. Something that could be GeometryObservable or SurveyObservable. How easy is it to write a private ******Observable?

How easy is it to specify a user defined function as variable? For example, change 5 dipoles keeping: 1) the sum constant, 2) the ratio between the fields in dipole 1 and dipole 3 constant, 3) assigning a quadrupolar gradient to each of the 5 dipoles proportional to the dipole field. (this is done in PETRAIV matching, very easy to write in MADX/mad8, rather cumbersome in AT / pyAT).

The pySC development is going on for the moment simply translating the matlab SC. I will point T.Hellert and L.Malina to this pull request since they are doing the work and know the code.
As @swhite2401 mentioned this development may become very useful for a future python Middle Layer.
It could be useful also for python implementation of optics correction in the ESRF CTRM, but only after it becomes an AT feature.

Also from my side, I cannot promise that I will look at these features soon.

thank you

best regards
simone

@lfarv (Contributor, Author) commented May 27, 2023

Hello @simoneliuzzo,
I knew you could contribute valuably to this PR! Thanks a lot.

To answer your first remark: no, matching is not faster. It is identical to the present version; it has just been adapted to the new Variable and Observable classes. A side effect is that the matching code is simpler. But the fact that the matching test is more readable (and easier to write, I hope) comes from the improved Variable and Observable classes; the matching itself has not changed.

Then you raise two very interesting questions:

New Variables

The present base class Variable is very general: the two user-defined setfun and getfun arguments allow anything. Using callable class objects for setfun and getfun allows initialising them with any number of parameters. In your example, you have 5 bending magnets and 2 constraints, which leaves 3 degrees of freedom. One could imagine, for instance, using a standard ElementVariable for dipoles 1 and 2, and a custom variable for dipole 4: it takes care of setting dipole 3 according to dipole 1, and setting dipole 5 to fulfil the sum requirement. It also takes care of the 5 quadrupole components.

Variables are evaluated in order, so the custom variable can use the values set by the 2 previous ones.

This just means writing a python class providing setfun for the last variable.

I’ll propose a similar (but simpler) example in the documentation to illustrate the use of custom setfun classes.
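As a rough sketch of the scheme described above (all names invented for illustration, not pyAT code), a callable class can hold the correlation constants and set the dependent dipoles whenever the free one is varied:

```python
class CorrelatedDipoleSetter:
    """Hypothetical setfun: sets dipole 4, then derives dipole 3 from the
    fixed ratio to dipole 1, and dipole 5 from the fixed-sum requirement."""
    def __init__(self, fields, ratio, total):
        self.fields = fields      # mutable list of the 5 dipole fields
        self.ratio = ratio        # constraint: fields[2] = ratio * fields[0]
        self.total = total        # constraint: sum(fields) = total

    def __call__(self, value):
        f = self.fields
        f[3] = value                      # the quantity varied by matching
        f[2] = self.ratio * f[0]          # enforce the ratio constraint
        f[4] = self.total - sum(f[:4])    # enforce the sum constraint

# usage: the matching loop only ever calls setfun(value)
fields = [1.0, 1.0, 1.0, 1.0, 1.0]
setfun = CorrelatedDipoleSetter(fields, ratio=0.5, total=5.0)
setfun(1.2)
```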

Now to compare with MADX:

  • I’d like to see how you define these constraints in MADX. This may help in adding improvements to make the creation of custom classes simpler (you can mail it to me),
  • I’ll then try to send you the equivalent for PyAT,
  • MADX has its own language to define expressions, macros etc. In PyAT, the input language is python itself: customisation of variables can be done with python classes. Advantage: the power of python. Drawback: python may be more difficult to master than the MAD language… So one should try to make this as simple as possible.

New Observables

Similarly, a custom evaluation function can compute anything, like computing the lattice geometry and extracting the desired values. Simply writing a python evaluation function is the simplest way of creating new observables based on linear optics, for instance. However, for performance reasons, the idea is to separate the computation itself, done once (call linopt6 for instance) and the processing (extract βx at point A), done for each observable. Geometry is another good example: it would be nice to compute the full geometry once (a kind of atgeometry), and have a new GeometryObservable to extract the desired values from its results. I’ll start looking at that, a GeometryObservable looks very useful.
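A minimal sketch of this compute-once / process-per-observable split (hypothetical names and stand-in data, not the final pyAT interface):

```python
def compute_geometry(ring):
    """Expensive step, done once per evaluation: here a stand-in that
    would normally call something like atgeometry on the lattice."""
    return {"x": [0.0, 1.0, 2.0], "y": [0.0, 0.1, 0.4]}

class GeometryObservable:
    """Cheap step, done per observable: extract one value from the
    pre-computed geometry data."""
    def __init__(self, refpt, coord):
        self.refpt = refpt        # element index in the lattice
        self.coord = coord        # which coordinate to extract

    def process(self, geom_data):
        return geom_data[self.coord][self.refpt]

# one expensive call, then many cheap extractions
geom = compute_geometry(ring=None)
x_at_2 = GeometryObservable(2, "x").process(geom)
y_at_1 = GeometryObservable(1, "y").process(geom)
```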

So thanks Simone for your input,

@lfarv (Contributor, Author) commented May 30, 2023

I added more documentation on variables, including an example of custom variables (to answer one of @simoneliuzzo's questions).

@lfarv (Contributor, Author) commented May 30, 2023

There is now a full notebook with a matching example using a custom variable.

@lfarv (Contributor, Author) commented May 30, 2023

Added another example notebook showing how to use various observables.

@swhite2401 (Contributor)

This notebook in the documentation is very nice! @lcarver, do you think you could do the same thing with the collective effects examples??

@lfarv (Contributor, Author) commented May 31, 2023

I think most examples should be provided as notebooks. In addition, notebooks are executed by Sphinx when generating the doc (by default; otherwise one can still store "pre-computed" notebooks), so we are sure that the outputs are correct!

To get the exact appearance of the notebook, it's better to have Sphinx installed, since myst-nb allows features which are not interpreted by Jupyter. For developers, pip install -e ".[doc]" does that.

The next improvement would be to add a link somewhere allowing users to download the notebooks.

@simoneliuzzo (Contributor) commented Jun 2, 2023

Dear @lfarv,

below is an example of something I am unable to do (except in a cumbersome way) in AT, but which was possible in mad8

dipoles in standard cell

DL4A = at.Bend('DL4A', DL_LEN * BXFR * (1 + DL12), ANG0 * BXFR * (1 + DL12), KDIP)
DL3A = at.Bend('DL3A', DL_LEN * BXFR * (2 - DL12), ANG0 * BXFR * (2 - DL12), KDIP)
DL2A = at.Bend('DL2A', DL_LEN * BYFR * (2 - DL12), ANG0 * BYFR * (2 - DL12), KDIP)
DL1A = at.Bend('DL1A', DL_LEN * BYFR * (1 + DL12), ANG0 * BYFR * (1 + DL12), KDIP)
DL0A = at.Bend('DL0A', DL_LEN * BTFR, ANG0 * BTFR, KDIP)

dipoles in dispersion suppressor

DS1A = at.Bend('DS1A', DL_LEN * BXFR * (2 - DL12), ANG0 * BXFR * (2 - DL12) * ADS1, KDIP)
DS2A = at.Bend('DS2A', DL_LEN * BXFR * (1 + DL12), ANG0 * BXFR * (1 + DL12) * ADS1, KDIP)
DS3A = at.Bend('DS3A', DL_LEN * BXFR * (1 + DL12), ANG0 * BXFR * (1 + DL12) * (1 - ADS1), KDIP)
DS4A = at.Bend('DS4A', DL_LEN * BXFR * (2 - DL12), ANG0 * BXFR * (2 - DL12) * (1 - ADS1), KDIP)

The variable DL12 is accessible for matching. When it is changed, both the standard cells and the dispersion suppressor cells are adjusted.
The mad8 matching script simply states (the same script would work in madx)

USE ARC_UU   

CELL

  vary, DL12, step=.001

  vary, KQD1, step=.001
  vary, KQF2, step=.001
  vary, KQD3, step=.001
  vary, KQF4, step=.001
  vary, KQD5, step=.001
  vary, KQF6, step=.001

  constrai, SD1A[2], bety = bysu, mux = mux_sd/2, muy = muy_sd/2
  constrai, SF1A[2], betx = bxsu, mux = muxu/2-mux_sf/2

   constrai, #e, mux = 2*muxu, muy = 2*muyu

   lmdif, tolerance = 1e-10, calls = 2000

endmatch

In this mad8 matching script by Pantaleo Raimondi:

  • it is extremely clear what is varied
  • even if the variables (DL12, K**) are used by ARC_UU with DL** dipoles and by other cells via DS** dipoles, the variations computed for ARC_UU are automatically known to all cells that include either DL** or DS** elements
  • it is extremely simple to state the locations in the constraints (element name and occurrence in the lattice)
  • it is immediately readable by the user

I think this should be the target of updated matching tools for AT.

Not to mention, the matching process takes a negligible time.

@swhite2401 (Contributor)

(quoting @simoneliuzzo's full comment above)

@simoneliuzzo, I think you have to think of your ring as an object and provide a function that rebuilds it from the variables. Each time you change a variable, you would then call this function to rebuild the whole ring.
Similarly, this ring object should be able to modify/extract cells to match them individually and propagate the modification to the whole ring.

I have done something similar for EBS; I'll propose something for your example. This is certainly not a few lines, because you need to write the code that builds your ring (the same as MADX), but once this is done it is relatively simple.
However, adding variables requires more effort than in MADX in this case...

Concerning the speed of the matching, I have done tests and AT is roughly equivalent: for your information, MADX by default matches uncoupled lattices, which makes a big difference. To activate this in AT you may use method=at.linopt2

@lfarv (Contributor, Author) commented Jun 4, 2023

Thanks @simoneliuzzo for the example.

I have another option which could avoid rebuilding the lattice. I agree with @swhite2401 that a function to build the lattice is a good thing: it allows building blocks and combining them. It's also good for efficiency to reuse the same element when it appears several times in the lattice, rather than creating many identical elements. Then, modifying the single element is faster than modifying the many identical ones.

But rebuilding the lattice takes time, and during a matching it costs.

To avoid that I introduced 2 new classes:

a Parameter is a Variable which is not associated with a lattice element. It has a scalar value and may be used in matching like any Variable

an Expression represents any computation involving constants and Variables and is re-evaluated as soon as one of its Variables is modified.

In @simoneliuzzo's example, this could give:

DL_LEN = Parameter(3.0) # Initial value is 3.0 for example
BXFR = Parameter(0.1)
DL12 = Parameter(1.0)

the length of DL4A would be set to Expression('{1} * {2} * (1 + {0})', DL12, DL_LEN, BXFR)
The expression looks like a format string: it is any arithmetic formula where {0} refers to the 1st argument, and so on (as in format strings). Here {0} is the DL12 Variable, {1} is DL_LEN, {2} is BXFR. All operators and constructs are allowed.

Then, letting elements be varied is done like this:

exp4 = Expression(dl4a_pts, 'PolynomB', '{1} * {2} * (1+{0})', DL12, DL_LEN, BXFR, index = 1)
exp3 = Expression(dl3a_pts, 'PolynomB', '{1} * {2} * (2-{0})', DL12, DL_LEN, BXFR, index = 1)
exp2 = Expression(dl2a_pts, 'PolynomB', '{1} * {2} * (2-{0})', DL12, DL_LEN, BXFR, index = 1)
exp1 = Expression(dl1a_pts, 'PolynomB', '{1} * {2} * (1+{0})', DL12, DL_LEN, BXFR, index = 1)
...

The DL12 Parameter is given to the matching, and its variation will be forwarded to all elements.

For now, both Parameter and Expression are working, the exact interface and the synchronisation are still to be done.
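To illustrate the intended behaviour, here is a minimal, self-contained sketch of the Parameter/Expression scheme (not the code in this branch; a real implementation would restrict what the formula may contain before evaluating it):

```python
class Parameter:
    """A scalar value not attached to any lattice element."""
    def __init__(self, value):
        self.value = value

class Expression:
    """An arithmetic format string re-evaluated from its parameters."""
    def __init__(self, template, *params):
        self.template = template
        self.params = params

    @property
    def value(self):
        # substitute the current parameter values, then evaluate the
        # resulting arithmetic formula (builtins disabled as a precaution)
        formula = self.template.format(*(p.value for p in self.params))
        return eval(formula, {"__builtins__": {}})

DL_LEN = Parameter(3.0)
BXFR = Parameter(0.1)
DL12 = Parameter(1.0)

# the length of DL4A from the example above: DL_LEN * BXFR * (1 + DL12)
dl4a_length = Expression('{1} * {2} * (1 + {0})', DL12, DL_LEN, BXFR)
```

Changing DL12.value and reading dl4a_length.value again returns the updated result, which is the "re-evaluated as soon as a Variable is modified" behaviour described above (here implemented lazily, on read).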

@lfarv (Contributor, Author) commented Jun 4, 2023

The example notebook is modified to demonstrate the 2 methods for handling correlated variables:

  1. Create a custom variable to handle the correlation
  2. Use Expression, Parameter and a standard variable

@swhite2401 (Contributor)

Hello @lfarv ,
Very nice improvement in the right direction! I nevertheless have some comments:

  1. I do not really understand why you went for this very complicated string-parsing scheme; it looks very old-fashioned to me... why not just use callables? This is in principle way simpler, easier to understand and maintain, and much more flexible... this possibility should at least be added
  2. Why are you attaching the Parameters and expressions to Variable? These belong to the lattice

Now, concerning deferred variables in general, the whole idea is really to defer the evaluation of these expressions to when they are needed, i.e. when entering the tracking engine or reading the attribute. In theory you may want to assign these variables or expressions to any lattice element attribute.
Applications then go well beyond matching, and inter-dependencies could be embedded in the lattice definition: this is what is done in MADX and is missing in AT. Ideally you would like to be able to save these variables in the lattice file so that they can be re-loaded.
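For comparison, the callable-based alternative suggested in point 1 could look like this minimal sketch (invented names, not an actual AT interface): the expression is an ordinary Python function of the current parameter values, so no string parsing is involved and the formula is checked by the Python parser itself.

```python
class Parameter:
    """A scalar value not attached to any lattice element."""
    def __init__(self, value):
        self.value = value

class CallableExpression:
    """An expression defined by any callable of the parameter values,
    re-evaluated lazily whenever its value is read."""
    def __init__(self, fun, *params):
        self.fun = fun
        self.params = params

    @property
    def value(self):
        return self.fun(*(p.value for p in self.params))

DL12 = Parameter(1.0)
DL_LEN = Parameter(3.0)
BXFR = Parameter(0.1)

# equivalent of the format string '{1} * {2} * (1 + {0})', as plain Python
dl4a_length = CallableExpression(
    lambda dl12, dl_len, bxfr: dl_len * bxfr * (1 + dl12),
    DL12, DL_LEN, BXFR)
```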

Finally, this PR is becoming too huge and keeps growing. This was already a comment when you first proposed it. We need to split it otherwise it will never be merged.

I would like to propose the following PRs:

  1. Variables and Observables classes -> no impact on existing code
  2. Integration in matching module
  3. Response matrices
  4. Deferred variables

@lfarv (Contributor, Author) commented Jun 5, 2023

Hello @swhite2401
Nice analysis of the present proposal! I have a few quick answers, but other points will need further discussion. For now:

  1. ... very complicated string parsing scheme ...

No good reason for this: I proposed it as the easiest way, but any better idea is welcome. I just took @simoneliuzzo's first example, dipole length = DL_LEN * BXFR * (1 + DL12), and tried to express it similarly, as an equation string. I'd like to see an example of what you suggest to express the same equation.

  2. Why are you attaching the Parameters and expressions to Variable? These belong to the lattice

Well, it depends. I agree that parametrising the lattice elements would be a major improvement. However, concerning matching, the constraints you want to impose depend a lot on what you want to match. In the example notebook, I took the example of moving a monitor in a straight section. The constraint l1 + l2 = constant is related to this matching case, but may be irrelevant for matching something else. This kind of constraint is not part of any element and has no reason to be part of the lattice description.

Also, parametrising lattice elements is a major task, out of the scope of this PR. In particular, the evaluation time is delicate: for sure not at tracking time (it would be evaluated at each pass through the element), preferably not at the beginning of tracking (if you track turn by turn, it would be evaluated on each turn), but only when a parameter is varied; more precisely, after all parameters have been varied. The elements should stay as they are (also to avoid modifying the integrators), and parametrising should be done through additional attributes: it's a big job!

So the approach in this PR is much simpler: all computations are done after all variables in a VariableList are set.

Finally, I agree with your scheduling, with this remark:
In the first step I will restrict myself to the "basic" variables, but there is still the problem of the matching module: if the new Variable replaces the old one, the matching module must be modified at the same time. If we want to keep the old one, we need a new name for "Variable" and "match", and the new matching can be postponed to another PR. @simoneliuzzo pushes for this second option, but I am reluctant to keep two different matching functions and to create the "Variable2" and "match2" names. What do we do?

@swhite2401 (Contributor)

Ok, I can propose something, can I add functions to this branch or should I make my own copy?

I still think that all of these belong to the lattice! These are basically lattice parametrizations, and if you take the time to define them you would like to save them together with your lattice (and I know this is very complex...).

Your proposal is obviously much simpler, but we should be ambitious: I know for a fact that these deferred variables are what is stopping a lot of people from using anything else than MADX. Having such functionality in AT would be a huge asset. But it is clearly out of the scope of this PR, I agree, and this is why I would like to split things a bit.

I fully agree that we should not have 2 matching modules; this is confusing and requires a lot of work to maintain, so you are probably right, the first 2 steps have to be merged in one go. But this is just my personal opinion... maybe we could make a new release first?

@lfarv (Contributor, Author) commented Jun 5, 2023

Ok, I can propose something, can I add functions to this branch or should I make my own copy?

Sure, feel free to commit to this branch

I still think that all of these belong to the lattice!

Could be, but then you have to modify the lattice to add the correlations required by a given matching request. Why not… and wait for the full parametrisation for correlated matching. Unless we start with a simpler solution, which is not incompatible with the full one.

I fully agree that we should not have 2 matching modules; this is confusing and requires a lot of work to maintain, so you are probably right, the first 2 steps have to be merged in one go. But this is just my personal opinion...

I have the same opinion. But then we have to accept the compatibility break!

maybe we could make a new release first?

Sure, I'm preparing the pull request and updating the developer documentation of the release procedure. There is still a pending PR fixing the format of the geometry output, again with a compatibility problem…

@lfarv (Contributor, Author) commented Nov 14, 2023

See #696

@lfarv lfarv closed this Nov 14, 2023
@lfarv lfarv mentioned this pull request Nov 15, 2023
@lfarv lfarv deleted the latticetools branch February 15, 2024 16:44