Choosing constraints: parameter vs. relationship #13
Comments
In GitLab by @jkiviluo on Sep 14, 2018, 11:33

That description was supposed to be free of opinion (probably not quite so), and Kris helped me to modify it in that direction (while also giving other input). So, I just wanted to express my main reason why relationships would be great: using relationships needs additional objects (the Data Store gets a bit more complex), but I believe that life is easier in the end. I think it's important to have a clear separation between data and function. With this you can make re-usable databases that try to contain all possible data. Then you just apply rules for how you want to use the database in a particular task (e.g. you might want to use the same data in different ways for the investment model and the UCED model that you run iteratively). I think it's more straightforward to keep the parameter data the same and just create separate model instances where you select slightly different methods for some of the features. Now, that's just my current view. Good arguments in either direction are very welcome.
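Purely as an illustration of that separation (the names and structures below are invented, not actual Spine Data Store objects), the idea is that the parameter data stays identical while each model instance only selects different methods:

```julia
# Invented sketch: one shared parameter database, two model instances that
# differ only in which methods they select.
parameter_data = Dict(
    "unit_A" => Dict("capacity" => 100.0, "upward_primary_reserve" => 10.0),
)

investment_model = Dict("selected_methods" => String[])                           # reserves not used
uced_model       = Dict("selected_methods" => ["require_upward_primary_reserve"])

# Both instances read the same parameter_data; only the method selection differs.
```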
In GitLab by @manuelma on Sep 16, 2018, 12:02

Nice post @juha_k. I just have a couple of questions:

1. From this I understand that if we want to 'deactivate' a parameter, we set its value to zero. Does that work? What if zero is a meaningful value?
2. This is an interesting question, but I don't see how it relates to the main issue? At the moment, …
3. Just to be sure, do you mean defining parameters for method objects? Because we can't define relationships between objects and parameters.
In GitLab by @DillonJ on Sep 17, 2018, 12:15

My main problem with this approach is that relationships indicate some sort of relationship between objects, while parameters are data that tell us things about objects. However, this relationship … This also means that … So we have to create this new object class (that will appear in the tree) - what are the object instances then and what do they represent? I suspect you're going to need something in "quotes"... my worst nightmare :)

As @manuelma alludes to, I think there are hidden implementation issues here. It just seems too awkward and non-intuitive. I don't think using relationships in this way achieves separation of data and function... relationships are just as much data as parameters. At the end of the day, you create parameters to indicate function, or create relationships (meaning you also need to create mostly empty new object classes and object instances)... I really think it's not using objects and relationships in a nice way. It feels like a twisting of the data structure...
In GitLab by @jkiviluo on Sep 17, 2018, 12:15
1. I don't know how it would work with the value field. For JSON we could use things like NaN. That might mean that we would not have a separate value field, just the JSON. But maybe there is another way to solve this.
2. It's a difference between the parameter and relationship approaches. With the relationship approach it is straightforward to establish which one is used (you select a 'time series' method or a 'constant' method). With the parametric approach there needs to be a convention (that the user must know) or some other way to establish precedence.
3. Good point. We would need to either have object_class parameters that would list all the parameters (automatically) so that we can create that relationship, or then allow a relationship between an object and a parameter. We don't need to do this right away - it's only needed later, when we want to enable parameter validation with archetypes.
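As a hedged illustration of the value-field question above (hypothetical code, not the Spine Model API), one option is to reserve `nothing`/NaN for "parameter not in use", so that zero stays available as a meaningful value:

```julia
# Hypothetical sketch: nothing/NaN means "not in use"; zero is a real value.
upward_primary_reserve = Dict(
    "unit_A" => 0.0,      # genuine requirement of zero -> constraint still wanted
    "unit_B" => nothing,  # no value given -> no reserve constraint for this unit
)

for (u, val) in upward_primary_reserve
    if val === nothing || (val isa AbstractFloat && isnan(val))
        continue  # treated as "deactivated"
    end
    println("generate upward reserve constraint for $u with requirement $val")
end
```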
In GitLab by @DillonJ on Sep 17, 2018, 12:22

@juha_k We already have a working mechanism for dealing with time-varying data, i.e. you can specify a constant value, a time-patterned value, link to a time_series object or embed a time series in the JSON, and as @manuelma mentions, JumpAllOut already supports this. This is a huge case for a parameter-based approach, because we have no mechanism for time-varying relationships, which, in my view, would be very messy.

An advantage of the parameter-based approach is also that parameters can be defined, if we wish, on the relationship between the entity and a temporal_stage object and take different values for each temporal stage, thus giving us significant capability in terms of having different formulations in different temporal_stages.
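To make those value forms concrete, here is a very rough sketch with invented types (not the actual Spine structures or the JumpAllOut API): one parameter slot that can hold either a constant or an embedded time series and is resolved to a number at a given timestep:

```julia
# Invented types for illustration only.
abstract type ParamValue end

struct Constant <: ParamValue
    value::Float64
end

struct TimeSeries <: ParamValue
    stamps::Vector{Int}
    values::Vector{Float64}
end

value_at(p::Constant, t) = p.value
value_at(p::TimeSeries, t) = p.values[findfirst(==(t), p.stamps)]

demand  = TimeSeries([1, 2, 3], [10.0, 12.5, 9.0])
reserve = Constant(5.0)
@show value_at(demand, 2)   # 12.5
@show value_at(reserve, 2)  # 5.0
```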
In GitLab by @DillonJ on Sep 17, 2018, 12:34

Another issue is that you would need to refer to require_upward_primary_reserve in "quotes", and if it's wrong, you won't get an error. If you used parameters with enumeration as proposed in require_upward_primary_reserve, you could write something like:

```julia
if require_upward_primary_reserve(unit=u) == upward_primary_reserve_required
    # do stuff
end
```
In GitLab by @jkiviluo on Sep 17, 2018, 12:50
Number data would always be in parameters, including time-varying data. There is no need for time-varying relationships (I don't even know what you mean by it). The relationships are only used to say when the number data is used and when it is not used. (And if there is a need to say that something starts from T=5, then a number must be given in a parameter field.)
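A minimal sketch of that division of labour (made-up names, not Spine Model code): the numbers always live in a parameter, and the relationship only switches the constraint on or off:

```julia
# Made-up example data.
upward_primary_reserve = Dict("node_Leuven" => 15.0, "node_Gent" => 20.0)

# Relationship instances of a class like node__require_upward_primary_reserve.
require_upward_primary_reserve = Set(["node_Leuven"])

for (node, requirement) in upward_primary_reserve
    node in require_upward_primary_reserve || continue  # no relationship -> constraint off
    println("add upward reserve constraint for $node (requirement = $requirement)")
end
# Deleting a node's relationship switches its constraint off while the
# numeric value stays untouched in the database.
```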
In GitLab by @jkiviluo on Sep 17, 2018, 12:52
We can also have …
In GitLab by @jkiviluo on Sep 17, 2018, 12:56
I don't get this. In Spine Model, one would have sets. No quotes needed. If you write something wrong in Spine Model, like …, you get an error.
In GitLab by @DillonJ on Sep 17, 2018, 12:58
Yes, it's a good thing, that was my point. With "stuffinquotes" you don't have this safeguard.
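A small illustration of the safeguard being discussed here (hypothetical names): a typo in a plain identifier fails loudly, while a typo inside a quoted string just compares as false:

```julia
upward_primary_reserve_required = :upward_primary_reserve_required
method = upward_primary_reserve_required

try
    # `upward_primary_reserve_requird` (missing 'e') is undefined -> UndefVarError
    method == upward_primary_reserve_requird
catch err
    println("caught: ", err)
end

# With "stuff in quotes" there is no such safeguard: the typo silently yields false.
println(string(method) == "upward_primary_reserve_requird")  # false, no error raised
```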
In GitLab by @jkiviluo on Sep 17, 2018, 13:37

So, just to make it clear: …
We're nowadays using parameters that act as methods, and in 0.8.2 we will get a method parameter type (with some additional functionality).
In GitLab by @jkiviluo on Sep 14, 2018, 11:23
There's been a somewhat prolonged discussion over PowerPoint slides, e-mail and telco between @poncelet, @DillonJ and me that has included this problem. There have also been some references to it in other issues, like https://gitlab.vtt.fi/spine/model/issues/59#note_2700. We are also preparing another issue on the archetype (so don't discuss that here).
There are at least two major ways to express when a constraint would be used for a particular object in Spine Model.
Let's go over both approaches. I'm trying to keep it simple here (i.e. it can get more complicated). Also, this is just a partial example (the full details don't add much information).
Approach where the specification of the 'common' parameters is also used as a trigger to generate constraints
Here the existence of a non-Null/NaN parameter value for the upward primary reserve would mean that the unit will participate in the upward primary reserve. Similarly, the upward primary reserve constraint in the node gets established by the existence of a (constant) value for the parameter.
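A sketch of this trigger mechanism on assumed toy data (it uses JuMP for the constraint part and is not the actual Spine Model implementation): a constraint is generated exactly for those units that have a parameter value:

```julia
using JuMP

units = ["unit_A", "unit_B", "unit_C"]
upward_primary_reserve = Dict("unit_A" => 10.0, "unit_C" => 5.0)  # unit_B: no value given

m = Model()
@variable(m, reserve_provision[units] >= 0)

for u in units
    haskey(upward_primary_reserve, u) || continue  # no value -> no constraint generated
    @constraint(m, reserve_provision[u] >= upward_primary_reserve[u])
end
```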
Advantages of this approach:
Disadvantages of this approach:
Thus, you would need to have a scenario 'no reserves' that is built by combining the 'base' scenario with the 'no reserves' alternative (resulting in the original values being displaced by zeros). Alternatively, the archetype could contain a field where a list of parameters to be ignored could be stored.
What if there are both a constant and a time series value for upward_primary_reserve? How to decide which one is used? A separate parameter flag? What if the flag points to a constant and only time series values are provided? Or should they be in separate alternatives (i.e. it is not allowed to have both a constant and a time series on a single record (row))?

Approach where relationships are used as a trigger to generate constraints
require_upward_primary_reserve is what I call a method (relates to the archetype - feature - method chain, but let's not go there here). Here the existence of the node_Leuven__require_upward_primary_reserve relationship causes the upward reserve constraint to be activated for node_Leuven.

Advantages of this approach:
upward_primary_reserve can be switched off without using alternatives and scenarios. Just delete the node__upward_primary_reserves relationship and the numeric data remains intact.

Disadvantages of this approach: