
Use extendable lines to replace load shedding #31

Open
lukasol opened this issue Jul 17, 2017 · 7 comments

@lukasol
Member

lukasol commented Jul 17, 2017

Depending on the results of load shedding analyses in #19 allow the respective lines (those that connect nodes with increased load shedding) to be extendable. Then test this functionality and its potential to replace load shedding in a new run.
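A minimal sketch of what this could look like with a PyPSA-style network as used in eTraGo; the bus ids with increased load shedding (from #19), the network path and the solver name are placeholders, not values from the actual runs:

```python
import pypsa

# hypothetical import path; in eTraGo the network is built from the database
network = pypsa.Network("path/to/network")

# placeholder ids: buses that showed increased load shedding in #19
shedding_buses = {"24159", "25741"}

# lines touching those buses become extendable, keeping the current
# capacity as lower bound
mask = (network.lines.bus0.isin(shedding_buses)
        | network.lines.bus1.isin(shedding_buses))
network.lines.loc[mask, "s_nom_extendable"] = True
network.lines.loc[mask, "s_nom_min"] = network.lines.loc[mask, "s_nom"]

network.lopf(network.snapshots, solver_name="gurobi")
```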

@lukasol lukasol self-assigned this Jul 17, 2017
@lukasol lukasol modified the milestone: Release 0.3 Jul 20, 2017
@ulfmueller ulfmueller mentioned this issue Aug 15, 2017
2 tasks
@ulfmueller ulfmueller self-assigned this Aug 18, 2017
@lukasol lukasol modified the milestones: Release 0.3, Release 0.4 Sep 8, 2017
@lukasoldi
Contributor

@kimvk could you please briefly state your current progress here? Thanks!

@kimvk kimvk removed this from the Release 0.4 milestone Oct 10, 2017
@kimvk
Contributor

kimvk commented Oct 20, 2017

When I allow all lines to be extendable, load shedding disappears.
If I extend only the lines connected to the nodes with load shedding, load shedding is lowered, but most load shedding spots remain.

The next step will be to also extend the lines connected to the previously extended lines, until load shedding disappears. This will slow down the computation, because after each extension of the lines the LOPF has to be recalculated to check whether load shedding still exists.
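A rough sketch of that iteration, again assuming a PyPSA-style network; buses_with_load_shedding is a hypothetical helper that would return the buses whose load-shedding generators dispatch power in the last LOPF:

```python
def expand_until_no_shedding(network, solver_name="gurobi", max_iter=10):
    """Re-run the LOPF repeatedly, widening the set of extendable lines
    around the buses that still show load shedding (illustrative only)."""
    for _ in range(max_iter):
        network.lopf(network.snapshots, solver_name=solver_name)
        buses = buses_with_load_shedding(network)  # hypothetical helper
        if not buses:
            break  # no load shedding left, stop extending
        # widen the extendable set by the lines touching the affected buses
        mask = (network.lines.bus0.isin(buses)
                | network.lines.bus1.isin(buses))
        network.lines.loc[mask, "s_nom_extendable"] = True
        network.lines.loc[mask, "s_nom_min"] = network.lines.loc[mask, "s_nom"]
    return network
```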

@ulfmueller
Member

Maybe we do not have to make it that complicated. How does the solution look when you allow all lines to be extended? Could we maybe use this solution as the desired 'debugged' data set?
Is there a huge grid expansion, or only a slight one that just removes the load shedding? Does the dispatch change dramatically or only slightly?
Maybe we can fix the generator dispatch of the load-shedding LOPF with the help of p_min_pu, to prevent a substantial change of dispatch and only tackle the load shedding.

Or even better: just set the capital cost of the lines to a very, very high value, so that the expansion does not change the dispatch substantially and only fixes the problems caused by load shedding (with load shedding turned off, of course), because the grid then has to supply the load. I think that is a very good idea. @kimvk can you try this out? It should be a fast, easy and nice solution.

Sorry for thinking and writing at the same time and letting you read my thoughts... you can probably go straight to the last paragraph.
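A sketch of the capital-cost idea above, assuming a PyPSA-style network; the cost value is only illustrative, the point being that expansion is so expensive that the optimiser uses it only where the load could not be supplied otherwise (load shedding disabled):

```python
# make everything extendable but prohibitively expensive to expand
network.lines["s_nom_extendable"] = True
network.transformers["s_nom_extendable"] = True
network.lines["capital_cost"] = 1e6          # illustrative, deliberately high
network.transformers["capital_cost"] = 1e6

# optionally pin the dispatch close to the load-shedding run via p_min_pu,
# e.g. network.generators_t.p_min_pu = previous_dispatch / p_nom  (sketch)

network.lopf(network.snapshots, solver_name="gurobi")
```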

@kimvk
Contributor

kimvk commented Oct 28, 2017

I pushed the new code. I set all lines and transformers to extendable and the capital cost to 1 million. The results of s_nom_opt for the lines and the transformers are saved in two csv files, which can replace the old s_nom in a new calculation. If we calculate without load shedding and with the data from the csv files, the computation is substantially faster.
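Roughly what that export and re-import looks like; the file names are illustrative, not necessarily the ones used in the pushed code:

```python
import pandas as pd

# after the extendable run: store the optimised capacities
network.lines.s_nom_opt.to_csv("lines_s_nom_opt.csv")
network.transformers.s_nom_opt.to_csv("transformers_s_nom_opt.csv")

# in a later run without load shedding: overwrite s_nom with those values
lines_opt = pd.read_csv("lines_s_nom_opt.csv", index_col=0).squeeze("columns")
trafos_opt = pd.read_csv("transformers_s_nom_opt.csv", index_col=0).squeeze("columns")

# PyPSA indexes components by string names, so align the index types
lines_opt.index = lines_opt.index.astype(str)
trafos_opt.index = trafos_opt.index.astype(str)

network.lines.loc[lines_opt.index, "s_nom"] = lines_opt
network.transformers.loc[trafos_opt.index, "s_nom"] = trafos_opt
```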

@kimvk kimvk closed this as completed Oct 28, 2017
@kimvk kimvk reopened this Oct 28, 2017
@ulfmueller
Member

Cool, I will use your code to perform a LOPF for the entire year and calculate s_nom_opt, which we can then use as the base grid without load shedding :)

@MarlonSchlemminger
Contributor

I've pushed a new version of the code. Two csv files (one for transformers, one for lines) containing all components with changed s_nom are produced and can then be used to overwrite s_nom in the next run. In a new environment on the server with the most recent versions of etrago and egoio, it looks like there are no unbounded timesteps anymore (at least some timesteps that were unbounded before are now feasible). This means we can probably run this as soon as the dp run is finished.

@MarlonSchlemminger
Contributor

The most recent versions of egoio and etrago didn't entirely solve the issue with unbounded problems, but I identified generator_noise as another lever. Noise values generated on the server make some timesteps unbounded, whereas noise values generated on my local computer and then copied to the server make them feasible. I have now calculated over 100 timesteps and all of them are feasible with the local noise values. The reason is unclear to me, but it at least seems to be a workaround.
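If the difference really comes from the random draw, fixing the seed should make server and local runs identical. A small sketch, assuming the noise is a tiny random perturbation added to the generator marginal costs (which is what generator_noise is meant to do); the seed and magnitude are only examples:

```python
import numpy as np

# fixed seed so the same noise values are reproduced on every machine
rng = np.random.default_rng(seed=123)
noise = rng.uniform(low=0.0, high=0.01, size=len(network.generators))
network.generators["marginal_cost"] += noise
```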
