FP Hahne - Changes #77
Open
hmfi3 wants to merge 42 commits into tum-ewk:master from hmfi3:master
Conversation
outer for loop still with lists
All 27 flex price figures (with summer/winter and weekday/weekend/average day) are only plotted if the parameter "results" in plot_flex_prices.py is changed from "case_study" to "all".
At least two time steps check (previously only evaluated whether there was at least one time step).
@michelzade
Explanation of the changes to the aggregate_ev_flex function:
(in order of first appearance; [line of code] refers to the commit "Merge branch 'dev-helena'", 534b4a, 28.4.2021 by Helena Hahne)
[42f] The power levels that are read from the output directory are now sorted from the lowest to the highest value.
[46ff, 517ff, 606ff] For plotting the prices, the y-axis limits should be the same for all plots for comparability, so they are determined while executing the aggregate function (the absolute highest values that occur for each of the plotted variables). The variables are initialized with (negative) inf values before the aggregation is started [46ff]. The maximum and minimum y-limit values are calculated for every season in every power level, and if the respective value is higher/lower than the stored one, the variable is overwritten with the new extreme [517ff]. Finally, after the data aggregation is finished, the six variables (min/max of forecast, power and price) are saved to a dictionary and returned to the run_ev_case_study module.
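A minimal sketch of this y-limit tracking, with assumed variable and column names (the real code in aggregate_ev_flex may group the values differently):

```python
import numpy as np

# Placeholders chosen so that any real value replaces them [46ff]
ylims = {'max_price': -np.inf, 'min_price': np.inf,
         'max_power': -np.inf, 'min_power': np.inf}

def update_ylims(ylims, df, price_cols, power_cols):
    """Widen the stored y-limits with the extremes of one season/power level [517ff]."""
    ylims['max_price'] = max(ylims['max_price'], df[price_cols].max().max())
    ylims['min_price'] = min(ylims['min_price'], df[price_cols].min().min())
    ylims['max_power'] = max(ylims['max_power'], df[power_cols].max().max())
    ylims['min_power'] = min(ylims['min_power'], df[power_cols].min().min())
    return ylims

# After the aggregation, the dictionary is returned to run_ev_case_study,
# so every plot can use the same axis range.
```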
[100ff] Extension of the flex_sum_df dataframe, now also including columns for the maximum and minimum flexibility prices for every "price tariff - positive/negative flexibility" combination. This is done to later enable visualizing the price range between the average of the highest and the average of the lowest prices offered in a 15-min time step.
[131ff] The new max/min price columns are initialized with -inf and inf, respectively, to enable the calculation of the upper and lower bounds later on.
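A sketch of how such columns could be created and initialized; the tariff labels, column names and index are hypothetical, not the actual naming in the code:

```python
import numpy as np
import pandas as pd

# flex_sum_df stands in for the aggregated dataframe; index and tariff labels
# here are purely illustrative.
flex_sum_df = pd.DataFrame(index=pd.date_range('2021-01-04', periods=96, freq='15min'))

for tariff in ['ToU', 'Const', 'ToU_mi', 'Const_mi', 'RTP']:
    for direction in ['pos', 'neg']:
        # maximum prices start at -inf, minimum prices at +inf, so the first
        # real price always replaces the placeholder
        flex_sum_df[f'max_price_{tariff}_{direction}'] = -np.inf
        flex_sum_df[f'min_price_{tariff}_{direction}'] = np.inf
```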
[164ff] To only look at actual prices of offered flexibility, all flexibility prices with a value of zero that do not have a corresponding flexibility power (meaning the price is zero because no flexibility was offered) are set to NaN.
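A minimal sketch of this masking step, using assumed column names and illustrative data:

```python
import numpy as np
import pandas as pd

# Illustrative data: the first row has neither power nor price, i.e. no offer.
flex_result_df = pd.DataFrame({'flex_neg_power': [0.0, -3.7, -2.0],
                               'flex_neg_price': [0.0, 0.05, 0.12]})

# A zero price without a corresponding flexibility power means "no offer"
no_offer = (flex_result_df['flex_neg_price'] == 0) & (flex_result_df['flex_neg_power'] == 0)
flex_result_df.loc[no_offer, 'flex_neg_price'] = np.nan
```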
[200ff] The flex_result_df dataframe is also extended by 10 columns holding the negative and positive flexibility price for every pricing tariff.
[270ff] Preparation for the flex price aggregation: a list of all flexibility price columns of flex_result_df is created, which is used in the next step.
[276ff] For every availability, the flexibility price is compared to the maximum and minimum price columns. For this, flex_sum_df is compared with the flex_result_df of the current availability at the matching indices, with the help of a temporary dataframe df_temp that represents the part of flex_sum_df that has the same timestamp indices as the current flex_result_df. Using np.where, the existing entry in the max price column (df_temp[maxprice], initially -np.inf) is compared to the new flex price value from the current availability. If the "new" flexibility price is higher than or equal to the existing value, the entry in flex_sum_df at this index is overwritten; if not, or if no flexibility was offered (NaN value in flex_result_df), the existing value remains. The max price columns were initialized with -np.inf to ensure that all theoretically possible values are accounted for. The process for the minimum prices is the same, only with "smaller than or equal to" and, of course, starting at np.inf.
[288f] In case the previous calculations leave some inf/-inf values in the max/min price columns (meaning there was no flexibility offered at that time step), they are replaced with NaN. A sketch of both steps follows below.
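A sketch of this per-availability update; the "max_"/"min_" prefix scheme and the function signature are illustrative assumptions, not the actual code:

```python
import numpy as np

def update_price_extremes(flex_sum_df, flex_result_df, flex_price_cols):
    """Update the running max/min flex prices with one availability [276ff]."""
    # part of flex_sum_df with the same timestamp indices as the current availability
    df_temp = flex_sum_df.loc[flex_result_df.index].copy()
    for col in flex_price_cols:                      # list prepared in [270ff]
        new_price = flex_result_df[col]
        # comparisons with NaN are False, so time steps without an offer
        # keep the previously stored value
        df_temp['max_' + col] = np.where(new_price >= df_temp['max_' + col],
                                         new_price, df_temp['max_' + col])
        df_temp['min_' + col] = np.where(new_price <= df_temp['min_' + col],
                                         new_price, df_temp['min_' + col])
    flex_sum_df.loc[flex_result_df.index, df_temp.columns] = df_temp
    return flex_sum_df

# After all availabilities are processed, leftover +/-inf placeholders mean that
# no flexibility was ever offered there and are replaced with NaN [288f]:
# flex_sum_df.replace([np.inf, -np.inf], np.nan, inplace=True)
```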
[361ff] Another new feature: the aggregation is now not only executed for the whole time span covered by the availabilities, but also for the summer and winter seasons separately (defined by the Time of Use pricing tariff, summer being June to September). Here the dictionary with the different dataframes needed for that calculation is prepared. Depending on whether timestamp indices with summer or winter months exist, all winter/summer availabilities are copied to a respective winter/summer dataframe. Finally, the "allseasons" dataframes are added to the dictionary, as they should be processed in the last iteration of the for loop, because their results are used by later parts of the code (heatmap calculations).
[334ff] The for loop over the summer, winter and all-season dataframes that aggregates the data. In every iteration, the current loop element is copied to a variable that is used in the subsequent code. A sketch of the season split and the loop follows below.
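A sketch of the season split, under the assumption of a DatetimeIndex; function name and structure are illustrative only:

```python
import pandas as pd

def split_by_season(flex_sum_df):
    """Split the aggregated data into summer, winter and all-season parts [361ff]."""
    season_dfs = {}
    summer_mask = flex_sum_df.index.month.isin([6, 7, 8, 9])  # ToU summer definition
    if summer_mask.any():
        season_dfs['summer'] = flex_sum_df[summer_mask].copy()
    if (~summer_mask).any():
        season_dfs['winter'] = flex_sum_df[~summer_mask].copy()
    # 'allseasons' goes last so its results remain in place for the heatmap
    # calculations that follow the loop
    season_dfs['allseasons'] = flex_sum_df.copy()
    return season_dfs

# The loop described in [334ff]:
# for season, season_df in split_by_season(flex_sum_df).items():
#     ...  # weekday/weekend/average-day aggregation per season
```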
[368ff, 462ff] In addition to the differentiation between weekday and weekend day, the average day over all 7 days of a week is calculated. This averaging of each 15-min time step over all days of a week does not work for the flexibility prices, though, as summing and then dividing by the number of days (5, 2 or 7) is not possible when there are NaN values.
[391ff] The groupby function for the flex_sum dataframes was updated, as the mean should only be taken over the previously existing columns. For the max/min price columns, the groupby takes the maximum or minimum value instead.
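A sketch of a groupby with mixed aggregations; the column groups, the hour/minute grouping and the DatetimeIndex are assumptions about how the 96 daily time steps could be formed:

```python
def aggregate_weekday_profile(season_df, mean_cols, max_cols, min_cols):
    """Average weekday profile [391ff]: mean for the pre-existing columns,
    max/min for the new price-extreme columns. Assumes a 15-min DatetimeIndex."""
    agg_spec = {c: 'mean' for c in mean_cols}
    agg_spec.update({c: 'max' for c in max_cols})
    agg_spec.update({c: 'min' for c in min_cols})

    weekday_df = season_df[season_df.index.dayofweek < 5]
    # group by hour and minute -> 96 quarter-hour time steps of an average weekday
    return weekday_df.groupby([weekday_df.index.hour,
                               weekday_df.index.minute]).agg(agg_spec)
```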
[472ff] Here the weekday/weekend day/average day values for the flex prices (see the comment on [368ff, 462ff]) are calculated. First a list of all the price columns is created, then the new dataframes for the flexibility prices are created. They are filled in a for loop running from i = 0 to i = 95. For every i, the mean values of all the price columns are concatenated to the new dataframes. By choosing the range for "iloc" appropriately (for example i:480:96, meaning starting at the first Monday 00:00 time step and going up to, but excluding, the first Saturday time step in steps of 96), the effect of the calculations in [368ff, 462ff] is achieved without failing because of NaN values. These are handled as follows: if one or more of the values are NaN, only the average over the remaining values is calculated; if all values are NaN, the result value for that time step is also NaN, meaning that no flexibility was ever offered at this time step, which results in a gap in the plot. A sketch is shown below.
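A sketch of this NaN-tolerant averaging; the function, its arguments and the 672-row one-week layout are assumptions for illustration:

```python
import pandas as pd

def average_day_prices(season_df, price_cols, start, stop):
    """Average each of the 96 daily time steps over the selected days [472ff].
    iloc[start + i:stop:96] picks time step i on Mon-Fri for start=0, stop=480
    (5 days x 96 steps), on the weekend for start=480, stop=672, and on all
    7 days for start=0, stop=672."""
    rows = []
    for i in range(96):
        # .mean() skips NaN: missing offers only shrink the divisor, and an
        # all-NaN slice stays NaN, which shows up as a gap in the plot
        rows.append(season_df[price_cols].iloc[start + i:stop:96].mean())
    return pd.DataFrame(rows)
```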
[501ff] Saving the data to HDF files: changes due to the additionally calculated dataframes; the season string is included in the filename.
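A minimal sketch of such an export; the filename pattern, key and placeholder data are assumptions, not the names used in the repository:

```python
import pandas as pd

season = 'summer'
flex_sum_weekday = pd.DataFrame({'flex_pos_power': [1.2, 0.8]})  # placeholder data

# the season string becomes part of the filename, one file per aggregated dataframe
flex_sum_weekday.to_hdf(f'aggregated_flex_sum_{season}_weekday.h5', key='df', mode='w')
```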
[517ff] y-limit calculation, see the explanation of [46ff, 517ff, 606ff] above.