completing autodoc_mock_imports + Docstrings
DorsanL committed May 22, 2024
1 parent 0893291 commit 4baa421
Showing 28 changed files with 227 additions and 195 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -17,7 +17,8 @@ For more information about the model foundations and features, please refer to t
REHO is developed by EPFL (Switzerland), within the Industrial Process and Energy Systems Engineering (IPESE) group.

Dorsan Lepour <dorsan.lepour@epfl.ch>
Cédric Terrier <cedric.terrier@epfl.ch>
Cédric Terrier <cedric.terrier@epfl.ch>
Joseph Loustau

## Licence
Copyright (C) <2021-2024> <Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland>
22 changes: 11 additions & 11 deletions documentation/conf.py
@@ -16,9 +16,7 @@
project = 'REHO'
copyright = '2021, IPESE, EPFL'
author = 'D. Lepour, J. Loustau, C. Terrier'

# The full version, including alpha/beta/rc tags
release = '1.0'
release = '1.1.0'


# -- General configuration ---------------------------------------------------
@@ -50,7 +48,7 @@
'github_url': 'https://github.com/IPESE/REHO',
'header_links_before_dropdown': 7,
'navbar_align': 'left',
"external_links": [{"name": "REHO FM", "url": "https://ipese-test.epfl.ch/reho-fm/"}],
"external_links": [{"name": "REHO-fm", "url": "https://ipese-test.epfl.ch/reho-fm/"}],
"icon_links": [{"name": "IPESE",
"url": "https://ipese-web.epfl.ch/ipese-blog/",
"icon": "https://github.com/IPESE/REHO/blob/documentation/documentation/images/logos/ipese_square.png?raw=true",
@@ -72,20 +70,23 @@
'pandas',
'openpyxl',
'numpy',
'scipy',
'scikit-learn',
'scikit-learn-extra',
'psycopg2',
'requests',
'sqlalchemy',
'scipy',
'psycopg2',
'geopandas',
'matplotlib',
'plotly',
'geopandas',
'urllib3',
'kaleido',
'dotenv',
'requests',
'coloredlogs',
'SALib',
'qmcpy',
'pvlib']
'pvlib',
'pyproj',
'shapely']
sys.modules['scikit-learn'] = MagicMock()
sys.modules['sklearn'] = MagicMock()
sys.modules['sklearn.metrics'] = MagicMock()
@@ -94,4 +95,3 @@
sys.modules['sklearn_extra.cluster'] = MagicMock()
sys.modules['sqlalchemy'] = MagicMock()
sys.modules['sqlalchemy.dialects'] = MagicMock()
sys.modules['shapely'] = MagicMock()
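The mocking pattern used in this ``conf.py``, heavy packages listed in ``autodoc_mock_imports`` plus explicit ``MagicMock`` entries in ``sys.modules`` for submodules that get imported directly, can be reproduced standalone. The module names below are illustrative, not REHO's actual list:

```python
import sys
from unittest.mock import MagicMock

# Packages Sphinx autodoc should stub out instead of importing for real.
autodoc_mock_imports = ['numpy', 'pandas', 'scipy']

# Submodules imported explicitly (e.g. `from sklearn.metrics import ...`)
# need their own entries in sys.modules, since autodoc_mock_imports alone
# does not always cover nested imports done at module level.
for name in ['sklearn', 'sklearn.metrics', 'sklearn_extra', 'sklearn_extra.cluster']:
    sys.modules[name] = MagicMock()

# Any attribute pulled from a mocked module is itself a MagicMock,
# so documented code can be imported without the real dependency installed.
from sklearn.metrics import silhouette_score
```

With this in place, ``sphinx-build`` can import and document modules that depend on these packages even when they are absent from the docs build environment.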
4 changes: 2 additions & 2 deletions documentation/data/methods.csv
@@ -9,15 +9,15 @@
*switch_off_second_objective*;To generate the Pareto curve by minimizing only one objective and constraining the other one. By default, both objectives are successively minimized and constrained.;False
**Profiles**;;
*include_stochasticity*;Includes variability among SIA typical consumption profiles;False
*sd_stochasticity*;If include_stochasticity is True, allows to specify a list [sd_consumption, sd_timeshift] to choose the variability in 1-consumption and 2-moment of the consumption;None
*sd_stochasticity*;If include_stochasticity is True, specify the variability parameters through a list [sd_consumption, sd_timeshift] where sd_consumption is the standard deviation on the profile value, and sd_timeshift is the standard deviation on the profile time shift;None
*use_dynamic_emission_profiles*;Uses hourly values for electricity GWP;False
*use_custom_profiles*;Allows replacing the SIA profiles for DHW [L/h], electricity demands [W/h] and people gains [W/h] with custom ones, via a dictionary where the key is among [‘electricity’, ‘dhw’, ‘occupancy’] and the value is the path to the file;False
**Saving options**;;
*include_all_solutions*;For a district-scale optimization, gives the results from the SPs;False
*save_input_data*;Adds in the results file the input data (df_Buildings, df_Weather, df_Index);True
*save_timeseries*;Adds in the results file the timeseries results (df_Buildings_t and df_Unit_t);True
*save_streams*;Adds in the results file the streams-timeseries results (df_Streams_t);False
*save_lca*;dds in the results file the impact in terms of LCA indicators by units, hubs and energy carriers;False
*save_lca*;Adds in the results file the impact in terms of LCA indicators by units, hubs and energy carriers;False
*extract_parameters*;To extract all the parameters used in the optimization;False
*print_logs*;Prints the logs of the optimization(s);True
**Other**;;
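The flags documented in this table are passed to REHO as a ``method`` dictionary. A minimal sketch follows; the commented constructor call is an assumption about how the dictionary is handed over, not something shown in this diff, so check REHO's own example scripts for the real signature:

```python
# Sketch of a `method` dictionary built from the flags documented above.
method = {
    'include_stochasticity': True,
    'sd_stochasticity': [0.1, 1.0],        # [sd_consumption, sd_timeshift]
    'use_dynamic_emission_profiles': False,
    'include_all_solutions': False,
    'save_timeseries': True,
    'save_streams': False,
    'print_logs': True,
}

# Hypothetical usage (signature not confirmed by this diff):
# reho_model = REHO(qbuildings_data=..., units=..., grids=..., method=method)
```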
9 changes: 0 additions & 9 deletions documentation/sections/4_Package_structure.rst
@@ -47,7 +47,6 @@ Directory for data-related files.
- ``skydome/``
- ``weather/``


**model/**
==================

@@ -112,7 +111,6 @@ Core of the optimization model (model objectives, constraints, modelling equatio
.. automodule:: reho.model.preprocessing.clustering
:members:


`electricity_prices.py`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -179,22 +177,17 @@ Core of the optimization model (model objectives, constraints, modelling equatio
.. automodule:: reho.model.infrastructure
:members:


*reho.py*
------------------------------

.. automodule:: reho.model.reho
:members:


**plotting/**
==================

.. automodule:: reho.plotting

- ``layout.csv``: the plotting relies on this file to get the *color* and the *labels* that characterize the units and the layers.
- ``sia380_1.csv``: contains the translation of building's affectation in roman numbering to labels in the SIA 380/1 norm.

*plotting.py*
---------------

@@ -211,9 +204,7 @@ Core of the optimization model (model objectives, constraints, modelling equatio

.. automodule:: reho.plotting.sankey


*paths.py*
==================

.. automodule:: reho.paths

8 changes: 5 additions & 3 deletions documentation/sections/5_Getting_started.rst
@@ -128,15 +128,17 @@ Please include a ``venv`` at the project root folder and install dependencies wi
pip install -r requirements.txt
.. warning::
The ``psycopg2`` dependency is known to cause some issues, as some prerequisites are fequently missing (i.e. the PostgreSQL library and Python development tools). For Windows users, there are binary wheels for Windows in PyPI so this should no longer be an issue. But for Linux and Mac users, 2 options are suggested:
The ``psycopg2`` dependency is known to cause some issues, as some prerequisites are frequently missing (i.e. the PostgreSQL library and Python development tools). For Windows users, the binary wheel ``psycopg2-binary`` is already specified in REHO's requirements so this should no longer be an issue.

1. Install the psycopg2-binary PyPI package instead, as it has Python wheels for Linux and Mac OS.
For Linux and Mac users, 2 options are suggested:

1. Try to install the ``psycopg2-binary`` instead:

.. code-block:: bash
pip install psycopg2-binary
2. Install the prerequisites for building the ``psycopg2`` package from source:
2. Install the prerequisites for building ``psycopg2`` from source:

.. grid:: 1 2 2 2
:gutter: 4
9 changes: 4 additions & 5 deletions documentation/sections/7_Contribute.rst
@@ -36,7 +36,7 @@ The repository consists of three types of branches:

- ``main``: Contains a stable and interoperable version of the code. Protected branch, only *Administrators* can push.
- ``documentation``: Contains additions and corrections to the tool documentation (this may concern .rst files, but also the DocStrings for the Python functions and classes). Anyone can push.
- *Others*: Used by *Developers* to collaborate and develop advanced features, or by *Users* to generate results.
- **Others**: Used by *Developers* to collaborate and develop advanced features, or by *Users* to generate results.

Reporting issues
-------------------
@@ -76,14 +76,13 @@ Documentation
The purpose of the documentation is to consolidate all the knowledge generated by the REHO community.
It is created using Sphinx and is an integral part of the repository. As such, it is open for modification by all users.

Feel free to edit the ``.rst`` files in the ``documentation`` directory, or to modify the DocStrings for the Python functions and classes as needed.
Feel free to edit the .rst files in the ``reho/documentation`` directory, or to modify the docstrings for the Python functions and classes as needed.

If you are not familiar with Sphinx, you can look at `Getting started with Sphinx <https://docs.readthedocs.io/en/stable/intro/getting-started-with-sphinx.html>`_.

Communication
================

A communication platform is available to quick chat with other Users and Developers.
We welcome new members to join our `REHO community on Mattermost <https://ipese-mattermost.epfl.ch/signup_user_complete/?id=6ukmwrxfufgmdcajm8ok6krfxo&md=link&sbr=su>`_.
In addition to the GitHub repository for exchanging ideas on the development of the tool, a communication platform is available for quick chats with other Users and Developers. We welcome new members to join our `REHO community on Mattermost <https://ipese-mattermost.epfl.ch/signup_user_complete/?id=6ukmwrxfufgmdcajm8ok6krfxo&md=link&sbr=su>`_.

You can also use the server to directly contact one of the Administrators if you have any questions.
You can also use this chat to directly contact one of the Administrators (@dorsan or @cedric_terrier) if you have any questions.
15 changes: 7 additions & 8 deletions reho/model/infrastructure.py
@@ -5,7 +5,6 @@

from reho.paths import *


__doc__ = """
File for handling infrastructure parameters.
"""
@@ -326,7 +325,7 @@ def set_discretize_unit_size(self):

def prepare_units_array(file, exclude_units=[], grids=None):
"""
Prepares the array that will be used in the initialize_units.
Prepares the array that will be used by initialize_units.
Parameters
----------
@@ -340,7 +339,7 @@ def prepare_units_array(file, exclude_units=[], grids=None):
Returns
-------
np.array
Array that contains the one dictionary by cell, containing the units' information.
Contains one dictionary in each cell, with the parameters for a specific unit.
See also
--------
@@ -349,7 +348,7 @@
Notes
-----
- Make sure the names of the columns you are using are the same as the ones from the default files, which can be found
in *data/infrastructure*.
in ``data/infrastructure``.
- The names of the units, which will be used as keys, do not matter, but the *UnitOfType* must be among a defined
  list of possibilities.
"""
@@ -440,16 +439,16 @@ def initialize_units(scenario, grids=None, building_data=os.path.join(path_to_in
Returns
-------
dict
A dictionary containing building_units and district_units.
Contains building_units and district_units.
See also
--------
initialize_grids
Notes
-----
- The default files are located at *reho/data/parameters*.
- The custom files can be given as absolute or relative path
- The default files are located in ``reho/data/parameters``.
- The custom files can be given as absolute or relative path.
Examples
--------
@@ -504,7 +503,7 @@ def initialize_grids(available_grids={'Electricity': {}, 'NaturalGas': {}},
Returns
-------
dict
A dictionary containing information about the initialized grids.
Contains information about the initialized grids.
See also
--------
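Per the docstrings above, ``initialize_units`` returns a dictionary with ``building_units`` and ``district_units``, each built from the per-unit dictionaries produced by ``prepare_units_array``. A hedged sketch of that return shape; the ``'name'`` key and the concrete values are illustrative assumptions, only the top-level keys and the *UnitOfType* convention come from the docstrings:

```python
# Shape sketch only -- the real dictionaries come from
# reho.model.infrastructure.initialize_units(scenario, grids).
units = {
    'building_units': [
        # prepare_units_array yields one dictionary per unit;
        # 'UnitOfType' must belong to a predefined list of possibilities.
        {'name': 'HP_Building1', 'UnitOfType': 'HeatPump'},
        {'name': 'PV_Building1', 'UnitOfType': 'PV'},
    ],
    'district_units': [],
}

# Collect the unit types declared at building scale.
building_types = {u['UnitOfType'] for u in units['building_units']}
```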
1 change: 0 additions & 1 deletion reho/model/master_problem.py
@@ -11,7 +11,6 @@
from reho.model.preprocessing.local_data import *
from reho.model.sub_problem import *


__doc__ = """
File for handling data and optimization for an AMPL master problem.
"""
4 changes: 2 additions & 2 deletions reho/model/postprocessing/__init__.py
@@ -1,3 +1,3 @@
__doc__ = """
Directory where the output of the optimization from the AMPL model is extracted and processed to give a REHO results dictionary.
"""
Directory where the output of the optimization from the AMPL model is extracted and processed to give a ``reho.results`` dictionary.
"""
31 changes: 15 additions & 16 deletions reho/model/postprocessing/sensitivity_analysis.py
@@ -7,27 +7,29 @@

from reho.model.reho import *

__doc__ = """
Performs a sensitivity analysis on the optimization.
"""


class SensitivityAnalysis:
"""
Performs a sensitivity analysis (SA): sampling, problem, store all optimizations results and the
sensitivity of each tested parameters.
Performs a sensitivity analysis (SA): sampling, solving, storing all optimization results and the sensitivity of each tested parameter.
Parameters
----------
reho : reho object
Model of the district, obtained via the REHO class.
SA_type : str
Type of SA (Morris or Sobol).
Type of SA, choose between 'Morris', 'Sobol', and 'Monte_Carlo'.
sampling_parameters : int
Number of trajectories for the sampling of the solution space.
upscaling_factor : int
To represent the effective ERA of the typical districts.
Notes
-------
The framework is designed to be performed using TOTEX minimization but can easily be modified,
just change the objective function of the reho and the KPI saved into OBJ in the function run_SA()
The framework is designed to be performed using TOTEX minimization but can easily be modified: simply change the objective function in the REHO object initialization, and adapt the calculation for ``objective_values`` in ``extract_results()``.
"""

def __init__(self, reho, SA_type, sampling_parameters=0, upscaling_factor=1):
@@ -41,7 +43,7 @@ def __init__(self, reho, SA_type, sampling_parameters=0, upscaling_factor=1):
self.parameter = {}
self.problem = {}
self.sampling = []
self.OBJ = []
self.objective_values = []
self.SA_results = {'num_optimizations': [], 'dict_df_results': [], 'dict_res_ES': []}
self.sensitivity = []

@@ -64,10 +66,9 @@ def get_lists(self):

def build_SA(self, unit_parameter=['Cost_inv1', 'Cost_inv2'], SA_parameters={}):
"""
Description:
- Generate the list of parameters for the SA, their values and type of variation range
- Generate the problem of the SA, i.e. define the parameters and theirs bounds
- Generate the sampling scheme of the SA
- Generates the list of parameters for the SA, their values and type of variation range
- Generates the problem of the SA, i.e. defines the parameters and their bounds
- Generates the sampling scheme of the SA
Parameters
----------
@@ -151,7 +152,7 @@ def run_SA(self, save_inter=True, save_inter_nb_iter=50, save_time_opt=True, int
---------
SA_results : dict
Contains the number of the optimization and a dictionary regrouping all main results of the optimizations
OBJ : list
objective_values : list
Values of the objective function for each optimization
"""

@@ -228,9 +229,9 @@ def calculate_SA(self):
Computes the sensitivity indices with the objective values and the problem.
"""
if self.SA_type == "Sobol":
sensitivity = sobol_analyze.analyze(self.problem, np.array(self.OBJ), print_to_console=True, calc_second_order=False)
sensitivity = sobol_analyze.analyze(self.problem, np.array(self.objective_values), print_to_console=True, calc_second_order=False)
if self.SA_type == "Morris":
sensitivity = morris_analyze.analyze(self.problem, self.sampling, np.array(self.OBJ), print_to_console=True)
sensitivity = morris_analyze.analyze(self.problem, self.sampling, np.array(self.objective_values), print_to_console=True)
self.sensitivity = sensitivity

def plot_Morris(self, save=False):
@@ -269,9 +270,7 @@ def extract_results(self, reho, j):

self.SA_results['num_optimizations'].append(j)
self.SA_results['dict_df_results'].append(dict_res)
self.OBJ.append(reho.results[self.SA_type][0].df_Performance['Costs_inv']['Network']
+ reho.results[self.SA_type][0].df_Performance['Costs_op']['Network']
+ reho.results[self.SA_type][0].df_Performance['Costs_rep']['Network'])
self.objective_values.append(reho.results[self.SA_type][0].df_Performance['Costs_inv']['Network'] + reho.results[self.SA_type][0].df_Performance['Costs_op']['Network'] + reho.results[self.SA_type][0].df_Performance['Costs_rep']['Network'])

df_Grid_t = reho.results[self.SA_type][0].df_Grid_t[['Grid_demand', 'Grid_supply']].groupby(['Layer', 'Hub', 'Period']).sum()
df_Annuals = reho.results[self.SA_type][0].df_Annuals
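``calculate_SA`` above delegates the Morris analysis to SALib. The underlying idea, averaging absolute one-at-a-time elementary effects into a mu* index, can be sketched without SALib. This is a simplified radial design, not SALib's actual trajectory scheme:

```python
import random

def elementary_effects(f, k, r=100, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) per parameter, on [0, 1]^k."""
    rng = random.Random(seed)
    mu_star = [0.0] * k
    for _ in range(r):
        # Sample a base point, keeping room for the +delta perturbation.
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        f0 = f(base)
        for i in range(k):
            perturbed = list(base)
            perturbed[i] += delta             # move one parameter at a time
            ee = (f(perturbed) - f0) / delta  # elementary effect of parameter i
            mu_star[i] += abs(ee) / r
    return mu_star

# For a linear model the effects recover the coefficients
# (up to floating-point error), so parameter 0 dominates here.
mu = elementary_effects(lambda x: 3.0 * x[0] + 0.5 * x[1], k=2)
```

SALib's Morris implementation additionally reports the standard deviation of the effects (interaction/non-linearity indicator) and uses a proper trajectory sampling scheme, which is why ``run_SA`` relies on the library rather than a sketch like this.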
2 changes: 1 addition & 1 deletion reho/model/preprocessing/EV_profile_generator.py
@@ -4,7 +4,7 @@
import numpy as np

__doc__ = """
Generates electric vehicle (EV) demand profiles.
Generates demand profiles for electric vehicles (EVs).
"""


15 changes: 7 additions & 8 deletions reho/model/preprocessing/QBuildings.py
@@ -13,10 +13,14 @@
import reho.model.preprocessing.skydome as skydome
from reho.paths import *

__doc__ = """
Handles data for buildings characterization.
"""


class QBuildingsReader:
"""
This class is used to handle and prepare the data related to buildings.
Handles and prepares the data related to buildings.
These usually come from the `GBuildings <https://ipese-web.epfl.ch/lepour/qbuildings/index.html>`_ database. However,
one can use data from a csv, in which case the column names should correspond to the GBuildings ones, described in
@@ -94,7 +98,7 @@ def establish_connection(self, db):

return

def read_csv(self, buildings_filename='buildings.csv', nb_buildings=None, roofs_filename='roofs.csv', facades_filename='facades.csv'):
def read_csv(self, buildings_filename='data/buildings.csv', nb_buildings=None, roofs_filename='data/roofs.csv', facades_filename='data/facades.csv'):
"""
Reads buildings-related data from CSV files and prepares it for the REHO model.
@@ -120,13 +124,8 @@ def read_csv(self, buildings_filename='buildings.csv', nb_buildings=None, roofs_
Notes
-----
- If `nb_buildings` is not provided, all buildings in the 'buildings' data are considered.
- If ``nb_buildings`` is not provided, all buildings in the 'buildings' data are considered.
- If ``load_roofs = True``, `roofs_filename` must be provided, else it is not useful. Same goes for the facades.
- This function can be used with default files in case one does not want to connect to the database and does
not need a particular building.
In that case, do not fill any filename. `buildings.csv`, `roofs.csv` and `facades.csv`
will be used by default.
It should be noted that those names are therefore reserved for the default and cannot be used for your own files.
Example
-------