Fix a few typos in docstrings (#695)
* Fix a typo in README.md

* Fix typos in docstrings and comments

* Fix typo in comment

* Fix typos in docstrings and comments

* Fix typos in docstrings

* Fix typos in docstrings

* Fix typos in comments

* Fix typo in comment

* Fix typos in docstrings

* Fix typos in docstrings and comments

* Fix typos in docstrings

* Fix typo in docstring

* Add release note

* Update build_natura_raster.py
pitmonticone committed Apr 21, 2023
1 parent 309e837 commit 2f43229
Showing 13 changed files with 44 additions and 42 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -112,7 +112,7 @@ There are multiple ways to get involved and learn more about our work. That's ho
.../pypsa-earth % jupyter lab
```
5. Verify or install a java redistribution from the [official website](https://www.oracle.com/java/technologies/downloads/) or equivalent.
-To verify the successfull installation the following code can be tested from bash:
+To verify the successful installation the following code can be tested from bash:

```bash
.../pypsa-earth % java -version
2 changes: 2 additions & 0 deletions doc/release_notes.rst
@@ -90,6 +90,8 @@ Upcoming Release

* Add *zenodo_handler.py* to update and upload files via code `PR #688 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/688>`__

+* Fix a few typos in docstrings `PR #695 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/695>`__

PyPSA-Earth 0.1.0
=================

2 changes: 1 addition & 1 deletion scripts/build_natura_raster.py
@@ -116,7 +116,7 @@ def get_transform_and_shape(bounds, res, out_logging):

def unify_protected_shape_areas(inputs, natura_crs, out_logging):
"""
-Iterates thorugh all snakemake rule inputs and unifies shapefiles (.shp) only.
+Iterates through all snakemake rule inputs and unifies shapefiles (.shp) only.
The input is given in the Snakefile and shapefiles are given by .shp
4 changes: 2 additions & 2 deletions scripts/build_renewable_profiles.py
@@ -186,7 +186,7 @@
overestimate production since it is assumed the geographical distribution is
proportional to capacity factor.
-- ``conservative`` assertains the nodal limit by increasing capacities
+- ``conservative`` ascertains the nodal limit by increasing capacities
proportional to the layout until the limit of an individual grid cell is
reached.
@@ -263,7 +263,7 @@ def get_hydro_capacities_annual_hydro_generation(fn, countries, year):
def check_cutout_completness(cf):
"""
Check if a cutout contains missed values.
-That may be the case due to some issues witht accessibility of ERA5 data
+That may be the case due to some issues with accessibility of ERA5 data
See for details https://confluence.ecmwf.int/display/CUSF/Missing+data+in+ERA5T
Returns share of cutout cells with missed data
"""
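The "share of cells with missed data" check described in this docstring can be sketched in plain numpy; the array below is an illustrative stand-in for the real cutout object, not the script's actual implementation:

```python
import numpy as np

def share_missing(values: np.ndarray) -> float:
    """Return the fraction of cells that contain missing (NaN) data."""
    return float(np.isnan(values).mean())

# illustrative cutout-like array: two missing cells out of eight
cells = np.array([[1.0, np.nan, 3.0, 4.0],
                  [5.0, 6.0, np.nan, 8.0]])
print(share_missing(cells))  # 0.25
```

A caller would compare this share against a tolerance and warn (or re-download) when it is non-zero, which matches the linked ERA5T missing-data note.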
6 changes: 3 additions & 3 deletions scripts/build_shapes.py
@@ -113,7 +113,7 @@ def filter_gadm(
# force GID_0 to be the country code for the relevant countries
geodf["GID_0"] = cc

-# country shape should have a single geomerty
+# country shape should have a single geometry
if (layer == 0) and (geodf.shape[0] > 1):
logger.warning(
f"Country shape is composed by multiple shapes that are being merged in agreement to contented_flag option '{contended_flag}'"
@@ -526,7 +526,7 @@ def download_WorldPop_API(
def convert_GDP(name_file_nc, year=2015, out_logging=False):
"""
Function to convert the nc database of the GDP to tif, based on the work at https://doi.org/10.1038/sdata.2018.4.
-The dataset shall be downloaded independently by the user (see guide) or toghether with pypsa-earth package.
+The dataset shall be downloaded independently by the user (see guide) or together with pypsa-earth package.
"""

if out_logging:
@@ -577,7 +577,7 @@ def load_GDP(
):
"""
Function to load the database of the GDP, based on the work at https://doi.org/10.1038/sdata.2018.4.
-The dataset shall be downloaded independently by the user (see guide) or toghether with pypsa-earth package.
+The dataset shall be downloaded independently by the user (see guide) or together with pypsa-earth package.
"""

if out_logging:
6 changes: 3 additions & 3 deletions scripts/build_test_configs.py
@@ -5,7 +5,7 @@

# -*- coding: utf-8 -*-
"""
-Write option files (configs) for the Continous Integration tests
+Write option files (configs) for the Continuous Integration tests
The config.tutorial.yaml has all options.
The test/* config files have only key/value strings that are different from the tutorial config.
@@ -49,10 +49,10 @@ def create_test_config(default_config, diff_config, output_path):
Inputs
------
default_config : dict or path-like
-Default dictionray-like object provided as
+Default dictionary-like object provided as
a dictionary or a path to a yaml file
diff_config : dict or path-like
-Difference dictionray-like object provided as
+Difference dictionary-like object provided as
a dictionary or a path to a yaml file
output_path : path-like
Output path where the merged dictionary is saved
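The merge of a default config with a difference config, as described in this docstring, can be sketched as a recursive dictionary overlay; the function name and sample keys below are illustrative and not the script's actual implementation:

```python
def merge_configs(default: dict, diff: dict) -> dict:
    """Recursively overlay diff onto default, keeping unchanged keys."""
    merged = dict(default)
    for key, value in diff.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value
    return merged

default = {"scenario": {"clusters": 10, "ll": "copt"}, "solver": "glpk"}
diff = {"scenario": {"clusters": 4}}
print(merge_configs(default, diff))
# {'scenario': {'clusters': 4, 'll': 'copt'}, 'solver': 'glpk'}
```

This is why the `test/*` configs only need the key/value strings that differ from the tutorial config: everything else falls through from the default.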
2 changes: 1 addition & 1 deletion scripts/clean_osm_data.py
@@ -42,7 +42,7 @@ def prepare_substation_df(df_all_substations):
}
)

-# Add longitute (lon) and latitude (lat) coordinates in the dataset
+# Add longitude (lon) and latitude (lat) coordinates in the dataset
df_all_substations["lon"] = df_all_substations["geometry"].x
df_all_substations["lat"] = df_all_substations["geometry"].y

28 changes: 14 additions & 14 deletions scripts/config_osm_data.py
@@ -450,21 +450,21 @@
# Australasia region includes New Caledonia and Papua New Guinea
continent_regions = {
# European regions
-"SCR": ["DK", "NO", "SE", "FI", "IS"], # SCANDANAVIAN REGION
-# EASTREN EUROPIAN REGION
+"SCR": ["DK", "NO", "SE", "FI", "IS"], # SCANDINAVIAN REGION
+# EASTERN EUROPEAN REGION
"EER": ["BY", "PL", "CZ", "RU", "SK", "UA", "LT", "LV", "EE", "FI", "MD"],
-# CENTRAL EUROPIAN REGION
+# CENTRAL EUROPEAN REGION
"CER": ["AT", "CH", "CZ", "DE", "HU", "PL", "SK", "LI"],
# BALKAN PENISULAN REGION
"BPR": ["AL", "BA", "BG", "GR", "HR", "ME", "RO", "SI", "RS", "ME", "MK"],
-# WESTREN EUROPE
+# WESTERN EUROPE
"WER": ["FR", "BE", "GB", "IE", "LU", "MC", "NL", "AD"],
-# SOUTHERN EUROPAIN REGION
+# SOUTHERN EUROPEAN REGION
"SER": ["ES", "AD", "IT", "PT", "SM", "MT"],
# African regions
# NORTHERN AFRICAN REGION
"NAR": ["EG", "LY", "TN", "DZ", "MA", "EH", "SD", "SS"],
-# WESTREN AFRICAN REGION
+# WESTERN AFRICAN REGION
# Guinea-Bissau ["GW"] belongs to the region but power data are NA in OSM)
"WAR": [
"MR",
@@ -484,7 +484,7 @@
],
# CENTRAL AFRICAN REGION
"CAR": ["TD", "CF", "CM", "GQ", "GA", "CD", "CG", "AO"],
-# EASTREN AFRICAN REGION
+# EASTERN AFRICAN REGION
# Somalia ["SO"] belongs to the region but power data are NA in OSM)
"EAR": ["ER", "ET", "UG", "KE", "RW", "BI", "TZ", "MZ", "DJ", "MG"],
# SOUTHERN AFRICAN REGION
@@ -511,15 +511,15 @@
"AE",
"YE",
],
-# FAR EASTREN AISIAN REGION
+# FAR EASTERN ASIAN REGION
"FEAR": ["JP", "KP", "KR", "CN", "TW", "MN"], # , "HK", "MO"],
-# SOUTHEASTREN AISIAN REGION
+# SOUTHEASTERN ASIAN REGION
"SEAR": ["LA", "TH", "KH", "VN", "PH", "MY", "SG", "BN", "ID"],
-# CENTRAL AISIAN REGION
+# CENTRAL ASIAN REGION
"CASR": ["KZ", "KG", "UZ", "TM", "TJ"],
-# SOUTHERN AISIAN REGION
+# SOUTHERN ASIAN REGION
"SASR": ["MM", "BD", "BT", "NP", "IN", "LK", "PK", "AF"],
-# MIDDLE EASTREN ASIAN REGION
+# MIDDLE EASTERN ASIAN REGION
"MEAR": [
"TR",
"SY",
@@ -539,7 +539,7 @@
"OM",
],
# American continent regions
-"NACR": ["CA", "GL", "MX", "US"], # NORTHERN AMERCAN CONTINENT REGION
+"NACR": ["CA", "GL", "MX", "US"], # NORTHERN AMERICAN CONTINENT REGION
# SOUTHERN LATIN AMERICAN REGION
"LACR": ["AR", "BO", "BR", "CL", "CO", "EC", "GF", "PE", "PY", "SR", "UY", "VE"],
# CENTRAL AMERICAN REGION
@@ -552,7 +552,7 @@

# Geofabrik and iso norm deviate for some countries and domains

-# dictionary of correspondance between iso country codes and geofabrik codes containing those information
+# dictionary of correspondence between iso country codes and geofabrik codes containing those information
# This dictionary instructs the script download_osm_data about how to successfully download data
# from countries that are aggregated into osm.
# For example, Senegal (SN) and Gambia (GM) cannot be downloaded from OSM separately, but only jointly as SN-GM
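The Senegal/Gambia example from the comment above can be illustrated with a minimal mapping; the Geofabrik domain string and helper name here are a sketch, not the exact codes used by `download_osm_data`:

```python
# hypothetical excerpt: both ISO codes resolve to the same joint Geofabrik domain
iso_to_geofabrik = {
    "SN": "senegal-and-gambia",
    "GM": "senegal-and-gambia",
}

def geofabrik_code(iso_code: str) -> str:
    """Fall back to the ISO code itself when no deviation is recorded."""
    return iso_to_geofabrik.get(iso_code, iso_code)

print(geofabrik_code("SN"))  # senegal-and-gambia
print(geofabrik_code("NG"))  # NG
```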
4 changes: 2 additions & 2 deletions scripts/make_statistics.py
@@ -337,7 +337,7 @@ def collect_shape_stats(rulename="build_shapes", area_crs="ESRI:54009"):

def collect_snakemake_stats(name, dict_dfs, config):
"""
-Collect statistics on what rules have been successfull
+Collect statistics on what rules have been successful
"""
ren_techs = [
tech
@@ -408,7 +408,7 @@ def weigh_avg(df, coldata="total_time", colweight="mean_load"):
def collect_renewable_stats(rulename, technology):
"""
Collect statistics on the renewable time series generated by the workflow:
-- potantial
+- potential
- average production by plant (hydro) or bus (other RES)
"""
snakemake = _mock_snakemake(rulename, technology=technology)
4 changes: 2 additions & 2 deletions scripts/make_summary.py
@@ -25,7 +25,7 @@
-------
Description
-----------
-The following rule can be used to summarize the results in seperate .csv files:
+The following rule can be used to summarize the results in separate .csv files:
.. code::
snakemake results/summaries/elec_s_all_lall_Co2L-3H_all
clusters
@@ -35,7 +35,7 @@
the line volume/cost cap field can be set to one of the following:
* ``lv1.25`` for a particular line volume extension by 25%
* ``lc1.25`` for a line cost extension by 25 %
-* ``lall`` for all evalutated caps
+* ``lall`` for all evaluated caps
* ``lvall`` for all line volume caps
* ``lcall`` for all line cost caps
Replacing '/summaries/' with '/plots/' creates nice colored maps of the results.
12 changes: 6 additions & 6 deletions scripts/monte_carlo.py
@@ -42,23 +42,23 @@
-----------
PyPSA-Earth is deterministic which means that a set of inputs give a set of outputs.
Parameter sweeps can help to explore the uncertainty of the outputs cause by parameter changes.
-Many are familar with the classical "sensitvity analysis" that can be applied by varying the
+Many are familiar with the classical "sensitivity analysis" that can be applied by varying the
input of only one feature, while exploring its outputs changes. Here implemented is a
-"global sensitvity analysis" that can help to explore the multi-dimensional uncertainty space
+"global sensitivity analysis" that can help to explore the multi-dimensional uncertainty space
when more than one feature are changed at the same time.
To do so, the scripts is separated in two building blocks: One creates the experimental design,
the other, modifies and outputs the network file. Building the experimental design is currently
supported by the packages pyDOE2, chaospy and scipy. This should give users the freedom to
explore alternative approaches. The orthogonal latin hypercube sampling is thereby found as most
-performant, hence, implemented here. Sampling the mutli-dimensional uncertainty space is relatively
+performant, hence, implemented here. Sampling the multi-dimensional uncertainty space is relatively
easy. It only requires two things: The number of *samples* (e.g. PyPSA networks) and *features* (e.g.
-load or solar timeseries). This results in an experimental design of the dimenson (samples X features).
+load or solar timeseries). This results in an experimental design of the dimension (samples X features).
Additionally, upper and lower bounds *per feature* need to be provided such that the experimental
design can be scaled accordingly. Currently the user can define uncertainty ranges e.g. bounds,
for all PyPSA objects that are `int` or `float`. Boolean values could be used but require testing.
-The experimental design `lhs_scaled` (dimension: samplex X features) is then used to modify the PyPSA
+The experimental design `lhs_scaled` (dimension: sample X features) is then used to modify the PyPSA
networks. Thereby, this script creates samples x amount of networks. The iterators comes from the
wildcard {unc}, which is described in the config.yaml and created in the Snakefile as a range from
0 to (total number of) SAMPLES.
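The sampling step this docstring describes — an N-samples by M-features design drawn by Latin hypercube and scaled to per-feature bounds — can be sketched with scipy's quasi-Monte Carlo module; the sample count, feature count, and bounds below are illustrative, not the script's configured values:

```python
from scipy.stats import qmc

n_samples, n_features = 8, 2  # e.g. 8 networks, 2 uncertain features
sampler = qmc.LatinHypercube(d=n_features, seed=42)
lhs = sampler.random(n=n_samples)          # raw design in [0, 1)

# hypothetical per-feature bounds, e.g. load and solar scaling factors
lower, upper = [0.8, 0.9], [1.3, 1.1]
lhs_scaled = qmc.scale(lhs, lower, upper)  # shape: (samples, features)
print(lhs_scaled.shape)  # (8, 2)
```

Each row of `lhs_scaled` then defines one scenario, i.e. one modified PyPSA network, which matches the {unc} wildcard running from 0 to SAMPLES.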
@@ -230,7 +230,7 @@ def monte_carlo_sampling_scipy(
# this loop sets in one scenario each "i" feature assumption
# k is the config input key "loads_t.p_set"
# v is the lower and upper bound [0.8,1.3], that was used for lh_scaled
-# i, j interation number to pick values of experimental setup
+# i, j interaction number to pick values of experimental setup
# Example: n.loads_t.p_set = network.loads_t.p_set = .loads_t.p_set * lh_scaled[0,0]
exec(f"n.{k} = n.{k} * {lh_scaled[i,j]}")
logger.info(f"Scaled n.{k} by factor {lh_scaled[i,j]} in the {i} scenario")
2 changes: 1 addition & 1 deletion scripts/retrieve_databundle_light.py
@@ -559,7 +559,7 @@ def get_best_bundles(countries, config_bundles, tutorial, config_enable):
set([config_bundles[conf]["category"] for conf in config_bundles])
)

-# idenfify matched countries for every bundle
+# identify matched countries for every bundle
for bname in config_bundles:
config_bundles[bname]["matched_countries"] = [
c for c in config_bundles[bname]["countries"] if c in countries
12 changes: 6 additions & 6 deletions scripts/simplify_network.py
@@ -78,7 +78,7 @@
1. Create an equivalent transmission network in which all voltage levels are mapped to the 380 kV level by the function ``simplify_network(...)``.
-2. DC only sub-networks that are connected at only two buses to the AC network are reduced to a single representative link in the function ``simplify_links(...)``. The components attached to buses in between are moved to the nearest endpoint. The grid connection cost of offshore wind generators are added to the captial costs of the generator.
+2. DC only sub-networks that are connected at only two buses to the AC network are reduced to a single representative link in the function ``simplify_links(...)``. The components attached to buses in between are moved to the nearest endpoint. The grid connection cost of offshore wind generators are added to the capital costs of the generator.
3. Stub lines and links, i.e. dead-ends of the network, are sequentially removed from the network in the function ``remove_stubs(...)``. Components are moved along.
@@ -116,7 +116,7 @@ def simplify_network_to_380(n, linetype):
The function preserves the transmission capacity for each line while updating
its voltage level, line type and number of parallel bundles (num_parallel).
Transformers are removed and connected components are moved from their
-starting bus to their ending bus. The corresponing starting buses are
+starting bus to their ending bus. The corresponding starting buses are
removed as well.
"""
logger.info("Mapping all network lines onto a single 380kV layer")
@@ -460,12 +460,12 @@ def aggregate_to_substations(n, aggregation_strategies=dict(), buses_i=None):
buses_i
] = np.inf # bus in buses_i should not be assigned to different bus in buses_i

-# avoid assignnment a bus to a wrong country
+# avoid assignment a bus to a wrong country
for c in n.buses.country.unique():
incountry_b = n.buses.country == c
dist.loc[incountry_b, ~incountry_b] = np.inf

-# avoid assignnment DC buses to AC ones
+# avoid assignment DC buses to AC ones
for c in n.buses.carrier.unique():
incarrier_b = n.buses.carrier == c
dist.loc[incarrier_b, ~incarrier_b] = np.inf
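The masking idiom shown in these loops — setting cross-group distances to infinity so nearest-neighbour assignment never leaves the group — can be demonstrated on a toy distance matrix (bus names and countries here are invented for illustration):

```python
import numpy as np
import pandas as pd

buses = pd.DataFrame({"country": ["DE", "DE", "FR"]}, index=["b0", "b1", "b2"])
dist = pd.DataFrame(np.array([[0.0, 1.0, 0.5],
                              [1.0, 0.0, 2.0],
                              [0.5, 2.0, 0.0]]),
                    index=buses.index, columns=buses.index)

# forbid assignments across countries, mirroring the loop in the diff
for c in buses.country.unique():
    incountry_b = buses.country == c
    dist.loc[incountry_b, ~incountry_b] = np.inf

print(dist.loc["b0", "b2"])  # inf: b0 can no longer be mapped to the FR bus
print(dist.loc["b0", "b1"])  # 1.0: the in-country distance is untouched
```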
@@ -656,7 +656,7 @@ def merge_isolated_nodes(n, threshold, aggregation_strategies=dict()):
n.loads_t.p_set[i_load_islands].mean(axis=0) <= threshold
]

-# all the noded to be merged should be mapped into a single node
+# all the nodes to be merged should be mapped into a single node
map_isolated_node_by_country = (
n.buses.loc[i_suffic_load].groupby("country")["bus_id"].first().to_dict()
)
@@ -779,7 +779,7 @@ def merge_isolated_nodes(n, threshold, aggregation_strategies=dict()):
- set(n.generators.query("carrier == @carrier").bus)
)
logger.info(
-f"clustering preparaton (hac): aggregating {len(buses_i)} buses of type {carrier}."
+f"clustering preparation (hac): aggregating {len(buses_i)} buses of type {carrier}."
)
n, busmap_hac = aggregate_to_substations(n, aggregation_strategies, buses_i)
busmaps.append(busmap_hac)
