Numpyrize backward induction. #145

Merged
merged 112 commits into from
Apr 2, 2019
Changes from 109 commits
Commits
112 commits
612fc02
Small formatting.
tobiasraabe Feb 6, 2019
15f5322
Added hotfix. Needs review.
tobiasraabe Feb 6, 2019
b3b9a47
Extract constructing covariates from pyth_calc_rewards_sys to pyth_so…
tobiasraabe Feb 7, 2019
437a6c4
Reduced pyth_create_state_space to minimum. Working until after covar…
tobiasraabe Feb 7, 2019
29d61a3
Running until after calculation of systematic rewards.
tobiasraabe Feb 7, 2019
99a2fe4
Small formatting.
tobiasraabe Feb 7, 2019
519f118
Everything running until pyth_simulate.
tobiasraabe Feb 8, 2019
ba4f7f8
Completed simulation.
tobiasraabe Feb 9, 2019
3809505
Estimation done, but could not test it.
tobiasraabe Feb 10, 2019
e1feba2
Added notes to delete some functions.
tobiasraabe Feb 10, 2019
51d036c
Deleted unnecessary statements.
tobiasraabe Feb 10, 2019
6f6b1af
Start to fix regression tests.
janosg Feb 11, 2019
6167fbe
fix
tobiasraabe Feb 11, 2019
f9edc7b
Small fixes and added states_indexer object.
tobiasraabe Feb 12, 2019
73334af
Adjustments in clsRespy.
janosg Feb 12, 2019
dc74705
Merge branch 'extract-covariates' into fix_regression_tests
janosg Feb 12, 2019
5906db2
Simulation runs, estimation not yet because we need an indexer object…
janosg Feb 12, 2019
f8d6012
Implement states_indexer in simulation, fixed simulated dataset.
tobiasraabe Feb 12, 2019
02a06a6
Integrated fix_regression_tests, added mail.
tobiasraabe Feb 12, 2019
14606da
implemented state class. maybe running.
tobiasraabe Feb 12, 2019
ebd14de
changes.
tobiasraabe Feb 14, 2019
fb8481e
Minimal changes to make run_regression work (#143)
Feb 14, 2019
98877e8
Merge branch 'janosg' into extract-covariates
tobiasraabe Feb 14, 2019
01e00e5
small fix.
tobiasraabe Feb 15, 2019
d28e723
Merge branch 'janosg' into extract-covariates
tobiasraabe Feb 15, 2019
d6eaea6
Made regression tests called from respy. Refactored get_total_values,…
tobiasraabe Feb 16, 2019
d8a6396
Added numba as dependency.
tobiasraabe Feb 16, 2019
90fb3a8
Solved bug. Regression tests up to 50 working.
tobiasraabe Feb 19, 2019
c8a7d4f
Cleaned last todos.
tobiasraabe Feb 19, 2019
960eb84
Small fixes.
tobiasraabe Feb 19, 2019
d0933f8
Fixed indexing in StateSpace. Removed unnecessary fast_routines.py.
tobiasraabe Feb 20, 2019
f43da02
Small changes to state space.
tobiasraabe Feb 22, 2019
3f253ec
Reduced looping in pyth_create_state_space and deleted unnecessary ch…
tobiasraabe Feb 22, 2019
314d397
Jitted pyth_create_state_space.
tobiasraabe Feb 22, 2019
8d4524d
fix.
tobiasraabe Feb 23, 2019
836e950
numpyrized create_covariates, but regression tests are failing.
tobiasraabe Feb 26, 2019
a3959b3
Fixed regression tests.
tobiasraabe Feb 26, 2019
4aa896e
Two fixes of pathlib objects.
janosg Mar 1, 2019
ab73d7d
Integration tests without Fortran are running.
tobiasraabe Mar 2, 2019
7a7d099
removed NaN handling in create_covariates:
tobiasraabe Mar 2, 2019
80edee1
first attempt at fixing test_f2py.py.
tobiasraabe Mar 2, 2019
a5cd02b
most tests are failing due to low pandas version.
tobiasraabe Mar 2, 2019
5a96cae
edit.
tobiasraabe Mar 3, 2019
227deb6
edit.
tobiasraabe Mar 4, 2019
a36616c
edit.
tobiasraabe Mar 4, 2019
70261b8
edit.
tobiasraabe Mar 4, 2019
2b4cbfa
edit.
tobiasraabe Mar 4, 2019
0abf250
edit.
tobiasraabe Mar 4, 2019
4b10b7a
edit.
tobiasraabe Mar 4, 2019
2b2bdf3
edit.
tobiasraabe Mar 4, 2019
de826ff
edit.
tobiasraabe Mar 4, 2019
b89d806
edit.
tobiasraabe Mar 4, 2019
88898f6
edit.
tobiasraabe Mar 4, 2019
ae8593f
just checking if fortran running on appveyor.
tobiasraabe Mar 4, 2019
5e2fa98
just checking if fortran running on appveyor.
tobiasraabe Mar 4, 2019
3e6f9c6
edit.
tobiasraabe Mar 4, 2019
765348b
Make tests at least run on travis.
tobiasraabe Mar 4, 2019
a7a73f7
Fix travis setup.
tobiasraabe Mar 4, 2019
2f2f5de
edit.
tobiasraabe Mar 4, 2019
e06ac02
edit.
tobiasraabe Mar 4, 2019
0439413
edit.
tobiasraabe Mar 4, 2019
d24f5ba
fix.
tobiasraabe Mar 4, 2019
ef487d7
fix.
tobiasraabe Mar 4, 2019
1637895
fix.
tobiasraabe Mar 4, 2019
d5bac8e
fix.
tobiasraabe Mar 4, 2019
44dfa75
fix.
tobiasraabe Mar 4, 2019
c6cad95
edit.
tobiasraabe Mar 4, 2019
9a67f2a
fix.
tobiasraabe Mar 4, 2019
cb71751
fix.
tobiasraabe Mar 4, 2019
af41091
Fixing f2py tests.
tobiasraabe Mar 5, 2019
0b62f36
More test fixes.
tobiasraabe Mar 5, 2019
5d84ed8
More fixes.
tobiasraabe Mar 5, 2019
8d00ec2
Fixed until segmentation fault :).
tobiasraabe Mar 5, 2019
9acc1d6
removed version pinning. Solved seg fault.
tobiasraabe Mar 6, 2019
154660a
Maybe fortran under Windows.
tobiasraabe Mar 6, 2019
092b5eb
fix.
tobiasraabe Mar 6, 2019
e169188
fixed tests.
tobiasraabe Mar 6, 2019
df7a0bd
Added auxiliary array which prevented compilation on Windows.
tobiasraabe Mar 7, 2019
6b66b21
Add informative test skip messages.
tobiasraabe Mar 7, 2019
78c9794
Delete erroneously added __init__.py
tobiasraabe Mar 7, 2019
ed49f6e
Add a little bit of installation instructions for fortran on windows.
tobiasraabe Mar 7, 2019
6726e30
Added preliminary version of numpyrized pyth_calc_sys_rew.
tobiasraabe Mar 8, 2019
87810ab
Fixed test.
tobiasraabe Mar 8, 2019
8240621
Added more doctests and silenced warnings in pytest configuration.
tobiasraabe Mar 8, 2019
468eac6
Fixed doctests.
tobiasraabe Mar 8, 2019
c9bb0a6
Fixed importing in testing.
tobiasraabe Mar 8, 2019
ee6c22d
Fix.
tobiasraabe Mar 8, 2019
652361d
Numpyrized state space and rewritten almost everything. Python tests …
tobiasraabe Mar 10, 2019
523cd4a
Fixed tests, issues with test_integration::test_12.
tobiasraabe Mar 11, 2019
f621db4
Added example to improve get_emaxs_subs...
tobiasraabe Mar 11, 2019
237d544
Small modification.
tobiasraabe Mar 11, 2019
1ea710f
Changed test_integration::test_12 to find possible errors.
tobiasraabe Mar 11, 2019
118d546
Hopefully fixed interpolation.
tobiasraabe Mar 11, 2019
2691039
Merge branch 'extract-covariates' into numpyrize
tobiasraabe Mar 11, 2019
aa87add
Only one random test of KW replications will run.
tobiasraabe Mar 11, 2019
a1f418a
Fixed missing np.exp for shifts in pyth_back.
tobiasraabe Mar 11, 2019
5fee022
Deleted debugging statement.
tobiasraabe Mar 12, 2019
ad8b9c9
Improved documentation, added tests.
tobiasraabe Mar 12, 2019
68e2b24
Even more documentation.
tobiasraabe Mar 12, 2019
c3cb971
More.
tobiasraabe Mar 12, 2019
6138f86
Fixed test_6 in test_f2py.py.
tobiasraabe Mar 12, 2019
cd26115
Fixed test_f2py::test_6.
tobiasraabe Mar 14, 2019
8686578
Reworked construct_emax_risk as independent gufunc.
tobiasraabe Mar 15, 2019
780c58c
Deleted unused arguments and unpacked often used ones.
tobiasraabe Mar 17, 2019
57bd709
fixed tests.
tobiasraabe Mar 18, 2019
09185a8
Fixed test again.
tobiasraabe Mar 18, 2019
2264e38
Add exclusive condition in pyth_create_state_space. FIRST FORTRAN COM…
tobiasraabe Mar 28, 2019
6e58ad3
Merged conftests. Use pytests own tmpdir. Fixed pr testing.
tobiasraabe Mar 30, 2019
b9efe5f
Now working with absolute path.
tobiasraabe Mar 30, 2019
26f7fd4
Fixed testing_pull_request.py.
tobiasraabe Mar 31, 2019
06e0edb
Merge branch 'new-python' into numpyrize
tobiasraabe Mar 31, 2019
a8c479c
Merge branch 'new-python' into numpyrize
tobiasraabe Apr 2, 2019
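
The commits above repeatedly mention "numpyrizing" and jitting parts of the backward induction (e.g. "Jitted pyth_create_state_space", "Reworked construct_emax_risk as independent gufunc"). The following is a minimal sketch of that pattern, not respy's actual code: a per-state loop that computes the expected maximum value (emax) over Monte Carlo shock draws is expressed as a Numba gufunc that broadcasts over all states at once. All names, shapes, and signatures here are illustrative assumptions.

import numpy as np
from numba import guvectorize


@guvectorize(
    ["void(float64[:], float64[:, :], float64[:])"],
    "(n_choices),(n_draws,n_choices)->()",
    nopython=True,
)
def construct_emax(rewards, draws, emax):
    # Expected maximum value for one state, averaged over shock draws. The
    # gufunc layout lets Numba broadcast this kernel over a leading state
    # dimension, replacing an explicit Python loop over states.
    n_draws = draws.shape[0]
    n_choices = rewards.shape[0]
    total = 0.0
    for i in range(n_draws):
        best = rewards[0] + draws[i, 0]
        for j in range(1, n_choices):
            value = rewards[j] + draws[i, j]
            if value > best:
                best = value
        total += best
    emax[0] = total / n_draws


# Hypothetical usage: one row of systematic rewards per state and one common
# set of shock draws; the result contains one emax value per state.
rewards = np.random.rand(1000, 4)      # (n_states, n_choices)
draws = np.random.rand(500, 4)         # (n_draws, n_choices)
emax = construct_emax(rewards, draws)  # shape: (n_states,)
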
62 changes: 42 additions & 20 deletions .appveyor.yml
@@ -12,21 +12,23 @@ notifications:
# Do not start two builds, one for the branch and one for the PR.
skip_branch_with_pr: true

image: Visual Studio 2015

environment:
MINICONDA: "C:\\Miniconda36-x64"
matrix:
- TRAVIS_CI_FORTRAN: false
TRAVIS_CI_MPI: false
TRAVIS_CI_OMP: false
- TRAVIS_CI_FORTRAN: true
TRAVIS_CI_MPI: true
TRAVIS_CI_OMP: true
- TRAVIS_CI_FORTRAN: true
TRAVIS_CI_MPI: false
TRAVIS_CI_OMP: true
- TRAVIS_CI_FORTRAN: true
TRAVIS_CI_MPI: true
TRAVIS_CI_OMP: false
# - CI_FORTRAN: false
# CI_MPI: false
# CI_OMP: false
- CI_FORTRAN: true
CI_MPI: true
CI_OMP: true
# - CI_FORTRAN: true
# CI_MPI: false
# CI_OMP: true
# - CI_FORTRAN: true
# CI_MPI: true
# CI_OMP: false

install:
# If there is a newer build queued for the same PR, cancel this one. The AppVeyor
@@ -37,25 +39,45 @@ install:
https://ci.appveyor.com/api/projects/$env:APPVEYOR_ACCOUNT_NAME/$env:APPVEYOR_PROJECT_SLUG/history?recordsNumber=50).builds | `
Where-Object pullRequestId -eq $env:APPVEYOR_PULL_REQUEST_NUMBER)[0].buildNumber) { `
throw "There are newer queued builds for this pull request, failing early." }

# Add miniconda to PATH
- ps: $env:PATH = "$env:MINICONDA;$env:MINICONDA\Scripts;$env:MINICONDA\bin;$env:PATH"
# Add MPI to path
- set PATH=C:\Program Files\Microsoft MPI\Bin;%PATH%
# Set conda to always yes and print info

# Add Fortran to PATH and install dependencies.
- ps: >-
if ($env:CI_FORTRAN -eq $true) {
$env:PATH += ";C:\msys64\mingw64\bin;C:\msys64\usr\bin"
pacman -S mingw64/mingw-w64-x86_64-liblas mingw64/mingw-w64-x86_64-lapack --noconfirm
echo "[build]`ncompiler=mingw32" > C:\Miniconda36-x64\Lib\distutils\distutils.cfg
}

# Install MS-MPI
- ps: >-
if ($env:CI_MPI -eq $true) {
Start-FileDownload https://github.com/Microsoft/Microsoft-MPI/releases/download/v10.0/msmpisdk.msi -FileName msmpisdk.msi
msiexec msmpisdk.msi /quiet /norestart
$env:PATH += ";C:\Program Files\Microsoft MPI\Bin"
}

# Configure conda and install packages
- conda config --set always_yes yes --set changeps1 no
- conda info -a
- conda install numpy scipy=0.19 pandas=0.20 statsmodels=0.8 pytest pytest-cov
- pip install codecov
- conda env create -n respy -f environment.yml
- activate respy

# Install respy
- pip install -e . -vv

# Run tests
- pytest --cov=respy -vvv -s

build: false

on_success:
- sh: |
if [ $TRAVIS_CI_FORTRAN ] || \
[ $TRAVIS_CI_MPI ] || \
[ $TRAVIS_CI_OMP ] ||
if [ $CI_FORTRAN ] || \
[ $CI_MPI ] || \
[ $CI_OMP ] ||
then
codecov
echo 'send coverage statistic'
45 changes: 29 additions & 16 deletions .travis.yml
@@ -16,32 +16,45 @@ addons:

matrix:
include:
- env: TRAVIS_CI_FORTRAN=False TRAVIS_CI_MPI=False TRAVIS_CI_OMP=False
- env: TRAVIS_CI_FORTRAN=True TRAVIS_CI_MPI=True TRAVIS_CI_OMP=True
- env: TRAVIS_CI_FORTRAN=True TRAVIS_CI_MPI=False TRAVIS_CI_OMP=True
- env: TRAVIS_CI_FORTRAN=True TRAVIS_CI_MPI=True TRAVIS_CI_OMP=False
- env: CI_FORTRAN=False CI_MPI=False CI_OMP=False
- env: CI_FORTRAN=True CI_MPI=True CI_OMP=True
- env: CI_FORTRAN=True CI_MPI=False CI_OMP=True
- env: CI_FORTRAN=True CI_MPI=True CI_OMP=False

install:
- sudo apt-get install -y -q mpich libmpich-dev
- travis_retry travis_wait pip install -r requirements.txt

# Install, configure conda and install packages
- wget https://repo.continuum.io/miniconda/Miniconda3-4.5.11-Linux-x86_64.sh -O miniconda.sh
- bash miniconda.sh -b -p /home/travis/miniconda
- export PATH=/home/travis/miniconda/bin:$PATH
- conda config --set always_yes yes --set changeps1 no
- conda info -a

- conda env create -n respy -f environment.yml
- source activate respy

# Install respy
- pip install -e . -vv
- pip install pytest-cov
- pip install codecov

script:
- travis_wait 45 pytest --cov=respy -vvv -s

after_success:
- if [ $TRAVIS_CI_FORTRAN ] || \
[ $TRAVIS_CI_MPI ] || \
[ $TRAVIS_CI_OMP ] ||
then
codecov
echo 'send coverage statistic'
fi
- if [ $CI_FORTRAN ] || \
[ $CI_MPI ] || \
[ $CI_OMP ] ||
then
codecov
echo 'send coverage statistic'
fi

notifications:
slack: policy-lab:LkWqVb15dNvdLjMQOyacTXy6
on_success: change

slack:
rooms:
- oseconomics:NwY0nlxNsQh1WTEs7Y1acukS
on_success: change # default: always
on_failure: always # default: always

email: false
Empty file added development/__init__.py
Empty file.
Empty file.
96 changes: 52 additions & 44 deletions development/documentation/forensics/upgraded/run.py
@@ -5,50 +5,58 @@

import shutil
import os
from pathlib import Path

# Compiler options. Note that the original codes are not robust enough to
# execute in debug mode.
DEBUG_OPTIONS = (
" -O2 -Wall -Wline-truncation -Wcharacter-truncation "
" -Wsurprising -Waliasing -Wimplicit-interface -Wunused-parameter "
" -fwhole-file -fcheck=all -fbacktrace -g -fmax-errors=1 "
" -ffpe-trap=invalid,zero"
)

PRODUCTION_OPTIONS = " -O3"

# I rely on production options for this script, as I also run the estimation
# below.
OPTIONS = PRODUCTION_OPTIONS

# Some strings that show up repeatedly in compiler command.
MODULES = "imsl_replacements.f90"
LAPACK = "-L/usr/lib/lapack -llapack"

# Copy required initialization files from the original codes.
for fname in ["seed.txt", "in1.txt"]:
shutil.copy("../original/" + fname, ".")

# Compiling and calling executable for simulation.
cmd = " gfortran " + OPTIONS + " -o dp3asim " + MODULES + " dp3asim.f90 " + LAPACK
os.system(cmd + "; ./dp3asim")

# Compiling and calling executable for assessment of interpolation.
for fname in ["dpsim1", "dpsim4d"]:
cmd = (
" gfortran "
+ OPTIONS
+ " -o "
+ fname
+ " "
+ MODULES
+ " "
+ fname
+ ".f90 "
+ LAPACK

def main():
# Compiler options. Note that the original codes are not robust enough to
# execute in debug mode.
DEBUG_OPTIONS = (
" -O2 -Wall -Wline-truncation -Wcharacter-truncation "
" -Wsurprising -Waliasing -Wimplicit-interface -Wunused-parameter "
" -fwhole-file -fcheck=all -fbacktrace -g -fmax-errors=1 "
" -ffpe-trap=invalid,zero"
)
os.system(cmd + "; ./" + fname)

# Compiling and calling executable for estimation.
cmd = " gfortran " + OPTIONS + " -o dpml4a " + MODULES + " dpml4a.f90 " + LAPACK
os.system(cmd + "; ./dpml4a")
PRODUCTION_OPTIONS = " -O3"

# I rely on production options for this script, as I also run the estimation
# below.
OPTIONS = PRODUCTION_OPTIONS

# Some strings that show up repeatedly in compiler command.
MODULES = "imsl_replacements.f90"
LAPACK = "-L/usr/lib/lapack -llapack"

# Copy required initialization files from the original codes.
path = Path("..", "original")
for fname in ["seed.txt", "in1.txt"]:
shutil.copy(str(path / fname), ".")

# Compiling and calling executable for simulation.
cmd = " gfortran " + OPTIONS + " -o dp3asim " + MODULES + " dp3asim.f90 " + LAPACK
os.system(cmd + "; ./dp3asim")

# Compiling and calling executable for assessment of interpolation.
for fname in ["dpsim1", "dpsim4d"]:
cmd = (
" gfortran "
+ OPTIONS
+ " -o "
+ fname
+ " "
+ MODULES
+ " "
+ fname
+ ".f90 "
+ LAPACK
)
os.system(cmd + "; ./" + fname)

# Compiling and calling executable for estimation.
cmd = " gfortran " + OPTIONS + " -o dpml4a " + MODULES + " dpml4a.f90 " + LAPACK
os.system(cmd + "; ./dpml4a")


if __name__ == '__main__':
main()
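
The refactored run.py above still assembles gfortran commands by string concatenation and invokes them via os.system. Purely as a side sketch, not part of this diff, the same compile-and-run step could be written with subprocess.run; the file names and flags are reused from the script above, everything else is assumed.

import subprocess

MODULES = "imsl_replacements.f90"
LAPACK = ["-L/usr/lib/lapack", "-llapack"]


def compile_and_run(name):
    # Compile one of the original Fortran programs with production options
    # and run the resulting executable, raising if either step fails.
    subprocess.run(
        ["gfortran", "-O3", "-o", name, MODULES, name + ".f90"] + LAPACK,
        check=True,
    )
    subprocess.run(["./" + name], check=True)


for fname in ["dp3asim", "dpsim1", "dpsim4d", "dpml4a"]:
    compile_and_run(fname)
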
Empty file.
23 changes: 13 additions & 10 deletions development/documentation/reliability/run_reliability.py
@@ -1,9 +1,8 @@
#!/usr/bin/env python
from auxiliary_shared import process_command_line_arguments
from auxiliary_reliability import run
from development.modules.auxiliary_shared import process_command_line_arguments
from development.modules.auxiliary_reliability import run

if __name__ == "__main__":

def main():
is_debug = process_command_line_arguments(
"Run reliability exercise for the package"
)
@@ -12,8 +11,8 @@
spec_dict = dict()
spec_dict["fnames"] = ["reliability_short.ini"]

# The following key-value pairs are the requested updates from the baseline initialization
# file.
# The following key-value pairs are the requested updates from the baseline
# initialization file.
spec_dict["update"] = dict()

spec_dict["update"]["is_store"] = True
@@ -24,10 +23,10 @@
spec_dict["update"]["maxfun"] = 1500
spec_dict["update"]["level"] = 0.05

# The following key-value pair sets the number of processors for each of the estimations.
# This is required as the maximum number of useful cores varies drastically depending on the
# model. The requested number of processors is never larger than the one specified as part of
# the update dictionary.
# The following key-value pair sets the number of processors for each of the
# estimations. This is required as the maximum number of useful cores varies
# drastically depending on the model. The requested number of processors is never
# larger than the one specified as part of the update dictionary.
spec_dict["procs"] = dict()
spec_dict["procs"]["ambiguity"] = 1
spec_dict["procs"]["static"] = 1
@@ -42,3 +41,7 @@
spec_dict["update"]["maxfun"] = 0

run(spec_dict)


if __name__ == '__main__':
main()
5 changes: 2 additions & 3 deletions development/documentation/scalability/run_scalability.py
@@ -1,6 +1,5 @@
#!/usr/bin/env python
from auxiliary_shared import process_command_line_arguments
from auxiliary_scalability import run
from development.modules.auxiliary_shared import process_command_line_arguments
from development.modules.auxiliary_scalability import run

if __name__ == "__main__":

Empty file added development/modules/__init__.py
Empty file.
31 changes: 16 additions & 15 deletions development/modules/auxiliary_regression.py
@@ -8,7 +8,7 @@
from respy.pre_processing.model_processing import write_init_file
from respy.python.shared.shared_constants import IS_FORTRAN
from respy.python.shared.shared_constants import TOL
from auxiliary_shared import get_random_dirname
from development.modules.auxiliary_shared import get_random_dirname
from respy.tests.codes.auxiliary import simulate_observed
from respy.tests.codes.random_init import generate_init

@@ -27,9 +27,9 @@ def create_single(idx):
os.mkdir(dirname)
os.chdir(dirname)

# The late import is required so a potentially just compiled FORTRAN implementation is
# recognized. This is important for the creation of the regression vault as we want to
# include FORTRAN use cases.
# The late import is required so a potentially just compiled FORTRAN implementation
# is recognized. This is important for the creation of the regression vault as we
# want to include FORTRAN use cases.
from respy import RespyCls

# We impose a couple of constraints that make the requests manageable.
@@ -45,8 +45,8 @@

# In rare instances, the value of the criterion function might be too large and thus
# printed as a string. This occurred in the past, when the gradient preconditioning
# had zero probability observations. We now generate random initialization files with
# smaller gradient step sizes.
# had zero probability observations. We now generate random initialization files
# with smaller gradient step sizes.
if not isinstance(crit_val, float):
raise AssertionError(" ... value of criterion function too large.")

@@ -63,8 +63,8 @@ def check_single(tests, idx):
# Distribute test information.
init_dict, crit_val = tests[idx]

# TODO: These are temporary modifications that ensure compatibility over time and will be
# removed once we update the regression test battery.
# TODO: These are temporary modifications that ensure compatibility over time and
# will be removed once we update the regression test battery.
init_dict["EDUCATION"]["lagged"] = []
for edu_start in init_dict["EDUCATION"]["start"]:
if edu_start >= 10:
@@ -89,8 +89,8 @@ def check_single(tests, idx):
print(msg)
return None

# In the past we also had the problem that some of the testing machines report selective
# failures when the regression vault was created on another machine.
# In the past we also had the problem that some of the testing machines report
# selective failures when the regression vault was created on another machine.
msg = " ... test is known to fail on this machine"
if "zeus" in socket.gethostname() and idx in []:
print(msg)
@@ -102,15 +102,16 @@ def check_single(tests, idx):
print(msg)
return None

# We need to create an temporary directory, so the multiprocessing does not interfere with
# any of the files that are printed and used during the small estimation request.
# We need to create an temporary directory, so the multiprocessing does not
# interfere with any of the files that are printed and used during the small
# estimation request.
dirname = get_random_dirname(5)
os.mkdir(dirname)
os.chdir(dirname)

# The late import is required so a potentially just compiled FORTRAN implementation is
# recognized. This is important for the creation of the regression vault as we want to
# include FORTRAN use cases.
# The late import is required so a potentially just compiled FORTRAN implementation
# is recognized. This is important for the creation of the regression vault as we
# want to include FORTRAN use cases.
from respy import RespyCls

write_init_file(init_dict)