The accuracy seems to be around ±2% at 5000 runs; it would be valuable to see how that accuracy changes for smaller and larger numbers of runs.
import copy
from simulator import Simulator
import simulator
from models.model import Model
import numpy as np
TRIALS = 30
RUN_QTYS = [100,200,500,1000,2000,3000,4000,5000,7000,10000]
simulator.DEBUG_LVL = 0
model = Model()
full_params = copy.deepcopy(model.params)
for runs in RUN_QTYS:
    param_vals = {key: obj["val"] for (key, obj) in full_params.items()}
    override_dict = {'monte_carlo_runs': runs}
    new_simulator = Simulator(param_vals, override_dict)
    rates = []
    for _ in range(TRIALS):
        success_rate, _ = new_simulator.main()
        rates.append(success_rate)
    print(f'Runs: {runs} | Range: {np.ptp(rates)*100:.2f}%')
Runs: 100 | Range: 5.00%
Runs: 200 | Range: 5.00%
Runs: 500 | Range: 3.80%
Runs: 1000 | Range: 2.00%
Runs: 2000 | Range: 1.55%
Runs: 3000 | Range: 1.20%
Runs: 4000 | Range: 1.45%
Runs: 5000 | Range: 0.94%
Runs: 7000 | Range: 0.84%
Runs: 10000 | Range: 0.89%
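The ranges above shrink roughly like 1/sqrt(runs), which is what Monte Carlo theory predicts for estimating a proportion. Here is a minimal, self-contained sketch of that scaling using a plain binomial draw with an assumed true success rate of 0.8 (this does not use the project's Simulator; p, the run counts, and the seed are illustrative choices):

```python
import numpy as np

# Hypothetical sketch, independent of the project's Simulator: estimate a
# fixed success probability by Monte Carlo and measure the trial-to-trial
# range, mirroring the experiment above. The standard error of a binomial
# proportion is sqrt(p*(1-p)/n), so the spread should shrink ~1/sqrt(n).
rng = np.random.default_rng(0)
p = 0.8          # assumed true success rate (illustrative value)
TRIALS = 30

observed = {}
for runs in [100, 1000, 10000]:
    # each trial: fraction of successes over `runs` simulated outcomes
    rates = [rng.binomial(runs, p) / runs for _ in range(TRIALS)]
    observed[runs] = np.ptp(rates)
    std_err = np.sqrt(p * (1 - p) / runs)  # one-sigma standard error
    print(f'Runs: {runs:5d} | Range: {observed[runs]*100:.2f}% '
          f'| 1-sigma std err: {std_err*100:.2f}%')
```

Each 100x increase in runs should cut the spread by about 10x, consistent with the measured results above (5.00% at 100 runs vs roughly 0.9% at 5000-10000 runs).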