Merge pull request #138 from kiudee/133_export_optimizer
Save optimizer object to disk in tuning server
kiudee committed Jun 26, 2021
2 parents 817afd0 + 3ed8f8c commit 8135164
Showing 2 changed files with 7 additions and 1 deletion.
4 changes: 3 additions & 1 deletion HISTORY.rst
@@ -2,7 +2,7 @@
 History
 =======
 
-0.7.3 (2021-06-26)
+0.7.3 (2021-06-27)
 ------------------
 * Add ``--fast-resume`` switch to the tuner, which allows instant resume
   functionality from disk (new default).
@@ -11,6 +11,8 @@ History
 * Add ``--skip-benchmark`` flag to distributed tuning client. If True, it will
   skip the calibration of the time control, which involves running a benchmark
   for both engines.
+* Tuning server of the distributed tuning framework will now also save the
+  optimizer object.
 * Fix the match parser producing incorrect results, when concurrency > 1 is
   used for playing matches.
 * Fix the server for distributed tuning trying to compute the current optimum
4 changes: 4 additions & 0 deletions tune/db_workers/tuning_server.py
@@ -7,6 +7,8 @@
 from datetime import datetime
 from time import sleep
 
+import dill
+
 try:
     import joblib
 except ImportError:
@@ -102,6 +104,8 @@ def save_state(self):
         np.savez_compressed(
             path, np.array(self.opt.gp.pos_), np.array(self.opt.gp.chain_)
         )
+        with open("model.pkl", mode="wb") as file:
+            dill.dump(self.opt, file)
 
     def resume_tuning(self):
         path = os.path.join(
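The new lines pickle the whole optimizer object to ``model.pkl`` with ``dill``, which exposes the same ``dump``/``load`` interface as the standard library's ``pickle`` but can also serialize objects (lambdas, closures, locally defined classes) that plain ``pickle`` rejects. A minimal sketch of the save/restore round trip the server performs, shown here with stdlib ``pickle`` and a stand-in state dict (the real code serializes ``self.opt``, the optimizer instance):

```python
import pickle  # the commit uses dill, which shares this dump/load API

# Stand-in for the optimizer object the tuning server serializes.
opt = {"iteration": 42, "best_score": -0.17}

# Save the state to disk, mirroring the dill.dump call in save_state().
with open("model.pkl", mode="wb") as file:
    pickle.dump(opt, file)

# Later (e.g. after a server restart), load the object back.
with open("model.pkl", mode="rb") as file:
    restored = pickle.load(file)

assert restored == opt
```

Serializing the optimizer alongside the existing ``np.savez_compressed`` dump of the GP chains means a resumed server can recover the full optimizer state rather than only the sampler arrays.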
