|
11 | 11 | "cell_type": "markdown",
|
12 | 12 | "metadata": {},
|
13 | 13 | "source": [
|
14 |
| - "Current users of bayesflow will notice that with the update to 2.0 many things have changed or been moved. This short guide aims to clarify what has been changed as well as additonal functionalities that have been added. This guide follows a similar structure to the Quickstart notebook, without the mathematical explaination in order to demonstrate the differences in workflow. However, for a more detailed explaination of any of the features, users should read any of the other example notebooks. " |
| 14 | + "Current users of bayesflow will notice that with the update to 2.0 many things have changed or been moved. This short guide aims to clarify what has changed, as well as the additional functionality that has been added. It follows a similar structure to the Quickstart notebook, without the mathematical explanation, in order to demonstrate the differences in workflow. For a more detailed explanation of any of the features, please read the other example notebooks. To avoid confusion, similarly named objects from _bayesflow1.0_ will have 1.0 after their name where necessary, whereas those from _bayesflow2.0_ will not. Finally, a short table summarizing the changes to function calls is provided at the end of the guide. " |
15 | 15 | ]
|
16 | 16 | },
|
17 | 17 | {
|
|
20 | 20 | "source": [
|
21 | 21 | "## Major Changes \n",
|
22 | 22 | "\n",
|
23 |
| - "One of the major changes from _bayesflow 1.0_ to _bayeflow 2.0_ is that entire package has been reformatted in line with keras standards. This was done to allow users to choose their prefered backend for machine learning models. Rather than only being compatible with tensorflow, users can now choose to fit their models with either `TensorFlow`, `JAX` or `Pytorch`. \n" |
| 23 | + "One of the major changes from _bayesflow1.0_ to _bayesflow2.0_ is that the entire package has been reformatted in line with Keras standards. This was done to allow users to choose their preferred backend for machine learning models. Rather than only being compatible with TensorFlow, users can now fit their models with `TensorFlow`, `JAX` or `PyTorch`. \n" |
24 | 24 | ]
|
25 | 25 | },
|
26 | 26 | {
|
|
45 | 45 | "cell_type": "markdown",
|
46 | 46 | "metadata": {},
|
47 | 47 | "source": [
|
48 |
| - "This version of bayeflow also relies much more heavily on dictionaries. Nearly every object, and function will expect a dictionary, so any parameter or data should be returned as a dictionary. " |
| 48 | + "This version of bayesflow also relies much more heavily on dictionaries, because parameters are now referred to by name. Nearly every object and function expects a dictionary, so any parameters or data should be returned as a dictionary. " |
49 | 49 | ]
|
50 | 50 | },
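As a minimal sketch of this dictionary convention (the parameter name `theta` and the prior's distribution here are purely illustrative, not part of the guide), a prior function now returns its draws keyed by name rather than as a bare array:

```python
import numpy as np

# Hypothetical prior: returns a *named* dictionary, not a bare array.
def prior():
    return dict(theta=np.random.normal(loc=0.0, scale=1.0, size=2))

sample = prior()
print(sorted(sample.keys()))   # ['theta']
print(sample["theta"].shape)   # (2,)
```

Downstream objects can then refer to the parameters by the key `"theta"` instead of by positional index.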
|
51 | 51 | {
|
|
56 | 56 | "\n",
|
57 | 57 | "### 1. Priors and Likelihood Model\n",
|
58 | 58 | "\n",
|
59 |
| - "Previously users would define a prior function, which would then be used by a `Prior` object to sample prior values. The likelihood would then also be specified via function and used by a `Simulator` wrapper to produce observations for a given prior. These were then combined in the `GenerativeModel`, however this has been changed, we no longer use the `Prior`, `Simulator` or `GenerativeModel` objects. Instead `GenerativeModel` has been renamed to `simulator` which is a single function that glues the prior and likelihood together. " |
| 59 | + "Previously, users would define a prior function, which would then be used by a `Prior1.0` object to sample prior values. The likelihood would likewise be specified as a function and wrapped by a `Simulator1.0` to produce observations for a given prior draw. These were then combined in the `GenerativeModel1.0`. This has changed: we no longer use the `Prior1.0`, `Simulator1.0` or `GenerativeModel1.0` objects. Instead, the role of the `GenerativeModel1.0` is taken over by a `simulator` from the simulation module, a single function that glues the prior and likelihood together. " |
60 | 60 | ]
|
61 | 61 | },
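To illustrate how the prior and likelihood fit together under the new convention (a sketch with hypothetical toy functions; in bayesflow 2.0 the actual composition is handled for you by the simulation module, e.g. `simulations.make_simulator`), each piece is a plain function returning a dictionary, and a combined draw is just the merged dictionary:

```python
import numpy as np

# Hypothetical toy model; both pieces return dictionaries keyed by name.
def prior():
    return dict(theta=np.random.normal(0.0, 1.0, size=2))

def likelihood(theta):
    # 10 observations conditioned on the sampled parameters
    return dict(x=np.random.normal(loc=theta, scale=1.0, size=(10, 2)))

# Emulate one combined draw by hand to show the merged dictionary
# that a simulator would hand to the rest of the pipeline.
draw = prior()
draw.update(likelihood(**draw))
print(sorted(draw))  # ['theta', 'x']
```

The merged dictionary keeps parameters and observations side by side under their names, which is the shape of data the rest of the 2.0 workflow expects.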
|
62 | 62 | {
|
|
155 | 155 | "### 3. Summary Network and Inference Network"
|
156 | 156 | ]
|
157 | 157 | },
|
| 158 | + { |
| 159 | + "cell_type": "markdown", |
| 160 | + "metadata": {}, |
| 161 | + "source": [ |
| 162 | + "As in _bayesflow1.0_ we still use a summary network, which here is still a `DeepSet` model. Nothing has changed in this step of the workflow. " |
| 163 | + ] |
| 164 | + }, |
| 165 | + { |
| 166 | + "cell_type": "code", |
| 167 | + "execution_count": null, |
| 168 | + "metadata": {}, |
| 169 | + "outputs": [], |
| 170 | + "source": [ |
| 171 | + "summary_net = bf.networks.DeepSet(depth=2, summary_dim=10)" |
| 172 | + ] |
| 173 | + }, |
| 174 | + { |
| 175 | + "cell_type": "markdown", |
| 176 | + "metadata": {}, |
| 177 | + "source": [ |
| 178 | + "For the inference network there are now several implemented architectures for users to choose from: `FlowMatching`, `ConsistencyModel`, `ContinuousConsistencyModel` and `CouplingFlow`. For this demonstration we use `FlowMatching`; for further explanation of the different models, please see the other examples and the documentation. " |
| 179 | + ] |
| 180 | + }, |
| 181 | + { |
| 182 | + "cell_type": "code", |
| 183 | + "execution_count": null, |
| 184 | + "metadata": {}, |
| 185 | + "outputs": [], |
| 186 | + "source": [ |
| 187 | + "inference_net = bf.networks.FlowMatching()" |
| 188 | + ] |
| 189 | + }, |
158 | 190 | {
|
159 | 191 | "cell_type": "markdown",
|
160 | 192 | "metadata": {},
|
161 | 193 | "source": [
|
162 | 194 | "### 4. Approximator (Amortizer Posterior)"
|
163 | 195 | ]
|
| 196 | + }, |
| 197 | + { |
| 198 | + "cell_type": "markdown", |
| 199 | + "metadata": {}, |
| 200 | + "source": [ |
| 201 | + "Previously, the actual training and amortization were done in two steps with two different objects, the `Amortizer1.0` and the `Trainer1.0`. First, users would create an amortizer containing the summary and inference networks." |
| 202 | + ] |
| 203 | + }, |
| 204 | + { |
| 205 | + "cell_type": "code", |
| 206 | + "execution_count": null, |
| 207 | + "metadata": {}, |
| 208 | + "outputs": [], |
| 209 | + "source": [ |
| 210 | + "### Do Not Run \n", |
| 211 | + "\n", |
| 212 | + "# Renamed to Approximator\n", |
| 213 | + "amortizer = bf.amortizers.AmortizedPosterior(inference_net, summary_net)\n", |
| 214 | + "\n", |
| 215 | + "# Defunct\n", |
| 216 | + "trainer = bf.trainers.Trainer(amortizer=amortizer, generative_model=gen_model)" |
| 217 | + ] |
| 218 | + }, |
| 219 | + { |
| 220 | + "cell_type": "markdown", |
| 221 | + "metadata": {}, |
| 222 | + "source": [ |
| 223 | + "This has been renamed to an `Approximator`, which takes the summary network, the inference network and the data adapter as arguments. " |
| 224 | + ] |
| 225 | + }, |
| 226 | + { |
| 227 | + "cell_type": "code", |
| 228 | + "execution_count": null, |
| 229 | + "metadata": {}, |
| 230 | + "outputs": [], |
| 231 | + "source": [ |
| 232 | + "approximator = bf.approximators.ContinuousApproximator(\n", |
| 233 | + " summary_network=summary_net,\n", |
| 234 | + " inference_network=inference_net,\n", |
| 235 | + " adapter=adapter\n", |
| 236 | + ")" |
| 237 | + ] |
| 238 | + }, |
| 239 | + { |
| 240 | + "cell_type": "markdown", |
| 241 | + "metadata": {}, |
| 242 | + "source": [ |
| 243 | + "Whereas previously a `Trainer1.0` object handled training, users now call `fit` on the `approximator` directly. For additional flexibility in training, the `approximator` also accepts two additional arguments, the learning rate and the optimizer; the optimizer can be any Keras optimizer." |
| 244 | + ] |
| 245 | + }, |
| 246 | + { |
| 247 | + "cell_type": "code", |
| 248 | + "execution_count": null, |
| 249 | + "metadata": {}, |
| 250 | + "outputs": [], |
| 251 | + "source": [ |
| 252 | + "learning_rate = 1e-4\n", |
| 253 | + "optimizer = keras.optimizers.AdamW(learning_rate=learning_rate, clipnorm=1.0)" |
| 254 | + ] |
| 255 | + }, |
| 256 | + { |
| 257 | + "cell_type": "markdown", |
| 258 | + "metadata": {}, |
| 259 | + "source": [ |
| 260 | + "Users must then compile the `approximator` in order to attach the chosen optimizer before training. " |
| 261 | + ] |
| 262 | + }, |
| 263 | + { |
| 264 | + "cell_type": "code", |
| 265 | + "execution_count": null, |
| 266 | + "metadata": {}, |
| 267 | + "outputs": [], |
| 268 | + "source": [ |
| 269 | + "approximator.compile(optimizer=optimizer)" |
| 270 | + ] |
| 271 | + }, |
| 272 | + { |
| 273 | + "cell_type": "markdown", |
| 274 | + "metadata": {}, |
| 275 | + "source": [ |
| 276 | + "\n",
| 277 | + "To train the network and save the training output, users now need only call `fit` on the `approximator`. " |
| 278 | + ] |
| 279 | + }, |
| 280 | + { |
| 281 | + "cell_type": "code", |
| 282 | + "execution_count": null, |
| 283 | + "metadata": {}, |
| 284 | + "outputs": [], |
| 285 | + "source": [ |
| 286 | + "history = approximator.fit(\n", |
| 287 | + " epochs=50,\n", |
| 288 | + " num_batches=200,\n", |
| 289 | + " batch_size=64,\n", |
| 290 | + " simulator=simulator\n", |
| 291 | + ")" |
| 292 | + ] |
| 293 | + }, |
| 306 | + { |
| 307 | + "cell_type": "markdown", |
| 308 | + "metadata": {}, |
| 309 | + "source": [ |
| 310 | + "# Summary Change Table \n", |
| 311 | + "\n", |
| 312 | + "| 1.0 | 2.0 Usage |\n",
| 313 | + "| :--------| :---------| \n",
| 314 | + "| `Prior`, `Simulator` | Defunct; no longer standalone objects but incorporated into `simulator` | \n",
| 315 | + "| `GenerativeModel` | Defunct; its functionality has been taken over by `simulations.make_simulator` | \n",
| 316 | + "| `training.configurator` | Functionality taken over by `Adapter` | \n",
| 317 | + "| `Trainer` | Functionality taken over by the `fit` method of `Approximator` | \n",
| 318 | + "| `AmortizedPosterior` | Renamed to `Approximator` | "
| 319 | + ] |
164 | 327 | }
|
165 | 328 | ],
|
166 | 329 | "metadata": {
|
|