
Neural reparameterization improves structural optimization

Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus

Presented at the NeurIPS 2019 workshop on Solving inverse problems with deep networks, December 13th, 2019.

Key links:

  • Optimization example notebook


The easiest way

Run the "Optimization Examples" notebook in Colab, from your web browser!

The easy way

There appears to be a bug in the initial TensorFlow 2.0 release, which means the official TensorFlow pip package doesn't work. For now, you'll need to install the nightly TensorFlow build instead.

This package is experimental research code, so it isn't distributed on the Python package index. You'll need to clone the repository with git and install it in place. This will automatically install all required and optional dependencies listed below, with the exception of the optional Scikit-Sparse package.

Putting it all together:

# consider creating a new virtualenv or conda environment first
pip install tf-nightly
git clone https://github.com/google-research/neural-structural-optimization.git
pip install -e neural-structural-optimization

You should now be able to run import neural_structural_optimization and execute the example notebook.

The hard way

Install Python dependencies manually or with your favorite package manager.

Required dependencies for running anything at all:

  • Python 3.6+
  • Abseil Python
  • Autograd
  • dataclasses (if using Python <3.7)
  • NumPy
  • SciPy
  • Scikit-Image
  • TensorFlow 2.0
  • Xarray

Required dependencies for running the parallel training pipeline:

  • matplotlib
  • Pillow
  • Apache Beam

Optional dependencies:

  • Scikit-Sparse: speeds up physics simulation by about 2x
  • NLopt: required for the MMA optimizer
  • Seaborn: for the notebooks
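If you go the manual route, the PyPI package names below are a best guess (exact names can vary by platform, and Scikit-Sparse additionally requires the SuiteSparse system libraries to be installed first):

```shell
# Required packages (the dataclasses backport is only needed on Python 3.6)
pip install absl-py autograd numpy scipy scikit-image tf-nightly xarray

# Extra packages for the parallel training pipeline
pip install matplotlib pillow apache-beam

# Optional packages
pip install scikit-sparse nlopt seaborn
```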


For examples of typical usage, see the "Optimization Examples" notebook.

To reproduce the full results in the paper, run neural_structural_optimization/ after modifying it to launch jobs on your Apache Beam runner of choice (by default the code uses multi-processing). Note that the total runtime is about 27k CPU hours, but that includes 100 random seed replicates.
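Pointing the pipeline at a different Beam backend comes down to changing the pipeline options. A minimal configuration sketch, using option names from Beam's Python SDK; the `Create`/`Map` stages here are hypothetical stand-ins for the real pipeline logic, not the actual code:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# DirectRunner with process-based parallelism (the default behavior the text
# describes); swap runner="DataflowRunner" plus project/region options to
# launch on a managed Beam service instead.
options = PipelineOptions(
    runner="DirectRunner",
    direct_num_workers=4,
    direct_running_mode="multi_processing",
)

with beam.Pipeline(options=options) as p:
    (p
     | beam.Create(range(100))       # one element per random-seed replicate
     | beam.Map(lambda seed: seed))  # stand-in for a single optimization run
```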

You don't need to run our pipeline if you are simply interested in comparing to our results or running alternative analyses. You can download the raw data (188 MB) from Google Cloud Storage. See the "Analysis of optimization results" notebook for how we processed this data to create the figures and table in our paper.


Author = {Stephan Hoyer and Jascha Sohl-Dickstein and Sam Greydanus},
Title = {Neural reparameterization improves structural optimization},
Year = {2019},
Eprint = {arXiv:1909.04240},

