Official PyTorch implementation of the "Dynamic-Net: Tuning the Objective Without Re-training for Synthesis Tasks" experiments

Dynamic-Net: Tuning the Objective Without Re-training for Synthesis Tasks [paper] [project page]

Alon Shoshan, Roey Mechrez, Lihi Zelnik-Manor
Technion - Israel Institute of Technology

One of the key ingredients for successful optimization of modern CNNs is identifying a suitable objective. To date, the objective is fixed a-priori at training time, and any variation to it requires re-training a new network. In this paper we present a first attempt at alleviating the need for re-training. Rather than fixing the network at training time, we train a "Dynamic-Net" that can be modified at inference time. Our approach considers an "objective-space" as the space of all linear combinations of two objectives, and the Dynamic-Net emulates traversal of this objective-space at test-time, without any further training. We show that this upgrades pre-trained networks by providing an out-of-learning extension, while maintaining the performance quality. The solution we propose is fast and allows a user to interactively modify the network, in real-time, in order to obtain the result they desire. We show the benefits of such an approach via several different applications.
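The core idea of interpolating between two objectives at inference time can be sketched as a residual tuning branch scaled by a user-controlled scalar. This is a minimal illustrative sketch, not the repository's actual architecture: the class name `DynamicBlock`, the single-convolution branches, and the `alpha` argument are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class DynamicBlock(nn.Module):
    """Hypothetical sketch of a Dynamic-Net building block.

    The main branch is trained for one objective; the tuning branch is
    trained afterwards for a second objective. At test-time, a scalar
    `alpha` interpolates along the objective-space with no re-training.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.main = nn.Conv2d(channels, channels, 3, padding=1)
        self.tuning = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor, alpha: float = 0.0) -> torch.Tensor:
        # alpha = 0 reproduces the original network; increasing alpha
        # blends in the contribution trained for the second objective.
        return self.main(x) + alpha * self.tuning(x)
```

At inference, a user would slide `alpha` between 0 and 1 (e.g., to change the stylization level) and re-run only the forward pass.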


Dynamic Style Transfer [Code & Setup]

Control over Stylization level

Interpolation between two Styles

Dynamic DCGAN: Controlled Image Generation [Code & Setup]

The proposed method allows us to generate faces with control over facial attributes, e.g., gender or hair color.

Image Completion

Dynamic-Net allows the user to select the best working point for each image, improving the results of networks that were trained with sub-optimal objectives.


Code for every application is written as a separate project:

Dynamic style transfer demo


If you use our code for research, please cite our paper:

@article{shoshan2018dynamic,
  title={Dynamic-Net: Tuning the Objective Without Re-training},
  author={Shoshan, Alon and Mechrez, Roey and Zelnik-Manor, Lihi},
  journal={arXiv preprint arXiv:1811.08760},
  year={2018}
}


Code for the style transfer network implementation borrows from [1][2].
Code for the DCGAN network implementation borrows from [1].
