
Tensorflow 1 and 2 Step Savers, And Base Classes #5

Merged
merged 30 commits into from Jan 16, 2020

Conversation

@alexbrillant (Member) commented Dec 31, 2019

TensorFlow 1:

import numpy as np
import tensorflow as tf
from typing import Dict

from neuraxle.hyperparams.space import HyperparameterSamples
from neuraxle_tensorflow.tensorflow_v1 import BaseTensorflowV1ModelStep


class Tensorflow1Model(BaseTensorflowV1ModelStep):
    def __init__(self, variable_scope=None):
        if variable_scope is None:
            variable_scope = 'Tensorflow1Model'
        BaseTensorflowV1ModelStep.__init__(
            self,
            variable_scope=variable_scope,
            hyperparams=HyperparameterSamples({
                'learning_rate': 0.01
            })
        )

    def setup_graph(self) -> Dict:
        tf.placeholder('float', name='x')
        tf.placeholder('float', name='y')

        tf.Variable(np.random.rand(), name='weight')
        tf.Variable(np.random.rand(), name='bias')

        tf.add(tf.multiply(self['x'], self['weight']), self['bias'], name='pred')

        loss = tf.reduce_sum(tf.pow(self['pred'] - self['y'], 2)) / (2 * N_SAMPLES)
        optimizer = tf.train.GradientDescentOptimizer(self.hyperparams['learning_rate']).minimize(loss)

        return {
            'loss': loss,
            'optimizer': optimizer
        }

    def fit_model(self, data_inputs, expected_outputs=None) -> 'BaseStep':
        for (x, y) in zip(data_inputs, expected_outputs):
            self.session.run(self['optimizer'], feed_dict={self['x']: x, self['y']: y})

        self.is_invalidated = True

        return self

    def transform_model(self, data_inputs):
        return self.session.run(self['weight']) * data_inputs + self.session.run(self['bias'])
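For reference, the TF1 step above just fits y ≈ weight · x + bias by gradient descent on the squared-error loss. The same math, in a minimal pure-Python sketch (no TensorFlow; function names and the learning-rate/epoch defaults are illustrative, not part of this PR):

```python
import random

def fit_linear(xs, ys, learning_rate=0.1, epochs=2000):
    """Full-batch gradient descent on loss = sum((w*x + b - y)^2) / (2*N)."""
    w, b = random.random(), random.random()
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of the loss above w.r.t. w and b.
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= learning_rate * dw
        b -= learning_rate * db
    return w, b

def transform_linear(w, b, xs):
    """Mirror of transform_model: apply the learned affine map."""
    return [w * x + b for x in xs]
```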

TensorFlow 2:

import tensorflow as tf

from neuraxle_tensorflow.tensorflow_v2 import BaseTensorflowV2ModelStep


class Tensorflow2Model(BaseTensorflowV2ModelStep):
    def __init__(self, checkpoint_folder=None, hyperparams=None):
        BaseTensorflowV2ModelStep.__init__(self, checkpoint_folder=checkpoint_folder, hyperparams=hyperparams)

    def create_optimizer(self):
        return tf.keras.optimizers.Adam(0.1)

    def create_model(self):
        # LinearModel is assumed to be a small tf.keras.Model defined elsewhere in the tests.
        return LinearModel()

    def fit(self, data_inputs, expected_outputs=None) -> 'BaseStep':
        x = tf.convert_to_tensor(data_inputs)
        y = tf.convert_to_tensor(expected_outputs)

        with tf.GradientTape() as tape:
            output = self.model(x)
            self.loss = tf.reduce_mean(tf.abs(output - y))

        self.optimizer.apply_gradients(zip(
            tape.gradient(self.loss, self.model.trainable_variables),
            self.model.trainable_variables
        ))

        return self

    def transform(self, data_inputs):
        return self.model(tf.convert_to_tensor(data_inputs))
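The TF2 fit above takes one full-batch gradient step on mean absolute error per call. The same update, with the gradient derived by hand in plain Python (an illustrative sketch, not the GradientTape mechanism itself; the function name is assumed):

```python
def mae_gradient_step(w, b, xs, ys, learning_rate=0.1):
    """One gradient step on loss = mean(|w*x + b - y|), mirroring the GradientTape step.

    d|e|/de is sign(e), so the gradients are means of sign(error) * x and sign(error).
    """
    n = len(xs)
    signs = [
        (1.0 if w * x + b > y else -1.0 if w * x + b < y else 0.0)
        for x, y in zip(xs, ys)
    ]
    dw = sum(s * x for s, x in zip(signs, xs)) / n
    db = sum(signs) / n
    return w - learning_rate * dw, b - learning_rate * db
```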

@guillaume-chevalier (Member) commented Jan 6, 2020

model = TensorflowV2ModelStep(
    create_model, create_loss, create_optimizer,
    has_expected_outputs=False
).set_hyperparams(hp).set_hyperparams_space(hps)
@guillaume-chevalier (Member) commented Jan 6, 2020

    def create_graph(step: TensorflowV1ModelStep):
        tf.placeholder('float', name='data_inputs')
        tf.placeholder('float', name='expected_outputs')

        tf.Variable(np.random.rand(), name='weight')
        tf.Variable(np.random.rand(), name='bias')

        tf.add(tf.multiply(step['data_inputs'], step['weight']), step['bias'], name='output')

    def create_loss(step: TensorflowV1ModelStep):
        loss = tf.reduce_sum(tf.pow(step['output'] - step['expected_outputs'], 2)) / (2 * N_SAMPLES)

        # Name the loss through a null (identity) op so it can
        # be fetched later, e.g. as step['loss'].
        loss = tf.identity(loss, name='loss')

    def create_optimizer(step: TensorflowV1ModelStep):
        optimizer = tf.train.GradientDescentOptimizer(step.hyperparams['learning_rate']).minimize(step['loss'], name='optimizer')
model = TensorflowV1ModelStep(
    create_graph, create_loss, create_optimizer,
    has_expected_outputs=False
).set_hyperparams(hp).set_hyperparams_space(hps)
# add to BaseTensorflowV1ModelStep: 

    def transform_model(self, data_inputs):
        return self.session.run(self['output'], feed_dict={self['data_inputs']: data_inputs})

    def fit_model(self, data_inputs, expected_outputs=None) -> 'BaseStep':
        self.session.run(self['optimizer'], feed_dict={self['data_inputs']: data_inputs, self['expected_outputs']: expected_outputs})
        return self
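The callable-injection API proposed above (pass create_graph / create_loss / create_optimizer as functions rather than subclassing per model) can be sketched framework-free; all names here are illustrative stand-ins, not the PR's actual classes:

```python
class FunctionalModelStep:
    """Stores user-supplied builder functions instead of requiring a subclass per model."""

    def __init__(self, create_model, create_loss, create_optimizer):
        self.create_model = create_model
        self.create_loss = create_loss
        self.create_optimizer = create_optimizer
        self._parts = None

    def setup(self):
        # Call the injected builders in order, letting each see the step (and the model).
        model = self.create_model(self)
        loss = self.create_loss(self, model)
        optimizer = self.create_optimizer(self)
        self._parts = {'model': model, 'loss': loss, 'optimizer': optimizer}
        return self

    def __getitem__(self, name):
        # Mirrors the step['...'] lookup style used in the snippets above.
        return self._parts[name]
```

This keeps model definitions as plain functions, which is what lets a single generic step class cover many models.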

TODO: validate this works:

class BaseTensorflowModelStep:
    def __init__(self, create_graph, create_loss, create_optimizer):
        self.create_graph = create_graph
        self.create_loss = create_loss
        self.create_optimizer = create_optimizer
        self.set_hyperparams(self.__class__.HYPERPARAMS)
        self.set_hyperparams_space(self.__class__.HYPERPARAMS_SPACE)

BaseTensorflowModelStep is the common base class of the two.
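The shared __init__ in the TODO above can be validated in isolation; here is a framework-free sketch of that idea (the set_hyperparams helpers are simplified stand-ins for Neuraxle's, and the class names are illustrative):

```python
class BaseModelStep:
    # Subclasses override these class-level defaults.
    HYPERPARAMS = {}
    HYPERPARAMS_SPACE = {}

    def __init__(self, create_graph, create_loss, create_optimizer):
        # Store the three user-supplied builder functions.
        self.create_graph = create_graph
        self.create_loss = create_loss
        self.create_optimizer = create_optimizer
        # Pull defaults from the subclass-level constants, as the TODO suggests.
        self.set_hyperparams(self.__class__.HYPERPARAMS)
        self.set_hyperparams_space(self.__class__.HYPERPARAMS_SPACE)

    def set_hyperparams(self, hyperparams):
        self.hyperparams = dict(hyperparams)
        return self

    def set_hyperparams_space(self, space):
        self.hyperparams_space = dict(space)
        return self
```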

@guillaume-chevalier (Member) left a comment

Those changes are needed for me to upload/publish to PyPI

@guillaume-chevalier (Member) left a comment

Must also do this.

return tf.add(tf.multiply(step['data_inputs'], step['weight']), step['bias'])
"""
# Note: you can also return a tuple of two elements: the tensor for training (fit) and the tensor for inference (transform)

@guillaume-chevalier (Member) commented Jan 13, 2020

Was indentation lost during auto-format? Where should this comment go?

@@ -147,9 +147,9 @@ def fit_model(self, data_inputs, expected_outputs=None) -> BaseStep:
         feed_dict.update(additional_feed_dict_arguments)

         results = self.session.run([self['optimizer'], self['loss']], feed_dict=feed_dict)
-        self.loss.append(results[1])
+        self.losses.append(results[1])

@guillaume-chevalier (Member) commented Jan 13, 2020

Sorry for the double comment here. I think this could even be train_losses or test_losses, or we could have two variables for those losses if we compute both at some point. Just saying; it's okay not to rename for now, although this will need to be made clear when doing a documentation pass.

@guillaume-chevalier (Member) commented Jan 16, 2020

Merging for now; will review the unresolved items later.

@guillaume-chevalier guillaume-chevalier merged commit 7b78af0 into master Jan 16, 2020