Good news: there's a workaround until the ipynb is refreshed. I learned this trick in the next lab (#2441): you can no longer select TF 1.15 LTS in the GUI, but the prepared Anaconda VM image still exists, so we can create the workbench via the Terminal with the CLI instead. The lab instructions should be updated accordingly, just like the #2441 ipynb:

So instead of the "User Managed Notebook" creation block in the instructions, use the "Activate Cloud Shell" block and then the CLI, just like in the next lab:
Activate Cloud Shell, then:

```shell
gcloud auth list
gcloud config list project
```

And this is the workaround:

```shell
gcloud notebooks instances create tensorflow-notebook \
  --vm-image-project=deeplearning-platform-release \
  --vm-image-family=tf-1-15-cpu \
  --machine-type=e2-standard-4 \
  --location=us-central1-a
```
It was already a bad omen that the lab instructions (https://www.cloudskillsboost.google/course_sessions/2920313/labs/325069) tell me to pick a

> TensorFlow Enterprise > TensorFlow Enterprise 1.15 (with LTS)

notebook. The only LTS option I could pick was 2.6. The preprocessing notebook https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/recommendation_systems/labs/content_based_preproc.ipynb ran without a hiccup, but then in the main notebook https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/recommendation_systems/labs/content_based_using_neural_networks.ipynb I spotted

```python
os.environ['TFVERSION'] = '1.15.3'
```

doubled down with

```
!pip3 install --upgrade tensorflow==1.15.3
```
and the other pinned versions are ancient as well. I had a Star Wars-like "I have a bad feeling about this!" moment, and then the training and eval step crashed and I started to tumble down the rabbit hole. Here are the conversions I made:
```diff
-columns = tf.decode_csv(value_column, record_defaults=record_defaults)
+columns = tf.io.decode_csv(value_column, record_defaults=record_defaults)

-return dataset.make_one_shot_iterator().get_next()
+return tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()

-net = tf.feature_column.input_layer(features, params['feature_columns'])
+net = tf.compat.v1.feature_column.input_layer(features, params['feature_columns'])

-net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
+net = tf.compat.v1.layers.dense(net, units=units, activation=tf.nn.relu)

-logits = tf.layers.dense(net, params['n_classes'], activation=None)
+logits = tf.compat.v1.layers.dense(net, params['n_classes'], activation=None)
```
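As a sanity check of the first two conversions, here's a minimal TF2-eager sketch; the CSV rows and the two-column layout are made up for illustration, not taken from the lab:

```python
import tensorflow as tf

# Hypothetical CSV rows standing in for the lab's data
lines = tf.data.Dataset.from_tensor_slices(["id_a,1.0", "id_b,2.0"])
record_defaults = [tf.constant([], dtype=tf.string),
                   tf.constant([], dtype=tf.float32)]

def parse(value_column):
    # TF2 spelling of the old tf.decode_csv
    return tf.io.decode_csv(value_column, record_defaults=record_defaults)

dataset = lines.map(parse)

# In TF2 eager mode make_one_shot_iterator() isn't needed at all:
# a tf.data.Dataset is directly iterable
for content_id, score in dataset:
    print(content_id.numpy().decode(), score.numpy())
```

(Inside an estimator's graph-mode `input_fn` the `tf.compat.v1.data.make_one_shot_iterator(dataset)` form from the diff above is still the drop-in fix.)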
Aaand I ran out of time converting the

```python
tf.lookup.index_table_from_file(vocabulary_file="content_ids.txt")
```

call. I tried all kinds of `tf.compat.v1` routes, `tf.disable_v2_behavior()` magic, etc. Please convert this lab!
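In case it helps whoever refreshes the notebook: the TF2 counterpart I'd try for `index_table_from_file` is a `StaticHashTable` over a `TextFileInitializer`. A sketch, assuming `content_ids.txt` is a one-id-per-line vocabulary (the three ids below are made up):

```python
import tensorflow as tf

# Write a tiny stand-in vocabulary file (hypothetical content ids)
with open("content_ids.txt", "w") as f:
    f.write("id_a\nid_b\nid_c\n")

# TF2 equivalent of tf.lookup.index_table_from_file(vocabulary_file=...):
# map each whole line to its zero-based line number, unknown ids to -1
initializer = tf.lookup.TextFileInitializer(
    "content_ids.txt",
    key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
    value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER)
table = tf.lookup.StaticHashTable(initializer, default_value=-1)

print(table.lookup(tf.constant(["id_b", "nope"])).numpy())  # [ 1 -1]
```

Note this doesn't replicate `index_table_from_file`'s OOV-bucket hashing; if the lab relies on `num_oov_buckets`, `tf.lookup.StaticVocabularyTable` would be the closer match.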