monotonic calibration not working. #35
Comments
You seem to have not applied the projections. Note that calibration_layer also returns projection ops alongside the calibrated output.
So you will need to apply the projection ops that are returned after each batch update. If you are writing your own training loop, you can just add a session.run call to apply the projection ops. With estimators, this can be done with a SessionRunHook; see the base estimator for an example.
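For illustration, a minimal sketch of such a hook, assuming `projection_ops` is the list of ops returned by the calibration layer (the hook name and the way the ops are gathered are hypothetical):

```python
import tensorflow as tf


class ApplyProjectionsHook(tf.train.SessionRunHook):
    """Runs the calibration projection ops after every training step."""

    def __init__(self, projection_ops):
        # Group all projection ops into a single op; this must be called
        # while the graph that owns the ops is being built (e.g. inside
        # the model_fn).
        self._projection_op = tf.group(*projection_ops)

    def after_run(self, run_context, run_values):
        # After the optimizer has applied the batch update, project the
        # keypoints back onto the constrained (monotonic / bounded) set.
        run_context.session.run(self._projection_op)
```

The hook can then be attached via `training_hooks` in the `EstimatorSpec` returned by the model_fn, so it runs alongside the training op.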
That worked well. Since I was using an estimator, I set up a SessionRunHook and the model now obeys the constraints. Thanks for the help. On a related note, is there a reason the tensor names don't include the name passed in the call to calibration_layer? That makes it difficult to use multiple calls without name collisions.
You can always use tf.name_scope for that. But TF will add a suffix to avoid collisions if you recreate the layer in the same scope. Marking as closed.
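For example, a minimal runnable sketch of that suggestion (TF 1.x graph mode; `tf.identity` stands in for the calibration layer here, and the scope names are hypothetical):

```python
import tensorflow as tf

title_features = tf.placeholder(tf.float32, [None, 1])
body_features = tf.placeholder(tf.float32, [None, 1])

# Give each calibration call its own name scope so the resulting tensor
# names cannot collide.
with tf.name_scope("title_signals"):
    # ... build the first calibration here; its tensors get the
    # "title_signals/" prefix.
    calibrated_title = tf.identity(title_features, name="calibrated")

with tf.name_scope("body_signals"):
    # ... build the second calibration here; its tensors get the
    # "body_signals/" prefix.
    calibrated_body = tf.identity(body_features, name="calibrated")

print(calibrated_title.name)  # title_signals/calibrated:0
print(calibrated_body.name)   # body_signals/calibrated:0
```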
Trying to train a calibration for some signals to incorporate into TF Ranking.
The relevant code for the calibration is:
```python
import tensorflow_lattice as tfl

num_keypoints = 26
kp_inits = tfl.uniform_keypoints_for_signal(
    num_keypoints=num_keypoints,
    input_min=0.0,
    input_max=1.0,
    output_min=0.0,
    output_max=1.0)
```
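For context, here is a hedged sketch of how those keypoint initializers would typically be wired into the calibration layer and how the projection ops (per the answer above) get collected. The exact `tfl.calibration_layer` signature, the `monotonic`/`bound` argument names, and the three-element return value are assumptions based on the TF Lattice 1.x API of that era; check the signature in your installed version.

```python
import tensorflow as tf
import tensorflow_lattice as tfl

# Hypothetical uncalibrated signal, already scaled to [0, 1].
signal_2 = tf.placeholder(tf.float32, shape=[None, 1], name="signal_2")

num_keypoints = 26
kp_inits = tfl.uniform_keypoints_for_signal(
    num_keypoints=num_keypoints,
    input_min=0.0, input_max=1.0,
    output_min=0.0, output_max=1.0)

# Assumed return: (calibrated tensor, projection ops, regularization term).
calibrated, projection_ops, _ = tfl.calibration_layer(
    signal_2,
    num_keypoints=num_keypoints,
    keypoints_initializers=kp_inits,
    monotonic=+1,  # ask for an increasing calibration
    bound=True,    # keep outputs within [output_min, output_max]
    name="signal_2_calibration")

# projection_ops must be run after every optimizer step (see the
# SessionRunHook above); otherwise the constraints are not enforced.
```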
The learned model isn't monotonic. Here are some of the calibration parameters it has learned (from inspecting the checkpoint):
```
tensor_name: group_score/pwl_calibration/signal_2_bound_max
1.0
tensor_name: group_score/pwl_calibration/signal_2_bound_min
0.0
tensor_name: group_score/pwl_calibration/signal_2_keypoints_inputs
[0. 0.04 0.08 0.12 0.16 0.19999999 0.24 0.28 0.32 0.35999998 0.39999998 0.44 0.48 0.52 0.56 0.59999996 0.64 0.68 0.71999997 0.76 0.79999995 0.84 0.88 0.91999996 0.96 1.]
tensor_name: group_score/pwl_calibration/signal_2_keypoints_inputs/Adagrad
[0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]
tensor_name: group_score/pwl_calibration/signal_2_keypoints_outputs
[0.5595347 0.00848915 -0.02862659 0.44848698 0.3586025 0.40749145 0.35288998 0.38407487 0.38621387 0.47819927 0.6856117 0.60562074 0.59473854 0.5449814 0.43999994 0.61086124 0.72133946 0.64237064 0.66826046 0.7117335 0.6590987 0.662649 0.5869861 0.87017834 0.7034538 1.2272371]
tensor_name: group_score/pwl_calibration/signal_2_keypoints_outputs/Adagrad
[4.567583 0.34649372 0.2375099 0.2630496 0.22509426 0.19528154 0.1826403 0.19447225 0.1917207 0.21152268 0.17799918 0.18089467 0.2096777 0.18614963 0.17668937 0.1913786 0.23144016 0.23107207 0.2278506 0.21568052 0.26991028 0.24701497 0.287972 0.36811396 0.62489855 2.2491465]
```
The bounds aren't respected either.
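As a quick sanity check, a small sketch (the checkpoint path is hypothetical) that reads the dumped keypoint outputs back and tests the two properties directly:

```python
import numpy as np
import tensorflow as tf

# Read the learned keypoint outputs straight from the checkpoint.
reader = tf.train.load_checkpoint("/path/to/model_dir")  # hypothetical path
outputs = reader.get_tensor(
    "group_score/pwl_calibration/signal_2_keypoints_outputs")

# An increasing calibration needs non-decreasing keypoint outputs.
print("monotonic:", bool(np.all(np.diff(outputs) >= 0)))
# output_min=0.0 / output_max=1.0 should bound the outputs.
print("within bounds:", bool(outputs.min() >= 0.0 and outputs.max() <= 1.0))
```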