Isn't it inaccurate to use the size reward of the entire file as the reward for every caller-callee pair, when the file is large and contains many caller-callee pairs?
#325 · Open · 18liumin opened this issue on Dec 12, 2023 · 1 comment
import tensorflow as tf


def _overwrite_trajectory_reward(sequence_example: tf.train.SequenceExample,
                                 reward: float) -> tf.train.SequenceExample:
  """Overwrite the reward in the trace (sequence_example) with the given one.

  Args:
    sequence_example: A tf.SequenceExample proto describing compilation trace.
    reward: The reward to overwrite with.

  Returns:
    The tf.SequenceExample proto after post-processing.
  """
  # The trajectory length is the length of any feature list in the trace.
  sequence_length = len(
      next(iter(sequence_example.feature_lists.feature_list.values())).feature)
  reward_list = sequence_example.feature_lists.feature_list['reward']
  # Replicate the single module-level reward across every timestep.
  for _ in range(sequence_length):
    added_feature = reward_list.feature.add()
    added_feature.float_list.value.append(reward)
  return sequence_example
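For concreteness, here is a minimal usage sketch (the 'inlining_decision' feature name and the reward value 0.25 are made-up placeholders, not taken from the repo) showing that every step of the trajectory ends up with the identical module-level reward, which is exactly what the question above is about:

import tensorflow as tf

# Hypothetical trajectory with three inlining decisions.
example = tf.train.SequenceExample()
decisions = example.feature_lists.feature_list['inlining_decision']
for decision in [0, 1, 1]:
  decisions.feature.add().int64_list.value.append(decision)

# After overwriting, all three steps carry the same module-level reward.
example = _overwrite_trajectory_reward(example, reward=0.25)
print(example.feature_lists.feature_list['reward'])
# Prints three features, each holding the float value 0.25.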
Notice the discount factor is 1, which should mean only one of the reward values gets picked. Setting all of them to the same value is probably superfluous and clearly hurts readability, but it makes for easier experimentation with other discount values.
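To see why replicating the reward at every step makes other discount values easy to try, here is a generic return-to-go computation (a sketch of standard RL bookkeeping under an assumed discounting scheme, not the actual training code in this repo):

def discounted_returns(rewards, gamma):
  """Return-to-go at each step: G_t = r_t + gamma * G_{t+1}."""
  returns = []
  running = 0.0
  for r in reversed(rewards):
    running = r + gamma * running
    returns.append(running)
  return list(reversed(returns))

# With the module reward replicated at every step of a 4-step trajectory:
print(discounted_returns([0.25] * 4, gamma=1.0))  # [1.0, 0.75, 0.5, 0.25]
print(discounted_returns([0.25] * 4, gamma=0.5))  # [0.46875, 0.4375, 0.375, 0.25]

Because each step already carries a reward, trying a different discount only means changing gamma; the trace format itself stays the same.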