How could I get the predicted results? #6
Comments
Please have a look at my answer in #5 (comment).
@Kouin - did you figure it out? I'm trying to do the same.
I provided the code for generating predictions in the thread for another issue, #5 (comment), and it seems to work for the original poster there.
Thanks, I was thrown off by the comment in #3 (comment) about the step() function in RNNLayer being needed when generating new sequences. Has it been used in #5 (comment)?
I'm not sure I understand what exactly you want to do. Can you describe it in more detail? If you want to sample new trajectories from the TPP, you will need to use […]. If you want to get the predictions one step into the future (i.e. you want to compute the expected time until the next event) […]
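For the "one step into the future" case, the expected time until the next event can be approximated by Monte Carlo: draw many samples of the next inter-event time and average them. A minimal sketch, where `sample_tau` is a hypothetical callable wrapping the model's sampler (e.g. something like `lambda: model.decoder.sample(1, h).item()`, following the API names that appear in snippets later in this thread; the exact signature may differ):

```python
def expected_next_tau(sample_tau, num_samples=1000):
    """Monte Carlo estimate of the expected time until the next event.

    sample_tau: callable returning one sampled inter-event time
    (hypothetical wrapper around the model's decoder sampler).
    """
    return sum(sample_tau() for _ in range(num_samples)) / num_samples
```

With a real model, `sample_tau` would condition on the current history embedding; here any callable that returns a float works.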
Thanks a lot for the explanation - I want to do the former. I'll follow these directions.
One more important detail: make sure that you apply the relevant transformations to the samples before feeding them into the RNN. Under default settings, we transform the RNN input […]
Thanks a lot!
Hi @shchur - apologies if this is a silly question! Just on the above instructions for sampling new trajectories:

dl_train = torch.utils.data.DataLoader(d_train, batch_size=1, shuffle=False, collate_fn=collate)
for x in dl_train:
    break
y, h = model.rnn.step(x, model.rnn(x))
Hi @cjchristopher, here is my implementation of sampling for entire trajectories:

next_in_time = torch.zeros(1, 1, 1)
h = torch.zeros(1, 1, history_size)
inter_times = []
t_max = 1000
with torch.no_grad():
    while sum(inter_times) < t_max:
        # Update the hidden state with the (transformed) last inter-event time
        _, h = model.rnn.step(next_in_time, h)
        # Sample the next inter-event time from the decoder
        tau = model.decoder.sample(1, h)
        inter_times.append(tau.item())
        # Apply the same log + standardization transform used for the RNN input
        next_in_time = ((tau + 1e-8).log() - mean_in_train) / std_in_train
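The loop above collects a list of inter-event times. If absolute event timestamps are needed instead, the standard conversion is a cumulative sum; a small follow-up sketch (not part of the original code):

```python
import itertools

inter_times = [0.5, 1.25, 0.25]  # e.g. values collected by the loop above
arrival_times = list(itertools.accumulate(inter_times))
# arrival_times == [0.5, 1.75, 2.0]
```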
Hi @shchur, thanks for sharing the code. I need your clarification on how to denormalize the generated samples. The transformation you applied (next_in_time = ((tau + 1e-8).log() - mean_in_train) / std_in_train) doesn't seem to work, as my input has discrete integer times while the generated outputs are fractional. Could you provide de-normalization code that works with your model? Thanks :)
Hi @avs123, do I understand correctly that you want the model to generate discrete inter-event times? This is currently not supported, as the model learns a continuous probability distribution over the inter-event times, so the sampled inter-event times […]
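Since the model learns a continuous distribution, integer-valued outputs would have to come from post-processing of the samples. One simple option (my own hypothetical suggestion, not part of the library) is to round each sampled inter-event time and clamp it to a minimum gap:

```python
def discretize_tau(tau, min_gap=1):
    # Hypothetical post-processing for discrete-time data: round the
    # continuous sample to the nearest integer, clamped to at least
    # min_gap so consecutive events do not coincide.
    return max(min_gap, round(tau))
```

Note that rounding changes the effective distribution of the inter-event times, so whether this is acceptable depends on the application.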
Thanks for responding so quickly, @shchur. Can you confirm whether the generated tau is the actual inter-event time, or whether we need to apply some transformation to map it back to the time scale of the input sequence, i.e. anything to undo the log and normalization transformations applied to the input? Please elaborate.
The transformations applied to the RNN input are not related to the transformations applied to the output, so […]
Thanks for the refactored code, @shchur .
Any tips would be appreciated. Thanks.
Hey @KristenMoore, I have just implemented sampling for the new code. I checked it on a few datasets (see […]). I also realized that there was a very serious bug that I introduced while refactoring: the slicing for the context embedding was off by one, which meant that the model was peeking into the future. This is fixed by the last commit.
Great - thanks @shchur! |
Hi @shchur - just one question about the sampling. Thanks. |
There is no reason, really; I guess you could just do […]
This repo inspired me a lot.
However, in my work I focus more on the predicted results.
It's hard for me to understand your implementation details well enough to derive the predicted results, as I only stepped into this area a few months ago.
Could you please tell me how to get the predicted results based on your code?
I'd appreciate it if you could help.
Thanks anyway.