Calculate the mean of the entire distribution. #3

Closed
guoshnBJTU opened this issue Apr 29, 2020 · 7 comments

@guoshnBJTU

Thanks for your code.
I would like to know how to calculate the mean of the entire distribution. Besides the NLL loss of \tau, I also want to compute the RMSE/MAE of \tau, which requires the mean of \tau.
I think I should use the equation E[\tau] = \sum_k w_k \exp(\mu_k + s_k^2 / 2), but I cannot get the right answer.
Could you tell me how to get the mean? Thank you!

@shchur
Owner

shchur commented May 4, 2020

Hi, thanks for the good question. We implement the LogNormMix model in a non-standard way, which changes the formula slightly. The standard way to obtain a LogNormMix sample is via the following sequence of operations:

x ~ GMM(w, \mu, s^2)
\tau = exp(x)

What we do instead is

x ~ GMM(w, \mu, s^2)
y = ax + b
\tau = exp(y)

The parameters a and b are chosen based on the (training) dataset, such that the distribution of x = (log \tau - b)/a has zero mean and unit variance. (We found this to speed up training, and we use a similar trick for the other methods as well.) In our code (train.py), the parameter a is called std_out_train and b is called mean_out_train.

The correct way to compute the mean in this case is E[\tau] = \sum_k w_k \exp(a * \mu_k + b + a^2 * s_k^2 / 2)
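For reference, here is a minimal sketch of that computation in PyTorch. It assumes log_weights, means, and log_stds are the per-event mixture parameters produced by the model, with std_out_train and mean_out_train playing the roles of a and b; the actual tensor names and shapes in the repository may differ.

```python
import torch

def lognormmix_mean(log_weights, means, log_stds, mean_out_train, std_out_train):
    """Expected inter-event time E[tau] of the affinely transformed mixture.

    Implements E[tau] = sum_k w_k * exp(a * mu_k + b + a^2 * s_k^2 / 2),
    with a = std_out_train and b = mean_out_train.
    """
    a, b = std_out_train, mean_out_train
    weights = log_weights.exp()                   # mixture weights w_k
    stds = log_stds.exp()                         # component std. deviations s_k
    per_component = torch.exp(a * means + b + 0.5 * (a * stds) ** 2)
    return (weights * per_component).sum(dim=-1)  # sum over mixture components
```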

Let me know if this works for you.

@guoshnBJTU
Author

@shchur Thank you for your reply, I got it. But I have another question.
Since x ~ GMM, could I first get the mean of x via E[x] = \sum_k w_k \mu_k, and then use E[y] = a E[x] + b and E[\tau] = \exp(E[y]) to get the mean of \tau?

@shchur
Owner

shchur commented May 7, 2020

Unfortunately, it's not that simple. In general, E[f(x)] != f(E[x]). For example, if you have x ~ Normal(\mu, \sigma^2), then exp(E[x]) = \exp(\mu), but E[exp(x)] = \exp(\mu + \sigma^2/2) (since exp(x) follows a log-normal distribution).
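A quick numerical illustration of this point (plain NumPy, purely as a sanity check):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.8
x = rng.normal(mu, sigma, size=1_000_000)

print(np.exp(x.mean()))             # exp(E[x])  ~ exp(mu)             ~ 2.72
print(np.exp(x).mean())             # E[exp(x)]  ~ exp(mu + sigma^2/2) ~ 3.74
print(np.exp(mu + sigma ** 2 / 2))  # analytic mean of the log-normal
```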

@guoshnBJTU
Author

@shchur OK, I know. Thank you very much!

@guoshnBJTU
Author

Besides, when splitting each sequence into train/val/test in your code,

def train_val_test_split_each(self, train_size=0.6, val_size=0.2, test_size=0.2, seed=123)

I see

in_train.append(self.in_times[idx][:n_train])
in_val.append(self.in_times[idx][n_train : (n_train + n_val)])
in_test.append(self.in_times[idx][(n_train + n_val):])

So the validation part of each sequence does not use the historical information from the preceding train part, right? I think that during validation/testing, the history from the train part should also be used; it might be helpful.

In addition, what is the purpose of def step(self, x, h) in RNNLayer? Thank you!

@shchur
Owner

shchur commented May 7, 2020

That's indeed what happens. Even though RNNs should in theory be able to capture long-range interactions, we found that this additional history led to absolutely no improvement in performance, so we decided to stick with this version when refactoring the code. This is also consistent with results in other papers (e.g. https://arxiv.org/abs/1905.09690), which also find that RNNs basically don't learn long-range interactions (or at least that no long-range interactions are necessary for TPP models). Also, you can see that our RNN model basically matches the optimal performance on synthetic datasets (like Hawkes), which means that we don't lose much by discarding the history here.

This function is necessary when generating new sequences with the RNN. Since the parameters of p(\tau_{i+1} | history) depend on \tau_i, we have to process the events one by one and cannot use fused RNN kernels (i.e. RNN.forward).
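Roughly, generation looks like the following sketch. Only RNNLayer.step(x, h) comes from the repository; decoder.sample is a hypothetical name standing in for drawing \tau from the learned conditional distribution, and the actual sampling code differs in details.

```python
def generate(rnn_layer, decoder, t_max, h0=None):
    """Sample a sequence by alternating one decoder draw with one RNN step.

    Each sampled inter-event time tau_i must be fed back into the RNN to get
    the hidden state that parametrizes p(tau_{i+1} | history), so the events
    are processed one at a time instead of with a fused RNN.forward call.
    """
    h, t, taus = h0, 0.0, []
    while t < t_max:
        tau = decoder.sample(h)     # draw the next inter-event time from p(. | h)
        t += float(tau)
        taus.append(tau)
        h = rnn_layer.step(tau, h)  # advance the hidden state by one event
    return taus
```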

@guoshnBJTU
Author

I really appreciate your patience in answering all my questions.
