Commit c00775d

Remove warnings generated in `test_graphmask_explainer.py` and `test_laplacian_lambda_max.py` (#9179)

With this fix, the following warnings:
```
test/explain/algorithm/test_graphmask_explainer.py: 12960 warnings
  /usr/local/lib/python3.10/dist-packages/torch_geometric/explain/algorithm/graphmask_explainer.py:165: DeprecationWarning: `np.math` is a deprecated alias for the standard library `math` module (Deprecated Numpy 1.25). Replace usages of `np.math` with `math`
    beta * np.math.log(-gamma / zeta))


test/transforms/test_laplacian_lambda_max.py::test_laplacian_lambda_max
test/transforms/test_laplacian_lambda_max.py::test_laplacian_lambda_max
test/transforms/test_laplacian_lambda_max.py::test_laplacian_lambda_max
test/transforms/test_laplacian_lambda_max.py::test_laplacian_lambda_max
  /usr/local/lib/python3.10/dist-packages/torch_geometric/transforms/laplacian_lambda_max.py:65: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
    data.lambda_max = float(lambda_max.real)
```
will no longer be generated.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: rusty1s <matthias.fey@tu-dortmund.de>
3 people committed Apr 11, 2024
1 parent 35497ae commit c00775d
Showing 2 changed files with 2 additions and 3 deletions.
3 changes: 1 addition & 2 deletions torch_geometric/explain/algorithm/graphmask_explainer.py
```
@@ -1,7 +1,6 @@
 import math
 from typing import List, Optional, Tuple, Union

-import numpy as np
 import torch
 import torch.nn.functional as F
 from torch import Tensor
@@ -162,7 +161,7 @@ def _hard_concrete(
                 (torch.log(u) - torch.log(1 - u) + input_element) / beta)

             penalty = torch.sigmoid(input_element -
-                                    beta * np.math.log(-gamma / zeta))
+                                    beta * math.log(-gamma / zeta))
         else:
             s = torch.sigmoid(input_element)
             penalty = torch.zeros_like(input_element)
```
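For context: `np.math` was never a NumPy-specific module, only a re-export of the standard-library `math` module, so calling `math.log` directly is a drop-in, behavior-preserving replacement. A minimal sketch of the substitution (the constant values below are hypothetical stand-ins for the hard-concrete parameters, not taken from this file):

```
import math

# Hypothetical stand-ins for the hard-concrete distribution constants.
beta, gamma, zeta = 2 / 3, -0.2, 1.2

# was: beta * np.math.log(-gamma / zeta)  -> DeprecationWarning on NumPy >= 1.25
bias = beta * math.log(-gamma / zeta)
print(bias)  # same value as before, no warning
```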
2 changes: 1 addition & 1 deletion torch_geometric/transforms/laplacian_lambda_max.py
```
@@ -62,7 +62,7 @@ def forward(self, data: Data) -> Data:
             eig_fn = eigsh

         lambda_max = eig_fn(L, k=1, which='LM', return_eigenvectors=False)
-        data.lambda_max = float(lambda_max.real)
+        data.lambda_max = lambda_max.real.item()

         return data
```
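For context: `eigsh(..., k=1, return_eigenvectors=False)` returns a one-element array rather than a scalar, and NumPy 1.25 deprecated implicit scalar conversion of arrays with `ndim > 0`. A minimal sketch of the change (the eigenvalue below is a made-up stand-in for the `eigsh` result):

```
import numpy as np

# Stand-in for the one-element array returned by
# eigsh(L, k=1, which='LM', return_eigenvectors=False).
lambda_max = np.array([3.999 + 0j])

# float(lambda_max.real) warns under NumPy >= 1.25 because the array has
# ndim > 0; .item() is the sanctioned way to extract the single element.
value = lambda_max.real.item()
assert isinstance(value, float)
print(value)  # 3.999
```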
