McDaniel7/Deep_Graph_Kernel_Point_Process

# Deep graph kernel point processes

Point process models are widely used to analyze asynchronous events occurring within a graph, reflecting how different types of events influence one another. Predicting the times and types of future events is a crucial task, and the size and topology of the graph add to the challenge. Recent neural point process models have shown promise in capturing intricate inter-event-category dependencies. However, such methods use an unfiltered history of events, including all event categories in the intensity computation for each target event type. In this work, we propose a graph point process method in which event interactions occur based on a latent graph topology. The corresponding undirected graph has nodes representing event categories and edges indicating potential contribution relationships. We then develop a novel deep graph kernel to characterize the triggering and inhibiting effects between events. The intrinsic influence structures are incorporated via the graph neural network (GNN) model used to represent the learnable kernel. The computational efficiency of the GNN approach allows our model to scale to large graphs. Comprehensive experiments on synthetic and real-world data show the superior performance of our approach over state-of-the-art methods in predicting future events and uncovering the relational structure of the data.

## Model

### Point process with deep temporal graph kernel

$$\lambda(t, v) = \mu + \sum_{(t^\prime, v^\prime)\in \mathcal{H}_t}k(t^\prime, t, v^\prime, v)$$

$$k(t^\prime, t, v^\prime, v) = \sum_{r=1}^{R} \sum_{l=1}^{L}\alpha_{rl}\psi_l(t^\prime)\varphi_l(t-t^\prime)B_r^{(o_r)}(v^\prime, v)$$
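To make the intensity formula concrete, here is a toy numerical sketch: a hand-written exponential-decay kernel weighted by a fixed node-pair matrix `B` stands in for the learned deep graph kernel. The `intensity` function and the matrix `B` below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

# Toy evaluation of the conditional intensity lambda(t, v) for a marked
# point process on a graph: a baseline rate mu plus contributions from
# past events, each weighted by how strongly the past event's category
# v' influences the target category v (entry B[v', v]) and by an
# exponential decay in the elapsed time.
def intensity(t, v, history, mu, B, beta=1.0):
    """history: iterable of (t_prime, v_prime) events; only t_prime < t count."""
    lam = mu
    for t_prime, v_prime in history:
        if t_prime < t:
            lam += B[v_prime, v] * np.exp(-beta * (t - t_prime))
    return lam

# 3 event categories; B[i, j] is how strongly category i excites category j.
B = np.array([[0.5, 0.2, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.3, 0.5]])
history = [(0.5, 0), (1.2, 1)]
lam = intensity(2.0, 2, history, mu=0.3, B=B)
```

Only the event of category 1 contributes here, since `B[0, 2]` is zero: the latent graph structure filters which past categories enter the sum, which is the mechanism the deep graph kernel learns.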

An illustration of the localized graph filter basis $B_r^{(o_r)}(v^\prime, v)$ on a 5-node ring graph.
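The localized filter idea can be sketched as a hop-limited reachability mask: order 0 keeps only self-interactions, order 1 adds immediate neighbors, and so on. The construction below on a 5-node ring graph is an illustrative assumption, not the repository's exact implementation of $B_r^{(o_r)}$.

```python
import numpy as np

# 5-node ring graph: node i connects to i-1 and i+1 (mod 5).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0
    A[i, (i - 1) % n] = 1.0

def local_basis(A, order):
    """Mask of node pairs within `order` hops of each other (including self)."""
    n = A.shape[0]
    reach = np.eye(n)   # 0 hops: every node reaches itself
    hop = np.eye(n)
    for _ in range(order):
        hop = hop @ A   # accumulate one more hop of reachability
        reach += hop
    return (reach > 0).astype(float)

B0 = local_basis(A, 0)  # identity: only v' == v contributes
B1 = local_basis(A, 1)  # self plus immediate ring neighbors
```

Higher-order masks widen the support, so a filter of order $o_r$ lets an event at node $v^\prime$ influence only nodes within $o_r$ hops, keeping the kernel local on the graph.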

## Results

Comparison of the true and learned models on two synthetic datasets, showing the graph kernel, event dependency, and intensity for each (figures omitted).

## Usage

### Synthetic data generation

One can use `synthetic_data_generation.py` to simulate events from temporal graph point processes:

```python
import numpy as np
import torch
from torch_geometric.data import Data

from kernel import BaseExponentialCosineBasis, TemporalParametricKernelL3netLocalFilterOnGraph
from point_process import TemporalPointProcessOnGraph
from synthetic_data_generation import GraphTemporalPointProcessGenerator

T = [0., 50.]
tau_max = 5.
n_node = 3
mu_max = .3
mu = mu_max * np.ones(n_node)
data_dim = 2

# undirected 3-node chain graph: 0 -- 1 -- 2
adjacency_mat = np.zeros((n_node, n_node))
adjacency_mat[0, 1] = 1.
adjacency_mat[1, 0] = 1.
adjacency_mat[1, 2] = 1.
adjacency_mat[2, 1] = 1.
edge_index = torch.tensor(np.vstack(np.where(adjacency_mat != 0)), dtype=torch.long)
edge_weight = torch.tensor(adjacency_mat[np.where(adjacency_mat != 0)], dtype=torch.float)
node = torch.tensor(np.arange(n_node).reshape(-1, 1), dtype=torch.float)
G = Data(x=node, edge_index=edge_index, edge_weight=edge_weight)

# kernel
timebasis = BaseExponentialCosineBasis(alpha=1.5, beta=2., freq=.2)
order_list = [0, 1]
basis_weight = np.array([[.5, .2]])
filter_1 = torch.Tensor([[0.5, 0.0, 0.0],
                         [0.0, 0.7, 0.0],
                         [0.0, 0.0, 0.5]])
filter_2 = torch.Tensor([[0.0, 0.0, 0.0],
                         [-0.2, 0.0, 0.4],
                         [0.0, 0.0, 0.0]])
kernel = TemporalParametricKernelL3netLocalFilterOnGraph(device="cpu", T=T, tau_max=tau_max, G=G,
                                                         time_basis_list=[timebasis],
                                                         loc_filter_list=[filter_1, filter_2],
                                                         data_dim=data_dim, basis_weight=basis_weight,
                                                         basis_dim=1, init_std=1e0)
# point process
trg_model = TemporalPointProcessOnGraph(device="cpu", T=T, G=G, mu=mu, tau_max=tau_max,
                                        kernel=kernel, data_dim=data_dim)

batch_size = 1000
# define point process generator and generate synthetic data
generator = GraphTemporalPointProcessGenerator(trg_model, upper_bound=2e0, T=T, G=G)
data, size = generator.generate(batch_size=batch_size, min_n_points=5, verbose=False)
```
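The generator's `upper_bound` argument suggests Ogata-style thinning: candidate times are drawn from a homogeneous process at the upper-bound rate and accepted with probability intensity/bound. A generic one-dimensional sketch of that idea (not the repository's implementation):

```python
import numpy as np

# Minimal Ogata-style thinning sampler for a 1D point process. `lam` is
# any conditional intensity bounded above by `upper_bound` on [T0, T1];
# it receives the candidate time and the events accepted so far.
def thinning_sample(lam, T0, T1, upper_bound, rng):
    t, events = T0, []
    while True:
        # candidate inter-arrival from the dominating homogeneous process
        t += rng.exponential(1.0 / upper_bound)
        if t > T1:
            break
        # accept the candidate with probability lam(t) / upper_bound
        if rng.uniform() * upper_bound <= lam(t, events):
            events.append(t)
    return events

rng = np.random.default_rng(0)
# Simplest case: a homogeneous Poisson process with rate 0.3 on [0, 50].
events = thinning_sample(lambda t, h: 0.3, 0., 50., 2.0, rng)
```

For the graph model, the same scheme would additionally sample an event category from the per-node intensities at each accepted time.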

### Model training

The model is trained using `train.py`:

```python
import numpy as np
import torch

from utils import eval_points_generate
# assumed import locations for the model, kernel, and training utilities:
from point_process import TemporalPointProcessOnGraph
from kernel import TemporalDeepBasisL3netLocalFilterOnGraphKernel
from train import train, config_generate

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# T, mu, mu_max, tau_max, data_dim, data, and the target model trg_model
# are defined as in the data-generation example above; rootpath points
# to the repository root.

# initialize model
order_list   = [0, 1]
n_basis_time = 1
n_basis_loc  = 2
data_name    = "data-basecos_exp-neg-graph-V3"
model_name   = "Model_for_%s_nbasistime%d_nbasisloc%d" % (data_name, n_basis_time, n_basis_loc)
save_path    = rootpath + "/saved_models/%s" % model_name
G = torch.load(rootpath + "/data/%s/graph.pt" % data_name)
kernel = TemporalDeepBasisL3netLocalFilterOnGraphKernel(T=T, G=G, tau_max=tau_max, device=device,
                                        n_basis_time=n_basis_time, loc_order_list=order_list,
                                        data_dim=data_dim, basis_dim=1, nn_width_basis_time=32,
                                        init_gain=5e-1, init_bias=1e-2, init_std=1e-2)
init_model = TemporalPointProcessOnGraph(device=device, T=T, G=G, mu=mu, tau_max=tau_max,
                                         kernel=kernel, data_dim=data_dim, numerical_int=True,
                                         int_res=100, int_res_loc=200, pen_res_time=50, eval_res=100)

# initialize training configuration
config = config_generate(lr=1e-2, epoch=1000, batch_size=32, penalty=True, mae_eval=True,
                         t_init=1e0, t_upp=1e6, t_mul=1.3, b_bar=mu_max/4., b_upp=-mu_max/4., device=device)
ts = np.linspace(T[0], T[1], 50)
ns = np.arange(G.x.shape[0])
eval_points = eval_points_generate(ts, ns)

# data loading: 800 training / 200 test sequences
data        = torch.FloatTensor(data)
train_data  = data[:800]
test_data   = data[800:]
plot_points = test_data[189]

# training
save_model = True
train_llks, test_llks, test_maes, test_mres, losses, wall_time, lam_mins, ts, bs = \
    train(init_model, trg_model, config, train_data, test_data,
          eval_points=eval_points, plot_points=plot_points, plot_ngrid=100,
          modelname=model_name, save_model=save_model, save_path=save_path, load_iter=50)
```

### Model evaluation

The model can be evaluated using the functions in `visualization.py` and `evaluation.py`; additional evaluations for other tasks can be developed accordingly.

```python
import itertools
import numpy as np
import torch

# assumed import location for the evaluation helpers:
from evaluation import calc_graph_kernel, calc_graph_pointprocess

# load pre-trained model
init_model.load_state_dict(torch.load(rootpath + "/results/saved_models/saved_model_dict.pth"))

# evaluation
## graph kernel
graph_kernel_vals = calc_graph_kernel(init_model.kernel).T

## intensity
events = plot_points[plot_points[:, 0] > 0]
T_plot = [10., 50.]
ngrid = 200
lam_vals = calc_graph_pointprocess(init_model, events, T_plot=T_plot, ngrid=ngrid)

## event dependency: kernel value k(e_i, e_j) for each ordered event pair
select_event_idx = np.arange(0, len(events), 1)
n_dep = len(select_event_idx)
idx_pair = np.array(list(itertools.product(select_event_idx, select_event_idx)))
event_dep_1 = events[idx_pair[:, 0]]
event_dep_2 = events[idx_pair[:, 1]]
with torch.no_grad():
    dep_vals = init_model.kernel(event_dep_1, event_dep_2).numpy()
    dep_vals = dep_vals.reshape(n_dep, n_dep)
    # zero out the upper triangle: an event can only depend on earlier events
    upp_idx = np.triu(np.ones((n_dep, n_dep))).astype(bool)
    dep_vals[upp_idx] = 0.
```

## Citation
