torch_geometric.utils — pytorch_geometric documentation

scatter Reduces all values from the src tensor at the indices specified in the index tensor along a given dimension dim.
group_argsort Returns the indices that sort the tensor src along a given dimension in ascending order by value.
group_cat Concatenates the given sequence of tensors tensors in the given dimension dim.
segment Reduces all values in the first dimension of the src tensor within the ranges specified in the ptr.
index_sort Sorts the elements of the inputs tensor in ascending order.
cumsum Returns the cumulative sum of elements of x.
degree Computes the (unweighted) degree of a given one-dimensional index tensor.
softmax Computes a sparsely evaluated softmax.
lexsort Performs an indirect stable sort using a sequence of keys.
sort_edge_index Row-wise sorts edge_index.
coalesce Row-wise sorts edge_index and removes its duplicated entries.
is_undirected Returns True if the graph given by edge_index is undirected.
to_undirected Converts the graph given by edge_index to an undirected graph such that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).
contains_self_loops Returns True if the graph given by edge_index contains self-loops.
remove_self_loops Removes every self-loop in the graph given by edge_index, so that \((i,i) \not\in \mathcal{E}\) for every \(i \in \mathcal{V}\).
segregate_self_loops Segregates self-loops from the graph.
add_self_loops Adds a self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index.
add_remaining_self_loops Adds remaining self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index.
get_self_loop_attr Returns the edge features or weights of self-loops \((i, i)\) of every node \(i \in \mathcal{V}\) in the graph given by edge_index.
contains_isolated_nodes Returns True if the graph given by edge_index contains isolated nodes.
remove_isolated_nodes Removes the isolated nodes from the graph given by edge_index with optional edge attributes edge_attr.
get_num_hops Returns the number of hops the model is aggregating information from.
subgraph Returns the induced subgraph of (edge_index, edge_attr) containing the nodes in subset.
bipartite_subgraph Returns the induced subgraph of the bipartite graph (edge_index, edge_attr) containing the nodes in subset.
k_hop_subgraph Computes the induced subgraph of edge_index around all nodes in node_idx reachable within \(k\) hops.
dropout_node Randomly drops nodes from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.
dropout_edge Randomly drops edges from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.
dropout_path Drops edges from the adjacency matrix edge_index based on random walks.
dropout_adj Randomly drops edges from the adjacency matrix (edge_index, edge_attr) with probability p using samples from a Bernoulli distribution.
homophily The homophily of a graph characterizes how likely nodes with the same label are near each other in a graph.
assortativity The degree assortativity coefficient from the "Mixing patterns in networks" paper.
normalize_edge_index Applies normalization to the edges of a graph.
get_laplacian Computes the graph Laplacian of the graph given by edge_index and optional edge_weight.
get_mesh_laplacian Computes the mesh Laplacian of a mesh given by pos and face.
mask_select Returns a new tensor which masks the src tensor along the dimension dim according to the boolean mask mask.
index_to_mask Converts indices to a mask representation.
mask_to_index Converts a mask to an index representation.
select Selects the input tensor or input list according to a given index or mask vector.
narrow Narrows the input tensor or input list to the specified range.
to_dense_batch Given a sparse batch of node features \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times F}\) (with \(N_i\) indicating the number of nodes in graph \(i\)), creates a dense node feature tensor \(\mathbf{X} \in \mathbb{R}^{B \times N_{\max} \times F}\) (with \(N_{\max} = \max_i^B N_i\)).
to_dense_adj Converts batched sparse adjacency matrices given by edge indices and edge attributes to a single dense batched adjacency matrix.
to_nested_tensor Given a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\) (with \(N_i\) indicating the number of elements in example \(i\)), creates a nested PyTorch tensor.
from_nested_tensor Given a nested PyTorch tensor, creates a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\), and optionally a batch vector which assigns each element to a specific example.
dense_to_sparse Converts a dense adjacency matrix to a sparse adjacency matrix defined by edge indices and edge attributes.
is_torch_sparse_tensor Returns True if the input src is a torch.sparse.Tensor (in any sparse layout).
is_sparse Returns True if the input src is of type torch.sparse.Tensor (in any sparse layout) or of type torch_sparse.SparseTensor.
to_torch_coo_tensor Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_coo.
to_torch_csr_tensor Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csr.
to_torch_csc_tensor Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csc.
to_torch_sparse_tensor Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with custom layout.
to_edge_index Converts a torch.sparse.Tensor or a torch_sparse.SparseTensor to edge indices and edge attributes.
spmm Matrix product of sparse matrix with dense matrix.
unbatch Splits src according to a batch vector along dimension dim.
unbatch_edge_index Splits the edge_index according to a batch vector.
one_hot Takes a one-dimensional index tensor and returns a one-hot encoded representation of it with shape [*, num_classes] that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1.
normalized_cut Computes the normalized cut \(\mathbf{e}_{i,j} \cdot \left( \frac{1}{\deg(i)} + \frac{1}{\deg(j)} \right)\) of a weighted graph given by edge indices and edge attributes.
grid Returns the edge indices of a two-dimensional grid graph with height height and width width and its node positions.
geodesic_distance Computes (normalized) geodesic distances of a mesh given by pos and face.
to_scipy_sparse_matrix Converts a graph given by edge indices and edge attributes to a scipy sparse matrix.
from_scipy_sparse_matrix Converts a scipy sparse matrix to edge indices and edge attributes.
to_networkx Converts a torch_geometric.data.Data instance to a networkx.Graph if to_undirected is set to True, or a directed networkx.DiGraph otherwise.
from_networkx Converts a networkx.Graph or networkx.DiGraph to a torch_geometric.data.Data instance.
to_networkit Converts a (edge_index, edge_weight) tuple to a networkit.Graph.
from_networkit Converts a networkit.Graph to a (edge_index, edge_weight) tuple.
to_trimesh Converts a torch_geometric.data.Data instance to a trimesh.Trimesh.
from_trimesh Converts a trimesh.Trimesh to a torch_geometric.data.Data instance.
to_cugraph Converts a graph given by edge_index and optional edge_weight into a cugraph graph object.
from_cugraph Converts a cugraph graph object into edge_index and optional edge_weight tensors.
to_dgl Converts a torch_geometric.data.Data or torch_geometric.data.HeteroData instance to a dgl graph object.
from_dgl Converts a dgl graph object to a torch_geometric.data.Data or torch_geometric.data.HeteroData instance.
from_rdmol Converts a rdkit.Chem.Mol instance to a torch_geometric.data.Data instance.
to_rdmol Converts a torch_geometric.data.Data instance to a rdkit.Chem.Mol instance.
from_smiles Converts a SMILES string to a torch_geometric.data.Data instance.
to_smiles Converts a torch_geometric.data.Data instance to a SMILES string.
erdos_renyi_graph Returns the edge_index of a random Erdos-Renyi graph.
stochastic_blockmodel_graph Returns the edge_index of a stochastic blockmodel graph.
barabasi_albert_graph Returns the edge_index of a Barabasi-Albert preferential attachment model, where a graph of num_nodes nodes grows by attaching new nodes with num_edges edges that are preferentially attached to existing nodes with high degree.
negative_sampling Samples random negative edges of a graph given by edge_index.
batched_negative_sampling Samples random negative edges of multiple graphs given by edge_index and batch.
structured_negative_sampling Samples a negative edge (i,k) for every positive edge (i,j) in the graph given by edge_index, and returns it as a tuple of the form (i,j,k).
shuffle_node Randomly shuffles the feature matrix x along the first dimension.
mask_feature Randomly masks features from the feature matrix x with probability p using samples from a Bernoulli distribution.
add_random_edge Randomly adds edges to edge_index.
tree_decomposition The tree decomposition algorithm of molecules from the "Junction Tree Variational Autoencoder for Molecular Graph Generation" paper.
get_embeddings Returns the output embeddings of all MessagePassing layers in model.
get_embeddings_hetero Returns the output embeddings of all MessagePassing layers in a heterogeneous model, organized by edge type.
trim_to_layer Trims the edge_index representation, node features x and edge features edge_attr to a minimal-sized representation for the current GNN layer layer in directed NeighborLoader scenarios.
get_ppr Calculates the personalized PageRank (PPR) vector for all or a subset of nodes using a variant of the Andersen algorithm.
train_test_split_edges Splits the edges of a torch_geometric.data.Data object into positive and negative train/val/test edges.

Utility package.

scatter(src: Tensor, index: Tensor, dim: int = 0, dim_size: Optional[int] = None, reduce: str = 'sum') → Tensor[source]

Reduces all values from the src tensor at the indices specified in the index tensor along a given dimension dim. See the documentation of the torch_scatter package for more information.

Parameters:

Return type:

Tensor
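
A minimal usage sketch (the outputs in the comments are our own expectations, not taken from the official docs; reduce also accepts e.g. 'mean', 'min', 'max' and 'mul'):

from torch_geometric.utils import scatter

src = torch.tensor([1., 2., 3., 4.])
index = torch.tensor([0, 0, 1, 1])
scatter(src, index, dim=0, reduce='sum')  # tensor([3., 7.])
scatter(src, index, dim=0, reduce='max')  # tensor([2., 4.])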

group_argsort(src: Tensor, index: Tensor, dim: int = 0, num_groups: Optional[int] = None, descending: bool = False, return_consecutive: bool = False, stable: bool = False) → Tensor[source]

Returns the indices that sort the tensor src along a given dimension in ascending order by value. In contrast to torch.argsort(), sorting is performed in groups according to the values in index.

Parameters:

Example

src = torch.tensor([0, 1, 5, 4, 3, 2, 6, 7, 8]) index = torch.tensor([0, 0, 1, 1, 1, 1, 2, 2, 2]) group_argsort(src, index) tensor([0, 1, 3, 2, 1, 0, 0, 1, 2])

Return type:

Tensor

group_cat(tensors: Union[List[Tensor], Tuple[Tensor, ...]], indices: Union[List[Tensor], Tuple[Tensor, ...]], dim: int = 0, return_index: bool = False) → Union[Tensor, Tuple[Tensor, Tensor]][source]

Concatenates the given sequence of tensors tensors in the given dimension dim. Different from torch.cat(), values along the concatenating dimension are grouped according to the indices defined in the index tensors. All tensors must have the same shape (except in the concatenating dimension).

Parameters:

Example

x1 = torch.tensor([[0.2716, 0.4233], ... [0.3166, 0.0142], ... [0.2371, 0.3839], ... [0.4100, 0.0012]]) x2 = torch.tensor([[0.3752, 0.5782], ... [0.7757, 0.5999]]) index1 = torch.tensor([0, 0, 1, 2]) index2 = torch.tensor([0, 2]) group_cat([x1, x2], [index1, index2], dim=0) tensor([[0.2716, 0.4233], [0.3166, 0.0142], [0.3752, 0.5782], [0.2371, 0.3839], [0.4100, 0.0012], [0.7757, 0.5999]])

Return type:

Union[Tensor, Tuple[Tensor, Tensor]]

segment(src: Tensor, ptr: Tensor, reduce: str = 'sum') → Tensor[source]

Reduces all values in the first dimension of the src tensor within the ranges specified in the ptr. See the documentation of the torch_scatter package for more information.

Parameters:

Return type:

Tensor
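
A minimal sketch: a ptr of [0, 2, 5] reduces src[0:2] and src[2:5] separately (the outputs in the comments are our own):

from torch_geometric.utils import segment

src = torch.tensor([1., 2., 3., 4., 5.])
ptr = torch.tensor([0, 2, 5])
segment(src, ptr, reduce='sum')  # tensor([3., 12.])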

index_sort(inputs: Tensor, max_value: Optional[int] = None, stable: bool = False) → Tuple[Tensor, Tensor][source]

Sorts the elements of the inputs tensor in ascending order. It is expected that inputs is one-dimensional and that it only contains positive integer values. If max_value is given, it can be used by the underlying algorithm for better performance.

Parameters:

Return type:

Tuple[Tensor, Tensor]
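
A minimal sketch; the returned tuple holds the sorted values and the permutation (outputs in the comment are our own):

from torch_geometric.utils import index_sort

inputs = torch.tensor([3, 1, 2])
index_sort(inputs, max_value=4)  # (tensor([1, 2, 3]), tensor([1, 2, 0]))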

cumsum(x: Tensor, dim: int = 0) → Tensor[source]

Returns the cumulative sum of elements of x. In contrast to torch.cumsum(), prepends the output with zero.

Parameters:

Example

x = torch.tensor([2, 4, 1]) cumsum(x) tensor([0, 2, 6, 7])

Return type:

Tensor

degree(index: Tensor, num_nodes: Optional[int] = None, dtype: Optional[dtype] = None) → Tensor[source]

Computes the (unweighted) degree of a given one-dimensional index tensor.

Parameters:

Return type:

Tensor

Example

row = torch.tensor([0, 1, 0, 2, 0]) degree(row, dtype=torch.long) tensor([3, 1, 1])

softmax(src: Tensor, index: Optional[Tensor] = None, ptr: Optional[Tensor] = None, num_nodes: Optional[int] = None, dim: int = 0) → Tensor[source]

Computes a sparsely evaluated softmax. Given a value tensor src, this function first groups the values along the first dimension based on the indices specified in index, and then proceeds to compute the softmax individually for each group.

Parameters:

Return type:

Tensor

Examples

src = torch.tensor([1., 1., 1., 1.]) index = torch.tensor([0, 0, 1, 2]) ptr = torch.tensor([0, 2, 3, 4]) softmax(src, index) tensor([0.5000, 0.5000, 1.0000, 1.0000])

softmax(src, None, ptr) tensor([0.5000, 0.5000, 1.0000, 1.0000])

src = torch.randn(4, 4) ptr = torch.tensor([0, 4]) softmax(src, index, dim=-1) tensor([[0.7404, 0.2596, 1.0000, 1.0000], [0.1702, 0.8298, 1.0000, 1.0000], [0.7607, 0.2393, 1.0000, 1.0000], [0.8062, 0.1938, 1.0000, 1.0000]])

lexsort(keys: List[Tensor], dim: int = -1, descending: bool = False) → Tensor[source]

Performs an indirect stable sort using a sequence of keys.

Given multiple sorting keys, returns an array of integer indices that describe their sort order. The last key in the sequence is used for the primary sort order, the second-to-last key for the secondary sort order, and so on.

Parameters:

Return type:

Tensor
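
A minimal sketch mirroring numpy.lexsort() semantics, where the last key is the primary sort key (the output in the comment is our own):

from torch_geometric.utils import lexsort

key_secondary = torch.tensor([1, 0, 1, 0])
key_primary = torch.tensor([0, 0, 1, 1])
lexsort([key_secondary, key_primary])  # tensor([1, 0, 3, 2])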

sort_edge_index(edge_index: Tensor, edge_attr: Union[Tensor, None, List[Tensor], str] = '???', num_nodes: Optional[int] = None, sort_by_row: bool = True) → Union[Tensor, Tuple[Tensor, Optional[Tensor]], Tuple[Tensor, List[Tensor]]][source]

Row-wise sorts edge_index.

Parameters:

Return type:

LongTensor if edge_attr is not passed, else (LongTensor, Optional[Tensor] or List[Tensor]])

Warning

From PyG >= 2.3.0 onwards, this function will always return a tuple whenever edge_attr is passed as an argument (even if it is set to None).

Examples

edge_index = torch.tensor([[2, 1, 1, 0], [1, 2, 0, 1]]) edge_attr = torch.tensor([[1], [2], [3], [4]]) sort_edge_index(edge_index) tensor([[0, 1, 1, 2], [1, 0, 2, 1]])

sort_edge_index(edge_index, edge_attr) (tensor([[0, 1, 1, 2], [1, 0, 2, 1]]), tensor([[4], [3], [2], [1]]))

coalesce(edge_index: Tensor, edge_attr: Union[Tensor, None, List[Tensor], str] = '???', num_nodes: Optional[int] = None, reduce: str = 'sum', is_sorted: bool = False, sort_by_row: bool = True) → Union[Tensor, Tuple[Tensor, Optional[Tensor]], Tuple[Tensor, List[Tensor]]][source]

Row-wise sorts edge_index and removes its duplicated entries. Duplicate entries in edge_attr are merged by scattering them together according to the given reduce option.

Parameters:

Return type:

LongTensor if edge_attr is not passed, else (LongTensor, Optional[Tensor] or List[Tensor]])

Warning

From PyG >= 2.3.0 onwards, this function will always return a tuple whenever edge_attr is passed as an argument (even if it is set to None).

Example

edge_index = torch.tensor([[1, 1, 2, 3], ... [3, 3, 1, 2]]) edge_attr = torch.tensor([1., 1., 1., 1.]) coalesce(edge_index) tensor([[1, 2, 3], [3, 1, 2]])

Sort edge_index column-wise

coalesce(edge_index, sort_by_row=False) tensor([[2, 3, 1], [1, 2, 3]])

coalesce(edge_index, edge_attr) (tensor([[1, 2, 3], [3, 1, 2]]), tensor([2., 1., 1.]))

Use 'mean' operation to merge edge features

coalesce(edge_index, edge_attr, reduce='mean') (tensor([[1, 2, 3], [3, 1, 2]]), tensor([1., 1., 1.]))

is_undirected(edge_index: Tensor, edge_attr: Union[Tensor, None, List[Tensor]] = None, num_nodes: Optional[int] = None) → bool[source]

Returns True if the graph given by edge_index is undirected.

Parameters:

Return type:

bool

Examples

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) weight = torch.tensor([0, 0, 1]) is_undirected(edge_index, weight) True

weight = torch.tensor([0, 1, 1]) is_undirected(edge_index, weight) False

to_undirected(edge_index: Tensor, edge_attr: Union[Tensor, None, List[Tensor], str] = '???', num_nodes: Optional[int] = None, reduce: str = 'add') → Union[Tensor, Tuple[Tensor, Optional[Tensor]], Tuple[Tensor, List[Tensor]]][source]

Converts the graph given by edge_index to an undirected graph such that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).

Parameters:

Return type:

LongTensor if edge_attr is not passed, else (LongTensor, Optional[Tensor] or List[Tensor]])

Warning

From PyG >= 2.3.0 onwards, this function will always return a tuple whenever edge_attr is passed as an argument (even if it is set to None).

Examples

edge_index = torch.tensor([[0, 1, 1], ... [1, 0, 2]]) to_undirected(edge_index) tensor([[0, 1, 1, 2], [1, 0, 2, 1]])

edge_index = torch.tensor([[0, 1, 1], ... [1, 0, 2]]) edge_weight = torch.tensor([1., 1., 1.]) to_undirected(edge_index, edge_weight) (tensor([[0, 1, 1, 2], [1, 0, 2, 1]]), tensor([2., 2., 1., 1.]))

Use 'mean' operation to merge edge features

to_undirected(edge_index, edge_weight, reduce='mean') (tensor([[0, 1, 1, 2], [1, 0, 2, 1]]), tensor([1., 1., 1., 1.]))

contains_self_loops(edge_index: Tensor) → bool[source]

Returns True if the graph given by edge_index contains self-loops.

Parameters:

edge_index (LongTensor) – The edge indices.

Return type:

bool

Examples

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) contains_self_loops(edge_index) True

edge_index = torch.tensor([[0, 1, 1], ... [1, 0, 2]]) contains_self_loops(edge_index) False

remove_self_loops(edge_index: Tensor, edge_attr: Optional[Tensor] = None) → Tuple[Tensor, Optional[Tensor]][source]

Removes every self-loop in the graph given by edge_index, so that \((i,i) \not\in \mathcal{E}\) for every \(i \in \mathcal{V}\).

Parameters:

Return type:

(LongTensor, Tensor)

Example

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) edge_attr = [[1, 2], [3, 4], [5, 6]] edge_attr = torch.tensor(edge_attr) remove_self_loops(edge_index, edge_attr) (tensor([[0, 1], [1, 0]]), tensor([[1, 2], [3, 4]]))

segregate_self_loops(edge_index: Tensor, edge_attr: Optional[Tensor] = None) → Tuple[Tensor, Optional[Tensor], Tensor, Optional[Tensor]][source]

Segregates self-loops from the graph.

Parameters:

Return type:

(LongTensor, Tensor, LongTensor, Tensor)

Example

edge_index = torch.tensor([[0, 0, 1], ... [0, 1, 0]]) (edge_index, edge_attr, ... loop_edge_index, ... loop_edge_attr) = segregate_self_loops(edge_index) loop_edge_index tensor([[0], [0]])

add_self_loops(edge_index: Tensor, edge_attr: Optional[Tensor] = None, fill_value: Optional[Union[float, Tensor, str]] = None, num_nodes: Optional[Union[int, Tuple[int, int]]] = None) → Tuple[Tensor, Optional[Tensor]][source]

Adds a self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index. In case the graph is weighted or has multi-dimensional edge features (edge_attr != None), edge features of self-loops will be added according to fill_value.

Parameters:

Return type:

(LongTensor, Tensor)

Examples

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) edge_weight = torch.tensor([0.5, 0.5, 0.5]) add_self_loops(edge_index) (tensor([[0, 1, 0, 0, 1], [1, 0, 0, 0, 1]]), None)

add_self_loops(edge_index, edge_weight) (tensor([[0, 1, 0, 0, 1], [1, 0, 0, 0, 1]]), tensor([0.5000, 0.5000, 0.5000, 1.0000, 1.0000]))

edge features of self-loops are filled by constant 2.0

add_self_loops(edge_index, edge_weight, ... fill_value=2.) (tensor([[0, 1, 0, 0, 1], [1, 0, 0, 0, 1]]), tensor([0.5000, 0.5000, 0.5000, 2.0000, 2.0000]))

Use 'add' operation to merge edge features for self-loops

add_self_loops(edge_index, edge_weight, ... fill_value='add') (tensor([[0, 1, 0, 0, 1], [1, 0, 0, 0, 1]]), tensor([0.5000, 0.5000, 0.5000, 1.0000, 0.5000]))

add_remaining_self_loops(edge_index: Tensor, edge_attr: Optional[Tensor] = None, fill_value: Optional[Union[float, Tensor, str]] = None, num_nodes: Optional[int] = None) → Tuple[Tensor, Optional[Tensor]][source]

Adds remaining self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index. In case the graph is weighted or has multi-dimensional edge features (edge_attr != None), edge features of non-existing self-loops will be added according to fill_value.

Parameters:

Return type:

(LongTensor, Tensor)

Example

edge_index = torch.tensor([[0, 1], ... [1, 0]]) edge_weight = torch.tensor([0.5, 0.5]) add_remaining_self_loops(edge_index, edge_weight) (tensor([[0, 1, 0, 1], [1, 0, 0, 1]]), tensor([0.5000, 0.5000, 1.0000, 1.0000]))

get_self_loop_attr(edge_index: Tensor, edge_attr: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tensor[source]

Returns the edge features or weights of self-loops \((i, i)\) of every node \(i \in \mathcal{V}\) in the graph given by edge_index. Edge features of missing self-loops not present in edge_index will be filled with zeros. If edge_attr is not given, it will be the vector of ones.

Note

This operation is analogous to getting the diagonal elements of the dense adjacency matrix.

Parameters:

Return type:

Tensor

Examples

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) edge_weight = torch.tensor([0.2, 0.3, 0.5]) get_self_loop_attr(edge_index, edge_weight) tensor([0.5000, 0.0000])

get_self_loop_attr(edge_index, edge_weight, num_nodes=4) tensor([0.5000, 0.0000, 0.0000, 0.0000])

contains_isolated_nodes(edge_index: Tensor, num_nodes: Optional[int] = None) → bool[source]

Returns True if the graph given by edge_index contains isolated nodes.

Parameters:

Return type:

bool

Examples

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) contains_isolated_nodes(edge_index) False

contains_isolated_nodes(edge_index, num_nodes=3) True

remove_isolated_nodes(edge_index: Tensor, edge_attr: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tuple[Tensor, Optional[Tensor], Tensor][source]

Removes the isolated nodes from the graph given by edge_index with optional edge attributes edge_attr. In addition, returns a mask of shape [num_nodes] to manually filter out isolated node features later on. Self-loops are preserved for non-isolated nodes.

Parameters:

Return type:

(LongTensor, Tensor, BoolTensor)

Examples

edge_index = torch.tensor([[0, 1, 0], ... [1, 0, 0]]) edge_index, edge_attr, mask = remove_isolated_nodes(edge_index) mask # node mask (2 nodes) tensor([True, True])

edge_index, edge_attr, mask = remove_isolated_nodes(edge_index, ... num_nodes=3) mask # node mask (3 nodes) tensor([True, True, False])

get_num_hops(model: Module) → int[source]

Returns the number of hops the model is aggregating information from.

Note

This function counts the number of message passing layers as an approximation of the total number of hops covered by the model. Its output may not necessarily be correct in case message passing layers perform multi-hop aggregation, e.g., as in ChebConv.

Example

class GNN(torch.nn.Module): ... def __init__(self): ... super().__init__() ... self.conv1 = GCNConv(3, 16) ... self.conv2 = GCNConv(16, 16) ... self.lin = Linear(16, 2) ... ... def forward(self, x, edge_index): ... x = self.conv1(x, edge_index).relu() ... x = self.conv2(x, edge_index).relu() ... return self.lin(x) get_num_hops(GNN()) 2

Return type:

int

subgraph(subset: Union[Tensor, List[int]], edge_index: Tensor, edge_attr: Optional[Tensor] = None, relabel_nodes: bool = False, num_nodes: Optional[int] = None, *, return_edge_mask: bool = False) → Union[Tuple[Tensor, Optional[Tensor]], Tuple[Tensor, Optional[Tensor], Tensor]][source]

Returns the induced subgraph of (edge_index, edge_attr) containing the nodes in subset.

Parameters:

Return type:

(LongTensor, Tensor)

Examples

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6], ... [1, 0, 2, 1, 3, 2, 4, 3, 5, 4, 6, 5]]) edge_attr = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]) subset = torch.tensor([3, 4, 5]) subgraph(subset, edge_index, edge_attr) (tensor([[3, 4, 4, 5], [4, 3, 5, 4]]), tensor([ 7., 8., 9., 10.]))

subgraph(subset, edge_index, edge_attr, return_edge_mask=True) (tensor([[3, 4, 4, 5], [4, 3, 5, 4]]), tensor([ 7., 8., 9., 10.]), tensor([False, False, False, False, False, False, True, True, True, True, False, False]))

bipartite_subgraph(subset: Union[Tuple[Tensor, Tensor], Tuple[List[int], List[int]]], edge_index: Tensor, edge_attr: Optional[Tensor] = None, relabel_nodes: bool = False, size: Optional[Tuple[int, int]] = None, return_edge_mask: bool = False) → Union[Tuple[Tensor, Optional[Tensor]], Tuple[Tensor, Optional[Tensor], Optional[Tensor]]][source]

Returns the induced subgraph of the bipartite graph (edge_index, edge_attr) containing the nodes in subset.

Parameters:

Return type:

(LongTensor, Tensor)

Examples

edge_index = torch.tensor([[0, 5, 2, 3, 3, 4, 4, 3, 5, 5, 6], ... [0, 0, 3, 2, 0, 0, 2, 1, 2, 3, 1]]) edge_attr = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) subset = (torch.tensor([2, 3, 5]), torch.tensor([2, 3])) bipartite_subgraph(subset, edge_index, edge_attr) (tensor([[2, 3, 5, 5], [3, 2, 2, 3]]), tensor([ 3, 4, 9, 10]))

bipartite_subgraph(subset, edge_index, edge_attr, ... return_edge_mask=True) (tensor([[2, 3, 5, 5], [3, 2, 2, 3]]), tensor([ 3, 4, 9, 10]), tensor([False, False, True, True, False, False, False, False, True, True, False]))

k_hop_subgraph(node_idx: Union[int, List[int], Tensor], num_hops: int, edge_index: Tensor, relabel_nodes: bool = False, num_nodes: Optional[int] = None, flow: str = 'source_to_target', directed: bool = False) → Tuple[Tensor, Tensor, Tensor, Tensor][source]

Computes the induced subgraph of edge_index around all nodes in node_idx reachable within \(k\) hops.

The flow argument denotes the direction of edges for finding \(k\)-hop neighbors. If set to "source_to_target", then the method will find all neighbors that point to the initial set of seed nodes in node_idx. This mimics the natural flow of message passing in Graph Neural Networks.

The method returns (1) the nodes involved in the subgraph, (2) the filtered edge_index connectivity, (3) the mapping from node indices in node_idx to their new location, and (4) the edge mask indicating which edges were preserved.

Parameters:

Return type:

(LongTensor, LongTensor, LongTensor,BoolTensor)

Examples

edge_index = torch.tensor([[0, 1, 2, 3, 4, 5], ... [2, 2, 4, 4, 6, 6]])

Center node 6, 2-hops

subset, edge_index, mapping, edge_mask = k_hop_subgraph( ... 6, 2, edge_index, relabel_nodes=True) subset tensor([2, 3, 4, 5, 6]) edge_index tensor([[0, 1, 2, 3], [2, 2, 4, 4]]) mapping tensor([4]) edge_mask tensor([False, False, True, True, True, True]) subset[mapping] tensor([6])

edge_index = torch.tensor([[1, 2, 4, 5], ... [0, 1, 5, 6]]) (subset, edge_index, ... mapping, edge_mask) = k_hop_subgraph([0, 6], 2, ... edge_index, ... relabel_nodes=True) subset tensor([0, 1, 2, 4, 5, 6]) edge_index tensor([[1, 2, 3, 4], [0, 1, 4, 5]]) mapping tensor([0, 5]) edge_mask tensor([True, True, True, True]) subset[mapping] tensor([0, 6])

dropout_node(edge_index: Tensor, p: float = 0.5, num_nodes: Optional[int] = None, training: bool = True, relabel_nodes: bool = False) → Tuple[Tensor, Tensor, Tensor][source]

Randomly drops nodes from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.

The method returns (1) the retained edge_index, (2) the edge mask indicating which edges were retained, and (3) the node mask indicating which nodes were retained.

Parameters:

Return type:

(LongTensor, BoolTensor, BoolTensor)

Examples

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) edge_index, edge_mask, node_mask = dropout_node(edge_index) edge_index tensor([[0, 1], [1, 0]]) edge_mask tensor([ True, True, False, False, False, False]) node_mask tensor([ True, True, False, False])

dropout_edge(edge_index: Tensor, p: float = 0.5, force_undirected: bool = False, training: bool = True) → Tuple[Tensor, Tensor][source]

Randomly drops edges from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.

The method returns (1) the retained edge_index, (2) the edge mask or index indicating which edges were retained, depending on the argument force_undirected.

Parameters:

Return type:

(LongTensor, BoolTensor or LongTensor)

Examples

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) edge_index, edge_mask = dropout_edge(edge_index) edge_index tensor([[0, 1, 2, 2], [1, 2, 1, 3]]) edge_mask # masks indicating which edges are retained tensor([ True, False, True, True, True, False])

edge_index, edge_id = dropout_edge(edge_index, ... force_undirected=True) edge_index tensor([[0, 1, 2, 1, 2, 3], [1, 2, 3, 0, 1, 2]]) edge_id # indices indicating which edges are retained tensor([0, 2, 4, 0, 2, 4])

dropout_path(edge_index: Tensor, p: float = 0.2, walks_per_node: int = 1, walk_length: int = 3, num_nodes: Optional[int] = None, is_sorted: bool = False, training: bool = True) → Tuple[Tensor, Tensor][source]

Drops edges from the adjacency matrix edge_index based on random walks. The source nodes to start random walks from are sampled from edge_index with probability p, following a Bernoulli distribution.

The method returns (1) the retained edge_index, (2) the edge mask indicating which edges were retained.

Parameters:

Return type:

(LongTensor, BoolTensor)

Example

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) edge_index, edge_mask = dropout_path(edge_index) edge_index tensor([[1, 2], [2, 3]]) edge_mask # masks indicating which edges are retained tensor([False, False, True, False, True, False])

dropout_adj(edge_index: Tensor, edge_attr: Optional[Tensor] = None, p: float = 0.5, force_undirected: bool = False, num_nodes: Optional[int] = None, training: bool = True) → Tuple[Tensor, Optional[Tensor]][source]

Randomly drops edges from the adjacency matrix (edge_index, edge_attr) with probability p using samples from a Bernoulli distribution.

Warning

dropout_adj is deprecated and will be removed in a future release. Use torch_geometric.utils.dropout_edge instead.

Parameters:

Examples

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) edge_attr = torch.tensor([1, 2, 3, 4, 5, 6]) dropout_adj(edge_index, edge_attr) (tensor([[0, 1, 2, 3], [1, 2, 3, 2]]), tensor([1, 3, 5, 6]))

The returned graph is kept undirected

dropout_adj(edge_index, edge_attr, force_undirected=True) (tensor([[0, 1, 2, 1, 2, 3], [1, 2, 3, 0, 1, 2]]), tensor([1, 3, 5, 1, 3, 5]))

Return type:

Tuple[Tensor, Optional[Tensor]]

homophily(edge_index: Union[Tensor, SparseTensor], y: Tensor, batch: Optional[Tensor] = None, method: str = 'edge') → Union[float, Tensor][source]

The homophily of a graph characterizes how likely nodes with the same label are near each other in a graph.

There are many measures of homophily that fit this definition. In particular:

Parameters:

Examples

edge_index = torch.tensor([[0, 1, 2, 3], ... [1, 2, 0, 4]]) y = torch.tensor([0, 0, 0, 0, 1])

Edge homophily ratio

homophily(edge_index, y, method='edge') 0.75

Node homophily ratio

homophily(edge_index, y, method='node') 0.6000000238418579

Class insensitive edge homophily ratio

homophily(edge_index, y, method='edge_insensitive') 0.19999998807907104

Return type:

Union[float, Tensor]

assortativity(edge_index: Union[Tensor, SparseTensor]) → float[source]

The degree assortativity coefficient from the "Mixing patterns in networks" paper. Assortativity in a network refers to the tendency of nodes to connect with other similar nodes over dissimilar nodes. It is computed from the Pearson correlation coefficient of the node degrees.

Parameters:

edge_index (Tensor or SparseTensor) – The graph connectivity.

Returns:

float – The value of the degree assortativity coefficient for the input graph \(\in [-1, 1]\)

Example

edge_index = torch.tensor([[0, 1, 2, 3, 2], ... [1, 2, 0, 1, 3]]) assortativity(edge_index) -0.666667640209198

normalize_edge_index(edge_index: Tensor, num_nodes: Optional[int] = None, add_self_loops: bool = True, symmetric: bool = True) → Tuple[Tensor, Tensor][source]

Applies normalization to the edges of a graph.

This function can add self-loops to the graph and apply either symmetric or asymmetric normalization based on the node degrees.

Parameters:

Return type:

Tuple[Tensor, Tensor]
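
A minimal sketch of the call; we assume symmetric=True corresponds to GCN-style \(D^{-1/2} A D^{-1/2}\) normalization based on the description above:

from torch_geometric.utils import normalize_edge_index

edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
edge_index, edge_weight = normalize_edge_index(
    edge_index, num_nodes=3, add_self_loops=True, symmetric=True)
# edge_weight holds one degree-normalized weight per (self-loop-augmented) edge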

get_laplacian(edge_index: Tensor, edge_weight: Optional[Tensor] = None, normalization: Optional[str] = None, dtype: Optional[dtype] = None, num_nodes: Optional[int] = None) → Tuple[Tensor, Tensor][source]

Computes the graph Laplacian of the graph given by edge_index and optional edge_weight.

Parameters:

Examples

edge_index = torch.tensor([[0, 1, 1, 2], ... [1, 0, 2, 1]]) edge_weight = torch.tensor([1., 2., 2., 4.])

No normalization

lap = get_laplacian(edge_index, edge_weight)

Symmetric normalization

lap_sym = get_laplacian(edge_index, edge_weight, normalization='sym')

Random-walk normalization

lap_rw = get_laplacian(edge_index, edge_weight, normalization='rw')

Return type:

Tuple[Tensor, Tensor]

get_mesh_laplacian(pos: Tensor, face: Tensor, normalization: Optional[str] = None) → Tuple[Tensor, Tensor][source]

Computes the mesh Laplacian of a mesh given by pos and face.

Computation is based on the cotangent matrix defined as

\[\begin{split}\mathbf{C}_{ij} = \begin{cases} \frac{\cot \angle_{ikj} + \cot \angle_{ilj}}{2} & \text{if } i, j \text{ is an edge} \\ -\sum_{j \in N(i)}{C_{ij}} & \text{if } i \text{ is in the diagonal} \\ 0 & \text{otherwise} \end{cases}\end{split}\]

Normalization depends on the mass matrix defined as

\[\begin{split}\mathbf{M}_{ij} = \begin{cases} a(i) & \text{if } i \text{ is in the diagonal} \\ 0 & \text{otherwise} \end{cases}\end{split}\]

where \(a(i)\) is obtained by joining the barycenters of the triangles around vertex \(i\).

Parameters:

Return type:

Tuple[Tensor, Tensor]
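
A minimal sketch for a single triangle, assuming face has shape [3, num_faces] as in geodesic_distance() below and that normalization accepts None, 'sym', or 'rw' as in get_laplacian():

from torch_geometric.utils import get_mesh_laplacian

pos = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
face = torch.tensor([[0], [1], [2]])  # one triangular face
lap_index, lap_weight = get_mesh_laplacian(pos, face, normalization='sym')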

mask_select(src: Tensor, dim: int, mask: Tensor) → Tensor[source]

Returns a new tensor which masks the src tensor along the dimension dim according to the boolean mask mask.

Parameters:

Return type:

Tensor
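
A minimal sketch (the output in the comment is our own):

from torch_geometric.utils import mask_select

src = torch.tensor([[0, 1], [2, 3], [4, 5]])
mask = torch.tensor([True, False, True])
mask_select(src, dim=0, mask=mask)  # tensor([[0, 1], [4, 5]])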

index_to_mask(index: Tensor, size: Optional[int] = None) → Tensor[source]

Converts indices to a mask representation.

Parameters:

Example

index = torch.tensor([1, 3, 5]) index_to_mask(index) tensor([False, True, False, True, False, True])

index_to_mask(index, size=7) tensor([False, True, False, True, False, True, False])

Return type:

Tensor

mask_to_index(mask: Tensor) → Tensor[source]

Converts a mask to an index representation.

Parameters:

mask (Tensor) – The mask.

Example

mask = torch.tensor([False, True, False]) mask_to_index(mask) tensor([1])

Return type:

Tensor

select(src: Union[Tensor, List[Any], TensorFrame], index_or_mask: Tensor, dim: int) → Union[Tensor, List[Any]][source]

Selects the input tensor or input list according to a given index or mask vector.

Parameters:

Return type:

Union[Tensor, List[Any]]

narrow(src: Union[Tensor, List[Any]], dim: int, start: int, length: int) → Union[Tensor, List[Any]][source]

Narrows the input tensor or input list to the specified range.

Parameters:

Return type:

Union[Tensor, List[Any]]
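
A minimal sketch of both helpers on a plain Python list (the outputs in the comments are our own; both also operate on tensors):

from torch_geometric.utils import narrow, select

src = [10, 20, 30, 40]
narrow(src, dim=0, start=1, length=2)     # [20, 30]
select(src, torch.tensor([0, 3]), dim=0)  # [10, 40]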

to_dense_batch(x: Tensor, batch: Optional[Tensor] = None, fill_value: float = 0.0, max_num_nodes: Optional[int] = None, batch_size: Optional[int] = None) → Tuple[Tensor, Tensor][source]

Given a sparse batch of node features \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times F}\) (with \(N_i\) indicating the number of nodes in graph \(i\)), creates a dense node feature tensor \(\mathbf{X} \in \mathbb{R}^{B \times N_{\max} \times F}\) (with \(N_{\max} = \max_i^B N_i\)). In addition, a mask of shape \(\mathbf{M} \in \{ 0, 1 \}^{B \times N_{\max}}\) is returned, holding information about the existence of fake-nodes in the dense representation.

Parameters:

Return type:

(Tensor, BoolTensor)

Examples

x = torch.arange(12).view(6, 2) x tensor([[ 0, 1], [ 2, 3], [ 4, 5], [ 6, 7], [ 8, 9], [10, 11]])

out, mask = to_dense_batch(x) mask tensor([[True, True, True, True, True, True]])

batch = torch.tensor([0, 0, 1, 2, 2, 2]) out, mask = to_dense_batch(x, batch) out tensor([[[ 0, 1], [ 2, 3], [ 0, 0]], [[ 4, 5], [ 0, 0], [ 0, 0]], [[ 6, 7], [ 8, 9], [10, 11]]]) mask tensor([[ True, True, False], [ True, False, False], [ True, True, True]])

out, mask = to_dense_batch(x, batch, max_num_nodes=4) out tensor([[[ 0, 1], [ 2, 3], [ 0, 0], [ 0, 0]], [[ 4, 5], [ 0, 0], [ 0, 0], [ 0, 0]], [[ 6, 7], [ 8, 9], [10, 11], [ 0, 0]]])

mask tensor([[ True, True, False, False], [ True, False, False, False], [ True, True, True, False]])

to_dense_adj(edge_index: Tensor, batch: Optional[Tensor] = None, edge_attr: Optional[Tensor] = None, max_num_nodes: Optional[int] = None, batch_size: Optional[int] = None) → Tensor[source]

Converts batched sparse adjacency matrices given by edge indices and edge attributes to a single dense batched adjacency matrix.

Parameters:

Return type:

Tensor

Examples

edge_index = torch.tensor([[0, 0, 1, 2, 3], ... [0, 1, 0, 3, 0]]) batch = torch.tensor([0, 0, 1, 1]) to_dense_adj(edge_index, batch) tensor([[[1., 1.], [1., 0.]], [[0., 1.], [1., 0.]]])

to_dense_adj(edge_index, batch, max_num_nodes=4) tensor([[[1., 1., 0., 0.], [1., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]], [[0., 1., 0., 0.], [1., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]])

edge_attr = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0]) to_dense_adj(edge_index, batch, edge_attr) tensor([[[1., 2.], [3., 0.]], [[0., 4.], [5., 0.]]])

to_nested_tensor(x: Tensor, batch: Optional[Tensor] = None, ptr: Optional[Tensor] = None, batch_size: Optional[int] = None) → Tensor[source]

Given a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\) (with \(N_i\) indicating the number of elements in example \(i\)), creates a nested PyTorch tensor. Reverse operation of from_nested_tensor().

Parameters:

Return type:

Tensor

from_nested_tensor(x: Tensor, return_batch: bool = False) → Union[Tensor, Tuple[Tensor, Tensor]][source]

Given a nested PyTorch tensor, creates a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\), and optionally a batch vector which assigns each element to a specific example. Reverse operation of to_nested_tensor().

Parameters:

Return type:

Union[Tensor, Tuple[Tensor, Tensor]]
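
A minimal round-trip sketch (the shapes in the comments are our own):

from torch_geometric.utils import to_nested_tensor, from_nested_tensor

x = torch.randn(6, 2)
batch = torch.tensor([0, 0, 1, 1, 1, 1])
nested = to_nested_tensor(x, batch=batch)  # holds a [2, 2] and a [4, 2] tensor
out, out_batch = from_nested_tensor(nested, return_batch=True)
# out: [6, 2], out_batch equals batch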

dense_to_sparse(adj: Tensor, mask: Optional[Tensor] = None) → Tuple[Tensor, Tensor][source]

Converts a dense adjacency matrix to a sparse adjacency matrix defined by edge indices and edge attributes.

Parameters:

Return type:

(LongTensor, Tensor)

Examples

For a single adjacency matrix:

adj = torch.tensor([[3, 1], ... [2, 0]]) dense_to_sparse(adj) (tensor([[0, 0, 1], [0, 1, 0]]), tensor([3, 1, 2]))

For two adjacency matrices:

adj = torch.tensor([[[3, 1], ... [2, 0]], ... [[0, 1], ... [0, 2]]]) dense_to_sparse(adj) (tensor([[0, 0, 1, 2, 3], [0, 1, 0, 3, 3]]), tensor([3, 1, 2, 1, 2]))

First graph with two nodes, second with three:

adj = torch.tensor([[ ... [3, 1, 0], ... [2, 0, 0], ... [0, 0, 0] ... ], [ ... [0, 1, 0], ... [0, 2, 3], ... [0, 5, 0] ... ]]) mask = torch.tensor([ ... [True, True, False], ... [True, True, True] ... ]) dense_to_sparse(adj, mask) (tensor([[0, 0, 1, 2, 3, 3, 4], [0, 1, 0, 3, 3, 4, 3]]), tensor([3, 1, 2, 1, 2, 3, 5]))

is_torch_sparse_tensor(src: Any) → bool[source]

Returns True if the input src is a torch.sparse.Tensor (in any sparse layout).

Parameters:

src (Any) – The input object to be checked.

Return type:

bool

is_sparse(src: Any) → bool[source]

Returns True if the input src is of type torch.sparse.Tensor (in any sparse layout) or of type torch_sparse.SparseTensor.

Parameters:

src (Any) – The input object to be checked.

Return type:

bool
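
A minimal sketch covering both checks (the outputs in the comments are our own):

from torch_geometric.utils import (is_sparse, is_torch_sparse_tensor,
                                   to_torch_coo_tensor)

edge_index = torch.tensor([[0, 1], [1, 0]])
adj = to_torch_coo_tensor(edge_index)
is_torch_sparse_tensor(adj)                # True
is_sparse(adj)                             # True
is_torch_sparse_tensor(torch.randn(2, 2))  # False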

to_torch_coo_tensor(edge_index: Tensor, edge_attr: Optional[Tensor] = None, size: Optional[Union[int, Tuple[Optional[int], Optional[int]]]] = None, is_coalesced: bool = False) → Tensor[source]

Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_coo. See to_edge_index() for the reverse operation.

Parameters:

Return type:

torch.sparse.Tensor

Example

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) to_torch_coo_tensor(edge_index) tensor(indices=tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]]), values=tensor([1., 1., 1., 1., 1., 1.]), size=(4, 4), nnz=6, layout=torch.sparse_coo)

to_torch_csr_tensor(edge_index: Tensor, edge_attr: Optional[Tensor] = None, size: Optional[Union[int, Tuple[Optional[int], Optional[int]]]] = None, is_coalesced: bool = False) → Tensor[source]

Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csr. See to_edge_index() for the reverse operation.

Parameters:

Return type:

torch.sparse.Tensor

Example

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) to_torch_csr_tensor(edge_index) tensor(crow_indices=tensor([0, 1, 3, 5, 6]), col_indices=tensor([1, 0, 2, 1, 3, 2]), values=tensor([1., 1., 1., 1., 1., 1.]), size=(4, 4), nnz=6, layout=torch.sparse_csr)

to_torch_csc_tensor(edge_index: Tensor, edge_attr: Optional[Tensor] = None, size: Optional[Union[int, Tuple[Optional[int], Optional[int]]]] = None, is_coalesced: bool = False) → Tensor[source]

Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csc. See to_edge_index() for the reverse operation.

Parameters:

Return type:

torch.sparse.Tensor

Example

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) to_torch_csc_tensor(edge_index) tensor(ccol_indices=tensor([0, 1, 3, 5, 6]), row_indices=tensor([1, 0, 2, 1, 3, 2]), values=tensor([1., 1., 1., 1., 1., 1.]), size=(4, 4), nnz=6, layout=torch.sparse_csc)

to_torch_sparse_tensor(edge_index: Tensor, edge_attr: Optional[Tensor] = None, size: Optional[Union[int, Tuple[Optional[int], Optional[int]]]] = None, is_coalesced: bool = False, layout: layout = torch.sparse_coo) → Tensor[source]

Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with custom layout. See to_edge_index() for the reverse operation.

Parameters:

Return type:

torch.sparse.Tensor
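
A minimal sketch selecting the CSR layout (the output in the comment is our own):

from torch_geometric.utils import to_torch_sparse_tensor

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
adj = to_torch_sparse_tensor(edge_index, layout=torch.sparse_csr)
adj.layout  # torch.sparse_csr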

to_edge_index(adj: Union[Tensor, SparseTensor]) → Tuple[Tensor, Tensor][source]

Converts a torch.sparse.Tensor or a torch_sparse.SparseTensor to edge indices and edge attributes.

Parameters:

adj (torch.sparse.Tensor or SparseTensor) – The adjacency matrix.

Return type:

(torch.Tensor, torch.Tensor)

Example

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2]]) adj = to_torch_coo_tensor(edge_index) to_edge_index(adj) (tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]]), tensor([1., 1., 1., 1., 1., 1.]))

spmm(src: Union[Tensor, SparseTensor], other: Tensor, reduce: str = 'sum') → Tensor[source]

Matrix product of sparse matrix with dense matrix.

Parameters:

Return type:

Tensor
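
A minimal sketch multiplying a sparse adjacency matrix with dense node features (the shapes in the comments are our own):

from torch_geometric.utils import spmm, to_torch_csr_tensor

edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
adj = to_torch_csr_tensor(edge_index, size=(3, 3))
x = torch.randn(3, 4)
out = spmm(adj, x, reduce='sum')  # shape [3, 4]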

unbatch(src: Tensor, batch: Tensor, dim: int = 0, batch_size: Optional[int] = None) → List[Tensor][source]

Splits src according to a batch vector along dimension dim.

Parameters:

Return type:

List[Tensor]

Example

src = torch.arange(7) batch = torch.tensor([0, 0, 0, 1, 1, 2, 2]) unbatch(src, batch) (tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))

unbatch_edge_index(edge_index: Tensor, batch: Tensor, batch_size: Optional[int] = None) → List[Tensor][source]

Splits the edge_index according to a batch vector.

Parameters:

Return type:

List[Tensor]

Example

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 4, 5, 5, 6], ... [1, 0, 2, 1, 3, 2, 5, 4, 6, 5]]) batch = torch.tensor([0, 0, 0, 0, 1, 1, 1]) unbatch_edge_index(edge_index, batch) (tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]]), tensor([[0, 1, 1, 2], [1, 0, 2, 1]]))

one_hot(index: Tensor, num_classes: Optional[int] = None, dtype: Optional[dtype] = None) → Tensor[source]

Takes a one-dimensional index tensor and returns a one-hot encoded representation of it with shape [*, num_classes] that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1.

Note

This is a more memory-efficient version of torch.nn.functional.one_hot() as you can customize the output dtype.

Parameters:

Return type:

Tensor
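
A minimal sketch (the output in the comment is our own):

from torch_geometric.utils import one_hot

index = torch.tensor([0, 2, 1])
one_hot(index, num_classes=3, dtype=torch.long)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])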

normalized_cut(edge_index: Tensor, edge_attr: Tensor, num_nodes: Optional[int] = None) → Tensor[source]

Computes the normalized cut \(\mathbf{e}_{i,j} \cdot \left( \frac{1}{\deg(i)} + \frac{1}{\deg(j)} \right)\) of a weighted graph given by edge indices and edge attributes.

Parameters:

Return type:

Tensor

Example

edge_index = torch.tensor([[1, 1, 2, 3], ... [3, 3, 1, 2]]) edge_attr = torch.tensor([1., 1., 1., 1.]) normalized_cut(edge_index, edge_attr) tensor([1.5000, 1.5000, 2.0000, 1.5000])

grid(height: int, width: int, dtype: Optional[dtype] = None, device: Optional[device] = None) → Tuple[Tensor, Tensor][source]

Returns the edge indices of a two-dimensional grid graph with height height and width width and its node positions.

Parameters:

Return type:

(LongTensor, Tensor)

Example

(row, col), pos = grid(height=2, width=2) row tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]) col tensor([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]) pos tensor([[0., 1.], [1., 1.], [0., 0.], [1., 0.]])

geodesic_distance(pos: Tensor, face: Tensor, src: Optional[Tensor] = None, dst: Optional[Tensor] = None, norm: bool = True, max_distance: Optional[float] = None, num_workers: int = 0, **kwargs: Optional[Tensor]) → Tensor[source]

Computes (normalized) geodesic distances of a mesh given by pos and face. If src and dst are given, this method only computes the geodesic distances for the respective source and target node-pairs.

Note

This function requires the gdist package. To install, run pip install cython && pip install gdist.

Parameters:

Return type:

Tensor

Example

pos = torch.tensor([[0.0, 0.0, 0.0], ... [2.0, 0.0, 0.0], ... [0.0, 2.0, 0.0], ... [2.0, 2.0, 0.0]]) face = torch.tensor([[0, 0], ... [1, 2], ... [3, 3]]) geodesic_distance(pos, face) [[0, 1, 1, 1.4142135623730951], [1, 0, 1.4142135623730951, 1], [1, 1.4142135623730951, 0, 1], [1.4142135623730951, 1, 1, 0]]

to_scipy_sparse_matrix(edge_index: Tensor, edge_attr: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Any[source]

Converts a graph given by edge indices and edge attributes to a scipy sparse matrix.

Parameters:

Examples

edge_index = torch.tensor([ ... [0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2], ... ]) to_scipy_sparse_matrix(edge_index) <4x4 sparse matrix of type '<class 'numpy.float32'>' with 6 stored elements in COOrdinate format>

Return type:

Any

from_scipy_sparse_matrix(A: Any) → Tuple[Tensor, Tensor][source]

Converts a scipy sparse matrix to edge indices and edge attributes.

Parameters:

A (scipy.sparse) – A sparse matrix.

Examples

edge_index = torch.tensor([ ... [0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2], ... ]) adj = to_scipy_sparse_matrix(edge_index)

edge_index and edge_weight are both returned

from_scipy_sparse_matrix(adj) (tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]]), tensor([1., 1., 1., 1., 1., 1.]))

Return type:

Tuple[Tensor, Tensor]

to_networkx(data: Union[Data, HeteroData], node_attrs: Optional[Iterable[str]] = None, edge_attrs: Optional[Iterable[str]] = None, graph_attrs: Optional[Iterable[str]] = None, to_undirected: Optional[Union[bool, str]] = False, to_multi: bool = False, remove_self_loops: bool = False) → Any[source]

Converts a torch_geometric.data.Data instance to a networkx.Graph if to_undirected is set to True, or a directed networkx.DiGraph otherwise.

Parameters:

Examples

edge_index = torch.tensor([ ... [0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2], ... ]) data = Data(edge_index=edge_index, num_nodes=4) to_networkx(data) <networkx.classes.digraph.DiGraph at 0x2713fdb40d0>

Return type:

Any

from_networkx(G: Any, group_node_attrs: Optional[Union[List[str], Literal['all']]] = None, group_edge_attrs: Optional[Union[List[str], Literal['all']]] = None) → Data[source]

Converts a networkx.Graph or networkx.DiGraph to a torch_geometric.data.Data instance.

Parameters:

Note

All group_node_attrs and group_edge_attrs values must be numeric.

Examples

edge_index = torch.tensor([ ... [0, 1, 1, 2, 2, 3], ... [1, 0, 2, 1, 3, 2], ... ]) data = Data(edge_index=edge_index, num_nodes=4) g = to_networkx(data)

A Data object is returned

from_networkx(g) Data(edge_index=[2, 6], num_nodes=4)

Return type:

Data

to_networkit(edge_index: Tensor, edge_weight: Optional[Tensor] = None, num_nodes: Optional[int] = None, directed: bool = True) → Any[source]

Converts a (edge_index, edge_weight) tuple to a networkit.Graph.

Parameters:

Return type:

Any

from_networkit(g: Any) → Tuple[Tensor, Optional[Tensor]][source]

Converts a networkit.Graph to a (edge_index, edge_weight) tuple. If the networkit.Graph is not weighted, the returned edge_weight will be None.

Parameters:

g (networkit.graph.Graph) – A networkit graph object.

Return type:

Tuple[Tensor, Optional[Tensor]]
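
A minimal round-trip sketch covering both converters (assuming networkit is installed; the comments reflect the behavior described above):

from torch_geometric.utils import to_networkit, from_networkit

edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
g = to_networkit(edge_index, num_nodes=3)     # directed graph by default
edge_index2, edge_weight = from_networkit(g)  # edge_weight is None (unweighted)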

to_trimesh(data: Data) → Any[source]

Converts a torch_geometric.data.Data instance to a trimesh.Trimesh.

Parameters:

data (torch_geometric.data.Data) – The data object.

Example

pos = torch.tensor([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], ... dtype=torch.float) face = torch.tensor([[0, 1, 2], [1, 2, 3]]).t()

data = Data(pos=pos, face=face) to_trimesh(data) <trimesh.Trimesh(vertices.shape=(4, 3), faces.shape=(2, 3))>

Return type:

Any

from_trimesh(mesh: Any) → Data[source]

Converts a trimesh.Trimesh to a torch_geometric.data.Data instance.

Parameters:

mesh (trimesh.Trimesh) – A trimesh mesh.

Example

pos = torch.tensor([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], ... dtype=torch.float) face = torch.tensor([[0, 1, 2], [1, 2, 3]]).t()

data = Data(pos=pos, face=face) mesh = to_trimesh(data) from_trimesh(mesh) Data(pos=[4, 3], face=[3, 2])

Return type:

Data

to_cugraph(edge_index: Tensor, edge_weight: Optional[Tensor] = None, relabel_nodes: bool = True, directed: bool = True) → Any[source]

Converts a graph given by edge_index and optional edge_weight into a cugraph graph object.

Parameters:

Return type:

Any

from_cugraph(g: Any) → Tuple[Tensor, Optional[Tensor]][source]

Converts a cugraph graph object into edge_index and optional edge_weight tensors.

Parameters:

g (cugraph.Graph) – A cugraph graph object.

Return type:

Tuple[Tensor, Optional[Tensor]]

to_dgl(data: Union[Data, HeteroData]) → Any[source]

Converts a torch_geometric.data.Data or torch_geometric.data.HeteroData instance to a dgl graph object.

Parameters:

data (torch_geometric.data.Data or torch_geometric.data.HeteroData) – The data object.

Example

edge_index = torch.tensor([[0, 1, 1, 2, 3, 0], [1, 0, 2, 1, 4, 4]]) x = torch.randn(5, 3) edge_attr = torch.randn(6, 2) data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr) g = to_dgl(data) g Graph(num_nodes=5, num_edges=6, ndata_schemes={'x': Scheme(shape=(3,))} edata_schemes={'edge_attr': Scheme(shape=(2,))})

data = HeteroData() data['paper'].x = torch.randn(5, 3) data['author'].x = torch.ones(5, 3) edge_index = torch.tensor([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]) data['author', 'cites', 'paper'].edge_index = edge_index g = to_dgl(data) g Graph(num_nodes={'author': 5, 'paper': 5}, num_edges={('author', 'cites', 'paper'): 5}, metagraph=[('author', 'paper', 'cites')])

Return type:

Any

from_dgl(g: Any) → Union[Data, HeteroData][source]

Converts a dgl graph object to a torch_geometric.data.Data or torch_geometric.data.HeteroData instance.

Parameters:

g (dgl.DGLGraph) – The dgl graph object.

Example

g = dgl.graph(([0, 0, 1, 5], [1, 2, 2, 0])) g.ndata['x'] = torch.randn(g.num_nodes(), 3) g.edata['edge_attr'] = torch.randn(g.num_edges(), 2) data = from_dgl(g) data Data(x=[6, 3], edge_attr=[4, 2], edge_index=[2, 4])

g = dgl.heterograph({ ... ('author', 'writes', 'paper'): ([0, 1, 1, 2, 3, 3, 4], ... [0, 0, 1, 1, 1, 2, 2])}) g.nodes['author'].data['x'] = torch.randn(5, 3) g.nodes['paper'].data['x'] = torch.randn(3, 3) data = from_dgl(g) data HeteroData( author={ x=[5, 3] }, paper={ x=[3, 3] }, (author, writes, paper)={ edge_index=[2, 7] } )

Return type:

Union[Data, HeteroData]

from_rdmol(mol: Any) → Data[source]

Converts a rdkit.Chem.Mol instance to a torch_geometric.data.Data instance.

Parameters:

mol (rdkit.Chem.Mol) – The rdkit molecule.

Return type:

Data

to_rdmol(data: Data, kekulize: bool = False) → Any[source]

Converts a torch_geometric.data.Data instance to a rdkit.Chem.Mol instance.

Parameters:

Return type:

Any

from_smiles(smiles: str, with_hydrogen: bool = False, kekulize: bool = False) → Data[source]

Converts a SMILES string to a torch_geometric.data.Data instance.

Parameters:

Return type:

Data

to_smiles(data: Data, kekulize: bool = False) → str[source]

Converts a torch_geometric.data.Data instance to a SMILES string.

Parameters:

Return type:

str
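
A minimal round-trip sketch for the two SMILES converters above (assuming rdkit is installed; the comments state our expectations rather than documented output):

from torch_geometric.utils import from_smiles, to_smiles

data = from_smiles('CC(=O)O')  # acetic acid: 4 heavy atoms, 3 bonds
# data.x holds atom features and data.edge_index holds bond connectivity,
# presumably with each bond stored in both directions.
smiles = to_smiles(data)       # a SMILES string equivalent to the input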

erdos_renyi_graph(num_nodes: int, edge_prob: float, directed: bool = False) → Tensor[source]

Returns the edge_index of a random Erdos-Renyi graph.

Parameters:

Examples

erdos_renyi_graph(5, 0.2, directed=False) tensor([[0, 1, 1, 4], [1, 0, 4, 1]])

erdos_renyi_graph(5, 0.2, directed=True) tensor([[0, 1, 3, 3, 4, 4], [4, 3, 1, 2, 1, 3]])

Return type:

Tensor

stochastic_blockmodel_graph(block_sizes: Union[List[int], Tensor], edge_probs: Union[List[List[float]], Tensor], directed: bool = False) → Tensor[source]

Returns the edge_index of a stochastic blockmodel graph.

Parameters:

Examples

block_sizes = [2, 2, 4] edge_probs = [[0.25, 0.05, 0.02], ... [0.05, 0.35, 0.07], ... [0.02, 0.07, 0.40]] stochastic_blockmodel_graph(block_sizes, edge_probs, ... directed=False) tensor([[2, 4, 4, 5, 5, 6, 7, 7], [5, 6, 7, 2, 7, 4, 4, 5]])

stochastic_blockmodel_graph(block_sizes, edge_probs, ... directed=True) tensor([[0, 2, 3, 4, 4, 5, 5], [3, 4, 1, 5, 6, 6, 7]])

Return type:

Tensor

barabasi_albert_graph(num_nodes: int, num_edges: int) → Tensor[source]

Returns the edge_index of a Barabasi-Albert preferential attachment model, where a graph of num_nodes nodes grows by attaching new nodes with num_edges edges that are preferentially attached to existing nodes with high degree.

Parameters:

Example

barabasi_albert_graph(num_nodes=4, num_edges=3) tensor([[0, 0, 0, 1, 1, 2, 2, 3], [1, 2, 3, 0, 2, 0, 1, 0]])

Return type:

Tensor

negative_sampling(edge_index: Tensor, num_nodes: Optional[Union[int, Tuple[int, int]]] = None, num_neg_samples: Optional[Union[int, float]] = None, method: str = 'sparse', force_undirected: bool = False) → Tensor[source]

Samples random negative edges of a graph given by edge_index.

Parameters:

Return type:

LongTensor

Examples

Standard usage

edge_index = torch.as_tensor([[0, 0, 1, 2], ... [0, 1, 2, 3]]) negative_sampling(edge_index) tensor([[3, 0, 0, 3], [2, 3, 2, 1]])

negative_sampling(edge_index, num_nodes=(3, 4), ... num_neg_samples=0.5) # 50% of positive edges tensor([[0, 3], [3, 0]])

For bipartite graph

negative_sampling(edge_index, num_nodes=(3, 4)) tensor([[0, 2, 2, 1], [2, 2, 1, 3]])

batched_negative_sampling(edge_index: Tensor, batch: Union[Tensor, Tuple[Tensor, Tensor]], num_neg_samples: Optional[Union[int, float]] = None, method: str = 'sparse', force_undirected: bool = False) → Tensor[source]

Samples random negative edges of multiple graphs given by edge_index and batch.

Parameters:

Return type:

LongTensor

Examples

Standard usage

edge_index = torch.as_tensor([[0, 0, 1, 2], [0, 1, 2, 3]]) edge_index = torch.cat([edge_index, edge_index + 4], dim=1) edge_index tensor([[0, 0, 1, 2, 4, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6, 7]]) batch = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1]) batched_negative_sampling(edge_index, batch) tensor([[3, 1, 3, 2, 7, 7, 6, 5], [2, 0, 1, 1, 5, 6, 4, 4]])

Using float multiplier for negative samples

>>> batched_negative_sampling(edge_index, batch, num_neg_samples=1.5)
tensor([[3, 1, 3, 2, 7, 7, 6, 5, 2, 0, 1, 1],
        [2, 0, 1, 1, 5, 6, 4, 4, 3, 2, 3, 0]])

For bipartite graph

>>> edge_index1 = torch.as_tensor([[0, 0, 1, 1], [0, 1, 2, 3]])
>>> edge_index2 = edge_index1 + torch.tensor([[2], [4]])
>>> edge_index3 = edge_index2 + torch.tensor([[2], [4]])
>>> edge_index = torch.cat([edge_index1, edge_index2,
...                         edge_index3], dim=1)
>>> edge_index
tensor([[ 0,  0,  1,  1,  2,  2,  3,  3,  4,  4,  5,  5],
        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]])
>>> src_batch = torch.tensor([0, 0, 1, 1, 2, 2])
>>> dst_batch = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
>>> batched_negative_sampling(edge_index,
...                           (src_batch, dst_batch))
tensor([[ 0,  0,  1,  1,  2,  2,  3,  3,  4,  4,  5,  5],
        [ 2,  3,  0,  1,  6,  7,  4,  5, 10, 11,  8,  9]])

structured_negative_sampling(edge_index: Tensor, num_nodes: Optional[int] = None, contains_neg_self_loops: bool = True) → Tuple[Tensor, Tensor, Tensor][source]

Samples a negative edge (i,k) for every positive edge (i,j) in the graph given by edge_index, and returns it as a tuple of the form (i,j,k).

Parameters:

Return type:

(LongTensor, LongTensor, LongTensor)

Example

>>> edge_index = torch.as_tensor([[0, 0, 1, 2],
...                               [0, 1, 2, 3]])
>>> structured_negative_sampling(edge_index)
(tensor([0, 0, 1, 2]), tensor([0, 1, 2, 3]), tensor([2, 3, 0, 2]))

structured_negative_sampling_feasible(edge_index: Tensor, num_nodes: Optional[int] = None, contains_neg_self_loops: bool = True) → bool[source]

Returns True if structured_negative_sampling() is feasible on the graph given by edge_index. structured_negative_sampling() is infeasible if at least one node is connected to all other nodes.

Parameters:

Return type:

bool

Examples

>>> edge_index = torch.LongTensor([[0, 0, 1, 1, 2, 2, 2],
...                                [1, 2, 0, 2, 0, 1, 1]])
>>> structured_negative_sampling_feasible(edge_index, 3, False)
False

>>> structured_negative_sampling_feasible(edge_index, 3, True)
True

shuffle_node(x: Tensor, batch: Optional[Tensor] = None, training: bool = True) → Tuple[Tensor, Tensor][source]

Randomly shuffles the feature matrix x along the first dimension.

The method returns (1) the shuffled x, (2) the permutation indicating the order of the original nodes after shuffling.

Parameters:

Return type:

(FloatTensor, LongTensor)

Example

Standard case

>>> x = torch.tensor([[0, 1, 2],
...                   [3, 4, 5],
...                   [6, 7, 8],
...                   [9, 10, 11]], dtype=torch.float)
>>> x, node_perm = shuffle_node(x)
>>> x
tensor([[ 3.,  4.,  5.],
        [ 9., 10., 11.],
        [ 0.,  1.,  2.],
        [ 6.,  7.,  8.]])
>>> node_perm
tensor([1, 3, 0, 2])

For batched graphs as inputs

>>> batch = torch.tensor([0, 0, 1, 1])
>>> x, node_perm = shuffle_node(x, batch)
>>> x
tensor([[ 3.,  4.,  5.],
        [ 0.,  1.,  2.],
        [ 9., 10., 11.],
        [ 6.,  7.,  8.]])
>>> node_perm
tensor([1, 0, 3, 2])

mask_feature(x: Tensor, p: float = 0.5, mode: str = 'col', fill_value: float = 0.0, training: bool = True) → Tuple[Tensor, Tensor][source]

Randomly masks features from the feature matrix x with probability p using samples from a Bernoulli distribution.

The method returns (1) the retained x, (2) the feature mask, which is broadcastable with x (mode='row' or mode='col') or of the same shape as x (mode='all'), indicating where features are retained.

Parameters:

Return type:

(FloatTensor, BoolTensor)

Examples

Masked features are column-wise sampled

>>> x = torch.tensor([[1, 2, 3],
...                   [4, 5, 6],
...                   [7, 8, 9]], dtype=torch.float)
>>> x, feat_mask = mask_feature(x)
>>> x
tensor([[1., 0., 3.],
        [4., 0., 6.],
        [7., 0., 9.]])
>>> feat_mask
tensor([[True, False, True]])

Masked features are row-wise sampled

>>> x, feat_mask = mask_feature(x, mode='row')
>>> x
tensor([[1., 2., 3.],
        [0., 0., 0.],
        [7., 8., 9.]])
>>> feat_mask
tensor([[True],
        [False],
        [True]])

Masked features are uniformly sampled

>>> x, feat_mask = mask_feature(x, mode='all')
>>> x
tensor([[0., 0., 0.],
        [4., 0., 6.],
        [0., 0., 9.]])
>>> feat_mask
tensor([[False, False, False],
        [True, False, True],
        [False, False, True]])

add_random_edge(edge_index: Tensor, p: float = 0.5, force_undirected: bool = False, num_nodes: Optional[Union[int, Tuple[int, int]]] = None, training: bool = True) → Tuple[Tensor, Tensor][source]

Randomly adds edges to edge_index.

The method returns (1) the retained edge_index, (2) the added edge indices.

Parameters:

Return type:

(LongTensor, LongTensor)

Examples

Standard case

>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> edge_index, added_edges = add_random_edge(edge_index, p=0.5)
>>> edge_index
tensor([[0, 1, 1, 2, 2, 3, 2, 1, 3],
        [1, 0, 2, 1, 3, 2, 0, 2, 1]])
>>> added_edges
tensor([[2, 1, 3],
        [0, 2, 1]])

The returned graph is kept undirected

>>> edge_index, added_edges = add_random_edge(edge_index, p=0.5,
...                                           force_undirected=True)
>>> edge_index
tensor([[0, 1, 1, 2, 2, 3, 2, 1, 3, 0, 2, 1],
        [1, 0, 2, 1, 3, 2, 0, 2, 1, 2, 1, 3]])
>>> added_edges
tensor([[2, 1, 3, 0, 2, 1],
        [0, 2, 1, 2, 1, 3]])

For bipartite graphs

>>> edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
...                            [2, 3, 1, 4, 2, 1]])
>>> edge_index, added_edges = add_random_edge(edge_index, p=0.5,
...                                           num_nodes=(6, 5))
>>> edge_index
tensor([[0, 1, 2, 3, 4, 5, 3, 4, 1],
        [2, 3, 1, 4, 2, 1, 1, 3, 2]])
>>> added_edges
tensor([[3, 4, 1],
        [1, 3, 2]])

tree_decomposition(mol: Any, return_vocab: bool = False) → Union[Tuple[Tensor, Tensor, int], Tuple[Tensor, Tensor, int, Tensor]][source]

The tree decomposition algorithm of molecules from the “Junction Tree Variational Autoencoder for Molecular Graph Generation” paper. Returns the graph connectivity of the junction tree, the assignment mapping of each atom to the clique in the junction tree, and the number of cliques.

Parameters:

Return type:

(LongTensor, LongTensor, int) if return_vocab is False, else (LongTensor, LongTensor, int, LongTensor)
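
A usage sketch, assuming rdkit is available (the molecule and the variable names are illustrative):

from rdkit import Chem

from torch_geometric.utils import tree_decomposition

mol = Chem.MolFromSmiles('OC(=O)CC1CCC1')  # a ring plus a short side chain

# Junction-tree connectivity, atom-to-clique assignment, clique count:
tree_edge_index, atom2clique, num_cliques = tree_decomposition(mol)

# With return_vocab=True, a fourth tensor categorizing each clique is
# appended to the returned tuple:
tree_edge_index, atom2clique, num_cliques, vocab = tree_decomposition(
    mol, return_vocab=True)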

get_embeddings(model: Module, *args: Any, **kwargs: Any) → List[Tensor][source]

Returns the output embeddings of all MessagePassing layers in model.

Internally, this method registers forward hooks on all MessagePassing layers of a model, and runs the forward pass of the model by calling model(*args, **kwargs).

Parameters:

Return type:

List[Tensor]
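
A minimal sketch of how this can be used to inspect intermediate representations (the two-layer model and the toy graph are made up for illustration):

import torch

from torch_geometric.nn import GCNConv
from torch_geometric.utils import get_embeddings

class GNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)
        self.conv2 = GCNConv(32, 32)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

model = GNN()
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])

# One tensor per MessagePassing layer, captured via forward hooks:
embeddings = get_embeddings(model, x, edge_index)
print([emb.size() for emb in embeddings])  # two layers of shape [4, 32]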

get_embeddings_hetero(model: Module, supported_models: Optional[List[Type[Module]]] = None, *args: Any, **kwargs: Any) → Dict[str, List[Tensor]][source]

Returns the output embeddings of all MessagePassing layers in a heterogeneous model, organized by node type.

Internally, this method registers forward hooks on all modules that process heterogeneous graphs in the model and runs the forward pass of the model. For heterogeneous models, the output is a dictionary where each key is a node type and each value is a list of embeddings from different layers.

Parameters:

Returns:

A dictionary mapping each node type to a list of embeddings from different layers.

Return type:

Dict[NodeType, List[Tensor]]
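
A rough sketch for a model produced by to_hetero; treat the call (with supported_models left at its None default) and the toy data as assumptions rather than canonical usage:

import torch

from torch_geometric.data import HeteroData
from torch_geometric.nn import SAGEConv, to_hetero
from torch_geometric.utils import get_embeddings_hetero

class GNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = SAGEConv((-1, -1), 32)  # lazy input sizes
        self.conv2 = SAGEConv((-1, -1), 32)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

data = HeteroData()
data['paper'].x = torch.randn(6, 16)
data['author'].x = torch.randn(3, 8)
data['author', 'writes', 'paper'].edge_index = torch.tensor(
    [[0, 1, 2, 2], [0, 1, 2, 3]])
data['paper', 'written_by', 'author'].edge_index = torch.tensor(
    [[0, 1, 2, 3], [0, 1, 2, 2]])

model = to_hetero(GNN(), data.metadata())

# Maps each node type to its per-layer embeddings, e.g.
# {'paper': [...], 'author': [...]}:
embs = get_embeddings_hetero(model, None, data.x_dict,
                             data.edge_index_dict)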

trim_to_layer(layer: int, num_sampled_nodes_per_hop: Union[List[int], Dict[str, List[int]]], num_sampled_edges_per_hop: Union[List[int], Dict[Tuple[str, str, str], List[int]]], x: Union[Tensor, Dict[str, Tensor]], edge_index: Union[Tensor, Dict[Tuple[str, str, str], Tensor]], edge_attr: Optional[Union[Tensor, Dict[Tuple[str, str, str], Tensor]]] = None) → Tuple[Union[Tensor, Dict[str, Tensor]], Union[Tensor, Dict[Tuple[str, str, str], Union[Tensor, SparseTensor]]], Optional[Union[Tensor, Dict[Tuple[str, str, str], Tensor]]]][source]

Trims the edge_index representation, node features x and edge features edge_attr to a minimal-sized representation for the current GNN layer layer in directed NeighborLoader scenarios.

This ensures that no computation is performed for nodes and edges that are not included in the current GNN layer, thus avoiding unnecessary computation within the GNN when performing neighborhood sampling.

Parameters:

Return type:

Tuple[Union[Tensor, Dict[str, Tensor]], Union[Tensor, Dict[Tuple[str, str, str], Union[Tensor, SparseTensor]]], Union[Tensor, Dict[Tuple[str, str, str], Tensor], None]]
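
The intended pattern, sketched for a homogeneous model; the two per-hop lists come from the sampled mini-batch (NeighborLoader exposes them as batch.num_sampled_nodes and batch.num_sampled_edges), and the model itself is illustrative:

import torch

from torch_geometric.nn import SAGEConv
from torch_geometric.utils import trim_to_layer

class HierarchicalSAGE(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, num_layers=2):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        for i in range(num_layers):
            self.convs.append(SAGEConv(
                in_channels if i == 0 else hidden_channels,
                hidden_channels))

    def forward(self, x, edge_index, num_sampled_nodes_per_hop,
                num_sampled_edges_per_hop):
        for i, conv in enumerate(self.convs):
            # Drop nodes and edges that can no longer influence the
            # seed-node outputs at this depth:
            x, edge_index, _ = trim_to_layer(
                i, num_sampled_nodes_per_hop, num_sampled_edges_per_hop,
                x, edge_index)
            x = conv(x, edge_index).relu()
        return x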

get_ppr(edge_index: Tensor, alpha: float = 0.2, eps: float = 1e-05, target: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tuple[Tensor, Tensor][source]

Calculates the personalized PageRank (PPR) vector for all or a subset of nodes using a variant of the Andersen algorithm.

Parameters:

Return type:

(torch.Tensor, torch.Tensor)
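
A small sketch on an arbitrary graph; per the signature above, the result is a pair of tensors describing the sparse PPR matrix (indices and values):

import torch

from torch_geometric.utils import get_ppr

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])

# Approximate PPR for every node, with teleport probability alpha and
# approximation threshold eps:
ppr_index, ppr_weight = get_ppr(edge_index, alpha=0.2, eps=1e-5)

# Restrict the computation to a subset of source nodes:
ppr_index, ppr_weight = get_ppr(edge_index, target=torch.tensor([0, 2]))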

train_test_split_edges(data: Data, val_ratio: float = 0.05, test_ratio: float = 0.1) → Data[source]

Splits the edges of a torch_geometric.data.Data object into positive and negative train/val/test edges. As such, it will replace the edge_index attribute with train_pos_edge_index, train_pos_neg_adj_mask, val_pos_edge_index, val_neg_edge_index and test_pos_edge_index attributes. If data has edge features named edge_attr, then train_pos_edge_attr, val_pos_edge_attr and test_pos_edge_attr will be added as well.

Parameters:

Return type:

torch_geometric.data.Data
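
A usage sketch on a small undirected graph; the printed attribute names are exactly those listed above:

import torch

from torch_geometric.data import Data
from torch_geometric.utils import train_test_split_edges

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
data = Data(edge_index=edge_index, num_nodes=4)

data = train_test_split_edges(data, val_ratio=0.05, test_ratio=0.1)

# edge_index has been replaced by the split attributes:
print(data.train_pos_edge_index)
print(data.val_pos_edge_index, data.test_pos_edge_index)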