torch_geometric.nn.pool.EdgePooling — pytorch_geometric documentation
class EdgePooling(in_channels: int, edge_score_method: Optional[Callable] = None, dropout: float = 0.0, add_to_edge_score: float = 0.5)[source]
Bases: Module
The edge pooling operator from the “Towards Graph Pooling by Edge Contraction” and “Edge Contraction Pooling for Graph Neural Networks” papers.
In short, a score is computed for each edge. Edges are contracted iteratively according to that score unless one of their nodes has already been part of a contracted edge.
To duplicate the configuration from the “Towards Graph Pooling by Edge Contraction” paper, use either EdgePooling.compute_edge_score_softmax() or EdgePooling.compute_edge_score_tanh(), and set add_to_edge_score to 0.0.
To duplicate the configuration from the “Edge Contraction Pooling for Graph Neural Networks” paper, set dropout to 0.2.
Parameters:
- in_channels (int) – Size of each input sample.
- edge_score_method (callable, optional) – The function to apply to compute the edge score from raw edge scores. By default, this is the softmax over all incoming edges for each node. This function takes in a raw_edge_score tensor of shape [num_edges], an edge_index tensor and the number of nodes num_nodes, and produces a new tensor of the same size as raw_edge_score describing normalized edge scores. Included functions are EdgePooling.compute_edge_score_softmax(), EdgePooling.compute_edge_score_tanh(), and EdgePooling.compute_edge_score_sigmoid(). (default: EdgePooling.compute_edge_score_softmax())
- dropout (float, optional) – The probability with which to drop edge scores during training. (default: 0.0)
- add_to_edge_score (float, optional) – A value to be added to each computed edge score. Adding this greatly helps with unpooling stability. (default: 0.5)
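As a point of reference, here is a minimal sketch of how these parameters map onto the two paper configurations described above; the in_channels value of 16 is an arbitrary placeholder:

```python
from torch_geometric.nn import EdgePooling

# "Towards Graph Pooling by Edge Contraction" setup:
# tanh-normalized scores and no additive offset.
pool_tanh = EdgePooling(
    in_channels=16,
    edge_score_method=EdgePooling.compute_edge_score_tanh,
    add_to_edge_score=0.0,
)

# "Edge Contraction Pooling for Graph Neural Networks" setup:
# default softmax scores with edge-score dropout.
pool_dropout = EdgePooling(in_channels=16, dropout=0.2)
```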
reset_parameters()[source]
Resets all learnable parameters of the module.
static compute_edge_score_softmax(raw_edge_score: Tensor, edge_index: Tensor, num_nodes: int) → Tensor[source]
Normalizes edge scores via softmax application.
Return type: Tensor
static compute_edge_score_tanh(raw_edge_score: Tensor, edge_index: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tensor[source]
Normalizes edge scores via hyperbolic tangent application.
Return type: Tensor
static compute_edge_score_sigmoid(raw_edge_score: Tensor, edge_index: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tensor[source]
Normalizes edge scores via sigmoid application.
Return type: Tensor
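A minimal sketch of calling these static methods directly; the raw scores and the four-node graph below are illustrative placeholders:

```python
import torch
from torch_geometric.nn import EdgePooling

# Six directed edges on a four-node graph; one raw score per edge.
raw_edge_score = torch.randn(6)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])

# Softmax over all incoming edges of each target node:
score = EdgePooling.compute_edge_score_softmax(
    raw_edge_score, edge_index, num_nodes=4)

# The tanh and sigmoid variants are applied element-wise, so
# edge_index and num_nodes may be omitted:
score_tanh = EdgePooling.compute_edge_score_tanh(raw_edge_score)
score_sigmoid = EdgePooling.compute_edge_score_sigmoid(raw_edge_score)
```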
forward(x: Tensor, edge_index: Tensor, batch: Tensor) → Tuple[Tensor, Tensor, Tensor, UnpoolInfo][source]
Forward pass.
Parameters:
- x (torch.Tensor) – The node features.
- edge_index (torch.Tensor) – The edge indices.
- batch (torch.Tensor) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example.
Return types:
- x (torch.Tensor) - The pooled node features.
- edge_index (torch.Tensor) - The coarsened edge indices.
- batch (torch.Tensor) - The coarsened batch vector.
- unpool_info (UnpoolInfo) - Information that is consumed by EdgePooling.unpool() for unpooling.
Return type: Tuple[Tensor, Tensor, Tensor, UnpoolInfo]
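A minimal sketch of a forward pass on a tiny single-example batch; the node features, connectivity, and sizes are arbitrary placeholders:

```python
import torch
from torch_geometric.nn import EdgePooling

pool = EdgePooling(in_channels=8)

x = torch.randn(4, 8)                     # 4 nodes with 8 features each
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)  # all nodes belong to example 0

x_pool, edge_index_pool, batch_pool, unpool_info = pool(x, edge_index, batch)
```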
unpool(x: Tensor, unpool_info: UnpoolInfo) → Tuple[Tensor, Tensor, Tensor][source]
Unpools a previous edge pooling step.
For unpooling, x should have the same shape as the pooled node features produced by this layer's forward() function. It will then produce an unpooled x in addition to edge_index and batch.
Parameters:
- x (torch.Tensor) – The node features.
- unpool_info (UnpoolInfo) – Information that has been produced by EdgePooling.forward().
Return types:
- x (torch.Tensor) - The unpooled node features.
- edge_index (torch.Tensor) - The new edge indices.
- batch (torch.Tensor) - The new batch vector.
Return type: Tuple[Tensor, Tensor, Tensor]
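Continuing the forward-pass sketch above, unpooling restores a node-level representation for the original graph:

```python
# x_pool and unpool_info come from the forward() call above.
x_unpool, edge_index_unpool, batch_unpool = pool.unpool(x_pool, unpool_info)

assert x_unpool.size(0) == x.size(0)  # one feature row per original node
```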