
Graph Neural Networks

Graph Neural Networks are a class of neural networks optimized for deep learning on graphs. They provide an end-to-end solution for machine learning on graphs, unlike graph kernels or graph embeddings, where a transformation step is applied before a "classical" machine learning algorithm. As they have attracted more attention in recent years, a range of architectures has sprung up, from the Graph Convolutional Network (GCN, Kipf & Welling, 2017) to the Graph Attention Network (GAT, Veličković et al., 2018). Unlike classical neural networks, these architectures learn from the graph and its overall structure, making use of the graph information.

The graph neural network module of PHOTONAI Graph provides a variety of customizable out-of-the-box graph neural networks. They can be instantiated in one line of code and integrate easily into PHOTONAI pipelines, as the sketch below shows.
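
A minimal sketch of such a pipeline: Hyperpipe and PipelineElement are the standard PHOTONAI building blocks, and the model name matches the GCNClassifierModel class documented below; whether the elements are addressable by that exact name, and the exact Hyperpipe arguments, depend on your installed version and should be checked against the PHOTONAI documentation.

```python
from sklearn.model_selection import KFold
from photonai.base import Hyperpipe, PipelineElement

# Standard PHOTONAI pipeline; PHOTONAI Graph registers its models as
# pipeline elements so they can be added by name (assumption: name below
# matches the registered element in your installation).
pipe = Hyperpipe('gnn_pipeline',
                 inner_cv=KFold(n_splits=5),
                 optimizer='grid_search',
                 metrics=['accuracy'],
                 best_config_metric='accuracy')

# One line of code instantiates an out-of-the-box graph neural network.
pipe += PipelineElement('GCNClassifierModel', nn_epochs=200, learning_rate=0.001)

# X: graphs with the adjacency matrix on axis 0 and node features on axis 1
# pipe.fit(X, y)
```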

Graph Neural Network Module

The Graph Neural Network module consists of three parts: the Layer module, where the different layers and their message-passing steps are defined; the Model module, where each network is constructed as a class (see PyTorch neural networks); and the GraphConvNet module, which calls the models and implements the fit and transform steps, making them scikit-learn conform. The GraphConvNet module also handles data conversions, bringing graphs into the right format for the networks, which are written in PyTorch.

You can also write your own custom graph neural network architectures and register them via the PHOTON register function. When writing your own custom neural networks you are free to choose any package, as long as the resulting classes implement fit, transform and predict functions like the GraphConvNet module classes. Those classes can also serve as a blueprint if you want to integrate your own graph neural network architectures into PHOTONAI; a minimal sketch of such a custom class follows below.
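
The skeleton below illustrates the required interface as a scikit-learn-style estimator. The class name and its internals are purely illustrative, not part of PHOTONAI Graph; only the fit/predict (and, if used mid-pipeline, transform) contract is what matters.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin


class MyCustomGNNClassifier(BaseEstimator, ClassifierMixin):
    """Illustrative skeleton for a custom graph neural network.

    Any deep learning framework can be used internally, as long as the
    class exposes fit and predict like the GraphConvNet module classes.
    """

    def __init__(self, nn_epochs: int = 200, learning_rate: float = 0.001,
                 adjacency_axis: int = 0, feature_axis: int = 1):
        self.nn_epochs = nn_epochs
        self.learning_rate = learning_rate
        self.adjacency_axis = adjacency_axis
        self.feature_axis = feature_axis

    def fit(self, X, y):
        # Convert the graphs to your framework's format and train here.
        adjacency = X[:, self.adjacency_axis, ...]
        features = X[:, self.feature_axis, ...]
        # ... training loop over self.nn_epochs epochs ...
        return self

    def predict(self, X):
        # Placeholder: replace with real model inference.
        return np.zeros(len(X), dtype=int)
```

Once a class implements this interface, it can be registered with PHOTON and used like any built-in element.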

DglModel

Base class for DGL-based graph neural networks. Implements helper functions and shared parameters used by the other models. Implementation based on the dgl Python package.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| nn_epochs | int | Number of epochs for which the model is trained | 200 |
| learning_rate | float | Learning rate used when training the model | 0.001 |
| batch_size | int | Number of samples per training batch | 32 |
| adjacency_axis | int | Position of the adjacency matrix | 0 |
| feature_axis | int | Position of the feature matrix | 1 |
| add_self_loops | bool | If true, self loops are added | True |
| allow_zero_in_degree | bool | If true, the zero-in-degree check of dgl is disabled | False |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| early_stopping | bool | If true, the loss over multiple iterations is evaluated to decide whether to stop training early | False |
| verbose | bool | If true, verbose information is printed | False |
| logs | str | Path to the log data | None |
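
These parameters are shared by the concrete models documented below, so they can be set on any of them. A hedged sketch; the import path is an assumption about the package layout and should be adjusted to your installation:

```python
# Import path is an assumption; adjust to your installed package layout.
from photonai_graph.NeuralNets.GCNModel import GCNClassifierModel

model = GCNClassifierModel(nn_epochs=100,        # train for 100 epochs
                           learning_rate=0.005,
                           batch_size=64,
                           adjacency_axis=0,     # adjacency matrix position
                           feature_axis=1,       # feature matrix position
                           validation_score=True,
                           verbose=True)
```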

DGLRegressorBaseModel

Abstract base class for regression algorithms.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| nn_epochs | int | Number of epochs to fit the model | 200 |
| learning_rate | float | Learning rate for model training | 0.001 |
| batch_size | int | Batch size for model training | 32 |
| adjacency_axis | int | Axis which contains the adjacency matrix | 0 |
| feature_axis | int | Axis which contains the features | 1 |
| add_self_loops | bool | If true, a self loop is added to each node of each graph | True |
| allow_zero_in_degree | bool | If true, the dgl model allows zero-in-degree graphs | False |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |
| logs | str | Default logging directory | None |

DGLClassifierBaseModel

Abstract base class for classification algorithms.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| nn_epochs | int | Number of epochs to fit the model | 200 |
| learning_rate | float | Learning rate for model training | 0.001 |
| batch_size | int | Batch size for model training | 32 |
| adjacency_axis | int | Axis which contains the adjacency matrix | 0 |
| feature_axis | int | Axis which contains the features | 1 |
| add_self_loops | bool | If true, a self loop is added to each node of each graph | True |
| allow_zero_in_degree | bool | If true, the dgl model allows zero-in-degree graphs | False |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |
| logs | str | Default logging directory | None |

GCNClassifierModel

Graph Convolutional Network for graph classification. GCN layers from Kipf & Welling, 2017. Implementation based on dgl & PyTorch.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| in_dim | int | Input dimension | 1 |
| hidden_layers | int | Number of hidden layers used by the model | 2 |
| hidden_dim | int | Dimensions in the hidden layers | 256 |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |
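
A hedged usage sketch, reusing the assumed import path from above; in_dim must match the node feature dimensionality of your data:

```python
gcn = GCNClassifierModel(in_dim=1, hidden_layers=2, hidden_dim=256)
gcn.fit(X_train, y_train)   # graphs with adjacency on axis 0, features on axis 1
y_pred = gcn.predict(X_test)
```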

SGConvClassifierModel

Graph Convolutional Network for graph classification, using the Simple Graph Convolution layers of Wu et al., 2019. Implementation based on dgl & PyTorch.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| in_dim | int | Input dimension | 1 |
| hidden_layers | int | Number of hidden layers used by the model | 2 |
| hidden_dim | int | Dimensions in the hidden layers | 256 |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |

GATClassifierModel

Graph Attention Network for graph classification. GAT layers are modeled after Veličković et al., 2018. Implementation based on dgl & PyTorch.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| in_dim | int | Input dimension | 1 |
| hidden_layers | int | Number of hidden layers used by the model | 2 |
| hidden_dim | int | Dimensions in the hidden layers | 256 |
| heads | List | List with the number of heads per hidden layer | None |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |
| agg_mode | str | Aggregation mode for the graph convolutional layers | 'mean' |
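
Because heads holds one entry per hidden layer, its length should match hidden_layers. A sketch, with the import path assumed as above:

```python
# Two hidden layers with 4 and 2 attention heads, respectively.
gat = GATClassifierModel(in_dim=1,
                         hidden_layers=2,
                         hidden_dim=256,
                         heads=[4, 2],
                         agg_mode='mean')
```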

GCNRegressorModel

Graph Convolutional Network for graph regression. GCN layers from Kipf & Welling, 2017. Implementation based on dgl & PyTorch.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| in_dim | int | Input dimension | 1 |
| hidden_layers | int | Number of hidden layers used by the model | 2 |
| hidden_dim | int | Dimensions in the hidden layers | 256 |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |

SGConvRegressorModel

Graph Convolutional Network for graph regression, using the Simple Graph Convolution layers of Wu et al., 2019. Implementation based on dgl & PyTorch.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| in_dim | int | Input dimension | 1 |
| hidden_layers | int | Number of hidden layers used by the model | 2 |
| hidden_dim | int | Dimensions in the hidden layers | 256 |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |

GATRegressorModel

Graph Attention Network for graph regression. GAT layers are modeled after Veličković et al., 2018. Implementation based on dgl & PyTorch.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| in_dim | int | Input dimension | 1 |
| hidden_layers | int | Number of hidden layers used by the model | 2 |
| hidden_dim | int | Dimensions in the hidden layers | 256 |
| heads | List | List with the number of heads per hidden layer | None |
| validation_score | bool | If true, the input data is split into train and test sets (90%/10%); the test set is then used to get validation results during training | False |
| verbose | bool | If true, verbose output is generated | False |
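
The regressor models share the interface of their classifier counterparts, so they can also be tuned inside a Hyperpipe. A hedged sketch: the hyperparameter names come from the tables above, Hyperpipe and Categorical are standard PHOTONAI API, and the element name is assumed to be registered under the class name shown on this page.

```python
from sklearn.model_selection import KFold
from photonai.base import Hyperpipe, PipelineElement
from photonai.optimization import Categorical

pipe = Hyperpipe('gnn_regression',
                 inner_cv=KFold(n_splits=5),
                 optimizer='grid_search',
                 metrics=['mean_absolute_error'],
                 best_config_metric='mean_absolute_error')

# Tune the hidden dimensionality of the GAT regressor.
pipe += PipelineElement('GATRegressorModel',
                        hyperparameters={'hidden_dim': Categorical([64, 128, 256])},
                        heads=[4, 2])

# pipe.fit(X, y)
```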