Abstract
Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted
increasing attention from the graph machine learning community in recent years,
owing to its capability to learn performant node embeddings without costly
label information. One weakness of conventional SSL frameworks for GNNs is that
they learn through a single philosophy, such as mutual information maximization
or generative reconstruction. When applied to various downstream tasks, these
frameworks rarely perform equally well on every task, because a single
philosophy may not span the breadth of knowledge required by all tasks. In
light of this,
we introduce ParetoGNN, a multi-task SSL framework for node representation
learning over graphs. Specifically, ParetoGNN is self-supervised by multiple
pretext tasks that follow different philosophies. To reconcile these
philosophies, we explore a multiple-gradient descent algorithm, so that
ParetoGNN actively learns from every pretext task while minimizing potential
conflicts. We conduct comprehensive experiments over four downstream tasks
(i.e., node classification, node clustering, link prediction, and partition
prediction), and our proposal achieves the best overall performance across
tasks on 11 widely adopted benchmark datasets. Moreover, we observe that
learning from multiple philosophies enhances not only task generalization but
also single-task performance, demonstrating that ParetoGNN achieves better
task generalization through the disjoint yet complementary knowledge learned
from different philosophies.
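
To make the gradient-reconciliation idea concrete, below is a minimal sketch of a generic Frank-Wolfe min-norm solver in the spirit of multiple-gradient descent (MGDA), applied to flattened per-task gradients of a shared encoder. This is an illustrative assumption about how such reconciliation can be implemented, not ParetoGNN's exact procedure; the names `mgda_weights` and `_min_norm_coeff` are hypothetical.

```python
import torch

def _min_norm_coeff(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Closed-form gamma in [0, 1] minimizing ||gamma * a + (1 - gamma) * b||^2."""
    diff = a - b
    gamma = torch.dot(b, b - a) / (torch.dot(diff, diff) + 1e-12)
    return gamma.clamp(0.0, 1.0)

def mgda_weights(grads: list, iters: int = 50) -> torch.Tensor:
    """Frank-Wolfe solver for the min-norm point in the convex hull of task gradients.

    grads: flattened gradient of the shared parameters for each pretext task.
    Returns weights w such that sum_i w[i] * grads[i] is a common descent
    direction; a (near-)zero combination indicates Pareto stationarity.
    """
    G = torch.stack(grads)                  # (K, D): one row per task gradient
    w = torch.full((len(grads),), 1.0 / len(grads))
    for _ in range(iters):
        combined = w @ G                    # current combination sum_i w[i] * g_i
        t = torch.argmin(G @ combined)      # task gradient most opposed to it
        gamma = _min_norm_coeff(G[t], combined)
        w = (1.0 - gamma) * w               # Frank-Wolfe step toward task t
        w[t] = w[t] + gamma
    return w

# Toy usage: weight three random "task gradients"; in practice one would
# scale the per-task losses by w before the backward pass so the shared
# parameters follow the combined descent direction.
if __name__ == "__main__":
    grads = [torch.randn(128) for _ in range(3)]
    w = mgda_weights(grads)
    print(w, torch.linalg.norm(w @ torch.stack(grads)))
```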