Abstract
Graph contrastive learning (GCL) has become a prevalent approach to tackling the
supervision shortage in graph learning tasks. Many recent GCL methods employ
various manually designed augmentation techniques, aiming to apply challenging
augmentations to the original graph so as to yield robust representations.
Although many of them achieve remarkable performance, existing GCL methods still
struggle to improve model robustness without risking the loss of task-relevant
information, because they ignore the fact that augmentation-induced latent
factors can be highly entangled with those of the original graph, making it
harder to discriminate task-relevant information from irrelevant information.
Consequently, the learned representation is either brittle or unilluminating.
In light of this, we introduce the Adversarial Cross-View
Disentangled Graph Contrastive Learning (ACDGCL), which follows the information
bottleneck principle to learn minimal yet sufficient representations from graph
data. To be specific, our proposed model elicits the augmentation-invariant and
augmentation-dependent factors separately. In addition to the conventional
contrastive loss, which guarantees the consistency and sufficiency of the
representations across different contrastive views, we introduce a cross-view
reconstruction mechanism to pursue representation disentanglement. Moreover,
an adversarial view is added as a third contrastive view to enhance model
robustness. We empirically demonstrate that our proposed model outperforms
state-of-the-art methods on the graph classification task across multiple
benchmark datasets.