DropMessage: Unifying Random Dropping for Graph Neural Networks
AAAI · Apr 21, 2022 · Distinguished Paper
Graph Neural Networks (GNNs) are powerful tools for graph representation
learning. Despite their rapid development, GNNs also face some challenges, such
as over-fitting, over-smoothing, and non-robustness. Previous works indicate
that these problems can be alleviated by random dropping methods, which
integrate augmented data into models by randomly masking parts of the input.
However, some open problems of random dropping on GNNs remain to be solved.
First, it is challenging to find a universal method that is suitable for all
cases, given the divergence across different datasets and models. Second, the
augmented data introduced to GNNs result in incomplete coverage of model parameters
and an unstable training process. Third, there is no theoretical analysis of the
effectiveness of random dropping methods on GNNs. In this paper, we propose a
novel random dropping method called DropMessage, which performs dropping
operations directly on the propagated messages during the message-passing
process. More importantly, we find that DropMessage provides a unified
framework for most existing random dropping methods, based on which we give a
theoretical analysis of their effectiveness. Furthermore, we elaborate on the
superiority of DropMessage: it stabilizes the training process by reducing
sample variance, and it preserves information diversity from the perspective of
information theory, making it a theoretical upper bound of other methods. To
evaluate our proposed method, we conduct experiments on multiple tasks across
five public datasets and two industrial datasets with various
backbone models. The experimental results show that DropMessage is both
effective and general, and it can significantly alleviate the problems
mentioned above.
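
As a rough illustration of the core idea (not the authors' implementation), the sketch below applies element-wise dropping directly to the per-edge message matrix before aggregation, assuming a PyTorch-style message-passing layer with mean aggregation. The function names drop_message and propagate and the drop rate p are illustrative assumptions, not identifiers from the paper or its code.

```python
import torch


def drop_message(messages: torch.Tensor, p: float, training: bool = True) -> torch.Tensor:
    """Illustrative element-wise dropping on the propagated message matrix.

    messages: [num_edges, feature_dim] matrix of per-edge messages.
    Each element is independently zeroed with probability p, and the
    survivors are rescaled by 1 / (1 - p), as in standard dropout.
    """
    if not training or p == 0.0:
        return messages
    mask = torch.bernoulli(torch.full_like(messages, 1.0 - p))
    return messages * mask / (1.0 - p)


def propagate(x: torch.Tensor, edge_index: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Mean-aggregation message passing with dropping applied to the messages.

    x:          [num_nodes, feature_dim] node features
    edge_index: [2, num_edges] source/target node indices
    """
    src, dst = edge_index
    messages = x[src]                     # one message per edge
    messages = drop_message(messages, p)  # drop directly on the message matrix
    out = torch.zeros_like(x)
    out.index_add_(0, dst, messages)      # sum messages per target node
    deg = torch.zeros(x.size(0), device=x.device)
    deg.index_add_(0, dst, torch.ones(dst.size(0), device=x.device))
    return out / deg.clamp(min=1).unsqueeze(-1)  # mean aggregation
```

In this sketch the mask is drawn over the message matrix rather than over node features (as in DropNode) or whole edges (as in DropEdge), which is the distinction the abstract emphasizes.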