Does Invariant Graph Learning via Environment Augmentation Learn Invariance?

Yongqiang Chen · Yatao Bian · Kaiwen Zhou · Binghui Xie · Bo Han · James Cheng. Great Hall & Hall B1+B2 #626.

Invariant graph representation learning aims to learn the invariance among data from different environments for out-of-distribution (OOD) generalization on graphs. As graph environment partitions are usually expensive to obtain, augmenting the environment information has become the de facto approach. However, the usefulness of the augmented environment information has never been verified. In this work, we find that it is fundamentally impossible to learn invariant graph representations via environment augmentation without additional assumptions. Therefore, we develop a set of minimal assumptions, including variation sufficiency and variation consistency, for feasible invariant graph learning. We then propose a new framework, Graph invAriant Learning Assistant (GALA). GALA incorporates an assistant model that needs to be sensitive to graph environment changes or distribution shifts. The correctness of the proxy predictions made by the assistant model can therefore differentiate the variations in spurious subgraphs. We show that extracting the subgraph that is maximally invariant with respect to the proxy predictions provably identifies the underlying invariant subgraph for successful OOD generalization under the established minimal assumptions. Extensive experiments on datasets including DrugOOD, with various graph distribution shifts, confirm the effectiveness of GALA.
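A minimal, hypothetical sketch (PyTorch-style Python, not the authors' released code) of the mechanism the abstract describes: an environment-sensitive assistant model whose prediction correctness splits the training graphs into two proxy groups, and an objective that aligns same-class representations across those groups so that the learned features keep only the invariant part of each graph. Every name below (`featurizer`, `classifier`, `gala_style_step`, ...) is an illustrative assumption.

```python
# Minimal, hypothetical sketch of the idea described above; NOT the authors'
# implementation. `featurizer`, `classifier`, and the environment-sensitive
# assistant (e.g., a plain ERM-trained GNN) are assumed callables.
import torch
import torch.nn.functional as F


def split_by_proxy_correctness(assistant_logits, labels):
    """Group samples by whether the assistant's proxy prediction is correct.

    Because the assistant is sensitive to distribution shifts, the correct and
    incorrect groups act as proxies for environments dominated by different
    spurious patterns."""
    proxy_pred = assistant_logits.argmax(dim=-1)
    correct = proxy_pred == labels
    return correct, ~correct


def gala_style_step(featurizer, classifier, assistant_logits, graph_reprs,
                    labels, lam=1.0):
    """One illustrative training step: a standard classification loss plus a
    term that aligns same-class representations across the two proxy groups."""
    z = featurizer(graph_reprs)          # candidate invariant representation
    logits = classifier(z)
    erm_loss = F.cross_entropy(logits, labels)

    grp_a, grp_b = split_by_proxy_correctness(assistant_logits, labels)
    align_loss = z.new_zeros(())
    for c in labels.unique():
        in_a = grp_a & (labels == c)
        in_b = grp_b & (labels == c)
        if in_a.any() and in_b.any():
            # Pull class-c means together across the two proxy environments.
            diff = z[in_a].mean(dim=0) - z[in_b].mean(dim=0)
            align_loss = align_loss + diff.pow(2).sum()
    return erm_loss + lam * align_loss
```

The actual GALA objective and subgraph extractor are defined in the paper; this sketch only mirrors the high-level recipe of "proxy groups from an environment-sensitive assistant, then cross-group alignment."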

Title: Graph Invariant Learning with Subgraph Co-mixup for Out-of-Distribution Generalization (arXiv:2312.10988)

Abstract: Graph neural networks (GNNs) have been shown to perform well in graph representation learning, but they often lack generalization capability when tackling out-of-distribution (OOD) data. Graph invariant learning methods, backed by the invariance principle across defined multiple environments, have shown effectiveness in dealing with this issue. However, existing methods heavily rely on well-predefined or accurately generated environment partitions, which are hard to obtain in practice, leading to sub-optimal OOD generalization performance. In this paper, we propose a novel graph invariant learning method based on a co-mixup strategy over invariant and variant patterns, which jointly generates mixed multiple environments and captures invariant patterns from the mixed graph data. Specifically, we first adopt a subgraph extractor to identify invariant subgraphs. Subsequently, we design a novel co-mixup strategy, i.e., jointly conducting environment Mixup and invariant Mixup. For the environment Mixup, we mix the variant, environment-related subgraphs so as to generate sufficiently diverse multiple environments, which is important to guarantee the quality of graph invariant learning. For the invariant Mixup, we mix the invariant subgraphs, further encouraging the model to capture invariant patterns behind graphs while getting rid of spurious correlations for OOD generalization. We demonstrate that the proposed environment Mixup and invariant Mixup can mutually promote each other. Extensive experiments on both synthetic and real-world datasets demonstrate that our method significantly outperforms state-of-the-art methods under various distribution shifts.
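As a rough illustration (not the paper's implementation), the two Mixup operations described above can be sketched at the level of pooled subgraph embeddings. `h_inv` and `h_var` below are assumed embeddings of the extracted invariant and variant (environment-related) subgraphs; all names are hypothetical.

```python
# Representation-level illustration of the co-mixup idea: the paper mixes
# subgraphs themselves, while this sketch mixes pooled subgraph embeddings
# for brevity. All names are hypothetical.
import torch


def mixup(x, alpha=1.0):
    """Convex-combine each sample in a batch with a randomly permuted partner."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], perm, lam


def co_mixup(h_inv, h_var, y, alpha=1.0):
    # Environment Mixup: mix variant-subgraph embeddings to synthesize more
    # diverse (virtual) environments.
    h_var_mix, _, _ = mixup(h_var, alpha)
    # Invariant Mixup: mix invariant-subgraph embeddings (and, correspondingly,
    # labels) to reinforce label-relevant, environment-free patterns.
    h_inv_mix, perm, lam = mixup(h_inv, alpha)
    mixed_labels = (lam, y, y[perm])     # soft-label triple for the mixed batch
    return h_inv_mix, h_var_mix, mixed_labels
```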

COMMENTS

  1. Rethinking Invariant Graph Representation Learning Without Environment Partitions

    As the environment partitions on graphs are usually expensive to obtain, augmenting the environment information has become the de facto approach. However, the usefulness of the augmented environment information has never been verified. In this work, we found that it is fundamentally impossible to learn invariant graph representations by augmenting ...

  2. Rethinking Invariant Graph Representation Learning without Environment Partitions

    TL;DR: We found impossibility results of learning invariant graph representations without environment partitions, established the minimal assumptions for the feasibility of the problem, and proposed a novel solution. Out-of-distribution generalization on graphs requires graph neural networks to identify the invariance among data from different ...

  3. PDF ZIN: When and How to Learn Invariance Without Environment Partition?

    Similarly, ERM and LfF both rely on either color or shape as the invariant feature and would fail on at least one of CMNIST and MCOLOR. On the other hand, if an environment partition is available, IRM can still learn the desired invariant feature. (Table 1: experimental results on CMNIST and MCOLOR.)

  4. [2202.05441] Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

    Moreover, domain or environment partitions, which are often required by OOD methods on Euclidean data, could be highly expensive to obtain for graphs. To bridge this gap, we propose a new framework, called Causality Inspired Invariant Graph LeArning (CIGA), to capture the invariance of graphs for guaranteed OOD generalization under various ...

  5. PDF Learning Invariant Graph Representations for Out-of-Distribution Generalization

    ... the literature of graph representation learning. However, invariant graph representation learning is ... without accurate environment labels, as shown in Figure 1 (an example of distribution shifts under a mixture of latent environments, which leads to poor generalization) ... we do not have access to accurate environment labels ...

  6. ZIN: When and How to Learn Invariance Without Environment Partition?

    It is shown that learning invariant features under this circumstance is fundamentally impossible without further inductive biases or additional information, and a framework to jointly learn environment partitions and invariant representations is proposed, assisted by additional auxiliary information. It is commonplace to encounter heterogeneous data, of which some aspects of the data ... (A generic sketch of this joint min-max idea appears after this list.)

  7. Learning Invariant Graph Representations for Out-of-Distribution Generalization

    Abstract: Graph representation learning has shown effectiveness when testing and training graph data come from the same distribution, but most existing approaches fail to generalize under distribution shifts. Invariant learning, backed by the invariance principle from causality, can achieve guaranteed generalization under distribution shifts in theory and has shown great success in practice.

  8. Enhancing Out-of-distribution Generalization on Graphs via Causal

    Rethinking Invariant Graph Representation Learning without Environment Partitions. In ICML DG Workshop. [8] Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, and James Cheng. 2022. Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs.

  9. PDF Learning Invariant Graph Representations for Out-of-Distribution Generalization

    The theorem shows that, while learning invariant graph representations under distribution shifts, our proposed method naturally preserves permutation invariance, like other GNNs. (Appendix E, Experimental Details; E.1 Datasets; Table 2: the statistics of the datasets, where #Graphs (Train/Val/Test) is the number of graphs in the training/validation/testing set of the ...)

  10. Does Invariant Graph Learning via Environment Augmentation Learn Invariance?

    It is found that it is fundamentally impossible to learn invariant graph representations via environment augmentation without additional assumptions, a set of minimal assumptions is developed for feasible invariant graph learning, and a new framework, Graph invAriant Learning Assistant (GALA), is proposed.

  11. Learning Invariant Representations for Reinforcement Learning without Reconstruction

    We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction. Our goal is to learn representations that both provide for effective downstream control and invariance to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous ...

  12. PDF Does Invariant Graph Learning via Environment Augmentation Learn Invariance?

    ... without environment partitions. When it comes to the graph regime, where OOD generalization is fundamentally more difficult [8] than in the Euclidean regime, it raises a ...
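The "jointly learn environment partitions and invariant representations from auxiliary information" idea mentioned in the ZIN snippet (item 6 above) can be illustrated with a generic min-max sketch: an environment-assignment network over auxiliary variables adversarially searches for a soft partition that exposes risk differences, while the predictor minimizes risk plus an invariance penalty. This is not ZIN's exact objective; the variance-of-risks penalty and every name below are illustrative assumptions.

```python
# Generic min-max sketch of joint environment inference + invariant prediction.
# NOT the ZIN paper's exact objective; all names are illustrative assumptions.
import torch
import torch.nn.functional as F


def weighted_env_risks(logits, labels, env_weights):
    """Per-environment risks under soft assignments.

    env_weights: (batch, K) rows summing to 1 over K inferred environments."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")   # (batch,)
    mass = env_weights.sum(dim=0).clamp_min(1e-8)                    # (K,)
    return (env_weights * per_sample.unsqueeze(1)).sum(dim=0) / mass # (K,)


def joint_step(predictor, env_net, x, aux, labels, opt_pred, opt_env, lam=10.0):
    """One adversarial round: env_net (over auxiliary variables `aux`) looks for
    a soft partition that exposes risk differences; the predictor then minimizes
    risk plus that invariance penalty."""
    # --- maximize the penalty w.r.t. the environment-assignment net ---
    logits = predictor(x).detach()
    w = torch.softmax(env_net(aux), dim=-1)
    penalty = weighted_env_risks(logits, labels, w).var()
    (-penalty).backward()
    opt_env.step(); opt_env.zero_grad()

    # --- minimize risk + penalty w.r.t. the predictor ---
    logits = predictor(x)
    w = torch.softmax(env_net(aux), dim=-1).detach()
    risks = weighted_env_risks(logits, labels, w)
    loss = risks.mean() + lam * risks.var()
    loss.backward()
    opt_pred.step(); opt_pred.zero_grad()
    return loss.item()
```

In practice one would alternate these two updates over mini-batches; the point is only to show how environment inference and invariant prediction can be trained against each other without ground-truth environment labels.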