Learning Invariant Graph Representations for Out-of-Distribution Generalization

Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track

Haoyang Li, Ziwei Zhang, Xin Wang, Wenwu Zhu

Graph representation learning has shown effectiveness when testing and training graph data come from the same distribution, but most existing approaches fail to generalize under distribution shifts. Invariant learning, backed by the invariance principle from causality, can achieve guaranteed generalization under distribution shifts in theory and has shown great success in practice. However, invariant learning for graphs under distribution shifts remains unexplored and challenging. To solve this problem, we propose a Graph Invariant Learning (GIL) model capable of learning generalized graph representations under distribution shifts. Our proposed method can capture the invariant relationships between predictive graph structural information and labels in a mixture of latent environments through jointly optimizing three tailored modules. Specifically, we first design a GNN-based subgraph generator to identify invariant subgraphs. Then we use the variant subgraphs, i.e., complements of invariant subgraphs, to infer the latent environment labels. We further propose an invariant learning module to learn graph representations that can generalize to unknown test graphs. Theoretical justifications for our proposed method are also provided. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our method against state-of-the-art baselines under distribution shifts for the graph classification task.
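The core of the invariant learning module above can be illustrated with a minimal sketch (plain Python/NumPy, not the authors' code; the function names `environment_risks` and `invariant_objective` are illustrative): once latent environment labels have been inferred, a predictor is penalized for letting its loss vary across those environments, so features whose predictive power is environment-specific get suppressed.

```python
# Hedged sketch of an invariance penalty over inferred environments.
# Not the authors' implementation; all names are illustrative.
import numpy as np

def environment_risks(losses, env_labels):
    """Mean loss within each inferred environment."""
    envs = np.unique(env_labels)
    return np.array([losses[env_labels == e].mean() for e in envs])

def invariant_objective(losses, env_labels, lam=1.0):
    """Average risk plus a variance-across-environments penalty.

    A predictor whose loss is identical in every environment pays no
    penalty; one relying on environment-specific (variant) structure does.
    """
    risks = environment_risks(losses, env_labels)
    return risks.mean() + lam * risks.var()

# A predictor that does equally well in both inferred environments...
flat = invariant_objective(np.array([0.3, 0.3, 0.3, 0.3]),
                           np.array([0, 0, 1, 1]))
# ...versus one exploiting a spurious cue that only works in env 0.
spurious = invariant_objective(np.array([0.0, 0.0, 0.6, 0.6]),
                               np.array([0, 0, 1, 1]))
assert spurious > flat  # same mean risk, but penalized for variance
```

The toy comparison shows why inferred environments matter: both predictors have the same average loss, but only the one whose loss is stable across environments minimizes the joint objective.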


Does Invariant Graph Learning via Environment Augmentation Learn Invariance?

Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

Yongqiang Chen, Yatao Bian, Kaiwen Zhou, Binghui Xie, Bo Han, James Cheng

Invariant graph representation learning aims to learn the invariance among data from different environments for out-of-distribution generalization on graphs. As the graph environment partitions are usually expensive to obtain, augmenting the environment information has become the de facto approach. However, the usefulness of the augmented environment information has never been verified. In this work, we find that it is fundamentally impossible to learn invariant graph representations via environment augmentation without additional assumptions. Therefore, we develop a set of minimal assumptions, including variation sufficiency and variation consistency, for feasible invariant graph learning. We then propose a new framework Graph invAriant Learning Assistant (GALA). GALA incorporates an assistant model that needs to be sensitive to graph environment changes or distribution shifts. The correctness of the proxy predictions by the assistant model hence can differentiate the variations in spurious subgraphs. We show that extracting the maximally invariant subgraph to the proxy predictions provably identifies the underlying invariant subgraph for successful OOD generalization under the established minimal assumptions. Extensive experiments on datasets including DrugOOD with various graph distribution shifts confirm the effectiveness of GALA.
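GALA's central trick, using the correctness of a spurious-sensitive assistant model to split the data, can be sketched in a few lines (not the authors' code; `proxy_groups` is an illustrative name): samples the assistant gets right and samples it gets wrong form two groups in which the spurious subgraph varies while the invariant subgraph keeps the same relation to the label.

```python
# Hedged sketch of GALA's grouping step. Not the authors' implementation;
# the helper name and the toy data are illustrative.
def proxy_groups(proxy_preds, labels):
    """Partition sample indices by correctness of the assistant model's
    proxy predictions: 'correct' samples follow the spurious cue,
    'wrong' samples break it."""
    correct = [i for i, (p, y) in enumerate(zip(proxy_preds, labels)) if p == y]
    wrong = [i for i, (p, y) in enumerate(zip(proxy_preds, labels)) if p != y]
    return correct, wrong

# Toy example: the assistant latches onto a spurious cue that agrees
# with the true label only for the first three samples.
preds = [1, 0, 1, 0, 1]
labels = [1, 0, 1, 1, 0]
correct, wrong = proxy_groups(preds, labels)
assert correct == [0, 1, 2] and wrong == [3, 4]
```

Under the paper's variation-sufficiency and variation-consistency assumptions, a subgraph extractor trained to be maximally invariant across these two groups identifies the underlying invariant subgraph.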



A list of Graph Causal Learning materials.

TimeLovercc/Awesome-Graph-Causal-Learning


PRs Welcome

This repository contains a list of Graph Causal Learning resources. We also have a survey paper about Counterfactual Learning on Graphs. We include a Not for Graph section with well-selected materials for beginners to learn causality-related concepts. We will do our best to maintain this repository on a weekly basis.

Why Causal Learning?

Causality connotes lawlike necessity, whereas probabilities connote exceptionality, doubt, and lack of regularity. --Judea Pearl

Graph Causal Learning is an emerging research area that can be widely applied to out-of-distribution generalization, fairness, and explanation problems.

  • 2023/04/04: Added our survey paper. You are welcome to cite it if you find it useful.
  • 2023/01/04: Changed the format of the repo to include both causal learning papers and counterfactual learning papers.

If our repo or survey is useful for your research, please cite our paper as follows:

Contributing

Please help contribute to this list by opening a pull request.

Markdown format:

Note: Within the same year, please place conference papers before journal papers, as journal papers are usually submitted long before publication and therefore lag behind (i.e., Conferences --> Preprints --> Journals).

Table of Contents

  • Node Classification
  • Out of Distribution
  • Contrastive Learning
  • Explanation
  • Recommendation
  • Applications
  • Counterfactual Fairness
  • Counterfactual Explanation
  • Counterfactual Link Prediction and Recommendation
  • Counterfactual Learning in Real-World Applications
  • Causal Discovery
  • Causal Effect Estimate
  • A Summary of Open-Source Codes
  • Not for Graph

Counterfactual Learning on Graphs: A Survey. [pdf]

  • Zhimeng Guo, Teng Xiao, Charu Aggarwal, Hui Liu, and Suhang Wang. arXiv , 2023.

Domain Generalization -- A Causal Perspective. [pdf]

  • Paras Sheth, Raha Moraffah, K. Selçuk Candan, Adrienne Raglin, and Huan Liu. arXiv , 2022.

Learning Causality with Graphs. [pdf]

  • Jing Ma, and Jundong Li. AI Magazine , 2022.

Causal Discovery from Temporal Data: An Overview and New Perspectives. [pdf]

  • Chang Gong, Di Yao, Chuzhe Zhang, Wenbin Li, and Jingping Bi. arXiv , 2023.

A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation. [pdf]

  • Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, and Fosca Giannotti. arXiv , 2022.

Causal Learning

Causally-guided Regularization of Graph Attention Improves Generalizability. [pdf]

  • Alexander P. Wu, Thomas Markovich, Bonnie Berger, Nils Hammerla, and Rohit Singh. arXiv , 2023.

Causal-Based Supervision of Attention in Graph Neural Network: A Better and Simpler Choice towards Powerful Attention. [pdf]

  • Hongjun Wang, Jiyuan Chen, Lun Du, Qiang Fu, Shi Han, and Xuan Song. arXiv , 2023.

Invariant Node Representation Learning under Distribution Shifts with Multiple Latent Environments. [pdf]

  • Haoyang Li, Ziwei Zhang, Xin Wang, and Wenwu Zhu. TOIS , 2023.

Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism. [pdf] [code]

  • Siqi Miao, Miaoyuan Liu, and Pan Li. ICML , 2022.

Graph Rationalization with Environment-based Augmentations. [pdf] [code]

  • Gang Liu, Tong Zhao, Jiaxin Xu, Tengfei Luo, and Meng Jiang. KDD , 2022.

Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method. [pdf]

  • Fuli Feng, Weiran Huang, Xiangnan He, Xin Xin, Qifan Wang, and Tat-Seng Chua. SIGIR , 2021.

Unsupervised Learning

Robust Causal Graph Representation Learning against Confounding Effects. [pdf]

  • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen Zheng, and Fuchun Sun. AAAI , 2023.

Rethinking Invariant Graph Representation Learning without Environment Partitions. [pdf]

  • Yongqiang Chen, Yatao Bian, Kaiwen Zhou, Binghui Xie, Bo Han, and James Cheng. ICLR Workshop , 2023.

Graph Structure and Feature Extrapolation for Out-of-Distribution Generalization. [pdf]

  • Xiner Li, Shurui Gui, Youzhi Luo, and Shuiwang Ji. arXiv , 2023.

Joint Learning of Label and Environment Causal Independence for Graph Out-of-Distribution Generalization. [pdf]

  • Shurui Gui, Meng Liu, Xiner Li, Youzhi Luo, and Shuiwang Ji. arXiv , 2023.

Introducing Expertise Logic into Graph Representation Learning from A Causal Perspective. [pdf]

  • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Xingzhe Su, Fengge Wu, Changwen Zheng, and Fuchun Sun. arXiv , 2023.

Causal Attention for Interpretable and Generalizable Graph Classification. [pdf] [code]

  • Yongduo Sui, Xiang Wang, Jiancan Wu, Min Lin, Xiangnan He, and Tat-Seng Chua. KDD , 2022.

Learning Invariant Graph Representations for Out-of-Distribution Generalization. [pdf]

  • Haoyang Li, Ziwei Zhang, Xin Wang, and Wenwu Zhu. NeurIPS , 2022.

Discovering Invariant Rationales for Graph Neural Networks. [pdf] [code]

  • Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, and Tat-Seng Chua. ICLR , 2022.

Handling Distribution Shifts on Graphs: An Invariance Perspective. [pdf] [code]

  • Qitian Wu, Hengrui Zhang, Junchi Yan, and David Wipf. ICLR , 2022.

Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure. [pdf] [code]

  • Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, and Jian Tang. NeurIPS , 2022.

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs. [pdf] [code]

  • Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, and James Cheng. NeurIPS , 2022.

Dynamic graph neural networks under spatio-temporal distribution shift. [pdf] [code]

  • Zeyang Zhang, Xin Wang, Ziwei Zhang, Haoyang Li, Zhou Qin, and Wenwu Zhu. NeurIPS , 2022.

Learning Substructure Invariance for Out-of-Distribution Molecular Representations. [pdf] [code]

  • Nianzu Yang, Kaipeng Zeng, Qitian Wu, Xiaosong Jia, and Junchi Yan. NeurIPS , 2022.

Adversarial Causal Augmentation for Graph Covariate Shift. [pdf]

  • Yongduo Sui, Xiang Wang, Jiancan Wu, An Zhang, and Xiangnan He. arXiv , 2022.

Generalizing graph neural networks on out-of-distribution graphs. [pdf]

  • Shaohua Fan, Xiao Wang, Chuan Shi, Peng Cui, and Bai Wang. arXiv , 2021.

Let Invariant Rationale Discovery Inspire Graph Contrastive Learning.

  • Sihang Li, Xiang Wang, An Zhang, Yingxin Wu, Xiangnan He, and Tat-Seng Chua. ICML , 2022.

FairEdit: Preserving Fairness in Graph Neural Networks through Greedy Graph Editing. [pdf] [code]

  • Donald Loveland, Jiayi Pan, Aaresh Farrokh Bhathena, and Yiyang Lu. arXiv , 2022.

VACA: Designing Variational Graph Autoencoders for Causal Queries. [pdf] [code]

  • Pablo Sanchez-Martin, Miriam Rateike, and Isabel Valera. AAAI , 2022.

Explaining GNN over Evolving Graphs using Information Flow. [pdf]

  • Yazheng Liu, Xi Zhang, and Sihong Xie. AAAI , 2022.

Orphicx: A causality-inspired latent variable model for interpreting graph neural networks. [pdf] [code]

  • Wanyu Lin, Hao Lan, Hao Wang, and Baochun Li. CVPR , 2022.

Reinforced Causal Explainer for Graph Neural Networks. [pdf] [code]

  • Xiang Wang, Yingxin Wu, An Zhang, Fuli Feng, Xiangnan He, and Tat-Seng Chua. IEEE Transactions on Pattern Analysis and Machine Intelligence , 2022.

Generative Causal Explanations for Graph Neural Networks. [pdf] [code]

  • Wanyu Lin, Hao Lan, and Baochun Li. ICML , 2021.

ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions. [pdf]

  • Bo Kang, Jefrey Lijffijt, and Tijl De Bie. arXiv , 2019.

Causality-based CTR prediction using graph neural networks. [pdf]

  • Panyu Zhai, Yanwu Yang, and Chunjie Zhang. Information Processing & Management , 2023.

Disentangled Causal Embedding With Contrastive Learning For Recommender System. [pdf]

  • Weiqi Zhao, Dian Tang, Xin Chen, Dawei Lv, Daoli Ou, Biao Li, Peng Jiang, and Kun Gai. arXiv , 2023.

Causal Disentanglement for Implicit Recommendations with Network Information. [pdf]

  • Paras Sheth, Ruocheng Guo, Lu Cheng, Huan Liu, and Kasim Selçuk Candan. TKDD , 2023.

Causal Inference for Knowledge Graph based Recommendation. [pdf]

  • Yinwei Wei, Xiang Wang, Liqiang Nie, Shaoyu Li, Dingxian Wang and Tat-Seng Chua. TKDE , 2022.

Causal Disentanglement with Network Information for Debiased Recommendations. [pdf]

  • Paras Sheth, Ruocheng Guo, Lu Cheng, Huan Liu, and K. Selçuk Candan. arXiv , 2022.

Shift-Robust Molecular Relational Learning with Causal Substructure. [pdf]

  • Namkyeong Lee, Kanghoon Yoon, Gyoung S. Na, Sein Kim, and Chanyoung Park. arXiv , 2023.

Causal-Trivial Attention Graph Neural Network for Fault Diagnosis of Complex Industrial Processes. [pdf]

  • Hao Wang, Ruonan Liu, Steven X. Ding, Qinghua Hu, Zengxiang Li, and Hongkuan Zhou. arXiv , 2023.

CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis. [pdf]

  • Kaizhong Zheng, Shujian Yu, and Badong Chen. arXiv , 2023.

CausalGNN: Causal-based Graph Neural Networks for Spatio-Temporal Epidemic Forecasting. [pdf]

  • Lijing Wang, Aniruddha Adiga, Jiangzhuo Chen, Adam Sadilek, Srinivasan Venkatramanan, and Madhav Marathe. AAAI , 2022.

Multivariate Time Series Forecasting with Transfer Entropy Graph. [pdf] [code]

  • Ziheng Duan, Haoyan Xu, Yida Huang, Jie Feng, and Yueyang Wang. arXiv , 2021.

Causal Understanding of Fake News Dissemination on Social Media. [pdf]

  • Lu Cheng, Ruocheng Guo, Kai Shu, and Huan Liu. arXiv , 2020.

Counterfactual Learning

Mitigating multisource biases in graph neural networks via real counterfactual samples. [pdf]

  • Zichong Wang, Giri Narasimhan, Xin Yao, and Wenbin Zhang. ICDM , 2023.

Towards Fair Graph Neural Networks via Graph Counterfactual. [pdf] [code]

  • Zhimeng Guo, Jialiang Li, Teng Xiao, Yao Ma, and Suhang Wang. CIKM , 2023.

Learning Fair Node Representations with Graph Counterfactual Fairness. [pdf] [code]

  • Jing Ma, Ruocheng Guo, Mengting Wan, Longqi Yang, Aidong Zhang, and Jundong Li. WSDM , 2022.

Towards a Unified Framework for Fair and Stable Graph Representation Learning. [pdf] [code]

  • Chirag Agarwal, Himabindu Lakkaraju, and Marinka Zitnik. UAI , 2021.

A Multi-view Confidence-calibrated Framework for Fair and Stable Graph Representation Learning. [pdf]

  • Xu Zhang, Liang Zhang, Bo Jin, and Xinjiang Lu. ICDM , 2021.

Fairness-Aware Node Representation Learning. [pdf]

  • O. Deniz Kose, and Yanning Shen. arXiv , 2021.

UNR-Explainer: Counterfactual Explanations for Unsupervised Node Representation Learning Models. [pdf]

  • Hyunju Kang, Geonhee Han, Hogun Park. ICLR , 2024.

Game-theoretic Counterfactual Explanation for Graph Neural Networks. [pdf]

  • Chirag Chhablani, Sarthak Jain, Akshay Channesh, Ian A. Kash, and Sourav Medya. WWW , 2024.

Evaluating explainability for graph neural networks. [pdf] [code]

  • Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik. arXiv , 2023.

Global Counterfactual Explainer for Graph Neural Networks. [pdf] [code]

  • Mert Kosan, Zexi Huang, Sourav Medya, Sayan Ranu, and Ambuj Singh. WSDM , 2023.

On the Probability of Necessity and Sufficiency of Explaining Graph Neural Networks: A Lower Bound Optimization Approach. [pdf]

  • Ruichu Cai, Yuxuan Zhu, Xuexin Chen, Yuan Fang, Min Wu, Jie Qiao, and Zhifeng Hao. arXiv , 2022.

CLEAR: Generative Counterfactual Explanations on Graphs. [pdf]

  • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, and Jundong Li. NeurIPS , 2022.

Ensemble approaches for Graph Counterfactual Explanations. [pdf]

  • Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Alessandro Celi, Ernesto Estevanell-Valladares, and Daniel Alejandro Valdés-Pérez. CEUR-WS , 2022.

GRETEL: A unified framework for Graph Counterfactual Explanation Evaluation. [pdf] [code]

  • Mario Alfonso Prado-Romero, and Giovanni Stilo. arXiv , 2022.

Learning and Evaluating Graph Neural Network Explanations based on Counterfactual and Factual Reasoning. [pdf] [code]

  • Juntao Tan, Shijie Geng, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Yunqi Li, and Yongfeng Zhang. WWW , 2022.

Flow-based counterfactuals for interpretable graph node classification. [pdf]

  • Lorenz Ohly. Bachelor's thesis, Freie Universität Berlin , 2022.

Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods. [pdf]

  • Chirag Agarwal, Marinka Zitnik, and Himabindu Lakkaraju. AISTATS , 2022.

Multi-objective Explanations of GNN Predictions. [pdf]

  • Yifei Liu, Chao Chen, Yazheng Liu, Xi Zhang, and Sihong Xie. ICDM , 2021.

Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation. [pdf]

  • Yi Sun, Abel Valente, Sijia Liu, and Dakuo Wang. arXiv , 2021.

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks. [pdf] [code]

  • Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, and Fabrizio Silvestri. arXiv , 2021.

Robust Counterfactual Explanations on Graph Neural Networks. [pdf] [code]

  • Mohit Bajaj, Lingyang Chu, Zi Yu Xue, Jian Pei, Lanjun Wang, Peter Cho-Ho Lam, and Yong Zhang. arXiv , 2021.

MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks. [pdf] [code]

  • Danilo Numeroso, and Davide Bacciu. IJCNN , 2021.

Counterfactual Graphs for Explainable Classification of Brain Networks. [pdf] [code]

  • Carlo Abrate, and Francesco Bonchi. KDD , 2021.

Knowledge Graph Completion with Counterfactual Augmentation. [pdf]

  • Heng Chang, Jie Cai, and Jia Li. WWW , 2023.

A Counterfactual Collaborative Session-based Recommender System. [pdf]

  • Wenzhuo Song, Shoujin Wang, Yan Wang, Kunpeng Liu, Xueyan Liu, and Minghao Yin. WWW , 2023.

Alleviating Spurious Correlations in Knowledge-aware Recommendations through Counterfactual Generator. [pdf] [code]

  • Shanlei Mu, Yaliang Li, Wayne Xin Zhao, Jingyuan Wang, Bolin Ding and Ji-Rong Wen. SIGIR , 2022.

Learning from Counterfactual Links for Link Prediction. [pdf] [code]

  • Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. ICML , 2022.

GREASE: Generate Factual and Counterfactual Explanations for GNN-based Recommendations. [pdf]

  • Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Zhenhua Huang, Hongshik Ahn, and Gabriele Tolomei. arXiv , 2022.

Clicks can be Cheating: Counterfactual Recommendation for Mitigating Clickbait Issue. [pdf] [code]

  • Wenjie Wang, Fuli Feng, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. SIGIR , 2021.

Improving Location Recommendation with Urban Knowledge Graph. [pdf]

  • Chang Liu, Chen Gao, Depeng Jin, and Yong Li. arXiv , 2021.

Counterfactual Graph Learning for Anomaly Detection with Feature Disentanglement and Generation (Student Abstract). [pdf]

  • Yutao Wei, Wenzheng Shu, Zhangtao Cheng, Wenxin Tai, Chunjing Xiao, and Ting Zhong. AAAI , 2024.

Towards Explainable Motion Prediction using Heterogeneous Graph Representations. [pdf] [code]

  • Sandra Carrasco, Sylwia Majchrowska, Joakim Johnander, Christoffer Petersson, and David Fernández LLorca. arXiv , 2022.

Model agnostic generation of counterfactual explanations for molecules. [pdf] [code]

  • Geemi P. Wellawatte, Aditi Seshadri, and Andrew D. White. Chemical Science , 2022.

Counterfactual inference to predict causal knowledge graph for relational transfer learning by assimilating expert knowledge --Relational feature transfer learning algorithm. [pdf]

  • Jiarui Li, Yukio Horiguchi, and Tetsuo Sawaragi. Advanced Engineering Informatics , 2022.

Counterfactual Graph Learning for Anomaly Detection on Attributed Networks. [pdf]

  • Chunjing Xiao, Xovee Xu, Yue Lei, Kunpeng Zhang, Siyuan Liu, and Fan Zhou. TKDE , 2022.

Counterfactual inference graph network for disease prediction. [pdf]

  • Baoliang Zhang, Xiaoxin Guo, Qifeng Lin, Haoren Wang, and Songbai Xu. Knowledge-Based Systems , 2022.

Counterfactual based reinforcement learning for graph neural networks. [pdf]

  • David Pham, and Yongfeng Zhang. Annals of Operations Research , 2022.

Deconfounding Physical Dynamics with Global Causal Relation and Confounder Transmission for Counterfactual Prediction. [pdf]

  • Zongzhao Li, Xiangyu Zhu, Zhen Lei, and Zhaoxiang Zhang. AAAI , 2022.

Counterfactual and Factual Reasoning over Hypergraphs for Interpretable Clinical Predictions on EHR. [pdf]

  • Ran Xu, Yue Yu, Chao Zhang, Mohammed K Ali, Joyce C Ho, and Carl Yang. ML4H , 2022.

Capturing Molecular Interactions in Graph Neural Networks: A Case Study in Multi-Component Phase Equilibrium. [pdf] [code]

  • Shiyi Qin, Shengli Jiang, Jianping Li, Prasanna Balaprakash, Reid C. Van Lehn, and Victor M. Zavala. Chemrxiv , 2022.

Estimating counterfactual treatment outcomes over time in complex multi-agent scenarios. [pdf] [code]

  • Keisuke Fujii, Koh Takeuchi, Atsushi Kuribayashi, Naoya Takeishi, Yoshinobu Kawahara, and Kazuya Takeda. arXiv , 2022.

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI. [pdf]

  • Andreas Holzinger, Bernd Malle, Anna Saranti, and Bastian Pfeifer. Information Fusion , 2021.

Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network. [pdf] [code]

  • Haoran Wu, Wei Chen, Shuang Xu, and Bo Xu. NAACL , 2021.

Causal Discovery in Physical Systems from Videos. [pdf] [code]

  • Yunzhu Li, Antonio Torralba, Anima Anandkumar, Dieter Fox, and Animesh Garg. NeurIPS , 2020.

Wireless Power Control via Counterfactual Optimization of Graph Neural Networks. [pdf]

  • Navid Naderializadeh, Mark Eisen, and Alejandro Ribeiro. arXiv , 2020.

Counterfactual multi-agent reinforcement learning with graph convolution communication. [pdf]

  • Jianyu Su, Stephen Adams, and Peter A. Beling. arXiv , 2020.

Cophy: Counterfactual learning of physical dynamics. [pdf] [code]

  • Fabien Baradel, Natalia Neverova, Julien Mille, Greg Mori, and Christian Wolf. arXiv , 2019.

Relating Graph Neural Networks to Structural Causal Models. [pdf] [code]

  • Matej Zečević, Devendra Singh Dhami, Petar Veličković, and Kristian Kersting. arXiv , 2021.

A Graph Autoencoder Approach to Causal Structure Learning. [pdf] [code]

  • Ignavier Ng, Shengyu Zhu, Zhitang Chen, and Zhuangyan Fang. NeurIPS Workshop , 2019.

DAG-GNN: DAG Structure Learning with Graph Neural Networks. [pdf] [code]

  • Yue Yu, Jie Chen, Tian Gao, and Mo Yu. ICML , 2019.

Learning Causal Effects on Hypergraphs. [pdf]

  • Jing Ma, Mengting Wan, Longqi Yang, Jundong Li, Brent Hecht, and Jaime Teevan. arXiv , 2022.

Causal Inference under Networked Interference and Intervention Policy Enhancement. [pdf]

  • Yunpu Ma, and Volker Tresp. AISTATS , 2021.

Learning Individual Causal Effects from Networked Observational Data. [pdf]

  • Ruocheng Guo, Jundong Li, and Huan Liu. WSDM , 2020.

Causal Inference for Social Network Data. [pdf]

  • Elizabeth L. Ogburn, Oleg Sofrygin, Ivan Diaz, and Mark J. van der Laan. arXiv , 2017.

CausalBench: A Large-scale Benchmark for Network Inference from Single-cell Perturbation Data. [pdf] [code]

  • Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. arXiv , 2022.

GOOD: A Graph Out-of-Distribution Benchmark. [pdf] [code]

  • Shurui Gui, Xiner Li, Limei Wang, and Shuiwang Ji. NeurIPS Track on Datasets and Benchmarks , 2022.

Deep Causal Learning: Representation, Discovery and Inference. [pdf]

  • Zizhen Deng, Xiaolong Zheng, Hu Tian, and Daniel Dajun Zeng. arXiv , 2022.

Causal Machine Learning: A Survey and Open Problems. [pdf]

  • Jean Kaddour, Aengus Lynch, Qi Liu, Matt J. Kusner, and Ricardo Silva. arXiv , 2022.

Toward Causal Representation Learning. [pdf]

  • Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. arXiv , 2021.

A Survey on Causal Inference. [pdf]

  • Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, and Aidong Zhang. arXiv , 2020.

A Survey of Learning Causality with Data: Problems and Methods. [pdf]

  • Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, and Huan Liu. ACM Computing Surveys , 2020.

Causality for Machine Learning. [pdf]

  • Bernhard Schölkopf. arXiv , 2019.

Counterfactual Fairness. [pdf]

  • Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. NeurIPS , 2017.

Learning Representations for Counterfactual Inference. [pdf]

  • Fredrik D. Johansson, Uri Shalit, and David Sontag. ICML , 2016.

Causal inference in statistics: An overview. [pdf]

  • Judea Pearl. Statistics Surveys , 2009.

