Explainable artificial intelligence: a comprehensive review

  • Published: 18 November 2021
  • Volume 55, pages 3503–3568 (2022)


  • Dang Minh,
  • H. Xiang Wang,
  • Y. Fen Li &
  • Tan N. Nguyen


Thanks to exponential growth in computing power and the availability of vast amounts of data, artificial intelligence (AI) has witnessed remarkable developments in recent years, enabling it to be ubiquitously adopted in our daily lives. Although AI-powered systems have brought competitive advantages, their black-box nature makes them opaque and prevents them from explaining their decisions. This issue has motivated the introduction of explainable artificial intelligence (XAI), which promotes AI algorithms that can expose their internal processes and explain how they reach their decisions. The volume of XAI research has increased significantly in recent years, but a unified and comprehensive review of the latest progress has been lacking. This review aims to bridge that gap by surveying the critical perspectives of the rapidly growing body of XAI research. After offering readers a solid XAI background, we analyze and review various XAI methods, grouped into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability. We also pay particular attention to methods dedicated to interpreting and analyzing deep learning models. In addition, we systematically discuss key XAI challenges, such as the trade-off between performance and explainability, evaluation methods, security, and policy. Finally, we survey the standard approaches used to address these challenges.
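To make the review's three-way grouping concrete, the sketch below is a minimal, hypothetical illustration (it is not code from the paper): it uses scikit-learn's built-in breast-cancer dataset, with plain data summarization standing in for pre-modeling explainability, a shallow decision tree as an interpretable-by-design model, and permutation feature importance as a post-hoc explanation of a random-forest black box.

```python
# Minimal sketch (illustrative, not from the paper) of the review's three
# groups of XAI methods, using scikit-learn and a built-in toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (i) Pre-modeling explainability: understand the data before any model is fit.
print(X_train.describe())                       # per-feature summary statistics
print(X_train.corrwith(y_train).sort_values())  # simple feature-label association

# (ii) Interpretable model: a shallow decision tree is transparent by design,
# so its decision rules can be read off directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# (iii) Post-modeling (post-hoc) explainability: explain an opaque model after
# training, here via permutation feature importance on held-out data.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10,
                                random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, importance in top5:
    print(f"{name}: {importance:.3f}")
```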




Author information

Dang Minh and Tan N. Nguyen are co-first authors and contributed equally to this work.

Authors and Affiliations

Department of Information Technology, FPT University, Ho Chi Minh City, Vietnam

Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, 05006, Republic of Korea

H. Xiang Wang & Y. Fen Li

Department of Architectural Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, 05006, Republic of Korea

Tan N. Nguyen


Corresponding authors

Correspondence to Dang Minh or Tan N. Nguyen.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Minh, D., Wang, H.X., Li, Y.F. et al. Explainable artificial intelligence: a comprehensive review. Artif Intell Rev 55, 3503–3568 (2022). https://doi.org/10.1007/s10462-021-10088-y


Published: 18 November 2021

Issue Date: June 2022

DOI: https://doi.org/10.1007/s10462-021-10088-y


  • Explainable artificial intelligence
  • Interpretability
  • Black-box models
  • Deep learning
  • Machine learning


Volume 75 Masthead

Published: 2022-09-19

  • Motion Planning Under Uncertainty with Complex Agents and Environments via Hybrid Search
  • sEMG-Based Upper Limb Movement Classifier: Current Scenario and Upcoming Challenges
  • Altruistic Hedonic Games
  • Can We Automate Scientific Reviewing?
  • On Efficient Reinforcement Learning for Full-Length Game of StarCraft II
  • On Tackling Explanation Redundancy in Decision Trees
  • Multi-Agent Path Finding: A New Boolean Encoding
  • Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey
  • A Survey of Methods for Automated Algorithm Configuration
  • Planning with Perspectives – Decomposing Epistemic Planning Using Functional STRIPS
  • Planted Dense Subgraphs in Dense Random Graphs Can Be Recovered Using Graph-Based Machine Learning
  • Mean-Semivariance Policy Optimization via Risk-Averse Reinforcement Learning
  • Low-Rank Representation of Reinforcement Learning Policies
  • Communication-Aware Local Search for Distributed Constraint Optimization
  • AAN+: Generalized Average Attention Network for Accelerating Neural Transformer
  • DeepSym: Deep Symbol Generation and Rule Learning for Planning from Unsupervised Robot Interaction
  • Solving the Watchman Route Problem with Heuristic Search
  • Computational Short Cuts in Infinite Domain Constraint Satisfaction
  • Interpretable Local Concept-Based Explanation with Human Feedback to Predict All-Cause Mortality
  • Creative Problem Solving in Artificially Intelligent Agents: A Survey and Framework
  • Fair in the Eyes of Others
  • Initialization of Feature Selection Search for Classification
  • Reinforcement Learning from Optimization Proxy for Ride-Hailing Vehicle Relocation
  • Asymmetric Action Abstractions for Planning in Real-Time Strategy Games
  • Learning to Design Fair and Private Voting Rules
  • Strategy Graphs for Influence Diagrams
  • First-Order Rewritability and Complexity of Two-Dimensional Temporal Ontology-Mediated Queries
  • Towards Evidence Retrieval Cost Reduction in Abstract Argumentation Frameworks with Fallible Evidence
  • Chance-Constrained Static Schedules for Temporally Probabilistic Plans
  • Proofs and Certificates for Max-SAT
  • Towards Continual Reinforcement Learning: A Review and Perspectives
  • The LM-Cut Heuristic Family for Optimal Numeric Planning with Simple Conditions
  • Data-Driven Revision of Conditional Norms in Multi-Agent Systems
  • ToolTango: Common Sense Generalization in Predicting Sequential Tool Interactions for Robot Plan Synthesis
  • Automated Dynamic Algorithm Configuration
  • The Complexity of Network Satisfaction Problems for Symmetric Relation Algebras with a Flexible Atom

The state of AI in 2022—and a half decade in review

Adoption has more than doubled since 2017, though the proportion of organizations using AI has plateaued between 50 and 60 percent for the past few years. In the survey, we defined AI as the ability of a machine to perform cognitive functions that we associate with human minds (for example, natural-language understanding and generation) and to perform physical tasks using cognitive functions (for example, physical robotics, autonomous driving, and manufacturing work). A set of companies seeing the highest financial returns from AI continue to pull ahead of competitors. The results show these leaders making larger investments in AI, engaging in increasingly advanced practices known to enable scale and faster AI development, and showing signs of faring better in the tight market for AI talent. On talent, for the first time, we looked closely at AI hiring and upskilling. The data show that there is significant room to improve diversity on AI teams, and, consistent with other studies, diverse teams correlate with outstanding performance.

Table of Contents

  • Five years in review: AI adoption, impact, and spend
  • Mind the gap: AI leaders pulling ahead
  • AI talent tales: New hot roles, continued diversity woes


1. Five years in review: AI adoption, impact, and spend

This marks the fifth consecutive year we’ve conducted research globally on AI’s role in business, and we have seen shifts over this period.

2. Mind the gap: AI leaders pulling ahead

Over the past five years we have tracked the leaders in AI—we refer to them as AI high performers—and examined what they do differently. We see more indications that these leaders are expanding their competitive advantage than we find evidence that others are catching up.

First, we haven’t seen an expansion in the size of the leader group. For the past three years, we have defined AI high performers as those organizations that respondents say are seeing the biggest bottom-line impact from AI adoption—that is, 20 percent or more of EBIT from AI use. The proportion of respondents falling into that group has remained steady at about 8 percent. The findings indicate that this group is achieving its superior results mainly from AI boosting top-line gains, as they’re more likely to report that AI is driving revenues rather than reducing costs, though they do report AI decreasing costs as well.

Next, high performers are more likely than others to follow core practices that unlock value, such as linking their AI strategy to business outcomes (Exhibit 1). (All questions about AI-related strengths and practices were asked only of the 744 respondents who said their organizations had adopted AI in at least one function.) Also important, they are engaging more often in “frontier” practices that enable AI development and deployment at scale, or what some call the “industrialization of AI.” For example, leaders are more likely to have a data architecture that is modular enough to accommodate new AI applications rapidly. They also often automate most data-related processes, which can both improve efficiency in AI development and expand the number of applications they can develop by providing more high-quality data to feed into AI algorithms. And AI high performers are 1.6 times more likely than other organizations to engage nontechnical employees in creating AI applications by using emerging low-code or no-code programs, which allow companies to speed up the creation of AI applications. In the past year, high performers have become even more likely than other organizations to follow certain advanced scaling practices, such as using standardized tool sets to create production-ready data pipelines and using an end-to-end platform for AI-related data science, data engineering, and application development that they’ve developed in-house.

High performers might also have a head start on managing potential AI-related risks, such as personal privacy and equity and fairness, that other organizations have not yet addressed. While overall we have seen little change in organizations’ reported recognition and mitigation of AI-related risks since we began asking about them four years ago, respondents from AI high performers are more likely than others to report engaging in practices known to help mitigate risk. These include ensuring AI and data governance, standardizing processes and protocols, automating processes such as data-quality control to remove errors introduced through manual work, and testing the validity of models and monitoring them over time for potential issues.

AI use and sustainability efforts

The survey findings suggest that many organizations that have adopted AI are integrating AI capabilities into their sustainability efforts and are also actively seeking ways to reduce the environmental impact of their AI use (exhibit). Of respondents from organizations that have adopted AI, 43 percent say their organizations are using AI to assist in sustainability efforts, and 40 percent say their organizations are working to reduce the environmental impact of their AI use by minimizing the energy used to train and run AI models. As companies that have invested more in AI and have more mature AI efforts than others, high performers are 1.4 times more likely than others to report AI-enabled sustainability efforts as well as to say their organizations are working to decrease AI-related emissions. Both efforts are more commonly seen at organizations based in Greater China, Asia–Pacific, and developing markets, while respondents in North America are least likely to report them.

When asked about the types of sustainability efforts using AI, respondents most often mention initiatives to improve environmental impact, such as optimization of energy efficiency or waste reduction. AI use is least common in efforts to improve organizations’ social impact (for example, sourcing of ethically made products), though respondents working for North American organizations are more likely than their peers to report that use.

Investment is yet another area that could contribute to the widening of the gap: AI high performers are poised to continue outspending other organizations on AI efforts. Even though respondents at those leading organizations are just as likely as others to say they’ll increase investments in the future, they’re spending more than others now, meaning they’ll be increasing from a base that is a higher percentage of revenues. Respondents at AI high performers are nearly eight times more likely than their peers to say their organizations spend at least 20 percent of their digital-technology budgets on AI-related technologies. And these digital budgets make up a much larger proportion of their enterprise spend: respondents at AI high performers are over five times more likely than other respondents to report that their organizations spend more than 20 percent of their enterprise-wide revenue on digital technologies.

Finally, all of this may be giving AI high performers a leg up in attracting AI talent. There are indications that these organizations have less difficulty hiring for roles such as AI data scientist and data engineer. Respondents from organizations that are not AI high performers say filling those roles has been “very difficult” much more often than respondents from AI high performers do.

The bottom line: high performers are already well positioned for sustained AI success, improved efficiency in new AI development, and, as a result, a more attractive environment for AI talent. The good news for organizations outside the leader group is that there is a clear blueprint of best practices to follow.

3. AI talent tales: New hot roles, continued diversity woes

Our first detailed look at the AI talent picture signals the maturation of AI, surfaces the most common strategies organizations employ for talent sourcing and upskilling, and shines a light on AI’s diversity problem—while showing yet again a link between diversity and success.

Hiring is a challenge, but less so for high performers

All organizations report that hiring AI talent, particularly data scientists, remains difficult. AI high performers report slightly less difficulty, and they hire for some roles, such as machine learning engineers, more often than other organizations do.

Reskilling and upskilling are common alternatives to hiring

When it comes to sourcing AI talent, the most popular strategy among all respondents is reskilling existing employees; nearly half are doing so. Recruiting from top-tier universities and from technology companies outside the top tier, such as regional leaders, is also common. But a look at the strategies of high performers suggests organizations might be best served by tapping as many recruiting channels as possible (Exhibit 2). These companies do more than others to recruit AI-related talent from a variety of sources: while they are more likely to recruit from top-tier technical universities and tech companies, they are also more likely to source talent from other universities, training academies, and diversity-focused programs or professional organizations.

Responses suggest that both AI high performers and other organizations are upskilling technical and nontechnical employees on AI, with nearly half of respondents at both AI high performers and other organizations saying they are reskilling as a way of gaining more AI talent. However, high performers are taking more steps than other organizations to build employees’ AI-related skills.

Respondents at high performers are nearly three times more likely than other respondents to say their organizations have capability-building programs to develop technology personnel’s AI skills. The most common approaches they use are experiential learning, self-directed online courses, and certification programs, whereas other organizations most often lean on self-directed online courses.

High performers are also much more likely than other organizations to go beyond providing access to self-directed online course work to upskill nontechnical employees on AI. Respondents at high performers are nearly twice as likely as others to report offering peer-to-peer learning and certification programs to nontechnical personnel.

Increasing diversity on AI teams is a work in progress

We also explored the level of diversity within organizations’ AI-focused teams and found significant room for improvement at most organizations. On average, women make up just 27 percent of the employees on these teams at respondents’ organizations (Exhibit 3). The picture is similar for the average proportion of racial or ethnic minorities among employees developing AI solutions: just 25 percent. What’s more, 29 percent of respondents say their organizations have no minority employees working on their AI solutions.

Some companies are working to improve the diversity of their AI talent, though there’s more being done to improve gender diversity than ethnic diversity. Forty-six percent of respondents say their organizations have active programs to increase gender diversity within the teams that are developing AI solutions, through steps such as partnering with diversity-focused professional associations to recruit candidates. One-third say their organizations have programs to increase racial and ethnic diversity. We also see that organizations with women or minorities working on AI solutions often have programs in place to address these employees’ experiences.

In line with previous McKinsey studies , the research shows a correlation between diversity and outperformance. Organizations at which respondents say at least 25 percent of AI development employees identify as women are 3.2 times more likely than others to be AI high performers. Those at which at least one-quarter of AI development employees are racial or ethnic minorities are more than twice as likely to be AI high performers.

The online survey was in the field from May 3 to May 27, 2022, and from August 15 to August 17, 2022, and garnered responses from 1,492 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 744 said their organizations had adopted AI in at least one function and were asked questions about their organizations’ AI use. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
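To make the weighting step concrete, here is a minimal sketch of GDP-weighted survey aggregation in Python. The countries, GDP shares, and responses are invented placeholders, and McKinsey’s exact weighting scheme is not spelled out in the methodology note; this shows one plausible reading, in which each country’s respondents collectively carry a weight proportional to that country’s share of global GDP.

```python
import pandas as pd

# Hypothetical respondent-level data: country and whether the respondent's
# organization has adopted AI in at least one function (1 = yes, 0 = no).
responses = pd.DataFrame({
    "country": ["US", "US", "DE", "IN", "CN"],
    "adopted_ai": [1, 0, 1, 1, 0],
})

# Hypothetical shares of global GDP (placeholder values, not real figures).
gdp_share = {"US": 0.25, "DE": 0.04, "IN": 0.03, "CN": 0.18}

# Split each country's GDP share evenly across its respondents, so every
# country contributes in proportion to its GDP regardless of sample size.
per_country_n = responses["country"].map(responses["country"].value_counts())
responses["weight"] = responses["country"].map(gdp_share) / per_country_n

# GDP-weighted adoption rate across all respondents.
weighted_rate = (
    (responses["adopted_ai"] * responses["weight"]).sum() / responses["weight"].sum()
)
print(f"GDP-weighted AI adoption rate: {weighted_rate:.1%}")
```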

The survey content and analysis were developed by Michael Chui, a partner at the McKinsey Global Institute and a partner in McKinsey’s Bay Area office; Bryce Hall, an associate partner in the Washington, DC, office; Helen Mayhew, a partner in the Sydney office; and Alex Singla, a senior partner in the Chicago office, and Alex Sukharevsky, a senior partner in the London office, global leaders of QuantumBlack, AI by McKinsey.

The authors wish to thank Sanath Angalakudati, Medha Bankhwal, David DeLallo, Heather Hanselman, Vishan Patel, and Wilbur Wang for their contributions to this work.


  • Perspective
  • Open access
  • Published: 09 April 2024

The potential for artificial intelligence to transform healthcare: perspectives from international health leaders

  • Christina Silcox 1 ,
  • Eyal Zimlichmann 2 , 3 ,
  • Katie Huber   ORCID: orcid.org/0000-0003-2519-8714 1 ,
  • Neil Rowen 1 ,
  • Robert Saunders 1 ,
  • Mark McClellan 1 ,
  • Charles N. Kahn III 3 , 4 ,
  • Claudia A. Salzberg 3 &
  • David W. Bates   ORCID: orcid.org/0000-0001-6268-1540 5 , 6 , 7  

npj Digital Medicine volume 7, Article number: 88 (2024). Cite this article

2327 Accesses

39 Altmetric

Metrics details

  • Health policy
  • Health services

Artificial intelligence (AI) has the potential to transform care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care. AI will be critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. There is also universal concern about the ability to monitor health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change. The Future of Health (FOH), an international community of senior health care leaders, collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise around this topic. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers across the globe that FOH members identified as important for fully realizing AI’s potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.

Similar content being viewed by others


Guiding principles for the responsible development of artificial intelligence tools for healthcare

Kimberly Badal, Carmen M. Lee & Laura J. Esserman


A short guide for medical professionals in the era of artificial intelligence

Bertalan Meskó & Marton Görög


Reporting guidelines in medical artificial intelligence: a systematic review and meta-analysis

Fiona R. Kolbinger, Gregory P. Veldhuizen, … Jakob Nikolas Kather

Introduction

Artificial intelligence (AI), supported by timely and accurate data and evidence, has the potential to transform health care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care 1 , 2 . AI integration is critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. This is true across the international community, although there is variable progress within individual countries. There is also universal concern about monitoring health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change.

The Future of Health (FOH) is an international community of senior health care leaders representing health systems, health policy, health care technology, venture funding, insurance, and risk management. FOH collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise. In total, 46 senior health care leaders were engaged in this work, from eleven countries in Europe, North America, Africa, Asia, and Australia. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers that FOH members identified as important for fully realizing AI’s potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.

Powering AI through high-quality data

“Going forward, data are going to be the most valuable commodity in health care. Organizations need robust plans about how to mobilize and use their data.”

AI algorithms will perform only as well as the accuracy and completeness of the key underlying data allow, and data quality depends on actions and workflows that encourage trust.

To begin to improve data quality, FOH members agreed that an initial priority is identifying and assuring reliable availability of high-priority data elements for promising AI applications: those with the most predictive value, those of the highest value to patients, and those most important for analyses of performance, including subgroup analyses to detect bias.

Leaders should also advocate for aligned policy incentives to improve the availability and reliability of these priority data elements. There are several examples of efforts across the world to identify and standardize high-priority data elements for AI applications and beyond, such as the multinational project STANDING Together, which is developing standards to improve the quality and representativeness of data used to build and test AI tools 3 .

Policy incentives that would further encourage high-quality data collection include (1) payment incentives that are aligned with measures of health care quality and safety and that ensure the reliability of the underlying data, and (2) quality measures and performance standards focused on the reliability, completeness, and timeliness of the collection and sharing of the high-priority data itself.

Trust and verify

“Your AI algorithms are only going to be as good as the data and the real-world evidence used to validate them, and the data are only going to be as good as the trust and privacy and supporting policies.”

FOH members stressed the importance of showing that AI tools are both effective and safe within their specific patient populations.

This is a particular challenge with AI tools, whose performance can differ dramatically across sites and over time, as health data patterns and population characteristics vary. For example, several studies of the Epic Sepsis Model found both location-based differences in performance and degradation in performance over time due to data drift 4 , 5 . However, real-world evaluations are often much more difficult for algorithms that are used for longer-term predictions, or to avert long-term complications from occurring, particularly in the absence of connected, longitudinal data infrastructure. As such, health systems must prioritize implementing data standards and data infrastructure that can facilitate the retraining or tuning of algorithms, test for local performance and bias, and ensure scalability across the organization and longer-term applications 6 .
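To illustrate what this kind of monitoring infrastructure does in practice, the sketch below recomputes a model’s AUROC over successive deployment windows and flags windows that fall materially below a baseline. This is a minimal illustration, not the method used in the cited Epic Sepsis Model studies; the baseline, window size, and tolerance are arbitrary choices, and the simulated data simply mimic a predictive signal that decays over time.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def monitor_auc_drift(y_true, y_score, window=500, baseline_auc=0.80, tolerance=0.05):
    """Flag deployment windows where AUROC falls below baseline_auc - tolerance."""
    alerts = []
    for start in range(0, len(y_true) - window + 1, window):
        y_w = y_true[start:start + window]
        s_w = y_score[start:start + window]
        if len(np.unique(y_w)) < 2:  # AUROC is undefined when only one class appears
            continue
        auc = roc_auc_score(y_w, s_w)
        if auc < baseline_auc - tolerance:
            alerts.append((start, auc))
    return alerts

# Simulate risk scores whose association with outcomes weakens over time,
# a simple stand-in for data drift.
rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, size=n)
signal = np.linspace(1.0, 0.2, n)               # predictive signal decays
scores = signal * y + rng.normal(0.0, 0.8, n)   # noisy risk scores

for start, auc in monitor_auc_drift(y, scores):
    print(f"window starting at index {start}: AUROC = {auc:.3f} (below threshold)")
```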

There are efforts to help leaders and health systems develop consensus-based evaluation techniques and infrastructure for AI tools, including HealthAI: The Global Agency for Responsible AI in Health, which aims to build and certify validation mechanisms for nations and regions to adopt; and the Coalition for Health AI (CHAI), which recently announced plans to build a US-wide health AI assurance labs network 7 , 8 . These efforts, if successful, will assist manufacturers and health systems in complying with new laws, rules, and regulations being proposed and released that seek to ensure AI tools are trustworthy, such as the EU AI Act and the 2023 US Executive Order on AI.

Sharing data for better AI

“Underlying these challenges is the investment required to standardize business processes so that you actually get data that’s usable between institutions and even within an institution.”

While high-quality internal data may enable some types of AI-tool development and testing, internal data alone are insufficient to power and evaluate all AI applications. To build truly effective AI-enabled predictive software for clinical care and other predictive supports, data often need to be reliably shared and interoperable across health systems, so that they capture a diverse picture of patients’ health across geographies.

FOH members recommended that health care leaders work with researchers and policymakers to connect detailed encounter data with longitudinal outcomes, and pilot opportunities across diverse populations and systems to help assure valid outcome evaluations as well as address potential confounding and population subgroup differences—the ability to aggregate data is a clear rate-limiting step. The South African National Digital Health Strategy outlined interventions to improve the adoption of digital technologies while complying with the 2013 Protection of Personal Information Act 9 . Although challenges remain, the country has made progress on multiple fronts, including building out a Health Patient Registration System as a first step towards a portable, longitudinal patient record system and releasing a Health Normative Standards Framework to improve data flow across institutional and geographic boundaries 10 .

Leaders should adopt policies in their organizations, and encourage adoption in their province and country, that simplify data governance and sharing while providing appropriate privacy protections – including building foundations of trust with patients and the public as previously discussed. Privacy-preserving innovations include ways to “share” data without movement from protected systems using approaches like federated analyses, data sandboxes, or synthetic data. In addition to exploring privacy-preserving approaches to data sharing, countries and health systems may need to consider broad and dynamic approaches to consent 11 , 12 . As we look to a future where a patient may have thousands of algorithms churning away at their data, efforts to improve data quality and sharing should include enabling patients’ access to and engagement with their own data to encourage them to actively partner in their health and provide transparency on how their data are being used to improve health care. For example, the Understanding Patient Data program in the United Kingdom produces research and resources to explain how the National Health Service uses patients’ data 13 . Community engagement efforts can further assist with these efforts by building trust and expanding understanding.
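A small sketch can make the federated-analysis idea concrete: each site computes aggregate statistics locally and shares only those aggregates, so row-level records never leave the protected system. The hospitals and values below are invented, and a real deployment would layer on safeguards (secure aggregation, differential privacy, governance controls) that this toy example omits.

```python
import numpy as np

# Each hospital holds its own patient-level values (say, lab results).
# In a real federated analysis these arrays never leave the local site.
site_data = {
    "hospital_a": np.array([6.1, 7.4, 8.0, 5.9]),
    "hospital_b": np.array([7.8, 6.5, 9.1]),
    "hospital_c": np.array([5.5, 6.0, 6.2, 7.0, 8.3]),
}

def local_summary(values):
    """Computed inside each site; only these aggregates are shared."""
    return {"n": len(values), "sum": values.sum(), "sum_sq": (values ** 2).sum()}

# A coordinating center combines the aggregates into a pooled mean and
# variance without ever seeing an individual patient record.
summaries = [local_summary(v) for v in site_data.values()]
n = sum(s["n"] for s in summaries)
mean = sum(s["sum"] for s in summaries) / n
var = sum(s["sum_sq"] for s in summaries) / n - mean ** 2
print(f"pooled mean = {mean:.2f}, variance = {var:.2f} across {n} patients")
```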

FOH members also stressed the importance of timely data access. Health systems should work together to establish re-usable governance and privacy frameworks that allow stakeholders to clearly understand what data will be shared and how they will be protected, reducing the time needed for data use agreements. Trusted third-party data coordinating centers could also be used to set up “precertification” systems around data quality, testing, and cybersecurity, supporting health organizations with appropriate data stewardship in forming partnerships and accessing data rapidly.

Incentivizing progress for AI impact

“Unless it’s tied to some kind of compensation to the organization, the drive to help implement those tools and overcome that risk aversion is going to be very high… I do think that business driver needs to be there.”

AI tools and data quality initiatives have not moved as quickly in health care, owing to the lack of direct payment and the frequent misalignment of financial incentives and supports for high-quality data collection and predictive analytics. This affects both the ability to purchase and safely implement commercial AI products and the development of “homegrown” AI tools.

FOH members recommended that leaders should advocate for paying for value in health – quality, safety, better health, and lower costs for patients. This better aligns the financial incentives for accelerating the development, evaluation, and adoption of AI as well as other tools designed to either keep patients healthy or quickly diagnose and treat them with the most effective therapies when they do become ill. Effective personalized health care requires high-quality, standardized, interoperable datasets from diverse sources 14 . Within value-based payments themselves, data are critical to measuring quality of care and patient outcomes, adjusted or contextualized for factors outside of clinical control. Value-based payments therefore align incentives for (1) high-quality data collection and trusted use, (2) building effective AI tools, and (3) ensuring that those tools are improving patient outcomes and/or health system operations.

Data have become the most valuable commodity in health care, but questions remain about whether there will be an AI “revolution” or “evolution” in health care delivery. Early AI applications in certain clinical areas have been promising, but more advanced AI tools will require higher-quality, real-world data that are interoperable and secure. The steps health care organization leaders and policymakers take in the coming years, starting with short-term opportunities to develop meaningful AI applications that achieve measurable improvements in outcomes and costs, will be critical in enabling a future that improves health outcomes, safety, affordability, and equity.

Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Abernethy, A. et al. The promise of digital health: then, now, and the future. NAM Perspect. 6 (2022).

Akpakwu, E. Four ways AI can make healthcare more efficient and affordable. World Economic Forum https://www.weforum.org/agenda/2018/05/four-ways-ai-is-bringing-down-the-cost-of-healthcare/ (2018).

STANDING Together. https://www.datadiversity.org/home .

Wong, A. et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med 181, 1065–1070 (2021).

Article PubMed Google Scholar

Ross, C. STAT and MIT rooted out the weaknesses in health care algorithms. Here’s how we did it. STAT https://www.statnews.com/2022/02/28/data-drift-machine-learning/ (2022).

Locke, T., Parker, V., Thoumi, A., Goldstein, B. & Silcox, C. Preventing bias and inequities in AI-enabled health tools. https://healthpolicy.duke.edu/publications/preventing-bias-and-inequities-ai-enabled-health-tools (2022).

Introducing HealthAI. The International Digital Health and AI Research Collaborative (I-DAIR) https://www.i-dair.org/news/introducing-healthai (2023).

Shah, N. H. et al. A nationwide network of health AI assurance laboratories. JAMA 331, 245 (2024).

Singh, V. AI & Data in South Africa’s Health Sector. https://policyaction.org.za/sites/default/files/PAN_TopicalGuide_AIData6_Health_Elec.pdf (2020).

Zharima, C., Griffiths, F. & Goudge, J. Exploring the barriers and facilitators to implementing electronic health records in a middle-income country: a qualitative study from South Africa. Front. Digit. Health 5, 1207602 (2023).

Article PubMed PubMed Central Google Scholar

Lee, A. R. et al. Identifying facilitators of and barriers to the adoption of dynamic consent in digital health ecosystems: a scoping review. BMC Med. Ethics 24, 107 (2023).

Article CAS PubMed PubMed Central Google Scholar

Stoeklé, H. C., Hulier-Ammar, E. & Hervé, C. Data medicine: ‘broad’ or ‘dynamic’ consent? Public Health Ethics 15, 181–185 (2022).

Article Google Scholar

Understanding Patient Data. http://understandingpatientdata.org.uk/ .

Chén, O. Y. & Roberts, B. Personalized health care and public health in the digital age. Front. Digit. Health 3, 595704 (2021).


Acknowledgements

The authors acknowledge Oranit Ido and Jonathan Gonzalez-Smith for their contributions to this work. This study was funded by The Future of Health, LLC. The Future of Health, LLC, was involved in all stages of this research, including study design, data collection, analysis and interpretation of data, and the preparation of this manuscript.

Author information

Authors and affiliations

Duke-Margolis Institute for Health Policy, Duke University, Washington, DC, and Durham, NC, USA

Christina Silcox, Katie Huber, Neil Rowen, Robert Saunders & Mark McClellan

Sheba Medical Center, Ramat Gan, Israel

Eyal Zimlichmann

Future of Health, Washington, DC, USA

Eyal Zimlichmann, Charles N. Kahn III & Claudia A. Salzberg

Federation of American Hospitals, Washington, DC, USA

Charles N. Kahn III

Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, MA, USA

David W. Bates

Harvard Medical School, Boston, MA, USA

Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, MA, USA


Contributions

C.S., K.H., N.R., and R.S. conducted initial background research and analyzed qualitative data from stakeholders. All authors (C.S., E.Z., K.H., N.R., R.S., M.M., C.K., C.A.S., and D.B.) assisted with conceptualization of the project and strategic guidance. C.S., K.H., and N.R. wrote initial drafts of the manuscript. All authors contributed to critical revisions of the manuscript and read and approved the final manuscript.

Corresponding author

Correspondence to David W. Bates.

Ethics declarations

Competing interests

C.S., K.H., N.R., and C.A.S. declare no competing interests. E.Z. reports personal fees from Arkin Holdings, personal fees from Statista and equity from Valera Health, Profility and Hello Heart. R.S. has been an external reviewer for The John A. Hartford Foundation, and is a co-chair for the Health Evolution Summit Roundtable on Value-Based Care for Specialized Populations. M.M. is an independent director on the boards of Johnson & Johnson, Cigna, Alignment Healthcare, and PrognomIQ; co-chairs the Guiding Committee for the Health Care Payment Learning and Action Network; and reports fees for serving as an adviser for Arsenal Capital Partners, Blackstone Life Sciences, and MITRE. C.K. is a Profility Board member and additionally reports equity from Valera Health and MDClone. D.W.B. reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from Valera Health, equity from Clew, equity from MDClone, personal fees and equity from AESOP, personal fees and equity from Feelbetter, equity from Guided Clinical Solutions, and grants from IBM Watson Health, outside the submitted work. D.W.B. has a patent pending (PHC-028564 US PCT), on intraoperative clinical decision support.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Silcox, C., Zimlichmann, E., Huber, K. et al. The potential for artificial intelligence to transform healthcare: perspectives from international health leaders. npj Digit. Med. 7, 88 (2024). https://doi.org/10.1038/s41746-024-01097-6


Received: 30 October 2023

Accepted: 29 March 2024

Published: 09 April 2024

DOI: https://doi.org/10.1038/s41746-024-01097-6



AI Index: State of AI in 13 Charts

In the new report, foundation models dominate, benchmarks fall, prices skyrocket, and, on the global stage, the U.S. overshadows the rest of the world.


This year’s AI Index — a 500-page report tracking 2023’s worldwide trends in AI — is out.

The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year’s report covers the rise of multimodal foundation models, major cash investments into generative AI, new performance benchmarks, shifting global opinions, and new major regulations.

Don’t have an afternoon to pore through the findings? Check out the high level here.

Pie chart showing 98 models were open-sourced in 2023

A Move Toward Open-Sourced

This past year, organizations released 149 foundation models, more than double the number released in 2022. Of these newly released models, 65.7% were open-source (meaning they can be freely used and modified by anyone), compared with only 44.4% in 2022 and 33.3% in 2021.

bar chart showing that closed models outperformed open models across tasks

But At a Cost of Performance?

Closed-source models still outperform their open-sourced counterparts. On 10 selected benchmarks, closed models achieved a median performance advantage of 24.2%, with differences ranging from as little as 4.0% on mathematical tasks like GSM8K to as much as 317.7% on agentic tasks like AgentBench.
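For readers who want to see how a statistic like this is computed, the sketch below takes per-benchmark scores for a closed and an open model, computes the closed model’s relative advantage on each benchmark, and reports the median. The scores are invented placeholders, not the AI Index’s underlying data.

```python
import statistics

# Hypothetical (closed_score, open_score) pairs for a few benchmarks.
benchmarks = {
    "GSM8K": (92.0, 88.5),      # math word problems
    "MMLU": (86.4, 70.1),       # broad knowledge
    "HumanEval": (84.0, 67.0),  # code generation
    "AgentBench": (4.0, 1.0),   # agentic tasks
}

# Relative advantage of the closed model on each benchmark, in percent.
advantages = {
    name: 100.0 * (closed - open_score) / open_score
    for name, (closed, open_score) in benchmarks.items()
}

for name, adv in sorted(advantages.items(), key=lambda kv: kv[1]):
    print(f"{name}: closed models ahead by {adv:.1f}%")
print(f"median advantage: {statistics.median(advantages.values()):.1f}%")
```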

Bar chart showing Google has more foundation models than any other company

Biggest Players

Industry dominates AI, especially in building and releasing foundation models. This past year Google edged out other industry players in releasing the most models, including Gemini and RT-2. In fact, since 2019, Google has led in releasing the most foundation models, with a total of 40, followed by OpenAI with 20. Academia trails industry: This past year, UC Berkeley released three models and Stanford two.

Line chart showing industry far outpaces academia and government in creating foundation models over the decade

Industry Dwarfs All

If you needed more striking evidence that corporate AI is the only player in the room right now, this should do it. In 2023, industry accounted for 72% of all new foundation models.

Chart showing the growing costs of training AI models

Prices Skyrocket

One of the reasons academia and government have been edged out of the AI race: the exponential increase in cost of training these giant models. Google’s Gemini Ultra cost an estimated $191 million worth of compute to train, while OpenAI’s GPT-4 cost an estimated $78 million. In comparison, in 2017, the original Transformer model, which introduced the architecture that underpins virtually every modern LLM, cost around $900.

Bar chart showing the united states produces by far the largest number of foundation models

What AI Race?

At least in terms of notable machine learning models, the United States vastly outpaced other countries in 2023, developing a total of 61. Since 2019, the U.S. has consistently led in originating the majority of notable models, followed by China and the UK.

Line chart showing that across many intellectual task categories, AI has exceeded human performance

Move Over, Human

As of 2023, AI has hit human-level performance on many significant AI benchmarks, from those testing reading comprehension to visual reasoning. Still, it falls just short on some benchmarks like competition-level math. Because AI has been blasting past so many standard benchmarks, AI scholars have had to create new and more difficult challenges. This year’s index also tracked several of these new benchmarks, including those for tasks in coding, advanced reasoning, and agentic behavior.

Bar chart showing a dip in overall private investment in AI, but a surge in generative AI investment

Private Investment Drops (But We See You, GenAI)

While AI private investment overall has steadily dropped since 2021, generative AI is gaining steam. In 2023, the sector attracted $25.2 billion, nearly nine times the investment of 2022 and about 30 times the amount from 2019 (call it the ChatGPT effect). Generative AI accounted for over a quarter of all AI-related private investment in 2023.

Bar chart showing the united states overwhelming dwarfs other countries in private investment in AI

U.S. Wins $$ Race

And again, in 2023 the United States dominates in AI private investment. In 2023, the $67.2 billion invested in the U.S. was roughly 8.7 times greater than the amount invested in the next highest country, China, and 17.8 times the amount invested in the United Kingdom. That lineup looks the same when zooming out: Cumulatively since 2013, the United States leads investments at $335.2 billion, followed by China with $103.7 billion, and the United Kingdom at $22.3 billion.

Infographic showing 26% of businesses use AI for contact-center automation, and 23% use it for personalization

Where is Corporate Adoption?

More companies are implementing AI in some part of their business: In surveys, 55% of organizations said they were using AI in 2023, up from 50% in 2022 and 20% in 2017. Businesses report using AI to automate contact centers, personalize content, and acquire new customers. 

Bar chart showing 57% of people believe AI will change how they do their job in 5 years, and 36% believe AI will replace their jobs.

Younger and Wealthier People Worry About Jobs

Globally, most people expect AI to change their jobs, and more than a third expect AI to replace them. Younger generations — Gen Z and millennials — anticipate more substantial effects from AI compared with older generations like Gen X and baby boomers. Specifically, 66% of Gen Z compared with 46% of boomer respondents believe AI will significantly affect their current jobs. Meanwhile, individuals with higher incomes, more education, and decision-making roles foresee AI having a great impact on their employment.

Bar chart depicting the countries most nervous about AI; Australia at 69%, Great Britain at 65%, and Canada at 63% top the list

While the Commonwealth Worries About AI Products

When asked in a survey whether AI products and services made them nervous, 69% of Aussies and 65% of Brits said yes. Japan is the least worried about AI products, at 23%.

Line graph showing uptick in AI regulation in the united states since 2016; 25 policies passed in 2023

Regulation Rallies

More American regulatory agencies are passing regulations to protect citizens and govern the use of AI tools and data. For example, the Copyright Office and the Library of Congress issued copyright registration guidance concerning works that contained material generated by AI, while the Securities and Exchange Commission developed a cybersecurity risk management strategy, governance, and incident disclosure plan. The agencies that passed the most regulations were the Executive Office of the President and the Commerce Department.

The AI Index was first created to track AI development. The index collaborates with such organizations as LinkedIn, Quid, McKinsey, Studyportals, the Schwartz Reisman Institute, and the International Federation of Robotics to gather the most current research and feature important insights on the AI ecosystem. 


  • Open access
  • Published: 18 April 2024

Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1 , 13 ,
  • Joseph Ali 2 , 3 ,
  • Caesar A. Atuire 4 , 5 ,
  • Phaik Yeong Cheah 6 ,
  • Armando Guio Español 7 ,
  • Judy Wawira Gichoya 8 ,
  • Adrienne Hunt 9 ,
  • Daudi Jjingo 10 ,
  • Katherine Littler 9 ,
  • Daniela Paolotti 11 &
  • Effy Vayena 12  

BMC Medical Ethics volume 25, Article number: 46 (2024). Cite this article

300 Accesses

4 Altmetric

Metrics details

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.


Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle- Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage.” (p. 81) The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health exceed a straightforward survey of ethical considerations. As Ferretti et al. (2021) contend, some capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest where researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, some features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs that are longstanding and those that are novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, where researchers from high income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data to build algorithms in LMICs to avoid stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data and the populations they represent primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement about the need to not only enhance the knowledge base of REC members about AI-based methods internationally, but to acknowledge the broader shifts required to encourage their capabilities to more fully address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on actual approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the topic of the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). As opposed to providing a detailed summary in this paper, we aim to briefly highlight central issues raised, solutions proposed, and the challenges facing the research ethics community in the years to come.

In this way, our primary aim in this paper is to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections as a group of authors on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested a great deal of resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health, including those based in LMICs. We contend that it remains rare to convene such a strong group for an extended time and believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and are thus not explicitly grounded in a normative argument. We make an effort in the Discussion section to contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table 1. Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations referred to initiatives where AI technologies were highly appropriate, such as in ear shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised with respect to privacy, trust, and community engagement in this initiative, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants raised questions about the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which resulted in the algorithm being gamed. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies to address [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to raise issues of the appropriateness of an AI-based intervention.

The second overarching thematic issue related to whether and how AI-based systems transfer from one national health context to another. One central issue raised by a number of case study presentations related to the challenges of validating an algorithm with data collected in a local environment. For example, one case study presentation described a project that would involve the collection of personally identifiable data for sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset to ensure the resulting algorithm was not biased against specific community groups when deployed in that context. However, some members of these communities might desire to be represented in the dataset, whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly when historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. It is important to note that in some countries such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding “one size fits all” solutions.

The third overarching thematic issue is related to understanding accountabilities for both the impacts of AI technologies and governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that might fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was raised as one that requires much further attention, with laws varying internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, it was recognized in breakout group discussion that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded projects have a local principal investigator who will hold accountability for how data are shared and used [ 25 ]. The issue of research partnerships with commercial entities was raised by many participants in the context of accountability, pointing toward the urgent need for clear principles related to strategies for engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue raised here is that of consent. The issue of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on the secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study involving South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that when health data are shared across borders, individual consent remains necessary when data are transferred from certain countries (Nezerith Cengiz). Broader clarity is necessary to support the ethical governance of health data uses for AI in global health research.

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders, including RECs, should remain up to date on recent advances in the regulation of AI for health. Regulation of AI for health advances rapidly and takes on different forms in jurisdictions around the world. RECs play an important role in governance, but only a partial role; it was deemed important for RECs to acknowledge how they fit within a broader governance ecosystem in order to more effectively address the issues within their scope. Not only RECs but also organizational leaders responsible for procurement, researchers, and commercial actors should commit to staying informed about the relevant approaches to regulating AI for health care and public health in jurisdictions internationally. In this way, governance can keep pace with advances in regulation.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, this is especially relevant for those studies aiming to implement an AI system for intervention into health care or public health. Organizations such as RECs can use AI impact assessments to boost understanding of potential harms at the outset of a research project, encouraging researchers to more deeply consider potential harms in the development of their study.

Fourth, forum participants suggested that governance decisions should incorporate the use of environmental impact assessments, or at least the incorporation of environmental values when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to boost understanding of potential environmental harms linked to AI research projects in global health over both the shorter and longer terms.

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share benefits related to advances in medical technologies, knowledge, and financial gains. Although enforcement of this point might be beyond the remit of RECs, commentary from governance leaders can encourage researchers to consider stronger, fairer partnerships in global health over the longer term.

Eighth, it became evident that it is necessary to explore new forms of regulatory experimentation given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially complicated to generate rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation such as regulatory sandboxes or innovation hubs in health. These spaces can have several benefits for addressing issues surrounding the regulation of AI in the health sector, such as: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and update other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Discussion

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [ 30 , 31 ]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [ 32 ]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, raising attention to the role of large commercial entities and global financial capitalism in global health research [ 21 , 22 ]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. Firstly, AI innovation is rapidly evolving, which means approaches to the governance of AI for health are rapidly evolving too. Such rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, boosting the expertise, structure, and functionality required to meet the demands of research involving AI. Secondly, the research ethics community has strong international ties, linked to a global scholarly community that is committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate efforts to produce advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health given the challenges summarized in our paper.

Limitations

Our paper includes two specific limitations that we address explicitly here. First, it is still early in the lifetime of the development of applications of AI for use in global health, and as such, the global community has had limited opportunity to learn from experience. For example, far fewer case studies, which detail experiences with the actual implementation of an AI technology, were submitted to GFBR 2022 for consideration than expected. In contrast, many more governance reports were submitted, which detail the processes and outputs of governance efforts that anticipate the development and dissemination of AI technologies. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build upon of successfully implementing AI technologies in ways that limit harms while promoting innovation. Further experience with AI technologies in global health will contribute to revising and enhancing the challenges and recommendations we have outlined in our paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Conclusions

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

References

1. Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.

2. Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

3. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

4. Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

5. Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

6. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

7. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

8. Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

9. Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

10. Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

11. Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

12. Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

13. Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

14. Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.

15. Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

16. Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

17. Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

18. Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

19. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

20. Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

21. Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

22. Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

23. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.

24. Metcalf J, Moss E. Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

25. Office of the Data Protection Commissioner, Kenya. Data Protection Act [Internet]. 2021 [cited 2023 Sep 30]. Available from: https://www.odpc.go.ke/dpa-act/.

26. Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem – rethinking data ethics and governance. Big Data Soc. 2019;6:2053951719852969.

27. Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. New York: AI Now Institute; 2018.

28. Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

29. Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

30. Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

31. Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

32. Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

Funding

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and Affiliations

Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

James Shaw

Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Joseph Ali

Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

Caesar A. Atuire

Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Phaik Yeong Cheah

Berkman Klein Center, Harvard University, Bogotá, Colombia

Armando Guio Español

Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

Judy Wawira Gichoya

Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

Adrienne Hunt & Katherine Littler

African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

Daudi Jjingo

ISI Foundation, Turin, Italy

Daniela Paolotti

Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

Effy Vayena

Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada

James Shaw


Contributions

JS led the writing, contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JA contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. CA contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. PYC contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. AE contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JWG contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. AH contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. DJ contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. KL contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. DP contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. EV contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25 , 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received: 31 October 2023

Accepted: 01 April 2024

Published: 18 April 2024

DOI: https://doi.org/10.1186/s12910-024-01044-w


Keywords

  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health




