
Computer Science > Machine Learning

Title: Scientific Machine Learning Through Physics-Informed Neural Networks: Where We Are and What's Next

Abstract: Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as Partial Differential Equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs. While the primary goal of the study is to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, starting from the vanilla PINN and covering many variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. PINNs have been applied to a wide range of problems and have proven more feasible in some contexts than classical numerical techniques like the Finite Element Method (FEM), yet advancements are still possible, most notably on theoretical issues that remain unresolved.
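The multi-task loss described in the abstract (fit observed data while reducing a PDE residual) can be sketched in a few lines. This is a minimal, illustrative toy, not the paper's implementation: the candidate "network" is a plain function, the equation is the ODE u'(x) = -u(x) with u(0) = 1, and the derivative is approximated by a central finite difference where a real PINN would use automatic differentiation. All names here are hypothetical.

```python
import math

def pinn_loss(u, data, collocation, h=1e-4):
    """Composite PINN-style loss: data misfit plus mean squared PDE residual."""
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)
    # residual of u'(x) + u(x) = 0, with u' from a central finite difference
    residuals = [(u(x + h) - u(x - h)) / (2 * h) + u(x) for x in collocation]
    pde_loss = sum(r ** 2 for r in residuals) / len(residuals)
    return data_loss + pde_loss

data = [(0.0, 1.0), (0.5, math.exp(-0.5))]   # a few observed samples
collocation = [i / 10 for i in range(11)]    # residual points in [0, 1]

exact = lambda x: math.exp(-x)   # satisfies both the data and the ODE
wrong = lambda x: 1.0 - x        # matches u(0) = 1 only

print(pinn_loss(exact, data, collocation))   # effectively zero
print(pinn_loss(wrong, data, collocation))   # clearly positive
```

Training a PINN amounts to minimizing such a loss over network parameters; here the loss is only evaluated, which is enough to show why the exact solution is a global minimizer.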


Physics-informed machine learning: from concepts to real-world applications

Machine learning (ML) has caused a fundamental shift in how we practice science, with many now placing learning from data at the focal point of their research. As the complexity of the scientific problems we want to study increases, and the amount of data generated by today's scientific experiments grows, ML is helping to automate, accelerate and enhance traditional workflows.

Emerging at the forefront of this revolution is a field called scientific machine learning (SciML). The ce...


  • Moseley_2022_Physics-informed_machine_learning.pdf (Dissemination version, pdf, 60.3MB)


  • Review Article
  • Published: 24 May 2021

Physics-informed machine learning

  • George Em Karniadakis (ORCID: 0000-0002-9713-7120)
  • Ioannis G. Kevrekidis
  • Lu Lu (ORCID: 0000-0002-5476-5768)
  • Paris Perdikaris
  • Sifan Wang
  • Liu Yang (ORCID: 0000-0002-7476-9168)

Nature Reviews Physics, volume 3, pages 422–440 (2021)


  • Applied mathematics
  • Computational science

Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
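The abstract's idea of training "from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain)" can be made concrete with a small sketch. Under illustrative assumptions (a candidate solution given as a plain function, derivatives from central finite differences rather than automatic differentiation), we sample random collocation points and evaluate the residual of the heat equation u_t = u_xx:

```python
import math
import random

def heat_residual(u, x, t, h=1e-3):
    """Residual u_t - u_xx at (x, t), via central finite differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t - u_xx

# random collocation points in the continuous space-time domain [0, pi] x [0, 1]
random.seed(0)
points = [(random.uniform(0, math.pi), random.uniform(0, 1)) for _ in range(100)]

# u(x, t) = exp(-t) sin(x) solves the heat equation exactly
u = lambda x, t: math.exp(-t) * math.sin(x)

mean_sq = sum(heat_residual(u, x, t) ** 2 for x, t in points) / len(points)
print(mean_sq)  # tiny: the candidate satisfies the PDE at the sampled points
```

No solution data is needed for this term: the PDE itself supplies the supervision signal at arbitrarily many sampled points, which is precisely what makes physics-informed learning viable in small-data regimes.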

Physics-informed machine learning seamlessly integrates data and mathematical physics models, even in partially understood, uncertain and high-dimensional contexts.

Kernel-based or neural network-based regression methods offer effective, simple and meshless implementations.

Physics-informed neural networks are effective and efficient for ill-posed and inverse problems, and combined with domain decomposition are scalable to large problems.

Operator regression, search for new intrinsic variables and representations, and equivariant neural network architectures with built-in physical constraints are promising areas of future research.

There is a need for developing new frameworks and standardized benchmarks as well as new mathematics for scalable, robust and rigorous next-generation physics-informed learning machines.
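The domain-decomposition idea mentioned in the key points (and realized in variants such as conservative and extended PINNs) can be sketched as follows. This is a hedged toy, not any published implementation: each subdomain gets its own candidate solution (a plain function standing in for a subnetwork), and the total loss adds an interface term that penalizes mismatch between neighbours; all names are illustrative.

```python
import math

def residual(u, x, h=1e-4):
    # residual of the toy ODE u'(x) = -u(x), derivative via central difference
    return (u(x + h) - u(x - h)) / (2 * h) + u(x)

def decomposed_loss(u_left, u_right, interface=0.5):
    """PDE residuals on two subdomains plus a continuity term at the interface."""
    left_pts = [i / 10 for i in range(6)]          # collocation in [0, 0.5]
    right_pts = [0.5 + i / 10 for i in range(6)]   # collocation in [0.5, 1]
    pde = sum(residual(u_left, x) ** 2 for x in left_pts)
    pde += sum(residual(u_right, x) ** 2 for x in right_pts)
    # stitch the pieces together: solutions must agree across the interface
    stitch = (u_left(interface) - u_right(interface)) ** 2
    return pde + stitch

exact = lambda x: math.exp(-x)
print(decomposed_loss(exact, exact))           # ~0: consistent subdomain pieces
print(decomposed_loss(exact, lambda x: 0.5))   # penalized: PDE and interface mismatch
```

Because each subdomain only sees its own collocation points, the subproblems can be trained in parallel, which is what makes this formulation scale to large problems.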



Hart, J. K. & Martinez, K. Environmental sensor networks: a revolution in the earth system science? Earth Sci. Rev. 78 , 177–191 (2006).

Article   ADS   Google Scholar  

Kurth, T. et al. Exascale deep learning for climate analytics (IEEE, 2018).

Reddy, D. S. & Prasad, P. R. C. Prediction of vegetation dynamics using NDVI time series data and LSTM. Model. Earth Syst. Environ. 4 , 409–419 (2018).

Article   Google Scholar  

Reichstein, M. et al. Deep learning and process understanding for data-driven earth system science. Nature 566 , 195–204 (2019).

Alber, M. et al. Integrating machine learning and multiscale modeling — perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences. NPJ Digit. Med. 2 , 1–11 (2019).

Iten, R., Metger, T., Wilming, H., Del Rio, L. & Renner, R. Discovering physical concepts with neural networks. Phys. Rev. Lett. 124 , 010508 (2020).

Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378 , 686–707 (2019).

Article   ADS   MathSciNet   MATH   Google Scholar  

Schmidt, M. & Lipson, H. Distilling free-form natural laws from experimental data. Science 324 , 81–85 (2009).

Brunton, S. L., Proctor, J. L. & Kutz, J. N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl Acad. Sci. USA 113 , 3932–3937 (2016).

Jasak, H. et al. OpenFOAM: A C++ library for complex physics simulations. Int. Workshop Coupled Methods Numer. Dyn. 1000 , 1–20 (2007).

ADS   Google Scholar  

Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117 , 1–19 (1995).

Article   ADS   MATH   Google Scholar  

Jia, X. et al. Physics-guided machine learning for scientific discovery: an application in simulating lake temperature profiles. Preprint at arXiv https://arxiv.org/abs/2001.11086 (2020).

Lu, L., Jin, P., Pang, G., Zhang, Z. & Karniadakis, G. E. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell. 3 , 218–229 (2021).

Kashefi, A., Rempe, D. & Guibas, L. J. A point-cloud deep learning framework for prediction of fluid flow fields on irregular geometries. Phys. Fluids 33 , 027104 (2021).

Li, Z. et al. Fourier neural operator for parametric partial differential equations. in Int. Conf. Learn. Represent. (2021).

Yang, Y. & Perdikaris, P. Conditional deep surrogate models for stochastic, high-dimensional, and multi-fidelity systems. Comput. Mech. 64 , 417–434 (2019).

Article   MathSciNet   MATH   Google Scholar  

LeCun, Y. & Bengio, Y. et al. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 3361 , 1995 (1995).

Google Scholar  

Mallat, S. Understanding deep convolutional networks. Phil. Trans. R. Soc. A 374 , 20150203 (2016).

Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A. & Vandergheynst, P. Geometric deep learning: going beyond Euclidean data. IEEE Signal Process. Mag. 34 , 18–42 (2017).

Cohen, T., Weiler, M., Kicanaoglu, B. & Welling, M. Gauge equivariant convolutional networks and the icosahedral CNN. Proc. Machine Learn. Res. 97 , 1321–1330 (2019).

Owhadi, H. Multigrid with rough coefficients and multiresolution operator decomposition from hierarchical information games. SIAM Rev. 59 , 99–149 (2017).

Raissi, M., Perdikaris, P. & Karniadakis, G. E. Inferring solutions of differential equations using noisy multi-fidelity data. J. Comput. Phys. 335 , 736–746 (2017).

Raissi, M., Perdikaris, P. & Karniadakis, G. E. Numerical Gaussian processes for time-dependent and nonlinear partial differential equations. SIAM J. Sci. Comput. 40 , A172–A198 (2018).

Owhadi, H. Bayesian numerical homogenization. Multiscale Model. Simul. 13 , 812–828 (2015).

Hamzi, B. & Owhadi, H. Learning dynamical systems from data: a simple cross-validation perspective, part I: parametric kernel flows. Physica D 421 , 132817 (2021).

Article   MathSciNet   Google Scholar  

Reisert, M. & Burkhardt, H. Learning equivariant functions with matrix valued kernels. J. Mach. Learn. Res. 8 , 385–408 (2007).

MathSciNet   MATH   Google Scholar  

Owhadi, H. & Yoo, G. R. Kernel flows: from learning kernels from data into the abyss. J. Comput. Phys. 389 , 22–47 (2019).

Winkens, J., Linmans, J., Veeling, B. S., Cohen, T. S. & Welling, M. Improved semantic segmentation for histopathology using rotation equivariant convolutional networks. in Conf. Med. Imaging Deep Learn. (2018).

Bruna, J. & Mallat, S. Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intell. 35 , 1872–1886 (2013).

Kondor, R., Son, H. T., Pan, H., Anderson, B. & Trivedi, S. Covariant compositional networks for learning graphs. Preprint at arXiv https://arxiv.org/abs/1801.02144 (2018).

Tai, K. S., Bailis, P. & Valiant, G. Equivariant transformer networks. Proc. Int. Conf. Mach. Learn. 97 , 6086–6095 (2019).

Pfau, D., Spencer, J. S., Matthews, A. G. & Foulkes, W. M. C. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Phys. Rev. Res. 2 , 033429 (2020).

Pun, G. P., Batra, R., Ramprasad, R. & Mishin, Y. Physically informed artificial neural networks for atomistic modeling of materials. Nat. Commun. 10 , 1–10 (2019).

Ling, J., Kurzawski, A. & Templeton, J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807 , 155–166 (2016).

Jin, P., Zhang, Z., Zhu, A., Tang, Y. & Karniadakis, G. E. SympNets: intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems. Neural Netw. 132 , 166–179 (2020).

Lusch, B., Kutz, J. N. & Brunton, S. L. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 9 , 4950 (2018).

Lagaris, I. E., Likas, A. & Fotiadis, D. I. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 9 , 987–1000 (1998).

Sheng, H. & Yang, C. PFNN: A penalty-free neural network method for solving a class of second-order boundary-value problems on complex geometries. J. Comput. Phys. 428 , 110085 (2021).

McFall, K. S. & Mahan, J. R. Artificial neural network method for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions. IEEE Transac. Neural Netw. 20 , 1221–1233 (2009).

Beidokhti, R. S. & Malek, A. Solving initial-boundary value problems for systems of partial differential equations using neural networks and optimization techniques. J. Franklin Inst. 346 , 898–913 (2009).

Lagari, P. L., Tsoukalas, L. H., Safarkhani, S. & Lagaris, I. E. Systematic construction of neural forms for solving partial differential equations inside rectangular domains, subject to initial, boundary and interface conditions. Int. J. Artif. Intell. Tools 29 , 2050009 (2020).

Zhang, D., Guo, L. & Karniadakis, G. E. Learning in modal space: solving time-dependent stochastic PDEs using physics-informed neural networks. SIAM J. Sci. Comput. 42 , A639–A665 (2020).

Dong, S. & Ni, N. A method for representing periodic functions and enforcing exactly periodic boundary conditions with deep neural networks. J. Comput. Phys . 435 , 110242 (2021).

Wang, B., Zhang, W. & Cai, W. Multi-scale deep neural network (MscaleDNN) methods for oscillatory stokes flows in complex domains. Commun. Comput. Phys. 28 , 2139–2157 (2020).

Liu, Z., Cai, W. & Xu, Z. Q. J. Multi-scale deep neural network (MscaleDNN) for solving Poisson–Boltzmann equation in complex domains. Commun. Comput. Phys. 28 , 1970–2001 (2020).

Mattheakis, M., Protopapas, P., Sondak, D., Di Giovanni, M. & Kaxiras, E. Physical symmetries embedded in neural networks. Preprint at arXiv https://arxiv.org/abs/1904.08991 (2019).

Cai, W., Li, X. & Liu, L. A phase shift deep neural network for high frequency approximation and wave problems. SIAM J. Sci. Comput. 42 , A3285–A3312 (2020).

Darbon, J. & Meng, T. On some neural network architectures that can represent viscosity solutions of certain high dimensional Hamilton-Jacobi partial differential equations. J. Comput. Phys. 425 , 109907 (2021).

Sirignano, J. & Spiliopoulos, K. DGM: a deep learning algorithm for solving partial differential equations. J. Comput. Phys. 375 , 1339–1364 (2018).

Kissas, G. et al. Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 358 , 112623 (2020).

Zhu, Y., Zabaras, N., Koutsourelakis, P. S. & Perdikaris, P. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J. Comput. Phys. 394 , 56–81 (2019).

Geneva, N. & Zabaras, N. Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. J. Comput. Phys. 403 , 109056 (2020).

Wu, J. L. et al. Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems. J. Comput. Phys. 406 , 109209 (2020).

Pfrommer, S., Halm, M. & Posa, M. Contactnets: learning of discontinuous contact dynamics with smooth, implicit representations. Preprint at arXiv https://arxiv.org/abs/2009.11193 (2020).

Erichson, N.B., Muehlebach, M. & Mahoney, M. W. Physics-informed autoencoders for Lyapunov-stable fluid flow prediction. Preprint at arXiv https://arxiv.org/abs/1905.10866 (2019).

Shah, V. et al. Encoding invariances in deep generative models. Preprint at arXiv https://arxiv.org/abs/1906.01626 (2019).

Geneva, N. & Zabaras, N. Transformers for modeling physical systems. Preprint at arXiv https://arxiv.org/abs/2010.03957 (2020).

Li, Z. et al. Multipole graph neural operator for parametric partial differential equations. in Adv. Neural Inf. Process. Syst. (2020).

Nelsen, N. H. & Stuart, A. M. The random feature model for input–output maps between Banach spaces. Preprint at arXiv https://arxiv.org/abs/2005.10224 (2020).

Cai, S., Wang, Z., Lu, L., Zaki, T. A. & Karniadakis, G. E. DeepM&Mnet: inferring the electroconvection multiphysics fields based on operator approximation by neural networks. J. Comput. Phys . 436 , 110296 (2020).

Mao, Z., Lu, L., Marxen, O., Zaki, T. A. & Karniadakis, G. E. DeepM&Mnet for hypersonics: predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators. Preprint at arXiv https://arxiv.org/abs/2011.03349 (2020).

Meng, X. & Karniadakis, G. E. A composite neural network that learns from multi-fidelity data: application to function approximation and inverse PDE problems. J. Comput. Phys. 401 , 109020 (2020).

Sirignano, J., MacArt, J. F. & Freund, J. B. DPM: a deep learning PDE augmentation method with application to large-eddy simulation. J. Comput. Phys. 423 , 109811 (2020).

Lu, L. et al. Extraction of mechanical properties of materials through deep learning from instrumented indentation. Proc. Natl Acad. Sci. USA 117 , 7052–7062 (2020).

Reyes, B., Howard, A. A., Perdikaris, P. & Tartakovsky, A. M. Learning unknown physics of non-Newtonian fluids. Preprint at arXiv https://arxiv.org/abs/2009.01658 (2020).

Wang, W. & Gómez-Bombarelli, R. Coarse-graining auto-encoders for molecular dynamics. NPJ Comput. Mater. 5 , 1–9 (2019).

Rico-Martinez, R., Anderson, J. & Kevrekidis, I. Continuous-time nonlinear signal processing: a neural network based approach for gray box identification (IEEE, 1994).

Xu, K., Huang, D. Z. & Darve, E. Learning constitutive relations using symmetric positive definite neural networks. Preprint at arXiv https://arxiv.org/abs/2004.00265 (2020).

Huang, D. Z., Xu, K., Farhat, C. & Darve, E. Predictive modeling with learned constitutive laws from indirect observations. Preprint at arXiv https://arxiv.org/abs/1905.12530 (2019).

Xu, K., Tartakovsky, A. M., Burghardt, J. & Darve, E. Inverse modeling of viscoelasticity materials using physics constrained learning. Preprint at arXiv https://arxiv.org/abs/2005.04384 (2020).

Li, D., Xu, K., Harris, J. M. & Darve, E. Coupled time-lapse full-waveform inversion for subsurface flow problems using intrusive automatic differentiation. Water Resour. Res. 56 , e2019WR027032 (2020).

Tartakovsky, A., Marrero, C. O., Perdikaris, P., Tartakovsky, G. & Barajas-Solano, D. Physics-informed deep neural networks for learning parameters and constitutive relationships in subsurface flow problems. Water Resour. Res. 56 , e2019WR026731 (2020).

Xu, K. & Darve, E. Adversarial numerical analysis for inverse problems. Preprint at arXiv https://arxiv.org/abs/1910.06936 (2019).

Yang, Y., Bhouri, M. A. & Perdikaris, P. Bayesian differential programming for robust systems identification under uncertainty. Proc. R. Soc. A 476 , 20200290 (2020).

Article   ADS   MathSciNet   Google Scholar  

Rackauckas, C. et al. Universal differential equations for scientific machine learning. Preprint at arXiv https://arxiv.org/abs/2001.04385 (2020).

Wang, S., Yu, X. & Perdikaris, P. When and why PINNs fail to train: a neural tangent kernel perspective. Preprint at arXiv https://arxiv.org/abs/2007.14527 (2020).

Wang, S., Wang, H. & Perdikaris, P. On the eigenvector bias of Fourier feature networks: from regression to solving multi-scale PDEs with physics-informed neural networks. Preprint at arXiv https://arxiv.org/abs/2012.10047 (2020).

Pang, G., Yang, L. & Karniadakis, G. E. Neural-net-induced Gaussian process regression for function approximation and PDE solution. J. Comput. Phys. 384 , 270–288 (2019).

Wilson, A. G., Hu, Z., Salakhutdinov, R. & Xing, E. P. Deep kernel learning. Proc. Int. Conf. Artif. Intell. Stat. 51 , 370–378 (2016).

Owhadi, H. Do ideas have shape? Plato’s theory of forms as the continuous limit of artificial neural networks. Preprint at arXiv https://arxiv.org/abs/2008.03920 (2020).

Owhadi, H. & Scovel, C. Operator-Adapted Wavelets, Fast Solvers, and Numerical Homogenization: From a Game Theoretic Approach to Numerical Approximation and Algorithm Design (Cambridge Univ. Press, 2019).

Micchelli, C. A. & Rivlin, T. J. in Optimal Estimation in Approximation Theory (eds. Micchelli, C. A. & Rivlin, T. J.) 1–54 (Springer, 1977).

Sard, A. Linear Approximation (Mathematical Surveys 9, American Mathematical Society, 1963).

Larkin, F. Gaussian measure in Hilbert space and applications in numerical analysis. Rocky Mt. J. Math. 2 , 379–421 (1972).

Sul’din, A. V. Wiener measure and its applications to approximation methods. I. Izv. Vyssh. Uchebn. Zaved. Mat . 3 , 145–158 (1959).

Diaconis, P. Bayesian numerical analysis. Stat. Decision Theory Relat. Top. IV 1 , 163–175 (1988).

Kimeldorf, G. S. & Wahba, G. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Ann. Math. Stat. 41 , 495–502 (1970).

Owhadi, H., Scovel, C. & Schäfer, F. Statistical numerical approximation. Not. Am. Math. Soc. 66 , 1608–1617 (2019).

Tsai, Y. H. H., Bai, S., Yamada, M., Morency, L. P. & Salakhutdinov, R. Transformer dissection: a unified understanding of transformer’s attention via the lens of kernel. Preprint at arXiv https://arxiv.org/abs/1908.11775 (2019).

Kadri, H. et al. Operator-valued kernels for learning from functional response data. J. Mach. Learn. Res. 17 , 1–54 (2016).

MathSciNet   Google Scholar  

González-García, R., Rico-Martínez, R. & Kevrekidis, I. G. Identification of distributed parameter systems: a neural net based approach. Comput. Chem. Eng. 22 , S965–S968 (1998).

Long, Z., Lu, Y., Ma, X. & Dong, B. PDE-Net: learning PDEs from data. Proc. Int. Conf. Mach. Learn. 80 , 3208–3216 (2018).

He, J. & Xu, J. MgNet: a unified framework of multigrid and convolutional neural network. Sci. China Math. 62 , 1331–1354 (2019).

He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition (IEEE, 2016).

Rico-Martinez, R., Krischer, K., Kevrekidis, I., Kube, M. & Hudson, J. Discrete- vs. continuous-time nonlinear signal processing of Cu electrodissolution data. Chem. Eng. Commun. 118 , 25–48 (1992).

Weinan, E. A proposal on machine learning via dynamical systems. Commun. Math. Stat. 5 , 1–11 (2017).

Chen, T. Q., Rubanova, Y., Bettencourt, J. & Duvenaud, D. K. Neural ordinary differential equations. Adv. Neural Inf. Process. Syst. 31 , 6571–6583 (2018).

Jia, J. & Benson, A. R. Neural jump stochastic differential equations. Adv. Neural Inf. Process. Syst. 32 , 9847–9858 (2019).

Rico-Martinez, R., Kevrekidis, I. & Krischer, K. in Neural Networks for Chemical Engineers (ed. Bulsari, A. B.) 409–442 (Elsevier, 1995).

He, J., Li, L., Xu, J. & Zheng, C. ReLU deep neural networks and linear finite elements. J. Comput. Math. 38 , 502–527 (2020).

Jagtap, A. D., Kharazmi, E. & Karniadakis, G. E. Conservative physics-informed neural networks on discrete domains for conservation laws: applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 365 , 113028 (2020).

Yang, L., Zhang, D. & Karniadakis, G. E. Physics-informed generative adversarial networks for stochastic differential equations. SIAM J. Sci. Comput. 42 , A292–A317 (2020).

Pang, G., Lu, L. & Karniadakis, G. E. fPINNs: fractional physics-informed neural networks. SIAM J. Sci. Comput. 41 , A2603–A2626 (2019).

Kharazmi, E., Zhang, Z. & Karniadakis, G. E. hp-VPINNs: variational physics-informed neural networks with domain decomposition. Comput. Methods Appl. Mech. Eng. 374 , 113547 (2021).

Jagtap, A. D. & Karniadakis, G. E. Extended physics-informed neural networks (XPINNs): a generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. Commun. Comput. Phys. 28 , 2002–2041 (2020).

Raissi, M., Yazdani, A. & Karniadakis, G. E. Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations. Science 367 , 1026–1030 (2020).

Yang, L., Meng, X. & Karniadakis, G. E. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. J. Comput. Phys. 415 , 109913 (2021).

Wang, S. & Perdikaris, P. Deep learning of free boundary and Stefan problems. J. Comput. Phys. 428 , 109914 (2020).

Spigler, S. et al. A jamming transition from under-to over-parametrization affects generalization in deep learning. J. Phys. A 52 , 474001 (2019).

Geiger, M. et al. Scaling description of generalization with number of parameters in deep learning. J. Stat. Mech. Theory Exp. 2020 , 023401 (2020).

Belkin, M., Hsu, D., Ma, S. & Mandal, S. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proc. Natl Acad. Sci. USA 116 , 15849–15854 (2019).

Geiger, M. et al. Jamming transition as a paradigm to understand the loss landscape of deep neural networks. Phys. Rev. E 100 , 012115 (2019).

Mei, S., Montanari, A. & Nguyen, P. M. A mean field view of the landscape of two-layer neural networks. Proc. Natl Acad. Sci. USA 115 , E7665–E7671 (2018).

Mehta, P. & Schwab, D. J. An exact mapping between the variational renormalization group and deep learning. Preprint at arXiv https://arxiv.org/abs/1410.3831 (2014).

Stoudenmire, E. & Schwab, D. J. Supervised learning with tensor networks. Adv. Neural Inf. Process. Syst. 29 , 4799–4807 (2016).

Choromanska, A., Henaff, M., Mathieu, M., Arous, G. B. & LeCun, Y. The loss surfaces of multilayer networks. Proc. Artif. Intell. Stat. 38 , 192–204 (2015).

Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J. & Ganguli, S. Exponential expressivity in deep neural networks through transient chaos. Adv. Neural Inf. Process. Syst. 29 , 3360–3368 (2016).

Yang, G. & Schoenholz, S. Mean field residual networks: on the edge of chaos. Adv. Neural Inf. Process. Syst. 30 , 7103–7114 (2017).

Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Why and when can deep — but not shallow — networks avoid the curse of dimensionality: a review. Int. J. Autom. Comput. 14 , 503–519 (2017).

Grohs, P., Hornung, F., Jentzen, A. & Von Wurstemberger, P. A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations. Preprint at arXiv https://arxiv.org/abs/1809.02362 (2018).

Han, J., Jentzen, A. & Weinan, E. Solving high-dimensional partial differential equations using deep learning. Proc. Natl Acad. Sci. USA 115 , 8505–8510 (2018).

Goodfellow, I. et al. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27 , 2672–2680 (2014).

Brock, A., Donahue, J. & Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. in Int. Conf. Learn. Represent. (2019).

Yu, L., Zhang, W., Wang, J. & Yu, Y. SeqGAN: sequence generative adversarial nets with policy gradient (AAAI Press, 2017).

Zhu, J.Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks (IEEE, 2017).

Yang, L., Daskalakis, C. & Karniadakis, G. E. Generative ensemble-regression: learning particle dynamics from observations of ensembles with physics-informed deep generative models. Preprint at arXiv https://arxiv.org/abs/2008.01915 (2020).

Lanthaler, S., Mishra, S. & Karniadakis, G. E. Error estimates for DeepONets: a deep learning framework in infinite dimensions. Preprint at arXiv https://arxiv.org/abs/2102.09618 (2021).

Deng, B., Shin, Y., Lu, L., Zhang, Z. & Karniadakis, G. E. Convergence rate of DeepONets for learning operators arising from advection–diffusion equations. Preprint at arXiv https://arxiv.org/abs/2102.10621 (2021).

Xiu, D. & Karniadakis, G. E. The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput. 24 , 619–644 (2002).

Marzouk, Y. M., Najm, H. N. & Rahn, L. A. Stochastic spectral methods for efficient Bayesian solution of inverse problems. J. Comput. Phys. 224 , 560–586 (2007).

Stuart, A. M. Inverse problems: a Bayesian perspective. Acta Numerica 19 , 451 (2010).

Tripathy, R. K. & Bilionis, I. Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification. J. Comput. Phys. 375 , 565–588 (2018).

Karumuri, S., Tripathy, R., Bilionis, I. & Panchal, J. Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks. J. Comput. Phys. 404 , 109120 (2020).

Yang, Y. & Perdikaris, P. Adversarial uncertainty quantification in physics-informed neural networks. J. Comput. Phys. 394 , 136–152 (2019).

Raissi, M., Perdikaris, P. & Karniadakis, G. E. Machine learning of linear differential equations using Gaussian processes. J. Comput. Phys. 348 , 683–693 (2017).

Fan, D. et al. A robotic intelligent towing tank for learning complex fluid-structure dynamics. Sci. Robotics 4 , eaay5063 (2019).

Winovich, N., Ramani, K. & Lin, G. ConvPDE-UQ: convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains. J. Comput. Phys. 394 , 263–279 (2019).

Zhang, D., Lu, L., Guo, L. & Karniadakis, G. E. Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. J. Comput. Phys. 397 , 108850 (2019).

Gal, Y. & Ghahramani, Z. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. Proc. Int. Conf. Mach. Learn. 48 , 1050–1059 (2016).

Cai, S. et al. Flow over an espresso cup: inferring 3-D velocity and pressure fields from tomographic background oriented Schlieren via physics-informed neural networks. J. Fluid Mech. 915 (2021).

Mathews, A., Francisquez, M., Hughes, J. & Hatch, D. Uncovering edge plasma dynamics via deep learning from partial observations. Preprint at arXiv https://arxiv.org/abs/2009.05005 (2020).

Rotskoff, G. M. & Vanden-Eijnden, E. Learning with rare data: using active importance sampling to optimize objectives dominated by rare events. Preprint at arXiv https://arxiv.org/abs/2008.06334 (2020).

Patel, R. G. et al. Thermodynamically consistent physics-informed neural networks for hyperbolic systems. Preprint at arXiv https://arxiv.org/abs/2012.05343 (2020).

Shukla, K., Di Leoni, P. C., Blackshire, J., Sparkman, D. & Karniadakis, G. E. Physics-informed neural network for ultrasound nondestructive quantification of surface breaking cracks. J. Nondestruct. Eval. 39 , 1–20 (2020).

Behler, J. & Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 98 , 146401 (2007).

Zhang, L., Han, J., Wang, H., Car, R. & Weinan, E. Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. Phys. Rev. Lett. 120 , 143001 (2018).

Jia, W. et al. Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning. Preprint at arXiv https://arxiv.org/abs/2005.00223 (2020).

Nakata, A. et al. Large scale and linear scaling DFT with the CONQUEST code. J. Chem. Phys. 152 , 164112 (2020).

Zhu, W., Xu, K., Darve, E. & Beroza, G. C. A general approach to seismic inversion with automatic differentiation. Preprint at arXiv https://arxiv.org/abs/2003.06027 (2020).

Abadi, M. et al. TensorFlow: a system for large-scale machine learning. Proc. OSDI 16 , 265–283 (2016).

Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 32 , 8026–8037 (2019).

Chollet, F. et al. Keras — Deep learning library. Keras https://keras.io (2015).

Frostig, R., Johnson, M. J. & Leary, C. Compiling machine learning programs via high-level tracing. in Syst. Mach. Learn. (2018).

Lu, L., Meng, X., Mao, Z. & Karniadakis, G. E. DeepXDE: a deep learning library for solving differential equations. SIAM Rev. 63 , 208–228 (2021).

Hennigh, O. et al. NVIDIA SimNet: an AI-accelerated multi-physics simulation framework. Preprint at arXiv https://arxiv.org/abs/2012.07938 (2020).

Koryagin, A., Khudorozkov, R. & Tsimfer, S. PyDEns: a Python framework for solving differential equations with neural networks. Preprint at arXiv https://arxiv.org/abs/1909.11544 (2019).

Chen, F. et al. NeuroDiffEq: A python package for solving differential equations with neural networks. J. Open Source Softw. 5 , 1931 (2020).

Rackauckas, C. & Nie, Q. DifferentialEquations.jl — a performant and feature-rich ecosystem for solving differential equations in Julia. J. Open Res. Softw. 5 , 15 (2017).

Haghighat, E. & Juanes, R. SciANN: a Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks. Comput. Meth. Appl. Mech. Eng. 373 , 113552 (2020).

Xu, K. & Darve, E. ADCME: Learning spatially-varying physical fields using deep neural networks. Preprint at arXiv https://arxiv.org/abs/2011.11955 (2020).

Gardner, J. R., Pleiss, G., Bindel, D., Weinberger, K. Q. & Wilson, A. G. GPyTorch: blackbox matrix–matrix Gaussian process inference with GPU acceleration. Adv. Neural Inf. Process. Syst. 31 , 7587–7597 (2018).

Novak, R. et al. Neural Tangents: fast and easy infinite neural networks in Python. in Conf. Neural Inform. Process. Syst. (2020).

Xu, K. & Darve, E. Physics constrained learning for data-driven inverse modeling from sparse observations. Preprint at arXiv https://arxiv.org/abs/2002.10521 (2020).

Xu, K. & Darve, E. The neural network approach to inverse problems in differential equations. Preprint at arXiv https://arxiv.org/abs/1901.07758 (2019).

Xu, K., Zhu, W. & Darve, E. Distributed machine learning for computational engineering using MPI. Preprint at arXiv https://arxiv.org/abs/2011.01349 (2020).

Elsken, T., Metzen, J. H. & Hutter, F. Neural architecture search: a survey. J. Mach. Learn. Res. 20 , 1–21 (2019).

He, X., Zhao, K. & Chu, X. AutoML: a survey of the state-of-the-art. Knowl. Based Syst. 212 , 106622 (2021).

Hospedales, T., Antoniou, A., Micaelli, P. & Storkey, A. Meta-learning in neural networks: a survey. Preprint at arXiv https://arxiv.org/abs/2004.05439 (2020).

Xu, Z.-Q. J., Zhang, Y., Luo, T., Xiao, Y. & Ma, Z. Frequency principle: Fourier analysis sheds light on deep neural networks. Commun. Comput. Phys. 28 , 1746–1767 (2020).

Rahaman, N. et al. On the spectral bias of neural networks. Proc. Int. Conf. Mach. Learn. 97 , 5301–5310 (2019).

Ronen, B., Jacobs, D., Kasten, Y. & Kritchman, S. The convergence rate of neural networks for learned functions of different frequencies. Adv. Neural Inf. Process. Syst. 32 , 4761–4771 (2019).

Cao, Y., Fang, Z., Wu, Y., Zhou, D. X. & Gu, Q. Towards understanding the spectral bias of deep learning. Preprint at arXiv https://arxiv.org/abs/1912.01198 (2019).

Wang, S., Teng, Y. & Perdikaris, P. Understanding and mitigating gradient pathologies in physics-informed neural networks. Preprint at arXiv https://arxiv.org/abs/2001.04536 (2020).

Tancik, M. et al. Fourier features let networks learn high frequency functions in low dimensional domains. Adv. Neural Inf. Process. Syst. 33 (2020).

Cai, W. & Xu, Z. Q. J. Multi-scale deep neural networks for solving high dimensional PDEs. Preprint at arXiv https://arxiv.org/abs/1910.11710 (2019).

Arbabi, H., Bunder, J. E., Samaey, G., Roberts, A. J. & Kevrekidis, I. G. Linking machine learning with multiscale numerics: data-driven discovery of homogenized equations. JOM 72 , 4444–4457 (2020).

Owhadi, H. & Zhang, L. Metric-based upscaling. Commun. Pure Appl. Math. 60 , 675–723 (2007).

Blum, A. L. & Rivest, R. L. Training a 3-node neural network is NP-complete. Neural Netw. 5 , 117–127 (1992).

Lee, J. D., Simchowitz, M., Jordan, M. I. & Recht, B. Gradient descent only converges to minimizers. Annu. Conf. Learn. Theory 49 , 1246–1257 (2016).

Jagtap, A. D., Kawaguchi, K. & Em Karniadakis, G. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks. Proc. R. Soc. A 476 , 20200334 (2020).

Wight, C. L. & Zhao, J. Solving Allen–Cahn and Cahn–Hilliard equations using the adaptive physics informed neural networks. Preprint at arXiv https://arxiv.org/abs/2007.04542 (2020).

Goswami, S., Anitescu, C., Chakraborty, S. & Rabczuk, T. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture. Theor. Appl. Fract. Mech. 106 , 102447 (2020).

Betancourt, M. A geometric theory of higher-order automatic differentiation. Preprint at arXiv https://arxiv.org/abs/1812.11592 (2018).

Bettencourt, J., Johnson, M. J. & Duvenaud, D. Taylor-mode automatic differentiation for higher-order derivatives in JAX. in Conf. Neural Inform. Process. Syst. (2019).

Newman, D., Hettich, S., Blake, C. & Merz, C. UCI repository of machine learning databases. ICS http://www.ics.uci.edu/~mlearn/MLRepository.html (1998).

Bianco, S., Cadene, R., Celona, L. & Napoletano, P. Benchmark analysis of representative deep neural network architectures. IEEE Access 6 , 64270–64277 (2018).

Vlachas, P. R. et al. Backpropagation algorithms and reservoir computing in recurrent neural networks for the forecasting of complex spatiotemporal dynamics. Neural Networks (2020).

Shin, Y., Darbon, J. & Karniadakis, G. E. On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs. Commun. Comput. Phys. 28 , 2042–2074 (2020).

Mishra, S. & Molinaro, R. Estimates on the generalization error of physics informed neural networks (PINNs) for approximating PDEs. Preprint at arXiv https://arxiv.org/abs/2006.16144 (2020).

Mishra, S. & Molinaro, R. Estimates on the generalization error of physics informed neural networks (PINNs) for approximating PDEs II: a class of inverse problems. Preprint at arXiv https://arxiv.org/abs/2007.01138 (2020).

Shin, Y., Zhang, Z. & Karniadakis, G.E. Error estimates of residual minimization using neural networks for linear PDEs. Preprint at arXiv https://arxiv.org/abs/2010.08019 (2020).

Kharazmi, E., Zhang, Z. & Karniadakis, G. Variational physics-informed neural networks for solving partial differential equations. Preprint at arXiv https://arxiv.org/abs/1912.00873 (2019).

Jo, H., Son, H., Hwang, H. Y. & Kim, E. Deep neural network approach to forward-inverse problems. Netw. Heterog. Media 15 , 247–259 (2020).

Guo, M. & Haghighat, E. An energy-based error bound of physics-informed neural network solutions in elasticity. Preprint at arXiv https://arxiv.org/abs/2010.09088 (2020).

Lee, J. Y., Jang, J. W. & Hwang, H. J. The model reduction of the Vlasov–Poisson–Fokker–Planck system to the Poisson–Nernst–Planck system via the deep neural network approach. Preprint at arXiv https://arxiv.org/abs/2009.13280 (2020).

Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. in Int. Conf. Learn. Represent. (2015).

Luo, T. & Yang, H. Two-layer neural networks for partial differential equations: optimization and generalization theory. Preprint at arXiv https://arxiv.org/abs/2006.15733 (2020).

Jacot, A., Gabriel, F. & Hongler, C. Neural tangent kernel: convergence and generalization in neural networks. Adv. Neural Inf. Process. Syst. 31 , 8571–8580 (2018).

Alnæs, M. et al. The FEniCS project version 1.5. Arch. Numer. Softw. 3 , 9–23 (2015).

Kemeth, F. P. et al. An emergent space for distributed data with hidden internal order through manifold learning. IEEE Access 6 , 77402–77413 (2018).

Kemeth, F. P. et al. Learning emergent PDEs in a learned emergent space. Preprint at arXiv https://arxiv.org/abs/2012.12738 (2020).

Defense Advanced Research Projects Agency. DARPA shredder challenge rules. DARPA https://web.archive.org/web/20130221190250/http://archive.darpa.mil/shredderchallenge/Rules.aspx (2011).

Rovelli, C. Forget time. Found. Phys. 41 , 1475 (2011).

Hy, T. S., Trivedi, S., Pan, H., Anderson, B. M. & Kondor, R. Predicting molecular properties with covariant compositional networks. J. Chem. Phys. 148 , 241745 (2018).

Hachmann, J. et al. The Harvard clean energy project: large-scale computational screening and design of organic photovoltaics on the world community grid. J. Phys. Chem. Lett. 2 , 2241–2251 (2011).

Byrd, R. H., Lu, P., Nocedal, J. & Zhu, C. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 16 , 1190–1208 (1995).


Acknowledgements

We thank H. Owhadi (Caltech) for his insightful comments on the connections between NNs and kernel methods. G.E.K. acknowledges support from the DOE PhILMs project (no. DE-SC0019453) and OSD/AFOSR MURI grant FA9550-20-1-0358. I.G.K. acknowledges support from DARPA (PAI and ATLAS programmes) as well as an AFOSR MURI grant through UCSB. P.P. acknowledges support from the DARPA PAI programme (grant HR00111890034), the US Department of Energy (grant DE-SC0019116), the Air Force Office of Scientific Research (grant FA9550-20-1-0060), and DOE-ARPA (grant 1256545).

Author information

Authors and Affiliations

Division of Applied Mathematics, Brown University, Providence, RI, USA

George Em Karniadakis & Liu Yang

School of Engineering, Brown University, Providence, RI, USA

George Em Karniadakis

Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, MD, USA

Ioannis G. Kevrekidis

Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA

Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA

Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA, USA

Paris Perdikaris

Graduate Group in Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, PA, USA


Contributions

Authors are listed in alphabetical order. G.E.K. supervised the project. All authors contributed equally to writing the paper.

Corresponding author

Correspondence to George Em Karniadakis.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Peer review information.

Nature Reviews Physics thanks the anonymous reviewers for their contribution to the peer review of this work.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Related links

ADCME: https://kailaix.github.io/ADCME.jl/latest

DeepXDE: https://deepxde.readthedocs.io/

GPyTorch: https://gpytorch.ai/

NeuroDiffEq: https://github.com/NeuroDiffGym/neurodiffeq

NeuralPDE: https://neuralpde.sciml.ai/dev/

Neural Tangents: https://github.com/google/neural-tangents

PyDEns: https://github.com/analysiscenter/pydens

PyTorch: https://pytorch.org

SciANN: https://www.sciann.com/

SimNet: https://developer.nvidia.com/simnet

TensorFlow: www.tensorflow.org

Glossary

Data of variable accuracy.

A representation formula for the solution of the Hamilton–Jacobi equation.

A physics-informed neural network-like method with random sampling.

Characterization of the robustness of dynamic behaviour to small perturbations, in the neighbourhood of an equilibrium.

Sets with regions of missing data.

Rectified linear unit.

Increasing model capacity beyond the point of interpolation, which can result in improved performance.

Generative stochastic artificial neural networks that can learn a probability distribution over their set of inputs.

Uncertainty due to the inherent randomness of data.

Uncertainty due to limited data and knowledge.

A type of generalized polynomial chaos with measures defined by data.

An approximation used in gravity-driven flows, which ignores density differences except in the gravity term.

A function used to study transitions between metastable states in stochastic systems.

A type of system with both reaction and diffusion.

Dual refinement of the mesh by increasing either the number of subdomains or the degree of the approximation.

A regularization term associated with Hölder constants of differential equations that controls the derivatives of neural networks.

A quantity that measures the richness of a class of real-valued functions with respect to a probability distribution.

Linear model of a (nonlinear) dynamical system obtained via Koopman operator theory.

Iterations of an algorithm for the numerical computation of equilibria.

A nonlinear dimensionality reduction technique for embedding intrinsically low-dimensional data from high-dimensional representations to lower-dimensional spaces.

t-distributed stochastic neighbour embedding. A nonlinear dimensionality reduction technique for embedding intrinsically low-dimensional data from high-dimensional representations to lower-dimensional spaces.

Rights and permissions

Reprints and permissions

About this article

Cite this article.

Karniadakis, G.E., Kevrekidis, I.G., Lu, L. et al. Physics-informed machine learning. Nat Rev Phys 3 , 422–440 (2021). https://doi.org/10.1038/s42254-021-00314-5


Accepted: 31 March 2021

Published: 24 May 2021

Issue Date: June 2021

DOI: https://doi.org/10.1038/s42254-021-00314-5


This article is cited by

Recent advances in earthquake seismology using machine learning.

  • Hisahiko Kubo
  • Makoto Naoi
  • Masayuki Kano

Earth, Planets and Space (2024)

Tracking an untracked space debris after an inelastic collision using physics informed neural network

  • Gurpreet Singh
  • Sanat K. Biswas

Scientific Reports (2024)

Neural operators for accelerating scientific simulations and design

  • Kamyar Azizzadenesheli
  • Nikola Kovachki
  • Anima Anandkumar

Nature Reviews Physics (2024)

On the use of deep learning for phase recovery

  • Kaiqiang Wang
  • Edmund Y. Lam

Light: Science & Applications (2024)

Noise learning of instruments for high-contrast, high-resolution and fast hyperspectral microscopy and nanoscopy

  • Maofeng Cao

Nature Communications (2024)



Purdue University Graduate School

Physics Informed Neural Networks for Engineering Systems

Degree type.

  • Master of Science
  • Mechanical Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Usage metrics

  • Mechanical engineering not elsewhere classified

CC BY 4.0


Physics-informed neural networks for highly compressible flows: assessing and enhancing shock-capturing capabilities

Wagenaar, Thomas (TU Delft Aerospace Engineering)

ORCID 0000-0002-9890-3173

Delft University of Technology

Aerospace Engineering

While physics-informed neural networks have been shown to accurately solve a wide range of fluid dynamics problems, their effectiveness on highly compressible flows is so far limited. In particular, they struggle with transonic and supersonic problems that involve discontinuities such as shocks. While there have been multiple efforts to alleviate the nonphysical phenomena that arise in such problems, current work does not sufficiently identify and address the underlying failure modes. This thesis shows that physics-informed neural networks struggle with highly compressible problems for two independent reasons. Firstly, the differential Euler equations conserve entropy along streamlines, so that physics-informed neural networks try to find an isentropic solution to a non-isentropic problem. Secondly, conventional slip boundary conditions form strong local minima that result in fictive objects that simplify the flow. In response to these failure modes, two new adaptations are introduced, namely a local viscosity method and a streamline output representation. The local viscosity method includes viscosity as an additional network output and adds a viscous loss term to the loss function, resulting in localized viscosity that facilitates the entropy change at shocks. Furthermore, the streamline output representation provides a more natural formulation of the slip boundary conditions, which prevents zero-velocity regions while promoting shock attachment. To the author's best knowledge, this thesis provides the first inviscid steady solutions of curved and detached shocks by physics-informed neural networks.
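The local viscosity idea described above can be sketched as a small loss assembly: the network predicts both the flow variable and a non-negative local viscosity, and the viscosity enters the PDE residual while being penalized so that it concentrates only where shocks need it. The sketch below is illustrative only, not the thesis's implementation: it uses a 1D viscous Burgers-type residual rather than the 2D Euler equations, and the network sizes, softplus positivity constraint, and penalty weight 0.1 are assumptions.

```python
import torch

# Hypothetical network: maps (x, t) to (u, raw viscosity).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)

def pinn_loss(xt):
    xt = xt.clone().requires_grad_(True)
    out = net(xt)
    u = out[:, 0:1]
    # Softplus keeps the learned local viscosity non-negative.
    nu = torch.nn.functional.softplus(out[:, 1:2])
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    # Viscous Burgers-type residual: u_t + u u_x - nu u_xx = 0.
    residual = u_t + u * u_x - nu * u_xx
    # Penalize total viscosity so it localizes at shocks (weight is illustrative).
    return (residual ** 2).mean() + 0.1 * nu.mean()

loss = pinn_loss(torch.rand(64, 2))
loss.backward()  # gradients flow to both the solution and the viscosity field
```

The key design point is that the viscosity is an output of the same network, so the optimizer can trade a small viscous penalty for a large reduction in residual exactly where the entropy change at a shock demands it.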

Keywords: Physics Informed Neural Networks · Compressible flow · Shock capturing · Euler equations · Failure mode · Entropy · Supersonic flow · Transonic flow

http://resolver.tudelft.nl/uuid:6fd86786-153e-4c98-b4e2-8fa36f90eb2a

Open-source code for a selection of the results: https://github.com/wagenaartje/pinn4hcf

Custom implementation of Adaptive Competitive Gradient Descent (ACGD): https://github.com/wagenaartje/torch-cgd

Student theses

master thesis

© 2023 Thomas Wagenaar

Physics Informed Neural Network Based Time-Independent Schrödinger Equation Solver



Codebase for PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs.

i207M/PINNacle

PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs

This repository is our codebase for PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs. Our paper is currently under review. We will provide a more detailed guide soon.


Implemented Methods

This benchmark paper implements the following variants and creates a new challenging dataset to compare them. See these references for more details:

  • Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
  • Understanding and mitigating gradient pathologies in physics-informed neural networks
  • When and why PINNs fail to train: A neural tangent kernel perspective
  • A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks
  • MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale Training of Physics-informed Neural Networks
  • Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems
  • Sobolev Training for Physics Informed Neural Networks
  • Variational Physics-Informed Neural Networks For Solving Partial Differential Equations
  • hp-VPINNs: Variational Physics-Informed Neural Networks With Domain Decomposition
  • Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks
  • Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
  • Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations

Installation

📄 Full Documentation

Run all 20 cases with default settings:

If you find our work useful, please cite our paper at:

We also suggest you have a look at the survey paper (Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications) about PINNs, neural operators, and other paradigms of PIML.


Physics-Informed Neural Networks for Solar Wind Prediction

  • Conference paper
  • First Online: 10 August 2023
  • Cite this conference paper


  • Rob Johnson 9 ,
  • Soukaïna Filali Boubrahimi   ORCID: orcid.org/0000-0001-5693-6383 9 ,
  • Omar Bahri 9 &
  • Shah Muhammad Hamdi   ORCID: orcid.org/0000-0002-9303-7835 9  

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13645))

Included in the following conference series:

  • International Conference on Pattern Recognition


Solar wind modeling is categorized into empirical and physics-based models that both predict the properties of solar wind in different parts of the heliosphere. Empirical models are relatively inexpensive to run and have shown great success at predicting the solar wind at the L1 Lagrange point. Physics-based models provide more sophisticated scientific modeling based on magnetohydrodynamics (MHD) but are computationally expensive to run. In this paper, we propose to combine empirical and physics-based models by developing a physics-guided neural network for solar wind prediction. To the best of our knowledge, this is the first attempt to forecast solar wind by combining data-driven methods with physics constraints. Our results show the superiority of our physics-constrained model over other state-of-the-art deep learning predictive models.
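The physics-guided approach the abstract describes reduces, in its simplest form, to a composite loss: a data-misfit term from the empirical predictor plus a penalty derived from a physical constraint. The sketch below is an assumption-laden illustration, not the paper's model: the network, feature dimensions, and in particular the constraint (that the mass flux n·v vary smoothly across the prediction window) are hypothetical stand-ins for the paper's MHD-based constraint.

```python
import torch

# Hypothetical predictor: 8 input features -> (density n, speed v).
model = torch.nn.Linear(8, 2)

def physics_guided_loss(x, target, lam=0.1):
    pred = model(x)  # columns: density n, speed v
    # Standard data-driven misfit against observations.
    data_loss = torch.nn.functional.mse_loss(pred, target)
    # Illustrative physics penalty: mass flux n * v should vary smoothly
    # between consecutive predictions (a stand-in for an MHD constraint).
    flux = pred[:, 0] * pred[:, 1]
    physics_loss = (flux[1:] - flux[:-1]).pow(2).mean()
    return data_loss + lam * physics_loss

x = torch.randn(16, 8)
target = torch.rand(16, 2)
loss = physics_guided_loss(x, target)
loss.backward()
```

The weighting `lam` controls the trade-off between fitting the data and honoring the physics; in physics-guided models it is typically tuned so the constraint regularizes the predictor without overwhelming the empirical signal.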


https://sites.google.com/view/solarwindprediction/ .

Angryk, R.A., et al.: Multivariate time series dataset for space weather data analytics. Sci. Data 7 (1), 1–13 (2020)


Bahri, O., Boubrahimi, S.F., Hamdi, S.M.: Shapelet-based counterfactual explanations for multivariate time series. arXiv preprint arXiv:2208.10462 (2022)

Bartlett, P.L., Foster, D.J., Telgarsky, M.J.: Spectrally-normalized margin bounds for neural networks. Adv. Neural Inf. Process. Syst. 30 (2017)


Board, S.S., Council, N.R., et al.: Severe Space Weather Events: Understanding Societal and Economic Impacts: a Workshop Report. National Academies Press, Washington (2009)

Boozer, A.H.: Ohm’s law for mean magnetic fields. J. Plasma Phys. 35 (1), 133–139 (1986)

Boubrahimi, S.F., Aydin, B., Kempton, D., Angryk, R.: Spatio-temporal interpolation methods for solar events metadata. In: 2016 IEEE International Conference on Big Data (Big Data), pp. 3149–3157. IEEE (2016)

Boubrahimi, S.F., Aydin, B., Martens, P., Angryk, R.: On the prediction of 100 MEV solar energetic particle events using goes satellite data. In: 2017 IEEE International Conference on Big Data (Big Data), pp. 2533–2542. IEEE (2017)

Bresler, A., Joshi, G., Marcuvitz, N.: Orthogonality properties for modes in passive and active uniform wave guides. J. Appl. Phys. 29 (5), 794–799 (1958)


Cho, K., van Merrienboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: encoder-decoder approaches (2014). https://arxiv.org/abs/1409.1259

Eastwood, J., et al.: The economic impact of space weather: where do we stand? Risk Anal. 37 (2), 206–218 (2017)

Emmons, D., Acebal, A., Pulkkinen, A., Taktakishvili, A., MacNeice, P., Odstrcil, D.: Ensemble forecasting of coronal mass ejections using the WSA-ENLIL with coned model. Space Weather 11 (3), 95–106 (2013)

Golan, I., El-Yaniv, R.: Deep anomaly detection using geometric transformations. Adv. Neural Inf. Process. Syst. 31 (2018)

Karpatne, A., Watkins, W., Read, J.S., Kumar, V.: Physics-guided neural networks (PGNN): an application in lake temperature modeling. CoRR abs/1710.11431 (2017), https://arxiv.org/abs/1710.11431

Li, P., Boubrahimi, S.F., Hamdi, S.M.: Graph-based clustering for time series data. In: 2021 IEEE Big Data, pp. 4464–4467. IEEE (2021)

Li, P., Boubrahimi, S.F., Hamdi, S.M.: Shapelets-based data augmentation for time series classification. In: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1373–1378. IEEE (2021)

Luo, B., Zhong, Q., Liu, S., Gong, J.: A new forecasting index for solar wind velocity based on EIT 284 Å observations. Solar Phys. 250 (1), 159–170 (2008)

Ma, R., Angryk, R.A., Riley, P., Boubrahimi, S.F.: Coronal mass ejection data clustering and visualization of decision trees. Astrophys. J. Suppl. Ser. 236 (1), 14 (2018)

Martin, S.: Solar winds travelling at 300km per second to hit earth today. www.express.co.uk/news/science/1449974/solar-winds-space-weather-forecast-sunspot-solar-storm-aurora-evg , Accessed 01 May 2022

Mukai, T., et al.: The low energy particle (LEP) experiment onboard the Geotail satellite. J. Geomag. Geoelectr. 46 (8), 669–692 (1994). https://doi.org/10.5636/jgg.46.669

Muzaheed, A.A.M., Hamdi, S.M., Boubrahimi, S.F.: Sequence model-based end-to-end solar flare classification from multivariate time series data. In: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 435–440. IEEE (2021)

Owens, M., et al.: A Computationally efficient, time-dependent model of the solar wind for use as a surrogate to three-dimensional numerical magnetohydrodynamic simulations. Solar Phys. 295 (3), 1–17 (2020). https://doi.org/10.1007/s11207-020-01605-3

Papitashvili, N., Bilitza, D., King, J.: Omni: a description of near-earth solar wind environment. In: 40th COSPAR Scientific Assembly, vol. 40, pp. C0–1 (2014)

Raju, H., Das, S.: CNN-based deep learning model for solar wind forecasting. Solar Phys. 296 (9), 1–25 (2021). https://doi.org/10.1007/s11207-021-01874-6

Shugai, Y.S.: Analysis of quasistationary solar wind stream forecasts for 2010–2019. Russian Meteorol. Hydrol. 46 (3), 172–178 (2021). https://doi.org/10.3103/S1068373921030055

Wilkinson, M.D., et al.: The fair guiding principles for scientific data management and stewardship. Sci. Data 3 (1), 1–9 (2016)

Yang, Y., Shen, F.: Modeling the global distribution of solar wind parameters on the source surface using multiple observations and the artificial neural network technique. Solar Phys. 294 (8), 1–22 (2019). https://doi.org/10.1007/s11207-019-1496-5


Acknowledgment

This project has been supported in part by funding from GEO and CISE directorates under NSF awards #2204363 and #2153379.

Author information

Authors and Affiliations

Department of Computer Science, Utah State University, Logan, UT 84322, USA

Rob Johnson, Soukaïna Filali Boubrahimi, Omar Bahri & Shah Muhammad Hamdi


Corresponding author

Correspondence to Soukaïna Filali Boubrahimi .

Editor information

Editors and Affiliations

York University, Toronto, ON, Canada

Jean-Jacques Rousseau

University of Ontario Institute of Technology, Oshawa, ON, Canada

Bill Kapralos

Rights and permissions

Reprints and permissions

Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper.

Johnson, R., Boubrahimi, S.F., Bahri, O., Hamdi, S.M. (2023). Physics-Informed Neural Networks for Solar Wind Prediction. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13645. Springer, Cham. https://doi.org/10.1007/978-3-031-37731-0_21


DOI: https://doi.org/10.1007/978-3-031-37731-0_21

Published: 10 August 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-37730-3

Online ISBN: 978-3-031-37731-0

eBook Packages: Computer Science, Computer Science (R0)

