Systematic Literature Reviews


Claes Wohlin, Per Runeson, Martin Höst, Magnus C. Ohlsson, Björn Regnell & Anders Wesslén


Systematic literature reviews are conducted to “identify, analyse and interpret all available evidence related to a specific research question” [96]. Since a review aims to give a complete, comprehensive and valid picture of the existing evidence, the identification, analysis and interpretation must all be conducted in a scientifically rigorous way. To achieve this goal, Kitchenham and Charters adapted guidelines for systematic literature reviews, primarily from medicine, evaluated them [24] and updated them accordingly [96]. These guidelines, structured according to a three-step process of planning, conducting and reporting the review, are summarized below.


Anastas, J.W., MacDonald, M.L.: Research Design for the Social Work and the Human Services, 2nd edn. Columbia University Press, New York (2000)


Andersson, C., Runeson, P.: A spiral process model for case studies on software quality monitoring – method and metrics. Softw. Process: Improv. Pract. 12 (2), 125–140 (2007). doi:  10.1002/spip.311

Andrews, A.A., Pradhan, A.S.: Ethical issues in empirical software engineering: the limits of policy. Empir. Softw. Eng. 6 (2), 105–110 (2001)

American Psychological Association: Ethical principles of psychologists and code of conduct. Am. Psychol. 47 , 1597–1611 (1992)

Avison, D., Baskerville, R., Myers, M.: Controlling action research projects. Inf. Technol. People 14 (1), 28–45 (2001). doi:  10.1108/09593840110384762 http://www.emeraldinsight.com/10.1108/09593840110384762

Babbie, E.R.: Survey Research Methods. Wadsworth, Belmont (1990)

Basili, V.R.: Quantitative evaluation of software engineering methodology. In: Proceedings of the First Pan Pacific Computer Conference, vol. 1, pp. 379–398. Australian Computer Society, Melbourne (1985)

Basili, V.R.: Software development: a paradigm for the future. In: Proceedings of the 13th Annual International Computer Software and Applications Conference, COMPSAC’89, Orlando, pp. 471–485. IEEE Computer Society Press, Washington (1989)

Basili, V.R.: The experimental paradigm in software engineering. In: H.D. Rombach, V.R. Basili, R.W. Selby (eds.) Experimental Software Engineering Issues: Critical Assessment and Future Directives. Lecture Notes in Computer Science, vol. 706. Springer, Berlin Heidelberg (1993)

Basili, V.R.: Evolving and packaging reading technologies. J. Syst. Softw. 38 (1), 3–12 (1997)

Basili, V.R., Weiss, D.M.: A methodology for collecting valid software engineering data. IEEE Trans. Softw. Eng. 10 (6), 728–737 (1984)

Basili, V.R., Selby, R.W.: Comparing the effectiveness of software testing strategies. IEEE Trans. Softw. Eng. 13 (12), 1278–1298 (1987)

Basili, V.R., Rombach, H.D.: The TAME project: towards improvement-oriented software environments. IEEE Trans. Softw. Eng. 14 (6), 758–773 (1988)

Basili, V.R., Green, S.: Software process evaluation at the SEL. IEEE Softw. 11 (4), pp. 58–66 (1994)

Basili, V.R., Selby, R.W., Hutchens, D.H.: Experimentation in software engineering. IEEE Trans. Softw. Eng. 12 (7), 733–743 (1986)

Basili, V.R., Caldiera, G., Rombach, H.D.: Experience factory. In: J.J. Marciniak (ed.) Encyclopedia of Software Engineering, pp. 469–476. Wiley, New York (1994)

Basili, V.R., Caldiera, G., Rombach, H.D.: Goal Question Metrics paradigm. In: J.J. Marciniak (ed.) Encyclopedia of Software Engineering, pp. 528–532. Wiley (1994)

Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sørumgård, S., Zelkowitz, M.V.: The empirical investigation of perspective-based reading. Empir. Soft. Eng. 1 (2), 133–164 (1996)

Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sørumgård, S., Zelkowitz, M.V.: Lab package for the empirical investigation of perspective-based reading. Technical report, University of Maryland (1998). http://www.cs.umd.edu/projects/SoftEng/ESEG/manual/pbr_package/manual.html

Basili, V.R., Shull, F., Lanubile, F.: Building knowledge through families of experiments. IEEE Trans. Softw. Eng. 25 (4), 456–473 (1999)

Baskerville, R.L., Wood-Harper, A.T.: A critical perspective on action research as a method for information systems research. J. Inf. Technol. 11 (3), 235–246 (1996). doi:  10.1080/026839696345289

Benbasat, I., Goldstein, D.K., Mead, M.: The case research strategy in studies of information systems. MIS Q. 11 (3), 369 (1987). doi: 10.2307/248684

Bergman, B., Klefsjö, B.: Quality from Customer Needs to Customer Satisfaction. Studentlitteratur, Lund (2010)

Brereton, P., Kitchenham, B.A., Budgen, D., Turner, M., Khalil, M.: Lessons from applying the systematic literature review process within the software engineering domain. J. Syst. Softw. 80 (4), 571–583 (2007). doi: 10.1016/j.jss.2006.07.009

Brereton, P., Kitchenham, B.A., Budgen, D.: Using a protocol template for case study planning. In: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering. University of Bari, Italy (2008)

Briand, L.C., Differding, C.M., Rombach, H.D.: Practical guidelines for measurement-based process improvement. Softw. Process: Improv. Pract. 2 (4), 253–280 (1996)

Briand, L.C., El Emam, K., Morasca, S.: On the application of measurement theory in software engineering. Empir. Softw. Eng. 1 (1), 61–88 (1996)

Briand, L.C., Bunse, C., Daly, J.W.: A controlled experiment for evaluating quality guidelines on the maintainability of object-oriented designs. IEEE Trans. Softw. Eng. 27 (6), 513–530 (2001)

British Psychological Society: Ethical principles for conducting research with human participants. Psychologist 6 (1), 33–35 (1993)

Budgen, D., Kitchenham, B.A., Charters, S., Turner, M., Brereton, P., Linkman, S.: Presenting software engineering results using structured abstracts: a randomised experiment. Empir. Softw. Eng. 13 , 435–468 (2008). doi: 10.1007/s10664-008-9075-7

Budgen, D., Burn, A.J., Kitchenham, B.A.: Reporting computing projects through structured abstracts: a quasi-experiment. Empir. Softw. Eng. 16 (2), 244–277 (2011). doi: 10.1007/s10664-010-9139-3

Campbell, D.T., Stanley, J.C.: Experimental and Quasi-experimental Designs for Research. Houghton Mifflin Company, Boston (1963)

Chrissis, M.B., Konrad, M., Shrum, S.: CMMI(R): Guidelines for process integration and product improvement. Technical report, SEI (2003)

Ciolkowski, M., Differding, C.M., Laitenberger, O., Münch, J.: Empirical investigation of perspective-based reading: A replicated experiment. Technical report, 97-13, ISERN (1997)

Coad, P., Yourdon, E.: Object-Oriented Design, 1st edn. Prentice-Hall, Englewood (1991)

Cohen, J.: Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol. Bull. 70 , 213–220 (1968)

Cook, T.D., Campbell, D.T.: Quasi-experimentation – Design and Analysis Issues for Field Settings. Houghton Mifflin Company, Boston (1979)

Corbin, J., Strauss, A.: Basics of Qualitative Research, 3rd edn. SAGE, Los Angeles (2008)

Cruzes, D.S., Dybå, T.: Research synthesis in software engineering: a tertiary study. Inf. Softw. Technol. 53 (5), 440–455 (2011). doi: 10.1016/j.infsof.2011.01.004

Dalkey, N., Helmer, O.: An experimental application of the delphi method to the use of experts. Manag. Sci. 9 (3), 458–467 (1963)

DeMarco, T.: Controlling Software Projects. Yourdon Press, New York (1982)

Deming, W.E.: Out of the Crisis. MIT Center for Advanced Engineering Study, MIT Press, Cambridge, MA (1986)

Dieste, O., Grimán, A., Juristo, N.: Developing search strategies for detecting relevant experiments. Empir. Softw. Eng. 14 , 513–539 (2009). http://dx.doi.org/10.1007/s10664-008-9091-7

Dittrich, Y., Rönkkö, K., Eriksson, J., Hansson, C., Lindeberg, O.: Cooperative method development. Empir. Softw. Eng. 13 (3), 231–260 (2007). doi: 10.1007/s10664-007-9057-1

Doolan, E.P.: Experiences with Fagan’s inspection method. Softw. Pract. Exp. 22 (2), 173–182 (1992)

Dybå, T., Dingsøyr, T.: Empirical studies of agile software development: a systematic review. Inf. Softw. Technol. 50 (9-10), 833–859 (2008). doi: 10.1016/j.infsof.2008.01.006

Dybå, T., Dingsøyr, T.: Strength of evidence in systematic reviews in software engineering. In: Proceedings of the 2nd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM ’08, Kaiserslautern, pp. 178–187. ACM, New York (2008). doi:  http://doi.acm.org/10.1145/1414004.1414034

Dybå, T., Kitchenham, B.A., Jørgensen, M.: Evidence-based software engineering for practitioners. IEEE Softw. 22 , 58–65 (2005). doi:  http://doi.ieeecomputersociety.org/10.1109/MS.2005.6

Dybå, T., Kampenes, V.B., Sjøberg, D.I.K.: A systematic review of statistical power in software engineering experiments. Inf. Softw. Technol. 48 (8), 745–755 (2006). doi: 10.1016/j.infsof.2005.08.009

Easterbrook, S., Singer, J., Storey, M.-A., Damian, D.: Selecting empirical methods for software engineering research. In: F. Shull, J. Singer, D.I. Sjøberg (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)

Eick, S.G., Loader, C.R., Long, M.D., Votta, L.G., Vander Wiel, S.A.: Estimating software fault content before coding. In: Proceedings of the 14th International Conference on Software Engineering, Melbourne, pp. 59–65. ACM Press, New York (1992)

Eisenhardt, K.M.: Building theories from case study research. Acad. Manag. Rev. 14 (4), 532 (1989). doi: 10.2307/258557

Endres, A., Rombach, H.D.: A Handbook of Software and Systems Engineering – Empirical Observations, Laws and Theories. Pearson Addison-Wesley, Harlow/New York (2003)

Fagan, M.E.: Design and code inspections to reduce errors in program development. IBM Syst. J. 15 (3), 182–211 (1976)

Fenton, N.: Software measurement: A necessary scientific basis. IEEE Trans. Softw. Eng. 3 (20), 199–206 (1994)

Fenton, N., Pfleeger, S.L.: Software Metrics: A Rigorous and Practical Approach, 2nd edn. International Thomson Computer Press, London (1996)

Fenton, N., Pfleeger, S.L., Glass, R.: Science and substance: A challenge to software engineers. IEEE Softw. 11 , 86–95 (1994)

Fink, A.: The Survey Handbook, 2nd edn. SAGE, Thousand Oaks/London (2003)

Flyvbjerg, B.: Five misunderstandings about case-study research. In: Qualitative Research Practice, concise paperback edn., pp. 390–404. SAGE, London (2007)

Frigge, M., Hoaglin, D.C., Iglewicz, B.: Some implementations of the boxplot. Am. Stat. 43 (1), 50–54 (1989)

Fusaro, P., Lanubile, F., Visaggio, G.: A replicated experiment to assess requirements inspection techniques. Empir. Softw. Eng. 2 (1), 39–57 (1997)

Glass, R.L.: The software research crisis. IEEE Softw. 11 , 42–47 (1994)

Glass, R.L., Vessey, I., Ramesh, V.: Research in software engineering: An analysis of the literature. Inf. Softw. Technol. 44 (8), 491–506 (2002). doi: 10.1016/S0950-5849(02)00049-6

Gómez, O.S., Juristo, N., Vegas, S.: Replication types in experimental disciplines. In: Proceedings of the 4th ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Bolzano-Bozen (2010)

Gorschek, T., Wohlin, C.: Requirements abstraction model. Requir. Eng. 11 , 79–101 (2006). doi: 10.1007/s00766-005-0020-7

Gorschek, T., Garre, P., Larsson, S., Wohlin, C.: A model for technology transfer in practice. IEEE Softw. 23 (6), 88–95 (2006)

Gorschek, T., Garre, P., Larsson, S., Wohlin, C.: Industry evaluation of the requirements abstraction model. Requir. Eng. 12 , 163–190 (2007). doi: 10.1007/s00766-007-0047-z

Grady, R.B., Caswell, D.L.: Software Metrics: Establishing a Company-Wide Program. Prentice-Hall, Englewood (1994)

Grant, E.E., Sackman, H.: An exploratory investigation of programmer performance under on-line and off-line conditions. IEEE Trans. Human Factor Electron. HFE-8 (1), 33–48 (1967)

Gregor, S.: The nature of theory in information systems. MIS Q. 30 (3), 491–506 (2006)

Hall, T., Flynn, V.: Ethical issues in software engineering research: a survey of current practice. Empir. Softw. Eng. 6 , 305–317 (2001)

Hannay, J.E., Sjøberg, D.I.K., Dybå, T.: A systematic review of theory use in software engineering experiments. IEEE Trans. Softw. Eng. 33 (2), 87–107 (2007). doi: 10.1109/TSE.2007.12

Hannay, J.E., Dybå, T., Arisholm, E., Sjøberg, D.I.K.: The effectiveness of pair programming: a meta-analysis. Inf. Softw. Technol. 51 (7), 1110–1122 (2009). doi: 10.1016/j.infsof.2009.02.001

Hayes, W.: Research synthesis in software engineering: a case for meta-analysis. In: Proceedings of the 6th International Software Metrics Symposium, Boca Raton, pp. 143–151 (1999)

Hetzel, B.: Making Software Measurement Work: Building an Effective Measurement Program. Wiley, New York (1993)

Hevner, A.R., March, S.T., Park, J., Ram, S.: Design science in information systems research. MIS Q. 28 (1), 75–105 (2004)

Höst, M., Regnell, B., Wohlin, C.: Using students as subjects – a comparative study of students and professionals in lead-time impact assessment. Empir. Softw. Eng. 5 (3), 201–214 (2000)

Höst, M., Wohlin, C., Thelin, T.: Experimental context classification: Incentives and experience of subjects. In: Proceedings of the 27th International Conference on Software Engineering, St. Louis, pp. 470–478 (2005)

Höst, M., Runeson, P.: Checklists for software engineering case study research. In: Proceedings of the 1st International Symposium on Empirical Software Engineering and Measurement, Madrid, pp. 479–481 (2007)

Hove, S.E., Anda, B.: Experiences from conducting semi-structured interviews in empirical software engineering research. In: Proceedings of the 11th IEEE International Software Metrics Symposium, pp. 1–10. IEEE Computer Society Press, Los Alamitos (2005)

Humphrey, W.S.: Managing the Software Process. Addison-Wesley, Reading (1989)

Humphrey, W.S.: A Discipline for Software Engineering. Addison Wesley, Reading (1995)

Humphrey, W.S.: Introduction to the Personal Software Process. Addison Wesley, Reading (1997)

IEEE: IEEE standard glossary of software engineering terminology. Technical Report, IEEE Std 610.12-1990, IEEE (1990)

Iversen, J.H., Mathiassen, L., Nielsen, P.A.: Managing risk in software process improvement: an action research approach. MIS Q. 28 (3), 395–433 (2004)

Jedlitschka, A., Pfahl, D.: Reporting guidelines for controlled experiments in software engineering. In: Proceedings of the 4th International Symposium on Empirical Software Engineering, Noosa Heads, pp. 95–104 (2005)

Johnson, P.M., Tjahjono, D.: Does every inspection really need a meeting? Empir. Softw. Eng. 3 (1), 9–35 (1998)

Juristo, N., Moreno, A.M.: Basics of Software Engineering Experimentation. Kluwer Academic Publishers, Boston (2001)

Juristo, N., Vegas, S.: The role of non-exact replications in software engineering experiments. Empir. Softw. Eng. 16 , 295–324 (2011). doi: 10.1007/s10664-010-9141-9

Kachigan, S.K.: Statistical Analysis: An Interdisciplinary Introduction to Univariate and Multivariate Methods. Radius Press, New York (1986)

Kachigan, S.K.: Multivariate Statistical Analysis: A Conceptual Introduction, 2nd edn. Radius Press, New York (1991)

Kampenes, V.B., Dybå, T., Hannay, J.E., Sjøberg, D.I.K.: A systematic review of effect size in software engineering experiments. Inf. Softw. Technol. 49 (11–12), 1073–1086 (2007). doi: 10.1016/j.infsof.2007.02.015

Karahasanović, A., Anda, B., Arisholm, E., Hove, S.E., Jørgensen, M., Sjøberg, D., Welland, R.: Collecting feedback during software engineering experiments. Empir. Softw. Eng. 10 (2), 113–147 (2005). doi: 10.1007/s10664-004-6189-4. http://www.springerlink.com/index/10.1007/s10664-004-6189-4

Karlström, D., Runeson, P., Wohlin, C.: Aggregating viewpoints for strategic software process improvement. IEE Proc. Softw. 149 (5), 143–152 (2002). doi: 10.1049/ip-sen:20020696

Kitchenham, B.A.: The role of replications in empirical software engineering – a word of warning. Empir. Softw. Eng. 13 , 219–221 (2008). doi: 10.1007/s10664-008-9061-0

Kitchenham, B.A., Charters, S.: Guidelines for performing systematic literature reviews in software engineering (version 2.3). Technical Report, EBSE Technical Report EBSE-2007-01, Keele University and Durham University (2007)

Kitchenham, B.A., Pickard, L.M., Pfleeger, S.L.: Case studies for method and tool evaluation. IEEE Softw. 12 (4), 52–62 (1995)

Kitchenham, B.A., Pfleeger, S.L., Pickard, L.M., Jones, P.W., Hoaglin, D.C., El Emam, K., Rosenberg, J.: Preliminary guidelines for empirical research in software engineering. IEEE Trans. Softw. Eng. 28 (8), 721–734 (2002). doi: 10.1109/TSE.2002.1027796. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1027796

Kitchenham, B., Fry, J., Linkman, S.G.: The case against cross-over designs in software engineering. In: Proceedings of the 11th International Workshop on Software Technology and Engineering Practice, Amsterdam, pp. 65–67. IEEE Computer Society, Los Alamitos (2003)

Kitchenham, B.A., Dybå, T., Jørgensen, M.: Evidence-based software engineering. In: Proceedings of the 26th International Conference on Software Engineering, Edinburgh, pp. 273–281 (2004)

Kitchenham, B.A., Al-Khilidar, H., Babar, M.A., Berry, M., Cox, K., Keung, J., Kurniawati, F., Staples, M., Zhang, H., Zhu, L.: Evaluating guidelines for reporting empirical software engineering studies. Empir. Softw. Eng. 13 (1), 97–121 (2007). doi: 10.1007/s10664-007-9053-5. http://www.springerlink.com/index/10.1007/s10664-007-9053-5

Kitchenham, B.A., Jeffery, D.R., Connaughton, C.: Misleading metrics and unsound analyses. IEEE Softw. 24 , 73–78 (2007). doi: 10.1109/MS.2007.49

Kitchenham, B.A., Brereton, P., Budgen, D., Turner, M., Bailey, J., Linkman, S.G.: Systematic literature reviews in software engineering – a systematic literature review. Inf. Softw. Technol. 51 (1), 7–15 (2009). doi: 10.1016/j.infsof.2008.09.009. http://www.dx.doi.org/10.1016/j.infsof.2008.09.009

Kitchenham, B.A., Pretorius, R., Budgen, D., Brereton, P., Turner, M., Niazi, M., Linkman, S.: Systematic literature reviews in software engineering – a tertiary study. Inf. Softw. Technol.  52 (8), 792–805 (2010). doi: 10.1016/j.infsof.2010.03.006

Kitchenham, B.A., Sjøberg, D.I.K., Brereton, P., Budgen, D., Dybå, T., Höst, M., Pfahl, D., Runeson, P.: Can we evaluate the quality of software engineering experiments? In: Proceedings of the 4th ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM, Bolzano/Bozen (2010)

Kitchenham, B.A., Budgen, D., Brereton, P.: Using mapping studies as the basis for further research – a participant-observer case study. Inf. Softw. Technol. 53 (6), 638–651 (2011). doi: 10.1016/j.infsof.2010.12.011

Laitenberger, O., Atkinson, C., Schlich, M., El Emam, K.: An experimental comparison of reading techniques for defect detection in UML design documents. J. Syst. Softw. 53 (2), 183–204 (2000)

Larsson, R.: Case survey methodology: quantitative analysis of patterns across case studies. Acad. Manag. J. 36 (6), 1515–1546 (1993)

Lee, A.S.: A scientific methodology for MIS case studies. MIS Q. 13 (1), 33 (1989). doi: 10.2307/248698. http://www.jstor.org/stable/248698?origin=crossref

Lehman, M.M.: Program, life-cycles and the laws of software evolution. Proc. IEEE 68 (9), 1060–1076 (1980)

Lethbridge, T.C., Sim, S.E., Singer, J.: Studying software engineers: data collection techniques for software field studies. Empir. Softw. Eng. 10 , 311–341 (2005)

Linger, R.: Cleanroom process model. IEEE Softw. pp. 50–58 (1994)

Linkman, S., Rombach, H.D.: Experimentation as a vehicle for software technology transfer – a family of software reading techniques. Inf. Softw. Technol. 39 (11), 777–780 (1997)

Lucas, W.A.: The case survey method: aggregating case experience. Technical Report, R-1515-RC, The RAND Corporation, Santa Monica (1974)

Lucas, H.C., Kaplan, R.B.: A structured programming experiment. Comput. J. 19 (2), 136–138 (1976)

Lyu, M.R. (ed.): Handbook of Software Reliability Engineering. McGraw-Hill, New York (1996)

Maldonado, J.C., Carver, J., Shull, F., Fabbri, S., Dória, E., Martimiano, L., Mendonça, M., Basili, V.: Perspective-based reading: a replicated experiment focused on individual reviewer effectiveness. Empir. Softw. Eng. 11 , 119–142 (2006). doi:  10.1007/s10664-006-5967-6

Manly, B.F.J.: Multivariate Statistical Methods: A Primer, 2nd edn. Chapman and Hall, London (1994)

Marascuilo, L.A., Serlin, R.C.: Statistical Methods for the Social and Behavioral Sciences. W. H. Freeman and Company, New York (1988)

Miller, J.: Estimating the number of remaining defects after inspection. Softw. Test. Verif. Reliab. 9 (4), 167–189 (1999)

Miller, J.: Applying meta-analytical procedures to software engineering experiments. J. Syst. Softw. 54 (1), 29–39 (2000)

Miller, J.: Statistical significance testing: a panacea for software technology experiments? J. Syst. Softw. 73 , 183–192 (2004). doi:  http://dx.doi.org/10.1016/j.jss.2003.12.019

Miller, J.: Replicating software engineering experiments: a poisoned chalice or the holy grail. Inf. Softw. Technol. 47 (4), 233–244 (2005)

Miller, J., Wood, M., Roper, M.: Further experiences with scenarios and checklists. Empir. Softw. Eng. 3 (1), 37–64 (1998)

Montgomery, D.C.: Design and Analysis of Experiments, 5th edn. Wiley, New York (2000)

Myers, G.J.: A controlled experiment in program testing and code walkthroughs/inspections. Commun. ACM 21 , 760–768 (1978). doi:  http://doi.acm.org/10.1145/359588.359602

Noblit, G.W., Hare, R.D.: Meta-Ethnography: Synthesizing Qualitative Studies. Sage Publications, Newbury Park (1988)

Ohlsson, M.C., Wohlin, C.: A project effort estimation study. Inf. Softw. Technol. 40 (14), 831–839 (1998)

Owen, S., Brereton, P., Budgen, D.: Protocol analysis: a neglected practice. Commun. ACM 49 (2), 117–122 (2006). doi: 10.1145/1113034.1113039

Paulk, M.C., Curtis, B., Chrissis, M.B., Weber, C.V.: Capability maturity model for software. Technical Report, CMU/SEI-93-TR-24, Software Engineering Institute, Pittsburgh (1993)

Petersen, K., Feldt, R., Mujtaba, S., Mattsson, M.: Systematic mapping studies in software engineering. In: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, Electronic Workshops in Computing (eWIC). BCS, University of Bari, Italy (2008)

Petersen, K., Wohlin, C.: Context in industrial software engineering research. In: Proceedings of the 3rd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, pp. 401–404 (2009)

Pfleeger, S.L.: Experimental design and analysis in software engineering, parts 1–5. ACM SIGSOFT Softw. Eng. Notes 19 (4), 16–20; 20 (1), 22–26; 20 (2), 14–16; 20 (3), 13–15; 20 (4) (1994–1995)

Pfleeger, S.L., Atlee, J.M.: Software Engineering: Theory and Practice, 4th edn. Pearson Prentice-Hall, Upper Saddle River (2009)

Pickard, L.M., Kitchenham, B.A., Jones, P.W.: Combining empirical results in software engineering. Inf. Softw. Technol. 40 (14), 811–821 (1998). doi: 10.1016/S0950-5849(98)00101-3

Porter, A.A., Votta, L.G.: An experiment to assess different defect detection methods for software requirements inspections. In: Proceedings of the 16th International Conference on Software Engineering, Sorrento, pp. 103–112 (1994)

Porter, A.A., Votta, L.G.: Comparing detection methods for software requirements inspection: a replicated experiment. IEEE Trans. Softw. Eng. 21 (6), 563–575 (1995)

Porter, A.A., Votta, L.G.: Comparing detection methods for software requirements inspection: a replicated experimentation: a replication using professional subjects. Empir. Softw. Eng. 3 (4), 355–380 (1998)

Porter, A.A., Siy, H.P., Toman, C.A., Votta, L.G.: An experiment to assess the cost-benefits of code inspections in large scale software development. IEEE Trans. Softw. Eng. 23 (6), 329–346 (1997)

Potts, C.: Software engineering research revisited. IEEE Softw. pp. 19–28 (1993)

Rainer, A.W.: The longitudinal, chronological case study research strategy: a definition, and an example from IBM Hursley Park. Inf. Softw. Technol. 53 (7), 730–746 (2011)

Robinson, H., Segal, J., Sharp, H.: Ethnographically-informed empirical studies of software practice. Inf. Softw. Technol. 49 (6), 540–551 (2007). doi: 10.1016/j.infsof.2007.02.007

Robson, C.: Real World Research: A Resource for Social Scientists and Practitioners-Researchers, 1st edn. Blackwell, Oxford/Cambridge (1993)

Robson, C.: Real World Research: A Resource for Social Scientists and Practitioners-Researchers, 2nd edn. Blackwell, Oxford/Madden (2002)

Runeson, P., Skoglund, M.: Reference-based search strategies in systematic reviews. In: Proceedings of the 13th International Conference on Empirical Assessment and Evaluation in Software Engineering. Electronic Workshops in Computing (eWIC). BCS, Durham University, UK (2009)

Runeson, P., Höst, M., Rainer, A.W., Regnell, B.: Case Study Research in Software Engineering. Guidelines and Examples. Wiley, Hoboken (2012)

Sandahl, K., Blomkvist, O., Karlsson, J., Krysander, C., Lindvall, M., Ohlsson, N.: An extended replication of an experiment for assessing methods for software requirements. Empir. Softw. Eng. 3 (4), 381–406 (1998)

Seaman, C.B.: Qualitative methods in empirical studies of software engineering. IEEE Trans. Softw. Eng. 25 (4), 557–572 (1999)

Selby, R.W., Basili, V.R., Baker, F.T.: Cleanroom software development: An empirical evaluation. IEEE Trans. Softw. Eng. 13 (9), 1027–1037 (1987)

Shepperd, M.: Foundations of Software Measurement. Prentice-Hall, London/New York (1995)

Shneiderman, B., Mayer, R., McKay, D., Heller, P.: Experimental investigations of the utility of detailed flowcharts in programming. Commun. ACM 20 , 373–381 (1977). doi: 10.1145/359605.359610

Shull, F.: Developing techniques for using software documents: a series of empirical studies. Ph.D. thesis, Computer Science Department, University of Maryland, USA (1998)

Shull, F., Basili, V.R., Carver, J., Maldonado, J.C., Travassos, G.H., Mendonça, M.G., Fabbri, S.: Replicating software engineering experiments: addressing the tacit knowledge problem. In: Proceedings of the 1st International Symposium on Empirical Software Engineering, Nara, pp. 7–16 (2002)

Shull, F., Mendonça, M.G., Basili, V.R., Carver, J., Maldonado, J.C., Fabbri, S., Travassos, G.H., Ferreira, M.C.: Knowledge-sharing issues in experimental software engineering. Empir. Softw. Eng.  9 , 111–137 (2004). doi: 10.1023/B:EMSE.0000013516.80487.33

Shull, F., Carver, J., Vegas, S., Juristo, N.: The role of replications in empirical software engineering. Empir. Softw. Eng. 13 , 211–218 (2008). doi: 10.1007/s10664-008-9060-1

Sieber, J.E.: Protecting research subjects, employees and researchers: implications for software engineering. Empir. Softw. Eng. 6 (4), 329–341 (2001)

Siegel, S., Castellan, N.J.: Nonparametric Statistics for the Behavioral Sciences, 2nd edn. McGraw-Hill International Editions, New York (1988)

Singer, J., Vinson, N.G.: Why and how research ethics matters to you. Yes, you! Empir. Softw. Eng. 6 , 287–290 (2001). doi: 10.1023/A:1011998412776

Singer, J., Vinson, N.G.: Ethical issues in empirical studies of software engineering. IEEE Trans. Softw. Eng. 28 (12), 1171–1180 (2002). doi: 10.1109/TSE.2002.1158289. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1158289

Singh, S.: Fermat’s Last Theorem. Fourth Estate, London (1997)

Sjøberg, D.I.K., Hannay, J.E., Hansen, O., Kampenes, V.B., Karahasanovic, A., Liborg, N.-K., Rekdal, A.C.: A survey of controlled experiments in software engineering. IEEE Trans. Softw. Eng. 31 (9), 733–753 (2005). doi: 10.1109/TSE.2005.97. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1514443

Sjøberg, D.I.K., Dybå, T., Anda, B., Hannay, J.E.: Building theories in software engineering. In: Shull, F., Singer, J., Sjøberg D. (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)

Sommerville, I.: Software Engineering, 9th edn. Addison-Wesley, Wokingham/Reading (2010)

Sørumgård, S.: Verification of process conformance in empirical studies of software development. Ph.D. thesis, The Norwegian University of Science and Technology, Department of Computer and Information Science, Norway (1997)

Stake, R.E.: The Art of Case Study Research. SAGE Publications, Thousand Oaks (1995)

Staples, M., Niazi, M.: Experiences using systematic review guidelines. J. Syst. Softw. 80 (9), 1425–1437 (2007). doi: 10.1016/j.jss.2006.09.046

Thelin, T., Runeson, P.: Capture-recapture estimations for perspective-based reading – a simulated experiment. In: Proceedings of the 1st International Conference on Product Focused Software Process Improvement (PROFES), Oulu, pp. 182–200 (1999)

Thelin, T., Runeson, P., Wohlin, C.: An experimental comparison of usage-based and checklist-based reading. IEEE Trans. Softw. Eng. 29 (8), 687–704 (2003). doi: 10.1109/TSE.2003.1223644

Tichy, W.F.: Should computer scientists experiment more? IEEE Comput. 31 (5), 32–39 (1998)

Tichy, W.F., Lukowicz, P., Prechelt, L., Heinz, E.A.: Experimental evaluation in computer science: a quantitative study. J. Syst. Softw. 28 (1), 9–18 (1995)

Trochim, W.M.K.: The Research Methods Knowledge Base, 2nd edn. Cornell Custom Publishing, Cornell University, Ithaca (1999)

van Solingen, R., Berghout, E.: The Goal/Question/Metric Method: A Practical Guide for Quality Improvement and Software Development. McGraw-Hill International, London/Chicago (1999)

Verner, J.M., Sampson, J., Tosic, V., Abu Bakar, N.A., Kitchenham, B.A.: Guidelines for industrially-based multiple case studies in software engineering. In: Third International Conference on Research Challenges in Information Science, Fez, pp. 313–324 (2009)

Vinson, N.G., Singer, J.: A practical guide to ethical research involving humans. In: Shull, F., Singer, J., Sjøberg, D. (eds.) Guide to Advanced Empirical Software Engineering. Springer, London (2008)

Votta, L.G.: Does every inspection need a meeting? In: Proceedings of the ACM SIGSOFT Symposium on Foundations of Software Engineering, ACM Software Engineering Notes, vol. 18, pp. 107–114. ACM Press, New York (1993)

Wallace, C., Cook, C., Summet, J., Burnett, M.: Human centric computing languages and environments. In: Proceedings of Symposia on Human Centric Computing Languages and Environments, Arlington, pp. 63–65 (2002)

Wohlin, C., Gustavsson, A., Höst, M., Mattsson, C.: A framework for technology introduction in software organizations. In: Proceedings of the Conference on Software Process Improvement, Brighton, pp. 167–176 (1996)

Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation in Software Engineering: An Introduction. Kluwer, Boston (2000)

Wohlin, C., Aurum, A., Angelis, L., Phillips, L., Dittrich, Y., Gorschek, T., Grahn, H., Henningsson, K., Kågström, S., Low, G., Rovegård, P., Tomaszewski, P., van Toorn, C., Winter, J.: Success factors powering industry-academia collaboration in software research. IEEE Softw. (PrePrints) (2011). doi: 10.1109/MS.2011.92

Yin, R.K.: Case Study Research Design and Methods, 4th edn. Sage Publications, Beverly Hills (2009)

Zelkowitz, M.V., Wallace, D.R.: Experimental models for validating technology. IEEE Comput. 31 (5), 23–31 (1998)

Zendler, A.: A preliminary software engineering theory as investigated by published experiments. Empir. Softw. Eng. 6 , 161–180 (2001). doi:  http://dx.doi.org/10.1023/A:1011489321999


Author information

Authors and Affiliations

School of Computing, Blekinge Institute of Technology, Karlskrona, Sweden

Claes Wohlin

Department of Computer Science, Lund University, Lund, Sweden

Per Runeson, Martin Höst & Björn Regnell

System Verification Sweden AB, Malmö, Sweden

Magnus C. Ohlsson

ST-Ericsson AB, Lund, Sweden

Anders Wesslén



Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A. (2012). Systematic Literature Reviews. In: Experimentation in Software Engineering. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29044-2_4


DOI: https://doi.org/10.1007/978-3-642-29044-2_4

Published: 02 May 2012

Publisher Name: Springer, Berlin, Heidelberg

Print ISBN: 978-3-642-29043-5

Online ISBN: 978-3-642-29044-2

eBook Packages: Computer Science, Computer Science (R0)


A GUIDE TO LITERATURE REVIEW

selorm kuffour

A literature review must be coherent, systematic and clear. It must stick to answering the research question, and every argument must be justified with extracts and illustrations. All sources used in the review must be properly recorded and referenced to avoid plagiarism, and the finished work must be proof-read. A literature review is not the production of a list of items: the literature under review must be read thoroughly, and errors such as spelling mistakes and wrong dates of publication must be avoided.

Related Papers

Humanus Discourse

The importance of the literature review in academic writing of different categories, levels, and purposes cannot be overemphasized. The literature review establishes the relevance of new research and justifies why it is needed: it is through a literature review that a gap is established, which the new research then fills. Once the literature review sits properly in the research work, the objectives and research questions naturally fall into their proper perspective, and the other chapters of the work are impacted as well. In most instances, scanning through the literature also provides the need and justification for the research, may well leave hints for further research, and exposes a researcher to the right methodology to use. The literature review is the nucleus of a research work: done right, it spotlights the work; done wrongly, it can derail it. This paper seeks to unveil practical guides to writing a literature review, from purpose and components to tips. It proceeds through an exposition of secondary literature, exposes the challenges in writing a literature review, and recommends tips that, when followed, will improve the writing of a literature review.


Amanda Bolderston

A literature review can be an informative, critical, and useful synthesis of a particular topic. It can identify what is known (and unknown) in the subject area, identify areas of controversy or debate, and help formulate questions that need further research. There are several commonly used formats for literature reviews, including systematic reviews conducted as primary research projects; reviews written as an introduction and foundation for a research study, such as a thesis or dissertation; and reviews as secondary data analysis research projects. Regardless of the type, a good review is characterized by the author’s efforts to evaluate and critically analyze the relevant work in the field. Published reviews can be invaluable, because they collect and disseminate evidence from diverse sources and disciplines to inform professional practice on a particular topic. This directed reading will introduce the learner to the process of conducting and writing their own literature review.

Covering: learning outcomes; the nature of a literature review; identifying the main subject and themes; reviewing previous research; emphasizing leading research studies; exploring trends in the literature; summarizing key ideas in a subject area. A literature review is usually regarded as an essential part of student projects, research studies and dissertations. This chapter examines the reasons for the importance of the literature review and the things it tries to achieve. It also explores the main strategies you can use to write a good literature review.


tecnico emergencias

Learning how to effectively write a literature review is a critical tool for success in an academic, and perhaps even professional, career. Being able to summarize and synthesize prior research on a topic not only demonstrates a good grasp of the available information, but also assists in the learning process. Although literature reviews are important for one's academic career, they are often misunderstood and underdeveloped. This article is intended to provide both undergraduate and graduate students in the criminal justice field specifically, and the social sciences more generally, with skills and perspectives on how to develop and/or strengthen their writing of a literature review. Included in this discussion are foci on the structure, process, and art of writing a literature review. What is a literature review? In essence, a literature review is a comprehensive overview of prior research regarding a specific topic. The overview shows the reader both what is known about a topic and what is not yet known, thereby setting up the rationale or need for a new investigation, which is what the actual study to which the literature review is attached seeks to do. Stated a bit differently, Creswell (1994, pp. 20–21) explains that the literature in a research study accomplishes several purposes: (a) it shares with the reader the results of other studies that are closely related to the study being reported (Fraenkel & Wallen, 1990); (b) it relates a study to the larger, ongoing dialog in the literature about a topic, filling in gaps and extending prior studies (Marshall & Rossman, 1989); and (c) it provides a framework for establishing the importance of the study. As an overview, a well done literature review includes all of the main themes and subthemes found within the general topic chosen for the study. These themes and subthemes are usually interwoven with the methods or findings of the prior research.

Andrew Johnson

This chapter describes the process of writing a literature review and what the product should look like.


Machine learning for clinical outcome prediction in cerebrovascular and endovascular neurosurgery: systematic review and meta-analysis

  • Haydn Hoffman 1 ,
  • Jason J Sims 2 ,
  • Violiza Inoa-Acosta 1,3 ,
  • Daniel Hoit 4 ,
  • Adam S Arthur 1,4 ,
  • Dan Y Draytsel 5 ,
  • YeonSoo Kim 5 ,
  • Nitin Goyal 1,3
  • 1 Semmes-Murphey Neurologic and Spine Institute, Memphis, Tennessee, USA
  • 2 The University of Tennessee Health Science Center, Memphis, Tennessee, USA
  • 3 Neurology, University of Tennessee Health Science Center, Memphis, Tennessee, USA
  • 4 Neurosurgery, University of Tennessee Health Science Center, Memphis, Tennessee, USA
  • 5 SUNY Upstate Medical University, Syracuse, New York, USA
  • Correspondence to Dr Haydn Hoffman, Semmes-Murphey Neurologic and Spine Institute, Memphis, Tennessee, USA; hhoffman@semmes-murphey.com

Background Machine learning (ML) may be superior to traditional methods for clinical outcome prediction. We sought to systematically review the literature on ML for clinical outcome prediction in cerebrovascular and endovascular neurosurgery.

Methods A comprehensive literature search was performed, and original studies of patients undergoing cerebrovascular surgeries or endovascular procedures that developed a supervised ML model to predict a postoperative outcome or complication were included.

Results A total of 60 studies predicting 71 outcomes were included. Most cohorts were derived from single institutions (66.7%). The studies included stroke (32), subarachnoid hemorrhage (SAH; 16), unruptured aneurysm (7), arteriovenous malformation (4), and cavernous malformation (1). Random forest was the best performing model in 12 studies (20%) followed by XGBoost (13.3%). Among 42 studies in which the ML model was compared with a standard statistical model, ML was superior in 33 (78.6%). Of 10 studies in which the ML model was compared with a non-ML clinical prediction model, ML was superior in nine (90%). External validation was performed in 10 studies (16.7%). In studies predicting functional outcome after mechanical thrombectomy the pooled area under the receiver operator characteristics curve (AUROC) of the test set performances was 0.84 (95% CI 0.79 to 0.88). For studies predicting outcomes after SAH, the pooled AUROCs for functional outcomes and delayed cerebral ischemia were 0.89 (95% CI 0.76 to 0.95) and 0.90 (95% CI 0.66 to 0.98), respectively.

Conclusion ML performs favorably for clinical outcome prediction in cerebrovascular and endovascular neurosurgery. However, multicenter studies with external validation are needed to ensure the generalizability of these findings.

Data availability statement

Data are available upon reasonable request.

https://doi.org/10.1136/jnis-2024-021759


WHAT IS ALREADY KNOWN ON THIS TOPIC

Machine learning may be superior to traditional clinical and statistical models for predicting outcomes in patients undergoing cerebrovascular or neuroendovascular surgery.

WHAT THIS STUDY ADDS

This study summarizes the performance of machine learning models for predicting various postoperative outcomes, compares the performance of machine learning with that of clinical prediction models, and identifies the most promising algorithms.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

By detailing the development and validation of machine learning models for clinical outcome prediction, we identified ways in which future studies can advance the literature.

Introduction

Surgical and endovascular treatment of cerebrovascular disease spans numerous conditions including ischemic stroke with large vessel occlusions, aneurysms and subarachnoid hemorrhage (SAH), arteriovenous malformations (AVM), and fistulas. Due to the high rates of morbidity associated with these conditions and the potential for complications related to their treatment, prediction of clinical outcomes or events before they occur is an important goal. Traditionally, development of clinical prediction models has relied on expert opinions or classical statistical techniques. Machine learning (ML), which is a form of artificial intelligence, may be a superior method for predicting clinical outcomes. ML allows computers to learn from data without being explicitly programmed. 1 There are numerous difficulties associated with training and employing ML models for cerebrovascular disease that have not been reviewed in the literature. 2

Although multiple studies have reviewed ML for cerebrovascular disease, these have either been limited to specific conditions, 3–5 focused on diagnosis rather than outcome prediction, 6 7 only included specific algorithms, 8 or did not describe how the models were developed. 9 The goal of this study was to systematically review the literature on ML for clinical outcome prediction in cerebrovascular and endovascular neurosurgery to (1) describe common practices for training and evaluating ML models for this task; (2) identify challenges in developing ML models for outcome prediction in cerebrovascular disease and discuss potential solutions; and (3) evaluate the performance of ML compared with standard statistical techniques and non-ML clinical prediction models. The results of this study could provide a benchmark for ML performance, identify the most promising ML algorithms, and guide future research by identifying limitations in the current literature.

This study was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 10

Search strategy

We performed a comprehensive search of the literature as of 20 October 2023 using PubMed, Scopus, and EMBASE. The search strategies for each database are provided in online supplemental table 1.


Selection criteria

The inclusion criteria included: (1) original studies comprising patients undergoing cerebrovascular surgeries or endovascular procedures; (2) development of a supervised ML model to predict a postoperative outcome or complication; (3) use of tabular data; and (4) description of model training. The exclusion criteria included: (1) not all patients underwent a cerebrovascular surgery or endovascular procedure; (2) validation studies that did not describe development of a new model; (3) imaging detection or segmentation studies; (4) studies describing time series models; (5) studies in which the primary goal was an explanatory analysis rather than outcome prediction; and (6) conference proceedings.

Data extraction

Data were extracted in duplicate from the included studies using a standardized form. Variables were selected by incorporating the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) 11 and adding ML-specific data. The extracted variables were broadly categorized into (1) cohort, outcome, and demographics; (2) feature selection and preprocessing; (3) model selection, training, and tuning; and (4) model performance. Cohort data included total number of included patients, target outcome for model prediction, proportion of patients who experienced the outcome, and demographic variables. Feature selection data included total number of features included in the model, whether preprocessing techniques were described, use of automated feature selection techniques, and handling of missing data. Model selection and training data included use of a hold-out test set, proportion of patients allocated for testing, use of an intermediate validation set, use of cross-validation, number of models screened, methods for hyperparameter tuning, and methods for addressing class imbalance. Model performance data included metrics used, whether calibration was evaluated, models evaluated, whether models were superior to standard statistical techniques and non-ML clinical prediction models, use of explanatory techniques, and whether external validation was performed.

Generalized linear regression models were categorized as standard statistical techniques. Median test set scores for the top performing ML model were recorded for the following metrics: area under the receiver operator characteristics curve (AUROC), sensitivity, specificity, and accuracy. When explanatory techniques were used, the top five important features were recorded for the following subgroups: (1) studies predicting functional outcome after mechanical thrombectomy (MT) for acute ischemic stroke; (2) studies predicting functional outcome after SAH; and (3) studies predicting delayed cerebral ischemia (DCI) after SAH.
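As a concrete illustration of the four headline metrics, the sketch below computes AUROC, sensitivity, specificity, and accuracy for one model's hold-out predictions using scikit-learn (the tooling the review found most common). The arrays `y_test` and `y_prob` are made-up stand-ins for a real test set, and the 0.5 threshold is an arbitrary choice.

```python
# Minimal sketch: the review's four most reported test-set metrics.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_test = np.array([0, 1, 0, 1, 1, 0, 0, 1])                    # true binary outcomes
y_prob = np.array([0.2, 0.8, 0.4, 0.6, 0.9, 0.1, 0.55, 0.7])   # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                            # hard labels at a 0.5 cutoff

auroc = roc_auc_score(y_test, y_prob)          # threshold-free; uses probabilities
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true positive rate
specificity = tn / (tn + fp)                   # true negative rate
accuracy = accuracy_score(y_test, y_pred)
print(f"AUROC={auroc:.2f} sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f}")
```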

The Prediction model Risk Of Bias ASsessment Tool (PROBAST) was used to evaluate risk of bias for each study. 12 PROBAST includes 20 signaling questions designed to systematically appraise studies that develop, validate, or update multivariable prognostic prediction models across four domains: participants, predictors, outcomes, and analysis. 12

Statistical analysis

Normally distributed continuous variables were reported as mean and SD and non-normally distributed variables were reported as median and IQR. The Shapiro–Wilk test was used to assess for normality. Pooled metrics for the three previously mentioned subgroups were generated using random effects models fitted using restricted maximum likelihood estimation and displayed with forest plots. 13
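To make the pooling step concrete, here is a minimal random-effects sketch. The review fitted its models with restricted maximum likelihood; this illustration swaps in the closed-form DerSimonian–Laird estimator because it is self-contained, and applies it to hypothetical per-study AUROCs and standard errors (the forest-plot display is omitted).

```python
# Sketch of random-effects pooling (DerSimonian–Laird, not the REML fit used
# in the paper) over hypothetical per-study AUROCs with standard errors.
import numpy as np

def pool_random_effects(effects, se):
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Hypothetical test-set AUROCs and standard errors from five studies
print(pool_random_effects([0.82, 0.86, 0.79, 0.88, 0.84],
                          [0.03, 0.04, 0.05, 0.02, 0.03]))  # pooled AUROC, 95% CI
```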

Cohort characteristics

A total of 60 studies predicting 71 outcomes published between 2011 and 2023 met the inclusion criteria (see online supplemental figure 1). There was a substantial increase in the number of studies published in 2022 compared with previous years (figure 1). The studies included stroke (32), SAH (16), unruptured aneurysm (7), AVM (4), and cavernous malformation (1). The surgeries or endovascular procedures included MT (29), aneurysm treatment (23), stereotactic radiosurgery (4), carotid endarterectomy (3), and AVM resection (1). The median cohort size was 293 patients (IQR 156–446) and most cohorts (66.7%) were derived from single institutions. The types of outcomes included clinical/functional outcomes (39), clinical events (20), treatment complications (6), and treatment efficacy (6). All outcomes were binary. Detailed study data are shown in online supplemental table 2. Based on PROBAST, 37 studies were at low risk of bias, 22 had a high risk of bias, and one study had an unclear risk of bias (figure 2). Individual study data for each PROBAST domain are shown in online supplemental figure 2.


Figure 1 Number of studies included by publication year according to disease (A) and surgery/endovascular procedure (B). AVM, arteriovenous malformation; SAH, subarachnoid hemorrhage.

Figure 2 Summary of the Prediction model Risk Of Bias ASsessment Tool (PROBAST) results for each domain.

Data characteristics

The median number of features was 16 (IQR 11–25). A total of 31 (51.7%) studies described preprocessing steps and 28 (46.7%) used automated feature selection techniques. The most common feature selection methods were least absolute shrinkage and selection operator (LASSO; 12 studies) and recursive feature elimination (7 studies). Missing data were described in 28 (46.7%) studies and imputation of missing data was performed in 19 (31.7%). Imputation methods are shown in online supplemental table 2. Only three (5%) studies described outlier handling. The mean (SD) proportion of patients with the target outcome was 0.34 (0.17).
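A sketch of the two automated selection methods the review found most often, using scikit-learn on synthetic data. In a real study `X` would hold the candidate clinical features and `y` the binary outcome; since the outcomes here are binary, the "LASSO" is realized as an L1-penalized logistic regression rather than a linear LASSO.

```python
# Sketch: LASSO-style selection and recursive feature elimination (RFE).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=16, n_informative=5, random_state=0)

# L1 penalty drives uninformative coefficients to exactly zero
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso_sel = SelectFromModel(lasso).fit(X, y)
print("LASSO kept features:", np.flatnonzero(lasso_sel.get_support()))

# RFE refits repeatedly, dropping the weakest feature each round
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
print("RFE kept features:", np.flatnonzero(rfe.support_))
```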

Model training

Forty-seven studies (78.3%) used a hold-out test set and four studies (6.7%) used an intermediate validation set. The median proportion of data used for the test set was 23% (IQR 20–30%). Cross-validation was used for model training in 52 (86.7%) studies. An initial model screening step before hyperparameter tuning was performed in six (10%) studies and a mean of 6.5 models were screened. Hyperparameter tuning was described in 36 (60%) studies and the most common method was grid search (63.9%), followed by random search (11.1%) and Bayesian optimization (11.1%). Techniques to overcome class imbalance were used in 14 (23.3%) studies and minority class oversampling was the most common method used (35.7%).
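The recipe described above — a hold-out test split, minority-class oversampling applied to the training data only, and cross-validated grid search — can be sketched with scikit-learn roughly as follows. The data are synthetic and the grid values arbitrary; this is an illustration of the workflow, not any included study's pipeline.

```python
# Sketch: hold-out split, minority oversampling, 5-fold grid search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.utils import resample

X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Oversample the minority class in the training set only, never the test set
minority = X_tr[y_tr == 1]
extra = resample(minority, n_samples=(y_tr == 0).sum() - (y_tr == 1).sum(), random_state=0)
X_bal = np.vstack([X_tr, extra])
y_bal = np.concatenate([y_tr, np.ones(len(extra), dtype=int)])

# Cross-validated grid search over a small, arbitrary hyperparameter grid
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 5]},
    cv=5, scoring="roc_auc",
).fit(X_bal, y_bal)
print("hold-out AUROC:", roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1]))
```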

Out of 45 studies describing tools for model development, Python was used in 60% and R was used in 33%. The most popular Python package was scikit-learn (20 studies) and the most popular R package was caret (nine studies). Source code was provided in 12 (20%) studies. Eight studies (13%) provided a public online interface to use their ML model.

Model evaluation

A total of 245 testing scores were reported. AUROC was the most used metric (55 studies; 91.7%) followed by sensitivity (46 studies; 76.7%), specificity (43 studies; 71.7%), and accuracy (42 studies; 70%). Calibration was evaluated in 11 studies (18.3%). A total of 182 (median 3; IQR 1–4) models were evaluated on the test set. Random forest (RF) was the most tested model (35 studies; 58.3%) followed by support vector machine (31 studies; 51.7%), neural network (NN; 28 studies; 46.7%), and XGBoost (21 studies; 35%). The complete list of models evaluated is shown in online supplemental table 3. As shown in online supplemental figure 3, RF was the best performing model in 12 studies (20%) followed by XGBoost (13.3%) and NN (11.7%). External validation was performed in 10 studies (16.7%), while the rest only internally validated their model(s). The AUROC, accuracy, sensitivity, and specificity scores on internal and external testing sets are summarized in online supplemental figure 4.
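Calibration — how well predicted probabilities match observed event rates — was the least reported aspect of evaluation. A minimal check with scikit-learn's calibration_curve looks like the sketch below (synthetic data, untuned model); a well calibrated model has observed rates close to predicted probabilities in every bin.

```python
# Sketch: a reliability check with calibration_curve on synthetic data.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=5)  # observed vs predicted per bin
for observed, predicted in zip(frac_pos, mean_pred):
    print(f"predicted {predicted:.2f} -> observed {observed:.2f}")
```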

In 42 studies in which the ML model was compared with a standard statistical model, ML was superior in 33 (78.6%) and, in 10 studies in which the ML model was compared with a non-ML clinical prediction model, ML was superior in nine (90%).

Explanatory techniques to identify which features were most predictive in the model were used in 40 studies (66.7%). SHapley Additive exPlanations (SHAP) was the most common explanatory algorithm, used in 16 (40%) of these.
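A minimal version of that SHAP workflow for a tree model is sketched below on synthetic data. It assumes the third-party `shap` package is installed, and accounts for the fact that the return type of `shap_values` for classifiers varies across shap versions.

```python
# Sketch: SHAP-based feature attribution for a tree ensemble.
import numpy as np
import shap  # third-party: pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # fast, exact SHAP values for tree models
sv = explainer.shap_values(X)
# Older shap versions return a list with one array per class; newer ones
# return a single array with a trailing class dimension. Take the positive class.
pos = sv[1] if isinstance(sv, list) else sv[..., 1]
importance = np.abs(pos).mean(axis=0)   # mean |SHAP| as a global importance score
print("features ranked by importance:", np.argsort(importance)[::-1])
```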

Subgroup analysis: mechanical thrombectomy (MT)

Twenty-nine studies included patients who underwent MT. Of these, 22 described models predicting functional outcome (modified Rankin Scale (mRS) score). All studies dichotomized mRS, with 21 categorizing favorable functional outcomes as 0–2 and one using 0–4. Most of these studies had a low risk of bias (77.3%). As shown in figure 3, the pooled AUROC of the test set performances was 0.84 (95% CI 0.79 to 0.88). The most important features in these models included admission National Institutes of Health Stroke Scale (NIHSS) score (15 studies), age (14 studies), baseline mRS score (6 studies), glucose (4 studies), and Alberta Stroke Program Early CT Score (ASPECTS; 3 studies). The full list of features is shown in online supplemental table 4 and the distribution of testing scores is shown in online supplemental figure 5.

Figure 3 Forest plot summarizing test set area under the receiver operator characteristics curve (AUROC) for studies predicting functional outcomes after mechanical thrombectomy. One study that did not include the incidence of the target outcome was excluded from the meta-analysis.

Subgroup analysis: subarachnoid hemorrhage (SAH)

A total of 16 studies included patients with SAH. Of these, eight sought to predict functional outcome (six mRS, two Glasgow Outcome Scale) and six predicted DCI. The mRS thresholds for favorable functional outcome were 0–2 in five studies and 0–3 in one study. For functional outcome prediction, four (50%) studies had a low risk of bias and the pooled AUROC of the test set performances was 0.89 (95% CI 0.76 to 0.95) (figure 4). The most important features for predicting functional outcome were age (six studies), Glasgow Coma Scale (four studies), and World Federation of Neurosurgical Societies grade (four studies). For predicting DCI, four (67%) studies had a low risk of bias and the pooled AUROC of the test set performances was 0.90 (95% CI 0.66 to 0.98). Features for predicting DCI were heterogeneous, with only two features (clot thickness on CT and white blood cell count) identified in more than one study. The full list of features is shown in online supplemental table 5 and the distribution of testing scores is shown in online supplemental figure 6.

Figure 4 Forest plots summarizing test set area under the receiver operating characteristic curve (AUROC) for studies predicting (A) functional outcomes and (B) delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage. For both functional outcomes and DCI, two studies did not report AUROC and were excluded from the meta-analysis.

Discussion

This study summarizes the literature on ML for outcome prediction in cerebrovascular and endovascular neurosurgery. Common practices for data handling, model training, and model evaluation are described, and the top performing models are identified. The data suggest that ML holds promise for predicting postoperative outcomes, given that testing scores were favorable and frequently better than those obtained with standard statistical models or clinical scoring systems. ML has primarily been applied to patients with large vessel occlusion undergoing MT and patients with SAH. Few studies have addressed unruptured aneurysms, AVMs, and cavernous malformations, and none included patients with arteriovenous fistulas, traumatic cerebrovascular injury, or vascular insufficiency requiring bypass. This is likely due to the relative rarity of these conditions and the large number of samples needed to reliably train most state-of-the-art ML algorithms. The quality of the studies was heterogeneous, with only 62% having an overall low risk of bias. Studies were most commonly downgraded for inadequate definition of outcomes or predictors, an insufficient number of patients with the target outcome, and very small (n<100) cohorts without external validation.

Challenges in model training

Several factors make training ML models for cerebrovascular disease more difficult. Many conditions such as SAH and AVMs are relatively uncommon, leaving few patients available for model training. ML models require large amounts of data to perform optimally, so smaller training cohorts may yield poorer-performing models. Small datasets may be further limited by missing data, since most ML algorithms cannot handle missing values. Missing data are ubiquitous in clinical practice, yet only a minority of studies described imputation methods. High-quality, reliable data are also needed for good ML model performance. Some features commonly used in stroke studies, such as ASPECTS, suffer from variable inter-rater and intra-rater agreement, 14 which would hamper a model's ability to use them for making predictions.
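One hedged sketch of a common imputation pattern is shown below; the key point is fitting the imputer on the training split only, so no information leaks from the test set. The data are illustrative.

    # Median imputation fitted on training data only (scikit-learn).
    import numpy as np
    from sklearn.impute import SimpleImputer

    X_train = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan], [4.0, 5.0]])
    X_test = np.array([[np.nan, 4.0]])

    imputer = SimpleImputer(strategy="median")  # IterativeImputer is a
    imputer.fit(X_train)                        # multivariate alternative
    X_train_imp = imputer.transform(X_train)
    X_test_imp = imputer.transform(X_test)      # uses training-set statistics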

Outcomes of clinical interest such as adverse events or procedural complications are often rare, leading to class imbalance, which may reduce the model's sensitivity for detecting the outcome. Class imbalance was common among the included studies, but only a minority (23.3%) used methods to address it. Resampling methods for addressing class imbalance include undersampling and oversampling. Because undersampling involves removing samples, it is undesirable in the small cohorts common in vascular neurosurgery. The Synthetic Minority Oversampling Technique (SMOTE) resolves class imbalance by creating synthetic examples of the minority class and may result in less overfitting than standard oversampling. 15 Class imbalance may also limit the number of features that can be used to train the model: in binary classification it is commonly recommended that there be at least 10 observations of the less common outcome per feature, although this rule has been challenged. 16
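A minimal sketch of SMOTE, assuming the third-party imbalanced-learn package and synthetic data, is shown below; resampling is applied to the training split only so the test set retains its natural class distribution.

    # SMOTE oversampling of the minority class
    # (assumes `pip install imbalanced-learn`).
    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, weights=[0.95, 0.05],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    print(Counter(y_tr), "->", Counter(y_res))  # minority class now balanced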

Model selection

It is usually impossible to know the top performing ML model a priori. Decision trees typically do not generalize well to unseen data. M5P model trees can produce better results by combining multiple linear regressions within the tree, but tree-based ensemble methods such as RF, XGBoost, LightGBM, and CatBoost typically achieve superior performance on structured tabular data. RF and XGBoost were the two most common top performing models in this review. Both create numerous decision trees that consider only subsets of samples and features to reduce overfitting, 17 and predictions are obtained by aggregating the individual trees. These models likely performed well in the included studies because they can handle correlated variables, missing values, and sparse data. Deep learning methods such as fully connected NNs often do not perform as well as tree-based ensemble methods on structured data, although NN was the third most common top performing model; other forms of deep learning have excelled on unstructured data such as images and text. It may be reasonable to perform a model screening step that compares multiple models on the training data prior to hyperparameter tuning and testing (see the sketch below). Several tools automate the data preprocessing, feature selection, and model training steps across numerous models with minimal code, an approach called 'AutoML'; these include Auto-Sklearn, the Tree-based Pipeline Optimization Tool (TPOT), and H2O AutoML, which were compared in one of the included studies. 18
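The following is a minimal version of such a screening step, assuming scikit-learn and synthetic data; XGBoost or LightGBM classifiers could be slotted into the candidate dictionary in the same way.

    # Screen several candidate models by cross-validated AUROC before tuning.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, random_state=0)
    candidates = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(random_state=0),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: mean AUROC {scores.mean():.3f}")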

Challenges in model evaluation

There are numerous metrics used to evaluate model performance, including AUROC, accuracy, sensitivity (recall), specificity, and positive predictive value (precision). Accuracy is easy to interpret, but it is not ideal for imbalanced datasets since a model may predict only the majority class and still achieve high accuracy. Although accuracy was used in a majority of studies, most of these reported other metrics as well. AUROC was the most used metric but may also be misleading in imbalanced datasets when predicting rare events, 19 and it has been shown to produce overly optimistic assessments of model performance. 20 An alternative is the area under the precision-recall curve (AUPRC), which places greater emphasis on the minority class. This was used in only three studies but may be preferable when predicting rare events such as DCI, early neurologic deterioration after MT, and in-stent stenosis. Precision is particularly important for studies predicting negative outcomes: for example, when predicting futile recanalization there should be a high level of confidence that the patient will not benefit from MT before such an effective treatment is withheld, whereas accuracy places equal emphasis on true negatives and true positives. The F1 score is another metric well suited to imbalanced datasets because it combines sensitivity and positive predictive value.
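The contrast can be made concrete with a synthetic rare-outcome dataset, as in the hedged sketch below; average_precision_score is scikit-learn's summary of the precision-recall curve.

    # AUROC versus AUPRC and F1 on a dataset with a rare positive class.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score, f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, weights=[0.97, 0.03],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    y_prob = model.predict_proba(X_te)[:, 1]

    print("AUROC:", roc_auc_score(y_te, y_prob))            # often optimistic
    print("AUPRC:", average_precision_score(y_te, y_prob))  # penalized by rarity
    print("F1:", f1_score(y_te, model.predict(X_te)))       # precision + recall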

Comparison with standard statistical techniques and clinical prediction models

In most studies that compared ML with standard statistical techniques and non-ML clinical prediction models, ML yielded superior performance on the test set. Although ML has many theoretical advantages, complex models may overfit when training data are limited, or offer little advantage when the relationship between the features and the outcome is linear. Overfitting occurs when a ML model learns patterns specific to the training set that do not generalize to unseen data. It can be reduced by increasing the number of training samples, but this is usually not feasible in clinical practice unless data are combined from multiple institutions. Additional methods for reducing overfitting include reducing the number of features, hyperparameter tuning, and combining the predictions of multiple models (sketched below).
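As an illustration of the last of these strategies, the hedged sketch below combines two models by soft voting; the models and data are illustrative rather than drawn from any included study.

    # Combining model predictions with soft voting to temper overfitting.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                               random_state=0)
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(max_depth=4, random_state=0))],
        voting="soft",  # average the predicted probabilities
    )
    print(cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())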

Model interpretation

Lack of model interpretability may be a concern for clinicians. Not knowing how a model makes predictions may make clinicians less likely to use it, or may allow biases and inaccuracies to propagate. 21 Although some algorithms such as decision trees are easily interpretable, most top performing algorithms are 'black boxes'. It is therefore important that ML studies provide insight into the features most important for making predictions. While some algorithms have internal methods for doing this, model-agnostic methods such as partial dependence plots, SHAP, Local Interpretable Model-Agnostic Explanations, and permutation feature importance are popular techniques that were encountered frequently in this meta-analysis.
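Of these, permutation feature importance is perhaps the simplest to sketch: shuffle one feature at a time and measure the drop in test-set performance. The example below uses scikit-learn with illustrative data.

    # Model-agnostic permutation feature importance on a hold-out set.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1]:  # most important first
        print(f"feature {i}: {result.importances_mean[i]:.3f}")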

In the subgroup analyses, the most informative features for predicting outcomes after SAH and MT were consistent with established prognostic factors. Admission NIHSS score, age, and pre-morbid mRS score have previously been linked to functional outcomes after thrombectomy. 22 23 Age, Glasgow Coma Scale, and World Federation of Neurosurgical Societies grade were the most common features for predicting functional outcomes after SAH and are likewise established risk factors for poor outcome. 24 25 That the models identified established prognostic factors is reassuring with regard to their generalizability; a model that finds spurious associations between non-predictive features and the outcome is less likely to perform well on external data.

Generalizability

Generalizability refers to a model's performance on unseen data. The simplest way to evaluate it is with a hold-out test set, which was used in most studies; this involves setting aside a portion of the data on which to evaluate the final model. Because the hold-out set is derived from the same dataset as the training data, it provides less information about generalizability than temporal or external validation. ML models may fail to generalize because of various types of dataset shift, including covariate shift, prior probability shift, and concept shift. 2 These occur when the population on which the model is tested differs from that on which it was trained.

Creation of generalizable ML models for vascular neurosurgery may be further complicated by the rapidly evolving nature of the field. Changes in devices, techniques, indications for treatment, and inclusion criteria for treatment could make previously generalizable ML models obsolete. For example, inclusion of patients with large core infarcts in a temporal validation cohort may reduce the performance of a ML model predicting outcomes after thrombectomy if the training cohort only included patients with favorable ASPECTS scores. Likewise, a ML model created from an institutional dataset where endovascular aneurysm treatment is preferred may perform poorly when externally validated on data from an institution where microsurgical clipping is preferred.

Future directions

Given that only 10 studies validated their models externally, this should be a focus of future work. Prospective validation studies of ML models in vascular neurosurgery are also lacking. Multicenter cohorts should be prioritized to develop more accurate and generalizable models, and combining data from several institutions may also facilitate model development for rare pathologies such as dural arteriovenous fistulas. Although tree-based ensemble models such as RF and XGBoost performed better than NNs, several newer deep learning methods for tabular data were not used in the included studies and may perform better for classification problems. 26 Only a minority of studies provided their source code for model training; source code should be provided to increase the rigor of peer review and facilitate reproducibility.

Limitations

Rare cerebrovascular pathologies such as dural arteriovenous fistulas, AVMs, and cavernous malformations were under-represented in this review. A tool for evaluating quality and risk of bias specific to ML studies is lacking. Publication bias could also have influenced the results. Some studies did not report the incidence of the target outcome, which prevented their inclusion in the meta-analysis.

Conclusions

ML for clinical outcome prediction in cerebrovascular and endovascular neurosurgery has primarily focused on functional outcome prediction for patients undergoing MT or ruptured aneurysm treatment. In general, ML outperformed standard statistical techniques and non-ML clinical prediction models, and RF and XGBoost were the top performing algorithms. Future studies on ML model development should aim to incorporate data from multiple institutions and include external validation.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval

Not applicable.

References

  • Li Z, et al.
  • Velagapudi L, Saiegh FA, Swaminathan S, et al.
  • Shlobin NA, Waqas M, et al.
  • Tseng FS, et al.
  • Murray NM, Unberath M, Hager GD, et al.
  • Oakden-Rayner L, Bird A, et al.
  • Badgeley M, Mocco J, et al.
  • Gilotra K, Mani R, et al.
  • Liberati A, Tetzlaff J, et al.
  • Moons KGM, de Groot JAH, Bouwmeester W, et al.
  • Riley RD, et al.
  • Debray TP, Guilbert F, et al.
  • Chawla NV, Bowyer KW, Hall LO, et al.
  • van Smeden M, Moons KGM, et al.
  • Kannath SK, Mathew J, et al.
  • Carrington AM, Fieguth PW, Qazi H, et al.
  • Movahedi F, Rasheed K, Ghaly M, et al.
  • Zhang G, et al.
  • O’Connor KP, Hathidara MY, Danala G, et al.
  • Mandrekar J, Rabinstein AA, et al.
  • Uzunkaya F, İdil Soylu A.

Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

X @haydnhoffmanmd, @InoaVioliza, @AdamArthurMD

Contributors HH: conceptualization, methodology, investigation, formal analysis, writing, guarantor. JJS: investigation, data curation, writing - reviewing and editing. VI-A: writing - reviewing and editing, supervision, project administration. DH: writing - reviewing and editing, supervision, project administration. ASA: writing - reviewing and editing, supervision, project administration. DYD: investigation, data curation, writing - reviewing and editing. YK: investigation, data curation, writing - reviewing and editing. NG: conceptualization, methodology, supervision, project administration.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

