
Literature searching explained

Develop a search strategy.

A search strategy is an organised structure of key terms used to search a database. The search strategy combines the key concepts of your search question in order to retrieve accurate results.

Your search strategy will account for all:

  • possible search terms
  • keywords and phrases
  • truncated and wildcard variations of search terms
  • subject headings (where applicable)

Each database works differently so you need to adapt your search strategy for each database. You may wish to develop a number of separate search strategies if your research covers several different areas.

It is a good idea to test your strategies and refine them after you have reviewed the search results.

How a search strategy looks in practice

Take a look at this example literature search in PsycINFO (PDF) about self-esteem.

The example shows the subject heading and keyword searches that have been carried out for each concept within our research question and how they have been combined using Boolean operators. It also shows where keyword techniques like truncation, wildcards and adjacency searching have been used.

Search strategy techniques

The next sections show some techniques you can use to develop your search strategy.

Skip straight to:

  • Choosing search terms
  • Searching with keywords
  • Searching for exact phrases
  • Using truncated and wildcard searches
  • Searching with subject headings
  • Using Boolean logic
  • Citation searching

Choose search terms.

Concepts can be expressed in different ways eg “self-esteem” might be referred to as “self-worth”. Your aim is to consider each of your concepts and come up with a list of the different ways they could be expressed.

To find alternative keywords or phrases for your concepts try the following:

  • Use a thesaurus to identify synonyms.
  • Search for your concepts on a search engine like Google Scholar, scanning the results for alternative words and phrases.
  • Examine relevant abstracts or articles for alternative words, phrases and subject headings (if the database uses subject headings).

When you've done this, you should have lists of words and phrases for each concept as in this completed PICO model (PDF) or this example concept map (PDF).

As you search and scan articles and abstracts, you may discover different key terms to enhance your search strategy.

Using truncation and wildcards can save you time and effort by finding alternative keywords.

Search with keywords

Keywords are free text words and phrases. Database search strategies use a combination of free text and subject headings (where applicable).

A keyword search usually looks for your search terms in the title and abstract of a reference. You may wish to search in title fields only if you want a small number of specific results.

Some databases will find the exact word or phrase, so make sure your spelling is accurate or you will miss references.

Search for the exact phrase

If you want words to appear next to each other in an exact phrase, use quotation marks, eg “self-esteem”.

Phrase searching decreases the number of results you get and makes your results more relevant. Most databases allow you to search for phrases, but check the database guide if you are unsure.

Truncation and wildcard searches

You can use truncated and wildcard searches to find variations of your search term. Truncation is useful for finding singular and plural forms of words and variant endings.

Many databases use an asterisk (*) as their truncation symbol. Check the database help section if you are not sure which symbol to use. For example, “therap*” will find therapy, therapies, therapist or therapists. A wildcard finds variant spellings of words. Use it to search for a single character, or no character.

Check the database help section to see which symbol to use as a wildcard.

Wildcards are useful for finding British and American spellings, for example: “behavio?r” in Medline will find both behaviour and behavior.

There are sometimes different symbols to find a variable single character. For example, in the Medline database, “wom#n” will find woman and also women.
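One way to picture what these symbols do is to translate them into regular expressions. Here is a hedged sketch in Python: the symbol conventions follow the Ovid examples above (* for any number of letters, ? for zero or one, # for exactly one), but `pattern_to_regex` is a hypothetical helper for illustration, not a real database feature.

```python
import re

def pattern_to_regex(term):
    """Translate database-style truncation/wildcard symbols into a regex:
    * = any number of letters, ? = zero or one letter, # = exactly one letter."""
    parts = []
    for ch in term:
        if ch == "*":
            parts.append("[a-z]*")
        elif ch == "?":
            parts.append("[a-z]?")
        elif ch == "#":
            parts.append("[a-z]")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts), re.IGNORECASE)

therap = pattern_to_regex("therap*")
behav = pattern_to_regex("behavio?r")
wom = pattern_to_regex("wom#n")

print([w for w in ["therapy", "therapists", "theory"] if therap.fullmatch(w)])  # ['therapy', 'therapists']
print([w for w in ["behaviour", "behavior"] if behav.fullmatch(w)])             # ['behaviour', 'behavior']
print([w for w in ["woman", "women", "womn"] if wom.fullmatch(w)])              # ['woman', 'women']
```

Note how "theory" is not matched by therap*, and "womn" is not matched by wom#n, because # requires exactly one letter.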

Use adjacency searching for more accurate results

You can specify how close two words appear together in your search strategy. This can make your results more relevant; generally the closer two words appear to each other, the closer the relationship is between them.

Commands for adjacency searching differ among databases, so make sure you consult database guides.

In OvidSP databases (like Medline), searching for “physician ADJ3 relationship” will find both physician and relationship within two major words of each other, in any order. This finds more papers than "physician relationship".

Using this adjacency retrieves papers with phrases like "physician patient relationship", "patient physician relationship", "relationship of the physician to the patient" and so on.

Search with subject headings

Database subject headings are controlled vocabulary terms that a database uses to describe what an article is about.

Watch our 3-minute introduction to subject headings video. You can also view the video using Microsoft Stream (link opens in a new window; available for University members only).

Using appropriate subject headings enhances your search and will help you to find more results on your topic. This is because subject headings find articles according to their subject, even if the article does not use your chosen key words.

You should combine both subject headings and keywords in your search strategy for each of the concepts you identify. This is particularly important if you are undertaking a systematic review or an in-depth piece of work.

Subject headings may vary between databases, so you need to investigate each database separately to find the subject headings they use. For example, for Medline you can use MeSH (Medical Subject Headings) and for Embase you can use the EMTREE thesaurus.

SEARCH TIP: In Ovid databases, search for a known key paper by title, select the "complete reference" button to see which subject headings the database indexers have given that article, and consider adding relevant ones to your own search strategy.

Use Boolean logic to combine search terms

Boolean operators (AND, OR and NOT) allow you to try different combinations of search terms or subject headings.

Databases often show Boolean operators as buttons or drop-down menus that you can click to combine your search terms or results.

The main Boolean operators are:

OR is used to find articles that mention either of the topics you search for.

AND is used to find articles that mention both of the searched topics.

NOT excludes a search term or concept. It should be used with caution as you may inadvertently exclude relevant references.

For example, searching for “self-esteem NOT eating disorders” finds articles that mention self-esteem but removes any articles that mention eating disorders.
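The three operators behave like set operations over the records in a database. A minimal sketch in Python (the record IDs and topic sets are invented for illustration):

```python
# Each set holds the IDs of records mentioning a topic (invented data).
self_esteem = {1, 2, 3, 5}       # records mentioning self-esteem
eating_disorders = {3, 4, 5}     # records mentioning eating disorders

either = self_esteem | eating_disorders    # OR: union -> records mentioning either topic
both = self_esteem & eating_disorders      # AND: intersection -> records mentioning both
excluded = self_esteem - eating_disorders  # NOT: difference -> self-esteem minus eating disorders

print(sorted(either))    # [1, 2, 3, 4, 5]
print(sorted(both))      # [3, 5]
print(sorted(excluded))  # [1, 2]
```

Note how NOT has discarded records 3 and 5 even though they mention self-esteem, which is why the guidance above says to use it with caution.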

Citation searching

Citation searching is a method of finding articles that cite, or are cited by, a publication you already know.

Use citation searching (or cited reference searching) to:

  • find out whether articles have been cited by other authors
  • find more recent papers on the same or similar subject
  • discover how a known idea or innovation has been confirmed, applied, improved, extended, or corrected
  • help make your literature review more comprehensive.

You can use cited reference searching in:

  • OvidSP databases
  • Google Scholar
  • Web of Science

Cited reference searching can complement your literature search. However, be careful not to rely on citation searching in isolation. A robust literature search is also needed to limit publication bias.

Charles Sturt University

Literature Review: Developing a search strategy


From research question to search strategy

Keeping a record of your search activity

Good search practice could involve keeping a search diary or document detailing your search activities (Phelps et al. 2007, pp. 128-149), so that you can keep track of effective search terms, or to help others to reproduce your steps and get the same results.

This record could be a document, table or spreadsheet with:

  • the names of the sources you search and which provider you accessed them through, eg Medline (Ovid), Web of Science (Thomson Reuters), plus any other literature sources you used
  • how you searched (keyword and/or subject headings)
  • which search terms you used (which words and phrases)
  • any search techniques you employed (truncation, adjacency, etc)
  • how you combined your search terms (AND/OR) - check out the Database Help guide for more tips on Boolean searching
  • the number of search results from each source and each strategy used - this can be the evidence you need to prove a gap in the literature, and confirms the importance of your research question
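If you prefer a spreadsheet, the record can be as simple as one row per search. A minimal sketch in Python - the column names and the example rows (including the result counts) are invented suggestions, not a standard:

```python
import csv

# One row per search: what you searched, how, and how many results you got.
fields = ["date", "source", "provider", "search_type", "terms", "techniques", "combined_with", "results"]
rows = [
    ["2024-05-01", "Medline", "Ovid", "subject heading", "Self Concept/", "-", "-", "1520"],
    ["2024-05-01", "Medline", "Ovid", "keyword", "self-esteem OR self-worth", "phrase searching", "1 OR 2", "2310"],
]

# Write the diary out as a CSV file you can open in any spreadsheet program.
with open("search_diary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(fields)
    writer.writerows(rows)
```

Each time you run or refine a search, append another row; the diary then doubles as the basis of a PRISMA-style reporting table if you later need one.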

A search planner may help you to organise your thoughts before conducting your search. If you have any problems organising your thoughts before, during or after searching, please contact your Library Faculty Team for individual help.

  • Literature search - a librarian's handout to introduce tools, terms and techniques. Created by Elsevier librarian Katy Kavanagh Web, this document outlines tools, terms and techniques to think about when conducting a literature search.
  • Search planner

Literature search cycle


Diagram text description

The diagram illustrates the literature search cycle as a circle in four quarters:

  • Identify main concepts - identify controlled vocabulary terms, synonyms, keywords and spelling.
  • Select library resources to search - the library catalogue, relevant journal articles and other resources.
  • Search resources - consider using Boolean, proximity and truncated searching techniques.
  • Review and refine results - evaluate results, rethink keywords and create alerts.

Have a search framework

Search frameworks are mnemonics which can help you focus your research question. They are also useful in helping you to identify the concepts and terms you will use in your literature search.

PICO is a search framework commonly used in the health sciences to focus clinical questions. As an example, suppose you work in an aged care facility and are interested in whether cranberry juice might help reduce the common occurrence of urinary tract infections. The PICO framework would look like this:

  • P (Population): people living in aged care facilities
  • I (Intervention): cranberry juice
  • C (Comparison): no cranberry juice
  • O (Outcome): reduced occurrence of urinary tract infections

Now that the issue has been broken down into its elements, it is easier to turn it into an answerable research question: “Does cranberry juice help reduce urinary tract infections in people living in aged care facilities?”

Other frameworks may be helpful, depending on your question and your field of interest. PICO can be adapted to PICOT (which adds Time), PICOS (which adds Study design), or PICOC (which adds Context).

For qualitative questions you could use:

  • SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

For questions about causes or risk:

  • PEO: Population, Exposure, Outcomes

For evaluations of interventions or policies:

  • SPICE: Setting, Population or Perspective, Intervention, Comparison, Evaluation, or
  • ECLIPSE: Expectation, Client group, Location, Impact, Professionals, SErvice

See the University of Notre Dame Australia’s examples of some of these frameworks. 

You can also try some PICO examples in the National Library of Medicine's PubMed training site: Using PICO to frame clinical questions.

Contact Your Faculty Team Librarian

Faculty librarians are here to provide assistance to students, researchers and academic staff by providing expert searching advice, research and curriculum support.

  • Faculty of Arts & Education team
  • Faculty of Business, Justice & Behavioural Science team
  • Faculty of Science team


  • Last Updated: May 12, 2024 12:18 PM
  • URL: https://libguides.csu.edu.au/review


Charles Sturt University is an Australian University, TEQSA Provider Identification: PRV12018. CRICOS Provider: 00005F.

Trinity College Dublin, The University of Dublin

Writing a Literature Review


Getting your search right

This section covers: keyword searches; widening your search with truncation and wildcards; combining your terms with search operators; being more specific with phrase and proximity searching; subject headings; combining keyword and subject heading searches; and using methodological search filters.


Test your strategy!


  • Search the database for each of the test records and make a note of the unique record number for each one - in Medline this is in the UI field.
  • Run your search strategy.
  • Run a search for all the record numbers for your test set using 'OR' in between each one.
  • Lastly combine the result of your search strategy with the test set using 'OR'.
  • If the number of records retrieved stays the same then the strategy has identified all the records. If it doesn't, combine the result of your search strategy with the test set, this time using 'NOT'. This will identify the records in your test set which are not being retrieved. Work out why these weren't retrieved and adjust your search strategy accordingly.
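The logic of that check can be sketched with Python sets. The record numbers below are invented; in practice you would use the database's own search history rather than code:

```python
# Unique record numbers (eg Medline's UI field) of your known-relevant test records.
test_set = {"11111", "22222", "33333"}
# Record numbers your search strategy retrieved (invented for illustration).
strategy = {"11111", "22222", "44444", "55555"}

combined = strategy | test_set                   # strategy OR test set
if len(combined) == len(strategy):
    print("Strategy retrieves every test record")
else:
    missed = test_set - strategy                 # the NOT step: test records the strategy missed
    print("Missed:", sorted(missed))             # Missed: ['33333']
```

Here the combined set is larger than the strategy's results, so the NOT step reveals that record 33333 was missed; the next step would be to work out why and adjust the strategy.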


Think about...

  • abbreviations
  • related terms
  • UK/US spellings
  • singular/plural forms of words
  • thesaurus terms (where available)

Your search is likely to be complex, involving multiple steps dealing with different subjects - what are often called "strands" or "strings" of the search. Look at the appendices of existing reviews for an idea of what's involved in creating a comprehensive search.

Most people should start by finding all the articles on Topic A, then moving on to Topic B, then Topic C (and so on), then combining those strands together using AND (see Combining your terms: search operators below). This will then give you results that mention all those topics.

You will then need to adapt (or "translate") your strategy for each database depending on the searching options available on each one. A core of terms is used across multiple databases - this is the "systematic" part - BUT with additions and subtractions as necessary. While the words in the title and abstract might remain the same, it's highly likely the thesaurus terms (if they exist) will be different across the databases. You may need to leave out some strings completely; for example, let's say you are doing a study that needs to find Randomised Controlled Trials (RCTs) on a particular disease and its treatment. You will be looking in multiple databases for words to do with the condition, and also words that are used for RCTs. But when you are looking in databases that are composed entirely of RCTs (trials registers), the part of a search looking for RCTs doesn’t need to be included as it's redundant.

The techniques described below will help ensure you cover everything. Contact your Subject Librarian if you would like guidance on constructing your search.

This video from the University of Reading gives a good overview of literature searching tips and tricks:

Jump to 01:45 for truncation and 05:46 for wildcards.

  • Contact your Subject Librarian
  • How to translate searches between certain databases A fantastic crib sheet from Karolinska Institutet University Library, showing how to translate searches between Medline (Ovid), PsycInfo (Ovid), Embase (Elsevier), Web of Science, Cinahl (Ebsco), Cochrane (Wiley), and PubMed.

Most searches have two elements for each topic - the "keywords" part and the "subject headings" part. When you are initially constructing your search and trialling it in a database, you are likely to just add your keywords, click Search, and see how many results that retrieves. But after that, for any type of comprehensive search, you should look at limiting your keywords to specific search fields.

A field in this context is where the database only looks at one aspect of the information about the article. Common examples are the Author, Title, Abstract, and Journal Name. More esoteric ones could be fields like the CAS Registry Entry or Corporate Author.

In complex reviews like systematic, scoping and rapid reviews, the accepted wisdom is to limit these "keyword" searches to the Title and Abstract fields, plus (if available, and the search is looking to be comprehensive) any available "Author Keyword" or "Contributed Indexing" fields. It is vital that the keywords you use in these fields are identical - you are using the same words in the Title, Abstract and any related fields - and that you combine them using OR (see Combining your terms: search operators below).

A title, abstract and keyword search in MEDLINE

Using keyword searching limited to the Title/Abstract/Keywords fields should reduce the number of results which are retrieved in error or are only on the periphery of your subject. If you do this, please be aware that you will need to ensure that you have definitely also included all relevant subject headings in your search strategy (in databases that use controlled vocabulary) otherwise you risk missing out on useful results. It *is* quite possible that there will be no relevant subject headings in a particular search.

Although some databases will automatically search for variant spellings, mostly they will just search for the exact letters you type in. Use wildcard and truncation symbols to take control of your search and include variations to widen your search and ensure you don't miss something relevant.

A truncation symbol (*) retrieves any number of letters - useful to find different word endings based on the root of a word: africa* will find africa, african, africans; agricultur* will find agriculture, agricultural, agriculturalist.

A wildcard symbol (?) replaces a single letter. It's useful for retrieving alternate spellings (ie British vs American English) and simple plurals: wom?n will find woman or women; behavio?r will find behaviour or behavior.

Hint: Not all databases use the * and ? symbols - some may use different ones (! instead of *, for example), or not have the feature at all, so check the online help section of the database before you start.


Search operators (also called Boolean operators) allow you to include multiple words and concepts in your searches. This means you can search for all of your terms at once rather than carrying out multiple searches for each concept.

There are three main operators:

  • OR - for combining alternative words for your concepts and widening your results, eg women OR gender
  • AND - for combining your concepts and giving more specific results, eg women AND Africa
  • NOT - to exclude specific terms from your search; use this with caution as you might exclude relevant results accidentally!

women OR female


Using OR will bring you back records containing either of your search terms. It will return items that include both terms, but will also return items that contain only one of the terms.

This will give you a broader range of results.

OR can be used to link together synonyms. These are then placed in brackets to show that they are all the same concept.

  • (cat OR kitten OR feline)
  • (women OR female)

women AND Africa

Using AND will find items that contain both of your search terms, giving you a more specific set of results.

If you're getting too many results, using AND can be a good way to narrow your search.

women NOT Africa

Using NOT will find articles containing a particular term, but will exclude articles containing your second term.

Use this with caution - by excluding results you might miss out on key resources.

  • Phrase searching
  • Proximity searching

Sometimes your search may contain common words (eg development, communication) which will retrieve too many irrelevant records, even when using an AND search. On many databases, including Google, to look for a specific phrase, use inverted commas:

  • "agricultural development"
  • "foot and mouth"

Your search will only bring back items containing these exact phrases.

Some databases automatically perform a phrase search if you do not use any search operators. For example, "agriculture africa" is not a phrase used in English so you may not find any items on the subject. Use AND in between your search words to avoid this.

On Scopus to search for an exact phrase use { } e.g. {agricultural development}. Using quotes on Scopus will find your words in the same field (e.g., title) but not necessarily next to one another. In this database, you need to be very careful with those brackets - {heart-attack} and {heart attack} will return different results because the dash is included. Wildcards are searched as actual characters, e.g. {health care?} returns results such as: Who pays for health care?

Some databases use proximity operators, which are a more advanced search function. You can use these to tell the database how close one word must be to another and, in some cases, in what order. This makes a search more specific and excludes irrelevant records.

For instance, if you were searching for references about women in Africa, you might retrieve irrelevant records for items about women published in Africa. Performing a proximity search will only retrieve the two words in the same sentence, making your search more accurate.

Each database has its own way of proximity searching, often with multiple ways of doing it, so it's important to check the online help before you start . Here are some examples of the variety of possible searches:

  • Web of Science: women SAME Africa - retrieves records where the words 'women' and 'Africa' appear in the same sentence
  • JSTOR: agricultural development ~5 - retrieves records where the words 'agricultural' and 'development' are within five words of one another
  • Scopus: agricultural W/2 development - retrieves records where the word 'agricultural' is within two words of the word 'development'.
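To see why proximity narrows a search, here is a rough sketch of the kind of rule a "within n words" operator applies. The exact semantics (ordering, how words are counted) vary by database, so this is illustrative only:

```python
def within_n_words(text, term_a, term_b, n):
    """True if term_a and term_b occur with at most n other words between them."""
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == term_a]
    pos_b = [i for i, w in enumerate(words) if w == term_b]
    return any(0 < abs(i - j) <= n + 1 for i in pos_a for j in pos_b)

title = "agricultural policy and development of women in east africa"
print(within_n_words(title, "agricultural", "development", 2))  # True: two words between them
print(within_n_words(title, "policy", "africa", 2))             # False: too far apart
```

A plain AND search would retrieve this title for both pairs of terms; the proximity rule keeps only the pair that is actually close together.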

After completing your keywords search on a topic, you can move on to looking for appropriate subject headings.

Most databases have this controlled vocabulary feature (standardised subject headings or thesaurus terms - a bit like standard tags) which can help ensure you capture all the relevant studies; for example, MEDLINE, CENTRAL and PubMed use the exact same headings, which are called MeSH (Medical Subject Headings). Some of these headings will be the same in other related databases like CINAHL, but many of them will be slightly different, could be the same but have subtly different meanings, or not be there at all.

Not all databases have these types of subject headings - Web of Science and JSTOR don't allow you to search for subject headings like these, although you can of course search for subjects in them.

The easiest way to search for a subject heading is to go to the relevant area in the database that searches specifically for them; this might be called something like Thesaurus, Subject Headings, or similar. Then search for some of the words to do with your topic - not all of them at once, just a word on its own or a very simple phrase. Does this bring anything up? When you read the description, are you talking about the same thing?

You can then tell it to search for everything listed under that subject heading, then move on to looking for another subject heading. It's quite common for one topic to have several relevant headings.

Once you have found all the relevant headings, and made the database run searches for them, you will then combine them together using OR.

After you have found all the title/abstract/keywords for Topic A, and then all the relevant subject headings, you then combine those together using OR. You may need to go into the Search History section of the database to do this, and work out whether you can tick boxes next to your various searches to combine them, or have to type out something like "#1 OR #2".

How to combine title/abstract/keyword searches with subject searches in Ebsco's MEDLINE

This gives you a "super search" with everything in the database on that topic. It's likely to be a lot!

It might be that adding them together gives no extra results than the amount in either the keywords or the subject headings on their own. This is unusual, but not impossible:

A combined title, abstract, keywords and subject heading search in MEDLINE

You now go back to the start and for Topic B do the same title/abstract/keywords searches, then the relevant subject headings searches, then combine them as above. Then Topic C, and so on. Again, each of these super searches may have very high numbers - possibly millions.

Finally, you then combine all these super searches together, but this time using AND; they need to mention all the topics. The more subjects you AND together, the fewer results you are likely to find, and it's possible that no articles in that database mention all those things. However, you can also end up with zero results because there is a mistake in your search, and in most cases having zero results won't allow you to write your paper or thesis. So contact us if you think you have too few or too many results, and we can advise.

Methodological search filters are predefined, tried and tested search terms or strategies that can be applied to a search to identify a particular aspect of the literature, for example:

  • Study types: 'systematic reviews', 'Randomised Controlled Trials'
  • Age groups: 'children', 'elderly'
  • Language: 'English'

They are available to select via the results filters displayed alongside your results and are normally applied at the very end of your search. For instance, on PubMed, after running your search it is possible to limit by 'Ages', which gives predefined groupings such as 'Infant: birth-23 months'. These limits and filters are not always the same across the databases, so do be careful.
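Conceptually, a filter is just a condition applied to each retrieved record. A toy sketch - the record fields and values are invented, and real databases apply filters through their own indexing rather than code like this:

```python
# Invented records standing in for database results.
records = [
    {"title": "Trial of drug A", "pub_type": "Randomized Controlled Trial", "language": "English"},
    {"title": "Review of drug A", "pub_type": "Review", "language": "English"},
    {"title": "Essai du traitement A", "pub_type": "Randomized Controlled Trial", "language": "French"},
]

def apply_filters(records, **criteria):
    """Keep only records matching every filter criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

rcts = apply_filters(records, pub_type="Randomized Controlled Trial", language="English")
print([r["title"] for r in rcts])  # ['Trial of drug A']
```

Because each criterion must match, stacking filters behaves like AND: every extra filter can only shrink the result set, which is why filters belong at the end of a search rather than the start.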

  • Last Updated: Oct 10, 2023 1:52 PM
  • URL: https://libguides.tcd.ie/literature-reviews

Researching for your literature review: Develop a search strategy


Identify key terms and concepts

Start developing a search strategy by identifying the key words and concepts within your research question. The aim is to identify the words likely to have been used in the published literature on this topic.

For example: What are the key infection control strategies for preventing the transmission of Meticillin-resistant Staphylococcus aureus (MRSA) in aged care homes?

Treat each component as a separate concept so that your topic is organised into separate blocks (concepts).

For each concept block, list the key words derived from your research question, as well as any other relevant terms or synonyms that you have found in your preliminary searches. Also consider singular and plural forms of words, variant spellings, acronyms and relevant index terms (subject headings).  

As part of the process of developing a search strategy, it is recommended that you keep a master list of search terms for each key concept. This will make it easier when it comes to translating your search strategy across multiple database platforms. 

Concept map template for documenting search terms

Combine search terms and concepts

Boolean operators are used to combine the different concepts in your topic to form a search strategy. The main operators used to connect your terms are AND and OR. See an explanation below:

  • Link keywords related to a single concept with OR
  • Linking with OR broadens a search (increases the number of results) by searching for any of the alternative keywords

Example: nursing home OR aged care home

  • Link different concepts with AND
  • Linking with AND narrows a search (reduces the number of results) by retrieving only those records that include all of your specified keywords

Example: nursing home AND infection control

  • using NOT narrows a search by excluding results that contain certain search terms
  • Most searches do not require the use of the NOT operator

Example: aged care homes NOT residential homes will retrieve all the results that include the words aged care homes but don't include the words residential homes. So if an article discussed both concepts, it would not be retrieved, as it would be excluded on the basis of the words residential homes.

See the website for Venn diagrams demonstrating the function of AND/OR/NOT:

Combine the search terms using Boolean
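Putting the two operators together, a strategy is essentially OR within each concept block and AND between the blocks. A sketch of assembling such a query string in Python - the concept terms below are illustrative, not a vetted strategy:

```python
# Each concept block lists alternative terms; phrases keep their quote marks.
concepts = {
    "setting": ['"nursing home"', '"aged care home"', '"residential aged care"'],
    "topic": ['"infection control"', '"cross infection"', "handwashing"],
}

def build_query(concepts):
    """OR the synonyms inside each block, then AND the blocks together."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    return " AND ".join(blocks)

print(build_query(concepts))
```

This prints one bracketed block per concept joined by AND, which is exactly the shape most advanced-search forms build for you when you enter terms row by row.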

Advanced search operators - truncation and wildcards

By using a truncation symbol you can capture all of the various endings possible for a particular word. This may increase the number of results and reduce the likelihood of missing something relevant. Some tips about truncation:

  • The truncation symbol is generally an asterisk (*) and is added at the end of a word.
  • It may be added to the root of a word that is a word in itself, eg prevent* will retrieve prevent, preventing, prevention, preventative etc. It may also be added to the root of a word that is not a word in itself, eg strateg* will retrieve strategy, strategies, strategic, strategize etc.
  • If you don't want to retrieve all possible variations, an easy alternative is to use the OR operator instead, eg strategy OR strategies. Always use OR instead of truncation where the root word is too short, eg ill OR illness instead of ill*.

There are also wildcard symbols that function like truncation but are often used in the middle of a word to replace zero, one or more characters.

  • Unlike the truncation symbol, which is usually an asterisk, wildcards vary across database platforms.
  • Common wildcard symbols are the question mark ? and hash #.
  • Example: wom#n finds woman or women; p?ediatric finds pediatric or paediatric.

See the Database search tips for details of these operators, or check the Help link in any database.

Phrase searching

For words that you want to keep as a phrase, place two or more words in "inverted commas" or "quote marks". This will ensure word order is maintained and that you only retrieve results that have those words appearing together.

Example: “nursing homes”

There are a few databases that don't require the use of quote marks such as Ovid Medline and other databases in the Ovid suite. The Database search tips provides details on phrase searching in key databases, or you can check the Help link in any database.

Subject headings (index terms)

Identify appropriate subject headings (index terms).

Many databases use subject headings to index content. These are selected from a controlled list and describe what the article is about. 

A comprehensive search strategy is often best achieved by using a combination of keywords and subject headings where possible.

You do not need in-depth knowledge of subject headings to benefit from using them: even basic use of subject headings in your searches can improve search performance.

Advantages of subject searching:

  • Helps locate articles that use synonyms, variant spellings, plurals
  • Search terms don’t have to appear in the title or abstract

Note: Subject headings are often unique to a particular database, so you will need to look for appropriate subject headings in each database you intend to use.

Subject headings are not available for every topic, and it is best to only select them if they relate closely to your area of interest.

MeSH (Medical Subject Headings)

The MeSH thesaurus provides standard terminology, imposing uniformity and consistency on the indexing of biomedical literature. In PubMed/MEDLINE, each record is tagged with MeSH (Medical Subject Headings).

The MeSH vocabulary includes:

  • Headings (descriptors), which represent concepts found in the biomedical literature. Some headings are commonly considered for every article (eg Species (including humans), Sex, Age groups (for humans), Historical time periods).
  • Subheadings (qualifiers), which are attached to MeSH headings to describe a specific aspect of a concept.
  • Publication types, which describe the type of publication being indexed; i.e., what the item is, not what the article is about (eg Letter, Review, Randomized Controlled Trial).
  • Supplementary concept records, which are terms in a separate thesaurus, primarily substance terms.

Create a 'gold set'

It is useful to build a ‘sample set’ or ‘gold set’ of relevant references before you develop your search strategy.

Sources for a 'gold set' may include:

  • key papers recommended by subject experts or supervisors
  • citation searching - looking at a reference list to see who has been cited, or using a citation database (eg. Scopus, Web of Science) to see who has cited a known relevant article
  • results of preliminary scoping searches.

The papers in your 'gold set' can then be used to help you identify relevant search terms:

  • Look up your 'gold set' articles in a database that you will use for your literature review. For the articles indexed in the database, look at the records to see what keywords and/or subject headings are listed.

The 'gold set' will also provide a means of testing your search strategy:

  • When an article in the sample set that is also indexed in the database is not retrieved, your search strategy can be revised in order to include it (see what concepts or keywords can be incorporated into your search strategy so that the article is retrieved).
  • If your search strategy is retrieving a lot of irrelevant results, look at the irrelevant records to determine why they are being retrieved. What keywords or subject headings are causing them to appear? Can you change these without losing any relevant articles from your results?
  • Information on the process of testing your search strategy using a gold set can be found in the systematic review guide
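The test described above can be reduced to simple set arithmetic. In this sketch the record IDs are invented for illustration: compare the gold set against what a candidate search actually retrieved, and any missed paper signals that the strategy needs revising.

```python
# Sketch: use a 'gold set' to test a search strategy.
# All IDs below are invented placeholders, not real database records.
gold_set = {"pmid_101", "pmid_202", "pmid_303"}          # known relevant papers
retrieved = {"pmid_101", "pmid_303", "pmid_404", "pmid_505"}  # search results

missed = gold_set - retrieved          # gold-set papers the search failed to find
recall = len(gold_set & retrieved) / len(gold_set)

print(missed)   # revise the strategy until this set is empty
print(recall)   # fraction of the gold set the search captured
```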

Example search strategy

A search strategy is the planned and structured organisation of terms used to search a database.

An example of a search strategy incorporating all three concepts, which could be applied to different databases, is shown below:

[Screenshot: search strategy entered into a database's Advanced search screen]

You will use a combination of search operators to construct a search strategy, so it’s important to keep your concepts grouped together correctly. This can be done with parentheses (round brackets), or by searching for each concept separately or on a separate line.

The above search strategy in a nested format (combined into a single line using parentheses) would look like:

("infection control*" OR "infection prevention") AND ("methicillin resistant staphylococcus aureus" OR "meticillin resistant staphylococcus aureus" OR MRSA) AND ("aged care home*" OR "nursing home*")
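A nested string like this can also be assembled mechanically. In the following Python sketch, build_query is a made-up helper (not any database's API) that groups each concept's synonyms with OR and joins the concept groups with AND:

```python
# Sketch: assemble a nested Boolean search string from grouped synonyms.
# build_query is a hypothetical helper used only for illustration.
def build_query(concepts):
    groups = []
    for synonyms in concepts:
        # Quote multi-word terms so databases treat them as phrases.
        terms = [f'"{t}"' if " " in t else t for t in synonyms]
        groups.append("(" + " OR ".join(terms) + ")")
    # Each OR group is one concept; the concepts are linked with AND.
    return " AND ".join(groups)

concepts = [
    ["infection control*", "infection prevention"],
    ["methicillin resistant staphylococcus aureus",
     "meticillin resistant staphylococcus aureus", "MRSA"],
    ["aged care home*", "nursing home*"],
]
print(build_query(concepts))
```

Building the string mechanically also makes it easy to regenerate the query after revising a concept's synonym list.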

Duke University Libraries

Literature Reviews

Creating a search strategy

When conducting a literature review, it is imperative to brainstorm a list of keywords related to your topic. Examining the titles, abstracts, and author-provided keywords of pertinent literature is a great starting point.

Things to keep in mind:

  • Alternative spellings (e.g., behavior and behaviour)
  • Variants and truncation (e.g., environ* = environment, environments, environmental, environmentally)
  • Synonyms (e.g., alternative fuels >> electricity, ethanol, natural gas, hydrogen fuel cells)
  • Phrases and double quotes (e.g., "food security" versus food OR security) 

One way to visually organize your thoughts is to create a table where each column represents one concept in your research question. For example, if your research question is...

Does social media play a role in the number of eating disorder diagnoses in college-aged women?

...then your table might look something like this:

Generative AI tools, such as chatbots, are actually quite helpful at this stage when it comes to brainstorming synonyms and other related terms. You can also look at author-provided keywords from benchmark articles (key papers related to your topic), databases' controlled vocabularies, or do a preliminary search and look through abstracts from relevant papers.

Generative AI tools: ChatGPT, Google Gemini (formerly Bard), Claude, Microsoft Copilot

For more information on how to incorporate AI tools into your research, check out the section on  AI Tools .

Boolean searching yields more effective and precise search results. Boolean operators include AND, OR, and NOT. These are logic-based words that help search engines narrow down or broaden search results.

Using the Operators

The Boolean operator  AND  tells a search engine that you want to find information about two (or more) search terms. For example, sustainability AND plastics. This will narrow down your search results because the search engine will only bring back results that include both search terms.

The Boolean operator  OR  tells the search engine that you want to find information about either search term you've entered. For example, sustainability OR plastics. This will broaden your search results because the search engine will bring back any results that have either search term in them.

The Boolean operator  NOT  tells the search engine that you want to find information about the first search term, but nothing about the second. For example, sustainability NOT plastics. This will narrow down your research results because the search engine will bring back only resources about the first search term (sustainability), but exclude any resources that include the second search term (plastics).
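The three operators behave like set operations on the sets of matching documents. A minimal Python illustration, using made-up document ID sets:

```python
# Illustration only: Boolean operators as set logic over matching documents.
# The numeric document IDs below are invented.
sustainability = {1, 2, 3}   # documents mentioning "sustainability"
plastics = {3, 4, 5}         # documents mentioning "plastics"

print(sustainability & plastics)  # AND: documents mentioning both  -> narrower
print(sustainability | plastics)  # OR: documents mentioning either -> broader
print(sustainability - plastics)  # NOT: sustainability, excluding plastics
```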

[Image: Venn diagram illustrating Boolean searching]

Some databases offer a thesaurus, controlled vocabulary, or list of available subject headings that are assigned to each of their records, either by an indexer or by the original author. The use of controlled vocabularies is a highly effective, efficient, and deliberate way of comprehensively discovering the material within a field of study.

  • APA Thesaurus of Psychological Index Terms  (via PsycInfo database)
  • Medical Subject Headings (MeSH)  (via PubMed)
  • List of ProQuest database thesauri

Web of Science's Core Collection offers a list of subject categories that are searchable by the Web of Science Categories field.

Reach out to a Duke University Libraries librarian at [email protected] or use the chat function.


While not essential for traditional literature reviews, documenting your search can help you:

  • Keep track of what you've done so that you don't repeat unproductive searches
  • Reuse successful search strategies for future papers
  • Help you describe your search process for manuscripts
  • Justify your search process

Documenting your search will help you stay organized and save time when tweaking your search strategy. This is a critical step for rigorous review papers, such as  systematic reviews .

One of the easiest ways to document your search strategy is to use a table like this:

[Table: example search documentation log]

If you find that you're receiving too many results, try the following tips:

  • Use more AND operators to connect keywords/concepts in order to narrow down your search.
  • Use more specific keywords rather than an umbrella term (e.g., "formaldehyde" instead of "chemical").
  • Use quotation marks (" ") to search an entire phrase.
  • Use filters such as date, language, document type, etc.
  • Examine your research question to see if it's still too broad.

On the other hand, if you're not receiving enough results:

  • Use more OR operators to connect related terms and bring in additional results.
  • Use more generic terms (e.g., "acetone" instead of "dimethyl ketone") or fewer keywords altogether.
  • Use wildcard operators (*) to expand your results (e.g., toxi* searches toxic, toxin, toxins).
  • Examine your research question to see if it's too narrow.

Grey (or gray) literature refers to research materials and publications that are not commercially published or widely distributed through traditional academic channels. If you are tasked with doing an intensive type of review or evidence synthesis, or you are involved in research related to policy-making, you will likely want to include searching for grey literature.   This type of literature includes:

  • working papers
  • government documents
  • conference proceedings
  • theses and dissertations
  • white papers, etc.

For more information on grey literature, please see our Grey Literature guide .

  • Public policy
  • Health/medicine
  • Statistics/data
  • Thesis/dissertation
  • ProQuest Central: search for articles from thousands of scholarly journals.
  • OpenDOAR: the quality-assured, global Directory of Open Access Repositories, hosting repositories that provide free, open access to academic outputs and resources.
  • OAIster: a catalog of millions of open-access resources harvested from WorldCat.
  • GreySource: an index of repository hyperlinks across all disciplines.
  • Pew Research Center: a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world, through public opinion polling, demographic research, content analysis and other data-driven social science research.
  • The World Bank: a vital source of financial and technical assistance to developing countries around the world.
  • World Health Organization (WHO) IRIS: the Institutional Repository for Information Sharing, a digital library of WHO's published material and technical information in full text produced since 1948.
  • PolicyArchive: a comprehensive digital library of public policy research containing over 30,000 documents.
  • Kaiser Family Foundation (KFF): an independent, nonpartisan source of health policy research, polling, and journalism for policymakers, the media, the health policy community, and the public.
  • MedNar: a free, medically-focused deep web search engine that uses Explorit Everywhere!, an advanced search technology by Deep Web Technologies, searching authoritative public and deep web resources and returning the most relevant results on one easily navigable page.
  • Global Index Medicus (GIM): provides worldwide access to biomedical and public health literature produced by and within low- and middle-income countries, with the aim of increasing the visibility and usability of these resources. The material is collated and aggregated by WHO Regional Office Libraries on a central search platform allowing retrieval of bibliographical and full-text information.

For more in-depth information related to grey literature searching in medicine, please visit Duke Medical Center Library's guide .

  • Education Resources Information Center (ERIC): a comprehensive, easy-to-use, searchable, Internet-based bibliographic and full-text database of education research and information, sponsored by the Institute of Education Sciences within the U.S. Department of Education.
  • NIOSHTIC-2: a searchable bibliographic database of occupational safety and health publications, documents, grant reports, and other communication products supported in whole or in part by the National Institute for Occupational Safety and Health (NIOSH, CDC).
  • National Technical Information Service (NTIS): acquires, indexes, abstracts, and archives the largest collection of U.S. government-sponsored technical reports in existence; the NTRL offers free and open online access to these authenticated government technical reports.
  • Science.gov: provides access to millions of authoritative scientific research results from U.S. federal agencies.
  • GovInfo: a service of the United States Government Publishing Office (GPO), a Federal agency in the legislative branch, providing free public access to official publications from all three branches of the Federal Government.
  • CQ Press Library: search for analysis of Congressional actions and US political issues; includes CQ Weekly and CQ Researcher.
  • Congressional Research Service (CRS): provides the public with access to research products produced by CRS for the United States Congress.

Please see the Data Sets and Collections page from our Statistical Sciences guide.

  • arXiv arXiv is a free distribution service and an open-access archive for nearly 2.4 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. Materials on this site are not peer-reviewed by arXiv.
  • OSF Preprints OSF Preprints is an open access option for discovering multidisciplinary preprints as well as postprints and working papers.
  • Last Updated: May 17, 2024 8:42 AM
  • URL: https://guides.library.duke.edu/litreviews

The University of Texas MD Anderson Cancer Center

Literature Search Basics

Develop a search strategy.

  • A search strategy includes  a combination of keywords, subject headings, and limiters (language, date, publication type, etc.)
  • A search strategy should be planned out and practiced before executing the final search in a database.
  • A search strategy and search results should be documented throughout the searching process.

What is a search strategy?

A search strategy is an organized combination of keywords, phrases, subject headings, and limiters used to search a database.

Your search strategy will include:

  • keywords 
  • boolean operators
  • variations of search terms (synonyms, suffixes)
  • subject headings 

Your search strategy  may  include:

  • truncation (where applicable)
  • phrases (where applicable)
  • limiters (date, language, age, publication type, etc.)

A search strategy usually requires several iterations. You will need to test the strategy along the way to ensure that you are finding relevant articles. It's also a good idea to review your search strategy with your co-authors. They may have ideas about terms or concepts you may have missed.

Additionally, each database is developed differently, so you will need to adjust your strategy for each database you search. For instance, Embase is a European database, and many of its medical terms differ slightly from those used in MEDLINE and PubMed.

Choose search terms

Start by writing down as many terms as you can think of that relate to your question. You might try  cited reference searching  to find a few good articles that you can review for relevant terms.

Remember that most terms or concepts can be expressed in different ways. A few things to consider:

  • synonyms: "cancer" may be referred to as "neoplasms", "tumors", or "malignancy"
  • abbreviations: spell out the word instead of abbreviating
  • generic vs. trade names of drugs

Search for the exact phrase

If you want words to appear next to each other in an exact phrase, use quotation marks, eg “self-esteem”.

Phrase searching decreases the number of results you get. Most databases allow you to search for phrases, but check the database guide if you are unsure.
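A toy Python illustration of why phrase searching narrows results: over the two mock records below (invented for this example), both contain the words nursing and homes, but only one contains them as the phrase.

```python
# Toy illustration: phrase search vs separate keywords over mock records.
# The records are invented; real databases match fields, not raw strings.
records = [
    "quality of care in nursing homes",
    "homes where nursing staff provide care",   # both words, not as a phrase
]

phrase_hits = [r for r in records if "nursing homes" in r]
keyword_hits = [r for r in records if "nursing" in r and "homes" in r]

print(len(phrase_hits), len(keyword_hits))  # the phrase search returns fewer
```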

Truncation and wildcards

Many databases use an asterisk (*) as their truncation symbol to find various word endings such as singulars and plurals. Check the database help section if you are not sure which symbol to use.

"Therap*"

retrieves: therapy, therapies, therapist or therapists.

Use a wildcard (?) to find different spellings like British and American spellings.

"Behavio?r" retrieves behaviour and behavior.

Searching with subject headings

Database subject headings are controlled vocabulary terms that a database uses to describe what an article is about.

Using appropriate subject headings enhances your search and will help you to find more results on your topic. This is because subject headings find articles according to their subject, even if the article does not use your chosen key words.

You should combine both subject headings and keywords in your search strategy for each of the concepts you identify. This is particularly important if you are undertaking a systematic review or an in-depth piece of work.

Subject headings may vary between databases, so you need to investigate each database separately to find the subject headings they use. For example, for MEDLINE you can use MeSH (Medical Subject Headings) and for Embase you can use the EMTREE thesaurus.

SEARCH TIP:  In Ovid databases, search for a known key paper by title, select the "complete reference" button to see which subject headings the database indexers have given that article, and consider adding relevant ones to your own search strategy.

Use Boolean logic to combine search terms


Boolean operators (AND, OR and NOT) allow you to try different combinations of search terms or subject headings.

Databases often show Boolean operators as buttons or drop-down menus that you can click to combine your search terms or results.

The main Boolean operators are:

OR is used to find articles that mention  either  of the topics you search for.

AND is used to find articles that mention  both  of the searched topics.

NOT excludes a search term or concept. It should be used with caution as you may inadvertently exclude relevant references.

For example, searching for “self-esteem NOT eating disorders” finds articles that mention self-esteem but removes any articles that mention eating disorders.

Adjacency searching 

Use adjacency operators to search by phrase or with two or more words in relation to one another. Adjacency searching commands differ among databases. Check the database help section if you are not sure which searching commands to use.

In Ovid Medline

"breast ADJ3 cancer" finds the word breast within three words of cancer, in any order.

This includes breast cancer or cancer of the breast.
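Adjacency can be mimicked outside a database with a regular expression. In this rough Python sketch the adj helper is hypothetical, treating ADJn loosely as "at most n intervening words, in either order"; real databases implement their own (and differing) semantics.

```python
import re

# Hypothetical illustration of adjacency searching. Real ADJ/NEAR operators
# are implemented by the database engine and their exact semantics vary.
def adj(word1, word2, n, text):
    gap = r"\W+(?:\w+\W+){0,%d}" % n   # up to n intervening words
    pattern = rf"\b{word1}{gap}{word2}\b|\b{word2}{gap}{word1}\b"
    return re.search(pattern, text, re.IGNORECASE) is not None

print(adj("breast", "cancer", 3, "breast cancer"))         # True
print(adj("breast", "cancer", 3, "cancer of the breast"))  # True
```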

Cited Reference Searching

Cited reference searching is a method of finding articles that cite, or are cited by, other publications.

Use cited reference searching to:

  • find keywords or terms you may need to include in your search strategy
  • find pivotal papers in the same or similar subject area
  • find pivotal authors in the same or similar subject area
  • track how a topic has developed over time

Cited reference searching is available through these tools:

  • Web of Science
  • Google Scholar
  • Last Updated: Nov 29, 2022 3:34 PM
  • URL: https://mdanderson.libguides.com/literaturesearchbasics


Best Practice for Literature Searching


Creating a search strategy

Once you have determined what your research question is and where you think you should search, you need to translate your question into a useable search. Doing so will:

  • Make it much more likely that you will find the relevant research and minimise false hits (irrelevant results)
  • Save you time in the long run
  • Help you to stay objective throughout your searching and stick to your plan
  • Help you replicate and update your results (where needed)
  • Help future researchers build on your research.

If you need to explore a topic first, your search strategy can initially be quite loose. You can then revisit search terms and update your search strategy accordingly. Record your search strategy as you develop it and capture the final version for each place that you search.

Remember that information retrieval in the area of food is complex because of the broadness of the field and the way in which content is indexed. As a result, there is often a high level of ‘noise’ when searching food topics in a database not designed for food content. Creating successful search strategies involves knowledge of a database, its scope, indexing and structure.

A successful search strategy includes:

  • Key concepts and meaningful terms
  • Keywords or subject headings
  • Alternative keywords
  • Care in linking concepts correctly
  • Regular evaluation of search results, to ensure that your search is focused
  • A detailed record of your final strategy. You will need to re-run your search at the end of the review process to catch any new literature published since you began.
  • Search matrix
  • Populated matrix
  • Revised matrix (after running searches)


  • DOWNLOAD THE SEARCH MATRIX

Using a search matrix helps you brainstorm and collect words to include in your search. To populate a search matrix:

  • Identify the main concepts in your search
  • Run initial searches with your terms, scanning the abstract and subject terms (sometimes called descriptors, keywords, MeSH headings, or thesaurus terms, depending on which database you are using) of relevant results for words to add to the matrix.
  • Explore a database thesaurus hierarchy for suitable broader and narrower terms.

Note: You don’t need to fill all of the boxes in a search matrix.

[Image: populated search matrix]

You will find that you need to run some trial searches as you experiment with your strategy, and this will help you refine it. For the search on this example question:

  • Some of the broader terms turned out to be too broad, introducing a host of irrelevant results about pork and chicken
  • Some of the narrower terms were unnecessary, as any result containing “beef extract” is captured by just using the term beef.

See the revised matrix (after running searches) tab!

[Image: revised search matrix]

This revised matrix shows both adjustments made to terms, and how the terms are connected with Boolean operators. Different forms of the same concept (the columns) are connected with OR, and the different concepts are connected with AND.

Search tools

  • Boolean operators
  • Phrases and proximity searching
  • Truncation and wildcards


Boolean operators tell a database or search engine how the terms you type are related to each other.  

Use OR to connect variations representing the same concept . In many search interfaces you will want to put your OR components inside parentheses like this: (safe OR “food safety” OR decontamination OR contamination OR disinfect*). These are now lumped together into a single food safety concept for your search.

Use AND to link different concepts. By typing (safe OR “food safety” OR decontamination OR contamination OR disinfect*) AND (beef OR “cattle carcasses”)—you are directing the database to display results containing both concepts.

NOT  eliminates all results containing a specific word.  Use NOT with caution. The term excluded might be used in a way you have not anticipated, and you will not know because you will not see the missing results.

Learn more about using Boolean operators:  Research Basics: Using Boolean Operators to Build a Search (ifis.org)

The search in the matrix above would look like this in a database:

("food safety"  OR  safety  OR  decontamination  OR  contamination  OR  disinfection)  AND  (thaw*  OR  defrost*  OR  "thawing medium")  AND  ("sensory quality attributes"  OR  "sensory perception"  OR  quality  OR  aroma  OR  appearance  OR  "eating quality"  OR  juiciness  OR  mouthfeel  OR  texture  OR  "mechanical properties"  OR  "sensory analysis"  OR  "rheological properties")  AND  (beef  OR  "cattle carcasses")
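Long nested strings like this are easy to mistype. As a small illustrative sketch (check_query is a hypothetical helper, not part of any database), you can at least verify that quotation marks and parentheses are balanced before pasting a query into a database:

```python
# Sketch: sanity-check a Boolean search string for balanced quotes and
# parentheses. A real database would report its own syntax errors.
def check_query(query):
    if query.count('"') % 2 != 0:
        return "unbalanced quotation marks"
    depth = 0
    for ch in query:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return "closing parenthesis before its opening"
    if depth != 0:
        return "unbalanced parentheses"
    return "ok"

print(check_query('("food safety" OR safety) AND (beef OR "cattle carcasses")'))
```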

Thesaurus terms will help you capture variations in words and spellings that researchers might use to refer to the same concept, but you can and should also use other mechanisms utilised by databases to do the same. This is especially important for searches in databases where the thesaurus is not specialised for food science.

  • Phrase searching: putting two or more words inside quotation marks, like “food safety”, will ensure that those words appear together as the phrase in a single field (i.e. title, abstract, or subject heading). Phrase searching can eliminate false hits where the words used separately do not represent the needed concept.
  • Some databases allow you to use proximity searching to specify that words need to be near each other. For instance, if you type ripening N5 cheese you will get results with a maximum of five words between ripening and cheese. You would get results containing cheese ripening as well as results containing ripening of semi-hard goat cheese.

Learn how to test if a phrase search or a proximity search is the better choice for your search:  Proximity searching, phrase searching, and Boolean AND: 3 techniques to focus your literature search (ifis.org)

Note: Proximity symbols vary from database to database. Some use N plus a number, while others use NEAR, ADJ or W. Always check the database help section to be sure that you are using the right symbols for that database.

Truncating a word means typing the start of a word followed by a symbol, usually an asterisk (*). This symbol tells the database to return the letters you have typed followed by any further letters, or none. It is an easy way to capture a concept that might be expressed with a variety of endings.

Sometimes you need to adjust where you truncate to avoid irrelevant results. See the difference between results for nutri* and nutrit*.

Inserting wildcard symbols into words covers spelling variations. In some databases, typing organi?ation would return results with organisation or organization, and flavo#r would bring back results with flavor or flavour.

Note: While the truncation symbol is often *, it can also be $ or !. Wildcard symbols also vary from database to database; $ or ? are sometimes used. Always check the database help section to be sure that you are using the right symbols for that database.

In building a search you can combine all the tools available to you. “Brewer* yeast”, which uses both phrase searching and truncation, will bring back results for brewer yeast, brewer’s yeast and brewers yeast, three variations which are all used in the literature.

Best Practice!

BEST PRACTICE RECOMMENDATION: Always check a database's help section to be sure that you are using the correct proximity, truncation or wildcard symbols for that database.

Handsearching

It is good practice to supplement your database searches with handsearching. This is the process of manually looking through the tables of contents of journals and conference proceedings to find studies that your database searches missed. A related activity is looking through the reference lists of relevant articles found through database searches. There are three reasons why doing both these things is a good idea:

  • If, through handsearching, you identify additional articles which are in the database you used but weren’t included in the results from your searches, you can look at the article records to consider if you need to adjust your search strategy. You may have omitted a useful variation of a concept from your search string.
  • Even when your search string is excellent, some abstracts and records don’t contain terms that allow them to be easily identified in a search, but are relevant to your research.
  • References might point to research published before the indexing began for the databases you are using.

For handsearching, target journals or conference proceedings that are clearly in the area of your topic and look through tables of contents. Sometimes valuable information within supplements or letters is not indexed within databases.

Academic libraries might subscribe to tools which can speed the process such as Zetoc  (which includes conference and journal contents) or Browzine (which only covers journals).  You can also see past and current issues’ tables of contents on a journal’s webpage.

Handsearching is a valuable but labour-intensive activity, so think carefully about where to invest your time.

Best practice!

BEST PRACTICE RECOMMENDATION: Ask a colleague, lecturer, or librarian to review your search strategy. This can be very helpful, especially if you are new to a topic. It adds credibility to your literature search and will help ensure that you are running the best search possible.

BEST PRACTICE RECOMMENDATION: Remember to save a detailed record of your searches so that you can re-run them shortly before you submit your project, to check whether any new relevant research has been published since you began. A good way to do this is to document:

  • Where the search was run
  • The exact search
  • The date it was run
  • The number of results

Keeping all this information will make it easy to see if your search picks up new results when you run it again.
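The four details above can be kept in any format; as a sketch, a simple structured record (field names are invented for illustration) makes the comparison between runs trivial:

```python
from datetime import date

# Hypothetical search log capturing the four recommended details:
# where the search was run, the exact search, the date, and the result count.
search_log = [
    {"database": "PsycINFO (Ovid)",
     "search": '"self-esteem" OR "self-worth"',
     "date_run": date(2024, 1, 15),
     "results": 412},
]

def new_results_since(entry, current_count):
    """How many results have appeared since the logged run."""
    return current_count - entry["results"]

# If re-running the saved search now returns 430 records, 18 are new:
assert new_results_since(search_log[0], 430) == 18
```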

BEST PRACTICE RECOMMENDATION: If you are publishing your research, take note of the journals that appear frequently in your search results: they indicate where publishing on your topic may have good impact.

  • Last Updated: May 17, 2024 5:48 PM
  • URL: https://ifis.libguides.com/literature_search_best_practice
  • Open access
  • Published: 14 August 2018

Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

  • Chris Cooper, ORCID: orcid.org/0000-0003-0864-5607
  • Andrew Booth
  • Jo Varley-Campbell
  • Nicky Britten
  • Ruth Garside

BMC Medical Research Methodology, volume 18, Article number: 85 (2018)


Abstract

Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

Method

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. This data was reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching.

Results

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search, aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.


Background

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research in the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Method

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group, the HTA-I Information Specialists’ Group and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table  1 sets out: the guidance document, the version audited, their core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [ 1 ]) located within each key stage of the search process. These studies are identified in Additional file 1: Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1). The search terms used were: (literature search*[Title/Abstract]) AND sysrev_methods[sb] and 586 results were returned. These results were sifted by CC for relevance to the key stages in Fig. 1.

Figure 1. The key stages of literature search guidance as identified from nine key texts

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported, and action is taken, that collectively would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was then read and re-read, first on a document-by-document basis and, secondly, across all the documents above, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings

We were able to identify consensus across the guidance on literature searching for systematic reviews suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [ 2 ]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [ 3 , 4 ]. This topic has emerged as increasingly important as indicated by the development of the PRESS instrument [ 5 ]. Text mining was discussed in four guidance documents [ 4 , 6 , 7 , 8 ] where the automation of some manual review work may offer efficiencies in literature searching [ 8 ].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised in Fig. 1. The data extraction table to inform Fig. 1 is reported in Table 2. Table 2 reports the areas of common agreement and demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance.

Eight documents provided guidance on who should undertake literature searching in systematic reviews [ 2 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ]. The guidance affirms that people with relevant expertise of literature searching should ‘ideally’ be included within the review team [ 6 ]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [ 2 , 7 , 8 , 9 , 10 , 11 ].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [ 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 ] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [ 13 ] in a variety of roles [ 12 , 16 , 20 , 21 , 24 , 25 , 26 ]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [ 17 , 18 , 27 , 28 ] and they argue that involvement of information specialists/librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. [ 29 ] demonstrated that involving a librarian as a co-author of a systematic review correlated with a higher score in the literature searching component of a systematic review [ 29 ]. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role emerges here too [ 30 ].

Knowing where to search for studies was noted as important in the guidance, with no agreement as to the appropriate number of databases to be searched [ 2 , 6 ]. Database (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [ 12 , 15 , 16 , 31 ].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [ 31 ] have questioned if the necessary involvement is actually happening [ 31 ].

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [ 6 , 7 , 8 , 9 , 10 ]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [ 4 ] or ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [ 2 ] or ‘A systematic literature search aims to identify all publications relevant to the particular research question’ [ 3 ]. The Joanna Briggs Institute reviewers’ manual was the only guidance document where a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [ 6 , 8 , 10 ]. Accordingly, eight of nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears uniform across the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy and prognosis require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [ 32 ]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards, however; the alternative, offered by purposive sampling, has been suggested in the specific context of qualitative evidence syntheses [ 33 ].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes a thorough and comprehensive literature search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [ 34 ], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

a search not restricted to English language;

where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and

at least one of the following search methods has been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, (iii) trials registers; and (iv) contacts with experts in the field [ 34 ].

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method in a review when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [ 35 ].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.
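Egger et al.'s criteria lend themselves to a simple checklist. The following sketch encodes them as a function; the field names and example data are invented for illustration, not drawn from the study:

```python
# Checklist sketch of Egger et al.'s definition of a comprehensive
# search for trials: (1) not restricted to English; (2) CENTRAL or at
# least two other databases searched; (3) at least one method used to
# identify unpublished trials.
def is_comprehensive(search):
    multiple_databases = ("CENTRAL" in search["databases"]
                          or len(search["databases"]) >= 2)
    unpublished = bool(search["unpublished_methods"])  # e.g. trials registers
    return (not search["english_only"]) and multiple_databases and unpublished

search = {
    "english_only": False,
    "databases": ["MEDLINE", "EMBASE"],
    "unpublished_methods": ["conference abstracts", "trials registers"],
}
assert is_comprehensive(search)

# Searching a single database fails the second criterion:
assert not is_comprehensive({**search, "databases": ["MEDLINE"]})
```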

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [ 34 ]. This view tallies with the Cochrane Handbook, which stipulates clearly, that study identification should be undertaken ‘within resource limits’ [ 9 ]. This would suggest that the limitations to comprehension are recognised but it raises questions on how this is decided and reported [ 36 ].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimize bias [ 6 , 8 , 10 , 34 , 37 , 38 , 39 ] since a systematic review based only on published (or easily accessible) studies may have an exaggerated effect size [ 35 ]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [ 40 ] and Tricco et al. summarize the evidence concerning bias and confounding in systematic reviews [ 35 ]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [ 34 , 35 , 40 , 41 , 42 , 43 , 44 , 45 , 46 ]. Comprehensive searches are not the sole factor to mitigate these biases but their contribution is thought to be significant [ 2 , 32 , 34 ]. Fehrmann (2011) suggests that a search process ‘described in detail’, in which standard comprehensive search techniques have been applied, increases confidence in the search results [ 32 ].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness where relevant studies were excluded from meta-analysis [ 34 , 47 ]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates. This is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies and that comprehensive searches could, in fact, introduce bias into a review rather than preventing it, through the identification of low quality studies then being included in the meta-analysis [ 34 ]. Studies query if identifying and including low quality or grey literature studies changes the estimate of effect [ 43 , 48 ] and question if time is better invested updating systematic reviews rather than searching for unpublished studies [ 49 ], or mapping studies for review as opposed to aiming for high sensitivity in literature searching [ 50 ].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and for reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [ 33 , 51 , 52 , 53 , 54 , 55 ]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [ 39 ] and demonstrating the ‘value’ of studies identified or missed is harder [ 56 ], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [ 57 ], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty from a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [ 58 ], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [ 36 ]. Different approaches to literature searching [ 37 , 38 , 59 , 60 , 61 , 62 ] and developing the concept of when to stop searching are important areas for further study [ 36 , 59 ].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [ 63 ]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [ 63 ].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [ 2 , 3 , 6 , 7 , 9 , 10 ]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [ 9 ].

Two key tasks were perceptible in preparing for a literature searching [ 2 , 6 , 7 , 10 , 11 ]. First, to determine if there are any existing or on-going reviews, or if a new review is justified [ 6 , 11 ]; and, secondly, to develop an initial literature search strategy to estimate the volume of relevant literature (and quality of a small sample of relevant studies [ 10 ]) and indicate the resources required for literature searching and the review of the studies that follows [ 7 , 10 ].

Three documents summarised guidance on where to search to determine if a new review was justified [ 2 , 6 , 11 ]. These focused on searching databases of systematic reviews (The Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [ 6 , 11 ]. It is worth noting, however, that as of 2015, DARE (and NHS EEDs) are no longer being updated and so the relevance of these resources will diminish over time [ 64 ]. One guidance document, ‘Systematic reviews in the Social Sciences’, noted, however, that databases are not the only source of information and unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [ 2 ].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [ 7 , 10 ].

The guidance offers the best available source on preparing the literature search, since the published studies do not typically report how their scoping informed the development of their search strategies, nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review, although this work is still exploratory [ 65 ]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review [ 66 , 67 ]. Preparing for literature searches and scoping constitutes an area for future research.

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported structure promoted to design a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO will be populated to develop the search strategy [ 1 , 4 , 7 , 8 , 9 ]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [ 4 ].

With the exclusion of The Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, selecting database indexing terms and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [ 10 ].

The use of limits, such as language or date limits, was discussed in all documents [ 2 , 3 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ].
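The guidance on combining synonyms and Boolean operators within a concept structure such as PICO can be sketched as follows. The concepts and terms are hypothetical examples, not drawn from the guidance documents:

```python
# Sketch of building a Boolean search string from PICO concept blocks:
# synonyms within a concept are combined with OR, concepts with AND.
def build_strategy(concepts):
    blocks = []
    for terms in concepts.values():
        blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

pico = {
    "Population": ["adolescents", "teenagers"],
    "Intervention": ["peer mentoring"],
    "Outcome": ["self-esteem", "self-worth"],
}
print(build_strategy(pico))
# ("adolescents" OR "teenagers") AND ("peer mentoring") AND ("self-esteem" OR "self-worth")
```

In practice each block would also include the database's indexing terms (subject headings) alongside the free-text synonyms shown here.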

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness so PICO – with its focus on intervention and comparator - is the dominant model used to structure literature search strategies [ 68 ]. PICOs – where the S denotes study design - is also commonly used in effectiveness reviews [ 6 , 68 ]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview on formulating questions for evidence based practice [ 69 ] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [ 55 ]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [ 69 ] and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [ 70 ]. SPIDER has been compared to PICO and PICOs in a study by Methley et al. [ 68 ].

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [ 4 ]. Glanville developed this idea in a study by Whiting et al. [ 71 ] and a worked example of this approach is included in the development of a search filter by Cooper et al. [ 72 ].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [ 73 ] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then develop key terms to include synonyms and controlled syntax. Hausner and colleagues set out the objective approach [ 73 ] and describe what may be done to validate it [ 74 ].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to focus literature searching to specific study designs or by other markers (such as by date) which limits the number of studies returned by a literature search. The use of limits should be described and the implications explored [ 34 ] since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches and this advice would usefully capture decisions on the use of search limits [ 75 ].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ to literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [ 4 , 9 , 11 ]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [ 2 , 3 , 4 , 6 , 7 , 8 , 10 ], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [ 2 , 6 ].

The guidance documents summarise ‘how to’ search bibliographic databases in detail and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (i.e. The Cochrane Handbook states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing the bibliographic databases available to search. Studies have explored the value in searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [ 76 ], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [ 77 ], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases to identify UK-relevant studies [ 78 ]. The host of the database (e.g. OVID or ProQuest) has been shown to alter the search returns offered. Younger and Boddy [ 79 ] report differing search returns from the same database (AMED) but where the ‘host’ was different [ 79 ].

The average number of bibliographic databases searched in systematic reviews has risen in the period 1994–2014 (from 1 to 4) [ 80 ] but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [ 48 ]. This is perhaps because the number of databases searched is the wrong question: researchers should focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to think about how to report this in studies to allow findings to be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or supplementary search methods [ 81 ]. Search summaries document both the yield and accuracy of searches, which could prospectively inform resource use and decisions to search or not to search specific databases in topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is to be considered representative and not unique). In offering a body of practice, this data would be of greater practicable use than current studies, which are considered as little more than individual case studies [ 82 , 83 , 84 , 85 , 86 , 87 , 88 , 89 , 90 ].
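The 'search summary' idea can be sketched as a small calculation of yield, contribution and precision per source. All source names and numbers below are invented for illustration:

```python
# Sketch of a search summary: for each source, record the yield (records
# retrieved) and how many included studies it contributed, then derive
# precision (included / yield) to compare the value of each source.
def search_summary(sources):
    return {name: {"yield": y, "included": inc, "precision": round(inc / y, 4)}
            for name, (y, inc) in sources.items()}

summary = search_summary({
    "MEDLINE": (1500, 12),
    "EMBASE": (2100, 9),
    "Handsearching": (40, 3),
})

# Handsearching retrieved far fewer records but at much higher precision:
assert summary["MEDLINE"]["precision"] == 0.008
assert summary["Handsearching"]["precision"] == 0.075
```

A table of such summaries accumulated across reviews is what would let teams judge, prospectively, which sources repay the searching effort in a given topic area.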

When to database search is another question posed in the literature. Beyer et al. [ 91 ] report that databases can be prioritised for literature searching which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first [ 91 ]. Paradoxically, this links to studies that suggest PubMed should be searched in addition to MEDLINE (OVID interface) since this improves the currency of systematic reviews [ 92 , 93 ]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching versus a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [ 94 ].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching that follows bibliographic database searching. As Table 2 sets out, the guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [ 95 ]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [9]. Four guidance documents [2, 3, 6, 9] acknowledged that searching beyond bibliographic databases was necessary, since ‘databases are not the only source of literature’ [2]. Only one document offered guidance on determining when to use supplementary methods: the IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’, which implies that the use of these methods is optional rather than mandatory. This contrasts with the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews are increasingly acknowledged [37, 61, 62, 96, 97, 98, 99, 100, 101], but the value of these methods in identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using them [95]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [95]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching and the extent to which these search methods should be used in systematic reviews.

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [2, 4, 6, 8, 10]. This guidance typically itemised available bibliographic management tools rather than offering guidance on how to use them [2, 4, 6, 8]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web searching) [10].

The published literature on using bibliographic management tools is small relative to the number of ‘how to’ videos on platforms such as YouTube (see, for example, [102]). These YouTube videos confirm the overall lack of ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in EndNote [103, 104], and Gall tests the direct search function within EndNote for accessing databases such as PubMed, finding a number of limitations [105]. Coar et al. and Ahmed et al. consider the role of the free, open-source tool Zotero [106, 107]. Managing references is a key administrative function in the review process, particularly for documenting searches as required by PRISMA guidance.
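The guidance names tools rather than methods, so the following is only an illustrative sketch of one common de-duplication idea: keying each exported record on a normalised title plus publication year and keeping the first record seen per key. The records and the `dedupe` helper are hypothetical, not drawn from any of the tools or handbooks discussed above.

```python
# Illustrative only: de-duplicate bibliographic records by a normalised
# (title, year) key. Real tools (e.g. EndNote, Zotero) use richer matching.
import re

def dedupe(records):
    """Keep the first record seen for each (normalised title, year) key."""
    seen = set()
    unique = []
    for rec in records:
        # Strip punctuation, spacing and case so trivially different
        # exports of the same title collide on the same key.
        title = re.sub(r"[^a-z0-9]", "", rec["title"].lower())
        key = (title, rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Self-esteem in adolescents", "year": 2010},
    {"title": "Self-Esteem in Adolescents.", "year": 2010},  # duplicate export
    {"title": "Self-esteem in adults", "year": 2012},
]
unique = dedupe(records)  # keeps 2 of the 3 records
```

A normalised-key approach like this misses genuine duplicates with variant titles or years, which is why the literature above compares purpose-built de-duplication workflows rather than relying on exact matching.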

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [9]. Six documents provided guidance on reporting the process of literature searching, with specific criteria to report [3, 4, 6, 8, 9, 10]. There was consensus on reporting: the databases searched (and the host platform through which each was searched), the search strategies used, and any use of limits, e.g. date, language, or search filters (the CRD handbook called for these limits to be justified [6]). Three guidance documents reported that the number of studies identified should be recorded [3, 6, 10]. The number of duplicates identified [10], the screening decisions [3], a comprehensive list of grey literature sources searched (with full detail for other supplementary search methods) [8], and an annotation of search terms tested but not used [4] were identified as unique items in four documents.
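As a purely illustrative sketch, the record type below bundles the consensus reporting items (database, host, strategy, limits) together with the number of records retrieved, which three documents ask for. The `SearchReport` class, field names and example values are our own invention, not drawn from the guidance or from PRISMA.

```python
# Illustrative only: one possible structured record of the reporting items
# on which the guidance documents agree.
from dataclasses import dataclass, field

@dataclass
class SearchReport:
    database: str                  # e.g. "MEDLINE"
    host: str                      # host platform, e.g. "Ovid"
    strategy: str                  # full search strategy as run
    limits: list = field(default_factory=list)  # limits, with justification
    records_retrieved: int = 0     # number of studies identified

report = SearchReport(
    database="MEDLINE",
    host="Ovid",
    strategy="1. self-esteem/  2. self-worth.ti,ab.  3. 1 or 2",
    limits=["English language only (no resources for translation)"],
    records_retrieved=412,
)
```

Keeping one such record per database searched would make the per-database detail that PRISMA-style flow diagrams summarise easy to reconstruct and audit.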

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in an appendix or additional file of the review [9].

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [108]. The guidance broadly requires the reporting of data that correspond with the requirements of the PRISMA statement, although documents typically ask for diverse and additional items [108]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [109], and this remains the case as of 2017, as evidenced in the guidance documents, in spite of the publication of the PRISMA guidelines in 2009 [110]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews, since it sets out clearly what was done and why the conclusions of the review can be believed [52, 109]. Despite strong endorsement in the guidance documents, specific support in PRISMA guidance, and other related reporting standards (such as ENTREQ for qualitative evidence synthesis and STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [31, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]. To explore the issues authors experience in reporting literature searches, and to examine the uptake of PRISMA, Radar et al. [120] surveyed over 260 review authors to determine common problems, and their work summarises the practical aspects of reporting literature searching [120]. Atkinson et al. [121] have also analysed reporting standards for literature searching, summarising recommendations and gaps in the reporting of search strategies [121].

One area that is less well covered by the guidance, but which nevertheless appears in this literature, is the quality appraisal, or peer review, of literature search strategies. The PRESS checklist is the most prominent example; it aims to provide evidence-based guidelines for the peer review of electronic search strategies [5, 122, 123]. A corresponding guideline for the documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Radar et al. (2014), 86% of respondents (153/178) identified a need for further guidance on which aspects of the literature search process to report [120]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [108]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [124]) and ROBIS (Whiting et al. [125]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, a proxy for what to report [124, 125]. Further research comparing PRISMA with quality appraisal checklists for systematic reviews would begin to address the call, identified by Radar et al., for further guidance on what to report [120].

Limitations

Other handbooks exist.

A potential limitation of this literature review is its focus on guidance produced in Europe (the UK specifically) and Australia. We justify our selection of the nine guidance documents reviewed in section “Identifying guidance”. In brief, these nine documents were selected as the health care guidance most relevant to UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [126], the Institute of Medicine [127], and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [128]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance independently of one another. The Cochrane Handbook, first published in 1994, is notably a key source of reference both in other guidance and in systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample to include North American handbooks, and guidance handbooks from other relevant countries, would alter the findings of this literature review or develop further support for the process model. Since we cannot be certain, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American, and other, guidance documents (undertaken before selecting the documents considered in this review), however, we do not consider that including these further handbooks would significantly alter the findings of this literature review.

This is a literature review

A further limitation of this review was that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could help contribute to the exploration and development of the key stages identified in this review.

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be the common convention across nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents, which suggests a consensus on the key stages of literature searching, and therefore on the process of literature searching as a whole, in systematic reviews.

In Table 2, we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods, and bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, there is little evidence of a consistent process, with diverse guidance across documents. This may reflect differences in the core focus of each document, linked, for instance, to differences between identifying effectiveness studies and identifying qualitative studies.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim, and that this process should be reported transparently so that it could be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which amounts to the same point.

Defining the key stages in this review helps to categorise the available scholarship and to prioritise areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews: it is where the structure of the search strategy is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process they report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Abbreviations

BeHEMoTh: Behaviour of interest; Health context; Exclusions; Models or Theories

CDSR: Cochrane Database of Systematic Reviews

CENTRAL: The Cochrane Central Register of Controlled Trials

DARE: Database of Abstracts of Reviews of Effects

ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research

IQWiG: Institute for Quality and Efficiency in Healthcare

NICE: National Institute for Clinical Excellence

PICO: Population, Intervention, Comparator, Outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SPICE: Setting, Perspective, Intervention, Comparison, Evaluation

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology

TSC: Trial Search Co-ordinators

References

Booth A. Unpacking your literature search toolbox: on search styles and tactics. Health Information & Libraries Journal. 2008;25(4):313–7.

Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.

Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG methods resources: 7. Information retrieval. 2014. Available from: https://www.ncbi.nlm.nih.gov/books/NBK385787/

NICE: National Institute for Health and Care Excellence. Developing NICE guidelines: the manual. 2014. Available from: https://www.nice.org.uk/media/default/about/what-we-do/our-programmes/developing-nice-guidelines-the-manual.pdf

Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. Peer Review of Electronic Search Strategies: PRESS; 2008.

Centre for Reviews & Dissemination. Systematic reviews – CRD’s guidance for undertaking reviews in healthcare. York: Centre for Reviews and Dissemination, University of York; 2009.

EUnetHTA: European Network for Health Technology Assessment. Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness. 2016. Available from: http://www.eunethta.eu/sites/default/files/Guideline_Information_Retrieval_V1-1.pdf

Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: Campbell Collaboration; 2017. Available from: https://www.campbellcollaboration.org/library/searching-for-studies-information-retrieval-guide-campbell-reviews.html

Lefebvre C, Manheimer E, Glanville J. Chapter 6: Searching for studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions; 2011.

Collaboration for Environmental Evidence. Guidelines for systematic review and evidence synthesis in environmental management. Environmental Evidence; 2013. Available from: http://www.environmentalevidence.org/wp-content/uploads/2017/01/Review-guidelines-version-4.2-final-update.pdf

The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2014 edition. The Joanna Briggs Institute; 2014. Available from: https://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

Beverley CA, Booth A, Bath PA. The role of the information specialist in the systematic review process: a health information case study. Health Inf Libr J. 2003;20(2):65–74.

Harris MR. The librarian's roles in the systematic review process: a case study. Journal of the Medical Library Association. 2005;93(1):81–7.

Egger JB. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One. 2015;10(5):e0125931.

Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.

McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

Weller AC. Mounting evidence that librarians are essential for comprehensive literature searches for meta-analyses and Cochrane reports. J Med Libr Assoc. 2004;92(2):163–4.

Swinkels A, Briddon J, Hall J. Two physiotherapists, one librarian and a systematic literature review: collaboration in action. Health Info Libr J. 2006;23(4):248–56.

Foster M. An overview of the role of librarians in systematic reviews: from expert search to project manager. EAHIL. 2015;11(3):3–7.

Lawson L. Operating outside library walls. 2004.

Vassar M, Yerokhin V, Sinnett PM, Weiher M, Muckelrath H, Carr B, et al. Database selection in systematic reviews: an insight through clinical neurology. Health Inf Libr J. 2017;34(2):156–64.

Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, et al. A competency framework for librarians involved in systematic reviews. Journal of the Medical Library Association : JMLA. 2017;105(3):268–75.

Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990-2012. Journal of the Medical Library Association : JMLA. 2013;101(4):268–77.

Crum JA, Cooper ID. Emerging roles for biomedical librarians: a survey of current practice, challenges, and changes. Journal of the Medical Library Association : JMLA. 2013;101(4):278–86.

Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.

Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–8.

Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Academic medicine : journal of the Association of American Medical Colleges. 2011;86(8):1049–54.

Meert D, Torabi N, Costella J. Impact of librarians on reporting of the literature searching component of pediatric systematic reviews. Journal of the Medical Library Association : JMLA. 2016;104(4):267–77.

Morris M, Boruff JT, Gore GC. Scoping reviews: establishing the role of the librarian. Journal of the Medical Library Association : JMLA. 2016;104(4):346–54.

Koffel JB, Rethlefsen ML. Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One. 2016;11(9):e0163309.

Fehrmann P, Thomas J. Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods. 2011;2(1):15–32.

Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews. 2016;5(1):74.

Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health technology assessment (Winchester, England). 2003;7(1):1–76.

Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, et al. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.

Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–5.

Papaioannou D, Sutton A, Carroll C, Booth A, Wong R. Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Inf Libr J. 2010;27(2):114–22.

Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Systematic Reviews. 2015;4(1):36.

Betrán AP, Say L, Gülmezoglu AM, Allen T, Hampson L. Effectiveness of different databases in identifying studies for systematic reviews: experience from the WHO systematic review of maternal morbidity and mortality. BMC Med Res Methodol. 2005;5

Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.

Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet (London, England). 1997;350(9074):326–9.

Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health technology assessment (Winchester, England). 2003;7(41):1–90.

Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

Mills EJ, Kanters S, Thorlund K, Chaimani A, Veroniki A-A, Ioannidis JPA. The effects of excluding treatments from network meta-analyses: survey. BMJ : British Medical Journal. 2013;347

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.

van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol. 2009;62(8):838–44.e3.

Buchberger B, Krabbe L, Lux B, Mattivi JT. Evidence mapping for decision making: feasibility versus accuracy - when to abandon high sensitivity in electronic searches. German medical science : GMS e-journal. 2016;14:Doc09.

Lorenc T, Pearson M, Jamal F, Cooper C, Garside R. The role of systematic reviews of qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods. 2012;3(1):1–10.

Gough D. Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Res Pap Educ. 2007;22(2):213–28.

Barroso J, Gollop CJ, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. West J Nurs Res. 2003;25(2):153–78.

Britten N, Garside R, Pope C, Frost J, Cooper C. Asking more of qualitative synthesis: a response to Sally Thorne. Qual Health Res. 2017;27(9):1370–6.

Booth A, Carroll C. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Health Info Libr J. 2015;32(3):220–35.

Kwon Y, Powelson SE, Wong H, Ghali WA, Conly JM. An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks. Syst Rev. 2014;3:135.

Nussbaumer-Streit B, Klerings I, Wagner G, Titscher V, Gartlehner G. Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and meta-epidemiologic study. Systematic Reviews. 2016;5:197.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Ogilvie D, Hamilton V, Egan M, Petticrew M. Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? J Epidemiol Community Health. 2005;59(9):804–8.

Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

Pearson M, Moxham T, Ashton K. Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof. 2011;34(3):297–308.

Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. 2015;10(1):19.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Briscoe S, Cooper C, Glanville J, Lefebvre C. The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation. Res Synth Methods. 2017;8(3):256–7.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: a discussion of some methods and challenges. Research Synthesis Methods.

Petrova M, Sutcliffe P, Fulford KW, Dale J. Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study. Journal of the American Medical Informatics Association : JAMIA. 2012;19(3):479–88.

Stansfield C, Thomas J, Kavanagh J. 'Clustering' documents automatically to support scoping reviews of research: a case study. Res Synth Methods. 2013;4(3):230–41.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–68.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43.

Whiting P, Westwood M, Bojke L, Palmer S, Richardson G, Cooper J, et al. Clinical effectiveness and cost-effectiveness of tests for the diagnosis and investigation of urinary tract infection in children: a systematic review and economic model. Health technology assessment (Winchester, England). 2006;10(36):iii-iv, xi-xiii, 1–154.

Cooper C, Levay P, Lorenc T, Craig GM. A population search filter for hard-to-reach populations increased search efficiency for a systematic review. J Clin Epidemiol. 2014;67(5):554–9.

Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Systematic Reviews. 2012;1(1):19.

Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S. Prospective comparison of search strategies for systematic reviews: an objective approach yielded higher sensitivity than a conceptual one. J Clin Epidemiol. 2016;77:118–24.

Craven J, Levay P. Recording database searches for systematic reviews - what is the value of adding a narrative to peer-review checklists? A case study of nice interventional procedures guidance. Evid Based Libr Inf Pract. 2011;6(4):72–87.

Wright K, Golder S, Lewis-Light K. What value is the CINAHL database when searching for systematic reviews of qualitative studies? Syst Rev. 2015;4:104.

Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, et al. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66(9):1051–7.

Cooper C, Rogers M, Bethel A, Briscoe S, Lowe J. A mapping review of the literature on UK-focused health and social care databases. Health Inf Libr J. 2015;32(1):5–22.

Younger P, Boddy K. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG. Health Inf Libr J. 2009;26(2):126–35.

Lam MT, McDiarmid M. Increasing number of databases searched in systematic reviews and meta-analyses between 1994 and 2014. Journal of the Medical Library Association : JMLA. 2016;104(4):284–9.

Bethel A. Search summary tables for systematic reviews: results and findings. HLC Conference; 2017.

Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews - are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16(1):161.

Adams CE, Frederick K. An investigation of the adequacy of MEDLINE searches for randomized controlled trials (RCTs) of the effects of mental health care. Psychol Med. 1994;24(3):741–8.

Kelly L, St Pierre-Hansen N. So many databases, such little clarity: searching the literature for the topic aboriginal. Canadian family physician Medecin de famille canadien. 2008;54(11):1572–3.

Lawrence DW. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion? Injury Prevention. 2008;14(6):401–4.

Lemeshow AR, Blum RE, Berlin JA, Stoto MA, Colditz GA. Searching one or two databases was insufficient for meta-analysis of observational studies. J Clin Epidemiol. 2005;58(9):867–73.

Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.

Stevinson C, Lawlor DA. Searching multiple databases for systematic reviews: added value or diminishing returns? Complementary Therapies in Medicine. 2004;12(4):228–32.

Suarez-Almazor ME, Belseck E, Homik J, Dorgan M, Ramos-Remus C. Identifying clinical trials in the medical literature with electronic databases: MEDLINE alone is not enough. Control Clin Trials. 2000;21(5):476–87.

Taylor B, Wylie E, Dempster M, Donnelly M. Systematically retrieving research: a case study evaluating seven databases. Res Soc Work Pract. 2007;17(6):697–706.

Beyer FR, Wright K. Can we prioritise which databases to search? A case study using a systematic review of frozen shoulder management. Health Info Libr J. 2013;30(1):49–58.

Duffy S, de Kock S, Misso K, Noake C, Ross J, Stirk L. Supplementary searches of PubMed to improve currency of MEDLINE and MEDLINE in-process searches via Ovid. Journal of the Medical Library Association : JMLA. 2016;104(4):309–12.

Katchamart W, Faulkner A, Feldman B, Tomlinson G, Bombardier C. PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. J Clin Epidemiol. 2011;64(7):805–7.

Cooper C, Lovell R, Husk K, Booth A, Garside R. Supplementary search methods were more effective and offered better value than bibliographic database searching: a case study from public health and environmental enhancement (in Press). Research Synthesis Methods. 2017;

Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review (in press). Systematic Reviews. 2017.

Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ (Clinical research ed). 2005;331(7524):1064–5.


Hinde S, Spackman E. Bidirectional citation searching to completion: an exploration of literature searching methods. PharmacoEconomics. 2015;33(1):5–11.

Levay P, Ainsworth N, Kettle R, Morgan A. Identifying evidence for public health guidance: a comparison of citation searching with web of science and Google scholar. Res Synth Methods. 2016;7(1):34–45.

McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias RS, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ (Clinical research ed). 1998;317(7172):1562–3.

Westphal A, Kriston L, Holzel LP, Harter M, von Wolff A. Efficiency and contribution of strategies for finding randomized controlled trials: a case study from a systematic review on therapeutic interventions of chronic depression. Journal of public health research. 2014;3(2):177.

Matthews EJ, Edwards AG, Barker J, Bloor M, Covey J, Hood K, et al. Efficient literature searching in diffuse topics: lessons from a systematic review of research on communicating risk to patients in primary care. Health Libr Rev. 1999;16(2):112–20.

Bethel A. EndNote Training (YouTube videos). 2017b. Available from: http://medicine.exeter.ac.uk/esmi/workstreams/informationscience/is_resources,_guidance_&_advice/ .

Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in EndNote. Journal of the Medical Library Association : JMLA. 2016;104(3):240–3.

Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. Journal of the Medical Library Association : JMLA. 2017;105(1):84–7.

Gall C, Brahmi FA. Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly. Medical reference services quarterly. 2004;23(3):25–32.

Ahmed KK, Al Dhubaib BE. Zotero: a bibliographic assistant to researcher. J Pharmacol Pharmacother. 2011;2(4):303–5.

Coar JT, Sewell JP. Zotero: harnessing the power of a personal bibliographic manager. Nurse Educ. 2010;35(5):205–7.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No consensus exists on search reporting methods for systematic reviews. J Clin Epidemiol. 2008;61(8):748–54.

Toews LC. Compliance of systematic reviews in veterinary journals with preferred reporting items for systematic reviews and meta-analysis (PRISMA) literature search reporting guidelines. Journal of the Medical Library Association : JMLA. 2017;105(3):233–9.

Booth A. "brimful of STARLITE": toward standards for reporting literature searches. Journal of the Medical Library Association : JMLA. 2006;94(4):421–9, e205.

Faggion CM Jr, Wu YC, Tu YK, Wasiak J. Quality of search strategies reported in systematic reviews published in stereotactic radiosurgery. Br J Radiol. 2016;89(1062):20150878.

Mullins MM, DeLuca JB, Crepaz N, Lyles CM. Reporting quality of search methods in systematic reviews of HIV behavioral interventions (2000–2010): are the searches clearly explained, systematic and reproducible? Research Synthesis Methods. 2014;5(2):116–30.

Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. Journal of the Medical Library Association : JMLA. 2009;97(1):21–9.

Bigna JJ, Um LN, Nansseu JR. A comparison of quality of abstracts of systematic reviews including meta-analysis of randomized controlled trials in high-impact general medicine journals before and after the publication of PRISMA extension for abstracts: a systematic review and meta-analysis. Syst Rev. 2016;5(1):174.

Akhigbe T, Zolnourian A, Bulters D. Compliance of systematic reviews articles in brain arteriovenous malformation with PRISMA statement guidelines: review of literature. Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia. 2017;39:45–8.

Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One. 2011;6(11):e27611.

Wasiak J, Tyack Z, Ware R, Goodwin N, Faggion CM Jr. Poor methodological quality and reporting standards of systematic reviews in burn care management. Int Wound J. 2016.

Tam WW, Lo KK, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open. 2017;7(2):e013905.

Rader T, Mann M, Stansfield C, Cooper C, Sampson M. Methods for documenting systematic review searches: a discussion of common issues. Res Synth Methods. 2014;5(2):98–115.

Atkinson KM, Koenka AC, Sanchez CE, Moshontz H, Cooper H. Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res Synth Methods. 2015;6(1):87–95.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.

Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009;62(9):944–52.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Relevo R, Balshem H. Finding evidence for comparing medical interventions: AHRQ and the effective health care program. J Clin Epidemiol. 2011;64(11):1168–77.

Institute of Medicine. Standards for Systematic Reviews. 2011. Available from: http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews/Standards.aspx .

CADTH: Resources 2018.


Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Authors and affiliations

Institute of Health Research, University of Exeter Medical School, Exeter, UK

Chris Cooper & Jo Varley-Campbell

HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

Nicky Britten

European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK

Ruth Garside


Contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Cooper.

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1: Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Cooper, C., Booth, A., Varley-Campbell, J. et al. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18, 85 (2018). https://doi.org/10.1186/s12874-018-0545-3


Received: 20 September 2017

Accepted: 06 August 2018

Published: 14 August 2018

DOI: https://doi.org/10.1186/s12874-018-0545-3


  • Literature Search Process
  • Citation Chasing
  • Tacit Models
  • Unique Guidance
  • Information Specialists

BMC Medical Research Methodology

ISSN: 1471-2288


  • University of Michigan Library
  • Research Guides

Systematic Reviews

Developing an Answerable Question

  • Creating a search strategy
  • Identifying synonyms & related terms
  • Keywords vs. index terms
  • Combining search terms using Boolean operators
  • A systematic review search strategy
  • Search limits


Validated Search Filters

Depending on your topic, you may be able to save time in constructing your search by using specific search filters (also called "hedges") developed & validated by researchers in the Health Information Research Unit (HiRU) of McMaster University, under contract from the National Library of Medicine. These filters can be found on:

  • PubMed’s Clinical Queries & Health Services Research Queries pages
  • Ovid Medline’s Clinical Queries filters
  • Embase & PsycINFO
  • EBSCOhost’s main search page for CINAHL (Clinical Queries category)
  • HiRU’s Nephrology Filters page
  • American U of Beirut, esp. for "humans" filters
  • Countway Library of Medicine methodology filters
  • InterTASC Information Specialists' Sub-Group Search Filter Resource
  • SIGN (Scottish Intercollegiate Guidelines Network) filters page

Why Create a Sensitive Search?

In many literature reviews, you try to balance the sensitivity of the search (how many potentially relevant articles you find) & specificity (how many definitely relevant articles you find), realizing that you will miss some. In a systematic review, you want a very sensitive search: you are trying to find any potentially relevant article. A systematic review search will:

  • contain many synonyms & variants of search terms
  • use care in adding search filters
  • search multiple resources, databases & grey literature, such as reports & clinical trials

PICO is a good framework to help clarify your systematic review question.

P – Patient, Population or Problem: What are the important characteristics of the patients &/or problem?

I – Intervention: What do you plan to do for the patient or problem?

C – Comparison: What, if anything, is the alternative to the intervention?

O – Outcome: What is the outcome that you would like to measure?
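As a rough illustration, a PICO question can be held as a small data structure so that each element keeps its own list of alternative terms gathered during term harvesting. The class name and the example terms below are ours, not part of any framework:

```python
from dataclasses import dataclass, field

@dataclass
class PicoQuestion:
    # Each PICO element maps to a list of synonyms/variants
    # (hypothetical example terms for illustration only).
    population: list = field(default_factory=list)
    intervention: list = field(default_factory=list)
    comparison: list = field(default_factory=list)
    outcome: list = field(default_factory=list)

# Illustrative question: exercise for depression in adolescents
q = PicoQuestion(
    population=["adolescent", "teenager", "young person"],
    intervention=["exercise", "physical activity"],
    comparison=["usual care"],
    outcome=["depression", "depressive symptoms"],
)

# Print each element with its synonyms ORed together
for element, terms in vars(q).items():
    print(f"{element}: {' OR '.join(terms)}")
```

Keeping the synonyms per element makes it straightforward to review and extend each concept line before any database syntax is applied.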

Beyond PICO: the SPIDER tool for qualitative evidence synthesis.

5-SPICE: the application of an original framework for community health worker program design, quality improvement and research agenda setting.

A well constructed search strategy is the core of your systematic review and will be reported on in the methods section of your paper. The search strategy retrieves the majority of the studies you will assess for eligibility & inclusion. The quality of the search strategy also affects what items may have been missed.  Informationists can be partners in this process.

For a systematic review, it is important to broaden your search to maximize the retrieval of relevant results.

Use keywords: how might other people describe the topic?

Identify the appropriate index terms (subject headings) for your topic.

  • Index terms differ by database (e.g., MeSH, or Medical Subject Headings, in MEDLINE/PubMed; Emtree terms in Embase; subject headings in other databases) and are assigned by experts based on the article's content.
  • Check the indexing of sentinel articles (3–6 articles that are fundamental to your topic). Sentinel articles can also be used to test your search results.

Include spelling variations (e.g., behavior, behaviour).

Both types of search terms are useful & both should be used in your search.

Keywords help to broaden your results. They will be searched for at least in journal titles, author names, article titles, & article abstracts. They can also be tagged to search all text.

Index/subject terms  help to focus your search appropriately, looking for items that have had a specific term applied by an indexer.

Boolean operators let you combine search terms in specific ways to broaden or narrow your results.
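A common combining pattern, ORing synonyms within each concept and ANDing the concepts together, can be sketched in a few lines of Python. The function name and the concept lists are illustrative, not a prescribed tool:

```python
def build_search(concepts):
    """Combine synonym lists into one Boolean search string:
    OR joins synonyms within a concept, AND joins the concepts."""
    groups = []
    for terms in concepts:
        # Quote multi-word phrases so they are searched exactly.
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

search = build_search([
    ["self-esteem", "self-worth"],   # concept 1 and a synonym
    ["adolescen*", "teenager*"],     # concept 2, truncated variants
])
print(search)
# (self-esteem OR self-worth) AND (adolescen* OR teenager*)
```

The parentheses matter: they force the ORs to be evaluated before the ANDs, which is exactly the broadening-within, narrowing-between behaviour described above.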


An example of a search string for one concept in a systematic review.


In this example from a PubMed search, [mh] = MeSH & [tiab] = Title/Abstract, a more focused version of a keyword search.
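Following that pattern, a field-tagged string for one concept might be assembled as below. [mh] and [tiab] are PubMed's MeSH and Title/Abstract field tags as noted above; the helper function and the example terms are ours:

```python
def pubmed_concept(mesh_terms, tiab_terms):
    """Build one concept block for a PubMed search: MeSH headings
    ORed with title/abstract keywords, using PubMed field tags."""
    parts = [f'"{t}"[mh]' for t in mesh_terms]
    parts += [f"{t}[tiab]" for t in tiab_terms]
    return "(" + " OR ".join(parts) + ")"

concept = pubmed_concept(
    mesh_terms=["Self Concept"],          # illustrative MeSH heading
    tiab_terms=["self-esteem", "self-worth"],
)
print(concept)
# ("Self Concept"[mh] OR self-esteem[tiab] OR self-worth[tiab])
```

Concept blocks built this way can then be ANDed together into the full strategy, mirroring the structure shown in the search-string example.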

A typical database search limit allows you to narrow results so that you retrieve articles that are most relevant to your research question. Limit types vary by database & include:

  • Article/publication type
  • Publication dates

In a systematic review search, you should use care when applying limits, as you may lose articles inadvertently. For more information, particularly regarding language & format limits, see the Cochrane Handbook (2008), section 6.4.9.


Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

Chris Cooper

1 Institute of Health Research, University of Exeter Medical School, Exeter, UK

Andrew Booth

2 HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Jo Varley-Campbell

Nicky Britten

3 Institute of Health Research, University of Exeter Medical School, Exeter, UK

Ruth Garside

4 European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK


Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. These data were reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching.

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search, aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.

Electronic supplementary material

The online version of this article (10.1186/s12874-018-0545-3) contains supplementary material, which is available to authorized users.

Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research in the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group, the HTA-I Information Specialists’ Group and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table  1 sets out: the guidance document, the version audited, their core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Guidance documents audited for this literature review

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [ 1 ]) located within each key stage of the search process. These studies are identified in Additional file 1: Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1). The search terms used were: (literature search*[Title/Abstract]) AND sysrev_methods[sb], which returned 586 results. These results were sifted for relevance to the key stages in Fig. 1 by CC.


The key stages of literature search guidance as identified from nine key texts

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported, and action is taken, that collectively would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was then read and re-read, first on a document-by-document-basis and, secondly, across all the documents above, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings.

We were able to identify consensus across the guidance on literature searching for systematic reviews suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [ 2 ]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [ 3 , 4 ]. This topic has emerged as increasingly important as indicated by the development of the PRESS instrument [ 5 ]. Text mining was discussed in four guidance documents [ 4 , 6 – 8 ] where the automation of some manual review work may offer efficiencies in literature searching [ 8 ].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised in Fig. 1. The data extraction table informing Fig. 1 is reported in Table 2. Table 2 reports the areas of common agreement and demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

The order of literature search methods as presented in the guidance documents

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance.

Eight documents provided guidance on who should undertake literature searching in systematic reviews [ 2 , 4 , 6 – 11 ]. The guidance affirms that people with relevant expertise of literature searching should ‘ideally’ be included within the review team [ 6 ]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [ 2 , 7 – 11 ].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [ 12 – 26 ] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [ 13 ] in a variety of roles [ 12 , 16 , 20 , 21 , 24 – 26 ]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [ 17 , 18 , 27 , 28 ] and they argue that involvement of information specialists/ librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. [ 29 ] demonstrated that involving a librarian as a co-author to a systematic review correlated with a higher score in the literature searching component of a systematic review [ 29 ]. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role is made here too [ 30 ].

Knowing where to search for studies was noted as important in the guidance, with no agreement as to the appropriate number of databases to be searched [ 2 , 6 ]. Database (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [ 12 , 15 , 16 , 31 ].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [ 31 ] have questioned if the necessary involvement is actually happening [ 31 ].

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [ 6 – 10 ]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [ 4 ] or ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [ 2 ] or ‘A systematic literature search aims to identify all publications relevant to the particular research question’ [ 3 ]. The Joanna Briggs Institute reviewers’ manual was the only guidance document where a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [ 6 , 8 , 10 ]. Accordingly, eight of nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears as uniform within the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy and prognosis, require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [ 32 ]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards however; the alternative, offered by purposive sampling, has been suggested in the specific context of qualitative evidence syntheses [ 33 ].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes a thorough and comprehensive literature search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [ 34 ], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

  • a search not restricted to English language;
  • where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and
  • at least one of the following search methods had been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, or (iii) trials registers; or (iv) contacts with experts in the field [ 34 ].
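Egger et al.'s three criteria lend themselves to a simple checklist. The sketch below encodes them as a hypothetical Python function; the field names are our own illustration, not drawn from the paper.

```python
# Hypothetical sketch: Egger et al.'s criteria for a "comprehensive" trial
# search expressed as a checklist function. Field names are illustrative.

def is_comprehensive(search: dict) -> bool:
    """Return True if a search record meets all three Egger et al. criteria."""
    databases = set(search.get("databases", []))
    # Criterion 1: the search is not restricted to English language.
    not_english_restricted = not search.get("english_only", True)
    # Criterion 2: Cochrane CENTRAL or at least two other electronic databases.
    enough_databases = "CENTRAL" in databases or len(databases - {"CENTRAL"}) >= 2
    # Criterion 3: at least one method aimed at identifying unpublished trials.
    unpublished_methods = {"conference_abstracts", "theses",
                           "trial_registers", "expert_contact"}
    any_unpublished = bool(unpublished_methods & set(search.get("methods", [])))
    return not_english_restricted and enough_databases and any_unpublished

search = {"english_only": False,
          "databases": ["MEDLINE", "EMBASE"],
          "methods": ["trial_registers"]}
print(is_comprehensive(search))  # True
```

A search restricted to English, or one using a single database with no supplementary method, would fail the check.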

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method in a review when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [ 35 ].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [ 34 ]. This view tallies with the Cochrane Handbook, which stipulates clearly that study identification should be undertaken ‘within resource limits’ [ 9 ]. This would suggest that limits to comprehensiveness are recognised, but it raises questions about how such limits are decided and reported [ 36 ].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimise bias [ 6 , 8 , 10 , 34 , 37 – 39 ], since a systematic review based only on published (or easily accessible) studies may report an exaggerated effect size [ 35 ]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [ 40 ] and Tricco et al. summarise the evidence concerning bias and confounding in systematic reviews [ 35 ]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [ 34 , 35 , 40 – 46 ]. Comprehensive searches are not the sole factor mitigating these biases, but their contribution is thought to be significant [ 2 , 32 , 34 ]. Fehrmann (2011) suggests that describing the search process in detail, and applying standard comprehensive search techniques, increases confidence in the search results [ 32 ].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness where relevant studies were excluded from meta-analysis [ 34 , 47 ]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates, which is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies, and that comprehensive searches could in fact introduce bias into a review rather than prevent it, by identifying low quality studies that are then included in the meta-analysis [ 34 ]. Studies question whether identifying and including low quality or grey literature studies changes the estimate of effect [ 43 , 48 ], whether time is better invested in updating systematic reviews than in searching for unpublished studies [ 49 ], and whether mapping studies for review is preferable to aiming for high sensitivity in literature searching [ 50 ].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and in reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [ 33 , 51 – 55 ]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [ 39 ], and demonstrating the ‘value’ of studies identified or missed is harder [ 56 ], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [ 57 ], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty from a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [ 58 ], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [ 36 ]. Different approaches to literature searching [ 37 , 38 , 59 – 62 ] and developing the concept of when to stop searching are important areas for further study [ 36 , 59 ].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [ 63 ]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [ 63 ].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [ 2 , 3 , 6 , 7 , 9 , 10 ]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [ 9 ].

Two key tasks were perceptible in preparing for a literature search [ 2 , 6 , 7 , 10 , 11 ]. The first is to determine if there are any existing or on-going reviews, or if a new review is justified [ 6 , 11 ]; the second is to develop an initial literature search strategy to estimate the volume of relevant literature (and the quality of a small sample of relevant studies [ 10 ]) and to indicate the resources required for literature searching and the review of the studies that follows [ 7 , 10 ].

Three documents summarised guidance on where to search to determine if a new review was justified [ 2 , 6 , 11 ]. These focused on searching databases of systematic reviews (the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [ 6 , 11 ]. It is worth noting, however, that as of 2015, DARE (and NHS EED) are no longer being updated, so the relevance of these resources will diminish over time [ 64 ]. One guidance document, ‘Systematic reviews in the Social Sciences’, noted, however, that databases are not the only source of information and that unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [ 2 ].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [ 7 , 10 ].

The guidance offers the best available source on preparing the literature search, since published studies do not typically report how scoping informed the development of their search strategies, nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review, although this work is still exploratory [ 65 ]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review [ 66 , 67 ]. Preparing for literature searches and scoping constitutes an area for future research.

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported structure promoted to design a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO are populated to develop the search strategy [ 1 , 4 , 7 – 9 ]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [ 4 ].

With the exclusion of The Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, selecting database indexing terms and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [ 10 ].

The use of limits, such as language or date limits, was discussed in all documents [ 2 – 4 , 6 – 11 ].

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness, so PICO, with its focus on intervention and comparator, is the dominant model used to structure literature search strategies [ 68 ]. PICOs, where the S denotes study design, is also commonly used in effectiveness reviews [ 6 , 68 ]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview on formulating questions for evidence based practice [ 69 ] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [ 55 ]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [ 69 ]; and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [ 70 ]. SPIDER has been compared to PICO and PICOs in a study by Methley et al. [ 68 ].
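A structure such as PICO translates into a Boolean search string mechanically: synonyms within a concept are combined with OR, and the concepts themselves with AND. The minimal sketch below illustrates this; the function and the example terms are hypothetical, not drawn from the guidance.

```python
# Illustrative sketch: a PICO-structured question held as synonym lists,
# combined into a Boolean string (OR within concepts, AND between them).

def build_query(concepts: dict) -> str:
    """Combine synonym lists into a single Boolean search string."""
    groups = []
    for terms in concepts.values():
        # Quote multi-word phrases so databases treat them as exact phrases.
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

pico = {
    "Population": ["adolescents", "teenagers"],
    "Intervention": ["exercise", "physical activity"],
    "Outcome": ["self-esteem", "self-worth"],
}
print(build_query(pico))
# (adolescents OR teenagers) AND (exercise OR "physical activity") AND (self-esteem OR self-worth)
```

In practice each concept group would also include the database's controlled vocabulary terms (e.g. subject headings), which this sketch omits.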

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [ 4 ]. Glanville developed this idea in a study by Whiting et al. [ 71 ] and a worked example of this approach is included in the development of a search filter by Cooper et al. [ 72 ].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [ 73 ] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then develop key terms to include synonyms and controlled syntax. Hausner and colleagues set out the objective approach [ 73 ] and describe what may be done to validate it [ 74 ].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to focus literature searching to specific study designs, or by other markers (such as date), which restricts the number of studies returned by a literature search. The use of limits should be described and their implications explored [ 34 ], since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches, and this advice would usefully capture decisions on the use of search limits [ 75 ].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ of literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [ 4 , 9 , 11 ]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [ 2 – 4 , 6 – 8 , 10 ], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [ 2 , 6 ].

The guidance documents summarise ‘how to’ search bibliographic databases in detail, and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (e.g. the Cochrane Handbook states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing the bibliographic databases available to search. Studies have explored the value of searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [ 76 ], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [ 77 ], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases in identifying UK-relevant studies [ 78 ]. The host of the database (e.g. OVID or ProQuest) has also been shown to alter the search returns offered: Younger and Boddy [ 79 ] report differing search returns from the same database (AMED) when accessed via different hosts.

The average number of bibliographic databases searched in systematic reviews rose in the period 1994–2014 (from 1 to 4) [ 80 ], but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [ 48 ]. Perhaps the number of databases searched is the wrong question: researchers should focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to consider how to report this in studies so that findings can be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or from supplementary search methods [ 81 ]. Search summaries document both the yield and the accuracy of searches, which could prospectively inform resource use and decisions to search, or not to search, specific databases in topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is representative and not unique). In offering a body of practice, these data would be of greater practical use than current studies, which are considered little more than individual case studies [ 82 – 90 ].
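A search summary of the kind Bethel proposes could be emulated with a simple tally of where each included study was found. The sketch below uses invented data to derive, per source, the yield (share of included studies found) and precision (included studies per record retrieved); the data structure is our own illustration.

```python
# Hypothetical sketch of a "search summary": for each source, record how many
# records it returned and which included studies it found, then derive yield
# and precision. All figures are invented for illustration.

included = {"study_A", "study_B", "study_C"}
sources = {
    "MEDLINE":    {"retrieved": 1200, "hits": {"study_A", "study_B"}},
    "EMBASE":     {"retrieved": 900,  "hits": {"study_B"}},
    "Handsearch": {"retrieved": 40,   "hits": {"study_C"}},
}

for name, s in sources.items():
    found = s["hits"] & included              # included studies this source found
    yield_pct = 100 * len(found) / len(included)
    precision = len(found) / s["retrieved"]   # relevant records per record sifted
    print(f"{name}: yield {yield_pct:.0f}%, precision {precision:.4f}")
```

Aggregated across many reviews in a topic area, such summaries are what could turn individual case studies into a usable body of practice.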

When to search databases is another question posed in the literature. Beyer et al. [ 91 ] report that databases can be prioritised for literature searching which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first [ 91 ]. Paradoxically, this links to studies that suggest PubMed should be searched in addition to MEDLINE (OVID interface), since this improves the currency of systematic reviews [ 92 , 93 ]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method, in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching against a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [ 94 ].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching which follows bibliographic database searching. As Table 2 sets out, guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [ 95 ]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [ 9 ]. Four guidance documents [ 2 , 3 , 6 , 9 ] acknowledged that searching beyond bibliographic databases was necessary since ‘databases are not the only source of literature’ [ 2 ]. Only one document reported any guidance on determining when to use supplementary methods: the IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’, which implies that the use of these methods is optional rather than mandatory. This is in contrast to the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged [ 37 , 61 , 62 , 96 – 101 ], but the value of these search methods in identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using them [ 95 ]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [ 95 ]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching and the extent to which these search methods should be used in systematic reviews.

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [ 2 , 4 , 6 , 8 , 10 ]. This guidance typically itemised available bibliographic management tools rather than offering guidance on how to use them specifically [ 2 , 4 , 6 , 8 ]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web-searching) [ 10 ].

The literature on using bibliographic management tools is not large relative to the number of ‘how to’ videos on platforms such as YouTube (see for example [ 102 ]). These YouTube videos confirm the overall lack of ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in EndNote [ 103 , 104 ], and Gall tests the direct search function within EndNote to access databases such as PubMed, finding a number of limitations [ 105 ]. Coar et al. and Ahmed et al. consider the role of the free, open-source tool Zotero [ 106 , 107 ]. Managing references is a key administrative function in the process of review, particularly for documenting searches as required by PRISMA guidance.
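De-duplication is conceptually simple, as the sketch below shows: match records on DOI where one is present, otherwise on a normalised title. This is a deliberate simplification for illustration; the EndNote methods described by Bramer et al. are considerably more robust (matching on authors, year, journal and page numbers as well).

```python
# Illustrative sketch of reference de-duplication: records match if they
# share a DOI (case-insensitive) or a normalised title. Simplified; a real
# workflow would compare more fields.

import re

def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        # Strip punctuation, spacing and case so trivially different titles match.
        title = re.sub(r"[^a-z0-9]", "", rec["title"].lower())
        keys = {k for k in (doi, title) if k}
        if keys & seen:          # matches an earlier record on DOI or title
            continue
        seen |= keys
        unique.append(rec)
    return unique

refs = [
    {"title": "Self-esteem in adolescents", "doi": "10.1000/x1"},
    {"title": "Self-Esteem in Adolescents.", "doi": ""},          # same title, no DOI
    {"title": "Self-esteem in adolescents", "doi": "10.1000/X1"}, # same DOI
]
print(len(dedupe(refs)))  # 1
```

Note that the number of duplicates removed (here, two) is itself a reporting item in the guidance discussed below.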

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 9 ]. Six documents provided guidance on reporting the process of literature searching, with specific criteria to report [ 3 , 4 , 6 , 8 – 10 ]. There was consensus on reporting: the databases searched (and the host via which they were searched), the search strategies used, and any use of limits (e.g. date, language, search filters; the CRD handbook called for these limits to be justified [ 6 ]). Three guidance documents reported that the number of studies identified should be recorded [ 3 , 6 , 10 ]. The number of duplicates identified [ 10 ], the screening decisions [ 3 ], a comprehensive list of grey literature sources searched (and full detail for other supplementary search methods) [ 8 ], and an annotation of search terms tested but not used [ 4 ] were identified as unique items in four documents.
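The consensus reporting items could be captured in a structured per-database log. The sketch below is illustrative only: the field names and the MEDLINE strategy lines are our own invention, not prescribed by any of the guidance documents.

```python
# Illustrative sketch: one log record per database/host combination, covering
# the consensus reporting items (database, host, strategy, limits, counts).
# Field names and example values are hypothetical.

search_log = {
    "database": "MEDLINE",
    "host": "Ovid",
    "date_searched": "2017-03-01",
    "strategy": ["exp Self Concept/", "self-esteem.ti,ab.", "1 OR 2"],
    "limits": {"language": None, "dates": "1990-2017", "filters": []},
    "records_identified": 1542,
    "duplicates_removed": 210,
}

# Printing the strategy as numbered lines reproduces the familiar
# line-by-line format used when reporting database searches.
for line_no, line in enumerate(search_log["strategy"], start=1):
    print(f"{line_no}. {line}")
```

Keeping one such record per source (plus entries for each supplementary search method) gives exactly the material a PRISMA-style flow diagram and search appendix require.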

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in an additional file of the review [ 9 ].

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [ 108 ]. The guidance broadly requires the reporting of data that corresponds with the requirements of the PRISMA statement although documents typically ask for diverse and additional items [ 108 ]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [ 109 ] and this remains the case as of 2017, as evidenced in the guidance documents, and in spite of the publication of the PRISMA guidelines in 2009 [ 110 ]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews since it sets out clearly what was done and how the conclusions of the review can be believed [ 52 , 109 ]. Despite strong endorsement in the guidance documents, specific support in PRISMA guidance, and other related reporting standards (such as ENTREQ for qualitative evidence synthesis and STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [ 31 , 110 – 119 ]. To explore the issues authors experience in reporting literature searches, and to examine the uptake of PRISMA, Rader et al. [ 120 ] surveyed over 260 review authors to determine common problems; their work summarises the practical aspects of reporting literature searching [ 120 ]. Atkinson et al. [ 121 ] have also analysed reporting standards for literature searching, summarising recommendations and gaps in the reporting of search strategies [ 121 ].

One area that is less well covered by the guidance, but which nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent example; it provides evidence-based guidance for the peer review of electronic search strategies [ 5 , 122 , 123 ]. A corresponding guideline for the documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Rader et al. (2014), 86% of survey respondents (153/178) identified a need for further guidance on what aspects of the literature search process to report [ 120 ]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [ 108 ]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [ 124 ]) and ROBIS (Whiting et al. [ 125 ]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, a proxy for what to report [ 124 , 125 ]. Further research comparing PRISMA with quality appraisal checklists for systematic reviews would begin to address the call, identified by Rader et al., for further guidance on what to report [ 120 ].

Limitations

Other handbooks exist.

A potential limitation of this literature review is the focus on guidance produced in Europe (the UK specifically) and Australia. We justify the decision for our selection of the nine guidance documents reviewed in this literature review in section “ Identifying guidance ”. In brief, these nine guidance documents were selected as the most relevant health care guidance that inform UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [ 126 ], The Institute of Medicine [ 127 ] and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [ 128 ]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance uniquely. The Cochrane Handbook, first published in 1994, is notably a key source of reference in guidance and systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample of guidance handbooks to include North American handbooks, and guidance handbooks from other relevant countries too, would alter the findings of this literature review or develop further support for the process model. Since we cannot be clear, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American, and other, guidance documents (before selecting the guidance documents considered in this review), however, we do not consider that the inclusion of these further handbooks would alter significantly the findings of this literature review.

This is a literature review

A further limitation of this review was that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could help contribute to the exploration and development of the key stages identified in this review.

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be common convention in nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents which suggests consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews.

In Table 2, we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods. Bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, there is little evidence of a consistent process, with guidance diverging across documents. This may reflect differences in the core focus of each document, linked to differences in identifying effectiveness studies or qualitative studies, for instance.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim and that this process should be reported transparently so that it can be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, it is clear that comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which is approximately the same point.

Defining the key stages in this review helps categorise the scholarship available, and it prioritises areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews. It is where search strategy structure is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of using the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process which they each report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as that for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Additional file

Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Abbreviations

Authors’ contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Chris Cooper, Email: [email protected] .

Andrew Booth, Email: [email protected] .

Jo Varley-Campbell, Email: [email protected] .

Nicky Britten, Email: [email protected] .

Ruth Garside, Email: [email protected] .

How to undertake a literature search: a step-by-step guide

Affiliation

  • 1 Literature Search Specialist, Library and Archive Service, Royal College of Nursing, London.
  • PMID: 32279549
  • DOI: 10.12968/bjon.2020.29.7.431

Undertaking a literature search can be a daunting prospect. Breaking the exercise down into smaller steps will make the process more manageable. This article suggests 10 steps that will help readers complete this task, from identifying key concepts to choosing databases for the search and saving the results and search strategy. It discusses each of the steps in a little more detail, with examples and suggestions on where to get help. This structured approach will help readers obtain a more focused set of results and, ultimately, save time and effort.

Keywords: Databases; Literature review; Literature search; Reference management software; Research questions; Search strategy.

MeSH terms:

  • Databases, Bibliographic*
  • Information Storage and Retrieval / methods*
  • Nursing Research
  • Review Literature as Topic*

MCPHS Library Logo

Literature Reviews & Search Strategies

  • Defining the Literature Review
  • Types of Literature Reviews
  • Choosing Databases

Overview of Search Strategies

  • Search Strategies
  • Subject Searching
  • Example: Iteratively Developing + Using Keywords
  • Demonstration: Developing Keywords from a Question
  • Demonstration: An Advanced Search

  • Organizing Your Literature
  • Books: Research Design & Scholarly Writing
  • Recommended Tutorials

There are many ways to find literature for your review, and we recommend that you use a combination of strategies - keeping in mind that you're going to be searching multiple times in a variety of ways, using different databases and resources. Searching the literature is not a straightforward, linear process - it's iterative (translation: you'll search multiple times, modifying your strategies as you go, and sometimes it'll be frustrating). 

  • Known Item Searching
  • Citation Jumping

Some form of keyword search is the way most of us get at scholarly articles in databases - it's a great approach! Make sure you're familiar with these librarian strategies to get the most out of your searches.

Figuring out the best keywords for your research topic/question is a process - you'll start with one or a few words and then shift, adapt, and expand them as you start finding sources that describe the topic using other words. Your search terms are the bridge between known topics and the unknowns of your research question - sometimes one specific word will be enough, sometimes you'll need several different words to describe a concept AND you'll need to connect that concept to a second (and/or third) concept.

The number and specificity of your search terms depend on your topic and the scope of your literature review.

Connect Keywords Using Boolean
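As a rough illustration (not any real database's API), Boolean operators behave like set operations on the records matching each term:

```python
# Illustrative sketch: each set holds the IDs of records matching a term.
docs_self_esteem = {1, 2, 3, 5}
docs_self_worth = {3, 4, 6}
docs_adolescents = {2, 3, 4, 5}

# OR broadens: the union of synonyms for one concept
concept = docs_self_esteem | docs_self_worth      # {1, 2, 3, 4, 5, 6}

# AND narrows: the intersection of two concepts
combined = concept & docs_adolescents             # {2, 3, 4, 5}

# NOT excludes (use sparingly - it can discard relevant records)
filtered = combined - docs_self_worth             # {2, 5}

print(sorted(filtered))  # [2, 5]
```

OR makes the result set bigger, AND makes it smaller - which is why synonyms for the same concept are ORed together first, then the concepts are ANDed.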

Truncation

...uses the asterisk (*) to truncate a word at its root, allowing you to retrieve many more documents containing variations of the search term. Example: educat* will find educate, educates, education, educators, educating and more.
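A hypothetical sketch of what the database does with a truncated term - educat* behaves roughly like a regular expression anchored at the word's root:

```python
import re

# Sketch only: real databases implement truncation internally;
# here educat* is modelled as a regex matching any word starting "educat".
pattern = re.compile(r"\beducat\w*", re.IGNORECASE)

titles = [
    "Educating nurses in the community",
    "Higher education funding",
    "Patient educators and self-care",
    "Nutrition in early childhood",
]
matches = [t for t in titles if pattern.search(t)]
print(matches)  # the first three titles match; the last does not
```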

Phrase Searching

...is when you put quotation marks around two or more words, so that the database looks for those words in that exact order. Examples: "higher education", "public health" and "pharmaceutical industry".
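The difference between phrase searching and searching the words separately can be sketched like this (illustrative records, not real database output):

```python
# Two made-up record titles, lowercased for simplicity.
records = [
    "a public health approach to community care",
    "the health effects of public housing",
]
phrase = "public health"

# Phrase search: the words must be adjacent and in order.
phrase_hits = [r for r in records if phrase in r]

# Separate keywords: each word anywhere in the record.
keyword_hits = [r for r in records
                if all(w in r.split() for w in phrase.split())]

print(len(phrase_hits), len(keyword_hits))  # 1 2
```

Both records contain "public" and "health" somewhere, but only the first contains the exact phrase - phrase searching is more precise.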

Controlled Vocabulary

... is when you use the terms the database uses to describe what each article is about as search terms. Searching using controlled vocabularies is a great way to get at everything on a topic in a database.  

Databases and search engines will probably bring back a lot of results - more than a human can realistically go through. Instead of trying to read and sort them all manually, use the filters in each database to remove the results you wouldn't use anyway (i.e. anything outside the scope of your project).

To make sure you're consistent between searches and databases, write down the filters you're using.

A Few Filters to Try

Once you know you have a good article, there are a lot of useful parts to it - far beyond the content.

Not sure where to start? Try course readings and other required materials.

Useful Parts of a Good Article

Ways to use citations.

  • Interactive Tutorial: Searching Cited and Citing - practice starting your search at an article and using its references to gather additional sources.

Older sources feed into the found article as references, while the found article is cited by more recent publications.

Your search results don't have to be frozen in the moment you search! There are a few things you can set up to keep your search going automatically.

Searching using subject headings is a comprehensive search strategy that requires some planning and topic knowledge. Work through this PubMed tutorial for an introduction to this important approach to searching.

Tutorial: PubMed Subject Search - How it Works

Through these videos and the accompanying PDF, you'll see an example of starting with a potential research question and developing search terms through brainstorming and keyword searching.

  • Slidedeck: Keywords and Advanced Search PowerPoint slides to accompany the two demonstration videos on developing keywords from a question, and doing an advanced search.
  • Last Updated: Jun 14, 2023 11:18 AM
  • URL: https://mcphs.libguides.com/litreviews

University of Derby

Literature Reviews: systematic searching at various levels

  • for assignments
  • for dissertations / theses
  • Search strategy and searching
  • Boolean Operators

Search strategy template

  • Screening & critiquing
  • Citation Searching
  • Google Scholar (with Lean Library)
  • Resources for literature reviews
  • Adding a referencing style to EndNote
  • Exporting from different databases
  • PRISMA Flow Diagram
  • Grey Literature

You can map out your search strategy in whatever way works for you.

Some people like lists and so plan their search strategy out in a grid-box or table format. Some people are more visual and like to draw their strategy out using a mind-map approach (either on paper or using mind-mapping software). Some people use sticky notes or Trello or a spreadsheet.

As long as your method enables you to search systematically and thoroughly, there's no need to change the way you work.

If your search strategies aren't well developed, or your current method doesn't lead to a good search, consider trying one of the other approaches to see if a change helps.

  • Search Strategy Document
  • Last Updated: Apr 12, 2024 11:57 AM
  • URL: https://libguides.derby.ac.uk/literature-reviews


Doing the literature review: Reporting your search strategy

  • The literature review: why?
  • Types of literature review
  • Selecting databases
  • Scoping search
  • Using a database thesaurus
  • Advanced search in a database
  • Citation information
  • Using a reference manager

Reporting your search strategy

  • Writing & structuring
  • When to stop

PRISMA, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, is a minimum set of items for reporting in systematic reviews and meta-analyses. Even when your goal isn't a systematic review or a meta-analysis, this set of items can help you to search in a more systematic way and to make your search more transparent and reproducible (also for your future self!).

The PRISMA 2020 Statement consists of a checklist and a flow diagram .

The checklist consists of 27 items. It guides you through the choices you have to make when doing a literature review. For example: what are the eligibility criteria (when do you include a paper or not), which databases have you used, and what was the search strategy? This will make your literature review more systematic and better structured, and it will be easier to write down the steps in your paper or article.

The flow diagram shows the ‘flow’ of information in the different phases of a systematic review, by showing the number of records identified, included and excluded, and the reasons for exclusions. In the academic literature you will find a lot of variants of the flow diagram.

In 2021, the PRISMA extension for searching was published: a checklist of 16 items to report literature searches in systematic reviews. They are more specific than the PRISMA Statement. 

Examples of the PRISMA flow diagram

  • The PRISMA 2020 flow diagram
  • In Frontiers in Public Health
  • In Current Opinion in Psychology

In the PRISMA flow diagram you summarize the study selection process. In the diagram you report:

Identification

  • The number of records found by searching databases. You can add the number of records found in each database used.
  • The number of records found via other methods, for example via reference lists, citing articles, or experts.

Screening

  • The number of records screened – at this stage, that means reading titles and abstracts.
  • The number of records excluded because they do not meet your inclusion criteria (for example, the age group of the participants in the study doesn’t match) or because they fit your exclusion criteria (for example, the article is in French).
  • The number of reports sought for retrieval, and the number of reports not retrieved.
  • The number of reports assessed for eligibility – and the number of reports excluded, with the reasons why.

Included

  • The number of studies included in the review.
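The counts in the diagram should be internally consistent at each stage. A small sketch with made-up numbers (not taken from any real review):

```python
# Illustrative PRISMA-style arithmetic; all numbers are invented.
identified_databases = 1200
identified_other = 45   # records from other methods flow through a
                        # separate, parallel stream in PRISMA 2020
duplicates_removed = 300

records_screened = identified_databases - duplicates_removed   # 900
records_excluded = 810                                         # titles/abstracts
reports_sought = records_screened - records_excluded           # 90
reports_not_retrieved = 5
reports_assessed = reports_sought - reports_not_retrieved      # 85

# Exclusions at full-text stage are reported with their reasons.
excluded_with_reasons = {"wrong population": 40,
                         "wrong study design": 25,
                         "not in English": 8}
studies_included = reports_assessed - sum(excluded_with_reasons.values())

print(studies_included)  # 12
```

If the numbers at one stage don't add up to the numbers at the next, readers (and peer reviewers) will notice - so it is worth checking this arithmetic before submitting.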


Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., . . . Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71

The following sources show examples of how the PRISMA flow diagram is applied in academic articles.

Flow diagram as reported in Wang et al.

Source: Wang, H., Buljac-Samardzic, M., Wang, W., van Wijngaarden, J., Yuan, S., & van de Klundert, J. (2021). What do we know about teamwork in Chinese hospitals? A systematic review. Frontiers in Public Health, 9, Article 735754. https://doi.org/10.3389/fpubh.2021.735754

PRISMA Flow diagram in Scheuplein and van Harmelen (2022)

Source: Scheuplein, M., & van Harmelen, A. (2022). The importance of friendships in reducing brain responses to stress in adolescents exposed to childhood adversity: A preregistered systematic review. Current Opinion in Psychology, 45, Article 101310.  https://doi.org/10.1016/j.copsyc.2022.101310

Quick links

Open Access

  • Selecting databases You have to decide which databases you will use in your literature search. To limit location bias, you have to use more than one database. Make an informed choice! In this module we list some database features you can take into account.
  • PROSPERO An international database of prospectively registered systematic reviews in health and social care, welfare, public health, education, crime, justice, and international development, where there is a health related outcome.


Email the Information skills team

  • Last Updated: Apr 24, 2024 9:23 AM
  • URL: https://libguides.eur.nl/informationskillslitreview

Developing NICE guidelines: the manual

NICE process and methods [PMG20] Published: 31 October 2014 Last updated: 17 January 2024

  • Tools and resources
  • 1 Introduction
  • 2 The scope
  • 3 Decision-making committees
  • 4 Developing review questions and planning the evidence review

5 Identifying the evidence: literature searching and evidence submission

  • 6 Reviewing evidence
  • 7 Incorporating economic evaluation
  • 8 Linking to other guidance
  • 9 Interpreting the evidence and writing the guideline
  • 10 The validation process for draft guidelines, and dealing with stakeholder comments
  • 11 Finalising and publishing the guideline recommendations
  • 12 Support for putting the guideline recommendations into practice
  • 13 Ensuring that published guidelines are current and accurate
  • 14 Updating guideline recommendations
  • 15 Appendices
  • Update information

NICE process and methods

  • 5.1 Introduction
  • 5.2 Searches during guideline recommendation scoping and surveillance
  • 5.3 Searches during guideline recommendation development
  • 5.4 Health inequalities and equality and diversity
  • 5.5 Quality assurance
  • 5.6 Documenting the search
  • 5.7 Re-running searches
  • 5.8 Calls for evidence from stakeholders
  • 5.9 References and further reading

The systematic identification of evidence is an essential step in developing NICE guideline recommendations.

This chapter sets out how evidence is identified at each stage of the guideline development cycle. It provides details of the systematic literature searching methods used to identify the best available evidence for NICE guidelines. It also provides details of associated information management processes including quality assurance (peer review), re‑running searches, and documenting the search process.

Our searching methods are informed by the chapter on searching & selecting studies in the Cochrane Handbook for Systematic Reviews of Interventions and the Campbell Collaboration's searching for studies guide . The Summarized Research in Information Retrieval for HTA (SuRe Info) resource also provides research-based advice on information retrieval for systematic reviews.

Our literature searches are designed to be systematic, transparent, and reproducible, and minimise dissemination bias. Dissemination bias may affect the results of reviews and includes publication bias and database bias.

We use search methods that balance recall and precision. When the need to reduce the number of studies requires pragmatic search approaches that may increase the risk of missing relevant studies, the context and trade-offs are discussed and agreed within the development team and made explicit in the reported search methods.

A flexible approach to identifying evidence is adopted, guided by the subject of the review question (see the chapter on developing review questions and planning the evidence review ), type of evidence sought, and the resource constraints of the evidence review. Often an evidence review will be an update of our earlier work, therefore the approach can be informed by previous searches and surveillance reviews (see the chapter on ensuring that published guidelines are current and accurate ).

Scoping searches

Scoping searches are top-level searches to support scope development. The purpose of the searches is to investigate the current evidence around the topic, and to identify any areas where an evidence review may be beneficial and any research gaps. The results of the searches are used to draft the scope of the upcoming guideline or update and to inform the discussions at scoping workshops (if held). Scoping searches do not aim to be exhaustive.

In some cases, scoping searches are not required when it is more efficient to use the surveillance review (see the chapter on the scope ).

The sources searched at scoping stage will vary according to the topic, type of review questions the guideline or update will seek to address, and type of evidence sought. Each scoping search is tailored using combinations of the following types of information:

NICE guidance and guidance from other organisations

policy and legislation guides

key systematic reviews and epidemiological reviews

economic evaluations

current practice data, including costs and resource use and any safety concerns

views and experiences of people using services, their family members or carers, or the public

other real-world health and social care data (for example audits, surveys, registries, electronic health records, patient-generated health data), if appropriate

summaries of interventions that may be appropriate, including any national safety advice

statistics (for example on epidemiology, natural history of the condition, service configuration or national prevalence data).

All scoping searches are fully documented and if new issues are identified at a scoping workshop, the search is updated. A range of possible sources considered for scoping searches is provided in the appendix on suggested sources for scoping .

Health inequalities searches

The purpose of these searches is to identify evidence to help inform the scope, health inequalities briefing, or the equality and health inequalities assessment (EHIA). They help identify key issues relevant to health inequalities on the topic, for example covering protected characteristics, groups experiencing or at risk of inequalities, or wider determinants of health.

The searches involve finding key data sources, such as routinely available national databases, audits or published reports by charities, non-governmental bodies, or government organisations.

Surveillance searches

Surveillance determines whether published recommendations remain current. The searches are tailored to the evidence required. This may include searches for new or updated policies, legislation, guidance from other organisations, or ongoing studies in the area covered by the evidence review.

If required, published evidence is identified by searching a range of bibliographic databases relevant to the topic. Surveillance searches generally use the same core set of databases used during the development of the original evidence review. A list of sources is given in the appendix on sources for evidence reviews .

The search approach and sources will vary between topics and may include:

population and intervention searches

focused searches for specific question areas

forward and backward citation searching.

Searches usually focus on randomised controlled trials and systematic reviews, although other study types will be considered where appropriate, for example for diagnostic questions.

The search period starts at either the end of the search for the last update of a guideline evidence review, or at the last search date for any previous surveillance check. Where appropriate, living evidence surveillance could be set up to continuously monitor the publication of new evidence over a period of time until impact reaches the threshold for actions. For more information on NICE guideline recommendation surveillance, see the chapter on ensuring that guideline recommendations are current and accurate and appendix on surveillance - interim principles for monitoring approaches of guideline recommendations .

Search protocols

Search protocols form part of the wider guideline review protocol (see the appendix on the review protocol template ). They pre‑define how the evidence is identified and provide a basis for developing the search strategies.

Once the final scope is agreed, the information specialist develops the search protocols and agrees them with the development team before the evidence search begins.

A search protocol includes the following elements:

approach to the search strategy, tailored to the review question and eligibility criteria

sources to be searched

plans to use any additional or alternative search techniques , when known at the protocol development stage, and the reasons for their use

details of any limits to be applied to the search

references to any key papers used to inform the search approach.

Searches are done on a mix of bibliographic databases, websites and other sources, depending on the subject of the review question and the type of evidence sought.

For most searches there are key sources that are prioritised, and other potentially relevant sources that can be considered. It is important to ensure adequate coverage of the relevant literature and to search a range of sources. However, there are practical limits to the number of sources that can be searched in the standard time available for an evidence review.

The selection of sources varies according to the requirements of the review question.

Clinical intervention sources

For reviews of the effectiveness of clinical interventions the following sources are prioritised for searching:

the Cochrane Central Register of Controlled Trials (CENTRAL)

the Cochrane Database of Systematic Reviews (CDSR)

Clinical safety sources

In addition to the sources searched for clinical interventions, the following should be prioritised for clinical safety review questions:

MHRA drug safety updates

National patient safety alerts .

Antimicrobial resistance sources

For reviews of antimicrobial resistance, the following sources should be prioritised:

UK Health Security Agency's English surveillance programme for antimicrobial utilisation and resistance (ESPAUR) report

UK Health Security Agency's antimicrobial resistance local indicators .

Cost-effectiveness sources

For reviews of cost effectiveness, economic databases are used in combination with general bibliographic databases, such as MEDLINE and Embase (see appendix G on sources for economic reviews ).

Economic evaluations of social care interventions may be published in journals that are not identified through standard searches. Targeted searches based on references of key articles and contacting authors can be considered to identify relevant papers.

Topic-specific sources

Some topics we cover may require the use of topic-specific sources. Examples include:

PsycINFO (psychology and psychiatry)

CINAHL (nursing and allied health professions)

ASSIA (Applied Social Sciences Index and Abstracts)

HealthTalk , and other sources to identify the views and experiences of people using services, carers and the public

Social Policy and Practice

Social Care Online

Sociological Abstracts

Transport Database

Greenfile (environmental literature)

HMIC (Health Management Information Consortium).

Searching for model inputs

Evidence searches may be needed to inform design-oriented conceptual models. Examples include precise searches to find representative NHS costs for an intervention or finding out the proportion of people offered an intervention who take up the offer.

Some model inputs, such as costs, use national sources such as national list prices or national audit data. In some cases, it may be more appropriate to identify costs from the academic literature. Further advice on methods to identify model inputs are also informed by Paisley (2016) and Kaltenhaler et al. (2011). See also the chapter on incorporating economic evaluation .

Real-world data

Information specialists can identify sources of real-world data (such as electronic health records, registries, and audits) for data analysts to explore further. The Health Data Research Innovation Gateway can be used to identify datasets. The NICE real-world evidence framework (2022) has additional guidance on searching for and selecting real-world data sources.

Grey literature

For some review questions, for example, where significant evidence is likely to be published in non-journal sources and there is a paucity of evidence in published journal sources, it may be appropriate to search for grey literature . Useful sources of grey literature include:

HMIC (Health Management Information Consortium)

TRIP database

Canadian Agency for Drugs and Technology in Health (CADTH) Grey Matters resource .

Committee members may also be able to suggest additional appropriate sources for grey literature.

A list containing potential relevant sources is provided in the appendix on sources for evidence reviews .

Developing search strategies

The approach to devising and structuring search strategies is informed by the review protocol. The PICO (population, intervention, comparator and outcome) or SPICE (setting, perspective, intervention, comparison, evaluation) frameworks may be used to structure a search strategy for intervention review questions. For other types of review questions, alternative frameworks may be more suitable.
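As an illustration, a PICO-style breakdown can be turned into a combined strategy by ORing the terms within each concept and ANDing the concepts together (the terms below are made up; the comparator often has no search terms of its own):

```python
# Hypothetical PICO breakdown for an intervention question.
pico = {
    "population": ["adolescent*", "teenager*"],
    "intervention": ["peer support", "peer mentor*"],
    "outcome": ["self-esteem", "self-worth"],
}

# OR within a concept, AND between concepts.
concept_lines = ["(" + " OR ".join(terms) + ")" for terms in pico.values()]
strategy = " AND ".join(concept_lines)
print(strategy)
# (adolescent* OR teenager*) AND (peer support OR peer mentor*)
#   AND (self-esteem OR self-worth)
```

The real strategy would normally be built line by line in the database interface rather than as one long string, but the OR-within, AND-between structure is the same.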

It is sometimes more efficient to conduct a single search for multiple review questions, rather than conducting a separate search for each question.

Some topics may not easily lend themselves to PICO- or SPICE-type frameworks. In these cases, it may be better to combine multiple, shorter searches rather than attempting to capture the entire topic using a single search. This is often referred to as multi-stranded searching.

In some instances, for example where the terminology around a topic is diffuse or ill defined, it may be difficult to specify the most appropriate search terms in advance. In these cases, an iterative approach to searching can be used.

In an iterative approach, searching is done in several stages, with each search considering the evidence that has already been retrieved (for example, see Booth et al. 2020 ). Searching in stages allows the reviewers to review the most relevant, high-quality information first and then make decisions for identifying additional evidence if needed.

Decisions to use iterative approaches are agreed by the development team and staff with responsibility for quality assurance because it can affect timelines.

Updating previous work

Where high-quality review-level evidence is available on a topic, the review team may choose to update or expand this previous work rather than duplicating the existing findings. In these cases, the original review searches are re-run and expanded to account for any differences in scope and inclusion criteria between the original review and the update.

Cost-effectiveness searches

There are several methods that can be used to identify economic evaluations:

All relevant review questions can be covered by a single search using the population search terms, combined with a search filter, to identify economic evidence.

The search strategies for individual review questions can be combined with search filters to identify economic evidence. If using this approach, it may be necessary to adapt strategies for some databases to ensure adequate sensitivity.

Economic evidence can be manually sifted while screening evidence from a general literature search (so no separate searches are required).

The rationale for the selected approach is recorded in the search protocol.

Where searches are needed to populate an economic model, these are usually done separately.

Identifying search terms

Search terms usually consist of a combination of subject headings and free‑text terms from the titles and abstracts of relevant references.

When identifying subject headings, variations in thesaurus and indexing terms for each database should be considered, for example MeSH (Medical Subject Headings) in MEDLINE and Emtree in Embase. Not all databases have indexing terms and some contain records that have not yet been indexed.

Free‑text terms may include synonyms, acronyms and abbreviations, spelling variants, old and new terminology, brand and generic medicine names, and lay and medical terminology.
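As a sketch, one concept line might combine a subject heading with free-text title/abstract terms like this (the Ovid-style `.ti,ab.` field codes and the heading shown are illustrative assumptions, not a prescribed format):

```python
# Hypothetical sketch of assembling one concept line of a strategy:
# subject headings ORed together with free-text variants.
subject_headings = ["exp Self Esteem/"]        # an example thesaurus term
free_text = ["self-esteem", "self-worth", "self-respect"]

# Free-text terms searched in title/abstract; quotes keep phrases intact.
free_text_clause = " OR ".join(f'"{t}".ti,ab.' for t in free_text)
concept_line = " OR ".join(subject_headings + [free_text_clause])
print(concept_line)
```

Each database needs its own version of such a line, because indexing terms and field codes differ between (for example) MEDLINE and Embase.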

For updates, previous search terms, including those from surveillance searches, are reviewed and used to inform new search terms. New or changed terms are identified, as well as any changes to indexing terms. This also applies when an existing review, for example a Cochrane review, is being updated to answer a review question.

Key studies can be a useful source of search terms, as can reports, guidelines, topic-specific websites, committee members and topic experts.

Some websites and databases have limited search functionality. It may be necessary to use fewer search terms or do multiple searches of the same resource with different search term combinations.

It may be helpful to use frequency analysis or text mining to develop the search-term strategy. Tools such as PubReMiner and Medline Ranker can help, either by highlighting search terms that might not otherwise be apparent, or by flagging terms of high value when exhaustive synonym searching is unfeasible or inadvisable.

Search limits

The application of limits to search strategies reflects the eligibility criteria in the review protocol. English-language limits, date limits, and the exclusion of conference abstracts and animal studies are typically applied as a matter of routine.

Search filters

A search filter is a string of search terms with known (validated) performance. When a particular study design is required for a review question, relevant search filters are usually applied to literature search strategies.

Other search filters relating to age, setting, geography, and health inequalities are also applied as relevant. The most comprehensive list of available search filters is the search filter resource of the InterTASC Information Specialists' SubGroup . This resource also includes critical appraisal tools, which are used for filter selection.

Economics-related filters

A variety of search filters of relevance to cost effectiveness are available. These include filters for economic evaluations, quality-of-life data, and cost-utility data. It may be necessary to use more than 1 filter to identify relevant data. In addition, it may be appropriate to add geographic search filters, such as those for the UK or Organisation for Economic Co-operation and Development (OECD) countries, to retrieve economic studies relevant to the UK or OECD (Ayiku et al. 2017, 2019, 2021).

Use of machine learning-based classifiers

Machine learning-based classification software has been developed for some study types (for example the Cochrane RCT classifier, Thomas et al. 2020 ). These classifiers apply a probability weighting to each bibliographical reference within a set of search results. The weighting relates to the reference's likelihood to be a particular study type, based on a model created from analysis of known, relevant papers. The weightings can then be used to either order references for screening or be used with a fixed cut-off value to divide a list of references into those more likely to be included, and those that can be excluded without manual screening.
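A minimal sketch of the ranking-and-cut-off idea described above (the scores are invented, not output from any real classifier):

```python
# Sketch: each reference carries the classifier's probability of being
# the target study type; a fixed cut-off splits the list.
references = [
    ("ref-001", 0.96), ("ref-002", 0.10),
    ("ref-003", 0.55), ("ref-004", 0.82),
]
cutoff = 0.50

# Rank by score so the most likely candidates are screened first.
ranked = sorted(references, key=lambda r: r[1], reverse=True)
to_screen = [ref for ref, score in ranked if score >= cutoff]
auto_excluded = [ref for ref, score in ranked if score < cutoff]

print(to_screen)      # ['ref-001', 'ref-004', 'ref-003']
print(auto_excluded)  # ['ref-002']
```

The choice of cut-off embodies the recall/precision trade-off: a lower cut-off excludes fewer records automatically but leaves more manual screening.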

We support the use of machine classifiers if their performance characteristics are known, and if they improve efficiency in the search and screening process. However, caution is needed when using classifiers, because they may not be as effective if used on data that is different to the type of data for which they were originally developed. For example, the Cochrane RCT classifier is reported to have over 99% recall for health studies but showed "unacceptably low" recall for educational research ( Stansfield et al. 2022 ).

Priority screening, a type of machine classifier that orders references for manual sifting based on previous sifting decisions, is considered in the chapter on reviewing evidence .

Additional search techniques

Additional search techniques are used alongside database searching when it is known, or reasonably likely, that relevant evidence is not indexed in bibliographic databases, or when it will be difficult to retrieve relevant evidence from databases in a way that adequately balances recall and precision. Additional search techniques include forward and backward citation searching, journal hand-searches and contacting experts and stakeholders.

Existing reviews may provide an additional source of primary studies, with reference lists being used as an indirect method of identifying primary research.

Various tools, including Citationchaser and Web of Science, are available to speed up the process of citation searching. These may not be as comprehensive as manual reference list checking (due to limitations of the underlying data sources), but the trade-off in terms of speed is generally acceptable.

All search techniques should follow the same principles of transparency, rigour and reproducibility as other search methods.

If possible, additional search techniques should be considered at the outset and documented in the search protocol. They should also be documented in the supporting appendices for the final evidence review.

All searches aim to be inclusive. This may mean not specifying any population groups.

Searches should avoid inadvertently excluding relevant groups. For example, if the population group is older people, a search for older people should pick up subpopulations such as disabled older people.

Additional search strategies may be needed to target evidence about people with protected characteristics or people experiencing or at risk from other inequalities.

Searches may need to be developed iteratively to ensure coverage of the health inequalities issues or evidence on the impacts of an intervention on equality.

Appropriate terminology for the search should be used, considering how language has evolved.

Quality assuring the literature search is an important step in developing guideline recommendations. Studies have shown that errors in search strategies do occur.

For each search (including economic searches), the initial MEDLINE search strategy is quality assured by a second information specialist. A standardised checklist, based on the PRESS peer review of electronic search strategies: 2015 guideline statement, is used to ensure clarity and consistency when quality assuring search strategies.

The information specialist carrying out the quality assurance process also considers how appropriate the overall search approach is to the parameters of the evidence review (for example, the time available to carry out the review). The quality assurance comments are recorded and the information specialist who conducted the search should respond to the comments and revise the search strategy as needed.

Search strategy translations across the remaining databases are also checked by a second information specialist to ensure that the strategies have been adapted appropriately, in accordance with the interfaces and search functionality of the sources used.

Details of the evidence search are included as appendices to the individual evidence reviews. They are published for consultation alongside the draft evidence review and included in the final version.

Records are kept of the searches undertaken during guideline recommendation development for all review questions to ensure that the process for identifying the evidence is transparent and reproducible.

We use PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews to inform search reporting. The search documentation is an audit trail that allows the reader to understand both the technical aspects of what was done (such as which sources were searched; what platform was used and on what date; any deviations from the original search protocol) and the underlying rationale for the search approach where this may not be immediately apparent.

Documenting the search begins with creating the search protocol (see the section on search protocols). If using an iterative or emergent stepped approach, initial search strategies, key decision points and the reasons for subsequent search steps are clearly documented in the search protocol and final evidence review. When using a proprietary search engine such as Google, whose underlying algorithm adapts to different users, the search is reported in a way that should allow the reader to understand what was done.

Searches undertaken to identify evidence for each review question (including economics searches) may be re-run before consultation or before publication. For example, searches are re‑run if the evidence changes quickly, there is reason to believe that substantial new evidence exists, or the development time is longer than usual.

A decision to re‑run searches is taken by the development team and staff with responsibility for quality assurance.

If undertaken, searches are re‑run at least 6 to 8 weeks before the final committee meeting before consultation.

If evidence is identified after the last cut‑off date for searching but before publication, a judgement on its impact is made by the development team and staff with responsibility for quality assurance. In exceptional circumstances, this evidence can be considered if its impact is judged as potentially substantial.

In some topic areas or for some review questions, staff with responsibility for quality assurance, the development team or the committee may believe that there is relevant evidence in addition to that identified by the searches. In these situations, the development team may invite stakeholders, and possibly also other relevant organisations or individuals with a significant role or interest (see expert witnesses in the section on other attendees at committee meetings in the chapter on decision-making committees), to submit evidence. A call for evidence is issued directly to registered stakeholders on the NICE website. Examples and details of the process are included in the appendix on call for evidence and expert witnesses. Confidential information should be kept to an absolute minimum.

Ayiku L, Levay P, Hudson T et al. (2017) The medline UK filter: development and validation of a geographic search filter to retrieve research about the UK from OVID medline. Health Information and Libraries Journal 34(3): 200–216

Ayiku L, Levay P, Hudson T et al. (2019) The Embase UK filter: validation of a geographic search filter to retrieve research about the UK from OVID Embase. Health Information and Libraries Journal 36(2): 121–133

Ayiku L, Hudson T, Williams C et al. (2021) The NICE OECD countries' geographic search filters: Part 2-validation of the MEDLINE and Embase (Ovid) filters. Journal of the Medical Library Association 109(4): 583–9

Booth A, Briscoe S, Wright JM (2020) The "realist search": a systematic review of current practice and reporting. Research Synthesis Methods 11: 14–35

Canadian Agency for Drugs and Technologies in Health (2019) Grey Matters: a practical tool for searching health-related grey literature [online; accessed 24 July 2023]

Glanville J, Lefebvre C, Wright K (editors) (2008, updated 2017) The InterTASC Information Specialists' Subgroup Search Filters Resource [online; accessed 24 July 2023]

Kaltenthaler E, Tappenden P, Paisley S (2011) NICE DSU Technical support document 13: identifying and reviewing evidence to inform the conceptualisation and population of cost-effectiveness models [online; accessed 24 July 2023]

Kugley S, Wade A, Thomas J et al. (2017) Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: The Campbell Collaboration

Lefebvre C, Glanville J, Briscoe S et al. Chapter 4: Searching for and selecting studies. In: Higgins JPT, Thomas J, Cumpston M et al. (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.2 (updated February 2021). Cochrane, 2021

McGowan J, Sampson M, Salzwedel DM et al. (2016) PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement. Journal of Clinical Epidemiology 75: 40–6

National Institute for Health and Care Excellence (2022) NICE real-world evidence framework [online; accessed 24 July 2023]

Paisley S (2016) Identification of key parameters in decision-analytic models of cost-effectiveness: a description of sources and a recommended minimum search requirement. Pharmacoeconomics 34: 597–8

Rethlefsen M, Kirtley S, Waffenschmidt S et al. (2021) PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews. Systematic Reviews 10: 39

Stansfield C, Stokes G, Thomas J (2022) Applying machine classifiers to update searches: analysis from two case studies. Research Synthesis Methods 13: 121–33

Summarized research for Information Retrieval in HTA (SuRe Info) [online; accessed 24 July 2023]

  • Open access
  • Published: 17 May 2024

Ferroptosis is a protective factor for the prognosis of cancer patients: a systematic review and meta-analysis

Shen Li, Kai Tao, Hong Yun, Jiaqing Yang, Yuanling Meng, Fan Zhang & Xuelei Ma

BMC Cancer volume 24, Article number: 604 (2024)


Cancer is a leading global cause of death. Conventional cancer treatments like surgery, radiation, and chemotherapy have associated side effects. Ferroptosis, a nonapoptotic and iron-dependent form of cell death, has been identified and differs from other cell death types. Research has shown that ferroptosis can both promote and inhibit tumor growth, which may have prognostic value. Given the unclear role of ferroptosis in cancer biology, this meta-analysis aims to investigate its impact on cancer prognosis.

This systematic review and meta-analysis conducted searches on PubMed, Embase, and the Cochrane Library databases. Eight retrospective studies were included to compare the impact of ferroptosis inhibition and promotion on cancer patient prognosis. The primary endpoints were overall survival (OS) and progression-free survival (PFS). Studies lacking clear descriptions of hazard ratios (HR) and 95% confidence intervals for OS and PFS were excluded. Random-effects meta-analysis and meta-regression were performed on the included study data to assess prognosis differences between the experimental and control groups. Meta-analysis results included HR and 95% confidence intervals.

This study has been registered with PROSPERO (CRD42023463720) on September 27, 2023.

A total of 2,446 articles were screened, resulting in the inclusion of 5 articles with 938 eligible subjects; these yielded 8 studies for meta-analysis. After bias exclusion, the meta-analysis demonstrated that promoting ferroptosis could increase cancer patients' overall survival (HR 0.31, 95% CI 0.21–0.44) and progression-free survival (HR 0.26, 95% CI 0.16–0.44) compared to ferroptosis inhibition. The results showed moderate heterogeneity, suggesting that biological activities promoting cancer cell ferroptosis are beneficial for cancer patients' prognosis.

Conclusions

This systematic review and meta-analysis demonstrated that the promotion of ferroptosis yields substantial benefits for cancer prognosis. These findings underscore the untapped potential of ferroptosis as an innovative anti-tumor therapeutic strategy, capable of addressing challenges related to drug resistance, limited therapeutic efficacy, and unfavorable prognosis in cancer treatment.

Registration

CRD42023463720.


Cancer has progressively become the world's leading cause of mortality, imposing substantial disease burdens. According to GLOBOCAN 2020, the global cancer burden is projected to reach 28.4 million cases in 2040 [1]. Approximately one in every five men and one in every six women will develop cancer, with one in eight men and one in ten women succumbing to cancer before reaching 75 years of age [2]. It is estimated that over half of all cancer-related deaths (57.3%) and nearly half of all new cancer cases (48.4%) are concentrated in Asia [2]. Presently, common treatments for cancer encompass surgery, radiation, and chemotherapy [3, 4]. However, these approaches may harm normal cells and result in significant side effects, including hepatotoxicity, ototoxicity, cardiotoxicity, nausea, vomiting, and more [5, 6]. Despite advancements in therapy, cancer remains the second leading global cause of death, following ischemic heart disease, and is projected to become the leading cause by 2060 [7].

In 2012, a nonapoptotic, iron-dependent form of cell death initiated by the oncogenic Ras-selective lethal small molecule erastin was termed "ferroptosis" [8]. Ferroptosis exhibits distinct morphological characteristics compared to other regulated cell death forms. Notably, ferroptosis lacks the hallmark signs of apoptosis, such as chromatin condensation and apoptotic bodies, instead manifesting as shrunken mitochondria, reduced mitochondrial cristae, and an accumulation of lipid peroxides [8, 9, 10]. Its underlying mechanism also differs from other regulated cell death processes. Ferroptosis is inhibited by the system xc−/GSH/GPX4 pathway and is induced by the accumulation of phospholipid hydroperoxides, rather than by cell death executioner proteins such as caspases and mixed lineage kinase domain-like protein, among others [9, 11].

An increasing body of research has explored the role of ferroptosis in tumors, suggesting its dual role in tumor promotion and inhibition. Various experimental agents, including erastin, RSL3, and drugs such as sorafenib, sulfasalazine, statins, and artemisinin, along with ionizing radiation and cytokines like IFN-γ and TGF-β1, can induce ferroptosis and inhibit tumors [ 12 ]. However, emerging evidence hints at ferroptosis potentially promoting tumor growth by triggering inflammation-associated immunosuppression within the tumor microenvironment [ 12 , 13 ]. Numerous studies have also indicated the prognostic value of ferroptosis [ 14 , 15 , 16 , 17 , 18 ].

Given the unclear role of ferroptosis in cancer biology, we conducted this meta-analysis to investigate its impact on cancer prognosis.

Search strategy and selection criteria

This systematic review and meta-analysis was conducted following PRISMA guidelines. PubMed, EMBASE, and the Cochrane Library were systematically searched from their inception until February 27, 2024, with no language restrictions. The search strategy included the following terms: (ferroptosis or oxytosis) AND (Neoplasm or Tumor or Tumors or Neoplasia or Cancer or Cancers or Malignant Neoplasm or Malignancy or Malignant Neoplasms or Neoplasms, Malignant or Benign Neoplasms or Neoplasm, Benign or Malignancies or Neoplasm, Malignant or Benign Neoplasm or Neoplasms, Benign or Neoplasias) AND (prognosis or Prognoses or Prognostic Factors or Prognostic Factor or Factor, Prognostic or Factors, Prognostic) as free text.
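The Boolean structure of the search above, where each concept is an OR-group of synonyms and the concept groups are ANDed together, can be sketched programmatically. This is a minimal illustration only; the term lists below are abbreviated, not the full strategy used in the review.

```python
# Illustrative sketch: assembling a Boolean search string from concept term lists.
# Each concept is an OR-group of synonyms; concept groups are combined with AND.
# Term lists are abbreviated for brevity.

ferroptosis_terms = ["ferroptosis", "oxytosis"]
cancer_terms = ["Neoplasm", "Tumor", "Cancer", "Malignancy"]
prognosis_terms = ["prognosis", "Prognostic Factor"]

def or_group(terms):
    """Join the synonyms for one concept with OR, wrapped in parentheses."""
    return "(" + " or ".join(terms) + ")"

def build_query(*concept_groups):
    """AND together the OR-groups, one per concept."""
    return " AND ".join(or_group(g) for g in concept_groups)

query = build_query(ferroptosis_terms, cancer_terms, prognosis_terms)
print(query)
# (ferroptosis or oxytosis) AND (Neoplasm or Tumor or Cancer or Malignancy) AND (prognosis or Prognostic Factor)
```

Keeping the term lists separate from the combining logic makes it easy to translate the same conceptual strategy across databases with different syntax, as the guidance earlier in this document recommends.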

The objective of this study is to investigate and elucidate the impact of ferroptosis on cancer patients' prognosis. We compared the differences in prognosis between cancer patients with genes that promote ferroptosis and those with genes that inhibit it. The primary endpoints of the study are HRs and 95% confidence intervals for OS and PFS. It is important to note that the upregulation and downregulation of ferroptosis-related genes are not used as criteria for grouping; rather, the experimental and control groups are divided based on the ultimate impact of genes on ferroptosis. This meta-analysis was limited to studies conducted in humans. Participant data from cohort studies were extracted and analyzed. The collected information included the first author, study period, country of study, study size, ferroptosis-related gene, the effect of genes on ferroptosis, type of cancer, HR, and 95% confidence intervals for OS and PFS.

Both exclusion and inclusion criteria were pre-specified. Studies demonstrating a relationship between prognosis and ferroptosis in cancer patients were selected. Inclusion criteria were as follows: (1) Articles were limited to those involving human samples only. (2) All cancer patients had been diagnosed by pathological evidence. (3) Expression of ferroptosis-related genes had been assessed through immunohistochemistry from tumor specimens, conducted according to standard protocols. (4) All patients had been subject to follow-up, and results had been reported. Exclusion criteria encompassed: (1) Duplicate articles. (2) Article types other than original research, such as reviews, meta-analyses, letters, or editorial comments. (3) Studies involving cellular or animal-based research. (4) Patients with multiple primary cancers. The literature search, study selection, and data extraction were independently performed by Shen Li and Kai Tao, with any discrepancies reviewed and resolved by another author, Xuelei Ma, through consensus.

Data analysis

We employed Stata 14 for statistical analysis. The specific methods were as follows: (1) We collected and analyzed the HRs for OS and PFS reported in the included studies. The results were visualized using forest plots to illustrate the differences in prognosis between cancer patients whose genes promote ferroptosis and those whose genes inhibit it, thereby demonstrating the impact of ferroptosis on the prognosis of cancer patients. (2) Heterogeneity was assessed with the I² statistic: low heterogeneity was defined as an I² value less than or equal to 25%, moderate heterogeneity as between 25% and 75%, and high heterogeneity as exceeding 75%. (3) To evaluate potential publication bias, we employed funnel plots and conducted Egger tests; a p-value greater than 0.05 in the Egger test indicates no significant bias. (4) Sensitivity analysis was conducted to identify any studies with significant influence on the overall results. (5) Meta-regression was conducted to assess the potential influence of covariates on the outcome [19, 20]. We subjected the included covariates to regression testing, including country of study, ferroptosis-related gene, the effect of genes on ferroptosis, and type of cancer, to explore possible sources of heterogeneity and reduce potential bias. This study has been registered with PROSPERO (CRD42023463720).
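The random-effects pooling and I² calculation described above can be illustrated with a minimal DerSimonian-Laird sketch. The review used Stata 14; the Python below is an independent illustration of the same arithmetic, and the hazard ratios and confidence intervals in the example are made up, not data from the included studies.

```python
import math

def random_effects_pool(hrs, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    Effects are pooled on the log scale; each study's standard error is
    back-calculated from its 95% CI: se = (ln(upper) - ln(lower)) / (2 * 1.96).
    Returns the pooled HR, its 95% CI, and the I^2 statistic in percent
    (<=25% low, 25-75% moderate, >75% high heterogeneity).
    """
    y = [math.log(h) for h in hrs]
    se = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in zip(ci_lows, ci_highs)]
    w = [1 / s**2 for s in se]

    # Fixed-effect estimate and Cochran's Q
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1

    # I^2: share of total variability attributable to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Between-study variance tau^2 (method of moments)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights, pooled estimate and 95% CI (back on the HR scale)
    w_star = [1 / (s**2 + tau2) for s in se]
    y_pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_pooled = 1 / math.sqrt(sum(w_star))
    hr = math.exp(y_pooled)
    ci = (math.exp(y_pooled - 1.96 * se_pooled), math.exp(y_pooled + 1.96 * se_pooled))
    return hr, ci, i2

# Hypothetical study-level HRs with 95% CIs (illustration only)
hr, (lo, hi), i2 = random_effects_pool(
    hrs=[0.20, 0.70, 0.30], ci_lows=[0.10, 0.45, 0.15], ci_highs=[0.40, 1.09, 0.60]
)
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.1f}%")
```

A pooled HR below 1 with a CI excluding 1 corresponds to the "protective factor" interpretation used throughout the review.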

Bias analysis and quality assessment

Three researchers (LS, YJQ and TK) independently conducted a bias risk assessment following the Cochrane Bias Assessment Handbook. Considering that all included studies were retrospective articles, this study employed the Cochrane bias risk tool, which comprises five domains, to evaluate the risk of bias in each included study: (1) selection bias, (2) measurement bias, (3) data integrity bias, (4) outcome selection bias, and (5) other biases. Each researcher independently assessed the risk as low, high, or unclear for each domain. In cases of any uncertainty, Dr. Xuelei Ma made the final judgment. Based on the risk of bias, the quality of evidence was categorized as very low, low, moderate, or high. The quality assessment of this study adheres to the GRADE system.

We identified a total of 2,446 articles through literature searches, with 6 articles from the Cochrane Library and 2,440 from other databases, including PubMed and Embase. We excluded 962 duplicate articles. Among the remaining literature, we excluded 1,477 articles after abstract screening as they did not align with our research objectives. Subsequently, we conducted full-text reviews and eligibility assessments on the remaining 7 articles. Ultimately, we included 5 articles in our analysis. The review process was conducted independently by LS, TK and MYL, with a third reviewer, Xuelei Ma, reassessing articles with uncertain eligibility. The process is illustrated in Fig.  1 .

Fig. 1 Study selection

Among the five clinical articles, all studies were conducted in Asia: 2 in China (40%) and 3 in Japan (60%). The research covered various cancer types, including gastric cancer and esophageal cancer of the digestive system, epithelial ovarian cancer of the female reproductive system, and osteosarcoma originating from undifferentiated bone fibrous tissue. In terms of age reporting, the median age of patients with epithelial ovarian cancer was 52 years, while osteosarcoma patients had an average age of 30.2 years, consistent with the characteristics of these two diseases. Three of the five articles included two studies each, resulting in a total of 8 studies. Glutathione peroxidase 4 (GPX4) was the most studied ferroptosis-regulating gene (4/8, 50%). Like most other genes examined, GPX4 inhibits ferroptosis by suppressing lipid peroxidation. In contrast, heme oxygenase 1 (HMOX1; 1/8, 12.5%), by catalyzing the degradation of heme into divalent iron ions, biliverdin, and CO, can promote ferroptosis by increasing the labile iron pool (LIP). It is worth noting that, as shown in Table 1, only 3 studies (3/8, 37.5%) reported cut-off values; we discuss the importance of this missing data in the Discussion section.

Main outcome

A total of 8 studies reported HRs and 95% confidence intervals for OS. The forest plot indicates that the ferroptosis-promoting group had better OS than the ferroptosis-inhibiting group (HR 0.43, 95% CI 0.22–0.83), with high heterogeneity (I² = 87.8%, 95% CI 45.6%–94.7%) (Fig. 2). Sensitivity analysis suggested that the study by Song et al. might introduce significant bias. After excluding this study and reanalyzing the data, the ferroptosis-promoting group still had better OS than the ferroptosis-inhibiting group (HR 0.31, 95% CI 0.21–0.44), with decreased, moderate heterogeneity (I² = 58.1%, 95% CI 0%–82.7%) (Fig. 3).

Fig. 2 Forest plot of the pooled overall survival between the ferroptosis-promoting group and the ferroptosis-inhibiting group

Fig. 3 Forest plot of the pooled overall survival between the ferroptosis-promoting group and the ferroptosis-inhibiting group after excluding one study with a large bias

Six studies reported HRs and 95% CIs for PFS. The ferroptosis-promoting group had better PFS than the ferroptosis-inhibiting group (HR 0.47, 95% CI 0.17–1.30), although the difference was not statistically significant, and heterogeneity was high (I² = 93.2%, 95% CI 43.5%–97.5%) (Fig. 4). As with the OS results, sensitivity analysis suggested that the study by Song et al. might introduce significant bias. After excluding this study, the ferroptosis-promoting group had significantly better PFS than the ferroptosis-inhibiting group (HR 0.26, 95% CI 0.16–0.44), with moderate heterogeneity (I² = 69.7%, 95% CI 0%–89.6%) (Fig. 5).

Fig. 4 Forest plot of the pooled progression-free survival between the ferroptosis-promoting group and the ferroptosis-inhibiting group

Fig. 5 Forest plot of the pooled progression-free survival between the ferroptosis-promoting group and the ferroptosis-inhibiting group after excluding one study with a large bias

Separate meta-regression analyses for OS and PFS results revealed that covariates such as country of study, ferroptosis-related gene, the effect of genes on ferroptosis, and type of cancer had no influence on the results.

Risk of bias in studies

All included studies underwent a risk of bias assessment following the guidelines recommended by the Cochrane Handbook, which covers five bias domains. We classified 2 studies as having low bias risk (2/8, 25%), indicating low risk across all domains. Five studies exhibited some lower risk (5/8, 62.5%), suggesting mild uncertainty in at least one domain but no definite high risk. One study had a high risk (1/8, 12.5%), indicating high bias risk in more than one domain. No studies presented a higher overall risk. The reasons for non-low bias risk were predominantly incomplete outcome data (9/14, 64%). In several lower-risk studies, the uncertain bias in other domains was due to unreported cut-off values; different cut-off values can introduce a degree of bias into study results and may affect their interpretation. Moreover, we excluded the study by Song et al., which may have introduced a large bias because its results were not reported clearly or correctly and had low credibility. We conducted a thorough review of its experimental procedures and the relevant sensitivity analysis, concluding that it could affect the overall bias risk of the meta-analysis. After excluding the study by Song et al., the Egger tests for OS and PFS had p-values of 0.20 and 0.205, respectively, indicating no significant publication bias.
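The Egger test reported above regresses each study's standardized effect on its precision and asks whether the regression intercept deviates from zero, which would suggest funnel plot asymmetry. A minimal sketch follows; the inputs are synthetic, and for brevity it returns the intercept's t-statistic rather than a p-value (which would come from a t distribution with n − 2 degrees of freedom).

```python
import math

def egger_test(effects, std_errors):
    """Egger regression for small-study effects / publication bias.

    Regress the standardized effect (effect / SE) on precision (1 / SE)
    by ordinary least squares; an intercept far from zero suggests funnel
    plot asymmetry. Returns (intercept, t_statistic of the intercept).
    """
    x = [1 / s for s in std_errors]                    # precision
    z = [e / s for e, s in zip(effects, std_errors)]   # standardized effect
    n = len(x)
    xbar, zbar = sum(x) / n, sum(z) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z)) / sxx
    intercept = zbar - slope * xbar

    # Residual variance and the standard error of the intercept
    resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
    s2 = sum(r**2 for r in resid) / (n - 2)
    se_intercept = math.sqrt(s2 * (1 / n + xbar**2 / sxx))
    return intercept, intercept / se_intercept

# Hypothetical log-HRs and standard errors (illustration only, not the review's data)
b0, t = egger_test(
    effects=[-1.2, -0.8, -1.4, -0.9, -1.1],
    std_errors=[0.35, 0.30, 0.47, 0.28, 0.40],
)
```

In practice the review used Stata's implementation; this sketch only shows why the test needs several studies of varying precision to be informative.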

To the best of our knowledge, this systematic review represents the pioneering effort to explore the correlation between ferroptosis and cancer prognosis. Through a comprehensive meta-analysis, we aimed to determine whether ferroptosis influences cancer prognosis and its potential applicability as a therapeutic target. The hallmarks of tumorigenesis encompass the evasion of regulated cell death, unbridled proliferation, and cellular immortality [26, 27]. The resistance exhibited by cancer cells poses a formidable challenge in cancer treatment, as conventional chemotherapy agents often fall short in inducing effective cell death [28]. Ferroptosis emerges as a promising strategy to overcome this resistance [27]. Nevertheless, ferroptosis assumes a dual role in the context of anti-tumor immunity. CD8+ T cells, for instance, can secrete interferon-γ to promote ferroptosis in cancer cells, while ferroptotic cancer cells can reciprocally enhance the maturation of dendritic cells and macrophage efficiency [13]. However, some T helper cell subsets and CD8+ T cells can themselves undergo ferroptosis, thereby tempering the overall impact of ferroptosis on anti-tumor immunity [13].

In our study, we found that the promotion of ferroptosis in cancer cells serves as a protective factor for cancer patient prognosis. In our analysis of OS, involving eight studies, patients in the group where ferroptosis is promoted exhibited improved overall survival compared to the group where it is inhibited (HR 0.43, 95% CI 0.22–0.83). Following a sensitivity analysis, we observed certain biases in the study conducted by Song et al. Upon a thorough review, we discovered that this study found ZFP36 to be expressed in both tumor and para-carcinoma tissues, with higher expression in para-carcinoma tissues. Elevated ZFP36 expression inhibits ferroptosis, consequently leading to fewer instances of ferroptosis in the tumor-adjacent tissue and better patient prognoses. In the other included studies, however, ferroptosis-regulating genes were overexpressed or suppressed in tumor tissue rather than tumor-adjacent tissue. The low accuracy of the results from the study by Song et al. could also introduce bias, so we excluded this article to assure the quality of our results. Upon its exclusion, patients in the group where ferroptosis is promoted demonstrated better overall survival (HR 0.31, 95% CI 0.21–0.44), with reduced study heterogeneity and a higher p-value in the Egger test. We look forward to future research that directly investigates the role of ZFP36 in tumor tissue and whether it presents contrasting effects on patient prognosis. In our analysis of PFS, after sensitivity analysis, forest plots indicated that patients in the group where ferroptosis is promoted exhibited improved progression-free survival compared to the group where it is inhibited (HR 0.26, 95% CI 0.16–0.44).

The heterogeneity could have arisen from the absence of cut-off values, the different countries, the differences in ferroptosis-related genes, the types of cancer, and the effect of genes on ferroptosis. After conducting meta-regression, we did not identify country, ferroptosis-related gene, type of cancer, or the effect of genes as covariates influencing the results. Because 5 of the 8 included studies did not report a cut-off value, we could not include this variable in the meta-regression, which may leave some heterogeneity unexplained.

As the pioneering meta-analysis investigating the impact of ferroptosis on cancer patient prognosis, we are pleased to find that it serves as a protective factor for cancer patient prognoses. Ferroptosis, as a novel biological behavior distinct from apoptosis, holds promise as a potential approach in cancer treatment. Numerous key genes in the ferroptosis pathways have been identified, and if ferroptosis proves to be an effective cancer treatment modality, targeting these genes would hold significant clinical relevance. These potential targets include down-regulation of GPX4, ZFP36, SLC7A11, and FSP1 expression and up-regulation of HMOX1 expression. Moving forward, there is promising potential to translate interventions targeting these factors into practical clinical therapy, an exciting new avenue in cancer bio-therapy.

Despite our rigorous article selection, feature extraction, and analysis, this study has certain limitations. Firstly, we require more clinical research, whether retrospective or randomized controlled studies, to substantiate the favorable impact of promoting ferroptosis in cancer cells on the prognosis of cancer patients, including both OS and PFS, both of which are pivotal for patients’ quality of life. Secondly, the cut-off value is a critical parameter; regrettably, many of the articles we included did not report this metric, making it challenging to assess the extent of bias in prognosis results due to cut-off value variations. We also hope that future related meta-analyses will delve further into the influence of cut-off values.

This meta-analysis, by comparing the promotion and inhibition of ferroptosis in cancer patients, reveals that fostering ferroptosis in cancer cells is a protective factor for cancer patient prognosis. Ferroptosis-related genes hold the potential to become novel biomarkers for targeted therapy, and promoting ferroptosis in cancer cells could represent a new and effective approach to cancer treatment.

Availability of data and materials

To ensure transparency and reproducibility of the study, all data generated or analyzed during this study are included in this published article and its supplementary information files. The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. Please note that data sharing is intended for academic research purposes only and not for other purposes.

Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71(3):209–49.


Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424.

Wang Y, Jiang H, Fu L, Guan L, Yang J, Ren J, et al. Prognostic value and immunological role of PD-L1 gene in pan-cancer. BMC Cancer. 2024;24(1):20.


Zaimy MA, Saffarzadeh N, Mohammadi A, Pourghadamyari H, Izadi P, Sarli A, et al. New methods in the diagnosis of cancer and gene therapy of cancer based on nanoparticles. Cancer Gene Ther. 2017;24(6):233–43.


Oun R, Moussa YE, Wheate NJ. The side effects of platinum-based chemotherapy drugs: a review for chemists. Dalton Trans. 2018;47(19):6645–53.

Wang K, Tepper JE. Radiation therapy-associated toxicity: etiology, management, and prevention. CA Cancer J Clin. 2021;71(5):437–54.

Mattiuzzi C, Lippi G. Current cancer epidemiology. J Epidemiol Glob Health. 2019;9(4):217–22.

Dixon SJ, Lemberg KM, Lamprecht MR, Skouta R, Zaitsev EM, Gleason CE, et al. Ferroptosis: an iron-dependent form of nonapoptotic cell death. Cell. 2012;149(5):1060–72.


Jiang X, Stockwell BR, Conrad M. Ferroptosis: mechanisms, biology and role in disease. Nat Rev Mol Cell Biol. 2021;22(4):266–82.

Stockwell BR, Friedmann Angeli JP, Bayir H, Bush AI, Conrad M, Dixon SJ, et al. Ferroptosis: a regulated cell death nexus linking metabolism, redox biology, and disease. Cell. 2017;171(2):273–85.

Galluzzi L, Vitale I, Aaronson SA, Abrams JM, Adam D, Agostinis P, et al. Molecular mechanisms of cell death: recommendations of the nomenclature committee on cell death 2018. Cell Death Differ. 2018;25(3):486–541.

Chen X, Kang R, Kroemer G, Tang D. Broadening horizons: the role of ferroptosis in cancer. Nat Rev Clin Oncol. 2021;18(5):280–96.

Lei G, Zhuang L, Gan B. Targeting ferroptosis as a vulnerability in cancer. Nat Rev Cancer. 2022;22(7):381–96.

Chen H, He Y, Pan T, Zeng R, Li Y, Chen S, et al. Ferroptosis-related gene signature: a new method for personalized risk assessment in patients with diffuse large B-cell lymphoma. Pharmgenomics Pers Med. 2021;14:609–19.

PubMed   PubMed Central   Google Scholar  

Hsieh PL, Chao SC, Chu PM, Yu CC. Regulation of ferroptosis by non-coding RNAs in head and neck cancers. Int J Mol Sci. 2022;23(6):3142.

Liu Y, Duan C, Dai R, Zeng Y. Ferroptosis-mediated crosstalk in the tumor microenvironment implicated in cancer progression and therapy. Front Cell Dev Biol. 2021;9:739392.

Yu S, Jia J, Zheng J, Zhou Y, Jia D, Wang J. Recent progress of ferroptosis in lung diseases. Front Cell Dev Biol. 2021;9:789517.

Zeng H, You C, Zhao L, Wang J, Ye X, Yang T, et al. Ferroptosis-associated classifier and indicator for prognostic prediction in cutaneous melanoma. J Oncology. 2021;2021:3658196.

Article   Google Scholar  

Wang Y, Fu L, Lu T, Zhang G, Zhang J, Zhao Y, et al. Clinicopathological and prognostic significance of long non-coding RNA MIAT in human cancers: a review and meta-analysis. Front Genet. 2021;12:729768.

Wang Y, Jiang X, Zhang D, Zhao Y, Han X, Zhu L, et al. LncRNA DUXAP8 as a prognostic biomarker for various cancers: a meta-analysis and bioinformatics analysis. Front Genet. 2022;13:907774.

Shishido Y, Amisaki M, Matsumi Y, Yakura H, Nakayama Y, Miyauchi W, et al. Antitumor effect of 5-aminolevulinic acid through ferroptosis in esophageal squamous cell carcinoma. Ann Surg Oncol. 2021;28(7):3996–4006.

Song P, Xie Z, Chen C, Chen L, Wang X, Wang F, et al. Identification of a novel iron zinc finger protein 36 (ZFP36) for predicting the overall survival of osteosarcoma based on the Gene Expression Omnibus (GEO) database. Annals of translational medicine. 2021;9(20):1552.

Sugezawa K, Morimoto M, Yamamoto M, Matsumi Y, Nakayama Y, Hara K, et al. GPX4 regulates tumor cell proliferation via suppressing ferroptosis and exhibits prognostic significance in gastric cancer. Anticancer Res. 2022;42(12):5719–29.

Wu X, Shen S, Qin J, Fei W, Fan F, Gu J, et al. High co-expression of SLC7A11 and GPX4 as a predictor of platinum resistance and poor prognosis in patients with epithelial ovarian cancer. BJOG. 2022;129 Suppl 2(Suppl 2):40–9.

Miyauchi W, Shishido Y, Matsumi Y, Matsunaga T, Makinoya M, Shimizu S, et al. Simultaneous regulation of ferroptosis suppressor protein 1 and glutathione peroxidase 4 as a new therapeutic strategy of ferroptosis for esophageal squamous cell carcinoma. Esophagus. 2023;20(3):492–501.

Wang Y, Lin K, Xu T, Wang L, Fu L, Zhang G, et al. Development and validation of prognostic model based on the analysis of autophagy-related genes in colon cancer. Aging. 2021;13(14):19028–47.

Zhao L, Zhou X, Xie F, Zhang L, Yan H, Huang J, et al. Ferroptosis in cancer and cancer immunotherapy. Cancer communications (London, England). 2022;42(2):88–116.

Xu T, Ding W, Ji X, Ao X, Liu Y, Yu W, et al. Molecular mechanisms of ferroptosis and its role in cancer therapy. J Cell Mol Med. 2019;23(8):4900–12.

Download references

Acknowledgements

Not applicable.

Funding

No funding.

Author information

Shen Li, Kai Tao and Hong Yun contributed equally to this work.

Authors and Affiliations

Department of Biotherapy, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, Chengdu, Sichuan, China

Shen Li, Hong Yun, Jiaqing Yang & Xuelei Ma

West China School of Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan, China

Kai Tao & Jiaqing Yang

West China School of Stomatology, Sichuan University, Chengdu, Sichuan, China

Yuanling Meng

Health Management Center, General Practice Medical Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China


Contributions

Study Concept and design: MXL, LS, ZF; Search Strategy: TK, LS, YH; Selection Criteria: TK, LS, YJQ, YH; Quality Assessment: LS, YJQ, MXL and TK; Drafting of the Manuscript: LS, TK, YH. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Fan Zhang or Xuelei Ma.

Ethics declarations

Competing interests

The authors declare no competing interests.


Supplementary Information

Supplementary material 1.

Supplementary material 2.



About this article

Cite this article

Li, S., Tao, K., Yun, H. et al. Ferroptosis is a protective factor for the prognosis of cancer patients: a systematic review and meta-analysis. BMC Cancer 24, 604 (2024). https://doi.org/10.1186/s12885-024-12369-5


Received: 07 November 2023

Accepted: 10 May 2024

Published: 17 May 2024

DOI: https://doi.org/10.1186/s12885-024-12369-5

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords: Ferroptosis



  • Open access
  • Published: 14 May 2024

Research outcomes informing the selection of public health interventions and strategies to implement them: A cross-sectional survey of Australian policy-maker and practitioner preferences

  • Luke Wolfenden 1,2,3,
  • Alix Hall 1,2,3,
  • Adrian Bauman 1,4,5,
  • Andrew Milat 6,7,
  • Rebecca Hodder 1,2,3,
  • Emily Webb 1,
  • Kaitlin Mooney 1,
  • Serene Yoong 1,2,3,8,9,
  • Rachel Sutherland 1,2,3 &
  • Sam McCrabb 1,2,3

Health Research Policy and Systems volume 22, Article number: 58 (2024)


A key role of public health policy-makers and practitioners is to ensure beneficial interventions are implemented effectively enough to yield improvements in public health. The use of evidence to guide public health decision-making to achieve this is recommended. However, few studies have examined the relative value, as reported by policy-makers and practitioners, of different broad research outcomes (that is, measures of cost, acceptability and effectiveness). To guide the conduct of research and better inform public health policy and practice, this study aimed to describe the research outcomes that Australian policy-makers and practitioners consider important for their decision-making when selecting (a) public health interventions and (b) strategies to support their implementation, and (c) to assess differences in research outcome preferences between policy-makers and practitioners.

An online value-weighting survey was conducted with Australian public health policy-makers and practitioners working in the field of non-communicable disease prevention. Participants were presented with a list of research outcomes and were asked to select up to five they considered most critical to their decision-making. They then allocated 100 points across these outcomes, allocating more points to those perceived as more important. Outcome lists were derived from a review and consolidation of evaluation and outcome frameworks in the fields of public health knowledge translation and implementation. We used descriptive statistics to report relative preferences overall and for policy-makers and practitioners separately.

Of the 186 participants, 90 primarily identified as policy-makers and 96 as public health prevention practitioners. Overall, research outcomes of effectiveness, equity, feasibility, and sustainability were identified as the four most important outcomes when considering either interventions or strategies to implement them. Scores were similar for most outcomes between policy-makers and practitioners.

For Australian policy-makers and practitioners working in the field of non-communicable disease prevention, outcomes related to effectiveness, equity, feasibility, and sustainability appear particularly important to their decisions about the interventions they select and the strategies they employ to implement them. The findings suggest researchers should seek to meet these information needs and prioritize the inclusion of such outcomes in their research and dissemination activities. The extent to which these outcomes are critical to informing the decision of policy-makers and practitioners working in other jurisdictions or contexts warrants further investigation.

Peer Review reports

Research evidence has a key role in public health policy-making [ 1 ]. Consideration of research is important to maximize the potential impact of investments in health policies and services. Public health policy-makers and practitioners frequently seek out research to inform their professional decision-making [ 2 ]. However, they report that published research is not well aligned with their evidence needs [ 3 , 4 ]. Public health decision-making is a complex and dynamic process where evidence is used in a variety of ways, and for different purposes [ 3 , 5 , 6 ]. Ensuring research meets the evidence needs of public health policy-makers and practitioners is, therefore, an important strategy to improve its use in decision-making [ 7 , 8 , 9 , 10 ].

“Research outcomes” are broad domains or constructs measured to evaluate the impacts of health policies, practices or interventions, such as their effectiveness or acceptability. They are distinct from “outcome measures”, which are the measures selected to assess an outcome. Outcome measures require detailed specification of measurement parameters, including the measurement techniques and instrument, and consideration of the suitability of its properties (for example, validity) given the research question. The inclusion of research outcomes considered most relevant to public health policy-makers and practitioners is one way in which researchers can support evidence-informed decision-making.

Policy-makers are primarily responsible for developing public health policy and selecting and resourcing health programs. Practitioners are primarily responsible for supporting their implementation. As such, public health policy-makers and practitioners require research to: (i) help identify “what works” to guide the selection of interventions that will be beneficial for their community, for example, those that are effective in improving health and acceptable to the target population; and/or (ii) help identify “how to implement” effective interventions, for example, strategies that are capable of achieving implementation at a level sufficient to accrue benefit, are affordable and reach the targeted population [ 6 , 11 ]. Research that includes outcomes relevant to these responsibilities facilitates evidence-informed decision-making by public health policy-makers and practitioners.

Initiatives such as the World Health Organization INTEGRATe Evidence (WHO INTEGRATE) framework [ 12 ] and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Evidence to Decision framework [ 13 ] have been designed to support the selection of public health interventions. Application of these frameworks requires the collation and synthesis of a range of scientific evidence, including studies employing qualitative and quantitative research designs. Collectively, the frameworks suggest public health policy-makers and practitioners should consider, alongside research outcomes reporting the effectiveness of a public health intervention, other research outcomes such as cost-effectiveness, potential harms and the acceptability of an intervention to patients or the community.

Several authors have also sought to guide the outcomes researchers should include in implementation studies [ 11 ]. Proctor and colleagues defined a range of implementation research outcomes [distinct from service or clinical (intervention) effectiveness outcomes], including intervention adoption, appropriateness, feasibility, fidelity, cost, penetration and sustainability [ 14 ]. This work helped standardize how the field of implementation science defines, measures and reports implementation outcomes. More recently, McKay and colleagues put forward measures of implementation “determinants” and “outcomes” and proposed a “minimum set” of such outcomes to include in implementation and scale-up studies. The implementation research outcomes proposed by both Proctor and McKay and colleagues were developed primarily from the input of researchers, with the aim of improving the quality and consistency of reporting in implementation science. However, the relative value of these outcomes to the decision-making of public health policy-makers, and in particular practitioners, has remained largely unexplored.

While several studies have explored policy-maker and practitioner research evidence preferences, these have focused on a small number of potential outcomes [ 15 , 16 , 17 ]. An appraisal of the potential value and importance of a comprehensive range of research outcomes to public health policy-maker and practitioner decision-making is therefore warranted. In this study, we sought to quantify the relative importance of research outcomes from the perspective of Australian public health policy-makers and practitioners working in the field of non-communicable disease prevention (hereafter referred to as “prevention” policy-makers or practitioners). Specifically, using a value-weighting methodology to elicit relative preferences, the study aimed to describe: (a) the research outcomes prevention policy-makers and practitioners regard as important to their decision-making when selecting a public health intervention to address an identified health issue; (b) the research outcomes they regard as important to their decision-making when selecting a strategy to support the implementation of a public health intervention in the community; and (c) differences between prevention policy-makers and practitioners regarding their research outcome preferences.

Design and setting

An online cross-sectional value-weighting survey was conducted with Australian public health prevention policy-makers and practitioners. This study was undertaken as one step of a broader program of work to establish a core outcome set that has been prospectively registered on the Core Outcome Measures in Effectiveness Trials database (COMET; https://www.comet-initiative.org/Studies/Details/1791).

Participant eligibility

To be eligible, participants had to self-identify as having worked as a public health prevention policy-maker or practitioner at a government or non-government health organization within the past 5 years. While the term “policy-maker” has been used to describe legislators in US studies, in Australian research it has broadly been used to describe employees of government departments (or non-government agencies) involved in the development of public health policy [ 18 , 19 , 20 , 21 , 22 ]. Policy-makers are not typically involved in the direct implementation of policy or the delivery of health services. We defined a “policy-maker” as a professional who makes decisions, plans and actions that are undertaken to achieve specific public health prevention goals on behalf of a government or non-government organization [ 23 ]. Practitioners are typically employed by government or non-government organizations responsible for prevention service provision, and are directly involved in implementing, or supporting the implementation of, public health policies or programs. Specifically, we defined a “practitioner” as a professional engaged in the delivery of public health prevention programs, implementing services or models of care in health and community settings (definition developed by the research team). Research and evaluation are a core competency of the public health prevention workforce in Australia [ 24 ], as in other countries [ 25 ]. As such, participants may be engaged in research and have published research studies. Researchers without an explicit public health policy or practice role, such as those employed solely by academic institutions, were excluded.

Recruitment

Comprehensive methods were used to recruit individuals through several agencies. First, email invitations were distributed to Australian government health agencies at local (for example, New South Wales Local Health District Population Health units), state (for example, departments or ministries of health) and national levels, as well as to non-government organizations (for example, Cancer Council) and professional societies (for example, Public Health Association Australia). Australian practitioners registered with the International Union for Health Promotion and Education (IUHPE) were sent the study invitation via publicly available email addresses or LinkedIn (where identified). Authors who had published articles on relevant topics from 2018 to 2021 in three Australian public health journals [Australian and New Zealand Journal of Public Health (ANZJPH), Health Promotion Journal of Australia (HPJA) and Public Health Research and Practice (PHRP)] were also invited to participate. Invitation emails included links to the participant information statement and the online survey. The online survey was also promoted on the social media accounts of a partnering organization [National Centre of Implementation Science (NCOIS)] as well as on Twitter and LinkedIn, from which individuals could self-select to participate. Reminder emails were sent to non-responders at approximately 2 and 4 weeks following the initial email invitation.

Data collection and measures

The online survey was hosted on servers at the Hunter Medical Research Institute, New South Wales, Australia, and deployed using REDCap [ 26 ], a secure web-based application for building and managing online surveys and databases. The survey took approximately 20–30 min to complete.

Professional characteristics

Participants completed brief items assessing their professional role (that is, practitioner or policy-maker), their years of experience as policy-makers or practitioners, their professional qualifications and the prevention risk factors (for example, smoking, nutrition, physical activity, injury and sexual health) for which they had expertise.

Valued intervention and implementation outcomes

We sought to identify outcomes that may be valued by public health policy-makers and practitioners when making decisions about which policies and/or programs of interventions to implement and how implementation could best occur. We separated outcomes on this basis, consistent with recommendations from the evidence-to-policy and practice literature [ 27 ], the effectiveness–implementation research typology [ 28 , 29 ] and trial conduct and reporting guidelines [ 30 ]. This is illustrated in a broad study logic model (Fig. 1).

Fig. 1 Both effective interventions and effective implementation are required to improve health outcomes

The authors undertook a review of intervention- and implementation-relevant outcome frameworks to determine program and intervention outcomes that may be of interest to policy-makers and practitioners, including the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework [ 31 , 32 ], the Intervention Scalability Assessment Tool (ISAT) [ 18 ] and Proctor and colleagues’ implementation outcome definitions [ 14 ], as well as a series of publications on the topic [ 31 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 ]. This review was used to generate a comprehensive inventory of all possible outcomes (and outcome definitions) that may be of interest to public health policy-makers and practitioners. The outcome list was then reduced by grouping outcomes addressing similar constructs or concepts. A panel of 16 public health policy-makers provided feedback on the perceived importance of each outcome for evidence-informed policy and practice decision-making, as well as on the proposed outcome definitions. This process occurred over two rounds, until no further improvements or clarifications were suggested or requested, yielding a final list of 17 outcomes to inform the selection of public health interventions and 16 outcomes to inform the selection of implementation strategies (Additional file 1: Table S1). Panel participants also pre-tested the survey instrument; however, they were not invited to participate in the value-weighting study.

Participating public health policy-makers and practitioners completed the value-weighting survey. Value-weighting surveys offer advantages over other methods of identifying preferences (such as ranks or mean scores on a rating scale), as they provide an opportunity to quantify the relative preference or value of different outcomes from the perspective of public health policy-makers or practitioners. Participants were presented with the list of outcomes and their definitions, and were asked to select up to 5 of the 17 intervention outcomes “that they considered are critical to their decision-making when selecting a public health intervention to address an identified health issue” and up to 5 of the 16 implementation outcomes “that they consider to be critical to their decision-making when selecting a strategy to support the implementation of a public health intervention in the community”. Participants were then asked to value weight, allocating 100 points across their five (or fewer) selected intervention and implementation outcomes, with a higher allocation of points representing a greater level of perceived importance. In this way, participants weight the allocation of points to outcomes based on preference; no statistical weights are applied in the analysis. Participants were asked to select up to five outcomes as this restriction forced a prioritization of the outcomes. The identification of a small number of critical outcomes, rather than all relevant outcomes, is also recommended to facilitate research outcome harmonization [ 44 , 45 ].
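The allocation rules described above (at most five outcomes selected, 100 points distributed among them) can be sketched as a simple validity check. This is an illustrative reconstruction, not the authors' survey code, and the function name is hypothetical:

```python
def validate_allocation(points_by_outcome, max_selected=5, total_points=100):
    """Check one participant's value-weighting response.

    points_by_outcome maps outcome name -> points allocated. The response
    is valid when at most `max_selected` outcomes received points and the
    allocated points sum to `total_points`.
    """
    selected = {name: pts for name, pts in points_by_outcome.items() if pts > 0}
    return len(selected) <= max_selected and sum(selected.values()) == total_points

# A valid response: four outcomes, points summing to 100
response = {"effectiveness": 40, "equity": 25, "feasibility": 20, "sustainability": 15}
print(validate_allocation(response))  # True
```

In the actual survey, points were entered in free-text fields, so responses whose points did not sum to 100 were retained and rescaled at analysis time rather than rejected.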

Statistical analysis

All statistical analyses and data management were undertaken in SAS version 9.3. Descriptive statistics were used to describe the study sample. Similar to other value-weighting studies, we used descriptive analyses to rank the intervention and implementation outcomes from highest to lowest importance [ 46 , 47 ]. Items not selected or allocated any points were assigned a score of 0, to reflect that they were not perceived as a high-priority outcome by the participant. Specifically, the mean points allocated to each of the individual outcomes were calculated and ranked in descending order. This was calculated for the entire participant sample, as well as separately for policy-makers and practitioners. As points were assigned in free-text fields, in instances where participants allocated more or fewer than 100 points across the individual items, the points they allocated were standardized to 100. Differences in the points allocated to each individual outcome by policy-maker/practitioner role were explored using the Mann–Whitney U test. To examine any differences in outcome preferences by participant risk factor expertise, we also examined and described outcome preferences among risk factor subgroups (with a combined sample of > 30 participants).
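The descriptive steps above (zero-filling unselected outcomes, rescaling each participant's allocation to sum to 100 points, then averaging and ranking) can be sketched as follows. The outcome names and responses are hypothetical, and the paper's actual analysis was run in SAS:

```python
def standardize(response, outcomes):
    """Zero-fill unselected outcomes and rescale one participant's points to sum to 100."""
    total = sum(response.values())
    return {o: 100.0 * response.get(o, 0) / total for o in outcomes}

def ranked_means(responses, outcomes):
    """Mean points per outcome across participants, ranked from most to least important."""
    rows = [standardize(r, outcomes) for r in responses]
    means = {o: sum(row[o] for row in rows) / len(rows) for o in outcomes}
    return sorted(means.items(), key=lambda item: item[1], reverse=True)

outcomes = ["effectiveness", "equity", "feasibility"]
responses = [
    {"effectiveness": 60, "equity": 40},       # sums to 100
    {"effectiveness": 45, "feasibility": 45},  # sums to 90, rescaled to 100
]
print(ranked_means(responses, outcomes))
# [('effectiveness', 55.0), ('feasibility', 25.0), ('equity', 20.0)]
```

Between-group differences (for example, policy-makers versus practitioners) could then be compared per outcome by applying a Mann–Whitney U test to these standardized point distributions.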

A total of 186 eligible participants completed the survey in part or in full.

Of the 186 participants, 90 primarily identified as policy-makers and 96 as public health prevention practitioners (Table  1 ). In all, 37% of participants (47% policy-makers, 27% practitioners) had over 15 years’ experience, and approximately one third (32% policy-makers, 36% practitioners) had a PhD. The most common areas of experience were nutrition and dietetics (38% policy-maker, 53% practitioner), physical activity or sedentary behaviour (46% policy-maker, 44% practitioner), obesity (49% policy-maker, 48% practitioner) and tobacco, alcohol or other drugs (51% policy-maker, 34% practitioner).

Valued outcomes

Intervention outcomes

A total of 169 participants (83 policy-makers and 86 practitioners, with 7 and 10 missing, respectively) responded to the value-weighting questions for the 17 listed intervention outcomes. Table 2 (Fig. 2) reports the mean and standard deviation of points allocated by policy-makers and practitioners for each outcome, ranked in descending order from most to least important. For policy-makers and practitioners combined, the effectiveness of an intervention and its impact on equity were clearly identified as the leading two outcomes, with mean allocations of 24.47 [standard deviation (SD) = 17.43] and 13.44 (SD = 12.80), respectively. The mean scores for the outcomes of feasibility (9.78) and sustainability (9.04), ranked third and fourth respectively, were similar; scores then dropped noticeably to 7.24 for acceptability and 5.81 for economic outcomes.

Fig. 2 Line graph representing mean points allocated for the 17 intervention outcomes overall and by role

For most outcomes, average scores were similar for policy-makers and practitioners. However, practitioner scores for the outcome of acceptability (mean = 8.95, SD = 9.11), ranked third most important by practitioners, differed significantly from policy-maker scores (mean = 5.48, SD = 9.62), for whom it ranked seventh (p = 0.005). Economics/cost outcomes were ranked fifth by policy-makers (mean = 8.28, SD = 10.63), which differed significantly from practitioners (mean = 3.43, SD = 6.56), who ranked them ninth (p = 0.002). For co-benefits, ranked eighth by policy-makers (mean = 4.37, SD = 7.78), scores differed significantly from practitioners (mean = 2.27, SD = 6.49), who ranked them thirteenth (p = 0.0215). Rankings for the top five outcomes were identical for those with expertise in nutrition and dietetics, physical activity or sedentary behaviour, obesity, and tobacco, alcohol or other drugs (Additional file 1: Table S2).

Implementation outcomes

A total of 153 participants (75 policy-makers and 78 practitioners, with 15 and 18 missing, respectively) responded to the value-weighting questions for the 16 listed implementation outcomes (Table 3, Fig. 3). The effectiveness of an implementation strategy was clearly identified by participants as the most important implementation outcome, with a mean allocation of 19.82 (SD = 16.85) overall. The mean scores for the next three ranked outcomes, namely equity (mean = 10.42, SD = 12.7), feasibility (mean = 10.2, SD = 12.91) and sustainability (mean = 10.08, SD = 10.58), were similar; thereafter, scores dropped noticeably for measures of adoption (mean = 8.55, SD = 10.90), the fifth-ranked outcome.

Fig. 3 Line graph representing mean points allocated for the 16 implementation outcomes overall and by role

For most implementation outcomes (Fig. 3), policy-makers' and practitioners' scores were similar. However, economic outcomes were ranked seventh by policy-makers (mean = 5.58, SD = 9.25), compared with eleventh by practitioners (mean = 2.88, SD = 6.67); the difference in points allocated was statistically significant between the two groups (p = 0.0439). Timeliness was ranked tenth most important by policy-makers (mean = 4.03, SD = 7.72), compared with fourteenth by practitioners (mean = 2.05, SD = 5.78); the difference in mean scores between policy-makers and practitioners on this outcome was not significant. Rankings and scores were similar for those with expertise in nutrition and dietetics, physical activity or sedentary behaviour, obesity, and tobacco, alcohol or other drugs (Additional file 1: Table S3).

Broadly, this study sought to better understand the information valued by public health policy-makers and practitioners to support their decisions regarding what interventions should be implemented in the community and how. The most valued research outcomes were the same regardless of whether policy-makers or practitioners were selecting interventions or implementation strategies, namely outcomes regarding the effectiveness of interventions and implementation strategies. Following these, outcomes about equity, feasibility and sustainability also appeared to represent priorities. The study also found broad convergence among the most valued research outcomes between policy-makers and practitioners, and across participants with expertise in different non-communicable disease (NCD) risk factors (for example, nutrition, obesity and tobacco). Such findings underscore the importance of research reporting these outcomes to support the translation of public health research into policy and practice.

For outcomes about decisions regarding intervention selection, the findings are broadly consistent with factors recommended by evidence-to-decision frameworks. For example, the top six ranked outcomes (effectiveness, equity, feasibility, sustainability, acceptability and economic) are also represented in both the WHO INTEGRATE framework [ 12 ] and the GRADE Evidence to Decision framework [ 13 ]. However, research outcomes about harms (adverse effects), which are included in both the WHO INTEGRATE and GRADE frameworks, were ranked thirteenth by participants in this study. Such a finding was surprising given that the potential benefits and harms of an intervention must be considered to appraise its net impact on patient or public health. Health professionals, however, do not have accurate expectations of the harms and benefits of therapeutic interventions. This appears particularly to be the case for public health professionals, who acknowledge the potential for unintended consequences of policies [ 48 ] but consider these risks to be minimal [ 49 ]. The findings, therefore, may reflect the tendency of health professionals to overestimate the benefits of therapeutic interventions and, to a larger extent, underestimate harms [ 50 , 51 ]. In doing so, participants may have elevated the value they placed on outcomes regarding the beneficial effects of an intervention and discounted the value of outcomes reporting potential harms. Further research is warranted to substantiate this hypothesis, or to explore whether other factors, such as participant comprehension or misinterpretation of the outcome description, may explain the finding. Nonetheless, the inclusion of measures of adverse effects (or harms) as trial outcomes is prudent to support evidence-informed public health decision-making, as is the use of strategies to facilitate risk communication to ensure the likelihood of such outcomes is understood by policy-makers and practitioners [ 52 , 53 , 54 ].

To our knowledge, this is the first study to examine the research evidence needs of public health policy-makers and practitioners when deciding on what strategies may be used to support policy or program implementation. Most of the eight implementation outcomes recommended by Proctor and colleagues [ 14 ] were ranked within the top eight by participants of this study. However, equity outcomes, ranked second by these participants, were not included in the list of outcomes defined by Proctor and colleagues. This finding may reflect the values of public health, a discipline with equity at its core [ 55 ]. It may also reflect the increasing attention to issues of health equity in implementation science [ 56 ].

Further, one of the eight Proctor outcomes, penetration – defined by Proctor and colleagues as the integration or saturation of an intervention within a service setting and its subsystems – was not ranked highly. Successful penetration implies a level of organizational institutionalization of an intervention, which, once achieved, may continue to provide ongoing benefit to patients or populations. It may also suggest capacity within the organization to expand implementation or adopt new interventions. Penetration outcomes, therefore, have been suggested to be particularly important for modelling and understanding the potential impact of investing scarce health resources in the implementation of public health policies and interventions [ 57 ].

At face value, such findings may suggest that, at least from the perspective of public health policy-makers and practitioners, penetration outcomes are not particularly valued in decision-making. However, the finding may also reflect a lack of familiarity with this term among public health policy-makers and practitioners, as related outcomes such as “reach” are more commonly used in the literature [ 14 , 58 ]. Alternatively, it may be due to the conceptual similarity of penetration to other outcomes such as adoption, maintenance or sustainability. In other studies, for example, penetration has been operationalized as the product of “reach”, “adoption” and “organizational maintenance” [ 58 ]. A lack of clear conceptual distinction may have led some participants to allocate points to related outcomes such as “adoption” rather than “penetration”.

The use of concept mapping techniques, consolidation of definitions of existing outcomes, and articulation of specific measures aligned to these outcomes may reduce some of these conceptual challenges. Indeed, best practice for developing core outcome sets for clinical trials recommends engaging end-users [ 45 ], stakeholders and researchers to articulate both broad outcomes and the specific measures of them, supporting a shared understanding of the important outcomes (and measures) to be included in such research. For example, there are many measures, and economic methods to derive them, related to a broad outcome of “cost” (for example, absolute costs, cost–effectiveness, cost–benefit, cost–utility and budget impact analysis) [ 59 ]. However, public health policy-makers’ preferences for, or the perceived value of, these different measures to their decision-making will likely vary. While work in the field to map or align specific measures to broad outcomes is ongoing [ 57 , 58 , 60 ], extending this work to empirically investigate end-user preferences for measures would be an important contribution to the field.

Broadly speaking, there was little variation in the outcomes valued between policy-makers and practitioners. However, economic evaluations were ranked as more important by policy-makers. The findings may reflect differences in the roles of Australian public health policy-makers and practitioners. That is, government policy-makers are often responsible for setting and financing the provision of public health programs, whereas health practitioners are responsible for directly supporting or undertaking their delivery. Economic considerations, therefore, may have greater primacy among policy-makers, who may be more likely to incur program costs [ 19 ]. Further research to explore and better understand these areas of divergence is warranted.

The study intended to provide information about outcomes that were generally of most use in public health policy and practice decision-making. However, such decisions are often highly contextual, and preferences may vary depending on the policy-maker or practitioner, the health issue to be addressed, the target population or broader decision-making circumstances [ 2 , 61 ]. As such, the extent to which the findings reported in this study generalize to other contexts, such as those working in different fields of public health, on different health issues or from countries or jurisdictions outside Australia is unknown. Future research examining the outcome preferences of public health policy-makers and practitioners in different contexts, therefore, is warranted.

The contextual nature of evidence needs of policy-makers and practitioners may explain, in part, the variability in outcome preferences. In many cases, for example, the mean of the outcome preference was less than its standard deviation. The interpretation of the study findings should consider this variability. That is, there is little distinguishing the mean preference ranks of many outcomes. However, the study findings at the extremes are unambiguous, suggesting clear preferences for the highest over the lowest ranking outcomes that did not differ markedly across policy-makers, practitioners or those with expertise in addressing different non-communicable disease risks such as nutrition, physical activity or tobacco or alcohol use.

Several study limitations are worth considering when interpreting the research findings. The initial inventory of outcomes was compiled from outcome frameworks, many of which were generic health or medical research outcomes that are uncommon in public health prevention research. There was considerable overlap in the outcomes included across frameworks, though how these were defined at times varied. Variability in outcome terminology has previously been identified as a problem for the field [ 62 ]. Despite being provided with definitions for each, some participants may have responded to survey items based on their pre-existing understanding of these terms. Furthermore, following completion of the study, a programming error was identified whereby the definition of “Acceptability of the implementation strategy” was incorrectly assigned as “A measure of the uptake or reach of an implementation strategy”. The extent to which this may have influenced participant preferences is unclear, so sensitivity analyses were conducted by removing participants who selected acceptability as a measure of interest. We conducted two analyses: one in which the participants who chose acceptability had that selection removed but their other rankings retained, and another in which all of their data were deleted. The top five outcomes did not differ across these analyses, with only sustainability moving from fourth to second place in the second sensitivity analysis (Additional file 1 : Tables S4 and S5).
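The two sensitivity analyses described above can be sketched in a few lines of pandas; the data frame, column names and point values here are purely illustrative assumptions, not the study's dataset:

```python
import pandas as pd

# Hypothetical point-allocation data: each row is one participant's
# allocation to one outcome (names and values are illustrative only).
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "outcome": ["effectiveness", "acceptability",
                "effectiveness", "sustainability",
                "acceptability", "equity"],
    "points": [60, 40, 70, 30, 50, 50],
})

# Participants affected by the mis-defined "acceptability" item.
affected = df.loc[df["outcome"] == "acceptability", "participant"].unique()

# Analysis 1: drop only the acceptability allocations,
# retaining the affected participants' other rankings.
sens1 = df[df["outcome"] != "acceptability"]

# Analysis 2: drop all data from the affected participants.
sens2 = df[~df["participant"].isin(affected)]

# Re-rank outcomes by mean points under each analysis.
rank1 = sens1.groupby("outcome")["points"].mean().sort_values(ascending=False)
rank2 = sens2.groupby("outcome")["points"].mean().sort_values(ascending=False)
```

Comparing `rank1` and `rank2` against the full-sample ranking shows whether the top-ranked outcomes are robust to the error, which is the comparison reported in Tables S4 and S5.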

The pathway from research production to research use in health policy or practice is complex. While many effective public health policies and interventions exist across a range of community settings [ 63 , 64 , 65 , 66 ], their implementation at a level capable of achieving population-level risk reductions remains elusive [ 67 , 68 , 69 , 70 ]. Nonetheless, undertaking research with end-use in mind, including reporting the outcomes valued by decision-makers, will likely facilitate the knowledge translation process [ 7 ]. In this study, we found that outcomes related to effectiveness, equity, feasibility and sustainability appear important to the decisions policy-makers and practitioners make about the interventions they select and the strategies they employ to implement public health prevention initiatives. Researchers interested in supporting evidence-informed decision-making should seek to meet these information needs and prioritize such outcomes in dissemination activities to policy-makers and practitioners.

Contribution to the literature

Determining the core outcomes that meet the research needs of policy-makers and practitioners is essential to facilitate research use and knowledge translation.

Here we quantify the relative values of a variety of research outcomes commonly used in health research.

Findings suggest that the primary outcomes of interest to public health prevention policy-makers and practitioners, when making decisions about the selection of interventions and the strategies to implement them, relate to effectiveness, equity, feasibility and sustainability, and that these preferences do not differ markedly between policy-makers and practitioners.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ANZJPH: Australian and New Zealand Journal of Public Health

HPJA: Health Promotion Journal of Australia

ISAT: Intervention Scalability Assessment Tool

NCD: Non-communicable disease

NCOIS: National Centre of Implementation Science

PHRP: Public Health Research and Practice

WHO: World Health Organization

Campbell D, Moore G, Sax Institute. Increasing the use of research in policymaking. An Evidence Check rapid review brokered by the Sax Institute for the NSW Ministry of Health. 2017. https://www.health.nsw.gov.au/research/Documents/increasing-the-use-of-research.pdf .

Oliver KA, de Vocht F. Defining “evidence” in public health: a survey of policymakers’ uses and preferences. Eur J Public Health. 2017;27(2):112–7. https://doi.org/10.1093/eurpub/ckv082 .

Newson RS, Rychetnik L, King L, Milat AJ, Bauman AE. Looking for evidence of research impact and use: a qualitative study of an Australian research-policy system. Res Eval. 2021;30(4):458–69. https://doi.org/10.1093/reseval/rvab017 .

van der Graaf P, Cheetham M, McCabe K, Rushmer R. Localising and tailoring research evidence helps public health decision making. Health Info Libr J. 2018;35(3):202–12. https://doi.org/10.1111/hir.12219 .

World Health Organization. Evidence, policy, impact: WHO guide for evidence-informed decision-making. World Health Organization; 2021.

Global Commission on Evidence to Address Societal Challenges. The Evidence Commission Report: a wake-up call and path forward for decisionmakers, evidence intermediaries, and impact-oriented evidence producers. 2022. https://www.mcmasterforum.org/networks/evidence-commission/report/english .

Wolfenden L, Mooney K, Gonzalez S, et al. Increased use of knowledge translation strategies is associated with greater research impact on public health policy and practice: an analysis of trials of nutrition, physical activity, sexual health, tobacco, alcohol and substance use interventions. Health Res Policy Syst. 2022;20(1):15. https://doi.org/10.1186/s12961-022-00817-2 .

Eljiz K, Greenfield D, Hogden A, et al. Improving knowledge translation for increased engagement and impact in healthcare. BMJ Open Qual. 2020;9(3): e000983. https://doi.org/10.1136/bmjoq-2020-000983 .

Squires JE, Santos WJ, Graham ID, et al. Attributes and features of context relevant to knowledge translation in health settings: a response to recent commentaries. Int J Health Policy Management. 2023;12(1):1–4. https://doi.org/10.34172/ijhpm.2023.7908 .

Thomas A, Bussières A. Leveraging knowledge translation and implementation science in the pursuit of evidence informed health professions education. Adv Health Sci Educ Theory Pract. 2021;26(3):1157–71. https://doi.org/10.1007/s10459-020-10021-y .

Dobbins M, Jack S, Thomas H, Kothari A. Public health decision-makers’ informational needs and preferences for receiving research evidence. Worldviews Evid-Based Nurs. 2007;4(3):156–63. https://doi.org/10.1111/j.1741-6787.2007.00089.x .

Rehfuess EA, Stratil JM, Scheel IB, Portela A, Norris SL, Baltussen R. The WHO-INTEGRATE evidence to decision framework version 1.0: integrating WHO norms and values and a complexity perspective. BMJ Glob Health. 2019;4(Suppl 1):e000844. https://doi.org/10.1136/bmjgh-2018-000844 .

Moberg J, Oxman AD, Rosenbaum S, et al. The GRADE Evidence to Decision (EtD) framework for health system and public health decisions. Health Res Policy Syst. 2018;16(1):45. https://doi.org/10.1186/s12961-018-0320-2 .

Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. https://doi.org/10.1007/s10488-010-0319-7 .

Dodson EA, Geary NA, Brownson RC. State legislators’ sources and use of information: bridging the gap between research and policy. Health Educ Res. 2015;30(6):840–8. https://doi.org/10.1093/her/cyv044 .

Morshed AB, Dodson EA, Tabak RG, Brownson RC. Comparison of research framing preferences and information use of state legislators and advocates involved in cancer control, United States, 2012–2013. Prev Chronic Dis. 2017;14:E10. https://doi.org/10.5888/pcd14.160292 .

Turon H, Wolfenden L, Finch M, et al. Dissemination of public health research to prevent non-communicable diseases: a scoping review. BMC Public Health. 2023;23(1):757. https://doi.org/10.1186/s12889-023-15622-x .

Milat A, Lee K, Conte K, et al. Intervention Scalability Assessment Tool: a decision support tool for health policy makers and implementers. Health Res Policy Syst. 2020;18(1):1–17.

Milat AJ, King L, Newson R, et al. Increasing the scale and adoption of population health interventions: experiences and perspectives of policy makers, practitioners, and researchers. Health Res Policy Syst. 2014;12(1):18. https://doi.org/10.1186/1478-4505-12-18 .

Cleland V, McNeilly B, Crawford D, Ball K. Obesity prevention programs and policies: practitioner and policy-maker perceptions of feasibility and effectiveness. Obesity. 2013;21(9):E448–55. https://doi.org/10.1002/oby.20172 .

Wolfenden L, Bolsewicz K, Grady A, et al. Optimisation: defining and exploring a concept to enhance the impact of public health initiatives. Health Res Policy Syst. 2019;17(1):108. https://doi.org/10.1186/s12961-019-0502-6 .

Purtle J, Dodson EA, Nelson K, Meisel ZF, Brownson RC. Legislators’ sources of behavioral health research and preferences for dissemination: variations by political party. Psychiatr Serv. 2018;69(10):1105–8. https://doi.org/10.1176/appi.ps.201800153 .

World Health Organization. WHO Health policy [Internet]. 2019.

Australian Health Promotion Association. Core competencies for health promotion practitioners. Maroochydore: University of the Sunshine Coast; 2009.

Barry MM, Battel-Kirk B, Dempsey C. The CompHP Core Competencies Framework for Health Promotion in Europe. Health Educ Behav. 2012;39(6):648–62. https://doi.org/10.1177/1090198112465620 .

Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81. https://doi.org/10.1016/j.jbi.2008.08.010 .

Wolfenden L, Williams CM, Kingsland M, et al. Improving the impact of public health service delivery and research: a decision tree to aid evidence-based public health practice and research. Aust N Zeal J Public Health. 2020;44(5):331–2. https://doi.org/10.1111/1753-6405.13023 .

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812 .

Wolfenden L, Williams CM, Wiggers J, Nathan N, Yoong SL. Improving the translation of health promotion interventions using effectiveness–implementation hybrid designs in program evaluations. Health Promot J Austr. 2016;27(3):204–7. https://doi.org/10.1071/HE16056 .

Wolfenden L, Foy R, Presseau J, et al. Designing and undertaking randomised implementation trials: guide for researchers. Br Med J. 2021;372:m3721. https://doi.org/10.1136/bmj.m3721 .

Suhonen R, Papastavrou E, Efstathiou G, et al. Patient satisfaction as an outcome of individualised nursing care. Scand J Caring Sci. 2012;26(2):372–80. https://doi.org/10.1111/j.1471-6712.2011.00943.x .

Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103(6):e38-46. https://doi.org/10.2105/ajph.2013.301299 .

Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The Evidence-based Practice Attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Sci. 2017;12(1):44. https://doi.org/10.1186/s13012-017-0573-0 .

Sansoni JE. Health outcomes: an overview from an Australian perspective. 2016.

Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res. 2017;17(1):88. https://doi.org/10.1186/s12913-017-2031-8 .

Weiner BJ, Lewis CC, Stanick C, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108. https://doi.org/10.1186/s13012-017-0635-3 .

Simoens S. Health economic assessment: a methodological primer. Int J Environ Res Public Health. 2009;6(12):2950–66. https://doi.org/10.3390/ijerph6122950 .

Lorgelly PK, Lawson KD, Fenwick EA, Briggs AH. Outcome measurement in economic evaluations of public health interventions: a role for the capability approach? Int J Environ Res Public Health. 2010;7(5):2274–89. https://doi.org/10.3390/ijerph7052274 .

Williams K, Sansoni J, Morris D, Grootemaat P, Thompson C. Patient-reported outcome measures: literature review. Sydney: Australian Commission on Safety and Quality in Health Care; 2016.

Zilberberg MD, Shorr AF. Understanding cost-effectiveness. Clin Microbiol Infect. 2010;16(12):1707–12. https://doi.org/10.1111/j.1469-0691.2010.03331.x .

Feeny DH, Eckstrom E, Whitlock EP, et al. A primer for systematic reviewers on the measurement of functional status and health-related quality of life in older adults [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013. Available from: https://www.ncbi.nlm.nih.gov/books/NBK169159/ .

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7. https://doi.org/10.2105/ajph.89.9.1322 .

Institute of Medicine Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. National Academies Press (US); 2001.

Higgins JP, Thomas J, Chandler J, et al. Cochrane handbook for systematic reviews of interventions. John Wiley & Sons; 2019.

Williamson PR, Altman DG, Bagley H, et al. The COMET Handbook: version 1.0. Trials. 2017;18(3):280. https://doi.org/10.1186/s13063-017-1978-4 .

Paul CL, Sanson-Fisher R, Douglas HE, Clinton-McHarg T, Williamson A, Barker D. Cutting the research pie: a value-weighting approach to explore perceptions about psychosocial research priorities for adults with haematological cancers. Eur J Cancer Care. 2011;20(3):345–53. https://doi.org/10.1111/j.1365-2354.2010.01188.x .

Fradgley EA, Paul CL, Bryant J, Oldmeadow C. Getting right to the point: identifying Australian outpatients’ priorities and preferences for patient-centred quality improvement in chronic disease care. Int J Qual Health Care. 2016;28(4):470–7. https://doi.org/10.1093/intqhc/mzw049 .

Oliver K, Lorenc T, Tinkler J, Bonell C. Understanding the unintended consequences of public health policies: the views of policymakers and evaluators. BMC Public Health. 2019;19(1):1057. https://doi.org/10.1186/s12889-019-7389-6 .

Macintyre S, Petticrew M. Good intentions and received wisdom are not enough. J Epidemiol Community Health. 2000;54(11):802. https://doi.org/10.1136/jech.54.11.802 .

Hoffmann TC, Del Mar C. Clinicians’ expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2017;177(3):407–19. https://doi.org/10.1001/jamainternmed.2016.8254 .

Hanoch Y, Rolison J, Freund AM. Reaping the benefits and avoiding the risks: unrealistic optimism in the health domain. Risk Anal. 2019;39(4):792–804. https://doi.org/10.1111/risa.13204 .

Oakley GP Jr, Johnston RB Jr. Balancing benefits and harms in public health prevention programmes mandated by governments. Br Med J. 2004;329(7456):41–3. https://doi.org/10.1136/bmj.329.7456.41 .

Pitt AL, Goldhaber-Fiebert JD, Brandeau ML. Public health interventions with harms and benefits: a graphical framework for evaluating tradeoffs. Med Decis Making. 2020;40(8):978–89. https://doi.org/10.1177/0272989x20960458 .

McDowell M, Rebitschek FG, Gigerenzer G, Wegwarth O. A simple tool for communicating the benefits and harms of health interventions: a guide for creating a fact box. MDM Policy Pract. 2016;1(1):2381468316665365. https://doi.org/10.1177/2381468316665365 .

World Health Organization. Social determinants of health. 2022.

Brownson RC, Kumanyika SK, Kreuter MW, Haire-Joshu D. Implementation science should give higher priority to health equity. Implement Sci. 2021;16(1):28. https://doi.org/10.1186/s13012-021-01097-0 .

Brownson RC, Colditz GA, Proctor EK. Dissemination and implementation research in health: translating science to practice. Oxford University Press; 2017.

Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, contrasting, and integrating dissemination and implementation outcomes included in the RE-AIM and implementation outcomes frameworks. Front Public Health. 2020;8:430. https://doi.org/10.3389/fpubh.2020.00430 .

Eisman AB, Kilbourne AM, Dopp AR, Saldana L, Eisenberg D. Economic evaluation in implementation science: making the business case for implementation strategies. Psychiatry Res. 2020;283:112433. https://doi.org/10.1016/j.psychres.2019.06.008 .

Allen P, Pilar M, Walsh-Bailey C, et al. Quantitative measures of health policy implementation determinants and outcomes: a systematic review. Implement Sci. 2020;15(1):47. https://doi.org/10.1186/s13012-020-01007-w .

Whitty JA, Lancsar E, Rixon K, Golenko X, Ratcliffe J. A systematic review of stated preference studies reporting public preferences for healthcare priority setting. Patient. 2014;7(4):365–86. https://doi.org/10.1007/s40271-014-0063-2 .

Smith PG, Morrow RH, Ross DA, editors. Field trials of health interventions: a toolbox. 3rd edition. Chapter 12, Outcome measures and case definition. 2015.

Wolfenden L, Barnes C, Lane C, et al. Consolidating evidence on the effectiveness of interventions promoting fruit and vegetable consumption: an umbrella review. Int J Behav Nutr Phys Act. 2021;18(1):11. https://doi.org/10.1186/s12966-020-01046-y .

Nathan N, Hall A, McCarthy N, et al. Multi-strategy intervention increases school implementation and maintenance of a mandatory physical activity policy: outcomes of a cluster randomised controlled trial. Br J Sports Med. 2022;56(7):385–93. https://doi.org/10.1136/bjsports-2020-103764 .

Sutherland R, Brown A, Nathan N, et al. A multicomponent mHealth-based intervention (SWAP IT) to decrease the consumption of discretionary foods packed in school lunchboxes: type I effectiveness-implementation hybrid cluster randomized controlled trial. J Med Internet Res. 2021;23(6):e25256. https://doi.org/10.2196/25256 .

Breslin G, Shannon S, Cummings M, Leavey G. An updated systematic review of interventions to increase awareness of mental health and well-being in athletes, coaches, officials and parents. Syst Rev. 2022;11(1):99. https://doi.org/10.1186/s13643-022-01932-5 .

McCrabb S, Lane C, Hall A, et al. Scaling-up evidence-based obesity interventions: a systematic review assessing intervention adaptations and effectiveness and quantifying the scale-up penalty. Obesity Rev. 2019;20(7):964–82. https://doi.org/10.1111/obr.12845 .

Wolfenden L, McCrabb S, Barnes C, et al. Strategies for enhancing the implementation of school-based policies or practices targeting diet, physical activity, obesity, tobacco or alcohol use. Cochrane Database Syst Rev. 2022. https://doi.org/10.1002/14651858.CD011677.pub3 .

Wolfenden L, Barnes C, Jones J, et al. Strategies to improve the implementation of healthy eating, physical activity and obesity prevention policies, practices or programmes within childcare services. Cochrane Database Syst Rev. 2020. https://doi.org/10.1002/14651858.CD011779.pub3 .

Sutherland RL, Jackson JK, Lane C, et al. A systematic review of adaptations and effectiveness of scaled-up nutrition interventions. Nutr Rev. 2022;80(4):962–79. https://doi.org/10.1093/nutrit/nuab096 .

Acknowledgements

Not applicable.

Funding

This study was funded in part by a National Health and Medical Research Council (NHMRC) Centre for Research Excellence – National Centre of Implementation Science (NCOIS) Grant (APP1153479) and a New South Wales (NSW) Cancer Council Program Grant (G1500708). LW is supported by an NHMRC Investigator Grant (G1901360).

Author information

Authors and affiliations

Faculty of Health and Medicine, School of Medicine and Public Health, University of Newcastle, Newcastle, NSW, 2318, Australia

Luke Wolfenden, Alix Hall, Adrian Bauman, Rebecca Hodder, Emily Webb, Kaitlin Mooney, Serene Yoong, Rachel Sutherland & Sam McCrabb

Hunter New England Population Health, Hunter New England Local Health District, Wallsend, NSW, 2287, Australia

Luke Wolfenden, Alix Hall, Rebecca Hodder, Serene Yoong, Rachel Sutherland & Sam McCrabb

Hunter Medical Research Institute, Newcastle, NSW, 2305, Australia

Prevention Research Collaboration, Charles Perkins Centre, School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia

Adrian Bauman

The Australian Prevention Partnership Centre, Sydney, NSW, Australia

School of Public Health, University of Sydney, Sydney, NSW, Australia

Andrew Milat

Centre for Epidemiology and Evidence, NSW Ministry of Health, Sydney, Australia

School of Health Sciences, Swinburne University of Technology, Melbourne, VIC, 3122, Australia

Serene Yoong

Global Nutrition and Preventive Health, Institute of Health Transformation, School of Health and Social Development, Deakin University, Burwood, VIC, Australia

Contributions

LW and SMc led the conception and design of the study, were closely involved in data analysis and interpretation and wrote the manuscript. AH, AB, AM and RH comprised the study advisory committee, reviewed the study’s methods and assisted with survey development. AH was responsible for data analysis. KM and EW assisted with survey development, data collection and preliminary analysis. AH, AB, AM, RH, SY and RS were involved in interpretation and revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Luke Wolfenden .

Ethics declarations

Ethics approval and consent to participate

Ethics approval was provided by the University of Newcastle Human Research Ethics Committee (H-2014-0070). Implied consent was obtained from participants rather than explicit consent (that is, individuals were not required to expressly provide consent by checking an “I consent” box; rather, undertaking the survey provided implied consent).

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Table S1. Mean point allocations for each of the 17 intervention outcomes overall and by area of expertise (where field of expertise n ≥ 30). Table S2. Mean point allocations for each of the 16 implementation outcomes overall and by area of expertise (where field of expertise n ≥ 30). Table S3. Mean points for implementation outcomes overall and by area of expertise (field of expertise n ≥ 30). Table S4. Sensitivity analysis with participants who selected ‘acceptability’ having that selection removed from the analysis but their other rankings retained. Table S5. Sensitivity analysis with the whole data set of participants who selected ‘acceptability’ removed from the analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Wolfenden, L., Hall, A., Bauman, A. et al. Research outcomes informing the selection of public health interventions and strategies to implement them: A cross-sectional survey of Australian policy-maker and practitioner preferences. Health Res Policy Sys 22 , 58 (2024). https://doi.org/10.1186/s12961-024-01144-4

Received : 12 July 2023

Accepted : 19 April 2024

Published : 14 May 2024

Health Research Policy and Systems

ISSN: 1478-4505

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]


  1. How to Construct an Effective Search Strategy

    The preliminary search is the point in the research process where you can identify a gap in the literature. Use the search strategies above to help you get started. If you have any questions or need help with developing your search strategy, please schedule an appointment with a librarian. We are available to meet online and in-person.

  2. A systematic approach to searching: an efficient and complete method to

    Librarians and information specialists are often involved in the process of preparing and completing systematic reviews (SRs), where one of their main tasks is to identify relevant references to include in the review. Although several recommendations for the process of searching have been published [2-6], none describe the development of a systematic search strategy from ...

  3. How to carry out a literature search for a systematic review: a

    A literature search is distinguished from, but integral to, a literature review. Literature reviews are conducted for the purpose of (a) locating information on a topic or identifying gaps in the literature for areas of future study, (b) synthesising conclusions in an area of ambiguity and (c) helping clinicians and researchers inform decision-making and practice guidelines.

  4. Develop a search strategy

    A search strategy is an organised structure of key terms used to search a database. The search strategy combines the key concepts of your search question in order to retrieve accurate results. Your search strategy will account for all possible search terms, keywords and phrases, and truncated and wildcard variations of search terms.

  5. Search strategy formulation for systematic reviews: Issues, challenges

    In this review, we focus on literature searching, specifically the development of the search strategies used in systematic reviews. This is a complex process (Cooper et al., 2018; Lefebvre et al., 2020), in which the search methods and choice of databases to be used to identify literature for the systematic review are specified and peer ...

  6. Literature Review: Developing a search strategy

    Have a search framework. Search frameworks are mnemonics which can help you focus your research question. They are also useful in helping you to identify the concepts and terms you will use in your literature search. PICO is a search framework commonly used in the health sciences to focus clinical questions. As an example, you work in an aged ...

  7. Research Guides: Literature Reviews: Develop Search Strategies

    Developing a search strategy is a balance between a very precise search that yields fewer, highly relevant results and a comprehensive search (high retrieval) with lower precision. The focus of a narrative literature review for a dissertation or thesis is thoroughness, so you should aim for high retrieval.

  8. Researching for your literature review: Develop a search strategy

    Look up your 'sample set' articles in a database that you will use for your literature review. For the articles indexed in the database, look at the records to see what keywords and/or subject headings are listed. The 'gold set' will also provide a means of testing your search strategy.

  9. Search Strategy

    Read about developing search strategies. Review the materials linked in the Resources box to learn more about searching. Watch the videos to learn more about combining synonyms to search smarter and faster. Create a search strategy that you might use in a database with the Search Strategy Builder in the Activity box.

  10. Developing a Search Strategy

    Run your search strategy. Run a search for all the record numbers in your test set, using 'OR' in between each one. Lastly, combine the result of your search strategy with the test set using 'OR'. If the number of records retrieved stays the same, then the strategy has identified all the records in the test set.

  11. Researching for your literature review: Develop a search strategy

    A search strategy is the planned and structured organisation of terms used to search a database. An example of a search strategy incorporating all three concepts, that could be applied to different databases is shown below:

  12. 3. Search the literature

    When conducting a literature review, it is imperative to brainstorm a list of keywords related to your topic. Examining the titles, abstracts, and author-provided keywords of pertinent literature is a great starting point. ... Rarely will you construct a search strategy that yields appropriate and comprehensive results in one try. Literature ...

  13. Develop a search strategy

    A search strategy should be planned out and practiced before executing the final search in a database. A search strategy and search results should be documented throughout the searching process. What is a search strategy? A search strategy is an organized combination of keywords, phrases, subject headings, and limiters used to search a database.

  14. 4. Search strategy

    A good search strategy will include: key concepts and meaningful terms; keywords or subject headings; alternative keywords; care in linking concepts correctly; regular evaluation of search results, to ensure that your search is focused; and a detailed record of your final strategy. You will need to re-run your search at the end of the review ...

  15. Defining the process to literature searching in systematic reviews: a

    One area that is less well covered by the guidance, but nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent and it aims to develop evidence-based guidelines to peer review of electronic search strategies [5, 122, 123]. A corresponding ...

  16. Systematic Reviews: Constructing a Search Strategy and Searc... : AJN

    The systematic literature review, widely regarded as the gold standard for determining evidence-based practice, is increasingly used to guide policy decisions and the direction of future research. ... DEVELOPING THE SEARCH STRATEGY. A review protocol with a clearly defined review question and inclusion criteria will provide the foundation for ...

  17. Research Guides: Systematic Reviews: Search Strategy

    Creating a Search Strategy. A well constructed search strategy is the core of your systematic review and will be reported on in the methods section of your paper. The search strategy retrieves the majority of the studies you will assess for eligibility & inclusion. The quality of the search strategy also affects what items may have been missed.

  19. How to Write a Literature Review

    A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic. There are five key steps to writing a literature review: Search for ...

  20. How to undertake a literature search: a step-by-step guide

    Abstract. Undertaking a literature search can be a daunting prospect. Breaking the exercise down into smaller steps will make the process more manageable. This article suggests 10 steps that will help readers complete this task, from identifying key concepts to choosing databases for the search and saving the results and search strategy.

  21. Search Strategies

    Overview of Search Strategies. There are many ways to find literature for your review, and we recommend that you use a combination of strategies - keeping in mind that you're going to be searching multiple times in a variety of ways, using different databases and resources. Searching the literature is not a straightforward, linear process - it ...

  22. Search strategy template

    If your search strategies are not very developed, or the method you use doesn't lead to a good search, then consider using one of the other methods to see if changing your approach helps.

  23. Doing the literature review: Reporting your search strategy

    This will make your literature review more systematic, better structured, and it will be easier to write down the steps in your paper or article. The flow diagram shows the 'flow' of information in the different phases of a systematic review, by showing the number of records identified, included and excluded, and the reasons for exclusions.

  24. 5 Identifying the evidence: literature searching and evidence ...

    The search strategies for individual review questions can be combined with search filters to identify economic evidence. If using this approach, it may be necessary to adapt strategies for some databases to ensure adequate sensitivity. ... Quality assuring the literature search is an important step in developing guideline recommendations ...

  25. Ferroptosis is a protective factor for the prognosis of cancer patients

    Search strategy and selection criteria. This systematic review and meta-analysis were conducted following PRISMA guidelines. PubMed, EMBASE, and the Cochrane Library were systematically searched from their inception until February 27, 2024, with no language restrictions.

  26. Strategies of Public University Building Maintenance—A Literature Survey

    The study conducts a thorough literature review using Scopus as a search engine, employing the full-counting method for authorship, and using VOSviewer 1.6.20 software for bibliometric analysis to identify gaps and outline future research directions. ... especially on maintenance strategies and Life Cycle Costs of university buildings. In this ...

  27. Exploring Behavioral and Strategic Factors Affecting Secondary Students

    Literature Review Strategy of Using CPS Skills in CPS-Based STEM Education. CPS-based STEM education integrates science, technology, engineering, ... Appropriate technology is integrated into instructional design to support problem-solving, whereby students search for information, create products, or conduct mathematical analysis.

  28. Research outcomes informing the selection of public health

    A key role of public health policy-makers and practitioners is to ensure beneficial interventions are implemented effectively enough to yield improvements in public health. The use of evidence to guide public health decision-making to achieve this is recommended. However, few studies have examined the relative value, as reported by policy-makers and practitioners, of different broad research ...

  29. Research: What Companies Don't Know About How Workers Use AI

    Three Gallup studies shed light on when and why AI is being used at work — and how employees and customers really feel about it. Leaders who are exploring how AI might fit into their business ...

  30. The use of fear appeals for pandemic compliance: A systematic review of

    Interventions to pandemic outbreaks are often associated with the use of fear appeal to trigger behavioral change, especially in public health issues. However, no systematic review exists in the literature on the effectiveness of fear appeal strategies in the context of pandemic compliance. This paper aims at providing a systematic literature review that answers the following thought-provoking ...
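Several of the guides above describe validating a search strategy against a hand-picked test set of known relevant records: OR the strategy's results together with the test set, and if the record count does not change, the strategy already retrieves every test record. A minimal Python sketch of that set logic (the function name and record IDs here are illustrative, not taken from any of the cited guides):

```python
# Validate a search strategy against a "test set" of known relevant records.
# All record numbers below are hypothetical examples.

def strategy_captures_test_set(strategy_results: set, test_set: set) -> bool:
    """Mimic the database check: combine the strategy's results with the
    test set using 'OR' (set union). If the combined count equals the
    strategy's own count, every test record was already retrieved."""
    combined = strategy_results | test_set  # 'OR' in database syntax
    return len(combined) == len(strategy_results)

strategy_hits = {"rec001", "rec002", "rec003", "rec004"}

complete = {"rec002", "rec004"}   # known relevant records, all retrieved
print(strategy_captures_test_set(strategy_hits, complete))  # True

missed = {"rec002", "rec999"}     # rec999 was not retrieved
print(strategy_captures_test_set(strategy_hits, missed))    # False: refine
```

The count comparison works because the union only grows when the test set contains a record the strategy missed; an unchanged count means the test set is a subset of the strategy's results.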