
When you receive an invitation to peer review, you should be sent a copy of the paper's abstract to help you decide whether you wish to do the review. Try to respond to invitations promptly - it will prevent delays. It is also important at this stage to declare any potential Conflict of Interest.

The structure of the review report varies between journals. Some follow an informal structure, while others have a more formal approach.

" Number your comments!!! " (Jonathon Halbesleben, former Editor of Journal of Occupational and Organizational Psychology)

Informal Structure

Many journals don't provide criteria for reviews beyond asking for your 'analysis of merits'. In this case, you may wish to familiarize yourself with examples of other reviews done for the journal, which the editor should be able to provide or, as you gain experience, rely on your own evolving style.

Formal Structure

Other journals require a more formal approach. Sometimes they will ask you to address specific questions in your review via a questionnaire. Or they might want you to rate the manuscript on various attributes using a scorecard. Often you can't see these until you log in to submit your review. So when you agree to the work, it's worth checking for any journal-specific guidelines and requirements. If there are formal guidelines, let them direct the structure of your review.

In Both Cases

Whether specifically required by the reporting format or not, you should expect to compile comments to authors and possibly confidential ones to editors only.

The First Read-Through

From the abstract you received with the invitation to review, you should already understand the aims, key data and conclusions of the manuscript. If you don't, make a note now that you will need to give feedback on how to improve those sections.

The first read-through is a skim-read. It will help you form an initial impression of the paper and get a sense of whether your eventual recommendation will be to accept or reject the paper.

Keep a pen and paper handy when skim-reading.

Try to bear in mind the following questions - they'll help you form your overall impression:

  • What is the main question addressed by the research? Is it relevant and interesting?
  • How original is the topic? What does it add to the subject area compared with other published material?
  • Is the paper well written? Is the text clear and easy to read?
  • Are the conclusions consistent with the evidence and arguments presented? Do they address the main question posed?
  • If the author is disagreeing significantly with the current academic consensus, do they have a substantial case? If not, what would be required to make their case credible?
  • If the paper includes tables or figures, what do they add to the paper? Do they aid understanding or are they superfluous?

While you should read the whole paper, making the right choice of what to read first can save time by flagging major problems early on.

Editors say, " Specific recommendations for remedying flaws are VERY welcome ."

Examples of possibly major flaws include:

  • Drawing a conclusion that is contradicted by the author's own statistical or qualitative evidence
  • The use of a discredited method
  • Ignoring a process that is known to have a strong influence on the area under study

If experimental design features prominently in the paper, first check that the methodology is sound - if not, this is likely to be a major flaw.

You might examine:

  • The sampling in analytical papers
  • The sufficient use of control experiments
  • The precision of process data
  • The regularity of sampling in time-dependent studies
  • The validity of questions, the use of a detailed methodology and the data analysis being done systematically (in qualitative research)
  • That qualitative research extends beyond the author's opinions, with sufficient descriptive elements and appropriate quotes from interviews or focus groups

Major Flaws in Information

If methodology is less of an issue, it's often a good idea to look at the data tables, figures or images first. Especially in science research, it's all about the information gathered. If there are critical flaws in this, it's very likely the manuscript will need to be rejected. Such issues include:

  • Insufficient data
  • Unclear data tables
  • Contradictory data that either are not self-consistent or disagree with the conclusions
  • Confirmatory data that add little, if anything, to current understanding - unless strong arguments for such repetition are made

If you find a major problem, note your reasoning and clear supporting evidence (including citations).

After the initial read and using your notes, including those of any major flaws you found, draft the first two paragraphs of your review - the first summarizing the research question addressed and the second the contribution of the work. If the journal has a prescribed reporting format, this draft will still help you compose your thoughts.

The First Paragraph

This should state the main question addressed by the research and summarize the goals, approaches, and conclusions of the paper. It should:

  • Help the editor properly contextualize the research and add weight to your judgement
  • Show the author what key messages are conveyed to the reader, so they can be sure they are achieving what they set out to do
  • Focus on successful aspects of the paper so the author gets a sense of what they've done well

The Second Paragraph

This should provide a conceptual overview of the contribution of the research. So consider:

  • Is the paper's premise interesting and important?
  • Are the methods used appropriate?
  • Do the data support the conclusions?

After drafting these two paragraphs, you should be in a position to decide whether this manuscript is seriously flawed and should be rejected (see the next section), or whether it is publishable in principle and merits a detailed, careful read-through.

Even if you are coming to the opinion that an article has serious flaws, make sure you read the whole paper. This is very important because you may find some really positive aspects that can be communicated to the author. This could help them with future submissions.

A full read-through will also make sure that any initial concerns are indeed correct and fair. After all, you need the context of the whole paper before deciding to reject. If you still intend to recommend rejection, see the section "When recommending rejection."

Once the paper has passed your first read and you've decided the article is publishable in principle, one purpose of the second, detailed read-through is to help prepare the manuscript for publication. You may still decide to recommend rejection following a second reading.

" Offer clear suggestions for how the authors can address the concerns raised. In other words, if you're going to raise a problem, provide a solution ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Preparation

To save time and simplify the review:

  • Don't rely solely upon inserting comments on the manuscript document - make separate notes
  • Try to group similar concerns or praise together
  • If using a review program to note directly onto the manuscript, still try grouping the concerns and praise in separate notes - it helps later
  • Note line numbers of text upon which your notes are based - this helps you find items again and also aids those reading your review

Now that you have completed your preparations, you're ready to spend an hour or so reading carefully through the manuscript.

As you're reading through the manuscript for a second time, you'll need to keep in mind the argument's construction, the clarity of the language and content.

With regard to the argument’s construction, you should identify:

  • Any places where the meaning is unclear or ambiguous
  • Any factual errors
  • Any invalid arguments

You may also wish to consider:

  • Does the title properly reflect the subject of the paper?
  • Does the abstract provide an accessible summary of the paper?
  • Do the keywords accurately reflect the content?
  • Is the paper an appropriate length?
  • Are the key messages short, accurate and clear?

Not every submission is well written. Part of your role is to make sure that the text’s meaning is clear.

Editors say, " If a manuscript has many English language and editing issues, please do not try and fix it. If it is too bad, note that in your review and it should be up to the authors to have the manuscript edited ."

If the article is difficult to understand, you should have rejected it already. However, if the language is poor but you understand the core message, see if you can suggest improvements to fix the problem:

  • Are there certain aspects that could be communicated better, such as parts of the discussion?
  • Should the authors consider resubmitting to the same journal after language improvements?
  • Would you consider looking at the paper again once these issues are dealt with?

On Grammar and Punctuation

Your primary role is judging the research content. Don't spend time polishing grammar or spelling. Editors will make sure that the text is at a high standard before publication. However, if you spot grammatical errors that affect clarity of meaning, then it's important to highlight these. Expect to suggest such amendments - it's rare for a manuscript to pass review with no corrections.

A 2010 study of nursing journals found that 79% of recommendations by reviewers were influenced by grammar and writing style (Shattel et al., 2010).

1. The Introduction

A well-written introduction:

  • Sets out the argument
  • Summarizes recent research related to the topic
  • Highlights gaps in current understanding or conflicts in current knowledge
  • Establishes the originality of the research aims by demonstrating the need for investigations in the topic area
  • Gives a clear idea of the target readership, why the research was carried out and the novelty and topicality of the manuscript

Originality and Topicality

Originality and topicality can only be established in the light of recent authoritative research. For example, it's impossible to argue that there is a conflict in current understanding by referencing articles that are 10 years old.

Authors may make the case that a topic hasn't been investigated in several years and that new research is required. This point is only valid if researchers can point to recent developments in data gathering techniques or to research in indirectly related fields that suggest the topic needs revisiting. Clearly, authors can only do this by referencing recent literature. Obviously, where older research is seminal or where aspects of the methodology rely upon it, then it is perfectly appropriate for authors to cite some older papers.

Editors say, "Is the report providing new information; is it novel or just confirmatory of well-known outcomes ?"

It's common for the introduction to end by stating the research aims. By this point you should already have a good impression of them - if the explicit aims come as a surprise, then the introduction needs improvement.

2. Materials and Methods

Academic research should be replicable, repeatable and robust - and follow best practice.

Replicable Research

This makes sufficient use of:

  • Control experiments
  • Repeated analyses
  • Repeated experiments

These are used to make sure observed trends are not due to chance and that the same experiment could be repeated by other researchers - and result in the same outcome. Statistical analyses will not be sound if methods are not replicable. Where research is not replicable, the paper should be recommended for rejection.

Repeatable Methods

These give enough detail so that other researchers are able to carry out the same research. For example, equipment used or sampling methods should all be described in detail so that others could follow the same steps. Where methods are not detailed enough, it's usual to ask for the methods section to be revised.

Robust Research

This has enough data points to make sure the data are reliable. If there are insufficient data, it might be appropriate to recommend revision. You should also consider whether there is any in-built bias not nullified by the control experiments.

Best Practice

During these checks you should keep in mind best practice:

  • Standard guidelines were followed (e.g. the CONSORT Statement for reporting randomized trials)
  • The health and safety of all participants in the study was not compromised
  • Ethical standards were maintained

If the research fails to reach relevant best practice standards, it's usual to recommend rejection. What's more, you don't then need to read any further.

3. Results and Discussion

This section should tell a coherent story - What happened? What was discovered or confirmed?

Certain patterns of good reporting need to be followed by the author:

  • They should start by describing in simple terms what the data show
  • They should make reference to statistical analyses, such as significance or goodness of fit
  • Once the data have been described, they should evaluate the trends observed and explain the significance of the results to wider understanding. This can only be done by referencing published research
  • The outcome should be a critical analysis of the data collected

Discussion should always, at some point, gather all the information together into a single whole. Authors should describe and discuss the overall story formed. If there are gaps or inconsistencies in the story, they should address these and suggest ways future research might confirm the findings or take the research forward.

4. Conclusions

This section is usually no more than a few paragraphs and may be presented as part of the results and discussion, or in a separate section. The conclusions should reflect upon the aims - whether they were achieved or not - and, just like the aims, should not be surprising. If the conclusions are not evidence-based, it's appropriate to ask for them to be re-written.

5. Information Gathered: Images, Graphs and Data Tables

If you find yourself looking at a piece of information from which you cannot discern a story, then you should ask for improvements in presentation. This could be an issue with titles, labels, statistical notation or image quality.

Where information is clear, you should check that:

  • The results seem plausible, in case there is an error in data gathering
  • The trends you can see support the paper's discussion and conclusions
  • There are sufficient data. For example, in studies carried out over time, are there sufficient data points to support the trends described by the author?

You should also check whether images have been edited or manipulated to emphasize the story they tell. This may be appropriate but only if authors report on how the image has been edited (e.g. by highlighting certain parts of an image). Where you feel that an image has been edited or manipulated without explanation, you should highlight this in a confidential comment to the editor in your report.

6. List of References

You will need to check referencing for accuracy, adequacy and balance.

Where a cited article is central to the author's argument, you should check the accuracy and format of the reference - and bear in mind different subject areas may use citations differently. Otherwise, it's the editor’s role to exhaustively check the reference section for accuracy and format.

You should consider if the referencing is adequate:

  • Are important parts of the argument poorly supported?
  • Are there published studies that show similar or dissimilar trends that should be discussed?
  • If a manuscript only uses half the citations typical in its field, this may be an indicator that referencing should be improved - but don't be guided solely by quantity
  • References should be relevant, recent and readily retrievable

Check for a well-balanced list of references that:

  • Is helpful to the reader
  • Is fair to competing authors
  • Is not over-reliant on self-citation
  • Gives due recognition to the initial discoveries and related work that led to the work under assessment

You should be able to evaluate whether the article meets the criteria for balanced referencing without looking up every reference.

7. Plagiarism

By now you will have a deep understanding of the paper's content - and you may have some concerns about plagiarism.

Identified Concern

If you find - or already knew of - a very similar paper, this may be because the author overlooked it in their own literature search. Or it may be because it is very recent or published in a journal slightly outside their usual field.

You may feel you can advise the author how to emphasize the novel aspects of their own study, so as to better differentiate it from similar research. If so, you may ask the author to discuss their aims and results, or modify their conclusions, in light of the similar article. Of course, the research similarities may be so great that they render the work unoriginal and you have no choice but to recommend rejection.

"It's very helpful when a reviewer can point out recent similar publications on the same topic by other groups, or that the authors have already published some data elsewhere ." (Editor feedback)

Suspected Concern

If you suspect plagiarism, including self-plagiarism, but cannot recall or locate exactly what is being plagiarized, notify the editor of your suspicion and ask for guidance.

Most editors have access to software that can check for plagiarism.

Editors are not out to police every paper, but when plagiarism is discovered during peer review it can be properly addressed ahead of publication. If plagiarism is discovered only after publication, the consequences are worse for both authors and readers, because a retraction may be necessary.

For detailed guidelines, see COPE's Ethical guidelines for reviewers and Wiley's Best Practice Guidelines on Publishing Ethics.

8. Search Engine Optimization (SEO)

After the detailed read-through, you will be in a position to advise whether the title, abstract and key words are optimized for search purposes. In order to be effective, good SEO terms will reflect the aims of the research.

A clear title and abstract will improve the paper's search engine rankings and will influence whether the user finds and then decides to navigate to the main article. The title should contain the relevant SEO terms early on. This has a major effect on the impact of a paper, since it helps it appear in search results. A poor abstract can then lose the reader's interest and undo the benefit of an effective title - whilst the paper's abstract may appear in search results, the potential reader may go no further.

So ask yourself, while the abstract may have seemed adequate during earlier checks, does it:

  • Do justice to the manuscript in this context?
  • Highlight important findings sufficiently?
  • Present the most interesting data?

Editors say, " Does the Abstract highlight the important findings of the study ?"

If there is a formal report format, remember to follow it. This will often comprise a range of questions followed by comment sections. Try to answer all the questions; they are there because the editor felt that they are important. If you're following an informal report format, you could structure your report in three sections: summary, major issues, minor issues.

Summary

  • Give positive feedback first. Authors are more likely to read your review if you do so. But don't overdo it if you will be recommending rejection
  • Briefly summarize what the paper is about and what the findings are
  • Try to put the findings of the paper into the context of the existing literature and current knowledge
  • Indicate the significance of the work and if it is novel or mainly confirmatory
  • Indicate the work's strengths, its quality and completeness
  • State any major flaws or weaknesses and note any special considerations. For example, if previously held theories are being overlooked

Major Issues

  • Are there any major flaws? State what they are and what the severity of their impact is on the paper
  • Has similar work already been published without the authors acknowledging this?
  • Are the authors presenting findings that challenge current thinking? Is the evidence they present strong enough to prove their case? Have they cited all the relevant work that would contradict their thinking and addressed it appropriately?
  • If major revisions are required, try to indicate clearly what they are
  • Are there any major presentational problems? Are figures & tables, language and manuscript structure all clear enough for you to accurately assess the work?
  • Are there any ethical issues? If you are unsure it may be better to disclose these in the confidential comments section

Minor Issues

  • Are there places where meaning is ambiguous? How can this be corrected?
  • Are the correct references cited? If not, which should be cited instead/also? Are citations excessive, limited, or biased?
  • Are there any factual, numerical or unit errors? If so, what are they?
  • Are all tables and figures appropriate, sufficient, and correctly labelled? If not, say which are not

Your review should ultimately help the author improve their article. So be polite, honest and clear. You should also try to be objective and constructive, not subjective and destructive.

You should also:

  • Write clearly, so that you can be understood by people whose first language is not English
  • Avoid complex or unusual words, especially ones that would even confuse native speakers
  • Number your points and refer to page and line numbers in the manuscript when making specific comments
  • If you have been asked to only comment on specific parts or aspects of the manuscript, you should indicate clearly which these are
  • Treat the author's work the way you would like your own to be treated

Most journals give reviewers the option to provide some confidential comments to editors. Often this is where editors will want reviewers to state their recommendation - see the next section - but otherwise this area is best reserved for communicating malpractice such as suspected plagiarism, fraud, unattributed work, unethical procedures, duplicate publication, bias or other conflicts of interest.

However, this doesn't give reviewers permission to 'backstab' the author. Authors can't see this feedback and are unable to give their side of the story unless the editor asks them to. So in the spirit of fairness, write comments to editors as though authors might read them too.

Reviewers should check the preferences of individual journals as to where they want review decisions to be stated. In particular, bear in mind that some journals will not want the recommendation included in any comments to authors, as this can cause editors difficulty later.

You will normally be asked to indicate your recommendation (e.g. accept, reject, revise and resubmit, etc.) from a fixed-choice list and then to enter your comments into a separate text box.

Recommending Acceptance

If you're recommending acceptance, give details outlining why, and note any areas that could still be improved. Don't just give a short, cursory remark such as 'great, accept'.

Recommending Revision

Where improvements are needed, a recommendation for major or minor revision is typical. You may also choose to state whether you are willing to review the revised manuscript. If recommending revision, state the specific changes you feel need to be made. The author can then reply to each point in turn.

Some journals offer the option to recommend rejection with the possibility of resubmission – this is most relevant where substantial, major revision is necessary.

What can reviewers do to help? "Be clear in their comments to the author (or editor) which points are absolutely critical if the paper is given an opportunity for revision." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Recommending Rejection

If recommending rejection or major revision, state this clearly in your review (and see the next section, 'When recommending rejection').

Where manuscripts have serious flaws you should not spend any time polishing the review you've drafted or give detailed advice on presentation.

Editors say, " If a reviewer suggests a rejection, but her/his comments are not detailed or helpful, it does not help the editor in making a decision ."

In your recommendations for the author, you should:

  • Give constructive feedback describing ways that they could improve the research
  • Keep the focus on the research and not the author. This is an extremely important part of your job as a reviewer
  • Avoid making your criticisms only in confidential comments to the editor while being polite and encouraging to the author - if you do, the author may not understand why their manuscript has been rejected, won't get feedback on how to improve their research, and may be prompted to appeal

Remember to give constructive criticism even if recommending rejection. This helps developing researchers improve their work and explains to the editor why you felt the manuscript should not be published.

" When the comments seem really positive, but the recommendation is rejection…it puts the editor in a tough position of having to reject a paper when the comments make it sound like a great paper ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)


Writing a Literature Review


A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis is
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches - for example, qualitative versus quantitative research, empirical versus theoretical scholarship, or research divided by sociological, historical, or cultural sources.
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources.

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can be used also in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


How to Write a Peer Review

When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.

Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors.  If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.

Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!

Don’t

  • Recommend additional experiments or  unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial”.

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ], however, the data does not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!


How to Review a Scientific Paper in 10 Easy Steps

Summary of how to perform an invited review of a paper for publication.

Blog written by Jaime Fernández Sobaberas, a third-year PhD student in Biochemistry at Heidelberg University, Germany.

Reviewing scientific papers is an essential part of academic research and the publication process. It allows experts to assess the quality, validity, and significance of research findings before they are disseminated to the broader scientific community. Writing a comprehensive and constructive review contributes to the overall improvement of scientific knowledge. In this blog, we will discuss the key steps and considerations involved in reviewing a scientific paper.

Graphic summary of how to do an invited review of a scientific paper

1. Understand the Purpose of the Review:

Before you begin the review process, it’s important to understand the purpose of the review. Ask yourself why you have been asked to review the paper and what specific aspects you should focus on. Keep in mind that the goal is to provide a fair, unbiased, and constructive critique that helps the authors improve their work.

2. Familiarize Yourself with the Paper:

Start by reading the paper thoroughly and gaining a clear understanding of its content. Take note of the research question, methodology, data analysis, results, and conclusions. Identify any areas where you have expertise or concerns.

3. Evaluate the Paper’s Structure and Clarity:

Assess the paper’s overall structure, organization, and clarity of writing. Consider whether the abstract provides a concise summary of the study and whether the introduction effectively establishes the research context. Evaluate the logical flow of ideas, the use of headings and subheadings, and the clarity of the language. Note any sections that could benefit from additional clarification or restructuring.

4. Evaluate the Research Methods:

Assess the appropriateness and rigor of the research methods employed. Evaluate the study design, sample size, data collection techniques, and statistical analyses. Check whether the methods are adequately described, allowing for replication by other researchers. Identify any potential flaws or limitations in the methodology that could affect the validity of the results. Ensure that unique identifiers have been used for all reagents; for example, catalog numbers or RRIDs for all antibodies used.

5. Evaluate the Results and Analysis:

Examine the results and analysis presented in the paper. Assess whether the data support the research question and whether the statistical analysis is appropriate. Look for any inconsistencies or gaps in the data, or areas where the data may have been misrepresented. Consider the significance and implications of the results and whether they are supported by the evidence presented. Read all the supporting/supplemental resources (if available) to confirm they show enough evidence to support their main findings.

6. Assess the Discussion and Conclusions:

Evaluate the interpretation of the results in the discussion section. Consider whether the authors have provided a balanced and objective analysis of the findings. Assess the extent to which the conclusions align with the research question and the overall study objectives. Note any alternative interpretations or potential avenues for future research.

7. Consider Ethical Considerations:

While reviewing a scientific paper, it’s important to be mindful of ethical considerations. Evaluate whether the study adheres to ethical guidelines and standards, such as obtaining informed consent, maintaining participant confidentiality, and minimizing potential harm. Assess whether the study design and methods align with ethical principles, particularly when human or animal subjects are involved. If you identify any ethical concerns, highlight them in your review and suggest potential remedies or improvements.

8. Verify References and Citations:

Ensure that the references and citations provided in the paper are accurate, relevant, and up-to-date. Verify that all sources mentioned in the text are included in the reference list and vice versa. Check the quality and credibility of the references, assessing whether they are from reputable sources and contribute to the overall strength of the paper. If you notice any missing or inaccurate references, point them out in your review and suggest appropriate replacements if necessary.

9. Provide Constructive Feedback:

When writing your review, be constructive and respectful in your feedback. Clearly outline both the strengths and weaknesses of the paper, offering specific suggestions for improvement. Be specific and provide references to relevant literature to support your comments. Avoid making personal attacks or using derogatory language.

10. Summarize Your Review:

Conclude your review with a concise summary of your main points. Highlight the paper’s strengths, such as novel contributions or a well-executed methodology. Discuss the key weaknesses and areas that need improvement. Finally, provide an overall recommendation regarding the acceptance, revision, or rejection of the paper.

Reviewing a scientific paper is a critical process that contributes to the quality and integrity of scientific research. By following the steps outlined in this guide, you can provide valuable feedback to authors, help improve the quality of the research, and contribute to the advancement of scientific knowledge. Remember to approach the review process with objectivity, fairness, and a commitment to fostering scientific excellence.


Reviewing a scientific paper - some guidelines

The aim of the review is to provide authors with constructive feedback from specialists, so that they can make improvements to their work. This is of key importance to ensure the highest possible standard.

Before you start

Make sure that you are familiar with the Instructions for Authors of the Journal. Before you decide to accept or decline the invitation to review, consider this:

  • Is the paper within your area of expertise? If not, it may be difficult to provide a high quality review.
  • Do you have any conflict of interest? For example, do any of the authors work at the same organization as you, or do you know any of them personally? If so, let the editor know.
  • Make sure that you have the time. It is important to meet the deadlines.

Note that the paper sent to you for peer review is a privileged confidential document . This means that you cannot use the information obtained during the peer-review process for your own or any other person or organization’s advantage or to disadvantage or discredit others.

You should also not contact the authors directly; this is to protect your anonymity, as IWA Publishing does not share your identity with authors. Comments should only be submitted to the journal via the peer review system.

Do not allow your reviews to be influenced by the origins of a manuscript, by the nationality, religious or political beliefs, gender or other characteristics of the authors.

Two types of papers

Most papers are in one of these categories:

  • Research Papers : these papers are fully documented, interpreted accounts of significant findings of original research.
  • Review Papers : these are critical and comprehensive reviews that provide new insights or interpretation of a subject through thorough and systematic evaluation of available evidence. Note that a review paper is more than a literature overview: it must contain an in-depth critical review of the literature. Therefore, such a review paper is expected to have at least one experienced senior researcher among the authors.

General criteria

Your review will help the editor decide whether or not to publish the article:

  • Does the paper provide insight into an important issue?
  • Does the paper tell a good story?
  • Is the paper interesting for an international audience?
  • Does the insight from the paper stimulate new, important questions?
  • Is there a high probability that the paper will be read and cited by others?

Your comments in the review

  • Remember that authors will welcome positive feedback as well as constructive criticism from you.
  • Your comments for the “Editor only” will NOT be sent to the author.
  • Comments that will be transmitted to the author(s) do NOT reveal your identity.
  • When providing comments to the authors, please do not suggest that the authors include references to your own work in their article. We do not allow reviewers to instruct authors to cite their work unless it seems to be critical for the improvement of the submission. Even then, we only allow reviewers to suggest one of their own papers.

Hint: draft your comments in your own word processor, then copy and paste them into the review system.

Getting started

Start with the following quick checklist before you consider the content in detail:

  • Is the length of the paper within the limits of the journal?
  • Is the paper commercial or does it market a particular product or method?
  • Is the paper structured properly (abstract, keywords, material and methods, discussion, conclusions, references, etc.)?

Abstract, introduction and conclusion

Abstract

  • Make sure that the abstract is informative, can stand alone and covers the content.
  • A combination of the problem and the conclusions.
  • Maximum length according to the Journal instructions.
  • No figures or citations should be included here.
  • 3-6 keywords. Should be descriptive. The title words should not be repeated here.

Introduction

It should:

  • state the objective, the problem - the research question to be addressed,
  • provide a concise background: why the work was done,
  • quote literature only with direct bearing on the problem - not a textbook,
  • state a hypothesis – a suggested solution to the problem.

Conclusions

  • This is the “take-home message” of the paper. Should be short and concise.
  • Must be possible to derive from the results and discussion.
  • It is not a summary of the paper.
  • No references.

Read the abstract, introduction and conclusions

  • Is there a clear message?
  • Having read the introduction, can you tell what the contribution of the paper is? Try to formulate the message in your own words; you can use this later in your reviewer summary.
  • Do the perceptions or hypotheses in the introduction match the conclusions?
  • You should be able to find this out within half an hour of reading. By then you will probably have a first impression of whether the paper is worth publishing. If you are still positive, continue the review process. If you are negative, you can probably already explain why the paper is not worth publishing.

Detailed review

Materials and methods

  • Experiments : are the experiments documented adequately? Has information about positive and/or negative controls and the number of replicated experiments and/or samples been provided?
  • Model derivations : is the process model derived properly? Is it already known?
  • Results : are they presented so that you can easily see their significance? You may use a comment like: “ The paper would be significantly improved with the addition of more details about…”
  • Are concentrations shown with believable accuracy - or are they shown with too many significant digits?
  • Data analysis : have statistics been used in an appropriate way? Are the raw data presented in such a way that you can judge whether the statistical method is adequate? Are the data normally distributed, so that reporting standard deviations is justified? Are outliers discussed? (A minimal illustrative check is sketched after this list.)
  • Figures : Can the figures explain the results? Are the figure captions informative?
  • Tables : are all the inputs in the tables necessary to understand the message?
  • You may add comments like “ Overall I do not think that this article contains enough robust data to evidence the statement made on page X, lines Y–Z. ”
  • Note that the discussion section makes the paper scientific! Can the author explain and interpret the results? Can you relate the discussion to the hypotheses? You may write a comment like “ This discussion could be enlarged to explain…”
  • Have the results been critiqued against the literature? Have any similarities and discrepancies with other published data been identified and accounted for?
  • Can the conclusions be derived from the results and the discussions?
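If you want to back a comment on the data analysis with something concrete, a quick sanity check of the reported numbers can help. The sketch below is purely illustrative and not part of any journal's required workflow: it assumes you have typed a handful of reported values into a list yourself, and it uses SciPy's Shapiro-Wilk test as one possible normality check alongside a simple interquartile-range outlier flag.

```python
# Minimal illustrative sketch: check whether reporting mean +/- standard
# deviation is justified for a reported data series. The values are made up.
import numpy as np
from scipy import stats

values = np.array([4.1, 4.3, 3.9, 4.0, 4.2, 4.4, 3.8, 9.7])  # hypothetical concentrations

# Shapiro-Wilk test: a small p-value suggests the data are not normally
# distributed, so mean +/- SD may be misleading.
statistic, p_value = stats.shapiro(values)
print(f"Shapiro-Wilk p-value: {p_value:.4f}")

# Simple outlier flag: points more than 1.5 * IQR outside the quartiles.
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
outliers = values[(values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)]
print("Potential outliers:", outliers)
```

A check like this is only a prompt for a comment to the authors ("please discuss the apparent outlier on line Y"); it is not a substitute for the authors' own statistical analysis.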

Check the references

  • Have the author(s) done their homework with previous contributions?
  • Compare the introduction with the reference list. Is it clearly indicated what is new in this paper?
  • Are there both older and newer references?
  • How many references? There ought to be 20-30 references in most cases.
  • Are there any references that cannot be read by an English-speaking reader? At most 1-2 such references can be allowed (where appropriate), but they should be queried.
  • Is the author citing the original contribution or citing from a popular source?
  • Make sure that the references cited in the text are included in the reference list and vice versa .
Language

  • Many authors do not have English as their mother tongue. The text does not have to be perfect English, but it has to be clear and understandable. You may add a comment like “ This paper would benefit from some closer proof reading. It includes numerous linguistic errors (e.g. agreement of verbs) that at times make it difficult to follow. I would suggest that it may be useful to engage a professional English language editor following a restructure of the paper.”
  • Phrase your feedback appropriately and with due respect.
  • You do not need to go through the language issues yourself.

Your recommendation

Ensure that your final evaluation corresponds to your answers to the questions in the review form, especially if you are considering major revision or rejection.

Your recommendation will almost surely be one of these:

  • Reject (explain reason in your report)
  • Accept without revision (remember that this is very unusual! Most papers can be improved in some way)
  • Revise – either major or minor (explain the revision that is required, and tell the editor whether you would be willing to review the revised paper)

A step-by-step guide to peer review: a template for patients and novice reviewers (BMJ Health & Care Informatics, volume 28, issue 1, 2021)

Charlotte Blease, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
While relatively novel, patient peer review has the potential to change the healthcare publishing paradigm. It can do this by helping researchers enlarge the pool of people who are welcome to read, understand and participate in healthcare research. Academic journals that are early adopters of patient peer review have already committed to placing a priority on using person-centred language in publicly available abstracts and focusing on translational and practical research.

A wide body of literature has shown that including people with lived experiences in a truly meaningful way can improve the quality and efficiency of health research. Traditionally considered only as ‘subjects’ of research, over the last 10–15 years, patients and care partners have increasingly been invited to contribute to the design and conduct of studies. Established institutions are increasingly recognising the distinctive expertise patients possess—many patients have acquired deep insights about their conditions, symptoms, medical treatments and quality of healthcare delivery. Among some funders, including the views of patients is now a requirement to ensure research proposals are meaningful to persons with the lived experience of illness. Further illustrating these developments, patients are now involved in reviewing and making recommendations as part of funding institutions, setting research agendas and priorities, being funded for and leading their own research and leading or coauthoring scholarly publications, and are now participating in the peer review process for academic journals. 1–5 Patients offer an outsider’s perspective within mainstream healthcare: they have fewer institutional, professional or social allegiances and conflicts of interest—factors recognised as compromising the quality of research. Patient involvement is essential to move away from rhetorical commitments to embrace a truly patient-centred healthcare ecosystem where everyone has a place at the table.

As people with lived health experiences climb a ladder of engagement in patient–researcher partnerships, they may be asked to act as peer reviewers of academic manuscripts. However, many of these individuals do not hold professional training in medicine, healthcare or science and have never encountered the peer review process. Little guidance exists for patients and care partners tasked with reviewing and providing input on manuscripts in search of publication.

In conversation, however, even experienced researchers confess that learning how to peer review is part of a hidden curriculum in academia—a skill outlined by no formal means but rather learnt by mimicry. 6 As such, as they learn the process, novices may pick up bad habits. In the case of peer review, learning is the result of reading large numbers of academic papers, occasional conversations with mentors or commonly “trial by fire” experienced via reviewer comments to their own submissions. Patient reviewers are rarely exposed to these experiences and can be at a loss for where to begin. As a result, some may forgo opportunities to provide valuable and highly insightful feedback on research publications. Although some journals are highly specific about how reviewers should structure their feedback, many publications—including top-tier medical journals—assume that all reviewers will know how to construct responses. Only a few forward-thinking journals actively seeking peer review from people with lived health experiences currently point to review tips designed for experienced professionals. 7

As people with lived health experiences are increasingly invited to participate in peer review, it is essential that they be supported in this process. The peer review template for patients and novice reviewers ( table 1 ) is a series of steps designed to create a workflow for the main components of peer review. A structured workflow can help a reviewer organise their thoughts and create space to engage in critical thinking. The template is a starting point for anyone new to peer review, and it should be modified, adapted and built on for individual preferences and unique journal requirements. Peer reviews are commonly submitted via website portals, which vary widely in design and functionality; as such, reviewers are encouraged to decide how to best use the template on a case-by-case basis. Journals may require reviewers to copy and paste responses from the template into a journal website or upload a clean copy of the template as an attachment. Note: If uploading the review as an attachment, remember to remove the template examples and writing prompts .

Table 1: Peer review template for patients and other novice reviewers.

It is important to point out that patient reviewers are not alone in facing challenges and a steep learning curve in performing peer review. Many health research agendas and, as a result, publications straddle disciplines, requiring peer reviewers with complementary expertise and training. Some experts may be highly equipped to critique particular aspects of research papers while unsuited to comment on other parts. Curiously, however, it is seldom a requirement that invited peer reviewers admit their own limitations to comment on different dimensions of papers. Relatedly, while we do not suggest that all patient peer reviewers will be equipped to critique every aspect of submitted manuscripts—though some may be fully competent to do so—we suggest that candour about limitations of expertise would also benefit the broader research community.

As novice reviewers gain experience, they may find themselves solicited for a growing number of reviews, much like their more experienced counterparts or mentors. 8 Serving as a patient or care partner reviewer can be a rewarding form of advocacy and will be crucial to harnessing the feedback and expertise of persons with lived health experiences. As we move into a future where online searches for information are a ubiquitous first step in searching for answers to health-related questions, patient and novice reviewers may become the much-needed link between academia and the lay public.

Acknowledgments

LS thanks the experienced and novice reviewers who encouraged her to publish this template.

Twitter: @TheLizArmy, @crblease

Contributors: Both authors contributed substantially to the manuscript. LS conceived the idea and design and drafted the text. CB refined the idea and critically revised the text.

Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests: The authors have read and understood the BMJ policy on declaration of interests and declare the following interests: LS is a member of the BMJ Patient Advisory Panel, serves as a BMJ patient reviewer and is an ad hoc patient reviewer for the Patient-Centered Outcomes Research Institute; CB is a Keane OpenNotes scholar; both LS and CB work on OpenNotes, a philanthropically funded research initiative focused on improving transparency in healthcare.

Provenance and peer review: Commissioned; externally peer reviewed.

Ethics statements

Patient consent for publication: not required.

How to write a good scientific review article

Affiliation: The FEBS Journal Editorial Office, Cambridge, UK (PMID: 35792782; DOI: 10.1111/febs.16565).

Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up to date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the importance of building review-writing into a scientific career cannot be overstated. In this instalment of The FEBS Journal's Words of Advice series, I provide detailed guidance on planning and writing an informative and engaging literature review.

© 2022 Federation of European Biochemical Societies.



How to Write a Research Paper | A Beginner's Guide

A research paper is a piece of academic writing that provides analysis, interpretation, and argument based on in-depth independent research.

Research papers are similar to academic essays , but they are usually longer and more detailed assignments, designed to assess not only your writing skills but also your skills in scholarly research. Writing a research paper requires you to demonstrate a strong knowledge of your topic, engage with a variety of sources, and make an original contribution to the debate.

This step-by-step guide takes you through the entire writing process, from understanding your assignment to proofreading your final draft.


Table of contents

  • Understand the assignment
  • Choose a research paper topic
  • Conduct preliminary research
  • Develop a thesis statement
  • Create a research paper outline
  • Write a first draft of the research paper
  • Write the introduction
  • Write a compelling body of text
  • Write the conclusion
  • The second draft
  • The revision process
  • Research paper checklist

Completing a research paper successfully means accomplishing the specific tasks set out for you. Before you start, make sure you thoroughly understand the assignment task sheet:

  • Read it carefully, looking for anything confusing you might need to clarify with your professor.
  • Identify the assignment goal, deadline, length specifications, formatting, and submission method.
  • Make a bulleted list of the key points, then go back and cross completed items off as you’re writing.

Carefully consider your timeframe and word limit: be realistic, and plan enough time to research, write, and edit.


There are many ways to generate an idea for a research paper, from brainstorming with pen and paper to talking it through with a fellow student or professor.

You can try free writing, which involves taking a broad topic and writing continuously for two or three minutes to identify absolutely anything relevant that could be interesting.

You can also gain inspiration from other research. The discussion or recommendations sections of research papers often include ideas for other specific topics that require further examination.

Once you have a broad subject area, narrow it down to choose a topic that interests you, meets the criteria of your assignment, and is possible to research. Aim for ideas that are both original and specific:

  • A paper following the chronology of World War II would not be original or specific enough.
  • A paper on the experience of Danish citizens living close to the German border during World War II would be specific and could be original enough.

Note any discussions that seem important to the topic, and try to find an issue that you can focus your paper around. Use a variety of sources , including journals, books, and reliable websites, to ensure you do not miss anything glaring.

Do not only verify the ideas you have in mind, but look for sources that contradict your point of view.

  • Is there anything people seem to overlook in the sources you research?
  • Are there any heated debates you can address?
  • Do you have a unique take on your topic?
  • Have there been some recent developments that build on the extant research?

In this stage, you might find it helpful to formulate some research questions to help guide you. To write research questions, try to finish the following sentence: “I want to know how/what/why…”

A thesis statement is a statement of your central argument — it establishes the purpose and position of your paper. If you started with a research question, the thesis statement should answer it. It should also show what evidence and reasoning you’ll use to support that answer.

The thesis statement should be concise, contentious, and coherent. That means it should briefly summarize your argument in a sentence or two, make a claim that requires further evidence or analysis, and make a coherent point that relates to every part of the paper.

You will probably revise and refine the thesis statement as you do more research, but it can serve as a guide throughout the writing process. Every paragraph should aim to support and develop this central claim.


A research paper outline is essentially a list of the key topics, arguments, and evidence you want to include, divided into sections with headings so that you know roughly what the paper will look like before you start writing.

A structured outline can help make the writing process much more efficient, so it’s worth dedicating some time to creating one.

Your first draft won’t be perfect — you can polish later on. Your priorities at this stage are as follows:

  • Maintaining forward momentum — write now, perfect later.
  • Paying attention to clear organization and logical ordering of paragraphs and sentences, which will help when you come to the second draft.
  • Expressing your ideas as clearly as possible, so you know what you were trying to say when you come back to the text.

You do not need to start by writing the introduction. Begin where it feels most natural for you — some prefer to finish the most difficult sections first, while others choose to start with the easiest part. If you created an outline, use it as a map while you work.

Do not delete large sections of text. If you begin to dislike something you have written or find it doesn’t quite fit, move it to a different document, but don’t lose it completely — you never know if it might come in useful later.

Paragraph structure

Paragraphs are the basic building blocks of research papers. Each one should focus on a single claim or idea that helps to establish the overall argument or purpose of the paper.

Example paragraph

George Orwell’s 1946 essay “Politics and the English Language” has had an enduring impact on thought about the relationship between politics and language. This impact is particularly obvious in light of the various critical review articles that have recently referenced the essay. For example, consider Mark Falcoff’s 2009 article in The National Review Online, “The Perversion of Language; or, Orwell Revisited,” in which he analyzes several common words (“activist,” “civil-rights leader,” “diversity,” and more). Falcoff’s close analysis of the ambiguity built into political language intentionally mirrors Orwell’s own point-by-point analysis of the political language of his day. Even 63 years after its publication, Orwell’s essay is emulated by contemporary thinkers.

Citing sources

It’s also important to keep track of citations at this stage to avoid accidental plagiarism . Each time you use a source, make sure to take note of where the information came from.


The research paper introduction should address three questions: What, why, and how? After finishing the introduction, the reader should know what the paper is about, why it is worth reading, and how you’ll build your arguments.

What? Be specific about the topic of the paper, introduce the background, and define key terms or concepts.

Why? This is the most important, but also the most difficult, part of the introduction. Try to provide brief answers to the following questions: What new material or insight are you offering? What important issues does your essay help define or answer?

How? To let the reader know what to expect from the rest of the paper, the introduction should include a “map” of what will be discussed, briefly presenting the key elements of the paper in chronological order.

The major struggle faced by most writers is how to organize the information presented in the paper, which is one reason an outline is so useful. However, remember that the outline is only a guide and, when writing, you can be flexible with the order in which the information and arguments are presented.

One way to stay on track is to use your thesis statement and topic sentences . Check:

  • topic sentences against the thesis statement;
  • topic sentences against each other, for similarities and logical ordering;
  • and each sentence against the topic sentence of that paragraph.

Be aware of paragraphs that seem to cover the same things. If two paragraphs discuss something similar, they must approach that topic in different ways. Aim to create smooth transitions between sentences, paragraphs, and sections.

The research paper conclusion is designed to guide your reader out of the paper’s argument, giving them a sense of finality.

Trace the course of the paper, emphasizing how it all comes together to prove your thesis statement. Give the paper a sense of finality by making sure the reader understands how you’ve settled the issues raised in the introduction.

You might also discuss the more general consequences of the argument, outline what the paper offers to future students of the topic, and suggest any questions the paper’s argument raises but cannot or does not try to answer.

You should not :

  • Offer new arguments or essential information
  • Take up any more space than necessary
  • Begin with stock phrases that signal you are ending the paper (e.g. “In conclusion”)

There are four main considerations when it comes to the second draft.

  • Check how your vision of the paper lines up with the first draft and, more importantly, that your paper still answers the assignment.
  • Identify any assumptions that might require (more substantial) justification, keeping your reader’s perspective foremost in mind. Remove these points if you cannot substantiate them further.
  • Be open to rearranging your ideas. Check whether any sections feel out of place and whether your ideas could be better organized.
  • If you find that old ideas do not fit as well as you anticipated, you should cut them out or condense them. You might also find that new and well-suited ideas occurred to you during the writing of the first draft — now is the time to make them part of the paper.

The goal during the revision and proofreading process is to ensure you have completed all the necessary tasks and that the paper is as well-articulated as possible.

Global concerns

  • Confirm that your paper completes every task specified in your assignment sheet.
  • Check for logical organization and flow of paragraphs.
  • Check paragraphs against the introduction and thesis statement.

Fine-grained details

Check the content of each paragraph, making sure that:

  • each sentence helps support the topic sentence.
  • no unnecessary or irrelevant information is present.
  • all technical terms your audience might not know are identified.

Next, think about sentence structure, grammatical errors, and formatting. Check that you have correctly used transition words and phrases to show the connections between your ideas. Look for typos, cut unnecessary words, and check for consistency in aspects such as heading formatting and spelling.

Finally, you need to make sure your paper is correctly formatted according to the rules of the citation style you are using. For example, you might need to include an MLA heading  or create an APA title page .


Checklist: Research paper

I have followed all instructions in the assignment sheet.

My introduction presents my topic in an engaging way and provides necessary background information.

My introduction presents a clear, focused research problem and/or thesis statement .

My paper is logically organized using paragraphs and (if relevant) section headings .

Each paragraph is clearly focused on one central idea, expressed in a clear topic sentence .

Each paragraph is relevant to my research problem or thesis statement.

I have used appropriate transitions  to clarify the connections between sections, paragraphs, and sentences.

My conclusion provides a concise answer to the research question or emphasizes how the thesis has been supported.

My conclusion shows how my research has contributed to knowledge or understanding of my topic.

My conclusion does not present any new points or information essential to my argument.

I have provided an in-text citation every time I refer to ideas or information from a source.

I have included a reference list at the end of my paper, consistently formatted according to a specific citation style .

I have thoroughly revised my paper and addressed any feedback from my professor or supervisor.

I have followed all formatting guidelines (page numbers, headers, spacing, etc.).



Writing a good review article


As a young researcher, you might wonder how to start writing your first review article, and the extent of the information that it should contain. A review article is a comprehensive summary of the current understanding of a specific research topic and is based on previously published research. Unlike research papers, it does not contain new results, but can propose new inferences based on the combined findings of previous research.

Types of review articles

Review articles are typically of three types: literature reviews, systematic reviews, and meta-analyses.

A literature review is a general survey of the research topic and aims to provide a reliable and unbiased account of the current understanding of the topic.

A systematic review , in contrast, is more specific and attempts to address a highly focused research question. Its presentation is more detailed, with information on the search strategy used, the eligibility criteria for inclusion of studies, the methods utilized to review the collected information, and more.

A meta-analysis is similar to a systematic review in that both are systematically conducted with a properly defined research question. However, unlike the latter, a meta-analysis compares and evaluates a defined number of similar studies. It is quantitative in nature and can help assess contrasting study findings.
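To illustrate what "quantitative" means here, the textbook fixed-effect meta-analysis pools the individual study estimates using inverse-variance weights. The formulation below is a standard, generic example rather than a requirement of any particular journal:

```latex
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\operatorname{Var}(\hat{\theta}_i)},
\qquad
\operatorname{SE}(\hat{\theta}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
```

where each study i contributes an effect estimate and k is the number of included studies; random-effects models extend this by adding a between-study variance term to each weight.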

Tips for writing a good review article

Here are a few practices that can make the time-consuming process of writing a review article easier:

  • Define your question: Take your time to identify the research question and carefully articulate the topic of your review paper. A good review should also add something new to the field in terms of a hypothesis, inference, or conclusion. A carefully defined scientific question will give you more clarity in determining the novelty of your inferences.
  • Identify credible sources: Identify relevant as well as credible studies that you can base your review on, with the help of multiple databases or search engines. It is also a good idea to conduct another search once you have finished your article to avoid missing relevant studies published during the course of your writing.
  • Take notes: A literature search involves extensive reading, which can make it difficult to recall relevant information subsequently. Therefore, make notes while conducting the literature search and note down the source references. This will ensure that you have sufficient information to start with when you finally get to writing.
  • Describe the title, abstract, and introduction: A good starting point to begin structuring your review is by drafting the title, abstract, and introduction. Explicitly writing down what your review aims to address in the field will help shape the rest of your article.
  • Be unbiased and critical: Evaluate every piece of evidence in a critical but unbiased manner. This will help you present a proper assessment and a critical discussion in your article.
  • Include a good summary: End by stating the take-home message and identify the limitations of existing studies that need to be addressed through future studies.
  • Ask for feedback: Ask a colleague to provide feedback on both the content and the language or tone of your article before you submit it.
  • Check your journal’s guidelines: Some journals only publish reviews, while some only publish research articles. Further, all journals clearly indicate their aims and scope. Therefore, make sure to check the appropriateness of a journal before submitting your article.

Writing review articles, especially systematic reviews or meta-analyses, can seem like a daunting task, but following the practices above makes the process considerably more manageable.



Here’s why AI search engines really can’t kill Google

The AI search tools are getting better — but they don’t yet understand what a search engine really is and how we really use them.

By David Pierce, editor-at-large and Vergecast co-host with over a decade of experience covering consumer tech. Previously at Protocol, The Wall Street Journal, and Wired.


AI is coming for the search business. Or so we’re told. As Google seems to keep getting worse, and tools like ChatGPT, Google Gemini, and Microsoft Copilot seem to keep getting better, we appear to be barreling toward a new way to find and consume information online. Companies like Perplexity and You.com are pitching themselves as next-gen search products, and even Google and Bing are making huge bets that AI is the future of search . Bye bye, 10 blue links; hello direct answers to all my weird questions about the world.

But the thing you have to understand about a search engine is that a search engine is many things. For all the people using Google to find important and hard-to-access scientific information, orders of magnitude more are using it to find their email inbox, get to Walmart’s website, or remember who was president before Hoover. And then there’s my favorite fact of all: that a vast number of people every year go to Google and type “google” into the search box. We mostly talk about Google as a research tool, but in reality, it’s asked to do anything and everything you can think of, billions of times a day.

The real question in front of all these would-be Google killers, then, is not how well they can find information. It’s how well they can do everything Google does. So I decided to put some of the best new AI products to the real test: I grabbed the latest list of most-Googled queries and questions according to the SEO research firm Ahrefs and plugged them into various AI tools. In some instances, I found that these language model-based bots are genuinely more useful than a page of Google results. But in most cases, I discovered exactly how hard it will be for anything — AI or otherwise — to replace Google at the center of the web.

People who work in search always say there are basically three types of queries. First and most popular is navigation, which is just people typing the name of a website to get to that website. Virtually all of the top queries on Google, from “youtube” to “wordle” to “yahoo mail,” are navigation queries. In actual reality, this is a search engine’s primary job: to get you to a website.

In actual reality, a search engine’s primary job is to get you to a website

For navigational queries, AI search engines are universally worse than Google. When you do a navigational Google search, it’s exceedingly rare that the first result isn’t the one you’re looking for — sure, it’s odd to show you all those results when what Google should actually do is just take you directly to amazon.com or whatever, but it’s fast and it’s rarely wrong. The AI bots, on the other hand, like to think for a few seconds and then provide a bunch of quasi-useful information about the company when all I want is a link. Some didn’t even link to amazon.com.

I don’t hate the additional information so much as I hate how long these AI tools take to get me what I need. Waiting 10 seconds for three paragraphs of generated text about Home Depot is not the answer; I just want a link to Home Depot. Google wins that race every time.

The next most popular kind of search is the information query: you want to know something specific, about which there is a single right answer. “NFL scores” is a hugely popular information query; “what time is it” is another one; so is “weather.” It doesn’t matter who tells you the score or the time or the temperature, it’s just a thing you need to know.

[Screenshot: Perplexity showing a search result for Warriors scores.]

Here, the results are all over the map. For real-time stuff like sports scores, the AI is not to be trusted: You.com and Perplexity both frequently gave me outdated information, though Copilot usually got it right. Google not only gets it right but usually pops up a widget with other stats and information, which is better than the others. Ditto anything requiring your specific location or context — Google probably has that information about you, but the AI bots mostly don’t.

When it comes to more evergreen information like “how many weeks in a year” or “when is mother’s day,” everything I tested got it right. In many cases I actually preferred the AI answers, which add a bit of helpful context. But I’m not sure how often I can trust them. Google told me there are 52.1429 weeks in a year, but You.com explained that actually it’s 52 weeks and a day, plus an added day on leap years. That’s more useful than just 52.1429! But then Perplexity told me that actually, a common year is 52 weeks, and a leap year is 52 weeks and a day — before directly contradicting itself two sentences later. Here’s the whole answer; just try to make sense of it:

A common year has approximately 52 weeks, while a leap year has 52 weeks and 1 day. In more precise terms, a regular year actually consists of 52.143 weeks, which means there is one additional day in a normal year. On the other hand, a leap year, occurring every four years except for certain exceptions, has 52 weeks and 2 days. This difference in the number of weeks is due to the extra day in a common year and the additional day in February during a leap year.

After doing some more research, I am now confident that the answer is what You.com said. But this all took too long, and forcing me to fact-check my searches kind of defeats the purpose of helpfully summarizing things for me. Google continues to win here on one thing and one thing alone: speed. 
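(For the record, the underlying arithmetic is simple: 365 = 52 × 7 + 1, so a common year is 52 weeks plus one day, which is where the 365 ÷ 7 ≈ 52.14 figure comes from, and a 366-day leap year is 52 weeks plus two days.)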

There is one sub-genre of information queries in which the exact opposite is true, though. I call them Buried Information Queries. The best example I can offer is the very popular query, “how to screenshot on mac.” There are a million pages on the internet that contain the answer — it’s just Cmd-Shift-3 to take the whole screen or Cmd-Shift-4 to capture a selection, there, you’re welcome — but that information is usually buried under a lot of ads and SEO crap. All the AI tools I tried, including Google’s own Search Generative Experience, just snatch that information out and give it to you directly. This is great! 

[Screenshot: Copilot explaining how to take a screenshot on a Mac.]

Are there complicated questions inherent in that, which threaten the business model and structure of the web? Yep ! But as a pure searching experience, it’s vastly better. I’ve had similar results asking about ingredient substitutions, coffee ratios, headphone waterproofing ratings, and any other information that is easy to know and yet often too hard to find. 

This brings me to the third kind of Google search: the exploration query. These are questions that don’t have a single answer, that are instead the beginning of a learning process. On the most popular list, things like “how to tie a tie,” “why were chainsaws invented,” and “what is tiktok” count as explorational queries. If you ever Googled the name of a musician you just heard about, or have looked up things like “stuff to do in Helena Montana” or “NASA history,” you’re exploring. These are not, according to the rankings, the primary things people use Google for. But these are the moments AI search engines can shine.

Like, wait: why were chainsaws invented? Copilot gave me a multipart answer about their medical origins, before describing their technological evolution and eventual adoption by lumberjacks. It also gave me eight pretty useful links to read more. Perplexity gave me a much shorter answer, but also included a few cool images of old chainsaws and a link to a YouTube explainer on the subject. Google’s results included a lot of the same links, but did none of the synthesizing for me. Even its generative search only gave me the very basics.

My favorite thing about the AI engines is the citations. Perplexity, You.com, and others are slowly getting better at linking to their sources, often inline, which means that if I come across a particular fact that piques my interest, I can go straight to the source from there. They don’t always offer enough sources, or put them in the right places, but this is a good and helpful trend.

One experience I had while doing these tests was actually the most eye-opening of all. The single most-searched question on Google is a simple one: “what to watch.” Google has a whole specific page design for this, with rows of posters featuring “Top picks” like Dune: Part Two and Imaginary ; “For you” which for me included Deadpool and Halt and Catch Fire ; and then popular titles and genre-sorted options. None of the AI search engines did as well: Copilot listed five popular movies; Perplexity offered a random-seeming smattering of options from Girls5eva to Manhunt to Shogun ; You.com gave me a bunch of out of date information and recommended I watch “the 14 best Netflix original movies” without telling me what they are.

AI is the right idea but a chatbot is the wrong interface

In this case, AI is the right idea — I don’t want a bunch of links, I want an answer to my question — but a chatbot is the wrong interface. For that matter, so is a page of search results! Google, obviously aware that this is the most-asked question on the platform, has been able to design something that works much better.

In a way, that’s a perfect summary of the state of things. At least for some web searches, generative AI could be a better tool than the search tech of decades past. But modern search engines aren’t just pages of links. They’re more like miniature operating systems. They can answer questions directly, they have calculators and converters and flight pickers and all kinds of other tools built right in, they can get you where you’re going with just a click or two. The goal of most search queries, according to these charts, is not to start a journey of information wonder and discovery. The goal is to get a link or an answer, and then get out. Right now, these LLM-based systems are just too slow to compete.

The big question, I think, is less about tech and more about product. Everyone, including Google, believes that AI can help search engines understand questions and process information better. That’s a given in the industry at this point. But can Google reinvent its results pages, its business model, and the way it presents and summarizes and surfaces information, faster than the AI companies can turn their chatbots into more complex, more multifaceted tools? Ten blue links isn’t the answer for search, but neither is an all-purpose text box. Search is everything, and everything is search. It’s going to take a lot more than a chatbot to kill Google.



Computer Science > Human-Computer Interaction

Title: Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap

Abstract: Given the uncertainty surrounding how existing explainability methods for autonomous vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is imperative to determine the contexts requiring explanations and suitable interaction strategies. A comprehensive review becomes crucial to assess the alignment of current approaches with the varied interests and expectations within the AV ecosystem. This study presents a review to discuss the complexities associated with explanation generation and presentation to facilitate the development of more effective and inclusive explainable AV systems. Our investigation led to categorising existing literature into three primary topics: explanatory tasks, explanatory information, and explanatory information communication. Drawing upon our insights, we have proposed a comprehensive roadmap for future research centred on (i) knowing the interlocutor, (ii) generating timely explanations, (iii) communicating human-friendly explanations, and (iv) continuous learning. Our roadmap is underpinned by principles of responsible research and innovation, emphasising the significance of diverse explanation requirements. To effectively tackle the challenges associated with implementing explainable AV systems, we have delineated various research directions, including the development of privacy-preserving data integration, ethical frameworks, real-time analytics, human-centric interaction design, and enhanced cross-disciplinary collaborations. By exploring these research directions, the study aims to guide the development and deployment of explainable AVs, informed by a holistic understanding of user needs, technological advancements, regulatory compliance, and ethical considerations, thereby ensuring safer and more trustworthy autonomous driving experiences.



Applicant Guidance for Simplifying the Review Framework for Most Research Project Grants

Although the simplified review framework has little impact on what is included in an application, it does have significant impact on the funding opportunities used to apply. This page provides practical guidance for applicants navigating funding opportunities through this transition.

How to Tell if Your Application Will be Impacted

Simplified peer review applies to most, but not all research project grants (RPGs). For example, none of our small business or complex, multi-project grants are included in this initiative.

Applicability

  • Activity codes: R01, R03, R15, R16, R21, R33, R34, R36, R61, RC1, RC2, RC4, RF1, RL1, RL2, U01, U34, U3R, UA5, UC1, UC2, UC4, UF1, UG3, UH2, UH3, UH5 (including the following phased awards: R21/R33, UH2/UH3, UG3/UH3, R61/R33).
  • Applications submitted to due dates on or after January 25, 2025.

Tips for Applicants

  • Funding opportunities with a mix of due dates before and after January 25, 2025 may be reissued and/or expired early since a single opportunity cannot accommodate two sets of review information.

Ensuring You Are Applying to the Right Funding Opportunity Using the Correct Forms

Need to move an existing application to a different funding opportunity?

Take advantage of copy features in ASSIST, Grants.gov Workspace, and many institutional submission systems.

ASSIST Online Help: Copy Application

Grants.gov Online Help: Copy Workspace

As NIH implements the simplified review framework, you will find funding opportunities being expired and/or reissued to ensure applicants are presented with the correct review information for their target due date.

While there are no application form changes associated with this initiative, NIH is moving to new application forms (FORMS-I) to support other initiatives for due dates on or after January 25, 2025 ( NOT-OD-24-086 ). Therefore, all funding opportunities with the new review framework will also include updated forms.

Using the correct funding opportunity and application form version for your due date is critical to success. Applications submitted using a funding opportunity that is no longer available for a specific due date or submitted using the incorrect form version will be withdrawn and removed from funding consideration.

Tips for Applicants Applying to Impacted Funding Opportunities for Due Dates on or after January 25, 2025

  • The simplified review framework applies.
  • As always, be sure to be responsive to the application requirements in Section IV. Application and Submission Information as well as the review criteria in Section V. Application Review Information of the funding opportunity when preparing your application.
  • You must apply to a funding opportunity that includes the simplified review framework in Section V. Application Review Information of the funding opportunity (i.e., review criteria are organized into three factors).
  • See Do I Have the Right Forms for My Application? for help identifying the competition ID.


  • Application forms and associated instructions will be added at least 30 days and, frequently 60 days or more, prior to the first due date.
  • You can begin drafting your application attachments (Specific Aims, Research Plan, etc.) using funding opportunity and current (FORMS-H) application guide instructions and make any needed adjustments for other initiatives once FORMS-I instructions are available.

Tips for Applicants Submitting to a Due Date on or before January 24, 2025

  • The simplified review framework does not apply.
  • You must apply to a funding opportunity that includes the legacy five stand-alone criteria in Section V. Application Review Information.
  • Some opportunities may be extended to allow additional due dates prior to January 25.

Applicants are encouraged to contact an NIH Program Official if they still have questions or need additional clarification.

This page last updated on: April 4, 2024


  • Open access
  • Published: 26 March 2024

Predicting and improving complex beer flavor through machine learning

Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni, Lloyd Cool, Beatriz Herrera-Malaver, Christophe Vanderaa, Florian A. Theßeling, Łukasz Kreft, Alexander Botzki, Philippe Malcorps, Luk Daenen, Tom Wenseleers & Kevin J. Verstrepen
Nature Communications, volume 15, Article number 2368 (2024)


Subjects: Chemical engineering, Gas chromatography, Machine learning, Metabolomics, Taste receptors

The perception and appreciation of food flavor depends on many interacting chemical compounds and external factors, and therefore proves challenging to understand and predict. Here, we combine extensive chemical and sensory analyses of 250 different beers to train machine learning models that allow predicting flavor and consumer appreciation. For each beer, we measure over 200 chemical properties, perform quantitative descriptive sensory analysis with a trained tasting panel and map data from over 180,000 consumer reviews to train 10 different machine learning models. The best-performing algorithm, Gradient Boosting, yields models that significantly outperform predictions based on conventional statistics and accurately predict complex food features and consumer appreciation from chemical profiles. Model dissection allows identifying specific and unexpected compounds as drivers of beer flavor and appreciation. Adding these compounds results in variants of commercial alcoholic and non-alcoholic beers with improved consumer appreciation. Together, our study reveals how big data and machine learning uncover complex links between food chemistry, flavor and consumer perception, and lays the foundation to develop novel, tailored foods with superior flavors.


Introduction

Predicting and understanding food perception and appreciation is one of the major challenges in food science. Accurate modeling of food flavor and appreciation could yield important opportunities for both producers and consumers, including quality control, product fingerprinting, counterfeit detection, spoilage detection, and the development of new products and product combinations (food pairing) 1 , 2 , 3 , 4 , 5 , 6 . Accurate models for flavor and consumer appreciation would contribute greatly to our scientific understanding of how humans perceive and appreciate flavor. Moreover, accurate predictive models would also facilitate and standardize existing food assessment methods and could supplement or replace assessments by trained and consumer tasting panels, which are variable, expensive and time-consuming 7 , 8 , 9 . Lastly, apart from providing objective, quantitative, accurate and contextual information that can help producers, models can also guide consumers in understanding their personal preferences 10 .

Despite the myriad of applications, predicting food flavor and appreciation from its chemical properties remains a largely elusive goal in sensory science, especially for complex foods and beverages 11 , 12 . A key obstacle is the immense number of flavor-active chemicals underlying food flavor. Flavor compounds can vary widely in chemical structure and concentration, making them technically challenging and labor-intensive to quantify, even in the face of innovations in metabolomics, such as non-targeted metabolic fingerprinting 13 , 14 . Moreover, sensory analysis is perhaps even more complicated. Flavor perception is highly complex, resulting from hundreds of different molecules interacting at the physicochemical and sensorial level. Sensory perception is often non-linear, characterized by complex and concentration-dependent synergistic and antagonistic effects 15 , 16 , 17 , 18 , 19 , 20 , 21 that are further convoluted by the genetics, environment, culture and psychology of consumers 22 , 23 , 24 . Perceived flavor is therefore difficult to measure, with problems of sensitivity, accuracy, and reproducibility that can only be resolved by gathering sufficiently large datasets 25 . Trained tasting panels are considered the prime source of quality sensory data, but they require meticulous training and are low-throughput and costly. Public databases containing consumer reviews of food products could provide a valuable alternative, especially for studying appreciation scores, which do not require formal training 25 . Public databases offer the advantage of amassing large amounts of data, increasing the statistical power to identify potential drivers of appreciation. However, public datasets suffer from biases, including a bias in the volunteers that contribute to the database, as well as confounding factors such as price, cult status and psychological conformity towards previous ratings of the product.

Classical multivariate statistics and machine learning methods have been used to predict flavor of specific compounds by, for example, linking structural properties of a compound to its potential biological activities or linking concentrations of specific compounds to sensory profiles 1 , 26 . Importantly, most previous studies focused on predicting organoleptic properties of single compounds (often based on their chemical structure) 27 , 28 , 29 , 30 , 31 , 32 , 33 , thus ignoring the fact that these compounds are present in a complex matrix in food or beverages and excluding complex interactions between compounds. Moreover, the classical statistics commonly used in sensory science 34 , 35 , 36 , 37 , 38 , 39 require a large sample size and sufficient variance amongst predictors to create accurate models. They are not fit for studying an extensive set of hundreds of interacting flavor compounds, since they are sensitive to outliers, have a high tendency to overfit and are less suited for non-linear and discontinuous relationships 40 .

In this study, we combine extensive chemical analyses and sensory data of a set of different commercial beers with machine learning approaches to develop models that predict taste, smell, mouthfeel and appreciation from compound concentrations. Beer is particularly suited to model the relationship between chemistry, flavor and appreciation. First, beer is a complex product, consisting of thousands of flavor compounds that partake in complex sensory interactions 41 , 42 , 43 . This chemical diversity arises from the raw materials (malt, yeast, hops, water and spices) and biochemical conversions during the brewing process (kilning, mashing, boiling, fermentation, maturation and aging) 44 , 45 . Second, the advent of the internet saw beer consumers embrace online review platforms, such as RateBeer (ZX Ventures, Anheuser-Busch InBev SA/NV) and BeerAdvocate (Next Glass, inc.). In this way, the beer community provides massive data sets of beer flavor and appreciation scores, creating extraordinarily large sensory databases to complement the analyses of our professional sensory panel. Specifically, we characterize over 200 chemical properties of 250 commercial beers, spread across 22 beer styles, and link these to the descriptive sensory profiling data of a 16-person in-house trained tasting panel and data acquired from over 180,000 public consumer reviews. These unique and extensive datasets enable us to train a suite of machine learning models to predict flavor and appreciation from a beer’s chemical profile. Dissection of the best-performing models allows us to pinpoint specific compounds as potential drivers of beer flavor and appreciation. Follow-up experiments confirm the importance of these compounds and ultimately allow us to significantly improve the flavor and appreciation of selected commercial beers. Together, our study represents a significant step towards understanding complex flavors and reinforces the value of machine learning to develop and refine complex foods. In this way, it represents a stepping stone for further computer-aided food engineering applications 46 .

To generate a comprehensive dataset on beer flavor, we selected 250 commercial Belgian beers across 22 different beer styles (Supplementary Fig.  S1 ). Beers with ≤ 4.2% alcohol by volume (ABV) were classified as non-alcoholic and low-alcoholic. Blonds and Tripels constitute a significant portion of the dataset (12.4% and 11.2%, respectively) reflecting their presence on the Belgian beer market and the heterogeneity of beers within these styles. By contrast, lager beers are less diverse and dominated by a handful of brands. Rare styles such as Brut or Faro make up only a small fraction of the dataset (2% and 1%, respectively) because fewer of these beers are produced and because they are dominated by distinct characteristics in terms of flavor and chemical composition.

Extensive analysis identifies relationships between chemical compounds in beer

For each beer, we measured 226 different chemical properties, including common brewing parameters such as alcohol content, iso-alpha acids, pH, sugar concentration 47 , and over 200 flavor compounds (Methods, Supplementary Table  S1 ). A large portion (37.2%) of these are terpenoids arising from hopping, responsible for herbal and fruity flavors 16 , 48 . A second major category comprises yeast metabolites, such as esters and alcohols, which produce fruity and solvent notes 48 , 49 , 50 . Other measured compounds are primarily derived from malt or from other microbes such as non- Saccharomyces yeasts and bacteria (‘wild flora’). Compounds that arise from spices or staling are labeled under ‘Others’. Five attributes (caloric value, total acids, total esters, hop aroma and sulfur compounds) are calculated from multiple individually measured compounds.

As a first step in identifying relationships between chemical properties, we determined correlations between the concentrations of the compounds (Fig.  1 , upper panel; Supplementary Data  1 and 2 ; Supplementary Fig.  S2 ; for the sake of clarity, only a subset of the measured compounds is shown in Fig.  1 ). Compounds of the same origin typically show a positive correlation, while absence of correlation hints at parameters varying independently. For example, the hop aroma compounds citronellol and alpha-terpineol show moderate correlations with each other (Spearman’s rho=0.39 and 0.57), but not with the bittering hop component iso-alpha acids (Spearman’s rho=0.16 and −0.07). This illustrates how brewers can independently modify hop aroma and bitterness by selecting hop varieties and dosage time. If hops are added early in the boiling phase, chemical conversions increase bitterness while aromas evaporate; conversely, late addition of hops preserves aroma but limits bitterness 51 . Similarly, hop-derived iso-alpha acids show a strong anti-correlation with lactic acid and acetic acid, likely reflecting growth inhibition of lactic acid and acetic acid bacteria, or the consequent use of fewer hops in sour beer styles, such as West Flanders ales and Fruit beers, that rely on these bacteria for their distinct flavors 52 . Finally, yeast-derived esters (ethyl acetate, ethyl decanoate, ethyl hexanoate, ethyl octanoate) and alcohols (ethanol, isoamyl alcohol, isobutanol, and glycerol) correlate with Spearman coefficients above 0.5, suggesting that these secondary metabolites are linked to the yeast genetic background and/or fermentation parameters and may be difficult to influence individually, although the choice of yeast strain may offer some control 53 .
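To illustrate how such a pairwise correlation analysis can be carried out, the short Python sketch below computes a Spearman correlation matrix with pandas. The file name and column names are hypothetical placeholders, not the study’s actual data layout.

```python
import pandas as pd

# Hypothetical layout: one row per beer, one column per measured chemical property.
chem = pd.read_csv("chemical_properties.csv", index_col="beer_id")

# Pairwise Spearman rank correlations between all chemical properties.
spearman_rho = chem.corr(method="spearman")

# Example lookup: a hop aroma compound versus the bittering iso-alpha acids
# (column names are illustrative).
print(spearman_rho.loc["citronellol", "iso_alpha_acids"])
```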

Figure 1. Spearman rank correlations are shown. Descriptors are grouped according to their origin (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)), and sensory aspect (aroma, taste, palate, and overall appreciation). Please note that for the chemical compounds, for the sake of clarity, only a subset of the total number of measured compounds is shown, with an emphasis on the key compounds for each source. For more details, see the main text and Methods section. Chemical data can be found in Supplementary Data  1 , correlations between all chemical compounds are depicted in Supplementary Fig.  S2 and correlation values can be found in Supplementary Data  2 . See Supplementary Data  4 for sensory panel assessments and Supplementary Data  5 for correlation values between all sensory descriptors.

Interestingly, different beer styles show distinct patterns for some flavor compounds (Supplementary Fig.  S3 ). These observations agree with expectations for key beer styles, and serve as a control for our measurements. For instance, Stouts generally show high values for color (darker), while hoppy beers contain elevated levels of iso-alpha acids, compounds associated with bitter hop taste. Acetic and lactic acid are not prevalent in most beers, with notable exceptions such as Kriek, Lambic, Faro, West Flanders ales and Flanders Old Brown, which use acid-producing bacteria ( Lactobacillus and Pediococcus ) or unconventional yeast ( Brettanomyces ) 54 , 55 . Glycerol, ethanol and esters show similar distributions across all beer styles, reflecting their common origin as products of yeast metabolism during fermentation 45 , 53 . Finally, low/no-alcohol beers contain low concentrations of glycerol and esters. This is in line with the production process for most of the low/no-alcohol beers in our dataset, which are produced through limiting fermentation or by stripping away alcohol via evaporation or dialysis, with both methods having the unintended side-effect of reducing the amount of flavor compounds in the final beer 56 , 57 .

Besides expected associations, our data also reveals less trivial associations between beer styles and specific parameters. For example, geraniol and citronellol, two monoterpenoids responsible for citrus, floral and rose flavors and characteristic of Citra hops, are found in relatively high amounts in Christmas, Saison, and Brett/co-fermented beers, where they may originate from terpenoid-rich spices such as coriander seeds instead of hops 58 .

Tasting panel assessments reveal sensorial relationships in beer

To assess the sensory profile of each beer, a trained tasting panel evaluated each of the 250 beers for 50 sensory attributes, including different hop, malt and yeast flavors, off-flavors and spices. Panelists used a tasting sheet (Supplementary Data  3 ) to score the different attributes. Panel consistency was evaluated by repeating 12 samples across different sessions and performing ANOVA. In 95% of cases no significant difference was found across sessions ( p  > 0.05), indicating good panel consistency (Supplementary Table  S2 ).

Aroma and taste perception reported by the trained panel are often linked (Fig.  1 , bottom left panel and Supplementary Data  4 and 5 ), with high correlations between hops aroma and taste (Spearman’s rho=0.83). Bitter taste was found to correlate with hop aroma and taste in general (Spearman’s rho=0.80 and 0.69), and particularly with “grassy” noble hops (Spearman’s rho=0.75). Barnyard flavor, most often associated with sour beers, is identified together with stale hops (Spearman’s rho=0.97) that are used in these beers. Lactic and acetic acid, which often co-occur, are correlated (Spearman’s rho=0.66). Interestingly, sweetness and bitterness are anti-correlated (Spearman’s rho = −0.48), confirming the hypothesis that they mask each other 59 , 60 . Beer body is highly correlated with alcohol (Spearman’s rho = 0.79), and overall appreciation is found to correlate with multiple aspects that describe beer mouthfeel (alcohol, carbonation; Spearman’s rho= 0.32, 0.39), as well as with hop and ester aroma intensity (Spearman’s rho=0.39 and 0.35).

Similar to the chemical analyses, sensorial analyses confirmed typical features of specific beer styles (Supplementary Fig.  S4 ). For example, sour beers (Faro, Flanders Old Brown, Fruit beer, Kriek, Lambic, West Flanders ale) were rated acidic, with flavors of both acetic and lactic acid. Hoppy beers were found to be bitter and showed hop-associated aromas like citrus and tropical fruit. Malt taste is most detected among scotch, stout/porters, and strong ales, while low/no-alcohol beers, which often have a reputation for being ‘worty’ (reminiscent of unfermented, sweet malt extract) appear in the middle. Unsurprisingly, hop aromas are most strongly detected among hoppy beers. Like its chemical counterpart (Supplementary Fig.  S3 ), acidity shows a right-skewed distribution, with the most acidic beers being Krieks, Lambics, and West Flanders ales.

Tasting panel assessments of specific flavors correlate with chemical composition

We find that the concentrations of several chemical compounds strongly correlate with specific aroma or taste, as evaluated by the tasting panel (Fig.  2 , Supplementary Fig.  S5 , Supplementary Data  6 ). In some cases, these correlations confirm expectations and serve as a useful control for data quality. For example, iso-alpha acids, the bittering compounds in hops, strongly correlate with bitterness (Spearman’s rho=0.68), while ethanol and glycerol correlate with tasters’ perceptions of alcohol and body, the mouthfeel sensation of fullness (Spearman’s rho=0.82/0.62 and 0.72/0.57 respectively) and darker color from roasted malts is a good indication of malt perception (Spearman’s rho=0.54).

Figure 2. Heatmap colors indicate Spearman’s rho. Axes are organized according to sensory categories (aroma, taste, mouthfeel, overall), chemical categories and chemical sources in beer (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)). See Supplementary Data  6 for all correlation values.

Interestingly, for some relationships between chemical compounds and perceived flavor, correlations are weaker than expected. For example, the rose-smelling phenethyl acetate only weakly correlates with floral aroma. This hints at more complex relationships and interactions between compounds and suggests a need for a more complex model than simple correlations. Lastly, we uncovered unexpected correlations. For instance, the esters ethyl decanoate and ethyl octanoate appear to correlate slightly with hop perception and bitterness, possibly due to their fruity flavor. Iron is anti-correlated with hop aromas and bitterness, most likely because it is also anti-correlated with iso-alpha acids. This could be a sign of metal chelation of hop acids 61 , given that our analyses measure unbound hop acids and total iron content, or could result from the higher iron content in dark and Fruit beers, which typically have less hoppy and bitter flavors 62 .

Public consumer reviews complement expert panel data

To complement and expand the sensory data of our trained tasting panel, we collected over 180,000 reviews of our 250 beers from the online consumer review platform RateBeer. This provided numerical scores for beer appearance, aroma, taste, palate and overall quality, as well as the average overall score.

Public datasets are known to suffer from biases, such as price, cult status and psychological conformity towards previous ratings of a product. For example, prices correlate with appreciation scores for these online consumer reviews (rho=0.49, Supplementary Fig.  S6 ), but not for our trained tasting panel (rho=0.19). This suggests that prices affect consumer appreciation, as has been reported for wine 63 , whereas blind tastings are unaffected. Moreover, we observe that some beer styles, like lagers and non-alcoholic beers, generally receive lower scores, reflecting that online reviewers are mostly beer aficionados with a preference for specialty beers over lager beers. In general, we find a modest correlation between our trained panel’s overall appreciation score and the online consumer appreciation scores (Fig.  3 , rho=0.29). Apart from the aforementioned biases in the online datasets, serving temperature, sample freshness and surroundings, which are all tightly controlled during the tasting panel sessions, can vary tremendously across online consumers and can further contribute to differences (in appreciation, among other attributes) between the two categories of tasters. Importantly, in contrast to the overall appreciation scores, for many sensory aspects the results from the professional panel correlated well with results obtained from RateBeer reviews. Correlations were highest for features that are relatively easy to recognize even for untrained tasters, like bitterness, sweetness, alcohol and malt aroma (Fig.  3 and below).

Figure 3. RateBeer text mining results can be found in Supplementary Data  7 . Rho values shown are Spearman correlation values, with asterisks indicating significant correlations ( p  < 0.05, two-sided). All p values were smaller than 0.001, except for Esters aroma (0.0553), Esters taste (0.3275), Esters aroma—banana (0.0019), Coriander (0.0508) and Diacetyl (0.0134).

Besides collecting consumer appreciation from these online reviews, we developed automated text analysis tools to gather additional data from review texts (Supplementary Data  7 ). Processing review texts on the RateBeer database yielded comparable results to the scores given by the trained panel for many common sensory aspects, including acidity, bitterness, sweetness, alcohol, malt, and hop tastes (Fig.  3 ). This is in line with what would be expected, since these attributes require less training for accurate assessment and are less influenced by environmental factors such as temperature, serving glass and odors in the environment. Consumer reviews also correlate well with our trained panel for 4-vinyl guaiacol, a compound associated with a very characteristic aroma. By contrast, more specific aromas, such as ester, coriander or diacetyl, are underrepresented in the online reviews and correlate less well with the panel scores, underscoring the importance of using a trained tasting panel and standardized tasting sheets with explicit factors to be scored for evaluating specific aspects of a beer. Taken together, our results suggest that public reviews are trustworthy for some, but not all, flavor features and can complement or substitute taste panel data for these sensory aspects.

Models can predict beer sensory profiles from chemical data

The rich datasets of chemical analyses, tasting panel assessments and public reviews gathered in the first part of this study provided us with a unique opportunity to develop predictive models that link chemical data to sensorial features. Given the complexity of beer flavor, basic statistical tools such as correlations or linear regression may not always be the most suitable for making accurate predictions. Instead, we applied different machine learning models that can capture both simple linear and complex interactive relationships. Specifically, we constructed a set of regression models to predict (a) trained panel scores for beer flavor and quality and (b) public reviews’ appreciation scores from beer chemical profiles. We trained and tested 10 different models (Methods): 3 linear regression-based models (simple linear regression with first-order interactions (LR), lasso regression with first-order interactions (Lasso), partial least squares regressor (PLSR)), 5 decision tree-based models (AdaBoost regressor (ABR), extra trees (ET), gradient boosting regressor (GBR), random forest (RF) and XGBoost regressor (XGBR)), 1 support vector regressor (SVR), and 1 artificial neural network (ANN) model.

To compare the performance of our machine learning models, the dataset was randomly split into a training and test set, stratified by beer style. After a model was trained on data in the training set, its performance was evaluated by its ability to predict the test set, using the coefficient of determination of multi-output models (see Methods). Additionally, individual-attribute models were ranked per descriptor and the average rank was calculated, as proposed by Korneva et al. 64 . Importantly, both ways of evaluating the models’ performance agreed in general. Performance of the different models varied (Table  1 ). It should be noted that all models perform better at predicting RateBeer results than results from our trained tasting panel. One reason could be that sensory data is inherently variable, and this variability is averaged out with the large number of public reviews from RateBeer. Additionally, all tree-based models perform better at predicting taste than aroma. Linear models (LR) performed particularly poorly, with negative R 2 values, due to severe overfitting (training set R 2  = 1). Overfitting is a common issue in linear models with many parameters and limited samples, especially with interaction terms further amplifying the number of parameters. L1 regularization (Lasso) successfully overcomes this overfitting, outcompeting multiple tree-based models on the RateBeer dataset. Similarly, the dimensionality reduction of PLSR avoids overfitting and improves performance, to some extent. Still, tree-based models (ABR, ET, GBR, RF and XGBR) show the best performance, outcompeting the linear models (LR, Lasso, PLSR) commonly used in sensory science 65 .
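As a rough illustration of this workflow (not the authors’ actual code), the Python sketch below performs a style-stratified 70/30 split and fits one of the ten model families, a gradient boosting regressor, with scikit-learn, scoring the held-out set with R 2 . The data are synthetic stand-ins; hyperparameter tuning (see Methods) is omitted.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 250 "beers", 226 chemical properties, one sensory target,
# and a style label used for stratification (all hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(250, 226)),
                 columns=[f"chem_{i}" for i in range(226)])
y = X["chem_0"] * 0.5 + rng.normal(scale=0.5, size=250)    # toy target
styles = rng.choice(["blond", "tripel", "lager", "sour"], size=250)

# 70/30 split, stratified per beer style, as described in the Methods.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=styles, random_state=0
)

# One of the ten evaluated model families: gradient boosting regression.
gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Test-set performance as the coefficient of determination (R^2).
print("test R^2:", r2_score(y_test, gbr.predict(X_test)))
```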

GBR models showed the best overall performance in predicting sensory responses from chemical information, with R 2 values up to 0.75 depending on the predicted sensory feature (Supplementary Table  S4 ). The GBR models predict consumer appreciation (RateBeer) better than our trained panel’s appreciation (R 2 value of 0.67 compared to R 2 value of 0.09) (Supplementary Table  S3 and Supplementary Table  S4 ). ANN models showed intermediate performance, likely because neural networks typically perform best with larger datasets 66 . The SVR shows intermediate performance, mostly due to the weak predictions of specific attributes that lower the overall performance (Supplementary Table  S4 ).

Model dissection identifies specific, unexpected compounds as drivers of consumer appreciation

Next, we leveraged our models to infer important contributors to sensory perception and consumer appreciation. Consumer preference is a crucial sensory aspect, because a product that shows low consumer appreciation scores often does not succeed commercially 25 . Additionally, the requirement for a large number of representative evaluators makes consumer trials one of the more costly and time-consuming aspects of product development. Hence, a model for predicting chemical drivers of overall appreciation would be a welcome addition to the available toolbox for food development and optimization.

Since GBR models on our RateBeer dataset showed the best overall performance, we focused on these models. Specifically, we used two approaches to identify important contributors. First, rankings of the most important predictors for each sensorial trait in the GBR models were obtained based on impurity-based feature importance (mean decrease in impurity). High-ranked parameters were hypothesized to either be the true causal chemical properties underlying the trait, correlate with the actual causal properties, or take part in sensory interactions affecting the trait 67 (Fig.  4A ). In a second approach, we used SHAP 68 to determine which parameters contributed most to the model for making predictions of consumer appreciation (Fig.  4B ). SHAP calculates parameter contributions to model predictions on a per-sample basis, which can be aggregated into an importance score.
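A minimal sketch of these two dissection approaches, continuing the hypothetical example above, could look as follows; it assumes a fitted gradient boosting model `gbr` and feature table `X_train`, and uses the SHAP package’s tree explainer.

```python
import pandas as pd
import shap  # the 'SHAP' package referenced in the Methods

# Approach 1: impurity-based importance (mean decrease in impurity, MDI).
mdi = pd.Series(gbr.feature_importances_, index=X_train.columns)
print(mdi.sort_values(ascending=False).head(15))

# Approach 2: SHAP values, computed per sample and aggregated into a
# mean-absolute importance score per parameter.
explainer = shap.TreeExplainer(gbr)
shap_values = explainer.shap_values(X_train)
shap_importance = pd.DataFrame(shap_values, columns=X_train.columns).abs().mean()
print(shap_importance.sort_values(ascending=False).head(15))

# Beeswarm-style overview comparable to Fig. 4B.
shap.summary_plot(shap_values, X_train, max_display=15)
```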

Figure 4. A The impurity-based feature importance (mean decrease in impurity, MDI) calculated from the Gradient Boosting Regression (GBR) model predicting RateBeer appreciation scores. The top 15 highest ranked chemical properties are shown. B SHAP summary plot for the top 15 parameters contributing to our GBR model. Each point on the graph represents a sample from our dataset. The color represents the concentration of that parameter, with bluer colors representing low values and redder colors representing higher values. Greater absolute values on the horizontal axis indicate a higher impact of the parameter on the prediction of the model. C Spearman correlations between the 15 most important chemical properties and consumer overall appreciation. Numbers indicate the Spearman rho correlation coefficient, and the rank of this correlation compared to all other correlations. The top 15 important compounds were determined using SHAP (panel B).

Both approaches identified ethyl acetate as the most predictive parameter for beer appreciation (Fig.  4 ). Ethyl acetate is the most abundant ester in beer with a typical ‘fruity’, ‘solvent’ and ‘alcoholic’ flavor, but is often considered less important than other esters like isoamyl acetate. The second most important parameter identified by SHAP is ethanol, the most abundant beer compound after water. Apart from directly contributing to beer flavor and mouthfeel, ethanol drastically influences the physical properties of beer, dictating how easily volatile compounds escape the beer matrix to contribute to beer aroma 69 . Importantly, it should also be noted that the importance of ethanol for appreciation is likely inflated by the very low appreciation scores of non-alcoholic beers (Supplementary Fig.  S4 ). Despite not often being considered a driver of beer appreciation, protein level also ranks highly in both approaches, possibly due to its effect on mouthfeel and body 70 . Lactic acid, which contributes to the tart taste of sour beers, is the fourth most important parameter identified by SHAP, possibly due to the generally high appreciation of sour beers in our dataset.

Interestingly, some of the most important predictive parameters for our model are not well-established as beer flavors or are even commonly regarded as being negative for beer quality. For example, our models identify methanethiol and ethyl phenyl acetate, an ester commonly linked to beer staling 71 , as key factors contributing to beer appreciation. Although there is no doubt that high concentrations of these compounds are considered unpleasant, the positive effects of modest concentrations are not yet known 72 , 73 .

To compare our approach to conventional statistics, we evaluated how well the 15 most important SHAP-derived parameters correlate with consumer appreciation (Fig.  4C ). Interestingly, only 6 of the properties derived by SHAP rank amongst the top 15 most correlated parameters. For some chemical compounds, the correlations are so low that they would likely have been considered unimportant. For example, lactic acid, the fourth most important parameter, shows a bimodal distribution for appreciation, with sour beers forming a separate cluster that is missed entirely by the Spearman correlation. Additionally, the correlation plots reveal outliers, emphasizing the need for robust analysis tools. Together, this highlights the need for alternative models, like the Gradient Boosting model, that better grasp the complexity of (beer) flavor.

Finally, to observe the relationships between these chemical properties and their predicted targets, partial dependence plots were constructed for the six most important predictors of consumer appreciation 74 , 75 , 76 (Supplementary Fig.  S7 ). One-way partial dependence plots show how a change in concentration affects the predicted appreciation. These plots reveal an important limitation of our models: appreciation predictions remain constant at ever-increasing concentrations. This implies that once a threshold concentration is reached, further increasing the concentration does not affect appreciation. This is false, as it is well-documented that certain compounds become unpleasant at high concentrations, including ethyl acetate (‘nail polish’) 77 and methanethiol (‘sulfury’ and ‘rotten cabbage’) 78 . The inability of our models to grasp that flavor compounds have optimal levels, above which they become negative, is a consequence of working with commercial beer brands where (off-)flavors are rarely too high to negatively impact the product. The two-way partial dependence plots show how changing the concentration of two compounds influences predicted appreciation, visualizing their interactions (Supplementary Fig.  S7 ). In our case, the top 5 parameters are dominated by additive or synergistic interactions, with high concentrations for both compounds resulting in the highest predicted appreciation.
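The sketch below shows how such one-way and two-way partial dependence plots can be produced with scikit-learn, again continuing the hypothetical example above; the selected feature columns merely stand in for the six key predictors (e.g., ethyl acetate and ethanol).

```python
from sklearn.inspection import PartialDependenceDisplay

# Placeholder columns standing in for the six most important predictors.
top_features = list(X_train.columns[:6])

# One-way partial dependence: predicted appreciation as a function of one compound.
PartialDependenceDisplay.from_estimator(gbr, X_train, features=top_features)

# Two-way partial dependence: interaction between two compounds.
PartialDependenceDisplay.from_estimator(
    gbr, X_train, features=[(top_features[0], top_features[1])]
)
```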

To assess the robustness of our best-performing models and model predictions, we performed 100 iterations of the GBR, RF and ET models. In general, all iterations of the models yielded similar performance (Supplementary Fig.  S8 ). Moreover, the main predictors (including the top predictors ethanol and ethyl acetate) remained virtually the same, especially for GBR and RF. For the iterations of the ET model, we did observe more variation in the top predictors, which is likely a consequence of the model’s inherent random architecture in combination with co-correlations between certain predictors. However, even in this case, several of the top predictors (ethanol and ethyl acetate) remain unchanged, although their rank in importance changes (Supplementary Fig.  S8 ).

Next, we investigated whether combining the RateBeer and trained panel data into one consolidated dataset would lead to stronger models, under the hypothesis that such a model would suffer less from bias in the datasets. A GBR model was trained to predict appreciation on the combined dataset. This model underperformed compared to the RateBeer model, both in the native case and when including a dataset identifier (R 2  = 0.67 for the RateBeer-only model, versus 0.26 without and 0.42 with a dataset identifier). For the latter, the dataset identifier is the most important feature (Supplementary Fig.  S9 ), while most of the feature importance remains unchanged, with ethyl acetate and ethanol ranking highest, as in the original model trained only on RateBeer data. It seems that the large variation in the panel dataset introduces noise, weakening the models’ performance and reliability. In addition, it seems reasonable to assume that both datasets are fundamentally different, with the panel dataset obtained through blind tastings by a trained professional panel.

Lastly, we evaluated whether beer style identifiers would further enhance the model’s performance. A GBR model was trained with parameters that explicitly encoded the styles of the samples. This did not improve model performance (R 2  = 0.66 with style information vs. R 2  = 0.67 without). The most important chemical features are consistent with the model trained without style information (e.g., ethanol and ethyl acetate), and with the exception of the most preferred (strong ale) and least preferred (low/no-alcohol) styles, none of the styles were among the most important features (Supplementary Fig.  S9 , Supplementary Tables  S5 and S6 ). This is likely due to a combination of style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original models, as well as the low number of samples belonging to some styles, making it difficult for the model to learn style-specific patterns. Moreover, beer styles are not rigorously defined, with some styles overlapping in features and some beers being misattributed to a specific style, all of which leads to more noise in models that use style parameters.
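A hedged sketch of how style identifiers could be encoded and compared, continuing the synthetic example above (style labels are illustrative, not the study’s encoding):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

# One-hot encode the style labels and append them to the chemical features.
style_dummies = pd.get_dummies(pd.Series(styles), prefix="style")
Xs_train = pd.concat([X_train, style_dummies.loc[X_train.index]], axis=1)
Xs_test = pd.concat([X_test, style_dummies.loc[X_test.index]], axis=1)

# Retrain the same model family and compare held-out R^2 with and without styles.
gbr_style = GradientBoostingRegressor(random_state=0).fit(Xs_train, y_train)
print("R^2 with style columns:", r2_score(y_test, gbr_style.predict(Xs_test)))
```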

Model validation

To test if our predictive models give insight into beer appreciation, we set up experiments aimed at improving existing commercial beers. We specifically selected overall appreciation as the trait to be examined because of its complexity and commercial relevance. Beer flavor comprises a complex bouquet rather than single aromas and tastes 53 . Hence, adding a single compound to the extent that a difference is noticeable may lead to an unbalanced, artificial flavor. Therefore, we evaluated the effect of combinations of compounds. Because Blond beers represent the most extensive style in our dataset, we selected a beer from this style as the starting material for these experiments (Beer 64 in Supplementary Data  1 ).

In the first set of experiments, we adjusted the concentrations of compounds that made up the most important predictors of overall appreciation (ethyl acetate, ethanol, lactic acid, ethyl phenyl acetate) together with correlated compounds (ethyl hexanoate, isoamyl acetate, glycerol), bringing them up to 95th percentile ethanol-normalized concentrations (Methods) within the Blond group (‘Spiked’ concentration in Fig.  5A ). Compared to controls, the spiked beers were found to have significantly improved overall appreciation among trained panelists, with panelists noting increased intensity of ester flavors, sweetness, alcohol, and body fullness (Fig.  5B ). To disentangle the contribution of ethanol to these results, a second experiment was performed without the addition of ethanol. This resulted in a similar outcome, including increased perception of alcohol and overall appreciation.

Figure 5. Adding the top chemical compounds, identified as best predictors of appreciation by our model, into poorly appreciated beers results in increased appreciation from our trained panel. Results of sensory tests between base beers and those spiked with compounds identified as the best predictors by the model. A Blond and Non/Low-alcohol (0.0% ABV) base beers were brought up to 95th-percentile ethanol-normalized concentrations within each style. B For each sensory attribute, tasters indicated the more intense sample and selected the sample they preferred. The numbers above the bars correspond to the p values that indicate significant changes in perceived flavor (two-sided binomial test: alpha 0.05, n  = 20 or 13).

In a final experiment, we tested whether using the model’s predictions can boost the appreciation of a non-alcoholic beer (beer 223 in Supplementary Data  1 ). Again, the addition of a mixture of predicted compounds (omitting ethanol, in this case) resulted in a significant increase in appreciation, body, ester flavor and sweetness.

Predicting flavor and consumer appreciation from chemical composition is one of the ultimate goals of sensory science. A reliable, systematic and unbiased way to link chemical profiles to flavor and food appreciation would be a significant asset to the food and beverage industry. Such tools would substantially aid in quality control and recipe development, offer an efficient and cost-effective alternative to pilot studies and consumer trials, and would ultimately allow food manufacturers to produce superior, tailor-made products that better meet the demands of specific consumer groups.

A limited set of studies have previously tried, to varying degrees of success, to predict beer flavor and beer popularity based on (a limited set of) chemical compounds and flavors 79 , 80 . Current sensitive, high-throughput technologies allow measuring an unprecedented number of chemical compounds and properties in a large set of samples, yielding a dataset that can train models that help close the gaps between chemistry and flavor, even for a complex natural product like beer. To our knowledge, no previous research gathered data at this scale (250 samples, 226 chemical parameters, 50 sensory attributes and 5 consumer scores) to disentangle and validate the chemical aspects driving beer preference using various machine-learning techniques. We find that modern machine learning models outperform conventional statistical tools, such as correlations and linear models, and can successfully predict flavor appreciation from chemical composition. This could be attributed to the natural incorporation of interactions and non-linear or discontinuous effects in machine learning models, which are not easily grasped by the linear model architecture. While linear models and partial least squares regression represent the most widespread statistical approaches in sensory science, in part because they allow interpretation 65 , 81 , 82 , modern machine learning methods allow for building better predictive models while preserving the possibility to dissect and exploit the underlying patterns. Of the 10 different models we trained, tree-based models, such as our best performing GBR, showed the best overall performance in predicting sensory responses from chemical information, outcompeting artificial neural networks. This agrees with previous reports for models trained on tabular data 83 . Our results are in line with the findings of Colantonio et al. who also identified the gradient boosting architecture as performing best at predicting appreciation and flavor (of tomatoes and blueberries, in their specific study) 26 . Importantly, besides our larger experimental scale, we were able to directly confirm our models’ predictions in vivo.

Our study confirms that flavor compound concentration does not always correlate with perception, suggesting complex interactions that are often missed by more conventional statistics and simple models. Specifically, we find that tree-based algorithms may perform best in developing models that link complex food chemistry with aroma. Furthermore, we show that massive datasets of untrained consumer reviews provide a valuable source of data that can complement or even replace trained tasting panels, especially for appreciation and basic flavors, such as sweetness and bitterness. This holds despite biases that are known to occur in such datasets, such as price or conformity bias. Moreover, GBR models predict taste better than aroma. This is likely because taste (e.g., bitterness) often directly relates to the corresponding chemical measurements (e.g., iso-alpha acids), whereas such a link is less clear for aromas, which often result from the interplay between multiple volatile compounds. We also find that our models are best at predicting acidity and alcohol, likely because there is a direct relation between the measured chemical compounds (acids and ethanol) and the corresponding perceived sensorial attribute (acidity and alcohol), and because even untrained consumers are generally able to recognize these flavors and aromas.

The predictions of our final models, trained on review data, hold even for blind tastings with small groups of trained tasters, as demonstrated by our ability to validate specific compounds as drivers of beer flavor and appreciation. Since adding a single compound to the extent of a noticeable difference may result in an unbalanced flavor profile, we specifically tested our identified key drivers as a combination of compounds. While this approach does not allow us to validate if a particular single compound would affect flavor and/or appreciation, our experiments do show that this combination of compounds increases consumer appreciation.

It is important to stress that, while it represents an important step forward, our approach still has several major limitations. A key weakness of the GBR model architecture is that amongst co-correlating variables, the largest main effect is consistently preferred for model building. As a result, co-correlating variables often have artificially low importance scores, both for impurity and SHAP-based methods, like we observed in the comparison to the more randomized Extra Trees models. This implies that chemicals identified as key drivers of a specific sensory feature by GBR might not be the true causative compounds, but rather co-correlate with the actual causative chemical. For example, the high importance of ethyl acetate could be (partially) attributed to the total ester content, ethanol or ethyl hexanoate (rho=0.77, rho=0.72 and rho=0.68), while ethyl phenylacetate could hide the importance of prenyl isobutyrate and ethyl benzoate (rho=0.77 and rho=0.76). Expanding our GBR model to include beer style as a parameter did not yield additional power or insight. This is likely due to style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original model, as well as the smaller sample size per style, limiting the power to uncover style-specific patterns. This can be partly attributed to the curse of dimensionality, where the high number of parameters results in the models mainly incorporating single parameter effects, rather than complex interactions such as style-dependent effects 67 . A larger number of samples may overcome some of these limitations and offer more insight into style-specific effects. On the other hand, beer style is not a rigid scientific classification, and beers within one style often differ a lot, which further complicates the analysis of style as a model factor.

Our study is limited to beers from Belgian breweries. Although these beers cover a large portion of the beer styles available globally, some beer styles and consumer patterns may be missing, while other features might be overrepresented. For example, many Belgian ales exhibit yeast-driven flavor profiles, which is reflected in the chemical drivers of appreciation discovered by this study. In future work, expanding the scope to include diverse markets and beer styles could lead to the identification of even more drivers of appreciation and better models for special niche products that were not present in our beer set.

In addition to inherent limitations of GBR models, there are also some limitations associated with studying food aroma. Although our chemical analyses measured most of the known aroma compounds, the total number of flavor compounds in complex foods like beer is still larger than the subset we were able to measure in this study. For example, hop-derived thiols, which influence flavor at very low concentrations, are notoriously difficult to measure in a high-throughput experiment. Moreover, consumer perception remains subjective and prone to biases that are difficult to avoid. It is also important to stress that the models are still immature and that more extensive datasets will be crucial for developing more complete models in the future. Besides more samples and parameters, our dataset does not include any demographic information about the tasters. Including such data could lead to better models that grasp external factors like age and culture. Another limitation is that our set of beers consists of high-quality end-products and lacks beers that are unfit for sale, which limits the current models’ ability to accurately predict products that would be very poorly appreciated. Finally, while models could be readily applied in quality control, their use in sensory science and product development is constrained by their inability to discern causal relationships. Given that the models cannot distinguish compounds that genuinely drive consumer perception from those that merely correlate, validation experiments are essential to identify true causative compounds.

Despite the inherent limitations, dissection of our models enabled us to pinpoint specific molecules as potential drivers of beer aroma and consumer appreciation, including compounds that were unexpected and would not have been identified using standard approaches. Important drivers of beer appreciation uncovered by our models include protein levels, ethyl acetate, ethyl phenyl acetate and lactic acid. Currently, many brewers already use lactic acid to acidify their brewing water and ensure optimal pH for enzymatic activity during the mashing process. Our results suggest that adding lactic acid can also improve beer appreciation, although its individual effect remains to be tested. Interestingly, ethanol appears to be unnecessary to improve beer appreciation, both for blond beer and alcohol-free beer. Given the growing consumer interest in alcohol-free beer, with a predicted annual market growth of >7% 84 , it is relevant for brewers to know what compounds can further increase consumer appreciation of these beers. Hence, our model may readily provide avenues to further improve the flavor and consumer appreciation of both alcoholic and non-alcoholic beers, which is generally considered one of the key challenges for future beer production.

Whereas we see a direct implementation of our results for the development of superior alcohol-free beverages and other food products, our study can also serve as a stepping stone for the development of novel alcohol-containing beverages. We want to echo the growing body of scientific evidence for the negative effects of alcohol consumption, both on the individual level by the mutagenic, teratogenic and carcinogenic effects of ethanol 85 , 86 , as well as the burden on society caused by alcohol abuse and addiction. We encourage the use of our results for the production of healthier, tastier products, including novel and improved beverages with lower alcohol contents. Furthermore, we strongly discourage the use of these technologies to improve the appreciation or addictive properties of harmful substances.

The present work demonstrates that despite some important remaining hurdles, combining the latest developments in chemical analyses, sensory analysis and modern machine learning methods offers exciting avenues for food chemistry and engineering. Soon, these tools may provide solutions in quality control and recipe development, as well as new approaches to sensory science and flavor research.

Beer selection

A total of 250 commercial Belgian beers were selected to cover the broad diversity of beer styles and the corresponding diversity in chemical composition and aroma (see Supplementary Fig.  S1 ).

Chemical dataset

Sample preparation.

Beers within their expiration date were purchased from commercial retailers. Samples were prepared in biological duplicates at room temperature, unless explicitly stated otherwise. Bottle pressure was measured with a manual pressure device (Steinfurth Mess-Systeme GmbH) and used to calculate CO 2 concentration. The beer was poured through two filter papers (Macherey-Nagel, 500713032 MN 713 ¼) to remove carbon dioxide and prevent spontaneous foaming. Samples were then prepared for measurements by targeted Headspace-Gas Chromatography-Flame Ionization Detector/Flame Photometric Detector (HS-GC-FID/FPD), Headspace-Solid Phase Microextraction-Gas Chromatography-Mass Spectrometry (HS-SPME-GC-MS), colorimetric analysis, enzymatic analysis, Near-Infrared (NIR) analysis, as described in the sections below. The mean values of biological duplicates are reported for each compound.

HS-GC-FID/FPD

HS-GC-FID/FPD (Shimadzu GC 2010 Plus) was used to measure higher alcohols, acetaldehyde, esters, 4-vinyl guaiacol, and sulfur compounds. Each measurement comprised 5 ml of sample pipetted into a 20 ml glass vial containing 1.75 g NaCl (VWR, 27810.295). 100 µl of 2-heptanol (Sigma-Aldrich, H3003) (internal standard) solution in ethanol (Fisher Chemical, E/0650DF/C17) was added for a final concentration of 2.44 mg/L. Samples were flushed with nitrogen for 10 s, sealed with a silicone septum, stored at −80 °C and analyzed in batches of 20.

The GC was equipped with a DB-WAXetr column (length, 30 m; internal diameter, 0.32 mm; layer thickness, 0.50 µm; Agilent Technologies, Santa Clara, CA, USA) connected to the FID and an HP-5 column (length, 30 m; internal diameter, 0.25 mm; layer thickness, 0.25 µm; Agilent Technologies, Santa Clara, CA, USA) connected to the FPD. N 2 was used as the carrier gas. Samples were incubated for 20 min at 70 °C in the headspace autosampler (Flow rate, 35 cm/s; Injection volume, 1000 µL; Injection mode, split; Combi PAL autosampler, CTC analytics, Switzerland). The injector, FID and FPD temperatures were kept at 250 °C. The GC oven temperature was first held at 50 °C for 5 min, then raised to 80 °C at a rate of 5 °C/min, followed by a second ramp of 4 °C/min to 200 °C, held for 3 min, and a final ramp of 4 °C/min to 230 °C, held for 1 min. Results were analyzed with the GCSolution software version 2.4 (Shimadzu, Kyoto, Japan). The GC was calibrated with a 5% EtOH solution (VWR International) containing the volatiles under study (Supplementary Table  S7 ).

HS-SPME-GC-MS

HS-SPME-GC-MS (Shimadzu GCMS-QP-2010 Ultra) was used to measure additional volatile compounds, mainly comprising terpenoids and esters. Samples were analyzed by HS-SPME using a triphase DVB/Carboxen/PDMS 50/30 μm SPME fiber (Supelco Co., Bellefonte, PA, USA) followed by gas chromatography (Thermo Fisher Scientific Trace 1300 series, USA) coupled to a mass spectrometer (Thermo Fisher Scientific ISQ series MS) equipped with a TriPlus RSH autosampler. 5 ml of degassed beer sample was placed in 20 ml vials containing 1.75 g NaCl (VWR, 27810.295). 5 µl internal standard mix was added, containing 2-heptanol (1 g/L) (Sigma-Aldrich, H3003), 4-fluorobenzaldehyde (1 g/L) (Sigma-Aldrich, 128376), 2,3-hexanedione (1 g/L) (Sigma-Aldrich, 144169) and guaiacol (1 g/L) (Sigma-Aldrich, W253200) in ethanol (Fisher Chemical, E/0650DF/C17). Each sample was incubated at 60 °C in the autosampler oven with constant agitation. After 5 min equilibration, the SPME fiber was exposed to the sample headspace for 30 min. The compounds trapped on the fiber were thermally desorbed in the injection port of the chromatograph by heating the fiber for 15 min at 270 °C.

The GC-MS was equipped with a low polarity RXi-5Sil MS column (length, 20 m; internal diameter, 0.18 mm; layer thickness, 0.18 µm; Restek, Bellefonte, PA, USA). Injection was performed in splitless mode at 320 °C, with a split flow of 9 ml/min, a purge flow of 5 ml/min and an open valve time of 3 min. To obtain a pulsed injection, a programmed gas flow was used whereby the helium gas flow was set at 2.7 mL/min for 0.1 min, followed by a decrease in flow of 20 ml/min to the normal 0.9 mL/min. The temperature was first held at 30 °C for 3 min, then raised to 80 °C at a rate of 7 °C/min, followed by a second ramp of 2 °C/min until 125 °C and a final ramp of 8 °C/min to a final temperature of 270 °C.

Mass acquisition range was 33 to 550 amu at a scan rate of 5 scans/s. Electron impact ionization energy was 70 eV. The interface and ion source were kept at 275 °C and 250 °C, respectively. A mix of linear n-alkanes (from C7 to C40, Supelco Co.) was injected into the GC-MS under identical conditions to serve as external retention index markers. Identification and quantification of the compounds were performed using an in-house developed R script as described in Goelen et al. and Reher et al. 87 , 88 (for package information, see Supplementary Table  S8 ). Briefly, chromatograms were analyzed using AMDIS (v2.71) 89 to separate overlapping peaks and obtain pure compound spectra. The NIST MS Search software (v2.0 g) in combination with the NIST2017, FFNSC3 and Adams4 libraries were used to manually identify the empirical spectra, taking into account the expected retention time. After background subtraction and correcting for retention time shifts between samples run on different days based on alkane ladders, compound elution profiles were extracted and integrated using a file with 284 target compounds of interest, which were either recovered in our identified AMDIS list of spectra or were known to occur in beer. Compound elution profiles were estimated for every peak in every chromatogram over a time-restricted window using weighted non-negative least square analysis after which peak areas were integrated 87 , 88 . Batch effect correction was performed by normalizing against the most stable internal standard compound, 4-fluorobenzaldehyde. Out of all 284 target compounds that were analyzed, 167 were visually judged to have reliable elution profiles and were used for final analysis.
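As a purely conceptual illustration of the non-negative least squares step (not the in-house R pipeline cited above, which additionally handles weighting, retention-index alignment and batch correction), the Python sketch below deconvolves a synthetic retention-time window into contributions of known reference spectra using SciPy.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-in data for one retention-time window.
rng = np.random.default_rng(1)
n_scans, n_mz, n_compounds = 40, 120, 3
references = rng.random((n_compounds, n_mz))        # library spectra of target compounds
true_profiles = rng.random((n_scans, n_compounds))  # hidden elution profiles
observed = true_profiles @ references               # observed scan-by-m/z intensities

# Solve scan by scan for the non-negative compound contributions.
profiles = np.array([nnls(references.T, scan)[0] for scan in observed])

# Integrating each compound's elution profile over the window approximates its peak area.
peak_areas = profiles.sum(axis=0)
print(peak_areas)
```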

Discrete photometric and enzymatic analysis

Discrete photometric and enzymatic analysis (Thermo Scientific TM Gallery TM Plus Beermaster Discrete Analyzer) was used to measure acetic acid, ammonia, beta-glucan, iso-alpha acids, color, sugars, glycerol, iron, pH, protein, and sulfite. 2 ml of sample volume was used for the analyses. Information regarding the reagents and standard solutions used for analyses and calibrations is included in Supplementary Table  S7 and Supplementary Table  S9 .

NIR analyses

NIR analysis (Anton Paar Alcolyzer Beer ME System) was used to measure ethanol. Measurements comprised 50 ml of sample, and a 10% EtOH solution was used for calibration.

Correlation calculations

Pairwise Spearman Rank correlations were calculated between all chemical properties.

Sensory dataset

Trained panel.

Our trained tasting panel consisted of volunteers who gave prior verbal informed consent. All compounds used for the validation experiment were of food-grade quality. The tasting sessions were approved by the Social and Societal Ethics Committee of the KU Leuven (G-2022-5677-R2(MAR)). All online reviewers agreed to the Terms and Conditions of the RateBeer website.

Sensory analysis was performed according to the American Society of Brewing Chemists (ASBC) Sensory Analysis Methods 90 . 30 volunteers were screened through a series of triangle tests. The sixteen most sensitive and consistent tasters were retained as taste panel members. The resulting panel was diverse in age [22–42, mean: 29], sex [56% male] and nationality [7 different countries]. The panel developed a consensus vocabulary to describe beer aroma, taste and mouthfeel. Panelists were trained to identify and score 50 different attributes, using a 7-point scale to rate attributes’ intensity. The scoring sheet is included as Supplementary Data  3 . Sensory assessments took place between 10–12 a.m. The beers were served in black-colored glasses. Per session, between 5 and 12 beers of the same style were tasted at 12 °C to 16 °C. Two reference beers were added to each set and indicated as ‘Reference 1 & 2’, allowing panel members to calibrate their ratings. Not all panelists were present at every tasting. Scores were scaled by standard deviation and mean-centered per taster. Values are represented as z-scores and clustered by Euclidean distance. Pairwise Spearman correlations were calculated between taste and aroma sensory attributes. Panel consistency was evaluated by repeating samples on different sessions and performing ANOVA to identify differences, using the ‘stats’ package (v4.2.2) in R (for package information, see Supplementary Table  S8 ).
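For illustration, the Python sketch below reproduces the per-taster z-scoring and a per-attribute consistency ANOVA on a synthetic long-format panel table; the study itself performed the ANOVA with the ‘stats’ package in R, and the table layout here is assumed.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

# Synthetic long-format panel table (taster, session, attribute, raw score).
rng = np.random.default_rng(2)
panel = pd.DataFrame({
    "taster":    rng.choice([f"t{i}" for i in range(16)], size=600),
    "session":   rng.choice(["s1", "s2", "s3"], size=600),
    "attribute": "bitterness",
    "score":     rng.integers(1, 8, size=600),   # 7-point intensity scale
})

# Mean-center and scale scores per taster (z-scores), as described above.
panel["z"] = panel.groupby("taster")["score"].transform(lambda s: (s - s.mean()) / s.std())

# Panel consistency: one-way ANOVA across sessions for one repeated attribute.
groups = [g["z"].values for _, g in panel.groupby("session")]
print(f_oneway(*groups))
```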

Online reviews from a public database

The ‘scrapy’ package in Python (v3.6; for package information, see Supplementary Table  S8 ) was used to collect 232,288 online reviews (mean=922, min=6, max=5343) from RateBeer, an online beer review database. Each review entry comprised 5 numerical scores (appearance, aroma, taste, palate and overall quality) and an optional review text. The total number of reviews per reviewer was collected separately. Numerical scores were scaled and centered per rater, and mean scores were calculated per beer.

For the review texts, the language was estimated using the packages ‘langdetect’ and ‘langid’ in Python. Reviews that were classified as English by both packages were kept. Reviewers with fewer than 100 entries overall were discarded. A total of 181,025 reviews from >6000 reviewers from >40 countries remained. Text processing was done using the ‘nltk’ package in Python. Texts were corrected for slang and misspellings; proper nouns and rare words that are relevant to the beer context were specified and kept as-is (‘Chimay’, ‘Lambic’, etc.). A dictionary of semantically similar sensorial terms (for example, ‘floral’ and ‘flower’) was created, and such terms were collapsed into a single term. Words were stemmed and lemmatized to avoid identifying words such as ‘acid’ and ‘acidity’ as separate terms. Numbers and punctuation were removed.
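A minimal Python sketch of these cleaning steps is shown below; it covers only language filtering, tokenization, punctuation removal, stemming and lemmatization, and omits the slang correction, custom beer vocabulary and the second language check with ‘langid’. It requires the ‘punkt’ and ‘wordnet’ NLTK resources.

```python
import string

from langdetect import detect                 # language filtering, as described above
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize       # needs the 'punkt' NLTK data

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()              # needs the 'wordnet' NLTK data

def preprocess(review: str) -> list[str]:
    if detect(review) != "en":                # keep English reviews only
        return []
    tokens = word_tokenize(review.lower())
    tokens = [t for t in tokens if t not in string.punctuation and not t.isdigit()]
    return [stemmer.stem(lemmatizer.lemmatize(t)) for t in tokens]

print(preprocess("Lovely floral aroma with a slightly acidic, lactic finish."))
```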

Sentences from up to 50 randomly chosen reviews per beer were manually categorized according to the aspect of beer they describe (appearance, aroma, taste, palate, overall quality—not to be confused with the 5 numerical scores described above) or flagged as irrelevant if they contained no useful information. If a beer contained fewer than 50 reviews, all reviews were manually classified. This labeled data set was used to train a model that classified the rest of the sentences for all beers 91 . Sentences describing taste and aroma were extracted, and term frequency–inverse document frequency (TFIDF) was implemented to calculate enrichment scores for sensorial words per beer.
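The TF-IDF step can be sketched as follows, with toy documents standing in for the pooled taste and aroma sentences per beer (beer identifiers and texts are invented for illustration).

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy documents: the extracted taste/aroma sentences pooled per beer.
docs_per_beer = {
    "beer_001": "bitter citrus hoppy bitter resinous",
    "beer_002": "sour cherry acidic lactic funky",
}

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs_per_beer.values())

# Enrichment score per sensorial word per beer.
scores = pd.DataFrame(tfidf.toarray(),
                      index=list(docs_per_beer.keys()),
                      columns=vectorizer.get_feature_names_out())
print(scores.round(3))
```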

The sex of the tasting subject was not considered when building our sensory database. Instead, results from different panelists were averaged, both for our trained panel (56% male, 44% female) and the RateBeer reviews (70% male, 30% female for RateBeer as a whole).

Beer price collection and processing

Beer prices were collected from the following stores: Colruyt, Delhaize, Total Wine, BeerHawk, The Belgian Beer Shop, The Belgian Shop, and Beer of Belgium. Where applicable, prices were converted to Euros and normalized per liter. Spearman correlations were calculated between these prices and mean overall appreciation scores from RateBeer and the taste panel, respectively.

Pairwise Spearman rank correlations were calculated between all sensory properties.
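With a beers-by-attributes table in hand, the price and sensory correlations described above reduce to a few lines; the numbers below are toy values and the column names are assumptions.

```python
# Toy example: price normalization to EUR per litre and Spearman correlations.
import pandas as pd
from scipy.stats import spearmanr

beers = pd.DataFrame({
    "beer":         ["X", "Y", "Z", "W"],
    "price_eur":    [2.10, 3.50, 1.80, 4.20],   # assumed shelf price per bottle
    "volume_l":     [0.33, 0.75, 0.33, 0.75],
    "appreciation": [0.40, 1.10, -0.30, 0.80],  # mean scaled overall score
})
beers["eur_per_l"] = beers["price_eur"] / beers["volume_l"]

rho, p = spearmanr(beers["eur_per_l"], beers["appreciation"])
print(rho, p)

# Pairwise Spearman correlations between all sensory attributes, given a
# beers x attributes DataFrame `sensory`:
# corr_matrix = sensory.corr(method="spearman")
```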

Machine learning models

Predictive modeling of sensory profiles from chemical data.

Regression models were constructed to predict (a) trained panel scores for beer flavors and quality from beer chemical profiles and (b) public reviews’ appreciation scores from beer chemical profiles. Z-scores were used to represent sensory attributes in both data sets. Chemical properties with log-normal distributions (Shapiro–Wilk test, p < 0.05) were log-transformed. Missing chemical measurements (0.1% of all data) were replaced with mean values per attribute. Observations from 250 beers were randomly separated into a training set (70%, 175 beers) and a test set (30%, 75 beers), stratified per beer style. Chemical measurements (p = 231) were normalized based on the training set average and standard deviation. In total, ten models were trained: three linear regression-based models, namely linear regression with first-order interaction terms (LR), lasso regression with first-order interaction terms (Lasso) and partial least squares regression (PLSR); five decision tree-based models, namely AdaBoost regressor (ABR), Extra Trees (ET), gradient boosting regressor (GBR), random forest (RF) and XGBoost regressor (XGBR); one support vector machine model (SVR); and one artificial neural network model (ANN). The models were implemented using the ‘scikit-learn’ package (v1.2.2) and the ‘xgboost’ package (v1.7.3) in Python (v3.9.16). Models were trained, and hyperparameters optimized, using five-fold cross-validated grid search with the coefficient of determination (R²) as the evaluation metric. The ANN (scikit-learn’s MLPRegressor) was optimized using Bayesian Tree-structured Parzen Estimator optimization with the ‘Optuna’ Python package (v3.2.0). Individual models were trained per attribute, and a multi-output model was trained on all attributes simultaneously.
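The overall training set-up can be illustrated with the sketch below for a single attribute and the gradient boosting regressor only; the data are synthetic, the parameter grid is an assumption, and the other nine model types and the Optuna-based ANN tuning are omitted for brevity.

```python
# Minimal sketch of the modeling set-up on synthetic data (assumed parameter grid):
# style-stratified 70/30 split, training-set normalization, and a five-fold
# cross-validated grid search with R^2 as the scoring metric.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 20))                  # stand-in for the 231 chemical features
y = 0.5 * X[:, 0] + rng.normal(size=250)        # stand-in sensory z-scores
styles = rng.integers(0, 5, size=250)           # stand-in beer style labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=styles, random_state=0)

scaler = StandardScaler().fit(X_tr)             # normalize on the training set only
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},  # assumed grid
    cv=5, scoring="r2")
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```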

Model dissection

GBR was found to outperform the other methods, resulting in models with the highest average R² values in both the trained panel and public review data sets. Impurity-based rankings of the most important predictors for each predicted sensory trait were obtained using the ‘scikit-learn’ package. To examine the relationships between these chemical properties and their predicted targets, partial dependence plots (PDPs) were constructed for the six most important predictors of consumer appreciation 74, 75.
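A minimal sketch of the impurity-based ranking and partial dependence step is shown below on synthetic data (plotting requires matplotlib); the feature indices are illustrative only.

```python
# Impurity-based feature ranking and partial dependence plots on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=200)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

ranking = np.argsort(model.feature_importances_)[::-1]
print("Top predictors (feature indices):", ranking[:6])

# Partial dependence of the prediction on the six highest-ranked features.
PartialDependenceDisplay.from_estimator(
    model, X, features=[int(i) for i in ranking[:6]])
```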

The ‘SHAP’ package in Python (v0.41.0) was used to provide an alternative ranking of predictor importance and to visualize the predictors’ effects as a function of their concentration 68.
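The SHAP step amounts to fitting a TreeExplainer on the tree ensemble and summarizing per-feature effects; the sketch below again uses synthetic data rather than the paper's chemical measurements.

```python
# Minimal SHAP sketch on synthetic data: TreeExplainer on a fitted tree ensemble
# and a summary (beeswarm) plot of per-feature effects.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=200)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)   # per-feature effect vs. feature value
```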

Validation of causal chemical properties

To validate the effects of the most important model features on predicted sensory attributes, beers were spiked with the chemical compounds identified by the models and descriptive sensory analyses were carried out according to the American Society of Brewing Chemists (ASBC) protocol 90 .

Compound spiking was done 30 min before tasting. Compounds were spiked into fresh beer bottles, which were immediately resealed and inverted three times. Fresh bottles of beer were opened for the same duration, resealed and inverted three times, to serve as controls. Pairs of spiked samples and controls were served simultaneously, chilled and in dark glasses, as outlined in the Trained panel section above. Tasters were instructed to select the glass with the higher flavor intensity for each attribute (directional difference test 92) and to select the glass they preferred.

The final concentration after spiking was equal to the within-style average, after normalizing by ethanol concentration; this was done to ensure balanced flavor profiles in the final spiked beer. The same approach was applied to improve a non-alcoholic beer. The compounds used were: ethyl acetate (Merck KGaA, W241415), ethyl hexanoate (Merck KGaA, W243906), isoamyl acetate (Merck KGaA, W205508), phenethyl acetate (Merck KGaA, W285706), ethanol (96%, Colruyt), glycerol (Merck KGaA, W252506) and lactic acid (Merck KGaA, 261106).

Significant differences in preference or perceived intensity were determined by performing the two-sided binomial test on each attribute.
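For a panel of n tasters, this is an exact binomial test of the observed count against a null probability of 0.5; a minimal sketch with made-up counts:

```python
# Two-sided exact binomial test on a directional difference / preference tally.
from scipy.stats import binomtest

n_tasters = 16
n_prefer_spiked = 13          # hypothetical count preferring the spiked sample
result = binomtest(n_prefer_spiked, n_tasters, p=0.5, alternative="two-sided")
print(result.pvalue)
```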

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The data that support the findings of this work are available in the Supplementary Data files and have been deposited in Zenodo under accession code 10653704 93. The RateBeer scores are under restricted access; they are not publicly available as they are the property of RateBeer (ZX Ventures, USA). Access can be obtained from the authors upon reasonable request and with permission of RateBeer (ZX Ventures, USA). Source data are provided with this paper.

Code availability

The code for training the machine learning models, analyzing the models, and generating the figures has been deposited to Zenodo under accession code 10653704 93 .

Tieman, D. et al. A chemical genetic roadmap to improved tomato flavor. Science 355 , 391–394 (2017).


Plutowska, B. & Wardencki, W. Application of gas chromatography–olfactometry (GC–O) in analysis and quality assessment of alcoholic beverages – A review. Food Chem. 107 , 449–463 (2008).


Legin, A., Rudnitskaya, A., Seleznev, B. & Vlasov, Y. Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie. Anal. Chim. Acta 534 , 129–135 (2005).

Loutfi, A., Coradeschi, S., Mani, G. K., Shankar, P. & Rayappan, J. B. B. Electronic noses for food quality: A review. J. Food Eng. 144 , 103–111 (2015).

Ahn, Y.-Y., Ahnert, S. E., Bagrow, J. P. & Barabási, A.-L. Flavor network and the principles of food pairing. Sci. Rep. 1 , 196 (2011).


Bartoshuk, L. M. & Klee, H. J. Better fruits and vegetables through sensory analysis. Curr. Biol. 23 , R374–R378 (2013).


Piggott, J. R. Design questions in sensory and consumer science. Food Qual. Prefer. 3293 , 217–220 (1995).


Kermit, M. & Lengard, V. Assessing the performance of a sensory panel-panellist monitoring and tracking. J. Chemom. 19 , 154–161 (2005).

Cook, D. J., Hollowood, T. A., Linforth, R. S. T. & Taylor, A. J. Correlating instrumental measurements of texture and flavour release with human perception. Int. J. Food Sci. Technol. 40 , 631–641 (2005).

Chinchanachokchai, S., Thontirawong, P. & Chinchanachokchai, P. A tale of two recommender systems: The moderating role of consumer expertise on artificial intelligence based product recommendations. J. Retail. Consum. Serv. 61 , 1–12 (2021).

Ross, C. F. Sensory science at the human-machine interface. Trends Food Sci. Technol. 20 , 63–72 (2009).

Chambers, E. IV & Koppel, K. Associations of volatile compounds with sensory aroma and flavor: The complex nature of flavor. Molecules 18 , 4887–4905 (2013).

Pinu, F. R. Metabolomics—The new frontier in food safety and quality research. Food Res. Int. 72 , 80–81 (2015).

Danezis, G. P., Tsagkaris, A. S., Brusic, V. & Georgiou, C. A. Food authentication: state of the art and prospects. Curr. Opin. Food Sci. 10 , 22–31 (2016).

Shepherd, G. M. Smell images and the flavour system in the human brain. Nature 444 , 316–321 (2006).

Meilgaard, M. C. Prediction of flavor differences between beers from their chemical composition. J. Agric. Food Chem. 30 , 1009–1017 (1982).

Xu, L. et al. Widespread receptor-driven modulation in peripheral olfactory coding. Science 368 , eaaz5390 (2020).

Kupferschmidt, K. Following the flavor. Science 340 , 808–809 (2013).

Billesbølle, C. B. et al. Structural basis of odorant recognition by a human odorant receptor. Nature 615 , 742–749 (2023).


Smith, B. Perspective: Complexities of flavour. Nature 486 , S6–S6 (2012).

Pfister, P. et al. Odorant receptor inhibition is fundamental to odor encoding. Curr. Biol. 30 , 2574–2587 (2020).

Moskowitz, H. W., Kumaraiah, V., Sharma, K. N., Jacobs, H. L. & Sharma, S. D. Cross-cultural differences in simple taste preferences. Science 190 , 1217–1218 (1975).

Eriksson, N. et al. A genetic variant near olfactory receptor genes influences cilantro preference. Flavour 1 , 22 (2012).

Ferdenzi, C. et al. Variability of affective responses to odors: Culture, gender, and olfactory knowledge. Chem. Senses 38 , 175–186 (2013).


Lawless, H. T. & Heymann, H. Sensory evaluation of food: Principles and practices. (Springer, New York, NY). https://doi.org/10.1007/978-1-4419-6488-5 (2010).

Colantonio, V. et al. Metabolomic selection for enhanced fruit flavor. Proc. Natl. Acad. Sci. 119 , e2115865119 (2022).

Fritz, F., Preissner, R. & Banerjee, P. VirtualTaste: a web server for the prediction of organoleptic properties of chemical compounds. Nucleic Acids Res 49 , W679–W684 (2021).

Tuwani, R., Wadhwa, S. & Bagler, G. BitterSweet: Building machine learning models for predicting the bitter and sweet taste of small molecules. Sci. Rep. 9 , 1–13 (2019).

Dagan-Wiener, A. et al. Bitter or not? BitterPredict, a tool for predicting taste from chemical structure. Sci. Rep. 7 , 1–13 (2017).

Pallante, L. et al. Toward a general and interpretable umami taste predictor using a multi-objective machine learning approach. Sci. Rep. 12 , 1–11 (2022).

Malavolta, M. et al. A survey on computational taste predictors. Eur. Food Res. Technol. 248 , 2215–2235 (2022).

Lee, B. K. et al. A principal odor map unifies diverse tasks in olfactory perception. Science 381 , 999–1006 (2023).

Mayhew, E. J. et al. Transport features predict if a molecule is odorous. Proc. Natl. Acad. Sci. 119 , e2116576119 (2022).

Niu, Y. et al. Sensory evaluation of the synergism among ester odorants in light aroma-type liquor by odor threshold, aroma intensity and flash GC electronic nose. Food Res. Int. 113 , 102–114 (2018).

Yu, P., Low, M. Y. & Zhou, W. Design of experiments and regression modelling in food flavour and sensory analysis: A review. Trends Food Sci. Technol. 71 , 202–215 (2018).

Oladokun, O. et al. The impact of hop bitter acid and polyphenol profiles on the perceived bitterness of beer. Food Chem. 205 , 212–220 (2016).

Linforth, R., Cabannes, M., Hewson, L., Yang, N. & Taylor, A. Effect of fat content on flavor delivery during consumption: An in vivo model. J. Agric. Food Chem. 58 , 6905–6911 (2010).

Guo, S., Na Jom, K. & Ge, Y. Influence of roasting condition on flavor profile of sunflower seeds: A flavoromics approach. Sci. Rep. 9 , 11295 (2019).

Ren, Q. et al. The changes of microbial community and flavor compound in the fermentation process of Chinese rice wine using Fagopyrum tataricum grain as feedstock. Sci. Rep. 9 , 3365 (2019).

Hastie, T., Friedman, J. & Tibshirani, R. The Elements of Statistical Learning. (Springer, New York, NY). https://doi.org/10.1007/978-0-387-21606-5 (2001).

Dietz, C., Cook, D., Huismann, M., Wilson, C. & Ford, R. The multisensory perception of hop essential oil: a review. J. Inst. Brew. 126 , 320–342 (2020).


Roncoroni, M. & Verstrepen, K. J. Belgian Beer: Tested and Tasted. (Lannoo, 2018).

Meilgaard, M. C. Flavor chemistry of beer: Part II: Flavor and threshold of 239 aroma volatiles. Master Brew. Assoc. Am. Tech. Q. 12 (1975).

Bokulich, N. A. & Bamforth, C. W. The microbiology of malting and brewing. Microbiol. Mol. Biol. Rev. MMBR 77 , 157–172 (2013).

Dzialo, M. C., Park, R., Steensels, J., Lievens, B. & Verstrepen, K. J. Physiology, ecology and industrial applications of aroma formation in yeast. FEMS Microbiol. Rev. 41 , S95–S128 (2017).


Datta, A. et al. Computer-aided food engineering. Nat. Food 3 , 894–904 (2022).

American Society of Brewing Chemists. Beer Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A.).

Olaniran, A. O., Hiralal, L., Mokoena, M. P. & Pillay, B. Flavour-active volatile compounds in beer: production, regulation and control. J. Inst. Brew. 123 , 13–23 (2017).

Verstrepen, K. J. et al. Flavor-active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Meilgaard, M. C. Flavour chemistry of beer. Part I: Flavour interaction between principal volatiles. Master Brew. Assoc. Am. Tech. Q. 12, 107–117 (1975).

Briggs, D. E., Boulton, C. A., Brookes, P. A. & Stevens, R. Brewing 227–254. (Woodhead Publishing). https://doi.org/10.1533/9781855739062.227 (2004).

Bossaert, S., Crauwels, S., De Rouck, G. & Lievens, B. The power of sour - A review: Old traditions, new opportunities. BrewingScience 72 , 78–88 (2019).


Verstrepen, K. J. et al. Flavor active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Snauwaert, I. et al. Microbial diversity and metabolite composition of Belgian red-brown acidic ales. Int. J. Food Microbiol. 221 , 1–11 (2016).

Spitaels, F. et al. The microbial diversity of traditional spontaneously fermented lambic beer. PLoS ONE 9 , e95384 (2014).

Blanco, C. A., Andrés-Iglesias, C. & Montero, O. Low-alcohol Beers: Flavor Compounds, Defects, and Improvement Strategies. Crit. Rev. Food Sci. Nutr. 56 , 1379–1388 (2016).

Jackowski, M. & Trusek, A. Non-alcoholic beer production – an overview. Pol. J. Chem. Technol. 20, 32–38 (2018).

Takoi, K. et al. The contribution of geraniol metabolism to the citrus flavour of beer: Synergy of geraniol and β-citronellol under coexistence with excess linalool. J. Inst. Brew. 116 , 251–260 (2010).

Kroeze, J. H. & Bartoshuk, L. M. Bitterness suppression as revealed by split-tongue taste stimulation in humans. Physiol. Behav. 35 , 779–783 (1985).

Mennella, J. A. et al. “A spoonful of sugar helps the medicine go down”: Bitter masking by sucrose among children and adults. Chem. Senses 40, 17–25 (2015).

Wietstock, P., Kunz, T., Perreira, F. & Methner, F.-J. Metal chelation behavior of hop acids in buffered model systems. BrewingScience 69 , 56–63 (2016).

Sancho, D., Blanco, C. A., Caballero, I. & Pascual, A. Free iron in pale, dark and alcohol-free commercial lager beers. J. Sci. Food Agric. 91 , 1142–1147 (2011).

Rodrigues, H. & Parr, W. V. Contribution of cross-cultural studies to understanding wine appreciation: A review. Food Res. Int. 115 , 251–258 (2019).

Korneva, E. & Blockeel, H. Towards better evaluation of multi-target regression models. in ECML PKDD 2020 Workshops (eds. Koprinska, I. et al.) 353–362 (Springer International Publishing, Cham, 2020). https://doi.org/10.1007/978-3-030-65965-3_23 .

Ares, G. Mathematical and Statistical Methods in Food Science and Technology. (Wiley, 2013).

Grinsztajn, L., Oyallon, E. & Varoquaux, G. Why do tree-based models still outperform deep learning on tabular data? Preprint at http://arxiv.org/abs/2207.08815 (2022).

Gries, S. T. Statistics for Linguistics with R: A Practical Introduction. in Statistics for Linguistics with R (De Gruyter Mouton, 2021). https://doi.org/10.1515/9783110718256 .

Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2 , 56–67 (2020).

Ickes, C. M. & Cadwallader, K. R. Effects of ethanol on flavor perception in alcoholic beverages. Chemosens. Percept. 10 , 119–134 (2017).

Kato, M. et al. Influence of high molecular weight polypeptides on the mouthfeel of commercial beer. J. Inst. Brew. 127 , 27–40 (2021).

Wauters, R. et al. Novel Saccharomyces cerevisiae variants slow down the accumulation of staling aldehydes and improve beer shelf-life. Food Chem. 398 , 1–11 (2023).

Li, H., Jia, S. & Zhang, W. Rapid determination of low-level sulfur compounds in beer by headspace gas chromatography with a pulsed flame photometric detector. J. Am. Soc. Brew. Chem. 66 , 188–191 (2008).

Dercksen, A., Laurens, J., Torline, P., Axcell, B. C. & Rohwer, E. Quantitative analysis of volatile sulfur compounds in beer using a membrane extraction interface. J. Am. Soc. Brew. Chem. 54 , 228–233 (1996).

Molnar, C. Interpretable Machine Learning: A Guide for Making Black-Box Models Interpretable. (2020).

Zhao, Q. & Hastie, T. Causal interpretations of black-box models. J. Bus. Econ. Stat. Publ. Am. Stat. Assoc. 39 , 272–281 (2019).


Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning. (Springer, 2019).

Labrado, D. et al. Identification by NMR of key compounds present in beer distillates and residual phases after dealcoholization by vacuum distillation. J. Sci. Food Agric. 100 , 3971–3978 (2020).

Lusk, L. T., Kay, S. B., Porubcan, A. & Ryder, D. S. Key olfactory cues for beer oxidation. J. Am. Soc. Brew. Chem. 70 , 257–261 (2012).

Gonzalez Viejo, C., Torrico, D. D., Dunshea, F. R. & Fuentes, S. Development of artificial neural network models to assess beer acceptability based on sensory properties using a robotic pourer: A comparative model approach to achieve an artificial intelligence system. Beverages 5 , 33 (2019).

Gonzalez Viejo, C., Fuentes, S., Torrico, D. D., Godbole, A. & Dunshea, F. R. Chemical characterization of aromas in beer and their effect on consumers liking. Food Chem. 293 , 479–485 (2019).

Gilbert, J. L. et al. Identifying breeding priorities for blueberry flavor using biochemical, sensory, and genotype by environment analyses. PLOS ONE 10 , 1–21 (2015).

Goulet, C. et al. Role of an esterase in flavor volatile variation within the tomato clade. Proc. Natl. Acad. Sci. 109 , 19009–19014 (2012).


Borisov, V. et al. Deep Neural Networks and Tabular Data: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 1–21 https://doi.org/10.1109/TNNLS.2022.3229161 (2022).

Statista. Statista Consumer Market Outlook: Beer - Worldwide.

Seitz, H. K. & Stickel, F. Molecular mechanisms of alcohol-mediated carcinogenesis. Nat. Rev. Cancer 7, 599–612 (2007).

Voordeckers, K. et al. Ethanol exposure increases mutation rate through error-prone polymerases. Nat. Commun. 11 , 3664 (2020).

Goelen, T. et al. Bacterial phylogeny predicts volatile organic compound composition and olfactory response of an aphid parasitoid. Oikos 129 , 1415–1428 (2020).


Reher, T. et al. Evaluation of hop (Humulus lupulus) as a repellent for the management of Drosophila suzukii. Crop Prot. 124 , 104839 (2019).

Stein, S. E. An integrated method for spectrum extraction and compound identification from gas chromatography/mass spectrometry data. J. Am. Soc. Mass Spectrom. 10 , 770–781 (1999).

American Society of Brewing Chemists. Sensory Analysis Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A., 1992).

McAuley, J., Leskovec, J. & Jurafsky, D. Learning Attitudes and Attributes from Multi-Aspect Reviews. Preprint at https://doi.org/10.48550/arXiv.1210.3926 (2012).

Meilgaard, M. C., Civille, G. V. & Carr, B. T. Sensory Evaluation Techniques. (CRC Press, Boca Raton). https://doi.org/10.1201/b16452 (2014).

Schreurs, M. et al. Data from: Predicting and improving complex beer flavor through machine learning. Zenodo https://doi.org/10.5281/zenodo.10653704 (2024).


Acknowledgements

We thank all lab members for their discussions and thank all tasting panel members for their contributions. Special thanks go out to Dr. Karin Voordeckers for her tremendous help in proofreading and improving the manuscript. M.S. was supported by a Baillet-Latour fellowship, L.C. acknowledges financial support from KU Leuven (C16/17/006), F.A.T. was supported by a PhD fellowship from FWO (1S08821N). Research in the lab of K.J.V. is supported by KU Leuven, FWO, VIB, VLAIO and the Brewing Science Serves Health Fund. Research in the lab of T.W. is supported by FWO (G.0A51.15) and KU Leuven (C16/17/006).

Author information

These authors contributed equally: Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni.

Authors and Affiliations

VIB—KU Leuven Center for Microbiology, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni, Lloyd Cool, Beatriz Herrera-Malaver, Florian A. Theßeling & Kevin J. Verstrepen

CMPG Laboratory of Genetics and Genomics, KU Leuven, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Leuven Institute for Beer Research (LIBR), Gaston Geenslaan 1, B-3001, Leuven, Belgium

Laboratory of Socioecology and Social Evolution, KU Leuven, Naamsestraat 59, B-3000, Leuven, Belgium

Lloyd Cool, Christophe Vanderaa & Tom Wenseleers

VIB Bioinformatics Core, VIB, Rijvisschestraat 120, B-9052, Ghent, Belgium

Łukasz Kreft & Alexander Botzki

AB InBev SA/NV, Brouwerijplein 1, B-3000, Leuven, Belgium

Philippe Malcorps & Luk Daenen


Contributions

S.P., M.S. and K.J.V. conceived the experiments. S.P., M.S. and K.J.V. designed the experiments. S.P., M.S., M.R., B.H. and F.A.T. performed the experiments. S.P., M.S., L.C., C.V., L.K., A.B., P.M., L.D., T.W. and K.J.V. contributed analysis ideas. S.P., M.S., L.C., C.V., T.W. and K.J.V. analyzed the data. All authors contributed to writing the manuscript.

Corresponding author

Correspondence to Kevin J. Verstrepen .

Ethics declarations

Competing interests.

K.J.V. is affiliated with bar.on. The other authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks Florian Bauer, Andrew John Macintosh and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary information, peer review file, description of additional supplementary files, supplementary data 1, supplementary data 2, supplementary data 3, supplementary data 4, supplementary data 5, supplementary data 6, supplementary data 7, reporting summary, source data, source data, rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Schreurs, M., Piampongsant, S., Roncoroni, M. et al. Predicting and improving complex beer flavor through machine learning. Nat Commun 15 , 2368 (2024). https://doi.org/10.1038/s41467-024-46346-0


Received : 30 October 2023

Accepted : 21 February 2024

Published : 26 March 2024

DOI : https://doi.org/10.1038/s41467-024-46346-0



medRxiv

The use and impact of surveillance-based technology initiatives in inpatient and acute mental health settings: A systematic review


Background: The use of surveillance technologies is becoming increasingly common in inpatient mental health settings, commonly justified as efforts to improve safety and cost-effectiveness. However, the use of these technologies has been questioned in light of the limited research conducted and the sensitivities, ethical concerns and potential harms of surveillance. This systematic review aims to: 1) map how surveillance technologies have been employed in inpatient mental health settings, 2) identify any best practice guidance, 3) explore how they are experienced by patients, staff and carers, and 4) examine evidence regarding their impact.

Methods: We searched five academic databases (Embase, MEDLINE, PsycInfo, PubMed and Scopus), one grey literature database (HMIC) and two pre-print servers (medRxiv and PsyArXiv) to identify relevant papers published up to 18/09/2023. We also conducted backwards and forwards citation tracking and contacted experts to identify relevant literature. Quality was assessed using the Mixed Methods Appraisal Tool. Data were synthesised using a narrative approach.

Results: A total of 27 studies met the inclusion criteria. Included studies reported on CCTV/video monitoring (n = 13), Vision-Based Patient Monitoring and Management (VBPMM) (n = 6), Body Worn Cameras (BWCs) (n = 4), GPS electronic monitoring (n = 2) and wearable sensors (n = 2). Twelve papers (44.4%) were rated as low quality, five (18.5%) as medium quality, and ten (37.0%) as high quality. Five studies (18.5%) declared a conflict of interest. We identified minimal best practice guidance. Qualitative findings indicate that patient, staff and carer perceptions and experiences of surveillance technologies are mixed and complex. Quantitative findings regarding the impact of surveillance on outcomes such as self-harm, violence, aggression, care quality and cost-effectiveness were inconsistent or weak.

Discussion: There is currently insufficient evidence to suggest that surveillance technologies in inpatient mental health settings are achieving the outcomes they are employed to achieve, such as improving safety and reducing costs. The studies were generally of low methodological quality, lacked lived experience involvement, and a substantial proportion (18.5%) declared conflicts of interest. Further independent, co-produced research is needed to more comprehensively evaluate the impact of surveillance technologies in inpatient settings, including harms and benefits. If surveillance technologies are to be implemented, it will be important to engage all key stakeholders in the development of policies, procedures and best practice guidance to regulate their use, with a particular emphasis on prioritising the perspectives of patients.

Competing Interest Statement

AS and UF have undertaken and published research on BWCs. We have received no financial support from BWC or any other surveillance technology companies. All other authors declare no competing interests.

Clinical Protocols

https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=463993

Funding Statement

This study is funded by the National Institute for Health and Care Research (NIHR) Policy Research Programme (grant no. PR-PRU-0916-22003). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. ARG was supported by the Ramon y Cajal programme (RYC2022-038556-I), funded by the Spanish Ministry of Science, Innovation and Universities.


Data Availability

The template data extraction form is available in Supplementary 1. MMAT quality appraisal ratings for each included study are available in Supplementary 2. All data used is publicly available in the published papers included in this review.



COMMENTS

  1. How to review a paper

    How to review a paper. A good peer review requires disciplinary expertise, a keen and critical eye, and a diplomatic and constructive approach. Credit: dmark/iStockphoto. As junior scientists develop their expertise and make names for themselves, they are increasingly likely to receive invitations to review research manuscripts.

  2. Step by Step Guide to Reviewing a Manuscript

    Briefly summarize what the paper is about and what the findings are. Try to put the findings of the paper into the context of the existing literature and current knowledge. Indicate the significance of the work and if it is novel or mainly confirmatory. Indicate the work's strengths, its quality and completeness.

  3. How to Write a Literature Review

    Examples of literature reviews. Step 1 - Search for relevant literature. Step 2 - Evaluate and select sources. Step 3 - Identify themes, debates, and gaps. Step 4 - Outline your literature review's structure. Step 5 - Write your literature review.

  4. Writing a Literature Review

    A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research ...

  5. How to write a review paper

a critical review of the relevant literature and then ensuring that their research design, methods, results, and conclusions follow logically from these objectives (Maier, 2013). There exist a number of papers devoted to instruction on how to write a good review paper. Among the most useful for scientific reviews, in my estimation, are those by

  6. How to Write a Peer Review

    Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom. Here's how your outline might look: 1. Summary of the research and your overall impression. In your own words, summarize what the manuscript ...

  7. How to write a superb literature review

    The best proposals are timely and clearly explain why readers should pay attention to the proposed topic. It is not enough for a review to be a summary of the latest growth in the literature: the ...

  8. How to Review a Scientific Paper in 10 Easy Steps

    Start by reading the paper thoroughly and gaining a clear understanding of its content. Take note of the research question, methodology, data analysis, results, and conclusions. Identify any areas where you have expertise or concerns. 3. Evaluate the Paper's Structure and Clarity:

  9. How to write the literature review of your research paper

    The main purpose of the review is to introduce the readers to the need for conducting the said research. A literature review should begin with a thorough literature search using the main keywords in relevant online databases such as Google Scholar, PubMed, etc. Once all the relevant literature has been gathered, it should be organized as ...

  10. How to write a thorough peer review

    4. Other, lesser suggestions and final comments. Now, read your review carefully, and preferably aloud: if you stumble when reciting your own text, then readers will probably do the same. Reading ...

  11. How to write a good scientific review article

    Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up-to-date with developments in a particular area of research.

  12. How to conduct a review

    Respond to the invitation as soon as you can (even if it is to decline) — a delay in your decision slows down the review process and means more waiting for the author. If you do decline the invitation, it would be helpful if you could provide suggestions for alternative reviewers. 2. Managing your review.

  13. Ten Simple Rules for Writing a Literature Review

Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications. For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively. Given such mountains of papers, scientists cannot be expected to examine in detail every ...

  14. How to review a paper

    state the objective, the problem - the research question to be addressed, provide a concise background: why the work was done, quote literature only with direct bearing on the problem - not a textbook, state a hypothesis - a suggested solution to the problem. Conclusions. This is the "take-home message" of the paper. Should be short and ...

  15. Writing a Literature Review Research Paper: A step-by-step approach

A literature review surveys scholarly articles, books and other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and ...

  16. How to write a review paper

Writing the Review. 1. Good scientific writing tells a story, so come up with a logical structure for your paper, with a beginning, middle, and end. Use appropriate headings and sequencing of ideas to make the content flow and guide readers seamlessly from start to finish.

  17. A step-by-step guide to peer review: a template for patients and novice

    Table 1. Peer review template for patients and other novice reviewers. Name of journal. Insert the name of the journal here. The journal's area of focus. Type the area of focus here (eg, oncology and health literacy) Title of manuscript. Insert the title of the manuscript you are reviewing. Link to review website.

  18. What is a review article?

    A review article can also be called a literature review, or a review of literature. It is a survey of previously published research on a topic. It should give an overview of current thinking on the topic. And, unlike an original research article, it will not present new experimental results. Writing a review of literature is to provide a ...

  19. How to write a good scientific review article

    A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the ...

  20. (PDF) How to review a paper

    A Review Quality Instrument (RQI) that assesses the extent to which a reviewer has commented on five aspects of a manuscript (importance of the research question, originality of the paper ...

  21. How to Write a Research Paper

    Create a research paper outline. Write a first draft of the research paper. Write the introduction. Write a compelling body of text. Write the conclusion. The second draft. The revision process. Research paper checklist. Free lecture slides.

  22. (PDF) How to Review a Research Paper

    The state of evidence: what we know and what we don't know about journal peer review In Godlee F, Jefferson T, editors. Peer review in health sciences. Second edition. London: BMJ Books, 2003:45 ...

  23. Writing a good review article

    Here are a few practices that can make the time-consuming process of writing a review article easier: Define your question: Take your time to identify the research question and carefully articulate the topic of your review paper. A good review should also add something new to the field in terms of a hypothesis, inference, or conclusion.

