THE PAPER REVIEW GENERATOR  

This tool is designed to speed up the writing of reviews for computer science research papers. It provides a list of items that can be used to automatically generate a review draft. This website should not replace a human reviewer: generated text should be edited by the reviewer to add more detail.

How to use it: click the checkboxes below and the review will be auto-generated according to your selection.

The checklist covers the following sections: introduction, related work, problem definition, experiments, reproducibility, and the generated review.
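The generator's basic mechanism, selected checkbox items mapped to canned review sentences that are concatenated into a draft, can be sketched in a few lines. This is a minimal illustration only, assuming hypothetical item keys and snippet texts; it is not the website's actual content or source code.

```python
# Minimal sketch of a checkbox-driven review generator.
# The item keys and snippet texts below are illustrative placeholders,
# not the phrases used by the actual website.

REVIEW_SNIPPETS = {
    "related_work_missing": "The related work section omits several recent studies.",
    "problem_unclear": "The problem definition should be stated more precisely.",
    "experiments_weak": "The experimental evaluation uses too few datasets.",
    "not_reproducible": "The paper lacks the detail needed to reproduce the results.",
}

def generate_review(selected: list[str]) -> str:
    """Assemble a draft review from the checkbox items the reviewer ticked."""
    lines = [REVIEW_SNIPPETS[key] for key in selected if key in REVIEW_SNIPPETS]
    return "\n".join(lines) if lines else "No items selected."

if __name__ == "__main__":
    print(generate_review(["problem_unclear", "not_reproducible"]))
```

Whatever the real implementation looks like, the reviewer still edits the assembled draft afterwards, as noted above.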

About this tool

This website was created by Philippe Fournier-Viger by modifying the Autoreject project of Andreas Zeller (https://autoreject.org/) and replacing the textual content, so as to turn what was a joke into a serious tool. By using this website, you agree to use it ethically and responsibly. If you have any suggestions to improve this tool or want to report bugs, you can contact me. License of the webpage: Creative Commons Attribution 3.0 Unported (https://creativecommons.org/licenses/by/3.0/). License of the source code that displays the content: MIT (https://mit-license.org/).

Some other websites by me: I have also made some useful online text processing tools.



LITERATURE REVIEW SOFTWARE FOR BETTER RESEARCH


“This tool really helped me to create good bibtex references for my research papers”

Ali Mohammed-Djafari

Director of Research at LSS-CNRS, France

“Any researcher could use it! The paper recommendations are great for anyone and everyone”

Swansea University, Wales

“As a student just venturing into the world of lit reviews, this is a tool that is outstanding and helping me find deeper results for my work.”

Franklin Jeffers

South Oregon University, USA

“One of the 3 most promising tools that (1) do not solely rely on keywords, (2) does nice visualizations, (3) is easy to use”

Singapore Management University

“Incredibly useful tool to get to know more literature, and to gain insight in existing research”

KU Leuven, Belgium

“Seeing my literature list as a network enhances my thinking process!”

Katholieke Universiteit Leuven, Belgium

“I can’t live without you anymore! I also recommend you to my students.”

Professor at The Chinese University of Hong Kong

“This has helped me so much in researching the literature. Currently, I am beginning to investigate new fields and this has helped me hugely”

Aran Warren

Canterbury University, NZ

“It's nice to get a quick overview of related literature. Really easy to use, and it helps getting on top of the often complicated structures of referencing”

Christoph Ludwig

Technische Universität Dresden, Germany

“Litmaps is extremely helpful with my research. It helps me organize each one of my projects and see how they relate to each other, as well as to keep up to date on publications done in my field”

Daniel Fuller

Clarkson University, USA

“Litmaps is a game changer for finding novel literature... it has been invaluable for my productivity.... I also got my PhD student to use it and they also found it invaluable, finding several gaps they missed”

Varun Venkatesh

Austin Health, Australia


Mastering Literature Reviews with Litmaps


Career Feature | 04 December 2020 | Correction 09 December 2020

How to write a superb literature review

Andy Tay is a freelance writer based in Singapore.


Literature reviews are important resources for scientists. They provide historical context for a field while offering opinions on its future trajectory. Creating them can provide inspiration for one’s own research, as well as some practice in writing. But few scientists are trained in how to write a review — or in what constitutes an excellent one. Even picking the appropriate software to use can be an involved decision (see ‘Tools and techniques’). So Nature asked editors and working scientists with well-cited reviews for their tips.


doi: https://doi.org/10.1038/d41586-020-03422-x

Interviews have been edited for length and clarity.

Updates & Corrections

Correction 09 December 2020: An earlier version of the tables in this article included some incorrect details about the programs Zotero, Endnote and Manubot. These have now been corrected.



Discovering the evolution of online reviews: A bibliometric review

Research Paper | Published: 22 September 2023 | Electronic Markets, Volume 33, article number 49 (2023)

Authors: Yucheng Zhang, Zhiling Wang, Lin Xiao (ORCID: orcid.org/0000-0002-2259-6357), Lijun Wang & Pei Huang


As a rapidly developing topic, online reviews have aroused great interest among researchers. Although the existing research can help to explain issues related to online reviews, the scattered and diversified nature of previous research hinders an overall understanding of this area. Based on bibliometrics, this study analyzes 3089 primary articles and 100,783 secondary articles published between 2003 and 2022. We comprehensively and objectively describe the development status of online reviews, show the evolutionary process of the knowledge structure of online reviews, and suggest research directions based on the analysis results. This article validates and expands previous literature reviews, helps scholars understand relevant knowledge about online reviews, and contributes to the development of online reviews.
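The science mapping described in this abstract rests on standard bibliometric primitives such as keyword co-occurrence and co-citation counting. The sketch below illustrates the co-occurrence idea on made-up records; it is not the study's data, tooling, or code.

```python
# Toy sketch of keyword co-occurrence counting, one basic building block
# of bibliometric science mapping. The records are invented for illustration.
from collections import Counter
from itertools import combinations

papers = [
    {"title": "Paper A", "keywords": ["online reviews", "eWOM", "sales"]},
    {"title": "Paper B", "keywords": ["online reviews", "helpfulness"]},
    {"title": "Paper C", "keywords": ["eWOM", "sales", "helpfulness"]},
]

cooccurrence = Counter()
for paper in papers:
    # Count each unordered pair of keywords appearing together in one paper.
    for a, b in combinations(sorted(set(paper["keywords"])), 2):
        cooccurrence[(a, b)] += 1

for (a, b), count in cooccurrence.most_common():
    print(f"{a} -- {b}: {count}")
```

In practice such counts are computed over thousands of records and fed into mapping tools (e.g. VOSviewer or CiteSpace) rather than printed directly.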



Acknowledgements

Yucheng Zhang received financial support from the National Science Foundation of China (grant nos. 71972065, 72272048, and 71872077) and a Ministry of Education project (grant no. 21JhQ088).

Author information

Authors and Affiliations

School of Economics and Management, Hebei University of Technology, Tianjin, 30041, China

Yucheng Zhang & Zhiling Wang

College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China

School of Business, East China University of Science and Technology, Shanghai, 200237, China

School of Art and Science, University of Rochester, Rochester, NY, 14602, USA


Corresponding author

Correspondence to Lin Xiao.

Additional information

Responsible Editor: Fabio Lobato



About this article

Zhang, Y., Wang, Z., Xiao, L., et al. Discovering the evolution of online reviews: A bibliometric review. Electronic Markets 33, 49 (2023). https://doi.org/10.1007/s12525-023-00667-y


Received: 07 May 2023 | Accepted: 22 August 2023 | Published: 22 September 2023


Keywords

  • Online reviews
  • Bibliometrics
  • Science mapping
  • Literature review


How to write a good scientific review article

Affiliation: The FEBS Journal Editorial Office, Cambridge, UK.

PMID: 35792782 | DOI: 10.1111/febs.16565

Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up to date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the importance of building review-writing into a scientific career cannot be overstated. In this instalment of The FEBS Journal's Words of Advice series, I provide detailed guidance on planning and writing an informative and engaging literature review.

© 2022 Federation of European Biochemical Societies.



A systematic review of online examinations: A pedagogical innovation for scalable authentication and integrity

Kerryn Butler-Henderson

a College of Health and Medicine, University of Tasmania, Locked Bag 1322, Launceston, Tasmania, 7250, Australia

Joseph Crawford

b Academic Division, University of Tasmania, Locked Bag 1322, Launceston, Tasmania, 7250, Australia

Digitization and automation across all industries have resulted in improvements in the efficiency and effectiveness of systems and processes, and the higher education sector is not immune. Online learning, e-learning, electronic teaching tools, and digital assessments are not new innovations. However, there has been limited implementation of online invigilated examinations in many countries. This paper provides a brief background on online examinations, followed by the results of a systematic review on the topic to explore the challenges and opportunities. We follow on with an explication of results from thirty-six papers, exploring nine key themes: student perceptions, student performance, anxiety, cheating, staff perceptions, authentication and security, interface design, and technology issues. While the literature on online examinations is growing, there is still a dearth of discussion at the pedagogical and governance levels.

  • There is little score variation between examination modalities.
  • Online exams offer various methods for mitigating cheating.
  • Students rate online examinations favorably.
  • Staff preferred online examinations for their ease of completion and logistics.
  • The interface of a system continues to be an enabler of, or barrier to, online exams.

1. Introduction

Learning and teaching is transforming away from the conventional lecture theatre designed to seat 100 to 10,000 passive students towards more active learning environments. In our current climate, this is exacerbated by COVID-19 responses ( Crawford et al., 2020 ), where thousands of students are involved in online adaptations of face-to-face examinations (e.g. online Zoom rooms with all microphones and videos locked on). This evolution has grown from the recognition that students now rarely study exclusively and have commitments that conflict with their university life (e.g. work, family, social obligations). Students have more diverse digital capabilities ( Margaryan et al., 2011 ) and greater age and gender diversity ( Eagly & Sczesny, 2009 ; Schwalb & Sedlacek, 1990 ). Continual change in the demographic and profile of students creates a challenge for scholars seeking to develop a student experience that demonstrates quality and maintains financial and academic viability ( Gross et al., 2013 ; Hainline et al., 2010 ).

Universities are developing extensive online offerings to grow their international loads and facilitate the massification of higher learning. These protocols, informed by growing policy targets to educate a larger quantity of graduates (e.g. Kemp, 1999 ; Reiko, 2001 ), have challenged traditional university models of fully on-campus student attendance. The development of online examination software has offered a systematic and technological alternative to the end-of-course summative examination designed for final authentication and testing of student knowledge retention, application, and extension. As a result of the COVID-19 pandemic, the initial response in higher education across many countries was to postpone examinations ( Crawford et al., 2020 ). However, as the pandemic continued, the need to move to either an online examination format or alternative assessment became more urgent.

This paper is a timely exploration of the contemporary literature on online examinations in the university setting, with the aim of consolidating information on this relatively new pedagogy in higher education. It begins with a brief background on traditional examinations, as the assumptions applied in many online examination environments build on the techniques and assumptions of traditional face-to-face, gymnasium-housed invigilated examinations. This is followed by a summary of the systematic review method, including the search strategy, procedure, quality review, analysis, and a summary of the sample.

Print-based educational examinations designed to test knowledge have existed for hundreds of years. The New York State Education Department has “the oldest educational testing service in the United States” and has been delivering entrance examinations since 1865 ( Johnson, 2009 , p. 1; NYSED, 2012 ). In pre-Revolution Russia, it was not possible to obtain a diploma to enter university without passing high-stakes graduation examinations ( Karp, 2007 ). These high school examinations assessed and assured student learning under rigid, high-security conditions. Under traditional classroom conditions, these were likely a reasonable practice to validate knowledge. Authenticating learning was not a consideration at this stage, as students were taught face to face only. For many high school jurisdictions, these examinations are designed to strengthen the accountability of teachers and assess student performance ( Mueller & Colley, 2015 ).

In tertiary education, the use of an end-of-course summative examination as a form of validating knowledge has been informed significantly by accreditation bodies and streamlined financially viable assessment options. The American Bar Association has required a final course examination to remain accredited ( Sheppard, 1996 ). Law examinations typically contained brief didactic questions focused on assessing rote memory through to problem-based assessment to evaluate students’ ability to apply knowledge ( Sheppard, 1996 ). In accredited courses, there are significant parallels. Alternatives to traditional gymnasium-sized classroom paper-and-pencil invigilated examinations have been developed with educators recognizing the limitations associated with single-point summative examinations ( Butt, 2018 ).

The objective structured clinical examinations (OSCE) incorporate multiple workstations with students performing specific practical tasks from physical examinations on mannequins to short-answer written responses to scenarios ( Turner & Dankoski, 2008 ). The OSCE has parallels with the patient simulation examination used in some medical schools ( Botezatu et al., 2010 ). Portfolios assess and demonstrate learning over a whole course and for extracurricular learning ( Wasley, 2008 ).

The inclusion of online examinations, e-examinations, and bring-your-own-device models has offered alternatives to large-scale examination rooms with paper-and-pencil invigilated examinations. Each of these offers new opportunities for the inclusion of innovative pedagogies and assessment where examinations are considered necessary. Further, some research indicates online examinations are able to discern a true pass from a true fail with a high level of accuracy ( Ardid et al., 2015 ), yet there is no systematic consolidation of the literature. We believe this timely review is critical for the progression of the field, first stepping back and consolidating existing practices to support dissemination and further innovation. The pursuit of such systems may be to provide formative feedback and to assess learning outcomes, but a dominant rationale for final examinations is to authenticate learning. That is, to ensure the student whose name is on the student register is the student who is completing the assessed work. Digitalized examination pilot studies and case studies are becoming an expected norm as universities develop responses to a growing online curriculum offering (e.g. Al-Hakeem & Abdulrahman, 2017 ; Alzu'bi, 2015 ; Anderson et al., 2005 ; Fluck et al., 2009 ; Fluck et al., 2017 ; Fluck, 2019 ; Seow & Soong, 2014 ; Sindre & Vegendla, 2015 ; Steel et al., 2019 ; Wibowo et al., 2016 ).

As many scholars highlight, cheating is a common component of the contemporary student experience ( Jordan, 2001 ; Rettinger & Kramer, 2009 ), even though it should not be. Some are theorizing responses to the inevitability of cheating, from developing student capacity for integrity ( Crawford, 2015 ; Wright, 2011 ) to enhancing detection of cheating ( Dawson & Sutherland-Smith, 2018 , 2019 ) and legislation to ban contract cheating ( Amigud & Dawson, 2020 ). We see value in the pursuit of methods that can support integrity in student assessment, including during rapid changes to the curriculum. The objective of this paper is to summarize the current evidence on online examination methods, and scholarly responses to authentication of learning and the mitigation of cheating, within the confines of assessment that enables learning and student wellbeing. We scope out preparation for examinations (e.g. Nguyen & Henderson, 2020 ) to enable focus on the online exam setting specifically.

2. Material and methods

2.1. Search strategy

To address the objective of this paper, a systematic literature review was undertaken, following the PRISMA approach for article selection ( Moher et al., 2009 ). The keyword string was developed incorporating the U.S. National Library of Medicine (2019) MeSH (Medical Subject Headings) terms: [(“online” OR “electronic” OR “digital”) AND (“exam*” OR “test”) AND (“university” OR “educat*” OR “teach” OR “school” OR “college”)]. The following databases were queried: A + Education (Informit), ERIC (EBSCO), Education Database (ProQuest), Education Research Complete (EBSCO), Educational Research Abstracts Online (Taylor & Francis), Informit, and Scopus. These search phrases will enable the collection of a broad range of literature on online examinations as well as terms often used synonymously, such as e-examination/eExams and BYOD (bring-your-own-device) examinations. The eligibility criteria included peer-reviewed journal articles or full conference papers on online examinations in the university sector, published between 2009 and 2018, available in English. As other sources (e.g. dissertations) are not peer-reviewed, and we aimed to identify rigorous best practice literature, we excluded these. We subsequently conducted a general search in Google Scholar and found no additional results. All records returned from the search were extracted and imported into the Covidence® online software by the first author.
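When the same query is run against several databases, the boolean string above can be assembled programmatically from its three term groups. The sketch below only illustrates that assembly; the helper function is hypothetical and not part of the paper's method.

```python
# Sketch: building the boolean search string from the three term groups
# listed above. The function name is illustrative, not from the paper.

def build_query(groups: list[list[str]]) -> str:
    """AND together groups of OR'd terms, quoting every term."""
    or_blocks = ["(" + " OR ".join(f'"{term}"' for term in group) + ")"
                 for group in groups]
    return " AND ".join(or_blocks)

groups = [
    ["online", "electronic", "digital"],
    ["exam*", "test"],
    ["university", "educat*", "teach", "school", "college"],
]

print(build_query(groups))
# Output (wrapped here for readability):
# ("online" OR "electronic" OR "digital") AND ("exam*" OR "test") AND
# ("university" OR "educat*" OR "teach" OR "school" OR "college")
```

Individual databases differ in wildcard and field syntax, so such a string would normally be adapted per platform.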

2.2. Selection procedure and quality assessment

The online Covidence® software facilitated article selection following the PRISMA approach. Each of the 1906 titles and abstracts were double-screened by the authors based on the eligibility criteria. We also excluded non-higher education examinations, given the context around student demographics is often considerably different than vocational education, primary and high schools. Where there was discordance between the authors on a title or abstract inclusion or exclusion, consensus discussions were undertaken. The screening reduced the volume of papers significantly because numerous papers related to a different education context or involved online or digital forms of medical examinations. Next, the full-text for selected abstracts were double-reviewed, with discordance managed through a consensus discussion. The papers selected following the double full-text review were accepted for this review. Each accepted paper was reviewed for quality using the MMAT system ( Hong et al., 2018 ) and the scores were calculated as high, medium, or low quality based on the matrix ( Hong et al., 2018 ). A summary of this assessment is presented in Table 1 .

Table 1. Summary of article characteristics. QAS, quality assessment score.
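The double-screening workflow described above, where both authors vote on every record and disagreements go to a consensus discussion, amounts to simple bookkeeping. The sketch below uses invented record IDs and decisions purely for illustration; in the study this was handled by the Covidence® software, not by custom code.

```python
# Sketch of double-screening bookkeeping in the spirit of the PRISMA
# process described above. Record IDs and votes are invented.

records = {
    "rec-001": {"reviewer_a": "include", "reviewer_b": "include"},
    "rec-002": {"reviewer_a": "exclude", "reviewer_b": "include"},
    "rec-003": {"reviewer_a": "exclude", "reviewer_b": "exclude"},
}

included, excluded, conflicts = [], [], []
for rec_id, votes in records.items():
    if votes["reviewer_a"] == votes["reviewer_b"] == "include":
        included.append(rec_id)
    elif votes["reviewer_a"] == votes["reviewer_b"] == "exclude":
        excluded.append(rec_id)
    else:
        conflicts.append(rec_id)  # to be resolved by consensus discussion

print(f"included={included}  excluded={excluded}  needs consensus={conflicts}")
```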

2.3. Thematic analysis

Following the process described by Braun and Clarke (2006) , an inductive thematic approach was undertaken to identify common themes identified in each article. This process involves six stages: data familiarization, data coding, theme searching, theme review, defining themes, and naming themes. Familiarization with the literature was achieved during the screening, full-text, and quality review process by triple exposure to works. The named authors then inductively coded half the manuscripts each. The research team consolidated the data together to identify themes. Upon final agreement of themes and their definitions, the write-up was split among the team with subsequent review and revision of ideas in themes through independent and collaborative writing and reviewing ( Creswell & Miller, 2000 ; Lincoln & Guba, 1985 ). This resulted in nine final themes, each discussed in-depth during the discussion.

3. Results

There were thirty-six (36) articles identified that met the eligibility criteria and were selected following the PRISMA approach, as shown in Fig. 1.

Fig. 1. PRISMA results.

3.1. Characteristics of selected articles

The selected articles come from a wide range of discipline areas and countries. Table 1 summarizes the characteristics of the selected articles. The United States of America accounted for the largest share (14, 38.9%) of the publications on online examinations, followed by Saudi Arabia (4, 11.1%), China (2, 5.6%), and Australia (2, 5.6%). When aggregated at the region level, North America and Asia contributed equal numbers of papers (14, 38.9% each), with Europe (6, 16.7%) and Oceania (2, 5.6%) least represented in the selection of articles. There has been considerable growth in publications concerning online examinations in the past five years. Publications between 2009 and 2015 represented a third (12, 33.3%) of the total number of selected papers, while the majority (24, 66.7%) were published in the last three years. Papers that described a system but did not include empirical evidence scored a low quality rank, as they did not meet many of the criteria that relate to the evaluation of a system.

When examining the types of papers, the majority (30, 83.3%) were empirical research, with the remainder commentary papers (6, 16.7%). Most papers reported a quantitative study design (32, 88.9%), compared to two (5.6%) qualitative study designs and two (5.6%) that used mixed methods. For quantitative studies, student samples ranged between nine and 1800 participants (x̄ = 291.62) across 26 studies, and staff samples ranged between two and 85 participants (x̄ = 30.67) in one study. The most common quantitative methods were self-administered surveys and analysis of numerical examination grades (38% each). Qualitative and mixed-methods studies adopted only interviews (6%). Only one qualitative study reported a sample of students (n = 4), with two qualitative studies reporting samples of staff (n = 2, n = 5).

3.2. Student perceptions

Today's students prefer online examinations to paper-based exams (68.75% preference for online over paper-based examinations: Attia, 2014; 56–62.5%: Böhmer et al., 2018; no percentage reported: Schmidt, Ralph & Buskirk, 2009; 92%: Matthíasdóttir & Arnalds, 2016; no percentage reported: Pagram et al., 2018; 51%: Park, 2017; 84%: Williams & Wong, 2009). Two reasons provided for this preference are the increased speed and ease of editing responses (Pagram et al., 2018), and one study found that two-thirds (67%) of students reported a positive experience in the online examination environment (Matthíasdóttir & Arnalds, 2016). Students believe online examinations allow a more authentic assessment experience (Williams & Wong, 2009), with 78 percent of students reporting consistency between the online environment and their future real-world environment (Matthíasdóttir & Arnalds, 2016).

Students perceive that online examinations save time (75.0% of students surveyed) and are more economical (87.5%) than paper examinations (Attia, 2014). They provide greater flexibility for completing examinations (Schmidt et al., 2009), with faster access to examination papers for remote students (87.5%), and students trust the results of online over paper-based examinations (78.1%: Attia, 2014). The majority of students (59.4%: Attia, 2014; 55.5%: Pagram et al., 2018) perceive that the online examination environment makes it easier to cheat. More than half (56.25%) of students believe that a lack of information and communication technology (ICT) skills does not adversely affect performance in online examinations (Attia, 2014). Abdel Karim and Shukur (2016) reported that the most preferred typeface was Arial (preferred by 23% of students), a font also recommended by Vision Australia (2014) in their guidelines for online and print inclusive design and legibility. Most students (87%) preferred black text on a white background. With regard to on-screen time counters, a countdown counter was the most preferred option (42%), compared to a traditional analogue clock (30%) or an ascending counter (22%). Many systems allow students to set their preferred remaining-time reminder or alert, with 15 minutes remaining preferred by 35% of students, 5 minutes remaining by 26%, mid-examination by 15%, and 30 minutes remaining by 13%.

3.3. Student performance

Several studies in the sample referred to a lack of score variation between the results of examinations across different administration methods. For example, student performance showed no significant difference in final examination scores across online and traditional examination modalities (Gold & Mozes-Carmel, 2017). This is reinforced by a test of the validity and reliability of computer-based and paper-based assessment that demonstrated no significant difference (Oz & Ozturan, 2018), and by the equality of grades identified across the two modalities (Stowell & Bennett, 2010).

When considering student perceptions, the studies documented in our sample tended to report favorable ratings of online examinations. In a small sample of 34 postgraduate students, respondents had positive perceptions of online learning assessments (67.4%), believed these contributed to improved learning and feedback (67.4%), and 77 percent had favorable attitudes towards online assessment (Attia, 2014). In a pre-examination survey, students indicated they preferred typing to writing, felt more confident about the examination, and had limited issues with software and hardware (Pagram et al., 2018). In a post-examination survey of the same sample, within the design and technology examination, students felt the software and hardware were simple to use, yet many did not feel at ease using an e-examination.

Rios and Liu (2017) compared proctored and non-proctored online examinations across several aspects, including test-taking behavior. Their study did not identify any difference in the test-taking behavior of students between the two environments: there was no significant difference in omitted items or not-reached items, nor in rapid guessing. A negligible difference existed for students older than thirty-five years, and gender was a nonsignificant factor.

3.4. Anxiety

Scholars have an increasing awareness of the role that test anxiety plays in reducing student success in online learning environments (Kolski & Weible, 2018). The manuscripts identified by the literature scan reported inconsistent results for the effect that examination modality has on student test anxiety. A study of 69 psychology undergraduates identified that students who typically experienced high anxiety in traditional test environments had lower anxiety levels when completing an online examination (Stowell & Bennett, 2010). In a quasi-experimental study (n = 38 nursing students), when baseline anxiety was controlled, students taking computer-based examinations had higher degrees of test anxiety (Kolagari et al., 2018).

In interviews with 34 postgraduate students, only three opposed online assessment, based on a perceived lack of technical skill (e.g. typing; Attia, 2014). Around two-thirds of participants identified some form of fear based on internet disconnection, electricity supply, slow typing, or family disturbances at home. A 37-participant community college study used proximal indicators (e.g. lip licking and biting, furrowed eyebrows, and seat squirming) to assess the rate of test anxiety in webcam-based examination proctoring (Kolski & Weible, 2018). Teacher strategies to reduce anxiety in students include enabling students to consider, review, and acknowledge their anxieties (Kolski & Weible, 2018). Interventions such as having students write about their anxiety, or respond to a multiple-choice questionnaire on test anxiety, reduced anxiety. Students in the test group who were provided with anxiety items or expressive writing exercises performed better (Kumar, 2014).

3.5. Cheating

Cheating was the most prevalent area among all the themes identified. Cheating in asynchronous, objective, online assessments is argued by some to be at unconscionable levels (Sullivan, 2016). In one survey, 73.6 percent of students felt it was easier to cheat on online examinations than on regular examinations (Aisyah et al., 2018). This is perhaps because students are closely monitored in paper-and-pencil examinations, whereas online examinations require greater control of variables to mitigate cheating. Some instructors have used randomized examination batteries to minimize the potential for cheating through peer-to-peer sharing (Schmidt et al., 2009).

Scholars identify various methods for mitigating cheating. Hearn Moore et al. (2017) list the challenges to be managed: identifying the test taker; preventing examination theft; unauthorized use of textbooks or notes; preparing a set-up for the online examination; unauthorized student access to a test bank; preventing the use of devices (e.g. phones, Bluetooth, and calculators); limiting access to other people during the examination; equitable access to equipment; identifying computer crashes; and inconsistency of proctoring methods. In another study, the issue of cheating is framed as social as well as technological; while technology is considered the current norm for reducing cheating, these tools have been mostly ineffective (Sullivan, 2016). Access to multiple question banks through effective quiz design and delivery is one mechanism to reduce the propensity to cheat, by reducing the stakes through multiple delivery attempts (Sullivan, 2016). Question and answer randomization, continuous question development, multiple examination versions, open-book options, time stamps, and diversity in question formats, sequences, types, and frequency are used to manage the perception of and potential for cheating; a simple sketch of such randomization is given below. In a study with MBA students, the perception of the ability to cheat seemed to be critical for the development of a safe online examination environment (Sullivan, 2016).
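To make the question-randomization strategies above concrete, the following is a minimal sketch, not drawn from any of the reviewed systems, of per-student examination assembly from topic-based question banks; the bank structure, field names, and deterministic per-student seeding are illustrative assumptions.

```python
import random
from typing import Dict, List

def build_exam(question_bank: Dict[str, List[dict]],
               per_topic: int,
               student_id: str,
               exam_salt: str = "exam-2024-s1") -> List[dict]:
    """Draw a per-student random subset from each topic bank and shuffle
    question order and answer options, so no two students see the same paper.

    `question_bank` maps a topic name to a list of questions shaped like
    {"stem": str, "options": [str, ...], "answer": str} (illustrative format).
    """
    # Seed deterministically per student so the same paper can be regenerated
    # later without storing every generated exam.
    rng = random.Random(f"{exam_salt}:{student_id}")

    exam = []
    for topic, questions in question_bank.items():
        drawn = rng.sample(questions, k=min(per_topic, len(questions)))
        for q in drawn:
            q = dict(q)  # copy so the bank itself is not mutated
            options = q["options"][:]
            rng.shuffle(options)  # randomize answer order as well
            q["options"] = options
            q["topic"] = topic
            exam.append(q)
    rng.shuffle(exam)  # randomize question order across topics
    return exam

# Example usage with a toy two-topic bank.
bank = {
    "integrity": [{"stem": f"Q{i}", "options": ["A", "B", "C"], "answer": "A"}
                  for i in range(10)],
    "policy": [{"stem": f"P{i}", "options": ["A", "B", "C"], "answer": "B"}
               for i in range(10)],
}
paper = build_exam(bank, per_topic=3, student_id="s1234567")
print([q["stem"] for q in paper])
```

Seeding the generator with the student identifier is a convenience assumption here: it lets staff regenerate exactly the paper a given student saw, for marking or appeals, without storing every generated exam.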

Dawson (2016), in a review of bring-your-own-device examinations, identified potential hacks including:

  • Copying contents of a USB to a hard drive to make a copy of the digital examination available to others,
  • Use of a virtual machine to maintain access to standard applications on their device,
  • USB keyboard hacks to allow easy access to other documents (e.g. personal notes),
  • Modifying software to maintain complete control of their own device, and
  • A cold boot attack to maintain a copy of the examination.

The research on cheating has focused mainly on technical challenges (e.g. hardware to support cheating), rather than ethical and social issues (e.g. behavioral development to curb future cheating behaviors). The latter has been researched in more depth in traditional assessment methods (e.g. Wright, 2015 ). In a study on Massive Open Online Courses (MOOCs), motivations for students to engage in optional learning stemmed from knowledge, work, convenience, and personal interest ( Shapiro et al., 2017 ). This provides possible opportunities for future research to consider behavioral elements for responding to cheating, rather than institutional punitive arrangements.

3.6. Staff perception

Schmidt et al. (2009) also examined the perceptions of academics with regard to online examinations. Academics reported that their biggest concern with using online examinations is the potential for cheating; there was a perception that students may get assistance during an examination. The reliability of the technology was the second most critical concern of academic staff, including concerns about internet connectivity as well as computer or software issues. The third concern related to ease of use, both for academics and for students: academics want a system in which examinations are easy and quick to create, manage, and mark, and which students with proficient ICT skills can use (Schmidt et al., 2009). Furthermore, staff in a different study reported that marking digital work was easier, and preferred it over paper examinations because of the reduction in paper (Pagram et al., 2018). They believe preference should be given to using university machines rather than students' own computers, mainly due to issues around operating system compatibility and data loss.

3.7. Authentication and security

Authentication was recognized as a significant issue for examinations. Some scholars indicate that the primary reason for requiring physical attendance at proctored examinations is to validate and authenticate the student taking the assessment (Chao et al., 2012). Importantly, the validity of online proctored examination administration procedures is argued to be lower than that of proctored on-campus examinations (Rios & Liu, 2017). Most responses to online examinations use bring-your-own-device models in which laptops are brought to traditional lecture theatres, software is used on personal devices in any location desired, or prescribed devices are used in a classroom setting. The primary goal of each is to authenticate students while maintaining the integrity and value of the learning outcomes achieved.

In a review of current authentication options (AbuMansour, 2017), fingerprint reading, streaming media, and follow-up identification were used to authenticate small cohorts of students. Some learning management systems (LMS) have developed subsidiary products (e.g. Weaver within Moodle) to support authentication processes. Some biometric software authenticates at several levels: keystroke dynamics for motor control, stylometry for linguistics, and application behavior for semantics, through the capture of physical or behavioral samples, extraction of unique features, comparison using distance measures, and recording of the authentication decision. Development of online examinations should be oriented towards the same theory as open-book examinations.
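As a toy illustration of the keystroke-dynamics level described above, the sketch below enrols a typing profile and checks a later sample against it with a simple relative-error threshold; the two features (hold and flight times), the tolerance value, and the function names are assumptions for illustration only, and real biometric products use far richer statistical or machine-learning models.

```python
from statistics import mean
from typing import List, Tuple

# Each sample is a list of (hold_time, flight_time) pairs in milliseconds,
# e.g. taken while the student types a fixed enrolment phrase.
Sample = List[Tuple[float, float]]

def profile_from_samples(samples: List[Sample]) -> Tuple[float, float]:
    """Enrolment: average hold and flight times over several typing samples."""
    holds = [h for s in samples for (h, _) in s]
    flights = [f for s in samples for (_, f) in s]
    return mean(holds), mean(flights)

def matches_profile(profile: Tuple[float, float],
                    live: Sample,
                    tolerance: float = 0.25) -> bool:
    """Accept the live sample if its mean timings are within `tolerance`
    (relative error) of the enrolled profile on both features."""
    ref_hold, ref_flight = profile
    live_hold = mean(h for h, _ in live)
    live_flight = mean(f for _, f in live)
    return (abs(live_hold - ref_hold) / ref_hold <= tolerance and
            abs(live_flight - ref_flight) / ref_flight <= tolerance)

# Toy example: an enrolled profile and one intra-examination check.
enrolled = profile_from_samples([
    [(95, 180), (100, 170), (105, 190)],
    [(98, 175), (102, 185), (97, 178)],
])
live_sample = [(99, 181), (103, 176), (96, 188)]
print(matches_profile(enrolled, live_sample))  # True for this toy data
```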

A series of models is proposed in our literature sample. AbuMansour (2017) proposes a series of processes: developing examinations that minimize cheating (e.g. question batteries), deploying authentication techniques (e.g. keystrokes and fingerprints), and conducting post-hoc assessments to search for cheating. The Aisyah et al. (2018) model identifies two perspectives from which to conceptualize authentication systems: examinee and administrator. From the examinee perspective, there are points of authentication at the pre-, intra-, and post-examination periods. From the administrative perspective, photographic authentication from the pre- and intra-examination periods can be used to validate the examinee. The open book open web (OBOW: Mohanna & Patel, 2016) model applies authentic assessment to place the learner in the role of a decision-maker and expert witness, with validation achieved by avoiding any question that could have a generic answer.

The Smart Authenticated Fast Exams (SAFE: Chebrolu et al., 2017) model uses application focus (continuously tracking the focus of the examinee), logging (phone state, phone identification, and Wi-Fi status), a visual password (a password that is presented visually and not easily communicated without a photograph), Bluetooth neighborhood logging (to check for nearby devices), ID checks, a digitally signed application, random device swaps, and the avoidance of bring-your-own-device models. An online comprehensive examination (OCE) was used in a National Board Dental Examination context to test knowledge in a home environment, with 200 multiple-choice questions and the ability to take the test multiple times for formative knowledge development.

Some scholars recommend online synchronous assessments as an alternative to traditional proctored examinations while maintaining the ability to manually authenticate (Chao et al., 2012). In these assessments, quizzes test factual knowledge, practical tasks test procedural knowledge, essays test conceptual knowledge, and oral assessments test metacognitive knowledge. A 'cyber face-to-face' element is required to enable the validation of students.

3.8. Interface design

The interface of a system will influence whether a student perceives the environment as an enabler of or a barrier to online examinations. Abdel Karim and Shukur (2016) summarized the potential interface design features that emerged from a systematic review of the literature on this topic, as shown in Table 2. The incorporation of navigation tools has also been identified by students and staff as an essential design feature (Rios & Liu, 2017), as has auto-save functionality (Pagram et al., 2018).

Table 2. Potential interface design features (Abdel Karim & Shukur, 2016).

3.9. Technology issues

None of the studies that included technological problems in their design reported any major issues (Böhmer et al., 2018; Matthíasdóttir & Arnalds, 2016; Schmidt et al., 2009). One study stated that 5 percent of students reported some problem, ranging from a slow system to the system not working well with the computer operating system; however, the authors stated that no technical problems preventing completion of the examination were reported (Matthíasdóttir & Arnalds, 2016). In a separate study, students reported that they would prefer to use university technology to complete the examination, due to distrust that the system would work with their home computer or laptop operating system, or fear of losing data during the examination (Pagram et al., 2018). While that study did not report any problems loading the system on desktop machines, some student laptops from workplaces had firewalls and so had to load the system from a USB drive.

4. Discussion

This systematic literature review sought to assess the current state of the literature concerning online examinations and their equivalents. For most students, online learning environments created a system more supportive of their wellbeing, personal lives, and learning performance. Staff preferred online examinations for their workload implications and ease of completion, and a basic evaluation of print-based examination logistics could identify substantial ongoing cost savings. Not all staff and students preferred the idea of online test environments, yet studies that considered age and gender identified only negligible differences (Rios & Liu, 2017).

While the literature on online examinations is growing, there is still a dearth of discussion at the pedagogical and governance levels. Our review, and our new familiarity with these papers, leads us to point researchers in two principal directions: accreditation and authenticity. We acknowledge that there are many possible pathways to consider, with reference to the consistency of application, the validity and reliability of online examinations, and whether online examinations enable better measurement and greater student success. There are also opportunities to synthesize the online examination literature with other innovative digital pedagogical devices: for example, immersive learning environments (Herrington et al., 2007), mobile technologies (Jahnke & Liebscher, 2020), social media (Giannikas, 2020), and web 2.0 technologies (Bennett et al., 2012). The literature examined acknowledges key elements of the underlying needs for online examinations from student, academic, and technical perspectives. This includes the need for online examinations to be accessible, to distinguish a true pass from a true fail, to be secure, to minimize opportunities for cheating, to accurately authenticate the student, to reduce marking time, and to be designed to remain agile in the event of software or technological failure.

We now turn to areas of need in future research, focusing on accreditation and authenticity over these alternatives, given there is a real need for more research before knowledge on the latter pathways can be synthesized.

4.1. The accreditation question

The influence of external accreditation bodies was named frequently and ominously among the sample group, but lacked clarity surrounding exact parameters and expectations. Rios and Liu (2017, p. 231) identified that a specific measure was used "for accreditation purposes". Hylton et al. (2016, p. 54) specified that the US Department of Education requires that "appropriate procedures or technology are implemented" to authenticate distance students. Gehringer and Peddycord (2013) empirically found that online/open-web examinations provided more significant data for accreditation. Underlying university decisions to use face-to-face invigilated examination settings is the need to enable authentication of learning, a requirement of many governing bodies globally. The continual refinement of rules has enabled a degree of assurance that students are who they say they are.

Nevertheless, sophisticated networks have been established globally to support direct student cheating, from completing quick assessments and supplying calculators with hidden search-engine capability through to full completion of a course, including attendance at on-campus invigilated examinations. The authentication process in invigilated examinations does not typically account for distance students who hold a forged student identification card that enables a contract service to complete their examinations. Under the requirement to assure authentication of learning, invigilated examinations will require revision to meet contemporary environments. The inclusion of a broader range of big data, from keystroke patterns and linguistic analysis to whole-of-student analytics over a student lifecycle, is necessary to identify areas of risk from the institutional perspective. Where a student's typing or sentence structure differs significantly from their norm, review is warranted.

An experimental study on the detection of contract cheating in a psychology unit found teachers could detect cheating 62 percent of the time (Dawson & Sutherland-Smith, 2018). Automated algorithms could be used to support the pre-identification of such cases, given lecturers and professors are unlikely to be explicitly coding for cheating propensity when grading many hundreds of papers on the same topic. Future scholars should consider the innate differences that exist among test-taking behaviors that could be codified to create pattern-recognition software; a minimal sketch of this idea follows. Even in traditional invigilated examinations, linguistic and handwriting evaluations could be used to identify cheating.
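The sketch below illustrates, under simplistic assumptions, how such pre-identification might flag a submission whose writing style departs from a student's own baseline; the two features (mean sentence length and mean word length), the z-score threshold, and the function names are hypothetical, and a flag would only ever prompt human review, not a finding of misconduct.

```python
import re
from statistics import mean, pstdev

def style_features(text: str) -> tuple:
    """Very coarse stylometric features: mean sentence length (in words)
    and mean word length (in characters)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    sent_len = mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences)
    word_len = mean(len(w) for w in words)
    return sent_len, word_len

def flag_for_review(previous_texts, new_text, z_threshold=2.0):
    """Flag the new submission if either feature sits more than
    `z_threshold` standard deviations from the student's own baseline."""
    history = [style_features(t) for t in previous_texts]
    new = style_features(new_text)
    for i in range(2):
        baseline = [h[i] for h in history]
        sd = pstdev(baseline) or 1e-9  # avoid division by zero
        if abs(new[i] - mean(baseline)) / sd > z_threshold:
            return True
    return False

# Toy usage: two prior essays and a markedly different new submission.
prior = ["Short sentences. Simple words here.", "Also short. Plain words again."]
new = ("This submission contains remarkably elaborate, extraordinarily protracted "
       "sentences exhibiting conspicuously sophisticated vocabulary throughout.")
print(flag_for_review(prior, new))  # True for this toy data
```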

4.2. Authentic assessments and examinations

The literature identified in the sample discussed, with limited depth, the role of authentic assessment in examinations. The evolution of pedagogy and teaching principles (e.g. constructive alignment; Biggs, 1996) has paved the way for revised approaches to assessment and student learning. In the case of invigilated examinations, universities have been far slower to progress innovative solutions despite growing evidence that students prefer the flexibility and opportunities afforded by digitalizing exams. University commitments to the development of authentic assessment environments will require a radical revision of current examination practice to incorporate real-life learning processes and unstructured problem-solving (Williams & Wong, 2009). While traditional examinations may be shaped by financial efficacy, accreditation, and authentication pressures, there are upward pressures from student demand, student success, and student wellbeing to create more authentic learning opportunities.

The online examination setting offers greater connectivity to the kinds of environments graduates will be expected to engage with on a regular basis. The development of time management skills to plan the completion of a fixed-time examination is reflected in the business student's need to pitch and present to corporate stakeholders at certain times of the day, or a dentist maintaining a specific time allotment for the extraction of a tooth. The completion of a self-regulated task online with tangible performance outcomes is reflected in many roles, from lawyers' briefs on time-sensitive court cases to high school teachers' completion of student reports at the end of a calendar year. Future practitioner implementation and evaluation should focus on embedding authenticity into the examination setting, and future researchers should seek to better understand the parameters by which online examinations can create authentic learning experiences for students. In some cases, the inclusion of examinations may not be appropriate; in these cases, they should be progressively extracted from the curriculum.

4.3. Where to next?

As institutions begin to provide greater learning flexibility to students through digital and blended offerings, there is a scholarly need to consider the efficacy of the examination environments associated with these settings. Home computers and high-speed internet are becoming commonplace (Rainie & Horrigan, 2005), although such an assumption has implications for student equity. As Warschauer (2007, p. 41) puts it, "the future of learning is digital". Our task as educators will be to understand how we can create high-impact learning opportunities while responding to an era of digitalization. Research considering digital fluency in students will be pivotal (Crawford & Butler-Henderson, 2020). Important, too, is the scholarly imperative to examine the implementation barriers and successes associated with online examinations in higher education institutions, given the lack of clear cross-institutional case studies. There is also a symbiotic question that requires addressing by scholars in our field, beginning with understanding how online examinations can enable higher education, and likewise how higher education can shape and inform the implementation and delivery of online examinations.

4.4. Limitations

This study adopted a rigorous PRISMA method for the preliminary identification of papers for inclusion, the MMAT protocol for assessing the quality of papers, and an inductive thematic analysis for analyzing the papers included. These processes respond directly to limitations of subjectivity and provide assurance of the breadth and depth of literature covered. However, the systematic literature review method limits the papers included to those captured by the search criteria used. While we opted for a broad set of terms, it is possible we missed papers that would have been identified through other manual and critical identification processes. The lack of published research provided a substantial opportunity to develop a systematic literature review to summarize the state of the evidence, but the availability of data limits each claim. A meta-analysis of quantitative research in this area would be complicated by the lack of replication. Indeed, our ability to unpack which institutions currently use online examinations (and variants thereof) relied on scholars publishing on such implementations, many of whom have not. The findings of this systematic literature review are also limited by the lack of replication in this infant field. The systematic literature review was, in our opinion, the most appropriate method to summarize the current state of the literature despite the above limitations, and provides a strong foundation for an evidence-based future of online examinations. We also acknowledge the deep connection that this research may have to the contemporary COVID-19 climate in higher education, with many universities opting for online forms of examinations to support physically distanced education and emergency remote teaching. There were 138 publications on broad learning and teaching topics during the first half of 2020 (Butler-Henderson et al., 2020). Future research may consider how this has changed or influenced the nature of rapid innovation in online examinations.

5. Conclusion

This systematic literature review considered the contemporary literature on online examinations and their equivalents. We discussed student, staff, and technological research as identified in our sample. The dominant focus of the literature is still oriented towards preliminary evaluations of implementation: what processes changed at a technological level, and how students and staff rated their preferences. There were some early attempts to explore the effect of online examinations on student wellbeing and student performance, along with how the changes affect what staff are able to achieve.

Higher education needs this succinct summary of the literature on online examinations to understand the barriers and how they can be overcome, encouraging greater uptake of online examinations in tertiary education. One of the largest barriers is the perception of online examinations. Once students have experienced online examinations, there is a preference for this format due to its ease of use. The literature reported that student performance showed no significant difference in final examination scores across online and traditional examination modalities. Student anxiety decreased once students had used the online examination software. This information needs to be provided to students to change their perceptions and decrease anxiety when implementing an online examination system. Similarly, the information summarized in this paper needs to be provided to staff, such as the data related to cheating, the reliability of the technology, ease of use, and the reduction in time for establishing and marking examinations. When selecting a system, institutions should seek one that includes biometrics with a high level of precision, such as user authentication, and movement, sound, and keystroke monitoring (reporting deviations so the recording can be reviewed). These features reduce the need for online examinations to be invigilated. Other system features should include locking the system or browser, cloud-based technology so local updates are not required, and an interface design that makes using the online examination intuitive. Institutions should also consider how they will address technological failures and digital disparities, such as digital literacy and access to technology.

We recognize the need for substantially more evidence surrounding the post-implementation stages of online examinations. The current use of online examinations across disciplines, institutions, and countries needs to be examined to understand the successes and gaps. Beyond questions of 'do students prefer online or on-campus exams', serious questions of how student mental wellbeing, employability, and achievement of learning outcomes can be improved as a result of an online examination pedagogy are critical. In conjunction is the need to break down the facets and types of digitally enhanced examinations (e.g. online, e-examination, BYOD examinations, and similar) and compare each of these for their respective efficacy in enabling student success against institutional implications. While this paper was only able to capture the literature that exists, we believe the next stage of the literature needs to consider broader implications than immediate student perceptions, towards the achievement of institutional strategic imperatives that may include student wellbeing, student success, student retention, financial viability, staff enrichment, and student employability.

Author statement

Both authors, Kerryn Butler-Henderson and Joseph Crawford, contributed to the design of this study, literature searches, data abstraction and cleaning, data analysis, and the development of this manuscript. All contributions were equal.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

  • Abdel Karim N., Shukur Z. Proposed features of an online examination interface design and its optimal values. Computers in Human Behavior. 2016;64:414–422. doi: 10.1016/j.chb.2016.07.013.
  • AbuMansour H. 2017 IEEE/ACS 14th international conference on computer systems and applications (AICCSA). 2017. Proposed bio-authentication system for question bank in learning management systems; pp. 489–494.
  • Aisyah S., Bandung Y., Subekti L.B. 2018 international conference on information technology systems and innovation (ICITSI). 2018. Development of continuous authentication system on android-based online exam application; pp. 171–176.
  • Al-Hakeem M.S., Abdulrahman M.S. Developing a new e-exam platform to enhance the university academic examinations: The case of Lebanese French University. International Journal of Modern Education and Computer Science. 2017;9(5):9. doi: 10.5815/ijmecs.2017.05.02.
  • Alzu'bi M. Proceedings of conference of the international journal of arts & sciences. 2015. The effect of using electronic exams on students' achievement and test takers' motivation in an English 101 course; pp. 207–215.
  • Amigud A., Dawson P. The law and the outlaw: Is legal prohibition a viable solution to the contract cheating problem? Assessment & Evaluation in Higher Education. 2020;45(1):98–108. doi: 10.1080/02602938.2019.1612851.
  • Anderson H.M., Cain J., Bird E. Online course evaluations: Review of literature and a pilot study. American Journal of Pharmaceutical Education. 2005;69(1):34–43. doi: 10.5688/aj690105.
  • Ardid M., Gómez-Tejedor J.A., Meseguer-Dueñas J.M., Riera J., Vidaurre A. Online exams for blended assessment. Study of different application methodologies. Computers & Education. 2015;81:296–303. doi: 10.1016/j.compedu.2014.10.010.
  • Attia M. Postgraduate students' perceptions toward online assessment: The case of the faculty of education, Umm Al-Qura University. In: Wiseman A., Alromi N., Alshumrani S., editors. Education for a knowledge society in Arabian Gulf countries. Emerald Group Publishing Limited; Bingley, United Kingdom: 2014. pp. 151–173.
  • Bennett S., Bishop A., Dalgarno B., Waycott J., Kennedy G. Implementing web 2.0 technologies in higher education: A collective case study. Computers & Education. 2012;59(2):524–534.
  • Biggs J. Enhancing teaching through constructive alignment. Higher Education. 1996;32(3):347–364. doi: 10.1007/bf00138871.
  • Böhmer C., Feldmann N., Ibsen M. 2018 IEEE global engineering education conference (EDUCON). 2018. E-exams in engineering education—online testing of engineering competencies: Experiences and lessons learned; pp. 571–576.
  • Botezatu M., Hult H., Tessma M.K., Fors U.G. Virtual patient simulation for learning and assessment: Superior results in comparison with regular course exams. Medical Teacher. 2010;32(10):845–850. doi: 10.3109/01421591003695287.
  • Braun V., Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology. 2006;3(2):77–101. doi: 10.1191/1478088706qp063oa.
  • Butler-Henderson K., Crawford J., Rudolph J., Lalani K., Sabu K.M. COVID-19 in Higher Education Literature Database (CHELD V1): An open access systematic literature review database with coding rules. Journal of Applied Learning and Teaching. 2020;3(2). doi: 10.37074/jalt.2020.3.2.11. Advanced online publication.
  • Butt A. Quantification of influences on student perceptions of group work. Journal of University Teaching and Learning Practice. 2018;15(5).
  • Chao K.J., Hung I.C., Chen N.S. On the design of online synchronous assessments in a synchronous cyber classroom. Journal of Computer Assisted Learning. 2012;28(4):379–395. doi: 10.1111/j.1365-2729.2011.00463.x.
  • Chebrolu K., Raman B., Dommeti V.C., Boddu A.V., Zacharia K., Babu A., Chandan P. Proceedings of the 2017 ACM SIGCSE technical symposium on computer science education. 2017. SAFE: Smart authenticated fast exams for student evaluation in classrooms; pp. 117–122.
  • Chen Q. Proceedings of ACM turing celebration conference - China. 2018. An application of online exam in discrete mathematics course; pp. 91–95.
  • Chytrý V., Nováková A., Rícan J., Simonová I. 2018 international symposium on educational technology (ISET). 2018. Comparative analysis of online and printed form of testing in scientific reasoning and metacognitive monitoring; pp. 13–17.
  • Crawford J. University of Tasmania, Australia: Honours dissertation; 2015. Authentic leadership in student leaders: An empirical study in an Australian university.
  • Crawford J., Butler-Henderson K. Digitally empowered workers and authentic leaders: The capabilities required for digital services. In: Sandhu K., editor. Leadership, management, and adoption techniques for digital service innovation. IGI Global; Hershey, Pennsylvania: 2020. pp. 103–124.
  • Crawford J., Butler-Henderson K., Rudolph J., Malkawi B., Glowatz M., Burton R., Magni P., Lam S. COVID-19: 20 countries' higher education intra-period digital pedagogy responses. Journal of Applied Learning and Teaching. 2020;3(1):9–28. doi: 10.37074/jalt.2020.3.1.7.
  • Creswell J., Miller D. Determining validity in qualitative inquiry. Theory into Practice. 2000;39(3):124–130. doi: 10.1207/s15430421tip3903_2.
  • Daffin L., Jr., Jones A. Comparing student performance on proctored and non-proctored exams in online psychology courses. Online Learning. 2018;22(1):131–145. doi: 10.24059/olj.v22i1.1079.
  • Dawson P. Five ways to hack and cheat with bring-your-own-device electronic examinations. British Journal of Educational Technology. 2016;47(4):592–600. doi: 10.1111/bjet.12246.
  • Dawson P., Sutherland-Smith W. Can markers detect contract cheating? Results from a pilot study. Assessment & Evaluation in Higher Education. 2018;43(2):286–293. doi: 10.1080/02602938.2017.1336746.
  • Dawson P., Sutherland-Smith W. Can training improve marker accuracy at detecting contract cheating? A multi-disciplinary pre-post study. Assessment & Evaluation in Higher Education. 2019;44(5):715–725. doi: 10.1080/02602938.2018.1531109.
  • Eagly A., Sczesny S. Stereotypes about women, men, and leaders: Have times changed? In: Barreto M., Ryan M.K., Schmitt M.T., editors. Psychology of women book series. The glass ceiling in the 21st century: Understanding barriers to gender equality. American Psychological Association; 2009. pp. 21–47.
  • Ellis S., Barber J. Expanding and personalizing feedback in online assessment: A case study in a school of pharmacy. Practitioner Research in Higher Education. 2016;10(1):121–129.
  • Fluck A. An international review of eExam technologies and impact. Computers & Education. 2019;132:1–15. doi: 10.1016/j.compedu.2018.12.008.
  • Fluck A., Adebayo O.S., Abdulhamid S.I.M. Secure e-examination systems compared: Case studies from two countries. Journal of Information Technology Education: Innovations in Practice. 2017;16:107–125. doi: 10.28945/3705.
  • Fluck A., Pullen D., Harper C. Case study of a computer based examination system. Australasian Journal of Educational Technology. 2009;25(4):509–533. doi: 10.14742/ajet.1126.
  • Gehringer E., Peddycord B., III. Experience with online and open-web exams. Journal of Instructional Research. 2013;2:10–18. doi: 10.9743/jir.2013.2.12.
  • Giannikas C. Facebook in tertiary education: The impact of social media in e-learning. Journal of University Teaching and Learning Practice. 2020;17(1):3.
  • Gold S.S., Mozes-Carmel A. A comparison of online vs. proctored final exams in online classes. Journal of Educational Technology. 2009;6(1):76–81. doi: 10.26634/jet.6.1.212.
  • Gross J., Torres V., Zerquera D. Financial aid and attainment among students in a state with changing demographics. Research in Higher Education. 2013;54(4):383–406. doi: 10.1007/s11162-012-9276-1.
  • Guillén-Gámez F.D., García-Magariño I., Bravo J., Plaza I. Exploring the influence of facial verification software on student academic performance in online learning environments. International Journal of Engineering Education. 2015;31(6A):1622–1628.
  • Hainline L., Gaines M., Feather C.L., Padilla E., Terry E. Changing students, faculty, and institutions in the twenty-first century. Peer Review. 2010;12(3):7–10.
  • Hearn Moore P., Head J.D., Griffin R.B. Impeding students' efforts to cheat in online classes. Journal of Learning in Higher Education. 2017;13(1):9–23.
  • Herrington J., Reeves T.C., Oliver R. Immersive learning technologies: Realism and online authentic learning. Journal of Computing in Higher Education. 2007;19(1):80–99.
  • Hong Q.N., Fàbregues S., Bartlett G., Boardman F., Cargo M., Dagenais P., Gagnon M.P., Griffiths F., Nicolau B., O'Cathain A., Rousseau M.C., Vedel I., Pluye P. The mixed methods appraisal tool (MMAT) version 2018 for information professionals and researchers. Education for Information. 2018;34(4):285–291. doi: 10.3233/EFI-180221.
  • Hylton K., Levy Y., Dringus L.P. Utilizing webcam-based proctoring to deter misconduct in online exams. Computers & Education. 2016;92:53–63. doi: 10.1016/j.compedu.2015.10.002.
  • Jahnke I., Liebscher J. Three types of integrated course designs for using mobile technologies to support creativity in higher education. Computers & Education. 2020;146. doi: 10.1016/j.compedu.2019.103782. Advanced online publication.
  • Johnson C. 2009. History of New York state regents exams. Unpublished manuscript.
  • Jordan A. College student cheating: The role of motivation, perceived norms, attitudes, and knowledge of institutional policy. Ethics & Behavior. 2001;11(3):233–247. doi: 10.1207/s15327019eb1103_3.
  • Karp A. Exams in algebra in Russia: Toward a history of high stakes testing. International Journal for the History of Mathematics Education. 2007;2(1):39–57.
  • Kemp D. Australian Government Printing Service; Canberra: 1999. Knowledge and innovation: A policy statement on research and research training.
  • Kolagari S., Modanloo M., Rahmati R., Sabzi Z., Ataee A.J. The effect of computer-based tests on nursing students' test anxiety: A quasi-experimental study. Acta Informatica Medica. 2018;26(2):115. doi: 10.5455/aim.2018.26.115-118.
  • Kolski T., Weible J. Examining the relationship between student test anxiety and webcam based exam proctoring. Online Journal of Distance Learning Administration. 2018;21(3):1–15.
  • Kumar A. 2014 IEEE Frontiers in Education Conference (FIE) proceedings. 2014. Test anxiety and online testing: A study; pp. 1–6.
  • Li X., Chang K.M., Yuan Y., Hauptmann A. Proceedings of the 18th ACM conference on computer supported cooperative work & social computing. 2015. Massive open online proctor: Protecting the credibility of MOOCs certificates; pp. 1129–1137.
  • Lincoln Y., Guba E. Sage Publications; California: 1985. Naturalistic inquiry.
  • Margaryan A., Littlejohn A., Vojt G. Are digital natives a myth or reality? University students' use of digital technologies. Computers & Education. 2011;56(2):429–440. doi: 10.1016/j.compedu.2010.09.004.
  • Matthíasdóttir Á., Arnalds H. Proceedings of the 17th international conference on computer systems and technologies 2016. 2016. E-assessment: Students' point of view; pp. 369–374.
  • Mitra S., Gofman M. Proceedings of the twenty-second Americas conference on information systems (28). 2016. Towards greater integrity in online exams.
  • Mohanna K., Patel A. 2015 fifth international conference on e-learning. 2015. Overview of open book-open web exam over blackboard under e-learning system; pp. 396–402.
  • Moher D., Liberati A., Tetzlaff J., Altman D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine. 2009;151(4):264–269. doi: 10.7326/0003-4819-151-4-200908180-00135.
  • Mueller R.G., Colley L.M. An evaluation of the impact of end-of-course exams and ACT-QualityCore on US history instruction in a Kentucky high school. Journal of Social Studies Research. 2015;39(2):95–106. doi: 10.1016/j.jssr.2014.07.002.
  • Nguyen H., Henderson A. Can the reading load be engaging? Connecting the instrumental, critical and aesthetic in academic reading for student learning. Journal of University Teaching and Learning Practice. 2020;17(2):6.
  • NYSED. 2012. History of regent examinations: 1865–1987. Office of State Assessment. http://www.p12.nysed.gov/assessment/hsgen/archive/rehistory.htm
  • Oz H., Ozturan T. Computer-based and paper-based testing: Does the test administration mode influence the reliability and validity of achievement tests? Journal of Language and Linguistic Studies. 2018;14(1):67.
  • Pagram J., Cooper M., Jin H., Campbell A. Tales from the exam room: Trialing an e-exam system for computer education and design and technology students. Education Sciences. 2018;8(4):188. doi: 10.3390/educsci8040188.
  • Park S. Proceedings of the 21st world multi-conference on systemics, cybernetics and informatics (WMSCI 2017). 2017. Online exams as a formative learning tool in health science education; pp. 281–282.
  • Patel A.A., Amanullah M., Mohanna K., Afaq S. Third international conference on e-technologies and networks for development (ICeND2014). 2014. E-exams under e-learning system: Evaluation of onscreen distraction by first year medical students in relation to on-paper exams; pp. 116–126.
  • Petrović J., Vitas D., Pale P. 2017 international symposium ELMAR. 2017. Experiences with supervised vs. unsupervised online knowledge assessments in formal education; pp. 255–258.
  • Rainie L., Horrigan J. Pew Internet & American Life Project; Washington, DC: 2005. A decade of adoption: How the internet has woven itself into American life.
  • Reiko Y. University reform in the post-massification era in Japan: Analysis of government education policy for the 21st century. Higher Education Policy. 2001;14(4):277–291. doi: 10.1016/s0952-8733(01)00022-8.
  • Rettinger D.A., Kramer Y. Situational and personal causes of student cheating. Research in Higher Education. 2009;50(3):293–313. doi: 10.1007/s11162-008-9116-5.
  • Rios J.A., Liu O.L. Online proctored versus unproctored low-stakes internet test administration: Is there differential test-taking behavior and performance? American Journal of Distance Education. 2017;31(4):226–241. doi: 10.1080/08923647.2017.1258628.
  • Rodchua S., Yiadom-Boakye G., Woolsey R. Student verification system for online assessments: Bolstering quality and integrity of distance learning. Journal of Industrial Technology. 2011;27(3).
  • Schmidt S.M., Ralph D.L., Buskirk B. Utilizing online exams: A case study. Journal of College Teaching & Learning (TLC). 2009;6(8). doi: 10.19030/tlc.v6i8.1108.
  • Schwalb S.J., Sedlacek W.E. Have college students' attitudes toward older people changed? Journal of College Student Development. 1990;31(2):125–132.
  • Seow T., Soong S. Proceedings of the Australasian society for computers in learning in tertiary education, Dunedin. 2014. Students' perceptions of BYOD open-book examinations in a large class: A pilot study; pp. 604–608.
  • Sheppard S. An informal history of how law schools evaluate students, with a predictable emphasis on law school final exams. UMKC Law Review. 1996;65:657.
  • Sindre G., Vegendla A. NIK: Norsk Informatikkonferanse. 2015, November. E-exams and exam process improvement.
  • Steel A., Moses L.B., Laurens J., Brady C. Use of e-exams in high stakes law school examinations: Student and staff reactions. Legal Education Review. 2019;29(1):1.
  • Stowell J.R., Bennett D. Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research. 2010;42(2):161–171. doi: 10.2190/ec.42.2.b.
  • Sullivan D.P. An integrated approach to preempt cheating on asynchronous, objective, online assessments in graduate business classes. Online Learning. 2016;20(3):195–209. doi: 10.24059/olj.v20i3.650.
  • Turner J.L., Dankoski M.E. Objective structured clinical exams: A critical review. Family Medicine. 2008;40(8):574–578.
  • US National Library of Medicine. 2019. Medical subject headings. https://www.nlm.nih.gov/mesh/meshhome.html
  • Vision Australia. 2014. Online and print inclusive design and legibility considerations. https://www.visionaustralia.org/services/digital-access/blog/12-03-2014/online-and-print-inclusive-design-and-legibility-considerations
  • Warschauer M. The paradoxical future of digital learning. Learning Inquiry. 2007;1(1):41–49. doi: 10.1007/s11519-007-0001-5.
  • Wibowo S., Grandhi S., Chugh R., Sawir E. A pilot study of an electronic exam system at an Australian university. Journal of Educational Technology Systems. 2016;45(1):5–33. doi: 10.1177/0047239516646746.
  • Williams J.B., Wong A. The efficacy of final examinations: A comparative study of closed-book, invigilated exams and open-book, open-web exams. British Journal of Educational Technology. 2009;40(2):227–236. doi: 10.1111/j.1467-8535.2008.00929.x.
  • Wright T.A. Distinguished Scholar Invited Essay: Reflections on the role of character in business education and student leadership development. Journal of Leadership & Organizational Studies. 2015;22(3):253–264. doi: 10.1177/1548051815578950.
  • Yong-Sheng Z., Xiu-Mei F., Ai-Qin B. 2015 7th international conference on information technology in medicine and education (ITME). 2015. The research and design of online examination system; pp. 687–691.

ORIGINAL RESEARCH article

The impact of online reviews on consumers’ purchasing decisions: evidence from an eye-tracking study.

Tao Chen

  • 1 School of Business, Ningbo University, Ningbo, China
  • 2 School of Business, Western Sydney University, Penrith, NSW, Australia

This study investigated the impact of online product reviews on consumers' purchasing decisions by using eye-tracking. The research methodology involved (i) development of a conceptual framework of online product reviews and purchasing intention through the moderating role of gender and visual attention to comments, and (ii) empirical investigation through region of interest (ROI) analysis of consumers' fixations during the purchase decision process and behavioral analysis. The results showed that consumers' attention to negative comments was significantly greater than that to positive comments, especially for female consumers. Furthermore, the study identified a significant correlation between the visual browsing behavior of consumers and their purchase intention. It also found that consumers were not able to identify false comments. The current study provides a deep understanding of the underlying mechanism of how online reviews influence shopping behavior, reveals for the first time the role of gender in this effect, and explains it from the perspective of attentional bias, which is essential for the theory of online consumer behavior. Specifically, the effect of consumers' attention to negative comments appears to be moderated by gender, with female consumers' attention to negative comments being significantly greater than to positive ones. These findings suggest that practitioners need to pay particular attention to negative comments and resolve them promptly through the customization of product/service information, taking into consideration consumer characteristics, including gender.

Introduction

E-commerce has grown substantially over the past years and has become increasingly important in our daily life, especially under the recent influence of COVID-19 (Hasanat et al., 2020). In online shopping, consumers are increasingly inclined to obtain product information from reviews. Compared with the official product information provided by sellers, reviews are provided by other consumers who have already purchased the product via online shopping websites (Baek et al., 2012). Meanwhile, there is also an increasing trend for consumers to share their shopping experiences on network platforms (Floh et al., 2013). In response to these trends, a large number of studies (Floh et al., 2013; Lackermair et al., 2013; Kang et al., 2020; Chen and Ku, 2021) have investigated the effects of online reviews on purchasing intention. These studies have yielded strong evidence that the valence and intensity of online reviews affect purchasing intention. Lackermair et al. (2013), for example, showed that reviews and ratings are an important source of information for consumers. Similarly, through investigating the effects of review source and product type, Bae and Lee (2011) concluded that a review from an online community is the most credible for consumers seeking information about an established product. Since reviews are comments from consumers' perspectives and often describe their experience using the product, it is easier for other consumers to accept them, thus assisting their decision-making process (Mudambi and Schuff, 2010).

A survey conducted by Zhong-Gang et al. (2015) reveals that nearly 60% of consumers browse online product reviews at least once a week, and 93% of them believe that these online reviews help them improve the accuracy of purchase decisions, reduce the risk of loss, and affect their shopping choices. Among e-consumers engaged in commercial activities on B2B and B2C platforms, 82% read product reviews before making shopping choices, and 60% refer to comments every week. Research shows that 93% of consumers say online reviews affect their shopping choices, indicating that most consumers have the habit of reading online reviews regularly and rely on them for their purchasing decisions (Vimaladevi and Dhanabhakaym, 2012).

Consumer purchasing decisions made after reading online comments involve a psychological process combining vision and information processing. As evident from the literature, much of the research has focused on the outcomes and impact of online reviews on purchasing decisions but has shed less light on the underlying processes that influence customer perception (Sen and Lerman, 2007; Zhang et al., 2010; Racherla and Friske, 2013). While some studies have attempted to investigate the underlying processes, including how people are influenced by information around the product or service in online reviews, there is limited research on the psychological and information processing involved in purchasing decisions. The eye-tracking method has become popular for exploring and interpreting consumer decision-making behavior and cognitive processing (Wang and Minor, 2008). However, very limited attention has been paid, using the eye-tracking method, to how the emotional valence and content of comments, especially negative comments, influence consumers' final decisions, including a gender comparison in consumption, and to whether consumers are suspicious of false comments.

Thus, the main purpose of this research is to investigate the impact of online reviews on consumers’ purchasing decisions from the perspective of information processing, by employing the eye-tracking method. A comprehensive literature review on key themes, including online reviews, the impact of online reviews on purchasing decisions, and underlying processes such as the level and credibility of product review information and the processing speed/effectiveness that drive customer perceptions of online reviews, was used to identify current research gaps and establish the rationale for this research. This study simulated an online shopping scenario and conducted an eye movement experiment to capture how product reviews affect consumers’ purchasing behavior by collecting eye movement indicators and behavioral data, in order to determine whether the fixation dwell time and fixation count for negative comment areas are greater than those for positive comment areas and to what extent consumers are suspicious of false comments. Visual attention, measured by both fixation dwell time and fixation count, is considered as part of the moderating effect on the relationship between comment valence and purchase intention, and as the basis for examining the underlying processes.

The paper is organized as follows. The next section presents a literature review of relevant themes, including the role of online reviews and the application of eye movement experiments in online consumer decision research. Then, the hypotheses based on the relevant theories are presented. The research methodology, including data collection methods, is presented subsequently. This is followed by the presentation of data analysis, results, and discussion of key findings. Finally, academic and practical implications and directions for future research are discussed.

Literature Review

Online Product Review

Several studies have reported on the influence of online reviews, in particular on purchasing decisions, in recent times ( Zhang et al., 2014 ; Zhong-Gang et al., 2015 ; Ruiz-Mafe et al., 2018 ; Von Helversen et al., 2018 ; Guo et al., 2020 ; Kang et al., 2020 ; Wu et al., 2021 ). These studies have examined various aspects of online reviews and consumers’ behavior, including textual factors ( Ghose and Ipeirotis, 2010 ), the effect of the level of detail in a product review and the level of reviewer agreement with it on the credibility of a review, and consumers’ purchase intentions for search and experience products ( Jiménez and Mendoza, 2013 ). For example, by means of text mining, Ghose and Ipeirotis (2010) concluded that the use of product reviews is influenced by textual features such as subjectivity, informality, readability, and linguistic accuracy. Likewise, Boardman and Mccormick (2021) found that consumer attention and behavior differ across web pages throughout the shopping journey depending on the page’s content, function, and the consumer’s goal. Furthermore, Guo et al. (2020) showed that pleasant online customer reviews lead to a higher purchase likelihood than unpleasant ones. They also found that perceived credibility and perceived diagnosticity have a significant influence on purchase decisions, but only in the context of unpleasant online customer reviews. These studies suggest that online product reviews influence consumer behavior, but that the overall effect depends on many factors.

In addition, studies have considered broader online product information (OPI), comprising both online reviews and vendor-supplied product information (VSPI), and have reported on different attempts to understand the various ways in which OPI influences consumers. For example, Kang et al. (2020) showed that VSPI adoption affected online review adoption. More recently, Chen and Ku (2021) found that diversified online review websites act as accelerators for online impulsive buying. Furthermore, some studies have reported on other aspects of online product reviews, including the impact of online reviews on product satisfaction ( Changchit and Klaus, 2020 ), the relative effects of review credibility and review relevance on overall online product review impact ( Mumuni et al., 2020 ), the effects of reviewer gender, reputation, and emotion on the credibility of negative online product reviews ( Craciun and Moore, 2019 ), and the influence of vendor cues such as brand reputation on purchasing intention ( Kaur et al., 2017 ). Recently, an investigation into the impact of online review variance of new products on consumer adoption intentions showed that product newness and review variance interact to affect consumers’ adoption intentions ( Wu et al., 2021 ). In particular, indulgent consumers tend to prefer incrementally new products (INPs) with high-variance reviews, while restrained consumers are more likely to adopt really new products (RNPs) with low-variance reviews.

Emotion Valence of Online Product Review and Purchase Intention

Although numerous studies have investigated factors that may influence the effects of online reviews on consumer behavior, few studies have focused on consumers’ perceptions, emotions, and cognition, such as perceived review helpfulness, ease of understanding, and perceived cognitive effort. This is because these studies are mainly based on traditional self-report methods, such as questionnaires and interviews, which are not well equipped to measure implicit emotional and cognitive factors objectively and accurately ( Plassmann et al., 2015 ). However, emotional factors are also recognized as important in purchase intention. For example, a study on the usefulness of online film reviews showed that positive emotional tendencies, longer sentences, a greater mix of different emotional tendencies, and distinct expressions in reviews had a significant positive effect on the perceived usefulness of online comments ( Yuanyuan et al., 2009 ).

Yu et al. (2010) also demonstrated that the different emotional tendencies expressed in film reviews have a significant impact on actual box office revenue. This means that consumer reviews contain both positive and negative emotions. Generally, positive comments tend to prompt consumers to generate emotional trust, increase confidence and trust in the product, and have a strong persuasive effect. On the contrary, negative comments can reduce the generation of emotional trust and hinder consumers’ buying intentions ( Archak et al., 2010 ). This can be explained by the rational behavior hypothesis, which holds that consumers will avoid risk in shopping as much as possible. Hence, when negative comment information is presented, consumers tend to choose not to buy the product ( Mayzlin and Chevalier, 2003 ). Furthermore, consumers generally believe that negative information is more valuable than positive information when making a judgment ( Ahluwalia et al., 2000 ). For example, a one-star rating (criticism) tends to have a greater influence on consumers’ buying tendencies than a five-star rating (compliment), a phenomenon known as negative deviation.

Since consumers can access and process information quickly through various means and consumers’ emotions influence product evaluation and purchasing intention, this research set out to investigate to what extent and how the emotional valence of online product review would influence their purchase intention. Therefore, the following hypothesis was proposed:

H1 : For hedonic products, consumer purchase intention after viewing positive emotion reviews is higher than after viewing negative emotion ones; for utilitarian products, by contrast, negative comments are believed to be more useful than positive ones and to have a greater impact on consumers’ purchase intention.

It is important to investigate Hypothesis 1 (H1) even though it may seem obvious. Many online merchants pay more attention to products with negative comments and make relevant improvements to them, rather than to products with positive comments. Yet goods with positive comments promote online consumers’ purchase intention more than those with negative comments and will bring more profit to businesses.

Sen and Lerman (2007) found that, compared with the utilitarian case, readers of negative hedonic product reviews are more likely to attribute the negative opinions expressed to the reviewer’s internal (or non-product-related) reasons and, therefore, are less likely to find the negative reviews useful. However, in the utilitarian case, readers are more likely to attribute the reviewer’s negative opinions to external (or product-related) motivations and, therefore, find negative reviews more useful than positive reviews on average. Product type thus moderates the effect of review valence; therefore, Hypothesis 1 is framed for hedonic product types, such as fiction books.

Guo et al. (2020) found that pleasant online customer reviews lead to a higher purchase likelihood than unpleasant ones, which supports Hypothesis 1 from another perspective. The product selected in our experiment is a mobile phone, which is not only a utilitarian product but also a hedonic one: it can be used to make phone calls or to watch videos, depending on the user’s demands.

Eye-Tracking, Online Product Review, and Purchase Intention

The eye-tracking method is commonly used in cognitive psychology research. Many researchers are calling for the use of neurobiological, neurocognitive, and physiological approaches to advance information systems research ( Pavlou and Dimoka, 2010 ; Liu et al., 2011 ; Song et al., 2017 ). Several studies have explored consumers’ online behavior by using eye-tracking. For example, using the eye-tracking method, Luan et al. (2016) found that when searching for products, customers attend to attribute-based evaluations significantly longer than to experience-based evaluations, while there is no significant difference for experiential products. Moreover, their results indicated that eye-tracking indices, such as fixation dwell time, can intuitively reflect consumers’ search behavior when they attend to reviews. Also, Hong et al. (2017) confirmed that female consumers pay more attention to picture comments when they buy experience goods, whereas when they buy search products they focus more on pure text comments. When price and comment cues are consistent, consumers’ purchase rates significantly improve.

The use of the eye-tracking method to explore and interpret consumers’ decision-making behavior and cognitive processing is primarily based on the eye-mind hypothesis proposed by Just and Carpenter (1992) , who stated that when an individual is looking, he or she is currently perceiving, thinking about, or attending to something, and his or her cognitive processing can be identified by tracking eye movements. Several studies on consumers’ decision-making behavior have adopted the eye-tracking approach to quantify consumers’ visual attention from various perspectives, including determining how specific visual features of a shopping website influence consumers’ attitudes and reflect their cognitive processes ( Renshaw et al., 2004 ), exploring gender differences in visual attention and shopping attitudes ( Hwang and Lee, 2018 ), investigating how employing human brands affects consumers’ decision quality ( Chae and Lee, 2013 ), and examining how consumer attention and behavior differ depending on website content, functions, and consumers’ goals ( Boardman and McCormick, 2019 ). Measuring attention to the website and the time spent on each purchasing task across different product categories shows that shoppers attend to more areas of the website when exploring the website than when performing purchase tasks, and that the most complex and time-consuming task for shoppers is the assessment of purchase options ( Cortinas et al., 2019 ). Several studies have investigated fashion retail websites using the eye-tracking method and addressed various research questions, including how consumers interact with product presentation features and how consumers use smartphones for fashion shopping ( Tupikovskaja-Omovie and Tyler, 2021 ). Yet, these studies considered users without distinguishing user categories, particularly gender. Since this research explores consumers’ decision-making behavior and the effects of gender on visual attention, the eye-tracking approach was employed as part of the overall approach of this research project. Based on existing studies, it is plausible that consumers pay more attention to negative evaluations, experience cognitive conflict when contradictory false comments are presented, and are unable to judge good from bad ( Cui et al., 2012 ). Therefore, the following hypothesis was proposed:

H2 : Consumers’ purchasing intention associated with online reviews is moderated/influenced by the level of visual attention.

To test the above hypothesis, the following two hypotheses were derived, taking into consideration positive and negative review comments from H1, and visual attention associated with fixation dwell time and fixation count.

H2a : When consumers intend to purchase a product, fixation dwell time and fixation count for negative comment areas are greater than those for positive comment areas.

Furthermore, it was expected that when consumers browse fake comments, they would be suspicious and actively seek out relevant information to verify the authenticity of the comments, resulting in more visual attention. Therefore, H2b was proposed:

H2b : Fixation dwell time and fixation count for fake comments are greater than those for authentic comments.

When considering the effect of gender on individual information processing, some differences have been noted. For example, Meyers-Levy and Sternthal (1993) put forward the selectivity hypothesis, which implies that women gather all possible information, process it in an integrative manner, and make a comprehensive comparison before making a decision, while men tend to select only partial information to process and compare according to their existing knowledge—a heuristic and selective strategy. Furthermore, for online product reviews, it has also been reported that gender can easily lead consumers to different perceptions of the usefulness of online word-of-mouth. For example, Zhang et al. (2014) confirmed that mixed comments have a mediating effect on the relationship between affective trust and purchasing decisions, and that this effect is stronger in women. This means that men and women may process information differently when making purchasing decisions using online reviews. To test the above proposition, the following hypothesis was proposed:

H3 : Gender has a significant impact on fixation dwell time and fixation count in the area of interest (AOI). Male purchasing practices differ from those of female consumers: male consumers’ attention to positive comments is greater than that of female consumers, and they are more likely than female consumers to make purchase decisions easily.

Furthermore, according to the eye-mind hypothesis, eye movements can reflect people’s cognitive processes during decision-making ( Just and Carpenter, 1980 ). Moreover, neurocognitive studies have indicated that consumers’ cognitive processing can reflect their purchase decision-making strategy ( Rosa, 2015 ; Yang, 2015 ). Hence, differences in the degree of attention paid to comments of different polarities and to specific comment content can lead consumers to make different purchasing decisions. Based on the key aspects outlined and discussed above, the following hypothesis was proposed:

H4 : Attention to consumers’ comments is positively correlated with consumers’ purchasing intentions, and consumers differ, according to gender, in the comment content to which they attend.

Thus, the framework of the current study is shown in Figure 1 .

Figure 1 . Conceptual framework of the study.

Materials and Methods

The research adopted an experimental approach, using a simulated laboratory setting to collect experimental data from a selected set of participants with online shopping experience. The task setting was based on the shopping guidelines provided on Taobao.com , the most famous and frequently used C2C platform in China. Each experiment followed the guidelines provided and was carried out within a set time. Both behavioral and eye movement data were collected during the experiment.

Participants

A total of 40 healthy participants (20 males and 20 females) with online shopping experience were recruited for the experiment. The participants were screened to ensure normal or corrected-to-normal vision and no color blindness, poor color perception, or other eye diseases. All participants provided written consent before the experiment started. The study was approved by the Internal Review Board of the Academy of Neuroeconomics and Neuromanagement at Ningbo University and conducted in accordance with the Declaration of Helsinki ( World Medical Association, 2014 ).

Because search products are standardized and show small selection differences among individuals, they can be objectively evaluated and easily compared, which effectively controls the influence of individual preferences on the experimental results ( Huang et al., 2009 ). Therefore, this research focused on consumer electronics products, which are essential products in daily life, as the experimental stimulus material. Specifically, as shown in Figure 2 , a simulated shopping scenario was presented to participants, with the product presentation designed in the way products are shown on Taobao.com . Figure 2 includes two segments: one shows mobile phone information ( Figure 2A ) and the other shows comments ( Figure 2B ). The commodity description information in Figure 2A was collected from product introductions on Taobao.com , mainly presenting parameter information about the product, such as memory size, pixels, and screen size. There was little difference in these parameters, so quality was basically at the same level across the smartphones. Prices and brand information were hidden to ensure that reviews were the sole factor influencing consumer decision-making. The product review areas in Figure 2B are the AOIs, presented in a double-column layout. Each panel included 10 (positive or negative) reviews taken from real online shopping evaluations, amounting to a total of 20 reviews for each product. To eliminate the impact of the location of comments on the experimental results, the positions of the positive and negative comment areas were exchanged: 50% of the subjects had positive comments presented on the left and negative comments on the right, while the remaining 50% of the participants received the opposite setup.

Figure 2 . Commodity information and reviews. (A) Commodity information, (B) Commodity reviews. Screenshots of Alibaba shopfront reproduced with permission of Alibaba and Shenzhen Genuine Mobile Phone Store.

A total of 12,403 product reviews were crawled and extracted from the two most popular online shopping platforms in China ( Taobao.com and JD.com ) using GooSeeker (2015) , a web crawler tool. The retrieved reviews were then further processed. First, brand-related, price-related, transaction-related, and prestige-related content was removed from the comments. Then, the reviews were classified, in terms of appearance, memory, running speed, logistics, and so on, into two categories: positive reviews and negative reviews. Furthermore, the content of the reviews was refined to retain the original intention while meeting the requirements of the experiment. In short, reviews were modified to ensure brevity, comprehensibility, and equal length, so as to avoid causing cognitive difficulties or ambiguities in semantic understanding. In the end, 80 comments were selected for the experiment: 40 positive and 40 negative reviews (one of the negative comments was a fictitious comment, formulated for the needs of the experiment). To increase the number of trials and the accuracy of the statistical results, four sets of mobile phone products were set up, giving eight pairs of pictures in total.
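
The preprocessing step described above can be illustrated with a minimal sketch. This is not the authors’ actual pipeline: the exclusion patterns, cue words, and sample reviews below are illustrative assumptions (the real comments were in Chinese and were filtered, labelled, and refined manually).

```python
# Minimal sketch of the review preprocessing described above (illustrative only).
import re

# Content categories removed from the crawled comments, per the text above.
EXCLUDE_PATTERNS = [r"\bbrand\b", r"\bprice\b", r"\btransaction\b", r"\bprestige\b"]
# Hypothetical cue words used only to demonstrate a coarse polarity split.
POSITIVE_CUES = ["smooth", "clear", "fast", "satisfied"]
NEGATIVE_CUES = ["laggy", "overheats", "slow", "poor"]

def keep_review(text: str) -> bool:
    """Drop reviews mentioning brand, price, transaction, or prestige content."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in EXCLUDE_PATTERNS)

def polarity(text: str) -> str:
    """Crude cue-count polarity label; ambiguous reviews go back for manual checking."""
    lowered = text.lower()
    pos = sum(cue in lowered for cue in POSITIVE_CUES)
    neg = sum(cue in lowered for cue in NEGATIVE_CUES)
    return "positive" if pos > neg else "negative" if neg > pos else "ambiguous"

if __name__ == "__main__":
    crawled = [
        "Runs smooth and the screen is clear",
        "Gets laggy after two days and overheats",
        "Great price and the brand is famous",   # dropped: price/brand content
    ]
    kept = [r for r in crawled if keep_review(r)]
    print({r: polarity(r) for r in kept})
```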

Before the experiment started, subjects were asked to read the experimental guide, including an overview of the experiment, an introduction to the basic requirements and precautions of the test, and details of the two practice trials that were conducted. Once participants were familiar with the experimental scenario, the formal experiment was ready to begin. Participants were required to adjust their bodies to a comfortable sitting position. A nine-point calibration procedure was run before the experiment, and only participants with a deviation of less than 1 degree of visual angle could enter the formal eye movement experiment. In our eye-tracking experiment, whether a participant wore glasses was identified as a key issue: if the optical power of the participant’s glasses exceeds 200 degrees, the reflective effect of the lenses causes large errors in the eye tracker’s recordings. To ensure the accuracy of the data recorded by the eye tracker, the experimenter tested the power of each participant’s glasses before the experiment and ensured that it did not exceed 200 degrees. After drift correction, the formal experiment began. The following prompt was presented on the screen: “You will browse four similar mobile phone products; please make your purchase decision for each mobile phone.” Participants then had 8,000 ms to browse the product information. Next, they were allowed to look at the comments image for as long as required, after which they were asked to press any key on the keyboard and answer the question “Are you willing to buy this cell phone?”
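
The trial flow can be summarized in a short, plain-Python sketch. This is only a schematic of the procedure described above, not the authors’ experiment script (the study was implemented with SR Research Experiment Builder and an EyeLink 1000); display and response functions are mocked placeholders.

```python
# Schematic of the trial flow described above; stimulus display and responses
# are mocked with print/input placeholders rather than real presentation code.
import time

CALIBRATION_MAX_ERROR_DEG = 1.0   # nine-point calibration acceptance threshold
MAX_GLASSES_POWER_DEG = 200       # maximum spectacle power allowed for inclusion
PRODUCT_INFO_MS = 8000            # fixed viewing time for the product information

def eligible(calibration_error_deg: float, glasses_power_deg: int) -> bool:
    """Pre-experiment screening criteria taken from the text above."""
    return (calibration_error_deg < CALIBRATION_MAX_ERROR_DEG
            and glasses_power_deg <= MAX_GLASSES_POWER_DEG)

def show_image(name: str, duration_ms=None):
    print(f"[display] {name}")
    if duration_ms is not None:
        time.sleep(duration_ms / 1000)   # fixed-duration display

def run_trial(product_image: str, review_image: str) -> str:
    show_image(product_image, duration_ms=PRODUCT_INFO_MS)
    show_image(review_image)             # self-paced; ends on a key press
    input("Press any key when you have finished reading the comments... ")
    return input("Are you willing to buy this cell phone? (y/n) ")

if __name__ == "__main__":
    if eligible(calibration_error_deg=0.6, glasses_power_deg=150):
        decisions = [run_trial(f"phone_{i}_info.png", f"phone_{i}_reviews.png")
                     for i in range(1, 5)]   # four trials per participant
        print(decisions)
```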

In this experiment, the materials were displayed on a 17-inch monitor with a resolution of 1,024 × 768 pixels. Participants’ eye movements were tracked and recorded by an EyeLink 1000 desktop eye tracker, a precise and accurate video-based eye tracker that integrates with SR Research Experiment Builder, Data Viewer, and third-party software tools, at a sampling rate of 1,000 Hz ( Hwang and Lee, 2018 ). Data processing was conducted with the matching Data Viewer analysis tool.

The experimental flow of each trial is shown in Figure 3 . Every subject was required to complete four trials, with different mobile phone information and comment content randomly presented in each trial. After the experiment, a brief interview was conducted to learn about participants’ browsing behavior when they purchased the phone, and basic information was collected via a matching questionnaire. The whole experiment took about 15 min.

Figure 3 . Experimental flow diagram. Screenshots of Alibaba shopfront reproduced with permission of Alibaba and Shenzhen Genuine Mobile Phone Store.

Data Analysis

Key measures collected from the eye-tracking experiment included fixation dwell time and fixation count. An AOI is a focus area constructed according to the experimental purposes and needs, from which pertinent eye movement indicators are extracted. It guarantees the precision of the eye movement data and eliminates interference from other visual factors in the image. The product review areas are our AOIs, with positive comments (IA1) and negative comments (IA2) divided into two equal-sized rectangular areas.

Fixation indicates the information acquisition process, and tracking eye fixations is the most efficient way to capture how individuals take in information from the external environment ( Hwang and Lee, 2018 ). In this study, fixation dwell time and fixation count were used to indicate users’ cognitive activity and visual attention ( Jacob and Karn, 2003 ); they reflect the degree to which users dig into information and engage with a specific situation. Generally, a higher fixation count indicates that the individual is more interested in the target, which is reflected in the distribution of fixation points. Valuable and interesting comments attract users to pay more attention throughout the browsing process and to focus on the AOIs for much longer. Since these two dependent variables (fixation dwell time and fixation count) comprised our measurement of the browsing process, their combined analysis can effectively measure consumers’ reactions to different review contents.
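
As a minimal sketch of this step (the authors used the EyeLink Data Viewer for AOI analysis; the AOI coordinates and data fields below are illustrative assumptions), fixations can be assigned to the two rectangular comment areas and aggregated into dwell time and count per participant:

```python
# Sketch: assign fixations to the two rectangular comment AOIs and aggregate
# fixation dwell time and fixation count per participant (illustrative only).
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Fixation:
    participant: str
    x: float            # pixel coordinates on the 1,024 x 768 display
    y: float
    duration_ms: float

# Two equal-sized rectangles: IA1 = positive comments, IA2 = negative comments.
AOIS = {"IA1_positive": (0, 150, 512, 768), "IA2_negative": (512, 150, 1024, 768)}

def aoi_of(fix: Fixation):
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= fix.x < x1 and y0 <= fix.y < y1:
            return name
    return None          # fixation outside both comment areas

def aggregate(fixations):
    """Return {(participant, AOI): (dwell_time_ms, fixation_count)}."""
    totals = defaultdict(lambda: [0.0, 0])
    for f in fixations:
        aoi = aoi_of(f)
        if aoi is not None:
            totals[(f.participant, aoi)][0] += f.duration_ms
            totals[(f.participant, aoi)][1] += 1
    return {k: tuple(v) for k, v in totals.items()}

if __name__ == "__main__":
    demo = [Fixation("s01", 300, 400, 220), Fixation("s01", 700, 500, 310)]
    print(aggregate(demo))
```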

The findings are presented in the following sections: descriptive statistical analysis, analysis from the perspectives of gender and review type using ANOVA, correlation analysis of purchasing decisions, and qualitative analysis of observations.

Descriptive Statistical Analysis

Fixation dwell time and fixation count were extracted for each record. In this case, 160 valid data records were obtained from the 40 participants. Each participant generated four records, corresponding to the four combinations of the two conditions (positive and negative) and the two eye-tracking indices (fixation dwell time and fixation count); each record represented one comment area. Table 1 shows the corresponding means and standard deviations.
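
A summary of the kind shown in Table 1 can be produced with a short groupby sketch; the column names and values below are placeholders, not the study’s measurements.

```python
# Means and standard deviations by gender and review type, as in Table 1.
# The records below are toy placeholders, not the study's data.
import pandas as pd

records = pd.DataFrame({
    "participant":   ["s01", "s01", "s02", "s02", "s03", "s03", "s04", "s04"],
    "gender":        ["female", "female", "male", "male",
                      "female", "female", "male", "male"],
    "review_type":   ["positive", "negative"] * 4,
    "dwell_time_ms": [5200, 9800, 6100, 6900, 4800, 10100, 5700, 7300],
    "fix_count":     [18, 34, 21, 24, 16, 37, 20, 26],
})

table1 = (records
          .groupby(["gender", "review_type"])[["dwell_time_ms", "fix_count"]]
          .agg(["mean", "std"]))
print(table1)
```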

Table 1 . Results of mean and standard deviations.

It can be noted from the descriptive statistics that, for both fixation dwell time and fixation count, the mean for positive reviews was lower than that for negative ones, suggesting that subjects spent more time on, and had more interest in, negative reviews. This tendency was more obvious in female subjects, indicating a role of gender.

Fixation results can be reported using a heat map to provide a more intuitive understanding. In a heat map, fixation data are displayed as different colors, which reflect the degree of user fixation ( Wang et al., 2014 ): red represents the highest level of fixation, followed by yellow and then green, and areas without color indicate no fixations. Figure 4 suggests that participants spent more time and cognitive effort on negative reviews than on positive ones, as evidenced by the wider red areas over the negative reviews. However, to determine whether this difference is statistically significant, further inferential statistical analyses were required.
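
A fixation heat map of this kind can be rendered as a 2D histogram of fixation locations weighted by fixation duration. The sketch below uses random placeholder data, not the study’s recordings, and is only one plausible way to produce a Figure 4 style plot.

```python
# Illustrative fixation heat map: a 2D histogram of fixation positions
# weighted by fixation duration (placeholder data, not the study's recordings).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 1024, 500)        # fixation x positions (pixels)
y = rng.uniform(0, 768, 500)         # fixation y positions (pixels)
dur = rng.uniform(100, 600, 500)     # fixation durations (ms) used as weights

plt.hist2d(x, y, bins=(64, 48), weights=dur, cmap="jet")
plt.gca().invert_yaxis()             # screen coordinates: y increases downward
plt.colorbar(label="weighted fixation time (ms)")
plt.title("Fixation heat map (illustrative)")
plt.show()
```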

Figure 4 . Heat map of review picture.

Repeated Measures From Gender and Review Type Perspectives—Analysis of Variance

The two independent variables in this experiment were the emotional tendency of the review and gender. A preliminary ANOVA was performed on fixation dwell time and on fixation count, with gender (male vs. female) and review type (positive vs. negative) as the independent variables in both cases.

A significant main effect of review type was found for both fixation dwell time ( p 1  < 0.001) and fixation count ( p 2  < 0.001; see Table 2 ). However, no significant main effect of gender was identified for either fixation dwell time ( p 1  = 0.234) or fixation count ( p 2  = 0.805). These results indicate that there were significant differences in the eye movement indicators between the positive and negative comment areas, which confirms Hypothesis 2a. The interaction effect between gender and comment type was significant for both fixation dwell time ( p 1  = 0.002) and fixation count ( p 2  = 0.001). Therefore, a simple-effect analysis was carried out: the effects of comment type with gender fixed, and of gender with comment type fixed, on the two dependent variables (fixation dwell time and fixation count) were investigated, and the results are shown in Table 3 .
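
To make the analysis concrete, the sketch below fits an ordinary two-way factorial ANOVA (gender × review type) on simulated dwell-time data with statsmodels. It is an illustration under simplifying assumptions, not the authors’ analysis: the study used a repeated-measures design, for which a mixed or repeated-measures ANOVA would be the stricter analogue, and the data here are toy values.

```python
# Illustrative gender x review-type ANOVA on simulated dwell-time data
# (toy values, not the study's data; the real design was repeated-measures).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
base = {"positive": 6000.0, "negative": 9000.0}    # toy cell means (ms)
rows = []
for pid in range(40):                              # 40 simulated participants
    gender = "female" if pid < 20 else "male"
    for rtype in ("positive", "negative"):
        rows.append({"participant": pid, "gender": gender, "review_type": rtype,
                     "dwell_ms": base[rtype] + rng.normal(0, 800)})
df = pd.DataFrame(rows)

model = ols("dwell_ms ~ C(gender) * C(review_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))             # main effects and interaction
```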

Table 2 . Results of ANOVA analysis.

Table 3 . Results of simple-effect analysis.

When the subject was female, comment type had a significant main effect on both fixation dwell time ( p 1  < 0.001) and fixation count ( p 2  < 0.001), indicating that female users’ attention time and cognitive effort devoted to negative comments were greater than those devoted to positive comments. However, the main effect of comment type was not significant for men ( p 1  = 0.336 > 0.05, p 2  = 0.43 > 0.05), suggesting no difference in men’s attention to the two types of comments.

Similarly, when participants scanned positive reviews, gender had a significant main effect on both fixation dwell time and fixation count ( p 1  = 0.003 < 0.05, p 2  = 0.025 < 0.05), indicating that men focused longer on, and devoted deeper cognitive effort to, positive reviews than women. In addition, when browsing negative reviews, gender had a significant main effect on fixation count but not on fixation dwell time ( p 1  = 0.18 > 0.05, p 2  = 0.01 < 0.05), suggesting that to some extent men pay significantly less cognitive attention to negative reviews than women, which is consistent with the conclusion that men’s attention to positive comments is greater than women’s. Although the main effect of gender was not significant in the repeated-measures ANOVA ( p 1  = 0.234 > 0.05, p 2  = 0.805 > 0.05), there was an interaction effect with review type: for a specific type of comment, gender had significant influences, because the eye movement indices differed between men and women. Thus, gender plays a moderating role in the impact of comments on consumers’ purchasing behavior.

Correlation Analysis of Purchase Decision

Integrating the eye movement and behavioral data, we explored whether participants’ focus on positive or negative reviews was linked to their final purchasing decisions. Combined with the participants’ purchase decision results, the areas of the picture with long fixation dwell times, reflecting consumers’ concerns, were identified. The frequency statistics are shown in Table 4 .

Table 4 . Frequency statistics of purchasing decisions.

The correlation analysis between comment type and the decision data shows that users’ level of attention to positive and negative comments was significantly correlated with the purchase decision ( p  = 0.006 < 0.05). Thus, Hypothesis H4 is supported. As shown in Table 4 , in 114 records participants paid more attention to negative reviews, and in 70% of these records they chose not to buy the mobile phone. Likewise, among the 101 records in which participants chose not to buy, 80% involved subjects who paid more attention to negative comments, while more than 50% of the subjects who were more interested in positive reviews chose to buy the mobile phone. These experimental results are consistent with Hypothesis H1. They suggest that consumers’ purchasing decisions were based on the preliminary information they gathered and were concerned about, so customers’ final decisions can be deduced from their visual behavior. Thus, the eye movement experiment analysis in this paper has practical significance.
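
The association reported here can be checked with a chi-square test of independence on the 2 × 2 attention-by-decision table. The counts below are approximate values reconstructed from the percentages quoted above, not the exact Table 4 frequencies, so the sketch illustrates the test rather than reproducing the published statistic.

```python
# Chi-square test of independence between the attended review type and the
# buy / not-buy decision (approximate counts reconstructed from the text).
import numpy as np
from scipy.stats import chi2_contingency

#                      buy   not buy
table = np.array([[26,   20],     # more attention to positive reviews
                  [33,   81]])    # more attention to negative reviews (114 records)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```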

Furthermore, a significant correlation ( p  = 0.007 < 0.05) was found between the comment area attracting more interest and the purchase decision for women, while no significant correlation was found for men ( p  = 0.195 > 0.05). This finding is consistent with the previous conclusion that men’s attention to positive and negative comments does not differ significantly, and it again points to the moderating effect of gender. This result can be explained further by the interviews conducted with each participant after the experiment. Most of the male subjects claimed that they were more concerned about the hardware parameters of the phone provided in the product information picture; their purchasing decisions were formed depending on whether these met expectations, and the mobile phone reviews were taken as secondary references that could not completely change their minds.

Figure 5 shows an example, randomly selected from the female participants, of the relationship between visual behavior and the corresponding decision-making behavior. The English translation of the words appearing in Figure 5 is given in Figure 4 .

Figure 5 . Fixation count distribution.

This subject’s fixation dwell time and fixation count for negative reviews were significantly greater than those for positive ones. Focusing on the comments about the screen and about running smoothly, the female participant decided not to purchase this product. This suggests that she cared a great deal about screen quality and running speed when selecting a mobile phone; when other consumers expressed negative criticism about these features, she tended to give up buying the product.

Furthermore, combining each subject’s gaze distribution map and AOI heat map, it was found that different subjects paid attention to different features of the mobile phones, and all subjects had clear concerns about certain features of the product. The top five mobile phone features that subjects were concerned about are listed in Table 5 . Contrary to expectations, factors such as appearance and logistics were not a priority. Consequently, the reasons why participants chose to buy or not to buy the mobile phones can be inferred from the gaze distribution recorded on the product review picture. Therefore, we can provide businesses with suggestions on how to improve the design of mobile phone products according to the features that users are most concerned about.

Table 5 . Top 5 features of mobile phones.

Fictitious Comments Recognition Analysis

The authenticity of reviews is an important factor affecting the helpfulness of online reviews. To enhance the reputation and ratings of online stores in the Chinese e-commerce market, more and more sellers are employing a network “water army”—a group of people who praise the shop and add many fake comments without buying any goods from the store. Combining online comments, eye movement fixation, and information extraction theory, Song et al. (2017) found that fake praise significantly affects consumers’ judgment of the authenticity of reviews, thereby affecting consumers’ purchase intention. Such fictitious comments, mixed in among real purchasers’ comments, can easily mislead customers. Hence, this experiment was designed to randomly insert a fictitious comment among the 79 real comments without notifying the participants in advance, to test whether potential buyers could identify the false comment and to find out its impact on consumers’ purchase decisions.

The analysis of the eye movement data from the 40 product review pictures containing this false comment found that only a few subjects’ visual trajectories moved back and forth over this comment, and most participants exhibited no differences relative to the other comments, indicating that the vast majority of users did not detect its lack of authenticity. Moreover, when asked in the interviews whether they had noticed this hidden false comment, almost 96% of the participants answered that they had not. Thus, Hypothesis H2b is not supported.
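
One way to formalize this check, shown below as a sketch with placeholder data rather than the recorded eye movements, is to compare each participant’s dwell time on the fabricated comment with their average dwell time on the other comments using a paired t-test:

```python
# Sketch of a paired comparison of dwell time on the fabricated comment versus
# the mean dwell time on the other comments (placeholder values only).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
mean_dwell_real = rng.normal(420, 60, 40)              # per-participant mean over real comments (ms)
dwell_fake = mean_dwell_real + rng.normal(0, 50, 40)   # fake comment: no systematic extra scrutiny

t, p = ttest_rel(dwell_fake, mean_dwell_real)
print(f"paired t-test, fake vs. real comments: t = {t:.2f}, p = {p:.3f}")
```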

This result explains why network “water armies” are so popular in China: consumers cannot distinguish false comments. Thus, it is necessary to regulate the e-commerce market, establish automatic identification systems for the authenticity of online comments, and crack down on the illegal practice of employing water armies to disseminate fraudulent information.

Discussion and Conclusion

In the e-commerce market, online comments facilitate online shopping for consumers; in turn, consumers are increasingly dependent on review information to judge the quality of products and make buying decisions. Consequently, studies on the influence of online reviews on consumers’ behavior have important theoretical significance and practical implications. Using traditional empirical methodologies, such as self-report surveys, it is difficult to elucidate the effects of some variables, such as review-reading preferences, because they are associated with automatic or subconscious cognitive processing. In this paper, an eye-tracking experiment was employed to test congruity hypotheses about product reviews and to explore consumers’ online review search behavior, incorporating the moderating effect of gender.

The hypothesis testing results indicate that the emotional valence of online reviews has a significant influence on the fixation dwell time and fixation count in the AOIs, suggesting that consumers exert more cognitive attention and effort on negative reviews than on positive ones. This finding is consistent with Ahluwalia et al.’s (2000) observation that negative information is more valuable than positive information when making a judgment. Specifically, because of the untouchability of online shopping, consumers use comments from other users to avoid possible risks arising from information asymmetry ( Hong et al., 2017 ). These findings provide information-processing evidence that customers are inclined to acquire more information, think more deeply, and make comparisons when negative comments appear, which makes them more likely to choose not to buy the product in order to reduce their risk. In addition, in real online shopping, consumers are accustomed to giving positive reviews as long as any dissatisfaction with the shopping process is within their tolerance limits; furthermore, some e-sellers may be forging fake praise ( Wu et al., 2020 ). These two phenomena exaggerate the word-of-mouth effect of negative comments, giving them a greater effect than positive reviews; hence, consumers pay more attention to negative reviews. Thus, Hypothesis H2a is supported. However, when limited fake criticism was mixed in with a large amount of normal commentary, the subjects’ eye movements did not change significantly, indicating that little cognitive conflict was produced and that consumers could not identify fake comments. Therefore, H2b is not supported.

Although the main effect of gender on fixation dwell time and fixation count was not significant, a significant interaction effect between user gender and review polarity was observed, suggesting that consumers’ gender can regulate their comment-browsing behavior. Therefore, H3 is partly supported. For female consumers, attention to negative comments was significantly greater than attention to positive ones; men’s attention was more evenly distributed, and men paid more attention to positive comments than women. This is attributed to the fact that men and women have different risk perceptions of online shopping ( Garbarino and Strahilevitz, 2004 ). As reported in previous studies, men tend to focus on specific, concrete information, such as the technical features of mobile phones, as the basis for their purchase decisions, and they have a weaker perception of the risks of online shopping than women. Women worry more about the various shopping risks and are more easily affected by others’ evaluations. Specifically, women considered all aspects of the available information, including the attributes of the product itself and other users’ post-use evaluations. They tended to believe that the more comprehensive the information they considered, the lower the risk of a failed purchase ( Garbarino and Strahilevitz, 2004 ; Kanungo and Jain, 2012 ). Therefore, women hope to reduce the risk of loss by drawing on as much information as possible, which is why they are more likely to focus on negative reviews.

The main finding from the fixation count distribution is that consumers’ visual attention was mainly focused on reviews mentioning the following five mobile phone characteristics: running smoothness, battery life, heat generation, pixels, and after-sales service. Considering the behavioral results, when consumers paid more attention to negative comments they tended to give up buying the mobile phone, and when they paid more attention to positive comments they often chose to buy. Consequently, there is a significant correlation between visual attention and behavioral decision results. Thus, H4 is supported: consumers’ decision-making intention is reflected in their visual browsing process. In brief, the results of the eye movement experiment can serve as a basis for sellers to formulate marketing strategies, and they also demonstrate the feasibility and rigor of applying the eye-tracking method to the study of consumer decision-making behavior.

Theoretical Implications

This study focused on how online reviews affect consumer purchasing decisions by employing eye-tracking. The results contribute to the literature on consumer behavior and provide practical implications for the development of e-business markets. The study makes several theoretical contributions. First, it contributes to the literature on online review valence in online shopping by tracking the visual information acquisition process underlying consumers’ purchase decisions. Although several studies have examined the effect of online review valence, very limited research has investigated the underlying mechanisms. Our study advances this research area by proposing visual processing models of review information. The findings provide useful information and guidelines on the underlying mechanism of how online reviews influence consumers’ online shopping behavior, which is essential for the theory of online consumer behavior.

Second, the current study offers a deeper understanding of the relationship between online review valence and gender differences by uncovering the moderating role of gender. Although previous studies have found an effect of review valence on online consumer behavior, the current study is the first to reveal the moderating effect of gender on this relationship and explains it from the perspective of attentional bias.

Finally, the current study investigated the effect of online reviews on consumer behavior using both eye-tracking and behavioral self-reports. The results of the two approaches are consistent with each other, which increases the credibility of the current findings and provides strong evidence of whether and how online reviews influence consumer behavior.

Implications for Practice

This study also has implications for practice. According to the analysis of the experimental results and the findings presented above, online merchants should pay particular attention to negative comments and resolve them promptly, through careful analysis of negative comments and customization of product information according to consumer characteristics, including gender. Given the finding that consumers cannot identify false comments, it is also very important to establish an online review screening system that automatically screens untrue content in product reviews, creating a safer, more reliable, and better online shopping environment for consumers.

Limitations and Future Research

Although the research makes contributions to both the theoretical and the empirical literature, it has some limitations. In the experiments, the number of reviews for each mobile phone was limited to 10 positive and 10 negative reviews (20 in total) because of the size restrictions of the product review picture; this number of comments can be considered relatively small. Future work should develop a dynamic experimental design in which participants can flip pages to increase the number of comments. Also, this research studied the impact of reviews on consumers’ purchase decisions while hiding the brand of the products. The results might differ if the brands were exposed, since consumers’ decisions might be moderated by brand preference and brand loyalty; this could be taken into account in future research projects.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author Contributions

TC conceived and designed this study. TC, PS, and MQ wrote the first draft of the manuscript. TC, XC, and MQ designed and performed related experiments, material preparation, data collection, and analysis. TC, PS, XC, and Y-CL revised the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors wish to thank the Editor-in-Chief, Associate Editor, reviewers and typesetters for their highly constructive comments. The authors would like to thank Jia Jin and Hao Ding for assistance in experimental data collection and Jun Lei for the text-polishing of this paper. The authors thank all the researchers who graciously shared their findings with us which allowed this eye-tracking study to be more comprehensive than it would have been without their help.

Ahluwalia, R., Burnkrant, R., and Unnava, H. (2000). Consumer response to negative publicity: the moderating role of commitment. J. Mark. Res. 37, 203–214. doi: 10.2307/1558500

Archak, N., Ghose, A., and Ipeirotis, P. (2010). Deriving the pricing power of product features by mining consumer reviews. Manag. Sci. 57, 1485–1509. doi: 10.1287/mnsc.1110.1370

Bae, S., and Lee, T. (2011). Product type and consumers’ perception of online consumer reviews. Electron. Mark. 21, 255–266. doi: 10.1007/s12525-011-0072-0

Baek, H., Ahn, J., and Choi, Y. (2012). Helpfulness of online consumer reviews: readers’ objectives and review cues. Int. J. Electron. Commer. 17, 99–126. doi: 10.2753/jec1086-4415170204

Boardman, R., and McCormick, H. (2019). The impact of product presentation on decision making and purchasing. Qual. Mark. Res. Int. J. 22, 365–380. doi: 10.1108/QMR-09-2017-0124

Boardman, R., and Mccormick, H. (2021). Attention and behaviour on fashion retail websites: an eye-tracking study. Inf. Technol. People . doi: 10.1108/ITP-08-2020-0580 [Epub ahead of print]

Chae, S. W., and Lee, K. (2013). Exploring the effect of the human brand on consumers’ decision quality in online shopping: An eye-tracking approach. Online Inf. Rev. 37, 83–100. doi: 10.1108/14684521311311649

Changchit, C., and Klaus, T. (2020). Determinants and impact of online reviews on product satisfaction. J. Internet Commer. 19, 82–102. doi: 10.1080/15332861.2019.1672135

Chen, C. D., and Ku, E. C. (2021). Diversified online review websites as accelerators for online impulsive buying: the moderating effect of price dispersion. J. Internet Commer. 20, 113–135. doi: 10.1080/15332861.2020.1868227

Cortinas, M., Cabeza, R., Chocarro, R., and Villanueva, A. (2019). Attention to online channels across the path to purchase: an eye-tracking study. Electron. Commer. Res. Appl. 36:100864. doi: 10.1016/j.elerap.2019.100864

Craciun, G., and Moore, K. (2019). Credibility of negative online product reviews: reviewer gender, reputation and emotion effects. Comput. Hum. Behav. 97, 104–115. doi: 10.1016/j.chb.2019.03.010

Cui, G., Lui, H.-K., and Guo, X. (2012). The effect of online consumer reviews on new product sales. International. J. Elect. Com. 17, 39–58. doi: 10.2753/jec1086-4415170102

Floh, A., Koller, M., and Zauner, A. (2013). Taking a deeper look at online reviews: The asymmetric effect of valence intensity on shopping behaviour. J. Mark. Manag. 29, 646–670. doi: 10.1080/0267257X.2013.776620

Garbarino, E., and Strahilevitz, M. (2004). Gender differences in the perceived risk of buying online and the effects of receiving a site recommendation. J. Bus. Res. 57, 768–775. doi: 10.1016/S0148-2963(02)00363-6

Ghose, A., and Ipeirotis, P. G. (2010). Estimating the helpfulness and economic impact of product reviews: mining text and reviewer characteristics. IEEE Trans. Knowl. Data Eng. 23:188. doi: 10.1109/TKDE.2010.188

GooSeeker (2015). Available at: http://www.gooseeker.com/pro/product.html (Accessed January 20, 2020).

Guo, J., Wang, X., and Wu, Y. (2020). Positive emotion bias: role of emotional content from online customer reviews in purchase decisions. J. Retail. Consum. Serv. 52:101891. doi: 10.1016/j.jretconser.2019.101891

Hasanat, M., Hoque, A., Shikha, F., Anwar, M., Abdul Hamid, A. B., and Huam, T. (2020). The impact of coronavirus (COVID-19) on E-Business in Malaysia. Asian J. Multidisc. Stud. 3, 85–90.

Hong, H., Xu, D., Wang, G., and Fan, W. (2017). Understanding the determinants of online review helpfulness: a meta-analytic investigation. Decis. Support. Syst. 102, 1–11. doi: 10.1016/j.dss.2017.06.007

Huang, P., Lurie, N., and Mitra, S. (2009). Searching for experience on the web: an empirical examination of consumer behavior for search and experience goods. J. Mark. Am. Mark. Assoc. 73, 55–69. doi: 10.2307/20619010

Hwang, Y. M., and Lee, K. C. (2018). Using an eye-tracking approach to explore gender differences in visual attention and shopping attitudes in an online shopping environment. Int. J. Human–Comp. Inter. 34, 15–24. doi: 10.1080/10447318.2017.1314611

Jacob, R., and Karn, K. (2003). “Eye tracking in human-computer interaction and usability research: ready to deliver the promises,” in The mind’s eye North-Holland (New York: Elsevier), 573–605.

Jiménez, F. R., and Mendoza, N. A. (2013). Too popular to ignore: the influence of online reviews on purchase intentions of search and experience products. J. Interact. Mark. 27, 226–235. doi: 10.1016/j.intmar.2013.04.004

Just, M., and Carpenter, P. (1980). A theory of reading: from eye fixations to comprehension. Psychol. Rev. 87, 329–354. doi: 10.1037/0033-295X.87.4.329

Just, M., and Carpenter, P. (1992). A capacity theory of comprehension: individual differences in working memory. Psychol. Rev. 99, 122–149. doi: 10.1037/0033-295x.99.1.122

Kang, T. C., Hung, S. Y., and Huang, A. H. (2020). The adoption of online product information: cognitive and affective evaluations. J. Internet Commer. 19, 373–403. doi: 10.1080/15332861.2020.1816315

Kanungo, S., and Jain, V. (2012). Online shopping behaviour: moderating role of gender and product category. Int. J. Bus. Inform. Syst. 10, 197–221. doi: 10.1504/ijbis.2012.047147

Kaur, S., Lal, A. K., and Bedi, S. S. (2017). Do vendor cues influence purchase intention of online shoppers? An empirical study using SOR framework. J. Internet Commer. 16, 343–363. doi: 10.1080/15332861.2017.1347861

Lackermair, G., Kailer, D., and Kanmaz, K. (2013). Importance of online product reviews from a consumer’s perspective. Adv. Econ. Bus. 1, 1–5. doi: 10.13189/aeb.2013.010101

Liu, H.-C., Lai, M.-L., and Chuang, H.-H. (2011). Using eye-tracking technology to investigate the redundant effect of multimedia web pages on viewers’ cognitive processes. Comput. Hum. Behav. 27, 2410–2417. doi: 10.1016/j.chb.2011.06.012

Luan, J., Yao, Z., Zhao, F., and Liu, H. (2016). Search product and experience product online reviews: an eye-tracking study on consumers’ review search behavior. Comput. Hum. Behav. 65, 420–430. doi: 10.1016/j.chb.2016.08.037

Mayzlin, D., and Chevalier, J. (2003). The effect of word of mouth on sales: online book reviews. J. Mark. Res. 43:409. doi: 10.2307/30162409

Meyers-Levy, J., and Sternthal, B. (1993). A two-factor explanation of assimilation and contrast effects. J. Mark. Res. 30, 359–368. doi: 10.1177/002224379303000307

Mudambi, S., and Schuff, D. (2010). What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Q. 34, 185–200. doi: 10.1007/s10107-008-0244-7

Mumuni, A. G., O’Reilly, K., MacMillan, A., Cowley, S., and Kelley, B. (2020). Online product review impact: the relative effects of review credibility and review relevance. J. Internet Commer. 19, 153–191. doi: 10.1080/15332861.2019.1700740

Pavlou, P., and Dimoka, A. (2010). NeuroIS: the potential of cognitive neuroscience for information systems research. Inf. Syst. Res. (Articles in Advance).

Plassmann, H., Venkatraman, V., Huettel, S., and Yoon, C. (2015). Consumer neuroscience: applications, challenges, and possible solutions. J. Mark. Res. 52, 427–435. doi: 10.1509/jmr.14.0048

Racherla, P., and Friske, W. (2013). Perceived “usefulness” of online consumer reviews: an exploratory investigation across three services categories. Electron. Commer. Res. Appl. 11, 548–559. doi: 10.1016/j.elerap.2012.06.003

Renshaw, J. A., Finlay, J. E., Tyfa, D., and Ward, R. D. (2004). Understanding visual influence in graph design through temporal and spatial eye movement characteristics. Interact. Comput. 16, 557–578. doi: 10.1016/j.intcom.2004.03.001

Rosa, P. J. (2015). What do your eyes say? Bridging eye movements to consumer behavior. Int. J. Psychol. Res. 15, 1250–1256. doi: 10.1116/1.580598

Ruiz-Mafe, C., Chatzipanagiotou, K., and Curras-Perez, R. (2018). The role of emotions and conflicting online reviews on consumers’ purchase intentions. J. Bus. Res. 89, 336–344. doi: 10.1016/j.jbusres.2018.01.027

Sen, S., and Lerman, D. (2007). Why are you telling me this? An examination into negative consumer reviews on the web. J. Interact. Mark. 21, 76–94. doi: 10.1002/dir.20090

Song, W., Park, S., and Ryu, D. (2017). Information quality of online reviews in the presence of potentially fake reviews. Korean Eco. Rev. 33, 5–34.

Tupikovskaja-Omovie, Z., and Tyler, D. (2021). Eye tracking technology to audit google analytics: analysing digital consumer shopping journey in fashion m-retail. Int. J. Inf. Manag. 59:102294. doi: 10.1016/j.ijinfomgt.2020.102294

Vimaladevi, K., and Dhanabhakaym, M. (2012). A study on the effects of online consumer reviews on purchasing decision. Prestige In. J. Manag. 7, 51–99. doi: 10.1504/IJIMA.2012.044958

Von Helversen, B., Abramczuk, K., Kopeć, W., and Nielek, R. (2018). Influence of consumer reviews on online purchasing decisions in older and younger adults. Decis. Support. Syst. 113, 1–10. doi: 10.1016/j.dss.2018.05.006

Wang, Y., and Minor, M. (2008). Validity, reliability, and applicability of psychophysiological techniques in marketing research. Psychol. Mark. 25, 197–232. doi: 10.1002/mar.20206

Wang, Q., Yang, S., Cao, Z., Liu, M., and Ma, Q. (2014). An eye-tracking study of website complexity from cognitive load perspective. Decis. Support. Syst. 62, 1–10. doi: 10.1016/j.dss.2014.02.007

World Medical Association (2014). World medical association declaration of Helsinki: ethical principles for medical research involving human subjects. J. Am. Coll. Dent. 81, 14–18. doi: 10.1111/j.1447-0756.2001.tb01222.x

Wu, Y., Liu, T., Teng, L., Zhang, H., and Xie, C. (2021). The impact of online review variance of new products on consumer adoption intentions. J. Bus. Res. 136, 209–218. doi: 10.1016/J.JBUSRES.2021.07.014

Wu, Y., Ngai, E., Pengkun, W., and Wu, C. (2020). Fake online reviews: literature review, synthesis, and directions for future research. Decis. Support. Syst. 132:113280. doi: 10.1016/j.dss.2020.113280

Yang, S. F. (2015). An eye-tracking study of the elaboration likelihood model in online shopping. Electron. Commer. Res. Appl. 14, 233–240. doi: 10.1016/j.elerap.2014.11.007

Yu, X., Liu, Y., Huang, X., and An, A. (2010). Mining online reviews for predicting sales performance: a case study in the movie domain. IEEE Trans. Knowl. Data Eng. 24, 720–734. doi: 10.1109/TKDE.2010.269

Yuanyuan, H., Peng, Z., Yijun, L., and Qiang, Y. (2009). An empirical study on the impact of online reviews sentimental orientation on sale based on movie panel data. Manag. Rev. 21, 95–103. doi: 10.1007/978-3-642-00205-2_9

Zhang, K., Cheung, C., and Lee, M. (2014). Examining the moderating effect of inconsistent reviews and its gender differences on consumers’ online shopping decision. Int. J. Inf. Manag. 34, 89–98. doi: 10.1016/j.ijinfomgt.2013.12.001

Zhang, J., Craciun, G., and Shin, D. (2010). When does electronic word-of-mouth matter? A study of consumer product reviews. J. Bus. Res. 63, 1336–1341. doi: 10.1016/j.jbusres.2009.12.011

Zhong-Gang, Y., Xiao-Ya, W., and Economics, S. O. J. S. E. (2015). Research progress and future prospect on online reviews and consumer behavior. Soft Science. 6:20. doi: 10.3760/cma.j.cn112137-20200714-02111

Keywords: online reviews, eye-tracking, consumers purchasing decisions, emotion valence, gender

Citation: Chen T, Samaranayake P, Cen X, Qi M and Lan Y-C (2022) The Impact of Online Reviews on Consumers’ Purchasing Decisions: Evidence From an Eye-Tracking Study. Front. Psychol. 13:865702. doi: 10.3389/fpsyg.2022.865702

Received: 30 January 2022; Accepted: 02 May 2022; Published: 08 June 2022.

Copyright © 2022 Chen, Samaranayake, Cen, Qi and Lan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: XiongYing Cen, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

IMAGES

  1. 50 Smart Literature Review Templates (APA) ᐅ TemplateLab

  2. The review paper writing tips

  3. How To Make A Literature Review For A Research Paper

  4. Research Paper vs. Review Paper: Differences Between Research Papers

  5. How to write a literature review in research paper

  6. Research Paper vs Review: 5 Main Differences

VIDEO

  1. Difference between Research paper and a review. Which one is more important?

  2. Orthopaedics Paper Presentation

  3. Quantitative Research Paper Review

  4. Writing a Review Paper: What, Why, How?

  5. Research Paper Review

  6. Research Paper Review

COMMENTS

  1. How to review a paper

    22 Sep 2016. By Elisabeth Pain. A good peer review requires disciplinary expertise, a keen and critical eye, and a diplomatic and constructive approach. As junior scientists develop their expertise and make names for themselves, they are increasingly likely to receive invitations to review research manuscripts.

  2. THE PAPER REVIEW GENERATOR

    THE PAPER REVIEW GENERATOR . This tool is designed to speed up writing reviews for research papers for computer science. It provides a list of items that can be used to automatically generate a review draft. This website should not replace a human. Generated text should be edited by the reviewer to add more details.

  3. Litmaps

    Our newly released course, Mastering Literature Review with Litmaps, allows instructors to seamlessly bring Litmaps into the classroom to teach fundamental literature review and research concepts. Join the 250,000+ researchers, students, and professionals using Litmaps to accelerate their literature review. Find the right papers faster.

  4. How to write a superb literature review

    One of my favourite review-style articles [3] presents a plot bringing together data from multiple research papers (many of which directly contradict each other). This is then used to identify broad ...

  5. How to Write a Literature Review

    Examples of literature reviews. Step 1 - Search for relevant literature. Step 2 - Evaluate and select sources. Step 3 - Identify themes, debates, and gaps. Step 4 - Outline your literature review's structure. Step 5 - Write your literature review.

  6. How to write a good scientific review article

    With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up-to-date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and ...

  7. How to write a review paper

    Writing the Review. 1. Good scientific writing tells a story, so come up with a logical structure for your paper, with a beginning, middle, and end. Use appropriate headings and sequencing of ideas to make the content flow and guide readers seamlessly from start to finish.

  8. How to write a review paper

    Include this information when writing up the method for your review. 5. Look for previous reviews on the topic. Use them as a springboard for your own review, critiquing the earlier reviews, adding more recently published material, and possibly exploring a different perspective. Exploit their references as another entry point into the literature.

  9. The classification of online consumer reviews: A systematic literature

    Review-level features. Previous research has decomposed online reviews into valence, rating scores, and volume, for example, and analyzed the relevance of these elements for consumer decision making and business performance. Consumers pay close attention to various dimensions when reviewing consumer comments (Phillips, Barnes, Zigan, & Schegg ...

  10. What makes an online review credible? A systematic review of the

    This paper uses the systematic literature review method (Linnenluecke et al. 2020; Moher et al. 2009; Neumann 2021; Okoli 2015; Snyder 2019) to synthesize the research findings. Liberati et al. explain a systematic review as a process for identifying and critically appraising relevant research and analyzing data. Systematic reviews differ from meta-analyses with respect to the methods of analysis used.

  11. Basics of Writing Review Articles

    Just like research papers, the most common and convenient practice is to write review papers in "introduction, methods, results, and discussion (IMRaD)" format, accompanied by a title, abstract, key words, and references. The title makes the first impression and is the most important sentence of the review paper.

  12. Scholarcy

    Generate bibliographies in a click. Export your flashcards to your favourite citation manager or generate a one-click, fully formatted bibliography in Word. Apply what you've learned. Write that magnum opus 🤌. Transform all that knowledge you've built up into a perfectly articulated argument.

  13. What is a review article?

    A review article can also be called a literature review, or a review of literature. It is a survey of previously published research on a topic. It should give an overview of current thinking on the topic. And, unlike an original research article, it will not present new experimental results. Writing a review of literature is to provide a ...

  14. Writing, reading, and critiquing reviews

    Scoping Review: Aims to quickly map a research area, documenting key concepts, sources of evidence, methodologies used. Typically, scoping reviews do not judge the quality of the papers included in the review. They tend to produce descriptive accounts of a topic area. Kalun P, Dunn K, Wagner N, Pulakunta T, Sonnadara R.

  15. Ten Simple Rules for Writing a Literature Review

    Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications. For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively. Given such mountains of papers, scientists cannot be expected to examine in detail every ...

  16. Discovering the evolution of online reviews: A bibliometric review

    As a rapidly developing topic, online reviews have aroused great interest among researchers. Although the existing research can help to explain issues related to online reviews, the scattered and diversified nature of previous research hinders an overall understanding of this area. Based on bibliometrics, this study analyzes 3089 primary articles and 100,783 secondary articles published ...

  17. Online Reviews: A Literature Review and Roadmap for Future Research

    This paper reviews and synthesizes existing research on online reviews, addresses the knowledge gaps, and proposes directions for future research. Taking an interdisciplinary approach, we delve into the stages of the online review process, including review creation, exposure, and evaluation, and discuss the role of behavioral biases, review ...

  18. How to write a good scientific review article

    A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the ...

  19. AI Literature Review Generator

    A literature review is a comprehensive analysis and evaluation of scholarly articles, books and other sources concerning a particular field of study or a research question. This process involves discussing the state of the art of an area of research and identifying pivotal works and researchers in the domain. The primary purpose of a literature ...

  20. Measuring the impact of online reviews on consumer purchase decisions

    1. Introduction. In October 2020, research by the Wall Street Journal revealed surprising statistics that every business would want to know about the importance of online reviews (The Wall Street Journal, 2020). Firms need to capitalize on their understanding of online reviews, as online shoppers consider online reviews as channels for getting product information while making purchase decisions (Fu ...

  21. Measuring the impact of online reviews on consumer purchase decisions

    This paper proposes a novel empirical framework based on Source Credibility Theory and Cognitive Theory of Multimedia Learning to identify the effect of features (such as review text, review title and reviewer attributes) on the perceived helpfulness of an online review in the presence of product type (tangible vs. intangible) as a moderator. In addition, we employed quantile regression as a ...

  22. A systematic review of online examinations: A pedagogical innovation

    The papers selected following the double full-text review were accepted for this review. Each accepted paper was reviewed for quality using the MMAT system ... (30, 83.3%) were empirical research, with the remainder commentary papers (6, 16.7%). Of the empirical research papers, three-quarters reported a quantitative study design ...

  23. Frontiers

    1 School of Business, Ningbo University, Ningbo, China; 2 School of Business, Western Sydney University, Penrith, NSW, Australia; This study investigated the impact of online product reviews on consumers' purchasing decisions by using eye-tracking. The research methodology involved (i) development of a conceptual framework of online product review and purchasing intention through the moderation ...

  24. A review of the effects of fermentation on the structure, properties

    This work was supported by the National Natural Science Foundation of China (32072257, 32160530), the major science & technology development-specific projects of Jiangxi province (20223AAF02017), central government guide local special fund project for scientific and technological development of Jiangxi province (20221ZDD02001) and the research program of state key laboratory of food science ...

  25. FTX'd: Conflicting Public and Private Interests in Chapter 11

    Stanford Law Review, Forthcoming. 70 Pages. Posted: 15 Mar 2024. By Jonathan C. Lipson, Temple University - James E. Beasley School of Law.

  26. Research Shows Even Positive Online Reviews are a Minefield for Firms

    Customer's online reviews of products and services are highly influential and have an immediate impact on brand value and customer buying behaviors. According to the Pew Research Center, "82% of U.S. adults say they at least sometimes read online customer ratings or reviews before purchasing items for the first time, including 40% who say they always or almost always do so."