A Systematic Literature Review of Assessment Tools for Programming Assignments



"A Systematic Literature Review of Assessment Tools for Programming ..."

Details and statistics.

DOI: 10.1109/CSEET.2016.48

access: closed

type: Conference or Workshop Paper

metadata version: 2024-02-20

a systematic literature review of assessment tools for programming assignments


King's College London Research Portal

Automated Grading and Feedback Tools for Programming Education: A Systematic Review

  • Informatics
  • Software Systems

Research output: Contribution to journal › Article › peer-review

  • Automated Grading
  • Computer Science Education
  • Systematic Literature Review
  • Automatic Assessment Tools

Access to Document

  • 10.1145/3636515
  • Systematic_Literature_Review-v1.5.1 Accepted author manuscript, 949 KB

Other files and links

  • Link to publication in Scopus


T1 - Automated Grading and Feedback Tools for Programming Education

T2 - A Systematic Review

AU - Messer, Marcus

AU - Brown, Neil

AU - Kölling, Michael

N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).

PY - 2024/2/19

Y1 - 2024/2/19

N2 - We conducted a systematic literature review on automated grading and feedback tools for programming education. We analysed 121 research papers from 2017 to 2021 inclusive and categorised them based on skills assessed, approach, language paradigm, degree of automation and evaluation techniques. Most papers assess the correctness of assignments in object-oriented languages. Typically, these tools use a dynamic technique, primarily unit testing, to provide grades and feedback to the students or static analysis techniques to compare a submission with a reference solution or with a set of correct student submissions. However, these techniques’ feedback is often limited to whether the unit tests have passed or failed, the expected and actual output, or how they differ from the reference solution. Furthermore, few tools assess the maintainability, readability or documentation of the source code, with most using static analysis techniques, such as code quality metrics, in conjunction with grading correctness. Additionally, we found that most tools offered fully automated assessment to allow for near-instantaneous feedback and multiple resubmissions, which can increase student satisfaction and provide them with more opportunities to succeed. In terms of techniques used to evaluate the tools’ performance, most papers primarily use student surveys or compare the automatic assessment tools to grades or feedback provided by human graders. However, because the evaluation dataset is frequently unavailable, it is more difficult to reproduce results and compare tools to a collection of common assignments.


KW - Automated Grading

KW - Feedback

KW - Assessment

KW - Computer Science Education

KW - Systematic Literature Review

KW - Automatic Assessment Tools

UR - http://www.scopus.com/inward/record.url?scp=85190979567&partnerID=8YFLogxK

U2 - 10.1145/3636515

DO - 10.1145/3636515

M3 - Article

SN - 1946-6226

JO - ACM Transactions on Computing Education

JF - ACM Transactions on Computing Education
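
As the abstract above notes, most of the reviewed tools grade correctness dynamically, chiefly by running instructor-written unit tests against each submission. The sketch below illustrates that general approach in Python; it is a minimal toy under assumed names (load_submission, grade, make_tests), not the implementation of any tool in the review. Static-analysis-based grading, also common in the reviewed papers, would instead compare the submission's structure or metrics against a reference solution.

    # Minimal sketch of unit-test-based autograding: run instructor tests
    # against a student submission, return a score and per-test feedback.
    # All names are illustrative assumptions, not from a reviewed tool.
    import importlib.util
    import unittest

    def load_submission(path):
        """Import a student's .py file as a module."""
        spec = importlib.util.spec_from_file_location("submission", path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    def grade(path, test_case_factory):
        """Return (score, feedback) for the submission at `path`.

        `test_case_factory` maps the submission module to a TestCase class;
        score is the fraction of tests passed, feedback lists failures.
        """
        submission = load_submission(path)
        tests = unittest.defaultTestLoader.loadTestsFromTestCase(
            test_case_factory(submission))
        result = unittest.TestResult()
        tests.run(result)
        problems = result.failures + result.errors
        feedback = [f"{test.id()}: {trace.splitlines()[-1]}"
                    for test, trace in problems]
        passed = result.testsRun - len(problems)
        return passed / result.testsRun, feedback

    # Hypothetical usage: the instructor encodes the specification as tests.
    def make_tests(sub):
        class AddTests(unittest.TestCase):
            def test_small(self):
                self.assertEqual(sub.add(2, 3), 5)
            def test_negative(self):
                self.assertEqual(sub.add(-1, 1), 0)
        return AddTests

    score, feedback = grade("student1.py", make_tests)
    print(f"score: {score:.0%}")
    for line in feedback:
        print(line)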



5 software tools to support your systematic review processes

By Dr. Mina Kalantar on 19-Jan-2021 13:01:01


Systematic reviews are a methodical re-evaluation of scholarly literature to support decision making. The approach was first applied in healthcare, to set policies, create guidelines and answer medical questions.

Systematic reviews are large, complex projects and, depending on their purpose, they can be quite expensive to conduct. A team of researchers, data analysts and experts from various fields may collaborate to review and examine very large numbers of research articles for evidence synthesis. Depending on their scope, systematic reviews often take at least 6 months, and sometimes upwards of 18 months, to complete.

The main principles of transparency and reproducibility require a pragmatic approach in the organisation of the required research activities and detailed documentation of the outcomes. As a result, many software tools have been developed to help researchers with some of the tedious tasks required as part of the systematic review process.


The first generation of these software tools was produced to accommodate and manage collaboration, but the tools gradually developed to help with screening literature and reporting outcomes. Some packages were initially designed for medical and healthcare studies and have specific protocols and customised steps integrated for various types of systematic reviews. Others are designed for general use, and as the systematic review approach has been extended to other fields, they are increasingly adopted in software engineering, health-related nutrition, agriculture, environmental science, the social sciences and education.

Software tools

There are various free and subscription-based tools to help with conducting a systematic review. Many are designed to assist with the key stages of the process, including title and abstract screening, data synthesis, and critical appraisal. Some facilitate the entire review process, from protocol development to reporting of the outcomes, and help with faster project completion.

As time goes on, more functions are being integrated into such software tools. Technological advances have allowed for more sophisticated and user-friendly features, including visual graphics for pattern recognition and for linking multiple concepts. The idea is to digitalise the cumbersome parts of the process to increase efficiency, allowing researchers to focus their time and effort on assessing the rigour and robustness of the research articles.

This article introduces commonly used systematic review tools that are relevant to food research and related disciplines, which can be used in a similar context to the process in healthcare disciplines.

These reviews are based on IFIS's internal research; they are unbiased, and IFIS is not affiliated with any of the companies.


Covidence

This online platform is a core component of the Cochrane toolkit, supporting parts of the systematic review process, including title/abstract and full-text screening, documentation, and reporting.

The Covidence platform enables collaboration across the entire systematic review team and is suitable for researchers and students at all levels of experience.

From a user perspective, the interface is intuitive, and citation screening is directed step-by-step through a well-defined workflow. Imports and exports are straightforward, with easy export options to Excel and CSV.

Access is free for Cochrane authors (a single reviewer), and Cochrane provides a free trial to other researchers in healthcare. Universities can also subscribe on an institutional basis.

Rayyan

Rayyan is a free, open-access, web-based platform funded by the Qatar Foundation, a non-profit organisation supporting education and community development initiatives. Rayyan is used to screen and code literature through a systematic review process.

Unlike Covidence, Rayyan does not follow a standard systematic review workflow; it simply helps with citation screening. It is accessible through a mobile application, with support for offline screening. The web-based platform is known for its accessible user interface, with easy and clear export options.

Function comparison of 5 software tools to support the systematic review process

EPPI-Reviewer

EPPI-Reviewer is a web-based software programme developed by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) at the UCL Institute of Education, London.

It provides comprehensive functionality for coding and screening. Users can create different levels of coding in a code-set tool for clustering, screening, and administration of documents. EPPI-Reviewer allows direct search of and import from PubMed, and search results from other databases can be imported in various formats. It stores references and automatically identifies and removes duplicates (the sketch after this section illustrates the idea). EPPI-Reviewer supports full-text screening, text mining, meta-analysis and the export of data into different types of reports.

There is no limit on the number of concurrent users or on the number of articles being reviewed. Cochrane reviewers can access EPPI-Reviewer using their Cochrane subscription details.

EPPI-Centre has other tools for facilitating the systematic review process, including coding guidelines and data management tools.
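
As a rough illustration of the automatic duplicate handling mentioned above, the sketch below removes repeated references imported from multiple databases by matching on a normalised DOI or title. The matching rule is an assumption made for illustration, not EPPI-Reviewer's actual algorithm.

    # Toy reference deduplication: two records are considered duplicates
    # if they share a DOI or a normalised title. Illustrative only.
    import re

    def normalise(text):
        """Lowercase and strip non-alphanumerics for fuzzy title matching."""
        return re.sub(r"[^a-z0-9]", "", text.lower())

    def deduplicate(references):
        """Keep the first record seen for each DOI/title key."""
        seen, unique = set(), []
        for ref in references:
            keys = {k for k in (ref.get("doi"),
                                normalise(ref.get("title", ""))) if k}
            if keys & seen:
                continue  # already imported from another database
            seen |= keys
            unique.append(ref)
        return unique

    records = [
        {"title": "A Systematic Literature Review of Assessment Tools "
                  "for Programming Assignments",
         "doi": "10.1109/CSEET.2016.48"},
        {"title": "A systematic literature review of assessment tools "
                  "for programming assignments.",
         "doi": None},
    ]
    print(len(deduplicate(records)))  # -> 1; the titles match once normalised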

CADIMA

CADIMA is a free, online, open-access review management tool, developed to facilitate research synthesis and the structured documentation of the outcomes.

The Julius Kühn Institute and the Collaboration for Environmental Evidence established the software programme to support and guide users through the entire systematic review process, including protocol development, literature searching, study selection, critical appraisal, and documentation of the outcomes. The flexibility in choosing the steps also makes CADIMA suitable for conducting systematic mapping and rapid reviews.

CADIMA was initially developed for research questions in agriculture and the environment, but it is not limited to these and can be used for managing review processes in other disciplines. It enables users to export files and work offline.

The software allows for statistical analysis of the collated data using the R statistical software (illustrated below). Unlike EPPI-Reviewer, CADIMA does not have a built-in search engine for querying literature databases such as PubMed.
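
CADIMA hands the statistics off to R; to make concrete the kind of computation such a step performs, here is a generic fixed-effect (inverse-variance) meta-analysis in Python with invented numbers. This is the textbook pooling formula, not CADIMA's implementation.

    # Fixed-effect meta-analysis: pool study effects weighted by the
    # inverse of their variances (w_i = 1/v_i). Data are invented.
    import math

    studies = [
        {"effect": 0.42, "variance": 0.04},
        {"effect": 0.31, "variance": 0.09},
        {"effect": 0.55, "variance": 0.02},
    ]

    weights = [1.0 / s["variance"] for s in studies]
    pooled = sum(w * s["effect"]
                 for w, s in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect

    print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI)")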

DistillerSR

DistillerSR is an online software platform maintained by the Canadian company Evidence Partners, which specialises in literature review automation. DistillerSR provides a collaborative platform for every stage of literature review management. The framework is flexible and can accommodate literature reviews of different sizes. It is configurable to different data curation procedures, workflows and reporting standards, and it integrates the necessary features for screening, quality assessment, data extraction and reporting.

The software uses artificial intelligence (AI)-enabled technologies for priority screening; the aim is to shorten the screening process by ranking the most relevant references nearer the top (a toy illustration follows this section). It can also use AI as a second reviewer in quality-control checks of studies screened by human reviewers.

DistillerSR is used to manage systematic reviews in various medical disciplines, surveillance, pharmacovigilance and public health reviews, including food and nutrition topics. The software does not support statistical analyses, but it provides configurable forms in standard formats for data extraction.

DistillerSR allows direct search of and import of references from PubMed. It provides an add-on feature called LitConnect, which can be set to import newly published references from data providers automatically, keeping reviews up to date while they are in progress.
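
The priority-screening idea can be illustrated with a toy reranker: fit a classifier to the titles screened so far, then sort the remaining titles by predicted relevance so the likeliest includes surface first. The TF-IDF plus logistic-regression pipeline below is an assumed stand-in for illustration, not DistillerSR's actual model.

    # Toy AI-assisted priority screening: learn from screened titles
    # (1 = relevant, 0 = not) and rerank unscreened titles. The model
    # choice here is an illustrative assumption.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    screened = [
        ("automated grading of programming assignments", 1),
        ("unit testing for student code assessment", 1),
        ("survey of wireless sensor networks", 0),
        ("deep learning for image segmentation", 0),
    ]
    unscreened = [
        "feedback tools for introductory programming courses",
        "routing protocols in ad hoc networks",
    ]

    titles, labels = zip(*screened)
    vectoriser = TfidfVectorizer()
    model = LogisticRegression().fit(vectoriser.fit_transform(titles), labels)

    # Highest predicted relevance first, so reviewers see it sooner.
    scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]
    for title, score in sorted(zip(unscreened, scores),
                               key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {title}")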

The Systematic Review Toolbox is a web-based catalogue of tools, including software packages, that can assist with single or multiple tasks within the evidence synthesis process. Researchers can run a quick search, or tailor a more sophisticated search by choosing their approach, budget, discipline, and preferred support features, to find the right tools for their research.

If you enjoyed this blog post, you may also be interested in our recently published blog post addressing the difference between a systematic review and a systematic literature review.



COMMENTS

  1. A Systematic Literature Review of Assessment Tools for Programming Assignments

    The benefits of using assessment tools for programming assignments have been widely discussed in computing education. However, as both researchers and instructors are unaware of the characteristics of existing tools, they are either not used or are reimplemented. This paper presents the results of a study conducted to collect and evaluate evidence about tools that assist in the assessment of ...

  2. A Systematic Literature Review of Assessment Tools for Programming Assignments

    Souza et al. (2016) systematically reviewed 49 studies with the aim of investigating what assessment tools have been developed for programming assignments and what their main characteristics are ...

  3. A Systematic Literature Review of Assessment Tools for Programming Assignments (PDF)

    A systematic literature review provides an overview of a research area to assess the amount of existing evidence on a topic of interest. Thus, this study can be useful in guiding the development ...

  4. A Systematic Literature Review of Assessment Tools for Programming Assignments

    A systematic literature review was performed to collect and evaluate evidence about tools that assist in the assessment of programming assignments. It identified subjects in the development of new assessment tools that researchers could investigate further, and characteristics of assessment tools that could help instructors make selections for their programming courses.

  5. Automated Grading and Feedback Tools for Programming Education: A Systematic Review

    A systematic literature review of assessment tools for programming assignments. In Proceedings of the 2016 IEEE 29th International Conference on Software Engineering Education and Training. 147-156.

  6. Automated Grading and Feedback Tools for Programming Education: A Systematic Review

    We conducted a systematic literature review to investigate recent research into automated grading and feedback tools. Our review offers new insights into the current state of auto-grading tools and the associated research by contributing the following: categorised automatic assessment tools based on core programming skills [127].

  7. Bibliometric Analysis of Automated Assessment in Programming ...

    The systematic literature review on automated assessment by Paiva et al. is the most closely related and most recent review. It identified a new era of automated assessment in computer science, the era of containerization, among other interesting findings; in particular, the growing interest in static analysis techniques for ...

  8. Review of recent systems for automatic assessment of programming ...

    This paper presents a systematic literature review of the recent (2006-2010) development of automatic assessment tools for programming exercises. We discuss the major features that the tools support and the different approaches they use, from both the pedagogical and the technical point of view.

  9. Automated Grading and Feedback Tools for Programming Education: A Systematic Review

    This trend for fully automated tools has not changed in our review, and we also found that most tools were fully automated, with only 14% of tools using a semi-automated approach. Similarly, Keuning et al. [33] focused on conducting a systematic literature review of feedback provided by AATs between 1960 and 2015.

  10. Effect of an Instructor-Centered Tool for Automatic Assessment of ...

    Ihantola et al. presented a systematic literature review of automatic assessment tools for programming assignments covering the 2006-2010 period, in which a total of 17 tools were analyzed. This review aimed to investigate the features of these systems that were reported in the literature after 2005.

  11. E-Assessment Tools for Programming Languages: A Review (PDF)

    1. Manual assessment: assessment of the programming assignment is done "manually by the instructor with assistance of the tool" [3]. 2. Automatic assessment: assessment of a programming assignment is done automatically by the tool, but the instructor has to clearly state the parameters on which the program code should be ...

  12. Guidance on Conducting a Systematic Literature Review

    Literature reviews establish the foundation of academic inquiries. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...