Avoiding Bad or Fake News: Evaluating News Sources: Lesson Plans and Assignments (Miami University Libraries)


Project Cora and Fake News

  • Evaluating News Sites: Credible or Clickbait?

In this workshop, students learn how to evaluate whether a news site is reliable. This group activity takes about 30 minutes and can be used for many different audiences by adjusting the examples used.

  • Fake News, Lies, and Propaganda: How to Sort Fact from Fiction

What is “fake news” anyway? Are we living in a post-truth world? These University of Michigan course materials will provide opportunities to discuss and analyze news production, consumption and evaluation.

  • Fake News: Fight Back

A one-shot or seminar class on fake news tied to source evaluation.

  • Keepin' It Real: Tips and Strategies for Evaluating Fake News

In an effort to provide students with an open space to learn about and discuss recent national concerns over “fake news,” the library offered four sessions of the workshop “Keepin’ It Real: Tips & Strategies for Evaluating Fake News.”

Evaluating Headlines

Outcome: Using a provided rubric, learners will be able to evaluate news article and video headlines based on their language, tone, and credibility.

Materials: Headlines from various sources on a recent news event; the TACT Test for Headlines Rubric.

Lesson: Learners will evaluate several news headlines (which can lead to articles, posts, or videos) on a current news event. Use the Headline Rubric to lead a class discussion evaluating the accuracy of a headline on a current news topic. Learners will then break into groups and use the rubric to rate several headlines on language, tone, and credibility to determine whether each article or video is likely to provide accurate information. Groups will share their results with the rest of the class.

Activity: Learners will work in groups of 3-5 to evaluate 3-5 headlines from a variety of sources on a current news event. Using the provided Headline Rubric, each group will rate the headlines based on tone, language, and credibility.

Reflection: Groups will share and compare results with the rest of the class. Why did your group rate the headline this way? Which headlines make you want to read the article or watch the video? Which parts of the rubric were the most helpful? How can you use this as you read about news events in the future?

Fake News on Social Media

Opposing Viewpoints in Context covers the topic of Fake News on Social Media. This resource contains academic journals, reference sources, images, videos, and statistics.

Stanford Education Group Activities

In 2016, the Stanford History Education Group published an extensive study based on its year-and-a-half-long project to assess students' ability to judge the credibility of online information. The summary of their results could be boiled down to one word: bleak. The study covered students from middle school through college, with each group demonstrating that they are 'easily duped' when evaluating information on social media.

Assessments used in the study are available through the report linked below and may be used in classes. College-level assessments begin on page 15, though other levels can be appropriate for beginning learners of any age.

Evaluating Information: The Cornerstone of Civic Online Reasoning. The 2016 report of the Stanford History Education Group's study assessing students' ability to evaluate information available through social media.

How Students Engage with News: Five Takeaways for Educators, Journalists, and Librarians - Project Information Literacy

The News Study research report presents findings about how a sample of U.S. college students gather information and engage with news in the digital age. Results are included from an online survey of 5,844 respondents and telephone interviews with 37 participants from 11 U.S. colleges and universities selected for their regional, demographic, and red/blue state diversity. A computational analysis was conducted using Twitter data associated with the survey respondents and a Twitter panel of 135,891 college-age people. Six recommendations are included for educators, journalists, and librarians working to make students effective news consumers. To explore the implications of this study’s findings, concise commentaries from leading thinkers in education, libraries, media research, and journalism are included.

  • Alison J. Head, John Wihbey, P. Takis Metaxas, Margy MacMillan, and Dan Cohen, “How Students Engage with News: Five Takeaways for Educators, Journalists, and Librarians,” Project Information Literacy Research Institute (October 16, 2018). http://www.projectinfolit.org/uploads/2/7/5/4/27541717/newsreport.pdf

European Response to 'Fake News'

  • Common Sense Wanted: Resilience to ‘Post-Truth’ and Its Predictors in the New Media Literacy Index 2018

The Media Literacy Index, created in 2017, measures resilience to ‘post-truth’ and ‘fake news’ and their consequences in a number of European countries, offering a useful instrument for finding solutions. The second edition of the index (2018) scores and ranks 35 European countries according to their potential to withstand ‘post-truth’ and its negative consequences.

Spotting Fake News

How to Spot Fake News

Image by IFLA [CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)], via Wikimedia Commons

PDF and JPG versions in English and other languages can be found at https://www.ifla.org/publications/node/11174

Other Lesson Plans and Assignment Examples

  • Debunking Fake News
  • E.S.C.A.P.E. Junk News
  • Nicole A. Cooke’s Resources for Combating Fake News
  • Media Bias Chart
  • PowerPoint 
  • The Credibility Challenge
  • Real or Satire?

Gaming Your News Knowledge

  • Bad News Spread misinformation via a choose-your-own-adventure setup. Your task is to get as many followers as you can while slowly building up fake credibility as a news site.
  • Fake It to Make It Set a financial goal and write fake news stories to meet it.
  • Newsfeed Defenders Become a social media admin and get your group's profile recognized as a beacon of accuracy and focus with this game.

The Truthful, the Fake, and the Biased

  • The Truthful, the Fake, and the Biased Presentation given on July 29th at OLSSI

Elevate Your News Evaluation (UC Merced Library)

Avoiding Fake News (UC Merced Library)

Through this lesson, students will corroborate information to determine source credibility and identify characteristics of credible sources.

Evaluating News Articles (UC Merced Library)

Through this jigsaw activity, students will assess an article's accuracy and bias based on a variety of factors.  They will discover the importance of corroborating information by comparing three articles in small groups, and recognize that they need to consider many factors when evaluating news articles.

Evaluating Claims: Facebook Edition (@ Community of Online Research Assignments)

This lesson has students use credible sources to support or debunk a claim (a pseudoscientific claim or conspiracy theory).  Students are required to explain why the sources they are using are credible.  Details are available at the CORA site.

Evaluating News Sites: Credible or Clickbait? (@ Community of Online Research Assignments)

Students compare two articles, one from Reuters and another from BiPartisan Review, reporting on the same event.  They compare how the content is presented in order to determine its credibility.  Details are available at the CORA site.

Criteria to Evaluate the Credibility of WWW Resources (George Mason University) This web page offers questions to ask when evaluating the credibility of online sources.  It also explains how to deconstruct a URL.

9 Questions to Help You Evaluate the Credibility of News Sources (Poynter) Krueger outlines questions to ask while evaluating news.  Poynter has additional resources about the code of ethics for professional journalists.

Pew Research Center - Journalism & Media See Media and News for regular reporting on news consumption in the United States as well as changes in the news industry.  The Modern News Consumer: News attitudes and practices in the digital era (July 2016) is an insightful report with topics like "Pathways to News" and "Loyalty and Source Attention".

Article Comparison

  • CBO report: 24 million fewer insured by 2026 under GOP health care bill (14 Mar. 2017) CNN
  • Trump administration disagrees with CBO report on health care (13 Mar. 2017) FOX
  • "Criteria to Evaluate the Credibiltiy of WWW Resources" (George Mason University) annotated PDF - Winek
  • Reader's Guide (New York Time) annotated PDF - Winek
  • 9 Questions Credibility - Think Like a Journalist (Poynter) annotated PDF - Winek
  • The Modern News Consumer report (Pew Research) annotated PDF - Winek
  • Critical Thinking and Argumentation - presentation Jeremy Mumford
  • Critical Thinking and Argumentation Presentation - PDF Mumford
  • News Context & News Evaluation Lesson presentation (PDF) Davidson Squibb

Red Feed, Blue Feed (Wall Street Journal)

This site visually displays news posts on selected topics (ISIS, immigration, etc.) from two opposite perspectives.  WSJ created these feeds based on a 2015 study by Facebook researchers.  To be displayed in these very conservatively aligned (red) and very liberally aligned (blue) feeds, posts must have been shared 100+ times by FB users, and the source must have more than 100,000 followers. *The purpose of this site is to show differing viewpoints side by side.  The feeds do not represent actual feeds; users identified as "very X aligned" could have a greater diversity of views in their personal feeds.  See the Methodology for details.

Trust Project (Santa Clara University)

“The Trust Project explores how journalism can stand out from the chaotic crowd and signal its trustworthiness.”  Part of the project has involved identifying indicators of trust for news by asking the public what they value about the news and working with editors from news organizations.  Those involved in the project would like to use technology to help readers and news distribution platforms (e.g., social media) identify accurate, quality, and ethical news. See the Trust Project Summit Report (PDF), pg. 38, for user-interface prototypes with trust indicators.  The indicators can be useful criteria for news evaluation when working with students.  The Trust Project site also includes information on journalism ethics.

Climate Feedback (UC Merced, Center for Climate Communications)

More than 200 scientists from around the world are annotating news articles on climate change and rating them for accuracy and credibility at climatefeedback.org. The goal is to help readers identify trustworthy sources of information, promote critical thinking, and challenge misinformation.  You can view overall ratings and summaries or view scientists' annotations in context.  Annotations are made using hypothes.is.

Media Bias Fact Check (MBFC News)   https://mediabiasfactcheck.com/

This independent online media outlet classifies news sources into five categories: least biased, left-center bias, right-center bias, left bias, and right bias.  At Search Sources, type in a news publication, e.g. CNN, to learn more about its reporting and bias.  MBFC explains how it determines bias in its Methodology section.

AllSides   http://www.allsides.com/

This is an interactive news and educational site with a bias rating system intended to help news consumers see and understand different perspectives.  Under News, view a current news topic with reporting from center-, left-, and right-leaning news sources.  AllSides also classifies news sources into categories: center, lean left, lean right, left bias, and right bias. AllSides determines its bias ratings through blind surveys, third-party data, and community feedback.

Wikipedia   https://www.wikipedia.org/

Wikipedia provides background information on many news sources.  After finding a Wikipedia entry about a specific source, e.g. The Washington Post, look for sections on the history, political stance, or controversies associated with the news source.  It is also helpful to look at the linked references cited at the bottom of the entry.

“About” the Source

A news source’s own site or publication often offers an About section.  Visit this section for information about the publication and the perspective it brings.  Some news sources may be forthcoming about their focus and intent.

Otero, Vanessa. “The Reasoning and Methodology Behind the Chart.” All Generalizations Are False, 19 Dec. 2016, http://www.allgeneralizationsarefalse.com/?p=65.  See Otero's website for her News Quality chart graphic and blank versions of the chart for instructional purposes.  See more about the news source spectrum on this guide.


How to Fact Check Fake News Sites (Channel 4 News)

This 2-minute video shares practical ways to check that a news story is not fake.  The three tips revolve around checking the source and its writers, checking quotes for original attribution, and ensuring images are associated with a story through a reverse image search.

FactCheck.org (A Project of The Annenberg Public Policy Center)

In "How to Spot Fake News" Kiely and Robertson give advice for evaluating for spotting fake news and give examples that could be used in a classroom for discussion and analysis.

Fake News Resource Guide (Indiana University East Campus Library)

This library guide provides information for students on the topic of fake news.  Instructors may find value in this guide as the Resources tab includes articles about “fake news in the news”, known fake sites, and fact checking links. The Check Your Own Claims! tab offers claims for students to examine.

The News Literacy Project (NLP)

A nonpartisan nonprofit working with educators and journalists to provide middle and high school “students with the essential skills they need to become smart, active consumers of news and information and engaged, informed citizens.”  See Checkology, a virtual classroom experience designed to build students' news literacy. Checkology offers basic and premium access.

NLP and Facing History and Ourselves co-created a unit on the role of journalism in a democratic society called “Facing Ferguson: News Literacy in a Digital Age.”  The project includes learning goals and 11 lessons, each with materials and activities.

Making Sense of the News: News Literacy Lessons for Digital Citizens

A six-week online course on distinguishing fake news from reliable information, offered through Coursera starting May 2017.  Some content can be audited, but there is a fee to complete the course.  Created by the University of Hong Kong and The State University of New York (Stony Brook University School of Journalism).

News Literacy Curriculum for Educators (American Press Institute)

Includes materials and lesson plans for educators, though the focus is on middle and high school students.

Evaluating News Sources (EMU Library)

For faculty.

Activities & Lesson Plans

Textbooks and course outlines, additional readings.

The resources below supplement the information and links on the Evaluating News and News Overview pages.


  • Bias in Your Search Results Workshop with the goal of helping students recognize that search tools, including Google, YouTube, and library systems, reflect power structures and have biases.
  • Introduction to Fact Checking Class session outline with activities for first year writing students in STEM majors.
  • Media Observer assignment An assignment in which students observe and reflect on their use of media over a 3-day period.
  • Researching a Controversy Activity on thinking critically about sources and information found in Wikipedia.
  • Science in the Media Assignment Assignment description with rubric. Students give small group presentations examining how the popular media reports scientific findings.
  • Source Evaluation via SIFT Technique Activity that teaches lateral reading techniques to evaluate information sources.
  • Check, Please! Starter Course Five lessons on fact and source checking that can be copied and adapted. Based on the SIFT approach: 1. Stop 2. Investigate the source 3. Find trusted coverage 4. Trace claims
  • Fake News, Lies and Propaganda: The Class Course materials for a 7-week course, taught at the University of Michigan, including assignments and an Open Canvas version of the course, available for re-using.
  • Stony Brook Model of News Literacy Model course syllabus and lessons "designed to teach students how to use critical thinking skills to judge the reliability of news and information they need to be productive citizens".
  • Web Literacy for Student Fact-Checkers Open source textbook that explores approaches to evaluating the information in social media streams. By Mike Caulfield, director of blended and networked learning at Washington State University Vancouver.
  • How students engage with news: Five takeaways for educators, journalists, and librarians October, 2018. The Project Information Literacy News Study research report presents findings about how a sample of U.S. college students gather information and engage with news in the digital age.
  • Improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course Research article. Brodsky et al., Cognitive Research: Principles and Implications, 2021. Students who received a fact-checking curriculum improved their ability to evaluate online information. Instructional materials used in the study are available.
  • Information Literacy in the Age of Algorithms January, 2020. Project Information Literacy study on how college students understand the modern information landscape.
  • Lateral Reading: Reading Less and Learning More When Evaluating Digital Information Stanford History Education Group, 2017. The full report on research assessing the information evaluation practices of students, faculty, and professional fact checkers.
  • Reflective Judgement Model - King & Kitchener King's website summarizes the theory, developed with Kitchener, that describes “the development of epistemic assumptions and how young adults and adults learn to make truly reflective judgments.” The page dedicated to educational implications is particularly useful as a reference for developing classroom activities and assignments. For additional discussion see: Love, P. G., & Guthrie, V. L. (1999). King and Kitchener's reflective judgment model. New Directions for Student Services, 1999(88), 41-51.
  • Stanford researchers find students have trouble judging the credibility of information online Research found that students from middle school through college have difficulty determining when online information is trustworthy. November 22, 2016.



College & Research Libraries (C&RL) is the official, bi-monthly, online-only scholarly research journal of the Association of College & Research Libraries, a division of the American Library Association.


Piotr S. Bobkowski is Associate Professor at the University of Kansas; email: [email protected] .

Karna Younger is Open Pedagogy Librarian and Assistant Librarian at University of Kansas Libraries; email: [email protected] .


News Credibility: Adapting and Testing a Source Evaluation Assessment in Journalism

Piotr S. Bobkowski and Karna Younger *

This paper discusses the development of a source evaluation assessment, and presents the results of using this instrument in a one-semester information literacy course for journalism students. The assessment was developed using the threshold concept perspective, the “authority is constructed and contextual” frame, and an established source evaluation rubric. As formative assessment, the instrument showed that students’ source evaluations lacked evidence and included ritualized language. As summative assessment, it showed that students used a greater range of indicators of authority than they used initially, and used evidence more frequently. The assessment can measure students’ source evaluations across the disciplines.

Introduction

Source evaluation is a fundamental information literacy skill for all undergraduate students and is indispensable for aspiring journalists and other communication professionals. A journalist’s credibility and livelihood depend on their ability to locate, evaluate, verify, and accurately report credible sources, 1 as illustrated by the fates of disgraced journalists like Jayson Blair, Stephen Glass, and Brian Williams, who fabricated or used inappropriate sources. 2 Accreditation requirements for departments and schools of journalism include demonstrating that graduates can “evaluate information by methods appropriate to the communications professions in which they work” and “critically evaluate their own work and that of others for accuracy and fairness.” 3 According to a survey of journalism faculty, most journalism students need greater proficiency in evaluating and selecting quality information sources. 4

Although the literature contains a number of published information literacy assessments, 5 journalistic writing uses unique sources and treats sources differently from many other academic disciplines, justifying the need for a specialized source evaluation assessment. Journalism students learn to use not only scholarly research as sources but also news reports, official statements, public records, and interview subjects, among others. Unlike traditional academic standards, journalists generally attribute their sources directly inside their articles, either by name or unnamed, and do not produce works cited or reference lists. Correspondingly, an Association of College and Research Libraries (ACRL) working group mapped ACRL’s Information Literacy Competency Standards (Standards) to undergraduate journalism education in 2011. 6 This document includes the learning outcomes that journalism students and professionals should achieve to be able to evaluate the credibility of their sources. There has been little subsequent published work at the intersection of information literacy and journalism education, particularly since the revision of ACRL’s Standards to the Framework for Information Literacy for Higher Education (Framework). 7 Likewise, there is, as yet, no published Framework-based source evaluation assessment that fits journalism education. Thus, despite a history of discipline-specific information literacy recommendations, collaborating librarians and journalism instructors do not have standardized and reliable assessment tools, such as rubrics, for assessing their students’ source evaluations under the Framework. In her assessment of high school students, for example, Sarah McGrew cautioned that evaluation checklists mislead students into using superficial features of websites, such as spelling and grammar, to judge the credibility of information. 8 McGrew’s related rubric, however, was not based on the Framework. 9 The need to fully integrate information literacy into a learner-centered journalism course motivated this article’s authors to develop the Framework-based source evaluation assessment presented here.

At the University of Kansas, the ability to evaluate and determine source credibility is a central learning outcome in a one-semester course titled Infomania: Information Management, which is required of all students majoring or minoring in journalism and mass communication. This is the second course that students take in the journalism sequence, following a large introductory survey course, and before or concurrently with taking a media writing course. The source credibility skills that students are expected to develop in this course are meant to prepare them to identify and use credible sources accurately in their writing. The course has been delivered in 30-student sections by four or five independent instructors each semester. Most instructors collaborated with the university’s librarians to deliver some of the course content. Prior to 2017, these collaborations were limited to Standards-based one-shot instructional sessions focused on using the library catalog or specialized databases accessible through the library website. When conducting instructional sessions in subsequent courses, however, the librarian observed inconsistencies in students’ abilities to identify indicators of credibility in information sources and to argue how these indicators contribute to or diminish the credibility of sources. The librarian and the lead instructor of the Infomania course—this article’s authors—thus determined to integrate information literacy instruction more uniformly in the Infomania course. The source evaluation assessment discussed here stands at the core of the resulting multisemester course redesign. The redesign eventually encompassed the development of an OER textbook, common assignments across all sections, and a shift in how the course and information literacy instruction are delivered. This article discusses the process used to develop the source evaluation assessment, as well as the initial results generated from its implementation.

The article’s literature review and research framework sections discuss research that predicated the development of the assessment and present the assessment’s conceptual framework. In short, the threshold concepts perspective shaped the authors’ understanding of the assessment’s role, and ACRL’s information literacy frame “authority is constructed and contextual” best described the source evaluation skills and thinking to be assessed. 10 Aligning this frame with the journalistic concept of credibility, two learning outcomes were developed: 1) students can identify indicators of credibility in an information source; and 2) students can argue how these indicators contribute to or diminish the credibility of the source. Research questions articulated at the conclusion of the research framework section guided the deployment of the assessment as a formative and summative assessment tool 11 and the analysis of the assessment’s scores.

The methods section details how the article’s authors adapted Erin Daniels’ assessment and rubric 12 to the parameters and needs of the Infomania course, as well as the first instance they used it to measure students’ source evaluations. The assessment, which asks students to evaluate a news source as they will in future professional settings, 13 generates scores on two dimensions, which align with the two learning outcomes. The assessment evaluates students on their ability to justify their source evaluations and prioritizes reasoning over “correctness,” allowing instructors to rate the degree of student understanding of credibility. 14

The article’s results section presents the findings of the assessment’s initial deployment. As formative assessment, the results quantify the characteristics of students’ source evaluations at the beginning of the information literacy course. As summative assessment, end-of-semester results show both students’ progress and lack of progress over the duration of the course and thus quantify the effectiveness of source evaluation instruction in the course. In the article’s discussion, the authors reflect on these results and report how they informed modifications to information literacy instruction in the course. Despite its initial application in a journalism course, this assessment can be adapted across the disciplines to measure and track students’ source evaluation efficacy.

Literature Review

The path toward developing a source evaluation assessment began with a review of published research on college students’ source evaluation skills. University and college students’ shortcomings in evaluating the information they encounter are well documented. Alison Head and Michael Eisenberg of Project Information Literacy found that students struggle to evaluate the credibility of information, which they typically find using strategies seemingly “learned by rote” instead of through innovation, experimentation, and developmental approaches to seeking and evaluating information. 15 Subsequent studies have shown that college students acknowledge the need to evaluate the credibility of the sources they use, 16 but a majority evaluate sources against memorized norms like timeliness, author expertise, and a website’s top-level domain (such as .org, .gov, .com) or rely on the advice of instructors or friends with trusted expertise. 17 Students generally do not navigate the internet effectively or efficiently to assess information credibility 18 and are unable to determine the authorship of digital sources, assess the expertise of authors, and establish the objectivity of information. 19 Students also admit to relying on superficial evaluation cues such as the graphic design of a digital source, their familiarity with the source, or its inclusion in a research database. 20 To complicate matters further, the existence of fake news and the speed of the news cycle have negatively affected students’ ability to evaluate news credibility. 21 Although students tend to be satisfied with their evaluation skills, 22 in practice, many foreclose the source evaluation process in favor of easily available information. 23

While information literacy research studies are calibrated to detect deficiencies in students’ source evaluations, these deficiencies can be obscured from disciplinary instructors because, ostensibly, students know the language of source evaluation. 24 The following quote from one Infomania student’s source evaluation illustrates that, while a student may know to focus on the author of an information source and to seek evidence outside the source in question to determine the author’s authority, the student may lack the context and reasoning skills to fully evaluate this authority. This student wrote:

My conclusion about the article is that it is not credible because the author is not credible. She made valid points[;] however, she is not a journalist. She has a background in technology and not in writing. I looked her up online and she did not seem like a credible source. She has done research for three months[;] therefore[,] she is not an expert in this area.

An instructor may score this student well on a source evaluation because this student appears to know some source evaluation criteria. Reflecting prior research, 25 however, the student applies these criteria superficially and incompletely in the evaluation. Specifically, the student knows to research the author but fails to critically argue why the author’s lack of journalism experience and background in technology negate her ability to write about the topic at hand. In other words, it appears that the student is well practiced in deploying buzz words such as “credible” and “expert” but does not yet fully understand how to critically and accurately apply these words in a source evaluation.

Because students such as this one use source evaluation language to mask their difficulties navigating and evaluating information, source evaluation skills can be classified as troublesome knowledge, and, more specifically, as ritual and conceptually difficult knowledge. 26 Ritual knowledge is part of a social routine and is “rather meaningless,” while conceptually difficult knowledge results from a “mix of misunderstandings and ritual knowledge.” 27 An effective source evaluation assessment can expose superficial and ritually foreclosed evaluations, identifying where in the evaluation process students are succeeding and falling short. This article’s authors used the threshold concept perspective as the organizing principle for developing such an assessment.

Research Framework

The threshold concepts perspective 28 suggests that students fall back on ritualized language when completing source evaluation tasks because they have not crossed a key threshold that informs source evaluations. The threshold concept theory describes the moment a learner is transformed by a “shift in perception” that awakens them to a new way of thinking about a particular concept or even about an entire discipline. 29 Having successfully crossed a conceptual threshold, students cannot unlearn their new knowledge but integrate it into and develop a deeper understanding of interrelated concepts. 30

A threshold concept is not necessarily bound to a discipline and leads to new conceptual frontiers. 31 Subject experts in several fields have identified and used threshold concepts to improve instruction and student learning. 32 In electrical engineering, for instance, instructors correlated transparent instruction of threshold concepts with students’ improved comprehension and lower attrition. 33 A business instructor showed that “power” as a threshold concept helped students better understand how political institutions and actors influence business knowledge, attitudes, and skills. 34 A journalism instructor used a threshold concepts approach to increase students’ data confidence, quantitative literacy, and data journalism skills. 35

In information and library science, ACRL based its six frames on threshold concepts and linked source evaluation with the “authority is constructed and contextual” frame. 36 ACRL defines authority as “a type of influence recognized or exerted within a community.” 37 The rationale supporting the notion of constructed authority is that “various communities,” as well as individual learners and their needs, will have different standards for what constitutes a trusted source. 38 This means that, in the process of determining the authority of a source, learners must “use research tools and indicators of authority to determine the credibility of sources” and understand “the elements that might temper this credibility,” as ACRL detailed in an example of a knowledge practice relative to this frame. 39 The concept of authority is not bound to the field of information science; 40 in journalism, it is analogous with the concept of credibility. 41 Communications librarian Margy MacMillan, for example, argued that “authority is contextual and constructed” is the information literacy concept with which journalism students contend when they learn to fact-check information by consulting multiple sources. 42

Approaching and crossing a threshold is not easy, however, and the “authority is constructed and contextual” concept can be troublesome for novices. While progressing toward a threshold, students can engage in mimicry instead of authentically embracing the threshold concept. 43 While experts may detect authority by critiquing a source’s expertise and experience in light of the “societal structures of power” of time and place, 44 novices often lack such a nuanced understanding of authority. Instead, they may rely on “basic indicators of authority, such as type of publication or author credentials.” 45 Indeed, MacMillan acknowledged that the journalism students in her study of student source evaluation skills may have relied on some “performativity” that prevented a precise and “objective measure” of students’ abilities to evaluate sources. 46

In sum, identifying a threshold concept that underlies source evaluation skills can facilitate the development of an assessment of these skills that detects students’ masking and mimicking language. The “authority is constructed and contextual” threshold concept was used as a foundation for an effective source evaluation assessment.

Threshold Concepts and Assessment

Following the introduction of ACRL’s Framework, library instruction and assessment expert Megan Oakleaf urged librarians and instructors to tackle information literacy frames with measurable learning outcomes, authentic assessment activities, and new or adapted rubrics. 47 Following Oakleaf’s advice, this article’s authors reasoned that the frame “authority is constructed and contextual” suggests that evaluating a source entails understanding what constitutes authority or credibility within a discipline and accepting or challenging the constructed and contextual nature of this authority or credibility. 48 The authors coupled Oakleaf’s guidance with extant research, particularly Lea Currie and colleagues’ call for course-integrated instruction to provide students with a sense of credibility criteria and a deeper understanding and context for evaluating information. 49 This led to the formulation of the following two learning outcomes for credibility evaluation: 1) students can identify indicators of credibility in an information source; and 2) students can argue how these indicators contribute to or diminish the credibility of the source. In terms of assessment format, these learning objectives dictated using an open-ended assessment in which students demonstrate their reasoning rather than an adherence to a set of rules. 50

To develop an assessment that matched these learning objectives and assessment characteristics, the authors reviewed several published information literacy assessment strategies and rubrics. 51 A number of these tools proved unsuitable for this project because they are based on out-of-practice Standards, assess broadly the entire suite of information literacy outcomes, and would not facilitate the type of open-ended assessments that the learning objectives of the Infomania course necessitate. 52 Other published rubrics are focused narrowly on specific information literacy elements like search or citation, not source evaluation. 53 Several rubrics that do focus on source evaluation, meanwhile, evaluate the sources that students cite in their research papers or portfolios but do not use open-ended prompts to probe students’ arguments for selecting these sources. 54

Erin Daniels’ assessment stands out among the reviewed assessments for being narrowly tailored to source evaluations and for its open-ended nature, which facilitates assessing students’ reasoning. 55 This tool also aligns with the two learning outcomes of the Infomania course. Daniels’ assessment expects students to identify one or more credibility cues in an information source. In the language of this assessment, a credibility cue is any element of an information source (such as author, publisher, tone, or sources cited) that points to a source’s credibility. After identifying a cue, a student is expected to collect and present evidence about whether or not the cue contributes to a source’s credibility. A student’s response about an information source is assessed based on how well the student uses credibility cues and associated evidence to articulate an argument about the overall credibility of the information source.

Research Questions

Having identified an open-ended assessment focused on source evaluation, the authors proceeded to adapt it to the parameters of the Infomania course. In addition to aligning with the course learning outcomes, the authors aimed for the assessment to identify students’ abilities and difficulties with source evaluation at a single time point (in other words, at the beginning of a semester), and over a period between two time points (that is to say, over the course of a semester). The authors thus used the assessment as an instrument of formative and summative assessment. 56 The summative assessment would yield information about the effectiveness of the course to advance students’ source evaluation knowledge and skills. The authors thus identified the following research questions:

RQ1: Early in the semester, (a) how well do students identify indicators of credibility in an information source, and (b) what indicators of credibility do they identify?

RQ2: Early in the semester, how well do students argue about the credibility of an information source?

RQ3: Late in the semester, compared to early in the semester, (a) how well do students identify indicators of credibility in an information source, and (b) what indicators of credibility do they identify?

RQ4: Late in the semester, compared to early in the semester, how well do students argue about the credibility of an information source?

Adapting the Assessment

Daniels’ original assessment consists of evaluating students’ annotated bibliographies in which students are expected to judge the credibility of each source they list. Each annotation receives a score on a seven-point rubric (see table 1). 57 This article’s authors implemented four modifications to the original assessment to address differences between its original context and how it would be used in the Infomania course. The first modification is discipline-specific. Recall that journalism students typically do not produce bibliographies but instead write news articles, broadcast scripts, or news releases that identify sources in text only. Instead of asking journalism students to compile annotated bibliographies, the revised assessment uses the discipline-appropriate strategy of asking students to determine the credibility of an article as a news source. 58

The second modification reflects the authors’ desire to compare assessment scores both at a single time point and between time points (that is, beginning and end of the semester). In the original assessment, student scores are not comparable because each student’s annotated bibliography features a different number of entries and a corresponding different number of scores. 59 This is because Daniels’ original assessment functions “as a feedback mechanism to students rather than as a firm grading system.” 60 To generate comparable scores, the revised assessment asks all students to evaluate the same article, which is presented in the assessment prompt. Instead of evaluating a variable number of bibliography sources, as called for in the original assessment, students in the revised assessment evaluate only one article.

The next modification was motivated by the need for different raters to score students’ work with consistency. If the assessment was to be used in the Infomania course over successive semesters, it needed to be replicable by the instructors assigned to the course. The rating criteria were simplified to help each independent rater apply the scoring criteria the same way (that is, to increase the criteria’s reliability). 61 The original scoring scheme was first divided into two dimensions: breadth and depth. Scoring a student’s response in the revised assessment proceeds as follows (see figure 1). A rater first identifies if a student’s response contains any credibility cues. Recall that a credibility cue is any element of an information source that indicates whether or not the information source is credible (such as publisher, author, date, or sources). The rater assigns a breadth score, representing the number of credibility cues in the evaluation (range: 0 to n, where n is the number of credibility cues identified in the response). If the evaluation does not identify any credibility cues, the breadth score is 0, and the depth dimension is not scored.

If the evaluation does contain one or more credibility cues, the evaluation receives a score on the depth dimension for each identified cue. Depth is scored using a three-point scale, which was derived from the original seven-point scale (see table 1), using a survey design best practice of asking about only one concept per question. 62 The depth score criteria are as follows:

  • 1 means that the cue is identified in the evaluation, but that there is no evaluation argument associated with it (this corresponds to 4 in the original assessment)
  • 2 means that a cue is used to articulate an evaluation argument, but no evidence is provided to support this argument (this corresponds to 5 in the original assessment)
  • 3 means that a cue is used to articulate an evaluation argument, and evidence is presented that supports this argument (this corresponds to 6 in the original assessment)

For reliability and redundancy reasons, the revised rubric omits the original rubric’s last level. 63 See table 2 for examples of statements scored at each of the three levels of depth.

The last modification concerns the scores that each student’s evaluation receives. In the original assessment, each source in a student’s bibliography receives one score, regardless of how many credibility cues a student articulates for that source. This procedure potentially masks information when a student considers more than one indicator of authority for an information source. In the original assessment, students also receive as many scores as they have annotations in their bibliographies. In the revised assessment, students evaluate only one source, which is equivalent to one annotation in an annotated bibliography. For this one evaluation, however, a student receives two scores: a breadth score, which shows how many indicators of authority (that is to say, credibility cues) they consider in their evaluation; and a depth score, indicating how much evidence they use in their evaluation. Each student’s evaluation generally receives one breadth score and several depth scores. The depth scores can be averaged for analysis purposes.
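The breadth and depth dimensions lend themselves to a simple computation. The following minimal Python sketch models how one evaluation could be scored under this scheme; the function, data structure, and cue names are illustrative assumptions, not the authors' actual scoring instrument (which used the grid in figure 2).

```python
# Minimal sketch of the revised two-dimensional scoring scheme.
# The data structure and cue names are illustrative, not the
# authors' actual scoring materials.

def score_response(cue_depths):
    """Return (breadth, average depth) for one student evaluation.

    cue_depths maps each credibility cue a rater identified in the
    response (e.g., "author", "publisher") to a depth rating:
      1 = cue named, but no evaluation argument
      2 = argument articulated, but no supporting evidence
      3 = argument articulated and supported with evidence
    """
    breadth = len(cue_depths)           # number of cues identified
    if breadth == 0:
        return 0, None                  # depth is not scored
    return breadth, sum(cue_depths.values()) / breadth

# Example: a response citing four cues at varying depth.
breadth, avg_depth = score_response(
    {"author": 2, "publisher": 2, "sources": 1, "content": 3}
)
print(breadth, avg_depth)  # 4 2.0
```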

Having modified the original assessment to fit the needs and goals of the course, the authors used the assessment as an in-class activity at two time points during the same semester to address this project’s research questions. The initial assessment took place in the second week of the semester, before instruction on credibility evaluation began. The end-of-semester assessment took place during the last week of class. A total of 152 students, out of 164 enrolled (93%), completed the assessment at both time points. These students’ classifications ranged from sophomore to senior.

Procedure and Materials

The authors introduced the assessment to students in the course of a regular class meeting. Students completed the assessment for class credit, and were given the option to participate in the research study for extra credit. Research participation consisted of allowing researchers to access and evaluate the assessment assignment. These procedures were approved by the university’s human subjects protection program. All but two students in the course consented to participate in the study.

Using the Qualtrics online platform, students were presented with a news article and asked to evaluate its credibility. Students were provided with an online link to the article, and a paper copy of it. The prompt read as follows:

In the space below, write an evaluation of the article’s credibility as a news source. You may use any source at your fingertips to evaluate this article. Your evaluation should include: Your overall conclusion about the article’s credibility; A list of the article elements you used in examining its credibility; Evidence about these elements that explains how you arrived at your conclusion.

To prevent a familiarity effect on the end-of-semester assessment, students did not evaluate the same article at the two time points. To ensure that the beginning- and end-of-semester assessments consisted of similar conditions, two articles that were matched on the quality of their credibility cues were used. Both articles represented a genre of information that students likely come across in their social media feeds. Both articles were published by nonlegacy news sources (such as BuzzFeed, Refinery29), were recent to the date of each assessment, were written by individuals who were not staff writers at each publication, focused on timely topics (such as political echo chambers, Twitter verification process), used other news articles and social media as sources, cited these sources inconsistently, were written in a casual tone, and included both factual and opinion-based statements.

This article’s two authors trained together to apply the coding scheme using a set of responses from a previous class, in which the assessment was pilot-tested. The authors also developed a grid to score each student response (see figure 2). Each author then scored the same 20 percent of the responses, arriving at an acceptable level of intercoder reliability. Percent agreement between the two authors was 91 percent, meaning that each author scored about 9 out of every 10 response elements the same way. Because some of the agreements may have been due to chance, Cohen’s kappa, a metric that adjusts for such chance agreement, was also calculated. 64 This value was .82, which falls in the “almost perfect” category of interrater agreement. 65 Having established good reliability, each author then coded half of the remaining responses.
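For readers who want to replicate this kind of reliability check, the sketch below computes percent agreement and Cohen's kappa with scikit-learn; the two rating vectors are invented stand-ins, not the study's data.

```python
# Illustrative interrater reliability check on a doubly coded subset.
# The ratings below are invented stand-ins, not the study's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([1, 2, 2, 3, 1, 2, 3, 3, 2, 1])
rater_b = np.array([1, 2, 2, 3, 1, 2, 3, 2, 2, 1])

percent_agreement = np.mean(rater_a == rater_b)  # raw agreement
kappa = cohen_kappa_score(rater_a, rater_b)      # chance-corrected

print(f"agreement = {percent_agreement:.0%}, kappa = {kappa:.2f}")
```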

Results

The first two research questions informed formative assessment results early in the semester. RQ1a asked about how well students identified indicators of credibility in the article they were presented and asked to evaluate. Breadth scores (that is, the number of credibility cues that students identified, such as author or date) were used to address this question. On average, students evaluated 3.47 credibility cues in their early-semester responses. Overall, students’ responses featured between 1 and 6 credibility cues.

RQ1b asked about what indicators of credibility (that is, credibility cues) students identified in their evaluations. Figure 3 (light-colored bars) illustrates the percentages of students who identified each credibility cue. Early in the semester, most students identified as a credibility cue an article’s content (86%) or author (84%). Fewer, but still a majority, identified an article’s sources (66%) and publisher (57%). Just over a third of the students identified an article’s writing style (35%), and few identified its publication date (4%).

RQ2 asked about how well students argue about the credibility of an information source. The depth score, which is a measure of argument quality, was used to address this question. Students evaluated a majority (58%) of the cues they identified at level 2, which means that the students primarily relied on their personal opinions to support credibility arguments. They evaluated about a third (35%) of the cues at level 1, which means that they did not offer any evidence for their credibility arguments. Students evaluated only 7 percent of the cues at level 3, which means that they used little external evidence to support their credibility arguments. On average, students evaluated a credibility cue at a depth of 1.73.

The remaining research questions addressed summative assessment (that is, the differences in assessment scores between early and late in the semester). RQ3a concerned the difference in how well students identified indicators of credibility, indicated by how many credibility cues students identified early versus late in the semester. As figure 4 illustrates, on average, students identified 3.45 cues late in the semester, which was essentially equal to the number of cues they had identified early in the semester, which was 3.47 (see RQ1a). The range of the cues that students identified in their responses late in the semester was also the same as early in the semester: between 1 and 6 cues.

To evaluate statistically the summative assessment results, an independent-samples t-test with 95% confidence intervals was used. This test indicates whether there is a statistically significant difference between two averages. Each set of early- and late-semester scores was tested to determine if they were statistically different. There was no statistically significant difference on breadth—the number of cues that students identified—early and late in the semester, t(272) = .16, p = .88.
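As a rough illustration of this procedure, the sketch below runs an independent-samples t-test in SciPy on simulated breadth scores; the data are stand-ins (group sizes of 137 match the reported degrees of freedom, 137 + 137 - 2 = 272), and the confidence-interval call assumes SciPy 1.10 or later.

```python
# Sketch of an independent-samples t-test comparing early- and
# late-semester breadth scores. The scores are simulated stand-ins;
# group sizes of 137 match the reported df of 272.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
early = rng.normal(loc=3.47, scale=1.2, size=137)  # simulated
late = rng.normal(loc=3.45, scale=1.2, size=137)   # simulated

result = stats.ttest_ind(early, late)
ci = result.confidence_interval(confidence_level=0.95)  # SciPy >= 1.10
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2f}")
print(f"95% CI for difference in means: ({ci.low:.2f}, {ci.high:.2f})")
```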

RQ3b asked about differences in the categories of cues that students used between early- and late-semester evaluations. The dark-colored bars in figure 3 illustrate the number of cues in each category at the end of the semester. A majority of the students evaluated an article’s sources (84%), which represented a statistically significant increase of 18 percent from early in the semester, t(272) = 3.42, p = .001. A majority of the students also identified the article’s author (69%), but this represented a significant decrease of 15 percent from early in the semester, t(272) = 3.02, p = .003. There was only a 2 percent increase in the proportion of students who evaluated the article’s publisher (59%) late in the course. This difference was not statistically significant, t(272) = .37, p = .72.

Significantly fewer students evaluated an article’s content late in the semester (55%), a 31% decrease, t(272) = 6.04, p < .001. Finally, about the same percentages of students evaluated the article’s visuals (29%), style (25%), and date (25%) at the end of the semester. These values represented significant increases for visuals (15%), t(272) = 3.00, p = .003; and date (20%), t(272) = 4.99, p = .001. The frequency with which writing style was evaluated late in the semester was not significantly different from early in the semester, t(272) = 1.85, p = .07.

RQ4 asked about the difference in how well students argued about the credibility of the source, that is, the depth of students’ evaluations. While a plurality of the cues still were evaluated at level 2 late in the semester (43%), this was a decrease from early in the semester. Significantly more cues were evaluated at level 3 (34%), and significantly fewer cues were evaluated at level 1 (23%). As figure 5 illustrates, there was a significant increase of .40 in average evaluation depth, with the average cue being evaluated at a depth of 2.13 late in the semester, t(272) = 7.77, p < .001.

Discussion

When viewed as an instrument of formative assessment, assessment results showed students’ baseline knowledge. As summative assessment, the results documented students’ progress in source evaluation over the semester. Positioned between the two assessments, the Infomania course functions as an intervention aimed at developing students’ abilities to identify, research, and contextualize markers of credibility. The authors used the assessment results to gauge how efficacious the course was in meeting these learning objectives, and to identify opportunities for tailoring instruction in subsequent iterations of the course. The following sections discuss the key insights that emerged from the two administrations of the assessment.

Formative Assessment

At the beginning of the semester, most students were novices at determining source credibility because they failed to offer evidence-based evaluations. Some students also showed ritual knowledge (that is, rehearsed evaluation language that did not match the source under consideration). In their evaluations, most students identified at least one of these four cues as indicators of credibility: author, article’s argument, publisher, or an article’s sources.

A majority of students (more than 80%; see figure 3) referenced the author of the article they were evaluating, and thus earned a point on breadth. Students then scored either 1 on the depth of their evaluation for indicating the existence of an author, 2 for voicing their opinion of the author’s credibility, or 3 if they supported their evaluation with evidence from other sources. Only 14 students (9%), however, reached the third level. Instead, students typically either mentioned the existence of the author or offered their opinion of the author without providing supporting evidence. The following excerpt from a student’s evaluation of an article about echo chambers written by a fellow in BuzzFeed’s Open Lab for Journalism, Technology, and the Arts illustrates the latter: “The writer’s job title is ‘BuzzFeed Open Lab Fellow,’ I am not sure what that position is or what is required to have that title, therefore this also takes away credibility.” The author’s title was presented in the article’s byline and was mentioned in the text of the article. This student’s evaluation thus indicates that while the student read the article, they did not advance their evaluation beyond what appears to be a superficial opinion.
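
To make the depth scoring concrete, here is a small illustrative sketch; the function and argument names are hypothetical and not part of the authors’ instrument.

```python
# Hypothetical encoding of the depth rubric for a single cue (e.g., author):
# 1 = cue merely identified, 2 = unsupported opinion offered,
# 3 = evaluation supported with evidence from other sources.
def depth_score(identified: bool, opined: bool, evidenced: bool) -> int:
    if evidenced:
        return 3
    if opined:
        return 2
    return 1 if identified else 0  # 0 = cue not mentioned (no breadth point)

# The BuzzFeed Open Lab example above: author identified, opinion voiced,
# but no supporting evidence, so the cue scores a depth of 2.
print(depth_score(identified=True, opined=True, evidenced=False))  # 2
```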

A comparable majority of students scored a point on the breadth dimension for mentioning in their evaluations the argument presented in the article. Hardly any of these students, however, provided research as evidence of their credibility determinations. One student, for instance, wrote: “I agree with the author’s standpoint but I don’t believe it is a credible article.” This student, along with many peers, did not summarize or otherwise express their understanding of the author’s argument, that echo chambers limit a person’s diversity of information online, and thus scored 1 on the depth dimension.

The article that students evaluated at the beginning of the semester was published by BuzzFeed. BuzzFeed is a popular information source among undergraduates, 66 and perhaps for this reason students drew more on their own experiences with this website in their credibility calculations than they did for any other credibility cue. One student’s response exemplifies this practice:

While looking for credibility in anything I look where it came from, and who wrote it. In this case it comes from BuzzFeed, an online website with quizzes to tell what kind of cupcake you are, with the occasional reporting on big events happening around the world. I find it hard to separate fact from [opinion] on this site. I think that a majority of the content is biased journalism, instead of a trusted new [sic] source. This alone leads me to think it is not credible.

This student’s experience with BuzzFeed’s entertainment section colored their perception of BuzzFeed’s news section, highlighting an inability to disambiguate entertainment from news stories. Students rarely substantiated their claims of bias or trust by researching BuzzFeed’s editorial standards or publication processes. Instead, many students garnered a depth score of 2 for the publisher by stating the opinion that “anyone” could post to the website.

A majority of students (more than 60%) also scored a point on the breadth dimension by noting the existence of sources in the article. A mere 20 percent of these students, however, discussed validating or researching the sources that were either hyperlinked or mentioned in the article. Another 25 percent of the students who mentioned sources scored 2 on the depth dimension for offering their general opinions of citation practices or markers of credibility. The following excerpt represents a typical 2-point statement about sources: “The article used sources with facts and statistics and cited them correctly. Not only that but they cited it within the article using hyper links [sic] making it very easy for us to check the sources.” The student noticed the author’s use of statistics to discuss the 2016 U.S. presidential election results but apparently did not click on any of the hyperlinked sources to evaluate their credibility or relevancy to the article.

Students’ rare use of external evidence in their evaluations at the beginning of the semester reflected prior research findings. Studies indicate that undergraduates typically evaluate sources against established norms like timeliness, author expertise, and a website’s top-level domain 67 but that they fail to validate a source’s claims, authorship, and sources of information. 68 Students’ ability to do so may be even further complicated by undergraduates’ larger attitudes toward, and misunderstandings about, news. Recent Project Information Literacy research has found that embarrassed students may go with their “gut feeling” to determine the legitimacy of a news source when lacking proper source evaluation skills. 69 Such “gut feelings” may be clouded by an idealization of news as an “objective reporting of facts” or by disillusionment that news sources cannot be trusted or discerned from “fake news.” 70 Within this greater context, overly skeptical and underresearched student opinions may be understood as proof that students default to preconceptions of the news because they lack source evaluation skills.

In addition to the absence of external evidence, some evaluations also exhibited students’ use of ritual knowledge (that is, learned phrases that did not fit the work being evaluated). This tendency was most evident when students wrote about the sources of the article they were evaluating. Using ritual knowledge, some students held journalistic writing to the same source and citation standards as scholarly research. One student, for instance, wrote: “I look to see if any of the information in the article is backed up with any other sources or sightings. [sic] There [are] no footnotes or bibliography, which again leads me to believe in a lack of credibility.” Other students faulted the article for lacking specific source types that they, evidently, had been taught were components of evaluation checklists. The following excerpt illustrates this tendency:

I do not think this article is very credible. The author did not cite the information she uses for data which makes me wonder if it was made up. Citing is an element I used to examine its credibility. If there were citations from a scholarly journal or a gov. [sic] website I would think the data used is credible.

Responses such as this suggested that students were mimicking the use of basic indicators of authority, such as top-level domains, to determine credibility of sources and were unable to use the context of the source under consideration to formulate a more nuanced evaluation. 71

It is possible that the ritual knowledge students used in early-semester evaluations resulted from their prior reliance on information evaluation checklists, which are promoted in some high school and university information literacy programs and are easily findable online. 72 Such checklists, however, can fail to prepare students to properly evaluate sources or the news on the social web in these “post-truth” 73 times. Many of the students who either missed the target in their evaluations or failed to support their evaluations with evidence may have relied on such limited evaluation tools from their repository of ritualized knowledge to mime responses they believed to be appropriate. 74 It was evident that students needed to develop a more nuanced understanding of how to weigh, contextualize, and judge the credibility of a source. 75 That is to say, students needed to move beyond checklists and their personal opinions to develop a process for critically researching, evaluating, and contextualizing the credibility of sources. 76

Summative Assessment

The information literacy course for journalism students that served as the context for this assessment focused on finding and accessing information using a variety of source types (such as public records, news archives, business filings, scholarly literature), and on evaluating the credibility of this information. By the end of the semester, students were expected to show some improvement in their source evaluation skills. The summative assessment illustrated how much students learned during the semester and how effective course instruction was in advancing this learning.

The end-of-semester assessment revealed little change in the breadth of students’ evaluations (that is, the average number of credibility cues that students identified as indicators of credibility in the article they were evaluating). In both assessments, students averaged between three and four cues per evaluation. Students did identify a greater variety of credibility cues at the end of the semester, however, citing author, argument, and style less often but noting date, sources, and visuals more frequently. This suggests that, during the course of the semester, students expanded their repertoire of what constitutes an indicator of credibility in journalism.

The clearest difference between the two assessments was the increase in the depth of students’ evaluations. At the end of the semester, a greater proportion of cues received scores of 3, and a smaller proportion received scores of 2, than at the beginning of the semester. This means that students used more external evidence to support their evaluation arguments at the end of the semester than they did initially. Many students improved their reasoning in the evaluations, going beyond simply identifying cues or offering instinctive opinions about the cues.

The following evaluation exemplifies how some students validated their opinions through research in the end-of-semester assessment. In the assessment, students had been asked to evaluate a Refinery29 article about Twitter ceasing to verify accounts. The freelance reporter who authored the article largely based it on The Hollywood Reporter ’s coverage and referenced administrative messages from Twitter. One student began their argument with external evidence about the publication: “Refinery 29 is a relatively new entertainment company that began as a startup but is now worth more than $100 million.” Next, the student used the website Media Bias Fact Check to look up two sources cited in the article, which said that these sources typically were accurate, but that they had a liberal bias. The student then reflected on the timeliness of the article, which covered a news event that occurred the same day that the article was published. The student cited the exact times when an official tweet about the event was posted and when the article was published. The student concluded with a summation of the author’s experience and recent publication history drawn from LinkedIn.

Focusing on the article’s publication, sources, date, and author, this student researched and reasoned about the credibility of this article within the context of its creation. First, the student consulted and referenced external sources. While it was common for students to discuss bias or the alleged political leanings of a publication absent evidence, this student cited information from the website Media Bias Fact Check in evaluating the article’s sources. Likewise, the student used information from the article author’s LinkedIn account to evaluate the author’s expertise. Finally, the student considered the timeliness of the article by placing the article’s publication within the daily news cycle. In all, this student’s response demonstrates progress toward using external evidence in support of source evaluation arguments.

While students tended to provide more researched answers at the end of the semester than they did initially, they did not abandon unsubstantiated opinions altogether. This was particularly evident in students’ evaluations of the article’s writing style, a category that included biased writing. During both assessments, most writing style comments scored a 1 or 2 on depth, indicating that students only noted the existence of writing style or offered an unsupported opinion of it. Some students struggled with the concept of bias, using it to dismiss elements of an article that could have been better understood if researched. One student, for instance, wrote, “I think that this article is not credible because it is written with an opinion about twitter [sic]. Although they cite some of their sources, they still seem like they have a bias towards twitter [sic].” Given that the article under review was about Twitter, the author’s discussion of the company may not have implied biased reporting; and, without further evidence in the evaluation, it is impossible to know what the student perceived as biased. Such superficial responses, and the consistency in student scores on writing style between the early- and late-semester assessments, suggest that writing style and bias were not addressed adequately in the information literacy course.

In all, the summative assessment results suggest that, during the information literacy course, students advanced their ability to seek and articulate evidence in support of their source evaluations. While they did not rely on more credibility cues at the end of the semester than they did initially, students did appear to use a greater variety of these cues at the end of the semester. The course did not fully inoculate students against flawed reasoning and unsupported opinions, but it did appear to help many of them think more substantively about the credibility of a source.

Implications

Undergraduates’ struggle to successfully evaluate some of the cues as indicators of credibility (such as author, article’s argument, publisher, or article’s sources) seeded a revised information literacy instruction session for the course. To combat the historical problem of inconsistent library instruction across independent course sections, the authors mandated an information literacy instruction session in all sections to provide instructional consistency and to better address the source evaluation learning outcomes.

The session focused on teaching the “lateral reading” approach 77 to evaluate the overall credibility of a news article. In addition to concentrating on the frame “authority is constructed and contextual,” the session sparked conversation related to other ACRL information literacy frames. Using a New York Times article about Serena Williams’s loss at the 2009 US Open, students were prompted to examine such cues as the reporting expertise of the journalist, and her sources and argument, specifically focusing on the language used to describe Williams and her opponent. Researching these cues allowed students to experience “research as inquiry” and a “strategic exploration” that may have specific goals but also allows for serendipity in finding the best information. The students’ research processes involved watching replays of the match on YouTube, exploring a black feminist blog penned by academics, and skimming scholarly sources about the depiction of African American women, particularly Williams, in the media. Debating the difference between a scholarly blog and a journal article gave students the opportunity to better understand and question the creation processes behind the two formats, and to consider how creation processes and timelines factored into the scholarly and popular value of YouTube videos, news articles, scholarly blogs, and journal articles, depending on the information need at hand. Turning to the topic of “scholarship as conversation,” the class discussed how they could use the sources they had found to support their evaluation of the article and to challenge the authority of The New York Times, as well as the journalist and her argument. After successfully critiquing one of the most established newspapers in the country, students reported feeling empowered to evaluate a source’s credibility, despite their previous acceptance of the source’s authority. Student feedback indicated that the session equipped them with some of the skills and authority needed to enter a professional and scholarly conversation, which many undergraduates lack. 78

Future Considerations

The successful use of the assessment as a source of formative and summative data suggests future uses and informs instruction. It may be beneficial to use the assessment on an ongoing basis throughout the semester. Ongoing formative assessment would supply more frequent student feedback and better reveal the ebbs and flows of student understanding and misunderstanding. 79 Armed with this information, instructors could better scaffold the various credibility cues and evaluation methods such as “lateral reading” throughout the semester and beyond. 80 Doing so also may enable instructors to better locate students’ “stuck places” and provide responsive instruction to advance students beyond their “epistemological obstacles.” 81 Instructors, for example, could offer responsive instruction in how to properly evaluate writing style, especially as it pertains to bias, and the possible social factors that influence students’ distrust of news. 82

The assessment also can be used early in a curriculum to allow disciplinary and library instructors to scaffold instruction on specific information literacy skills throughout the remainder of the curriculum. The authors plan to use this assessment’s results to inform information literacy sessions in journalism courses that follow the information literacy course, such as media writing, research methods for strategic communications, and special topics. The assessment can be used in these subsequent courses to continually gauge student development. In addition, while the assessment discussed here focused on the ACRL frame “authority is constructed and contextual,” the redesigned information literacy session guided students through interrelated ACRL information literacy frames, suggesting that this assessment may be useful for determining student comprehension of information literacy concepts beyond “authority is constructed and contextual.”

A limitation of the assessment presented here is that it does not account explicitly for the accuracy of students’ evaluations. An evaluation’s accuracy is assumed to emerge in the process of researching and articulating the credibility of individual cues. The assessment, however, does not interrogate the completeness of the research that students conduct on each cue, and it does not include a score for the accuracy of an evaluation at the cue or overall source level. As some of the excerpts from student evaluations illustrate, evaluation accuracy is not guaranteed, even when students provide evidence for their credibility arguments. In the future, it may be necessary to expand the assessment to include dimensions of accuracy and research depth.

This paper discusses the process used to develop a source credibility assessment for a journalism information literacy course and reports the results from using this assessment as formative and summative assessment in the one-semester course. Despite being developed for a journalism course, the assessment has utility outside of this discipline. Rooted in the universal frame of “authority is constructed and contextual,” the assessment can be adapted to any setting in which students are expected to perform source evaluation by articulating what constitutes disciplinary authority and how well a source reflects this authority. While news articles were used as the stimuli for students’ source evaluations in the instance reported here, nonjournalism instructors can ask their students to evaluate materials commonly used as information sources in their disciplines. Erin Daniels’ rubric and the derivative assessment presented here involve a general process of identifying indicators of authority within a source—which are called credibility cues here—and evaluating whether each indicator contributes to or detracts from the overall credibility of the source. This general process should be transferable across disciplines such that its use can inform instructors and improve information literacy instruction beyond journalism education.

1. Matt Carlson and Bob Franklin, Journalists, Sources, and Credibility: New Perspectives (New York, NY: Routledge, 2011).

2. Jefferson Spurlock, “Why Journalists Lie: The Troublesome Times for Janet Cooke, Stephen Glass, Jayson Blair, and Brian Williams,” ETC: A Review of General Semantics 73, no. 1 (2016): 71–76.

3. Accrediting Council on Education in Journalism and Mass Communications, “ACEJMC Accrediting Standards,” section 3, last modified September 2013, http://acejmc.ku.edu/PROGRAM/STANDARDS.SHTML .

4. Annmarie B. Singh, “A Report on Faculty Perceptions of Students’ Information Literacy Competencies in Journalism and Mass Communication Programs: The ACEJMC Survey,” College & Research Libraries 66, no. 4 (2005): 294–311.

5. Katelyn Angell and Eamon Tewell, “Teaching and Un-Teaching Source Evaluation: Questioning Authority in Information Literacy Instruction,” Communications in Information Literacy 11, no. 1 (2017): 95–121; Erin Daniels, “Using a Targeted Rubric to Deepen Direct Assessment of College Students’ Abilities to Evaluate the Credibility of Sources,” College & Undergraduate Libraries 17, no. 1 (2010): 31–43, https://doi.org/10.1080/10691310903584767 ; Karen R. Diller and Sue F. Phelps, “Learning Outcomes, Portfolios, and Rubrics, Oh My! Authentic Assessment of an Information Literacy Program,” portal: Libraries and the Academy 8, no. 1 (2008): 75–89, https://doi.org/10.1353/pla.2008.0000 ; Jos van Helvoort, “A Scoring Rubric for Performance Assessment of Information Literacy in Dutch Higher Education,” Journal of Information Literacy 4, no. 1 (2010): 22–39, https://doi.org/10.11645/4.1.1256 ; Debra Hoffmann and Kristen LaBonte, “Meeting Information Literacy Outcomes: Partnering with Faculty to Create Effective Information Literacy Assessment,” Journal of Information Literacy 6, no. 2 (2012), 70–85, https://doi.org/10.11645/6.2.1615 ; Iris Jastram, Danya Leebaw, and Heather Tompkins, “Situating Information Literacy within the Curriculum: Using a Rubric to Shape a Program,” portal: Libraries and the Academy 14, no. 2 (2014): 165–86, https://doi.org/10.1353/pla.2014.0011 ; Lorrie A. Knight, “Using Rubrics to Assess Information Literacy,” Reference Services Review 34, no. 1 (2006): 43–55, https://doi.org/10.1108/00907320610640752 ; Davida Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” Journal of Academic Librarianship 33, no. 4 (2007): 462–78, https://doi.org/10.1016/j.acalib.2007.03.005 ; Lara Ursin, Elizabeth Blakesley Lindsay, and Corey M. Johnson, “Assessing Library Instruction in the Freshman Seminar: A Citation Analysis Study,” Reference Services Review 32, no. 3 (2004): 284–92, https://doi.org/10.1108/00907320410553696 ; Dorothy Anne Warner, “Programmatic Assessment of Information Literacy Skills Using Rubrics,” Journal on Excellence in College Teaching 20, no. 1 (2009): 149–65.

6. Association of College & Research Libraries [ACRL], “Information Literacy Competency Standards for Journalism Students and Professionals,” American Library Association , last modified October 2011, http://www.ala.org/acrl/sites/ala.org.acrl/files/content/standards/il_journalism.pdf .

7. Adam J. Kuban and Laura MacLeod Mulligan, “Screencasts and Standards: Connecting an Introductory Journalism Research Course with Information Literacy,” Communication Teacher 28, no. 3 (2014): 188–95, https://doi.org/10.1080/17404622.2014.911335 ; Margy Elizabeth MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice: A Long-Term Study of Journalism Students,” Journal of Information Literacy 8, no. 2 (2014): 3–12, https://doi.org/10.11645/8.2.1941 ; Carol Perruso Brown and Barbara Kingsley‐Wilson, “Assessing Organically: Turning an Assignment into an Assessment,” Reference Services Review 38, no. 4 (November 16, 2010): 536–56.

8. Sarah McGrew, “Learning to Evaluate: An Intervention in Civic Online Reasoning,” Computers & Education 145 (February 2020): 144–45, https://doi.org/10.1016/j.compedu.2019.103711 .

9. Sarah McGrew et al., “Can Students Evaluate Online Sources: Learning from Assessments of Civic Online Reasoning,” Theory & Research in Social Education 46, no. 2 (January 8, 2018): 165–93, https://doi.org/10.1080/00933104.2017.1416320 .

10. ACRL, Standards to the Framework for Information Literacy for Higher Education ; Amy R. Hofer, Lori Townsend, and Korey Brunetti, “Troublesome Concepts and Information Literacy: Investigating Threshold Concepts for IL Instruction,” portal: Libraries & The Academy 12, no. 4 (2012): 398–99, https://doi.org/10.1353/pla.2012.0039 ; Lori Townsend, Korey Brunetti, and Amy R. Hofer, “Threshold Concepts and Information Literacy,” portal: Libraries and the Academy 11, no. 3 (2011): 17–19, https://doi.org/10.1353/pla.2011.0030 ; Lori Townsend et al., “Identifying Threshold Concepts for Information Literacy: A Delphi Study,” Communications in Information Literacy 10, no. 1 (2016): 33–34, https://files.eric.ed.gov/fulltext/EJ1103398.pdf .

11. Wynne Harlen and Mary James, “Assessment and Learning: Differences and Relationships Between Formative and Summative Assessment,” Assessment in Education 4, no. 3 (1997): 365–79, https://doi.org/10.1080/0969594970040304 ; Mantz Yorke, “Formative Assessment in Higher Education: Moves Toward Theory and the Enhancement of Pedagogic Practice,” Higher Education 45 (2003): 477–501.

12. Daniels, “Using a Targeted Rubric,” 34–38.

13. Grant Wiggins and Jay McTighe, Understanding by Design , 2nd ed. (Alexandria, VA: Association for Supervision and Curriculum Development, 2005), 152–57.

14. Wiggins and McTighe, Understanding by Design , 183.

15. Alison J. Head and Michael B. Eisenberg, “Lessons Learned: How College Students Seek Information in the Digital Age” (Project Information Literacy Progress Report, University of Washington Information School, December 1, 2009): 32–35, http://www.projectinfolit.org/uploads/2/7/5/4/27541717/pil_fall2009_finalv_yr1_12_2009v2.pdf .

16. Lea Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” New Library World 111, no. 3/4 (2010): 113–24, https://doi.org/10.1108/03074801011027628 .

17. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 95–121; Alison J. Head and Michael B. Eisenberg, “Truth Be Told: How College Students Evaluate and Use Information in the Digital Age” (Project Information Literacy Progress Report, University of Washington Information School, November 1, 2010), http://www.projectinfolit.org/uploads/2/7/5/4/27541717/pil_fall2010_survey_fullreport1.pdf .

18. Sam Wineburg et al., “Evaluating Information: The Cornerstone of Civic Online Reasoning” (Stanford History Education Group, Graduate School of Education Open Archive, November 22, 2016), http://purl.stanford.edu/fv751yt5934 .

19. Arthur Taylor and Heather A. Dalal, “Information Literacy Standards and the World Wide Web: Results from a Student Survey on Evaluation of Internet Information Sources,” Information Research 19, no. 4 (2014).

20. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 104–07; Head and Eisenberg, “Truth Be Told,” 10.

21. Alison J. Head et al., “How Students Engage with News: Five Takeaways for Educators, Journalists, and Librarians” (Project Information Literacy Research Institute, October 16, 2018): 13–16, http://www.projectinfolit.org/uploads/2/7/5/4/27541717/newsreport.pdf .

22. J. Patrick Biddix, Chung Joo Chung, and Han Woo Park, “Convenience or Credibility? A Study of College Student Online Research Behaviors,” The Internet and Higher Education 14, no. 3 (July 2011): 175–82, https://doi.org/10.1016/j.iheduc.2011.01.003 .

23. Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 5; Jason Martin, “The Information Seeking Behavior of Undergraduate Education Majors: Does Library Instruction Play a Role?” Evidence Based Library and Information Practice 3, no. 4 (2008), https://journals.library.ualberta.ca/eblip/index.php/EBLIP/article/view/1838/3696 .

24. Head and Eisenberg, “Truth Be Told,” 10–12; Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 122–23.

25. Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 122–23.

26. David Perkins, “The Many Faces of Constructivism,” Educational Leadership 57, no. 3 (1999): 6.

27. Perkins, “The Many Faces of Constructivism,” 8–10.

28. Jan H.F. Meyer and Ray Land, “Threshold Concepts and Troublesome Knowledge 1: Linkages to Ways of Thinking and Practising within the Disciplines,” in Improving Student Learning: Ten Years On , ed. Chris Rust (Oxford, England: Centre for Staff & Learning Development, 2003): 1–16, https://www.dkit.ie/system/files/Threshold_Concepts__and_Troublesome_Knowledge_by_Professor_Ray_Land_0.pdf ; Jan H.F. Meyer and Ray Land, “Threshold Concepts and Troublesome Knowledge (2): Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” Higher Education 49, no. 3 (2005): 373–88, https://doi.org/10.1007/s10734-004-6779-5 .

29. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 1–5.

30. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 1–5.

31. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 6.

32. Glynis Cousins, “Threshold Concepts: Old Wine in New Bottles or a New Form of Transactional Curriculum Inquiry?” in Threshold Concepts within the Disciplines , eds. Ray Land, Jan H.F. Meyer, and Jan Smith (Rotterdam, The Netherlands: Sense Publishers, 2008); Mick Flanagan, “Threshold Concepts: Undergraduate Teaching, Postgraduate Training, Professional Development and School Education: A Short Introduction and a Bibliography,” last modified October 10, 2018, https://www.ee.ucl.ac.uk/~mflanaga/thresholds.html ; Threshold Concepts and Transformational Learning , eds. Jan H.F. Meyer, Ray Land, and Caroline Baillie (Rotterdam, The Netherlands: Sense Publishers, 2010); Threshold Concepts within the Disciplines , eds. Ray Land, Jan Meyer, and Jan Smith (Rotterdam, The Netherlands: Sense Publishers, 2008).

33. Ann Harlow et al., “‘Getting Stuck’ in Analogue Electronics: Threshold Concepts as an Explanatory Model,” European Journal of Engineering Education 36, no. 5 (2011): 435–47, https://doi.org/10.1080/03043797.2011.606500 .

34. Paul D. Williams, “What’s Politics Got to Do with It? ‘Power’ as a ‘Threshold’ Concept for Undergraduate Business Students,” Australian Journal of Adult Learning 54, no. 1 (2014): 8–29, http://files.eric.ed.gov/fulltext/EJ1031000.pdf .

35. Glen Fuller, “Enthusiasm for Making a Difference: Adapting Data Journalism Skills for Digital Campaigning,” Asia Pacific Media Educator 28, no. 1 (2018): 112–23, https://doi.org/10.1177/1326365X18768134 .

36. ACRL, Framework; Hofer, Townsend, and Brunetti, “Troublesome Concepts and Information Literacy,” 398–99; Townsend, Brunetti, and Hofer, “Threshold Concepts and Information Literacy,” 17–19; Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 33–34.

37. ACRL, Framework.

38. ACRL, Framework.

39. ACRL, Framework.

40. Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 34.

41. Alyssa Appleman and S. Shyam Sundar, “Measuring Message Credibility: Construction and Validation of an Exclusive Scale,” Journalism and Mass Communication Quarterly 93, no. 1 (2015): 59–79, https://doi.org/10.1177/1077699015606057 .

42. MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice,” 3–12.

43. Meyer and Land, “Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” 377–83.

44. Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 33.

45. ACRL, Framework .

46. MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice,” 18.

47. Megan Oakleaf, “A Roadmap for Assessing Student Learning Using the New Framework for Information Literacy for Higher Education ,” Journal of Academic Librarianship 40, no. 5 (2014): 510–14, https://doi.org/10.1016/j.acalib.2014.08.001 .

48. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 13; Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 34; Townsend, Brunetti, and Hofer, “Threshold Concepts and Information Literacy,” 18–19.

49. Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 122–23.

50. Oakleaf, “A Roadmap for Assessing Student Learning,” 513.

51. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 95–121; Daniels, “Using a Targeted Rubric,” 31–43; Diller and Phelps, “Learning Outcomes, Portfolios, and Rubrics, Oh My!” 75–89; van Helvoort, “A Scoring Rubric for Performance Assessment of Information Literacy,” 22–39; Hoffmann and LaBonte, “Meeting Information Literacy Outcomes,” 70–85; Jastram, Leebaw, and Tompkins, “Situating Information Literacy within the Curriculum,” 165–86; Knight, “Using Rubrics to Assess Information Literacy,” 43–55; Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” 462–78; Ursin, Lindsay, and Johnson, “Assessing Library Instruction in the Freshman Seminar,” 284–92; Warner, “Programmatic Assessment of Information Literacy Skills Using Rubrics,” 149–65.

52. ACRL, “Information Literacy Competency Standards for Higher Education,” American Library Association , last modified January 18, 2000, http://www.ala.org/Template.cfm?Section=Home&template=/ContentManagement/ContentDisplay.cfm&ContentID=33553 ; Knight, “Using Rubrics to Assess Information Literacy,” 47–48; Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” 473–75; Warner, “Programmatic Assessment of Information Literacy Skills Using Rubrics,” 151.

53. Katelyn Angell, “Using Quantitative Methods to Determine the Validity and Reliability of an Undergraduate Citation Rubric,” Qualitative and Quantitative Methods in Libraries 4 (2015): 755–65; Laura W. Gariepy, Jennifer A. Stout, and Megan L. Hodge, “Using Rubrics to Assess Learning in Course-Integrated Library Instruction,” portal: Libraries and the Academy 16, no. 3 (2016): 491–509, https://doi.org/10.1353/pla.2016.0043 .

54. van Helvoort, “A Scoring Rubric for Performance Assessment of Information Literacy,” 38–39; Jastram, Leebaw, and Tompkins, “Situating Information Literacy within the Curriculum,” 181–83.

55. Daniels, “Using a Targeted Rubric,” 34–38.

56. Harlen and James, “Assessment and Learning,” 370–75; Yorke, “Formative Assessment in Higher Education,” 478–80.

57. Daniels, “Using a Targeted Rubric,” 35.

58. MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice,” 8–14.

59. Daniels, “Using a Targeted Rubric,” 35–36.

60. Daniels, “Using a Targeted Rubric,” 36.

61. Megan Oakleaf, “Using Rubrics to Assess Information Literacy: An Examination of Methodology and Interrater Reliability,” Journal of the American Society for Information Science and Technology 60, no. 5 (2009): 969–83; Wiggins and McTighe, Understanding by Design , 188–89.

62. Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski, The Psychology of Survey Response (New York, NY: Cambridge University Press, 2000), 9–61.

63. Daniels, “Using a Targeted Rubric,” 38.

64. Oakleaf, “Using Rubrics to Assess Information Literacy,” 971–72.

65. Mary L. McHugh, “Interrater Reliability: The Kappa Statistic,” Biochemia Medica 22, no. 3 (2012): 276–82, PubMed PMID: 23092060; PubMed Central PMCID: PMC3900052.

66. Head et al., “How Students Engage with News,” 19.

67. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 104–07; Head and Eisenberg, “Truth Be Told,” 9–12; Head et al., “How Students Engage with News,” 24–28.

68. Wineburg et al., “Evaluating Information”; Sam Wineburg and Sarah McGrew, “Lateral Reading: Reading Less and Learning More When Evaluating Digital Information,” SSRN Scholarly Paper No. ID 3048994 (Rochester, NY: Social Science Research Network, 2017), https://papers.ssrn.com/abstract=3048994 .

69. Head et al., “How Students Engage with News,” 20–22, figure 7.

70. Head et al., “How Students Engage with News,” 13–15, figure 4.

71. ACRL, Framework ; Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 104–07; Head and Eisenberg, “Truth Be Told,” 9–18; Head et al., “How Students Engage with News,” figure 7.

72. Sarah Blakeslee, “The CRAAP Test,” LOEX Quarterly 31, no. 3 (2004): 6–7, https://commons.emich.edu/loexquarterly/vol31/iss3/4/ ; Mike Caulfield, “A Short History of CRAAP,” Hapgood , last modified September 15, 2018; Maddie Crum, “After Trump Was Elected, Librarians Had to Rethink Their System for Fact-Checking,” Huffington Post , March 9, 2017; Kevin Seeber, “Wiretaps and CRAAP,” Kevin Seeber / MLIS , last modified March 18, 2017; Wineburg and McGrew, “Lateral Reading,” 44–46; Head et al., “How Students Engage with News,” 24–28, 31–35.

73. Head et al., “How Students Engage with News,” quote, 24, 24–28, 31–35.

74. Perkins, “The Many Faces of Constructivism,” 8–9; Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 6–7; Caulfield, “A Short History of CRAAP.”

75. Wiggins and McTighe, Understanding by Design , 39–40, 340; Yu-Mei Wang and Marge Artero, “Caught in the Web: University Student Use of Web Resources,” Educational Media International 42, no. 1 (2005): 71–82, https://doi.org/10.1080/09523980500116670 ; Wineburg and McGrew, “Lateral Reading.”

76. Head et al., “How Students Engage with News,” 31–35; Alison King, “From Sage on the Stage to Guide on the Side,” College Teaching 41, no. 1 (1993): 30–35, https://www.jstor.org/stable/27558571 ; Wineburg and McGrew, “Lateral Reading,” 39–46.

77. Wineburg et al., “Evaluating Information”; Wineburg and McGrew, “Lateral Reading.”

78. Gloria J. Leckie, “Desperately Seeking Citations: Uncovering Faculty Assumptions about the Undergraduate Research,” Journal of Academic Librarianship 22, no. 3 (1996): 201–08, https://doi.org/10.1016/S0099-1333(96)90059-2 .

79. Wiggins and McTighe, Understanding by Design , 169.

80. Wineburg and McGrew, “Lateral Reading.”

81. Meyer and Land, “Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” 377.

82. Meyer and Land, “Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” 377–79; Head et al., “How Students Engage with News,” 13–15, figure 4.

* Piotr S. Bobkowski is Associate Professor at the University of Kansas; email: [email protected] . Karna Younger is Open Pedagogy Librarian and Assistant Librarian at University of Kansas Libraries; email: [email protected] . ©2020 Piotr S. Bobkowski and Karna Younger, Attribution-NonCommercial ( https://creativecommons.org/licenses/by-nc/4.0/ ) CC BY-NC.


Media Literacy Guide: Evaluating the News

  • Media Bias Charts
  • Evaluating the News
  • Reliable Journalism
  • Satire and Tabloids
  • Fact Finding
  • Social Media

What is this page for?

This page provides some checklists to use as quick tools to evaluate news articles. The SMART check is particularly helpful when evaluating news stories: determine if your news source is SMART before believing what is reported.

S      Source:

  • Where did the story come from? 
  • Is it a reputable news source?

M     Motive:

  • Why do they say so? Do they have a special interest or bias that may cause them to slant information?

A   Authority:

  • Who is the author of the story?
  • What are their credentials?

R   Review:

  • Go over the story carefully.
  • Does it make sense?

T   Two-source test:

  • Check for other sources about the story.
  • Does the two-source test confirm or contradict the story?

The SMART test comes from the University of Washington Libraries.

Understanding Bias in the News

  • Understanding Bias Self-Guided lesson on bias in the news from Checkology at the News Literacy Project
  • Testimony: “A Growing Threat: The Impact of Disinformation Targeted at Communities of Color,” testimony of Samuel Woolley, PhD, from the University of Texas at Austin

Organizations tracking media bias

Organizations monitoring media bias, grouped using descriptions from Wikipedia:

Non-partisan:

  • Center for Media and Public Affairs
  • Facts on File

Conservative:

  • Accuracy in Media
  • Media Research Center
  • NewsBusters

Progressive:

  • Center for Media and Democracy
  • Fairness and Accuracy in Reporting
  • Media Matters for America

The CRAAP Test

The CRAAP test is a checklist of questions to ask when evaluating any source. Not all the items need to be checked, but one or two from each category should help you decide the credibility of a source.

C Currency: the timeliness of information

  • When was the information published or posted?
  • Has the information been revised or updated?
  • Is the information current or out of date for your topic?
  • Are the links functional?

R Reliability: consistently verifiable and credible information

  • What kind of information is included in the resource?
  • Is the content of the resource primarily opinion? Is it balanced?
  • Does the creator provide references or sources for data or quotations?

A Authority: the source of the information

  • Who is the author/publisher/source/sponsor?
  • Are the author's credentials or organizational affiliations given?
  • What are the author's qualifications to write on the topic? What identifies them as an expert on that topic?
  • Is there contact information, such as a publisher or e-mail address?
  • Does the URL reveal anything about the author or source?

P Purpose: the reason the information exists   

  • Why was this source created? To sell, inform, persuade?
  • Who is the intended audience? Experts or newcomers? Students or educators?
  • Check language, content of page for bias. Are there any red flags?
  • If advertising is present on a page, is it separate or related to the informational content?
  • Look for “About the Author” or “About Us” links
  • Shorten the URL to find out about the hosting site
  • Search for author/organization in search engines like Google
  • Search for author/organization in periodical databases like EBSCO and ProQuest

SIFT method for evaluating information in a digital world

Evaluating News Reporting Assignment


News Evaluation Games

It can be difficult to determine what is trustworthy news and what is not. Try these games to see how you do:

  • Factitious Game - test your ability to discern real news from false
  • The Fakeout Game  -  Your social media feed has been infected by false information. Your job is to learn the skills of verification, so you can sort fact from fiction — in the game, and in real life.

What Makes a News Story Fake?

It can't be verified: A fake news article may or may not have links in it tracing its sources; if it does, these links may not lead to articles outside of the site's domain or may not contain information pertinent to the article topic.

Fake news appeals to emotion: Fake news plays on your feelings - it makes you angry or happy or scared. This is to ensure you won't do anything as pesky as fact-checking.

Authors usually aren't experts: Most authors aren't even journalists, but paid trolls.

It can't be found anywhere else: If you look up the main idea of a fake news article, you might not find any other news outlet (real or not) reporting on the issue.

Fake news comes from fake sites: Did your article come from abcnews.co? Or mercola.com? Realnewsrightnow.com? These and a host of other URLs are fake news sites.



Evaluating Science in the News

Illustration showing a mock science newspaper and science headlines appearing on a cell phone.

  • Explanations & Argumentation
  • Science & Society

Resource Type

  • Skill Builders

Description

In this activity, students evaluate a science news article to determine whether it is a trustworthy source of information.

Science news articles are a great way to learn about new ideas, discoveries, and research. However, it’s important to evaluate the authority and credibility of sources of information. In this activity, students practice their reading comprehension and source evaluation skills by answering a series of questions about a science news article. They then synthesize their answers to determine whether the article is trustworthy. This activity can be used with any print or online news articles.

Two versions of the “Student Handout” are available for this activity. The short handout focuses on evaluating a science news article, and the extended handout also has students respond to the ideas presented in the article. The additional “Criteria for Evaluating Sources” handout provides more questions for evaluating sources of information based on the CRAP (Currency, Reliability, Authority, and Purpose) test.

The “Resource Google Folder” link directs to a Google Drive folder of resource documents in the Google Docs format. Not all downloadable documents for the resource may be available in this format. The Google Drive folder is set as “View Only”; to save a copy of a document in this folder to your Google Drive, open that document, then select File → “Make a copy.” These documents can be copied, modified, and distributed online following the Terms of Use listed in the “Details” section below, including crediting BioInteractive.

Student Learning Targets

  • Evaluate the currency, reliability, authority, and purpose of a source of information.
  • Justify the reasoning used to determine whether a source of information is trustworthy.
  • (extended handout only) Identify the main idea and supporting details of a science news article.
  • (extended handout only) Respond to the ideas presented in a science news article.   

Estimated Time

Key Terms

authority, bias, CRAP test, currency, evidence, reliability, scientific literacy

Terms of Use

Please see the Terms of Use for information on how this resource can be used.

Curriculum Connections

NGSS (2013); AP Biology (2019); Common Core (2010): ELA.RST, ELA.WHST; Vision and Change (2009)

VO/SOT News Reporting Assignment

Students will practice their news gathering skills by reporting on a mock "news event" and then turn their raw footage into a VO/SOT.

Content Objectives:

  • Students will practice the news gathering process
  • Students will apply video composition techniques to a broadcast setting
  • Students will understand how to construct a VO and a SOT for a news broadcast
  • Students will learn how to work in an efficient manner to meet a hard deadline

Expectations:

  • Student reporters (and actors) will familiarize themselves with the press release for a mock news event (made up and written by the teacher) the night before class. They will prepare at least five questions to ask when they arrive on the scene (this will be a two-day activity; students will switch roles the next day with a different press release)
  • Students will arrive to class and immediately start preparing for their role. Actors will go out to the scene, reporters will begin getting all of their camera equipment set up and ready to go
  • Reporters will go out to the news event scene and “cover it” like a professional news reporter. They will get video footage for a news VO, and interview people on the scene to get a soundbite for their news SOT
  • Once each student has all of their raw footage, they will go back inside and edit their VO and SOT and have them submitted, with a script, before the show deadline (end of class)
  • VO must be at least 45 seconds long, SOT must be 15 seconds (with padding at the beginning and end)
  • VO should have a variety of different shot types, SOT should be an engaging soundbite that doesn’t parrot the VO script
  • All video should be framed properly, exposed, focused, and white balanced correctly
  • Script must be written in concise and active voice, matching what is seen in the VO. Students may use the press release to get accurate information for the script. SOT must be written verbatim into the script, and there must be a lower third with the person’s name spelled correctly

Modifications:

  • If school camera or editing equipment are unavailable for any reason, this project can be completed with a cell phone and phone editing application
  • This project can be done inside or outside of class

Necessary Knowledge:

  • Understand what a VO/SOT is and how it is used in a broadcast news context
  • Shot composition, camera technique/operation, editing techniques
  • Understand the news gathering process, preparing questions, and mentally brainstorming and visualizing before arriving on scene
  • Ability to focus and work efficiently to meet a hard deadline

Possible Materials:

  • School camera / cell phone
  • Editing software / video editing app

Evaluation:

  • Grading according to the attached rubric - this project will be evaluated for both technical knowledge and creative output

https://intro-to-film-tv-production.lsupathways.org/images/uploads/vosot-news-project-rubric.pdf


Basic newswriting: Learn how to originate, research and write breaking-news stories

Syllabus for a semester-long course on the fundamentals of covering and writing the news, including how to identify a story, gather information efficiently and place it in a meaningful context.



This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License .

by The Journalist's Resource, January 22, 2010


This course introduces tomorrow’s journalists to the fundamentals of covering and writing news. Mastering these skills is no simple task. In an Internet age of instantaneous access, demand for high-quality accounts of fast-breaking news has never been greater. Nor has the temptation to cut corners and deliver something less.

To resist this temptation, reporters must acquire skills to identify a story and its essential elements, gather information efficiently, place it in a meaningful context, and write concise and compelling accounts, sometimes at breathtaking speed. The readings, discussions, exercises and assignments of this course are designed to help students acquire such skills and understand how to exercise them wisely.

Photo: Memorial to four slain Lakewood, Wash., police officers. The Seattle Times earned the 2010 Pulitzer Prize for Breaking News Reporting for their coverage of the crime.

Course objective

To give students the background and skills needed to originate, research, focus and craft clear, compelling and contextual accounts of breaking news in a deadline environment.

Learning objectives

  • Build an understanding of the role news plays in American democracy.
  • Discuss basic journalistic principles such as accuracy, integrity and fairness.
  • Evaluate how practices such as rooting and stereotyping can undermine them.
  • Analyze what kinds of information make news and why.
  • Evaluate the elements of news by deconstructing award-winning stories.
  • Evaluate the sources and resources from which news content is drawn.
  • Analyze how information is attributed, quoted and paraphrased in news.
  • Gain competence in focusing a story’s dominant theme in a single sentence.
  • Introduce the structure, style and language of basic news writing.
  • Gain competence in building basic news stories, from lead through their close.
  • Gain confidence and competence in writing under deadline pressure.
  • Practice how to identify, background and contact appropriate sources.
  • Discuss and apply the skills needed to interview effectively.
  • Analyze data and how it is used and abused in news coverage.
  • Review basic math skills needed to evaluate and use statistics in news.
  • Report and write basic stories about news events on deadline.

Suggested reading

  • A standard textbook of the instructor’s choosing.
  • America’s Best Newspaper Writing , Roy Peter Clark and Christopher Scanlan, Bedford/St. Martin’s, 2006
  • The Elements of Journalism , Bill Kovach and Tom Rosenstiel, Three Rivers Press, 2001.
  • Talk Straight, Listen Carefully: The Art of Interviewing , M.L. Stein and Susan E. Paterno, Iowa State University Press, 2001
  • Math Tools for Journalists , Kathleen Woodruff Wickham, Marion Street Press, Inc., 2002
  • On Writing Well: 30th Anniversary Edition , William Zinsser, Collins, 2006
  • Associated Press Stylebook 2009 , Associated Press, Basic Books, 2009

Weekly schedule and exercises (13-week course)

We encourage faculty to assign students to read on their own Kovach and Rosenstiel’s The Elements of Journalism in its entirety during the early phase of the course. Only a few chapters of their book are explicitly assigned for the class sessions listed below.

The assumption for this syllabus is that the class meets twice weekly.

Week 1 | Week 2 | Week 3 | Week 4 | Week 5 | Week 6 | Week 7 | Week 8 | Week 9 | Week 10 | Week 11 | Week 12 | Weeks 13/14

Week 1: Why journalism matters


Class 1: The role of journalism in society

The word journalism elicits considerable confusion in contemporary American society. Citizens often confuse the role of reporting with that of advocacy. They mistake those who promote opinions or push their personal agendas on cable news or in the blogosphere for those who report. But reporters play a different role: that of gatherer of evidence, unbiased and unvarnished, placed in a context of past events that gives current events weight beyond the ways opinion leaders or propagandists might misinterpret or exploit them.

This session’s discussion will focus on the traditional role of journalism eloquently summarized by Bill Kovach and Tom Rosenstiel in The Elements of Journalism . The class will then examine whether they believe that the journalist’s role has changed or needs to change in today’s news environment. What is the reporter’s role in contemporary society? Is objectivity, sometimes called fairness, an antiquated concept or an essential one, as the authors argue, for maintaining a democratic society? How has the term been subverted? What are the reporter’s fundamental responsibilities? This discussion will touch on such fundamental issues as journalists’ obligation to the truth, their loyalty to the citizens who are their audience and the demands of their discipline to verify information, act independently, provide a forum for public discourse and seek not only competing viewpoints but carefully vetted facts that help establish which viewpoints are grounded in evidence.

Reading: Kovach and Rosenstiel, Chapter 1, and relevant pages of the course text

Assignments:

  • Students should compare the news reporting on a breaking political story in The Wall Street Journal , considered editorially conservative, and The New York Times , considered editorially liberal. They should write a two-page memo that considers the following questions: Do the stories emphasize the same information? Does either story appear to slant the news toward a particular perspective? How? Do the stories support the notion of fact-based journalism and unbiased reporting or do they appear to infuse opinion into news? Students should provide specific examples that support their conclusions.
  • Students should look for an example of reporting in any medium in which reporters appear to have compromised the notion of fairness to intentionally or inadvertently espouse a point of view. What impact did the incorporation of such material have on the story? Did its inclusion have any effect on the reader’s perception of the story?

Class 2: Objectivity, fairness and contemporary confusion about both

In his book Discovering the News, Michael Schudson traced the roots of objectivity to the era following World War I and a desire by journalists to guard against the rapid growth of public relations practitioners intent on spinning the news. Objectivity was, and remains, an ideal, a method for guarding against spin and personal bias by examining all sides of a story and testing claims through a process of evidentiary verification. Practiced well, it attempts to find where something approaching truth lies in a sea of conflicting views. Today, objectivity often is mistaken for tit-for-tat journalism, in which the reporter’s only responsibility is to give equal weight to the conflicting views of different parties without regard for which, if any, are saying something approximating truth. This definition cedes the journalist’s responsibility to seek and verify evidence that informs the citizenry.

Focusing on the “Journalism of Verification” chapter in The Elements of Journalism, this class will review the evolution and transformation of concepts of objectivity and fairness and, using the homework assignment, consider how objectivity is being practiced and sometimes skewed in the contemporary news media.

Reading: Kovach and Rosenstiel, Chapter 4, and relevant pages of the course text.

Assignment: Students should evaluate stories on the front page and metro front of their daily newspaper. In a two-page memo, they should describe what elements of news judgment made the stories worthy of significant coverage and play. Finally, they should analyze whether, based on what else is in the paper, they believe the editors reached the right decision.

Week 2: Where news comes from

Class 1: News judgment

When editors sit down together to choose the top stories, they use experience and intuition. The beginning journalist, however, can acquire a sense of news judgment by evaluating news decisions through the filter of a variety of factors that influence news play. These factors range from traditional measures, such as when the story took place and how close it was to the local readership area, to more contemporary ones, such as the story’s educational value.

Using the assignment and the reading, students should evaluate what kinds of information make for interesting news stories and why.

In this session, instructors might consider discussing the layers of news from the simplest breaking news event to the purely enterprise investigative story.

Assignment: Students should read and deconstruct coverage of a major news event. One excellent source for quality examples is the site of the Pulitzer Prizes, which has a category for breaking news reporting. All students should read the same article (assigned by the instructor), and write a two- or three-page memo that describes how the story is organized, what information it contains and what sources of information it uses, both human and digital. Among the questions they should ask are:

  • Does the first (or lead) paragraph summarize the dominant point?
  • What specific information does the lead include?
  • What does it leave out?
  • How do the second and third paragraphs relate to the first paragraph and the information it contains? Do they give unrelated information, information that provides further details about what’s established in the lead paragraph or both?
  • Does the story at any time place the news into a broader context of similar events or past events? If so, when and how?
  • What information in the story is attributed, specifically tied to an individual or to documentary information from which it was taken? What information is not attributed? Where does the attribution appear in the sentence? Give examples of some of the ways the sources of information are identified and of the verbs of attribution that are chosen.
  • Where and how often in the story are people quoted, their exact words placed in quotation marks? What kind of information tends to be quoted — basic facts or more colorful commentary? What attributed information is paraphrased, summing up what someone said but not in their exact words?
  • How is the story organized — by theme, by geography, by chronology (time) or by some other means?
  • What human sources are used in the story? Are some authorities? Are some experts? Are some ordinary people affected by the event? Who are some of the people in each category? What do they contribute to the story? Does the reporter (or reporters) rely on a single source or a wide range? Why do you think that’s the case?
  • What specific facts and details make the story more vivid to you? How do you think the reporter was able to gather those details?
  • What documents (paper or digital) are detailed in the story? Do they lend authority to the story? Why or why not?
  • Is any specific data (numbers, statistics) used in the story? What does it lend to the story? Would you be satisfied substituting words such as “many” or “few” for the specific numbers and statistics used? Why or why not?

Class 2: Deconstructing the story

By carefully deconstructing major news stories, students will begin to internalize some of the major principles of this course, from crafting and supporting the lead of a story to spreading a wide and authoritative net for information. This class will focus on the lessons of a Pulitzer Prize winner.

Reading: Clark/Scanlan, Pages 287-294

Assignment: Writers typically draft a focus statement after conceiving an idea and conducting preliminary research or reporting. This focus statement helps to set the direction of reporting and writing. Sometimes reporting dictates a change of direction. But the statement itself keeps the reporter from getting off course. Focus statements typically are 50 words or less and summarize the story’s central point. They work best when driven by a strong, active verb and written after preliminary reporting.

  • Students should write a focus statement that encapsulates the news of the Pulitzer Prize-winning reporting the class critiqued.

Week 3: Finding the focus, building the lead

Class 1: News writing as a process

Student reporters often conceive of writing as something that begins only after all their reporting is finished. Such an approach often leaves gaps in information and leads the reporter to search broadly instead of with targeted depth. The best reporters begin thinking about the story the minute they get an assignment. The approach they envision for telling the story informs whom they seek to interview and what information they gather. This class will introduce students to writing as a process that begins with story concept and continues through initial research, focus, reporting, organizing and outlining, drafting and revising.

During this session, the class will review the focus statements written for homework in small breakout groups and then as a class. Professors are encouraged to draft and hand out a mock or real press release or hold a mock press conference from which students can draft a focus statement.

Reading: Zinsser, pages 1-45; Clark/Scanlan, pages 294-302; and relevant pages of the course text

Class 2: The language of news

Newswriting has its own sentence structure and syntax. Most sentences branch rightward, following a pattern of subject/active verb/object. Reporters choose simple, familiar words. They write spare, concise sentences. They try to make a single point in each. But journalistic writing is specific and concrete. While reporters generally avoid formal or fancy word choices and complex sentence structures, they do not write in generalities. They convey information. Each sentence builds on what came before. This class will center on the language of news, evaluating the language in selections from America’s Best Newspaper Writing, local newspapers or the Pulitzers.

Reading: Relevant pages of the course text

Assignment: Students should choose a traditional news lead they like and one they do not like from a local or national newspaper. In a one- or two-page memo, they should print the leads, summarize the stories and evaluate why they believe the leads were effective or not.

Week 4: Crafting the first sentence

Class 1: The lead

No sentence counts more than a story’s first sentence. In most direct news stories, it stands alone as the story’s lead. It must summarize the news, establish the storyline, convey specific information and do all this simply and succinctly. Readers confused or bored by the lead read no further. It takes practice to craft clear, concise and conversational leads. This week will be devoted to that practice.

Students should discuss the assigned leads in groups of three or four, with each group choosing one lead to read to the entire class. The class should then discuss the elements of effective leads (active voice; active verb; single, dominant theme; simple sentences) and write leads in practice exercises.

Assignment: Have students revise the leads they wrote in class and craft a second lead from fact patterns.

Class 2: The lead continued

Some leads snap or entice rather than summarize. When the news is neither urgent nor earnest, these can work well. Though this class will introduce students to other kinds of leads, instructors should continue to emphasize traditional leads, typically found atop breaking news stories.

Class time should largely be devoted to writing traditional news leads under 15-minute deadline pressure. Students should then be encouraged to read their own leads aloud and to critique classmates’ leads. At least one such exercise might ask students to write a traditional lead and a less traditional lead from the same information.

Assignment: Students should find a political or international story that includes various types (direct and indirect) and levels (on-the-record, not for attribution and deep background) of attribution. They should write a one- or two-page memo describing and evaluating the attribution. Did the reporter make clear the affiliation of those who expressed opinions? Is information attributed to specific people by name? Are anonymous figures given the opportunity to criticize others by name? Is that fair?

Week 5: Establishing the credibility of news

Class 1: Attribution

All news is based on information, painstakingly gathered, verified and checked again. Even so, “truth” is an elusive concept. What reporters cobble together instead are facts and assertions drawn from interviews and documentary evidence.

To lend authority to this information and tell readers where it comes from, reporters attribute all information that is not established fact. It is not necessary, for example, to attribute that Franklin Delano Roosevelt was first elected president in 1932, nor that he was elected four times. On the other hand, it would be necessary to attribute, at least indirectly, the claim that he was one of America’s best presidents. Why? Because that assertion is a matter of opinion.

In this session, students should learn about different levels of attribution, where attribution is best placed in a sentence, and why it can be crucial for the protection of the accused, the credibility of reporters and the authoritativeness of the story.

Assignment: Working from a fact pattern, students should write a lead that demands attribution.

Class 2: Quoting and paraphrasing

“Great quote” ranks closely behind “great lead” in the pecking order of journalistic praise. Reporters listen for great quotes as intensely as piano tuners listen for the perfect pitch of middle C. But what makes a great quote? And when should reporters paraphrase instead?

This class should cover a range of issues surrounding the quoted word, from what it is used to convey (color and emotion, not basic information) to how frequently quotes should be used and how long they should run. Other issues include the use and abuse of partial quotes, when a quote is not a quote, and how to deal with rambling and ungrammatical subjects.

As an exercise, students might interview either the instructor or a classmate about an exciting personal experience. After their interviews, they should review their notes and choose what they consider the three best quotes to include in a story on the subject. They should then discuss why they chose them.

Assignment: After completing the reading, students should analyze a summary news story no more than 15 paragraphs long. In a two- or three-page memo, they should reprint the story and then evaluate whether the lead summarizes the news, whether the subsequent paragraphs elaborate on or “support” the lead, whether the story has a lead quote, whether it attributes effectively, whether it provides any context for the news and whether and how it incorporates secondary themes.

Week 6: The building blocks of basic stories

Class 1: Supporting the lead

Unlike stories told around a campfire or dinner table, news stories front load information. Such a structure delivers the most important information first and the least important last. If a news lead summarizes, the subsequent few paragraphs support or elaborate by providing details the lead may have merely suggested. So, for example, a story might lead with news that a 27-year-old unemployed chef has been arrested on charges of robbing the desk clerk of an upscale hotel near closing time. The second paragraph would “support” this lead with detail. It would name the arrested chef, identify the hotel and its address, elaborate on the charges and, perhaps, say exactly when the robbery took place and how. (It would not immediately name the desk clerk; too many specifics at once clutter the story.)

Wire services use a standard structure in building their stories. First comes the lead sentence. Then comes a sentence or two of lead support. Then comes a lead quote — spoken words that reinforce the story’s direction, emphasize the main theme and add color. During this class students should practice writing the lead through the lead quote on deadline. They should then read their assignments aloud for critique by classmates and the professor.

Assignment: Using a fact pattern assigned by the instructor or taken from a text, students should write a story from the lead through the lead quote. They should determine whether the story needs context to support the lead and, if so, include it.

Class 2: When context matters

Sometimes a story’s importance rests on what came before. If one fancy restaurant closes its doors in the face of the faltering economy, it may warrant a few paragraphs’ mention. If it’s the fourth restaurant to close on the same block in the last two weeks, that’s likely front-page news. If two other restaurants closed last year, that might be worth noting in the story’s last sentence; it is far less important. Patterns provide context and, when significant, generally are mentioned either as part of the lead or in the support paragraph that immediately follows. This class will look at the difference between context (information needed near the top of a story to establish its significance as part of a broader pattern) and background (information that gives historical perspective but doesn’t define the news at hand).

Assignment: The course to this point has focused on writing the news. But reporters, of course, usually can’t write until they’ve reported. This typically starts with background research to establish what has come before, what hasn’t been covered well and who speaks with authority on an issue. Using databases such as LexisNexis, students should background or read specific articles about an issue in science or policy that either is highlighted in the Policy Areas section of the Journalist’s Resource website or is currently being researched on your campus. They should engage in this assignment knowing that a new development on the topic will be brought to light when they arrive at the next class.

Week 7: The reporter at work

Class 1: Research

Discuss the homework assignment. Where do reporters look to background an issue? How do they find documents, sources and resources that enable them to gather good information or identify key people who can help provide it? After the discussion, students should be given a study from the Policy Areas section of the Journalist’s Resource website related to the subject they’ve been asked to explore.

The instructor should use this study to evaluate the nature and structure of government/scientific reports. After giving students 15 minutes to scan the report, ask them to identify its most newsworthy point. Discuss what context might be needed to write a story about the study or report. Discuss what concepts or language students are having difficulty understanding.

Reading: Clark/Scanlan, pages 305-313, and relevant pages of the course text

Assignment: Students should (a) write a lead for a story based exclusively on the report; (b) do additional background work related to the study in preparation for writing a full story on deadline; and (c) translate at least one term used in the study that is not familiar to a lay audience.

Class 2: Writing the basic story on deadline

This class should begin with a discussion of the challenges of translating jargon and the importance of such translation in news reporting. Reporters translate by substituting a simple definition or, generally with the help of experts, comparing the unfamiliar to the familiar through use of analogy.

The remainder of the class should be devoted to writing a 15- to 20-line news report, based on the study, background research and, if one is available, a press release.

Reading: Pages 1-47 of Stein/Paterno, and relevant pages of the course text

Assignment: Prepare a list of questions that you would ask either the lead author of the study you wrote about on deadline or an expert who might offer an outside perspective.

Week 8: Effective interviewing

Class 1: Preparing and getting the interview

Successful interviews build from strong preparation. Reporters need to identify the right interview subjects, know what they’ve said before, interview them in a setting that makes them comfortable and ask questions that elicit interesting answers. Each step requires thought.

The professor should begin this class by critiquing some of the questions students drew up for homework. Are they open-ended or close-ended? Do they push beyond the obvious? Do they seek specific examples that explain the importance of the research or its applications? Do they probe the study’s potential weaknesses? Do they explore what directions the researcher might take next?

Discuss the readings and what steps reporters can take to background for an interview, track down a subject and prepare and rehearse questions in advance.

Reading: Stein/Paterno, pages 47-146, and relevant pages of the course text

Assignment: Students should prepare to interview their professor about his or her approach to and philosophy of teaching. Before crafting their questions, the students should background the instructor’s syllabi, public course evaluations and any pertinent writings.

Class 2: The interview and its aftermath

The interview, says Pulitzer Prize-winning journalist Jacqui Banaszynski, is a dance that the reporter leads, but to music the interview subject chooses. Though reporters prepare and rehearse their interviews, they should never simply read the questions they’ve drafted in advance, and they should always be prepared to change direction. To hear the subject’s music, reporters must be more focused on the answers than on their next question. Good listeners make good interviewers — good listeners, that is, who don’t forget that it is also their responsibility to lead.

Divide the class. As a team, five students should interview the professor about his or her approach to teaching, each building on the focus and questions of the previous questioner. The rest of the class should critique the questions, their clarity and their focus. Are the questioners listening? Are they maintaining control? Are they following up? The class also should discuss the reading, paying particularly close attention to the dynamics of an interview, the pace and nature of questions, how the interview closes, and the reporter’s responsibility once an interview ends.

Assignment: Students should be assigned to small groups and asked to critique the news stories classmates wrote on deadline during the previous class.

Week 9: Building the story

Class 1: Critiquing the story

The instructor should separate students into groups of two or three and tell them to read their news stories to one another aloud. After each reading, the listeners should discuss what they liked and struggled with as the story audience. The reader in each case should reflect on what he or she learned from the process of reading the story aloud.

The instructor then should distribute one or two of the class stories that provide good and bad examples of story structure, information selection, content, organization and writing. These should be critiqued as a class.

Assignment: Students, working in teams, should develop an angle for a news follow to the study or report they covered on deadline. Each team should write a focus statement for the story it is proposing.

Class 2: Following the news

The instructor should lead a discussion about how reporters “enterprise,” or find original angles or approaches, by looking to the corners of news, identifying patterns of news, establishing who is affected by news, investigating the “why” of news, and examining what comes next.

Students should be asked to discuss the ideas they’ve developed to follow the news story. These can be assigned as longer-term team final projects for the semester. As part of this discussion, the instructor can help students map their next steps.

Reading: Wickham, Chapters 1-4 and 7, and relevant pages of the course text

Assignment: Students should find a news report that uses data to support or develop its main point. They should consider what and how much data is used, whether it is clear, whether it’s cluttered and whether it answers their questions. They should bring the article and a brief memo analyzing it to class.

Week 10: Making sense of data and statistics

Class 1: Basic math and the journalist’s job

Many reporters don’t like math. But in their jobs, it is everywhere. Reporters must interpret political polls, calculate percentage change in everything from property taxes to real estate values, make sense of municipal bids and municipal budgets, and divine data in government reports.

First, discuss some of the examples of good and bad use of data that students found in their homework. Then, using examples from the Journalist’s Resource website, discuss good and poor use of data in news reporting. (Reporters, for example, should not overwhelm readers with paragraphs stuffed with statistics.) Finally, lead students through some of the basic skill sets outlined in Wickham’s book, using her exercises to practice everything from calculating percentage change to interpreting polls.
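For instructors who want a ready-made illustration of the percentage-change skill, the short sketch below works through the arithmetic on invented budget figures; the numbers and variable names are hypothetical and are not drawn from Wickham's exercises.

```python
# Percentage change, the workhorse of budget and tax stories:
# (new value - old value) / old value * 100

old_budget = 4_500_000  # hypothetical last-year municipal budget
new_budget = 4_860_000  # hypothetical this-year municipal budget

pct_change = (new_budget - old_budget) / old_budget * 100
print(f"The budget rose {pct_change:.1f} percent.")  # -> The budget rose 8.0 percent.
```

The same three-step pattern (subtract, divide by the old figure, multiply by 100) applies to property taxes, enrollment counts or real estate values.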

Assignment: Give students a report or study linked to the Journalist’s Resource website that requires some degree of statistical evaluation or interpretation. Have students read the report and compile a list of questions they would ask to help them understand and interpret this data.

Class 2: The use and abuse of statistics

Discuss the students’ questions. Then evaluate one or more articles drawn from the report they’ve analyzed that attempt to make sense of the data in the study. Discuss what these articles do well and what they do poorly.
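One concrete instance of statistical abuse worth raising in this discussion is the misreading of poll margins of error. The sketch below assumes a simple random sample and a 95 percent confidence level (real polls add weighting and design effects), and all figures are invented for illustration; it shows why a 52-48 result in a 1,000-person poll is effectively a tie.

```python
import math

# Approximate 95% margin of error for a simple random-sample poll:
# 1.96 * sqrt(p * (1 - p) / n)

p = 0.52  # hypothetical share backing the leading candidate
n = 1000  # hypothetical sample size

moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {moe * 100:.1f} points")  # -> +/- 3.1 points

# A 4-point "lead" when each candidate's estimate carries a roughly
# 3-point margin is too close to call; reporting it as a clear lead
# misleads readers.
```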

Reading: Zinsser, Chapter 13; “Macabre Reminder: The Corpse on Union Street,” Dan Barry, The New York Times

Week 11: The reporter as observer

Class 1: Using the senses

Veteran reporters covering an event don’t only return with facts, quotes and documents that support them. They fill their notebooks with details that capture what they’ve witnessed. They use all their senses, listening for telling snippets of conversation and dialogue, watching for images, details and actions that help bring readers to the scene. Details that develop character and place breathe vitality into news. But description for description’s sake merely clutters and obscures the news. Using the senses takes practice.

The class should deconstruct “Macabre Reminder: The Corpse on Union Street,” a remarkable journey around New Orleans a few days after Hurricane Katrina devastated the city in 2005. The story starts with one corpse, left to rot on a once-busy street, and then pans the city as a camera might. The dead body serves as a metaphor for the rotting city, largely abandoned and without order.

Assignment: This is an exercise in observation. Students may not ask questions. Their task is to observe, listen and describe a short scene, a serendipitous vignette of day-to-day life. They should take up a perch in a lively location of their choosing — a student dining hall or gym, a street corner, a pool hall or bus stop or beauty salon, to name a few — wait and watch. When a small scene unfolds, one with beginning, middle and end, students should record it. They then should write a brief story describing the scene that unfolded, taking care to leave themselves and their opinions out of the story. This is pure observation, designed to build the tools of observation and description. These stories should be no longer than 200 words.

Class 2: Sharpening the story

Students should read their observation pieces aloud to a classmate. Both students should consider these questions: Do the words describe or characterize? Which words show and which words tell? What words are extraneous? Does the piece convey character through action? Does it have a clear beginning, middle and end? Students then should revise, shortening the original scene to no longer than 150 words. After the revision, the instructor should critique some of the students’ efforts.

Assignment: Using campus, governmental or media calendars, students should identify, background and prepare to cover a speech, press conference or other news event, preferably on a topic related to one of the research-based areas covered in the Policy Areas section of the Journalist’s Resource website. Students should write a focus statement (50 words or less) for their story and draw up a list of some of the questions they intend to ask.

Week 12: Reporting on deadline

Class 1: Coaching the story

Meetings, press conferences and speeches serve as a staple of much news reporting. Reporters should arrive at such events knowledgeable about the key players, their past positions or research, and the issues these sources are likely to discuss. Reporters can discover this information in various ways. They can research the topic and speaker online and in journalistic databases, peruse past correspondence sent to public offices, and review the writings and statements of key speakers with the help of their assistants or secretaries.

In this class, the instructor should discuss the nature of event coverage, review students’ focus statements and questions, and offer suggestions about how to cover the events.

Assignment: Cover the event proposed in the previous class and draft a 600-word story, double-spaced, based on its news and any context needed to understand it.

Class 2: Critiquing and revising the story

Students should exchange story drafts and suggest changes. After students revise, the instructor should lead a discussion about the challenges of reporting and writing live on deadline. These likely will include issues of access and understanding and challenges of writing around and through gaps of information.

Weeks 13/14: Coaching the final project


The final week or two of the class is reserved for drill in areas needing further development and for coaching students through the final reporting, drafting and revision of the enterprise stories based on the study or report they covered in class.


About The Author


The Journalist's Resource
