
What is Closed Captioning? Everything You Need to Know is Here


Closed captions were first demoed in the United States during the First National Conference on Television for the Hearing Impaired in 1971. They were successfully introduced and broadcast widely in 1980, with real-time captioning launching in 1982. Today, there are many more types of closed captioning options available.

Unfortunately, knowing where to start with the many kinds of captions available for video production can be challenging. Should you use open captions or closed captions? What kind of closed captions are best for viewers, and what are the benefits of closed captions for both your audience and your video SEO?

Today, we’re sharing everything you need to know about closed captioning, its many benefits, and how you can efficiently add closed captions to videos.

What is closed captioning?

Closed captioning is the process of showing a text version of the spoken parts of a video. For example, closed captions would include the text of the dialogue in a movie or the audio of a recorded presentation. Closed captions also typically include notes about audio cues, sound effects, and music for viewers who cannot hear them. The “closed” in closed captioning means viewers can turn the captions on or off; open captions, by contrast, are permanently embedded in the video and cannot be disabled.

Closed captions were initially developed to make video content more accessible to people who are hard of hearing or deaf. However, they are also widely used by language learners, people in noisy environments, and people learning to read.

Types of Closed Captions

Pop-on captions.

As the name suggests, pop-on captions pop onto your video screen as the audio is spoken, then disappear in time for the following captions to pop on screen. These are the most common type of closed captions and are created for prerecorded broadcast, web, and streaming content.

Pop-on captions are never used for live broadcast content because each caption must be processed by an encoder, which needs all of the text and audio information before it can post the caption on screen. As a result, pop-on captions would be delayed if they were used live, which would be a disservice to viewers watching the program in real time.

Ideally, editors should use pop-on captions for prerecorded content. They are versatile and easy to customize and synchronize to the speaker’s timing, creating an incredible viewing experience.

Roll-up captions

Roll-up captions constantly roll onto your video screen, one following the other, creating more time for viewers to read the captions than other caption options. The top line of the roll-up captions will disappear every time a new caption rolls on screen. 

Typically, live programming uses roll-up captions because the longer screen time allows the dialogue to be synchronized in real time. Each sentence appears quickly but remains on screen longer than pop-on captions would. They also require less load time.

Paint-on captions

Paint-on captions are stylized closed captions that populate letter by letter on the video screen. This animation gives the effect that the captions are being typed or painted on while you’re reading. However, the animation happens rapidly, so it’s largely unnoticeable unless your entire video is captioned in this style. 

Typically, editors will use paint-on captions as a stylish opening for a prerecorded program. They aren’t generally used for an entire long-form video because they have higher load-time requirements and can have a slight delay compared to pop-on captions. Generally, paint-on captions are considered outside the industry standard and are reserved for stylistic, fast-paced programs with short speech patterns.


Benefits of Captioning

1. Accessibility

The most crucial benefit of captioning is accessibility for viewers who are hard of hearing or deaf. An estimated 48 million Americans live with hearing loss. Without closed captions, these viewers cannot fully engage with your video content, and you lose valuable audience members in the process.

2. Video SEO

Adding transcripts to your videos can boost SEO by giving search engines a way to examine the entire text of your audio or video clip. Providing search engines with specific information from your video makes it easier for them to index it properly, allowing it to rank better in organic search results.

3. Better watch time

Studies have shown that adding captions or subtitles increases view time by up to 12%, a significant improvement in watch time, especially considering the small time commitment it takes to create and add captions.

4. Legal compliance

Depending on where you live and where your content is published, you may be legally required to provide accurate closed captions on your videos, even those released on the Internet. The Federal Communications Commission strictly regulates closed captioning requirements in the United States, and organizations have been sued in the past for not adhering to these standards. Avoid legal trouble altogether by including accurate closed captions on all your video content.

5. Improved viewer comprehension

Reading closed captions while listening to a video can be a helpful learning aid that reinforces the information presented. A national study by Oregon State University found that 52% of students improved comprehension through captions. The most commonly reported reason students use captions is as a focus aid, which is valuable insight for anyone trying to improve their audience’s information retention.

6. Flexible viewing options

Many viewers watch videos in places where they cannot listen to the audio, such as on the bus or in a noisy office, so closed captions allow them to mute your video but still understand your content. It’s reported that 85% of Facebook videos are played without sound, so you miss out on a valuable demographic if you don’t have captions available.

7. Improved video search

Interactive transcripts allow viewers to search your video for specific keywords and jump to the exact spot where a topic is discussed. Captions boost viewer experience and satisfaction, as viewers can immediately get the value they want from your video rather than waiting for the information they need.


How to add closed captions to videos

Before adding closed captions to your videos, you’ll need to create a transcript of the audio. Creating a transcript yourself can be a tedious, frustrating process, from having to go over your audio file multiple times while typing to correcting punctuation and inaccuracies in your transcript.

We recommend using Notta, a speech-to-text online converter that can quickly transcribe your video, to skip the frustration of creating the transcript yourself. After creating a transcript using Notta, choose one of the following methods to add closed captions to your video.


Methods to add closed captions to a video:

1. Upload as a “sidecar” file

If you’re uploading your video to YouTube or another social media platform, consider uploading your closed captions as a “sidecar” file. Take your transcript, usually in a .SRT file format, and upload it alongside your video. Most websites will allow you to review your closed captions before publishing the video, allowing you to check the timing and accuracy of your captions.
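The .SRT format mentioned above is plain text: numbered cues, each with start and end timestamps in `HH:MM:SS,mmm` form separated by `-->`, followed by the caption text and a blank line. As a minimal sketch (the cue timings and caption text below are invented for illustration), a timed transcript can be turned into a sidecar file in a few lines of Python:

```python
def to_srt_timestamp(seconds):
    """Format seconds as the HH:MM:SS,mmm timestamp SRT requires."""
    millis = int(round((seconds - int(seconds)) * 1000))
    total = int(seconds)
    hours, rem = divmod(total, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

def build_srt(cues):
    """Build SRT text from a list of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    # Cues are separated by a blank line, per the SRT convention.
    return "\n".join(blocks)

cues = [
    (0.0, 2.5, "Welcome to the channel."),
    (2.5, 5.0, "[upbeat music playing]"),
]
print(build_srt(cues))
```

Most platforms that accept sidecar files will parse this structure and let you fine-tune cue timing in their built-in caption editor before publishing.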

2. Caption encoding

Caption encoding is a more direct approach to adding closed captioning. Instead of uploading the closed captions separately, the captions are embedded into the video and uploaded as a single asset to the platform. This option is perfect for video streaming services, like Netflix, or video players, like QuickTime, that allow users to download videos to watch offline. Because the captions are “burned” directly into the video, they remain available offline, unlike sidecar captions hosted separately by the platform.

Caption encoding can be done by third-party software, such as 3Play Media. Simply upload your transcript to the website with your video, then select caption encoding for your captions to be “burned” into the video file.

3. Use an integration or API workflow

If you publish videos frequently and want to streamline your workflow, we strongly recommend considering an integration or API workflow. This process is similar to uploading a “sidecar” file, but third-party software automatically creates and uploads the captions to your video. All you need to do is log in to the video platform, upload your video, then notify or tag the software on your upload. From there, the software creates captions and posts them to your video, saving you the hassle of creating the transcript, timing it, and uploading it. The downside to this method is that it is more expensive, since you are outsourcing the entire process.


FAQs about Closed Captioning

1. What is the difference between open captions and closed captions?

The difference between open captions and closed captions is that open captions are a permanent part of the video and cannot be turned off. Open captions also have the advantage of being playable on all devices and video players, and they give publishers control over the size and style of the captions. In contrast, closed captions can be inconsistent across platforms and require special decoder hardware to be displayed on certain video devices.

2. What is the difference between closed captions and subtitles?

The difference between closed captions and subtitles comes down to the intended audience. Closed captions provide both dialogue and other important parts of the video’s soundtrack, such as audio cues, descriptions of music, background noises, and a phone ringing. Subtitles, in contrast, are created for an audience that can hear the audio but may need additional information in text form, typically the dialogue. Often, subtitles are used to translate a film into a foreign language: the audience can still hear the music, background noises, and audio cues but relies on the subtitles for the translated dialogue. Closed captions, by comparison, are created for an audience that cannot hear or has impaired hearing.

3. What is captioning accuracy?

The industry standard for captioning accuracy is an accuracy rate of 99% or more, which ensures that people who are hard of hearing or deaf can fully understand and access the video content. Currently, many video platforms like YouTube offer automatic captions. However, their accuracy is usually only about 60 to 70%, leaving hard-of-hearing people with inaccurate, messy captions. Using dedicated closed captioning software can significantly improve your captioning accuracy and help you avoid excluding people who rely on accurate closed captions to consume content.

4. What is the most used software for closed captioning?

The most used software for closed captioning includes Rev, Notta, and GoTranscript. All three of these software programs provide accurate transcriptions for video content.

5. What percentage of people use closed captions?

Approximately 35% of people use closed captions always or often. Another 19% of survey respondents said they use closed captions sometimes, meaning a total of 54% of people use closed captions regularly. As a result, closed captions are an essential part of publishing video content and making it more accessible, SEO-friendly, and helpful to viewers.

Adding closed captioning to your videos is critical to making your content more accessible, searchable, and SEO-friendly, and to increasing audience retention. We hope today’s guide on everything you need to know about closed captioning helps you add closed captions to your next video with ease. Notta can quickly transcribe your videos, both live and post-recording, making adding closed captions a fast, straightforward process.

Notta provides complimentary real-time transcription, analysis, and summarization of your audio and video material, instantly transforming spoken words into searchable text on any device. This enables you to effortlessly access knowledge from any content, no matter where you are.


What’s that you say? Present with captions in Google Slides

Oct 08, 2018

[[read-time]] min read

Years ago in a Long Island doctor’s office, four-year-old Laura was fitted with her first pair of hearing aids, customized to compensate for her specific hearing loss. However, they didn’t work very well, particularly in noisy backgrounds, so she eventually stopped wearing them.

A few years later on a school bus in Bethesda, MD, nine-year-old Abigail sat next to a classmate who taught her how to communicate using American Sign Language. In high school, she worked in a biology lab at the National Eye Institute where she researched retinitis pigmentosa, a genetic disorder that causes loss of vision.

Flash forward to today where we, Laura and Abigail, work at Google, building products with accessibility features that help billions of users across the globe. We met earlier, through the accessibility community at MIT, where we studied computer science with the hopes of using our technical skills to make a difference in people’s lives.

During our time at university, Abigail built a solution that helped a blind man use his touch-screen oven, led a team that enabled blind individuals to sign legal documents independently, and co-founded an assistive technology hackathon. Laura researched a new signal processing algorithm for hearing aids in noisy environments, built an app for residents in a neurological disease care facility to call for help in a more accessible way, and worked on a hands-free page turner for individuals unable to use their arms. This work not only made us see what an impact technology can make on people with accessibility needs, but also motivated us to focus our careers in this area when we graduated.

When we landed at Google, we both independently joined the G Suite accessibility team. As part of this team, we've improved screen reader, Braille and screen magnifier support on Google Docs, Sheets and Slides, and we have represented the Google Accessibility team at external conferences. We’re also involved with the American Sign Language community at Google, which promotes inclusivity among all Googlers through shared language.

Recently, an internal hackathon led us to work on a project that is deeply personal. Upon observing that presentations can be challenging for individuals who are deaf or hard of hearing to follow along, we both teamed up with the idea to add automated closed captions to G Suite’s presentation tool, Google Slides.

This work has moved from a passion project to our full-time job, and today we’re officially launching automated closed captions in Google Slides. The feature will gradually roll out to all Slides users starting this week.

An example of closed captions in Google Slides

How it works

The closed captions feature is available when presenting in Google Slides. It uses your computer’s microphone to detect your spoken presentation, then transcribes—in real time—what you say as captions on the slides you’re presenting. When you begin presenting, click the “CC” button in the navigation box (or use the shortcut Ctrl + Shift + c in Chrome OS / Windows or ⌘ + Shift + c in Mac).

As you start speaking into your device’s microphone, automated captions will appear in real time at the bottom of your screen for your audience to see. The feature works for a single user presenting in U.S. English on a laptop or desktop computer, using the Chrome browser. We’re looking to expand the feature to more countries and languages over time. The captions are powered by machine learning and heavily influenced by the speaker's accent, voice modulation, and intonation. We’re continuing to work on improving caption quality.

Closed captioning in Slides can help audience members like Laura who are deaf or hard of hearing, but it can also be useful for audience members without hearing loss who are listening in noisy auditoriums or rooms with poor sound settings. Closed captioning can also be a benefit when the presenter is speaking a non-native language or is not projecting their voice. The fact that the feature was built primarily for accessibility purposes but is also helpful to all users shows the overall value for everyone of incorporating accessibility into product design.

You might think that the experiences we had growing up are the reasons we were inspired to work on accessibility at Google. That’s partly true. But we really got into this work for its potential to improve the lives of people with disabilities, for the interesting technologies and design constraints, and because of our desire to use our skills to make the world a better place. We’re excited to contribute to that effort with closed captions in Google Slides, and we’re eager to share it with you. Visit our help center to learn more.


Eos

Science News by AGU

Caption This! Best Practices for Live Captioning Presentations



Presentations that have captions are better understood, whether they are in-person or remote.

Captions make verbal material more accessible to a wider variety of people. A study of BBC television viewers reported that 80% of caption users are not deaf or hard of hearing. During English-spoken scientific presentations, not-yet-fluent English speakers, people who are deaf or hard of hearing, and people who have auditory processing disorder develop listening fatigue that can inhibit their understanding and limit their participation in discussions.

Increasing the accessibility of presentations and improving inclusivity of discussions provide a path toward increasing diversity within the sciences. Studies have shown that subtitles or captions improve both English language skills [e.g., Vanderplank, 2016; Wang and Liu, 2011] and accessibility of science for deaf and hard of hearing participants [e.g., Kawas et al., 2016; Vanderplank, 2016]. Furthermore, for remote presentations, audio may not be accessible in all shared workspaces.

A myriad of tools and platforms can provide captioning for live presentations. Why then don’t we regularly caption geoscience presentations? Our resistance may be due to such factors as not knowing or believing that captioning is needed, not knowing how to use these tools, and/or believing that the resulting captioning will be inadequate. However, presenters should make their talks accessible without requiring participants to request captions each time.

This article outlines different strategies for providing effective captions using widely available captioning tools and presents results of our performance assessment of artificial intelligence (AI)–based auto-captioning of jargon-rich geological passages. Because most scientific presentations are delivered using either Microsoft PowerPoint or Google Slides presentation software, we focus our performance assessment on the built-in auto-captioning provided by these platforms.

Our evidence supports five best practices and key takeaways:

  • Implement AI-based auto-captioning directly within the presentation software.
  • Use an external microphone.
  • Speak deliberately and clearly.
  • Practice with the presentation software beforehand and add to text of the slides words that are typically missed with your accent.
  • Always accommodate requests for human captionists.

In-Person Presentations

For in-person presentations, either trained human captionists or AI-based auto-caption or transcription software can provide live captioning (Figure 1). Captionists use stenography tools to provide accurate transcriptions. For everyone to access the captions, the captionist’s transcriptions can be projected onto a separate screen near the presentation slides.

Figure 1. How different platforms can accommodate captioning for in-person and online meetings.

Both Microsoft PowerPoint (with Office 365 or Presentation Translator) and Google Slides (with the Chrome browser) provide built-in AI-based auto-captioning directly onto the presented slides that can be used by anyone (instructions here). Third-party software, such as Ava, Rev, and Otter.ai, can also provide AI-based transcriptions. In addition to their wide availability, an advantage of Slides and PowerPoint auto-captions over third-party transcription software is that the captioning is projected onto the same screen as the presentation. Having captions within the presented slides frees the audience from repeatedly having to shift its focus from the presentation material to a separate caption screen.

Online Presentations

For remote online presentations, any of the in-person strategies can also work. Human captionists anywhere in the world can join remote meetings. In addition, the online meeting platforms Google Meet and Microsoft Teams offer built-in live auto-captioning that uses the same AI-based transcription tools as their presentation software. With the Webex and Zoom platforms, captioning can be available to everyone if the host appoints the captionist within the meeting software. Zoom and Webex also allow for third-party auto-captions if the host has paid for those services.

The benefit of providing captioning directly within Microsoft PowerPoint and Google Slides is that the built-in AI-based captioning means you don’t need to add another tool or pay for an extra service. Many online presentations are also recorded. While a variety of tools can add carefully edited captions to recorded lectures that didn’t have live captioning, offering a transcript after a live presentation is not a suitable substitute for improving participation.

How Accurate Are Captions for Scientific Talks?


If you have watched auto-captions provided by YouTube, then you have seen low-quality captions, sometimes called craptions. The word error rate (WER) of YouTube’s non-AI-based auto-captioning is 20%–50%, which renders it practically useless unless creators manually edit the autogenerated transcript. Typical word errors include split or blended words, incorrect spelling, and incorrect guesses. For both AI-based and human captioning, WER is affected by microphone quality, Internet quality, accent and style of the speaker, and advance access of the captionist to the presentation content.

Jargon, such as is often encountered in geoscience presentations, can be particularly challenging for accurate captioning. To test how well live auto-captioning software captures scientific presentations, we chose two passages rich with geological jargon taken from Van der Pluijm and Marshak [2004] and Weil [2006]. Both passages have complex words that are rarely used outside the discipline as well as common English words that are used differently by experts. For example, “thrust” is typically a verb, but geologists use it as an adjective for a type of fault. The second passage also tests the recognition of acronyms. Prior to testing the auto-caption performance, we identified words that we expected to be challenging (Table 1).

Table 1. Words Missed with Captioning of American-Accented English and Standard Sound Quality

*Captioned correctly under best practices and after some training.

We measured the WER of Microsoft PowerPoint and Google Slides AI-based live auto-captioning for both passages under a variety of conditions. WER indicates occurrence of error, so if the captioning never caught the acronym WEVB (Western European Variscan belt), for example, this would count as four mistakes in the second passage.
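WER itself is simple to compute: it is the word-level edit distance (substitutions, deletions, and insertions) between the reference transcript and the captions, divided by the number of words in the reference. A minimal sketch in Python, where the sample hypothesis string is invented to mimic the split-word errors described above:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the western european variscan belt records orocline tightening"
hyp = "the western european various can belt records or clean tightening"
print(round(word_error_rate(ref, hyp), 2))  # → 0.5
```

Note that because the denominator counts reference words, a missed multiword acronym expansion (like WEVB above) penalizes the score once per missed word, exactly as described.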

With a recording of an American-accented English female voice, we repeatedly tested the caption performance of both PowerPoint and Slides. For some tests, we decreased the sound quality by adding background noise and lowering input volume. In another set of tests, we assessed the WER of recordings of nonnative English-speaking geologists reading the two passages. The accents (Chinese, Mexican, Spanish, and German) are not meant to provide a complete accounting of the potential WER of nonnative English speakers but instead to show the relative performance of the AI-based auto-captioning for native and nonnative speakers.

Surprisingly, many technical words that we expected to be missed were accurately captioned (Table 1). Some words and phrases were missed in some, but not all, of the repeated tests. For example, while the phrase “hinge zone” comprises common English words, the captioning sometimes made this unfamiliar phrase into a single word. Repeating each recording at least three times allowed us to assess the variability of performance due to Internet quality and other fluctuations. Only six words from the two passages were never correctly captioned with the AI-based auto-captioning using the American English voice recorded under typical sound conditions (Table 1). Words that were missed much of the time for American-accented English were missed more often with non-American-accented English recordings.

When flummoxed, Google Slides captioning, at the time of our testing, would sometimes omit parts of the passage, whereas Microsoft PowerPoint misguessed a few words. This difference accounts for the larger range of WER for Slides captions in Figure 2. Otherwise, the performance of Microsoft PowerPoint and Google Slides AI-based captioning was similar under most of the scenarios tested. While analyzing recordings of different accents, we noticed that some words, such as Variscan, were learned by the AI-based captioning and later recognized by the English recording, yielding a 2% improvement in WER.

Our experience suggests that jargon may be learned if the AI-based software hears the word in different ways. These codes are updated all the time and might in the future also yield improved caption performance with consistent recognition of jargon placed within the slides or notes.

We tested the effect of audio quality by adding background noise and reducing the sound level of the American-accented English. The tests showed that poor sound quality has a dramatic impact on the quality of the captions (Figure 2). The WER with poor sound quality reached the error levels of auto-captions, exceeding 20% in some cases.

Figure 2 summarizes these results. Recordings of nonnative English speakers produced a WER of 10%–40% with a median of 20%. Spoken Spanish to English captioning had a WER of about 7%. The American-accented recording with poor sound quality had a WER of 12%–40% with a median of 17%, while under normal conditions American-accented recordings had a WER of 5%–15% with a median of 9%. Following best practices, we produced a WER of 3%–12% with a median of 5%. Within many data sets, PowerPoint was more consistent and performed a bit better than Google Slides.

The WER from recordings of several different people with nonnative English accents showed that accents strongly decrease the quality of captioning. Microsoft PowerPoint allows the user to choose among several variants on English accents, such as British and Australian, that were not tested in this investigation. Presumably, if one spoke with an Australian accent with this accent setting chosen, the performance would be similar to that presented here of American-accented English (Figure 2). PowerPoint also provides captioning of an extensive set of languages. In a limited test, we found that spoken Spanish to Spanish captions performed as well as spoken American English to English. PowerPoint also provides translation from one spoken language to another captioned language. We found that the WER for captioning of spoken Spanish to captioned English (~7%) was less than most of the nonnative English recordings tested here, and the resulting captions missed much of the same jargon presented in Table 1. Some nonnative English speakers may find a reasonable WER if they use the PowerPoint translation feature and speak in their native language, allowing the software to translate the captions into another language.

Best Practices


Implementing AI-based auto-captioning in live presentations using Microsoft PowerPoint or Google Slides is straightforward and can yield acceptable quality captioning. Our findings highlighted the following best practices.

  • Implement AI-based auto-captioning directly within the presentation software. Your audience or meeting participants won’t have to run a separate transcription service and switch attention between the presentation and the transcription.
  • Speak deliberately and clearly. The tests in Figure 2 for American-accented English were from recordings spoken at a conversational pace (average WER of 7.5%). When the same speaker spoke more intentionally, the WER dropped to less than 6%. The geological jargon was still missed, but the captioning caught nearly all of the nonjargon words when the speaker pace was slowed.
  • Practice with the presentation software beforehand and see which words are typically missed with your accent. Adding that missed jargon within the text of the slide ensures that the audience can see what the word should be and understand your message. As you repeat jargon in different ways, the AI-based captioning may learn this new word.
  • Use an external microphone to improve audio quality. In our tests, having the presenter use a lapel microphone produced the greatest improvement to caption quality regardless of other variables.

Following these best practices of speaking intentionally with a good quality microphone decreased the WER for the two passages to approximately 5% over several recordings, a reasonable rate for jargon-rich material (Figure 2). Some jargon that was often missed in early tests using the built-in microphone and conversational pace was captured accurately using these best practices, which also eliminated other errors from blended and missed words.

Finally, a deaf or hard of hearing person may specifically request a human captionist for live presentations because captionists provide more accurate captions. Accommodation requests should always be honored. Captionists are expected to have a word error rate of 1% for nonjargon speech. While this level of accuracy is required for some participants, many of us can benefit greatly from captioning with an error rate of up to 5%, such as that provided by AI-based live auto-captioning.

Always include captioning in your live meetings, workshops, webinars, and presentations.

Acknowledgments

The authors thank Alina Valop, Xiaotao Yang, David Fernández-Blanco, and Kevin A. Frings for recording their readings of the two passages and David Fernández-Blanco for reviewing this article.

Kawas, S., et al. (2016), Improving real-time captioning experiences for deaf and hard of hearing students, in Assets’16: Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility , pp. 15–23, Assoc. for Comput. Mach., New York.

Kim, J. Y., et al. (2019), A comparison of online automatic speech recognition systems and the nonverbal responses to unintelligible speech, preprint, arXiv:1904.12403 .

Vanderplank, R. (2016), Captioned Media in Foreign Language Learning and Teaching: Subtitles for the Deaf and Hard-of-Hearing As Tools for Language Learning , xiii + 269 pp., Springer, New York, https://doi.org/10.1002/tesq.407 .

Van der Pluijm, B. A., and S. Marshak (2004), Earth Structure: An Introduction to Structural Geology and Tectonics , 2nd ed., Norton, New York.

Wang, K., and H. Liu (2011), Language acquisition with the help of captions, Stud. Lit. Lang., 3 (3), 41–45, https://doi.org/10.3968/j.sll.1923156320110303.1200 .

Weil, A. B. (2006), Kinematics of orocline tightening in the core of an arc: Paleomagnetic analysis of the Ponga Unit, Cantabrian Arc, northern Spain, Tectonics , 25 (3), TC3012, https://doi.org/10.1029/2005TC001861 .

Supplementary Materials

We used two passages to test the AI-based live auto-captioning.

Passage 1, from Van der Pluijm and Marshak [2004]:

“Since the Alpine nappes exclusively consist of thin slices of upper crustal basement and/or its cover, detached from their lower crustal and mantle substratum, all European lower crust, including parts of the upper crust, must have been subducted together with the mantle lithosphere. Hence, north vergent nappe stacking during this collisional stage took place within an accretionary wedge that starts to grow as more nonsubductable upper crustal granitic material of the European margin enters the subduction zone. Radiogenic heat production within this granitic basement, perhaps in combination with slab break-off, leads to a change in the thermal regime and to Barrovian type metamorphism . ”

Passage 2, from Weil [2006]:

“Paleomagnetic and structural analyses of the Western European Variscan Belt (WEVB) suggest that the most viable kinematic model for Variscan deformation in northern Iberia is oroclinal bending of an originally linear belt in a two-stage tectonic history. This history represents two regional compression phases (East West in the Late Carboniferous and North South in the Permian, both in present day coordinates), which resulted in the refolding (about steeply plunging axes) of initially north south trending thrusts and folds in the hinge zone, and oroclinal tightening due to vertical axis rotation of the belt’s limbs. However, the orocline model has yet to be critically tested in the WEVB’s core. This study reports new paleomagnetic, rock magnetic, and structural data from the inner core of the WEVB in order to test opposing kinematic models for the well documented fault and fold interference structures formed by late stage Variscan deformation and to better understand the overall development of the WEVB arc.”

Author Information

Michele Cooke ( @geomechCooke ), Department of Geosciences, University of Massachusetts Amherst; Celia R. Child, Department of Geology, Bryn Mawr College, Bryn Mawr, Pa.; Elizabeth C. Sibert ( @elizabethsibert ), Department of Earth and Planetary Sciences, Yale University, New Haven, Conn.; Christoph von Hagke ( @StrucGeology ), Department of Geography and Geology, University of Salzburg, Salzburg, Austria; and S. G. Zihms ( @geomechSteph ), University of the West of Scotland, Paisley, U.K.

Cooke, M., Child, C. R., Sibert, E. C., von Hagke, C., and Zihms, S. G. (2020), Caption this! Best practices for live captioning presentations, Eos, 101, https://doi.org/10.1029/2020EO150246. Published on 9 October 2020.

Text © 2020. The authors. CC BY-NC-ND 3.0 Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.



What is Closed Captioning and How Does it Differ From Subtitles

Matthew Patel

In this media-driven world, it's common to come across the terms "closed captions" and "subtitles." Most people don't give them much thought, but they're of great significance for those who use them. However, many don't know there's a distinction between the two. So what is the difference between closed captioning and subtitles, and why does it matter?

The Word on Closed Captioning and Subtitles

Closed captioning and subtitles have two entirely different purposes. Nevertheless, they often cause confusion. It's not uncommon for even larger media entities to interchange the two terms. 

Closed Captioning

First things first: what is closed captioning, and what is its purpose? Captions are a transcript of sound. Whether from a movie on TV or a documentary online, captions take spoken words and render them visually as readable text.

Furthermore, captions relay more than just spoken language. They function as an accessibility feature that assists people who are deaf or hard of hearing. Captions cover features such as:

  • Background noises so the viewer can experience the video as a whole
  • Music lyrics and descriptions that help emphasize the plot's current tone
  • Speaker differentiation and narration indicators to clarify who's saying what
  • Moving on-screen word placement so reading doesn't interfere with viewing.      

Subtitles

Subtitles also offer assistance but do so with a contrasting purpose. They're geared toward viewers without hearing issues who face a language barrier. If the production is not in their native tongue, a person can opt to add subtitles to the footage.

In essence, subtitles are translations. Typically, outlets include subtitles for the countries where the footage is distributed. For example, if an American movie is released in France, French will be a subtitle option.

More than just thrown-together words, subtitles have some significant features, including: 

  • Professional translations with precise editing
  • Timed transcriptions that coincide exactly with the footage
  • Uniform bottom-screen display to provide consistency    

Caption Options

Closed captioning has the unique feature of live typing. If a broadcast includes impromptu or unscripted moments, a typist can add them in real time, though this can introduce a slight delay.

There's also the option of open captions. Whereas closed captions allow the viewer to turn them on or off, open captions are embedded in videos and can't be shut off. 

Subtitle Variables

On occasion, subtitles include "subtitles for the deaf and hard of hearing." This feature is a hybrid between captions and subtitles. Providing words in the host language and other languages, it takes on the characteristics of closed captions, such as denoting sound effects and music. However, it cannot move around the screen like closed captions, so the words are always in the same place.

Subtitles are often used as a learning tool as well. Many people trying to absorb a new language opt to turn them on and follow along, and research suggests this method is highly beneficial.

Concluding Comparison

Both closed captioning and subtitles serve important communication purposes. While captions help those with auditory impairments, subtitles assist with language translation.

Thanks to technology's constant evolution, providing visual on-screen text is becoming the norm. Though some countries, like the United States, have legal requirements for disability accommodations, these are only a minimum. Most media outlets recognize the moral and practical advantages of visual text, offering a number of inclusive options so people across the world can stay informed or be entertained.


May 18, 2022

Make PowerPoint presentations more accessible with closed captions in embedded videos


Hi, Office Insiders! I'm Peter Wu, a Principal Software Engineer on the PowerPoint team. In honor of Global Accessibility Awareness Day, I wanted to highlight the importance of making your presentations accessible to people with disabilities. And to help you do that, I'm thrilled to announce you can now add closed captions to embedded videos for your presentations in PowerPoint for Mac.

Closed captions in embedded videos

You've no doubt seen captions before while watching a video: they are the words that appear on top of the video as it plays (often at the bottom of the screen). Studies show that captions benefit everyone who watches videos, especially those watching videos in their non-native language, people learning to read, and individuals who are deaf or hard of hearing. Others might choose to use captions when their surroundings are too noisy to hear the video or they need to be quiet (and don't have headphones).

Closed captions are content that is stored separately from the video pixels so that the person watching can turn them on or off. Videos often include multiple closed captions tracks: one in the language of the video and others in additional languages.

While PowerPoint for Mac can play closed captions that are encoded into the video file, it can be a challenging process to encode closed captions into a video. It’s typically easier to store the closed captions in a separate file. Now you can take closed caption files in WebVTT format and insert them into an embedded video in PowerPoint for Mac.
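WebVTT is indeed simple enough to type by hand: a plain text file that begins with a WEBVTT header, followed by blank-line-separated cues, each with a start and end timestamp. A minimal example (the timings and wording here are illustrative):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to today's presentation.

00:00:04.500 --> 00:00:08.000
We'll begin with an overview of closed captioning.
```

Save a file like this with a .vtt extension and it can be inserted as a caption track.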

How it works

PowerPoint for Mac screenshot highlighting the Insert Captions option on the Playback tab.

NOTE: Each file that you insert appears as a separate track on the menu. The captions will appear overlaid on the video as it plays.

Tips and tricks

  • PowerPoint for Windows already supports closed captions in WebVTT format, so now closed captions will play in your PowerPoint videos for both Windows and Mac (regardless of which version you used to insert them). PowerPoint for Windows has recently been updated to allow multiple files to be inserted at the same time as well.
  • There are many different apps and services you can use to create WebVTT files. For example, you can use Microsoft Stream to automatically generate closed captions , edit them for accuracy, and then download them as a WebVTT file. And because the format is very simple, you can also just type the captions in a text editor.
  • Even if you have live captions enabled in PowerPoint, or you have a professional captioner or sign language interpreter for your presentation, it is a best practice to create and insert closed captions into your videos ahead of time. They will be more accurate and better synchronized with the video, and participants will be better able to understand your message.

PowerPoint for Mac screenshot highlighting the Insert Captions option on the Accessibility ribbon.

Scenarios to try

  • Add closed captions to embedded videos in your PowerPoint presentation to make them more accessible.
  • Add closed captions in additional languages.

Availability

Closed captions in embedded videos is rolling out to Office Insiders running Beta Channel Version 16.62 (Build 22051100) or later.

Don’t have it yet? It’s probably us, not you.

Features are released over some time to ensure things are working smoothly. We highlight features that you may not have because they’re slowly releasing to larger numbers of Insiders. Sometimes we remove elements to further improve them based on your feedback. Though this is rare, we also reserve the option to pull a feature entirely out of the product, even if you, as an Insider, have had the opportunity to try it.

We want to hear from you! Please click  Help  >  Feedback  to submit your thoughts about this feature.

Learn what  other information you should include in your feedback  to ensure it’s actionable and reaches the right people. We’re excited to hear from you!

Sign up for the Office Insider newsletter  and get the latest information about Insider features in your inbox once a month!



Add closed captions or subtitles to media in PowerPoint

In PowerPoint for Windows, macOS, and the web, you can add closed captions or subtitles to videos and audio files in your presentations. Adding closed captions makes your presentation accessible to a larger audience, including people with hearing disabilities and those who speak languages other than the one in your video.

To read about best practices for accessibility, see Make your PowerPoint presentations accessible to people with disabilities . 

Beginning with version 2016, PowerPoint supports a simple caption file format called WebVTT. The video player in the following versions of PowerPoint can show those captions when you play the video:

  • PowerPoint 2016
  • PowerPoint 2019
  • PowerPoint 2021
  • PowerPoint for Microsoft 365

The closed captions are stored in a text-based file with a .vtt filename extension. You can create a closed caption file on your own or use a caption-creation tool. To find available tools and detailed instructions, search online for "create vtt file".
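A .vtt file can also be generated programmatically rather than typed by hand. A minimal Python sketch (the function names are illustrative and not part of PowerPoint or any Microsoft tool):

```python
def to_vtt_timestamp(seconds: float) -> str:
    """Format seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_vtt(cues) -> str:
    """Build WebVTT text from (start_sec, end_sec, text) cues."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{to_vtt_timestamp(start)} --> {to_vtt_timestamp(end)}")
        lines.append(text)
        lines.append("")  # blank line ends the cue
    return "\n".join(lines)
```

Writing the returned string to a file with a .vtt extension produces a caption file PowerPoint can insert.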

For instructions on showing captions when watching a video in these versions of PowerPoint, see Accessibility features in video and audio playback on PowerPoint .

Requirements for this feature

In Office 2016, the availability of the closed-captioning feature depends on the way Microsoft 365 was installed. Closed-captioning is only available for Office 2016  Click-to-Run installations; MSI-based installations don't have closed-captioning features. Read the next section to see whether the feature is available to your installation of PowerPoint 2016.

Check whether Microsoft 365 was installed using Click-to-Run or MSI

1. Open an Office 2016 application.

2. On the File menu, select Account.

3. For Office 2016 Click-to-Run installations, you'll have an Update Options button. MSI-based installations don't have an Update Options button; you'll see only the About <application name> button.

Click-to-Run installations have an Update Options button on the Account page. MSI-based installations don't have this button.

If you have an MSI-based installation of Office 2016, refer to the Office 2010-2013  tab of this article to see what captioning features are available to you.

Create closed captions

Prepare a text-based caption file with a .vtt filename extension before adding captions. For instructions on how to create the caption file, see Create closed captions for a video .

Add closed captions to a video

You can add captions to presentations that you've recorded with video narration, screen recordings, and any other video (except online videos) that you insert into PowerPoint.

1. In PowerPoint, in the Normal view, open the slide that has the video that you want to add captions to.

2. Select the video on the slide.

3. On the Playback tab, select Insert Captions, and then select Insert Captions.

Insert captions for a video in PowerPoint.

4. In the Insert Captions dialog box, select the file or files, and then click Insert.

5. If you need to add more caption files, repeat the process.

6. Play the video and check that the captions appear correctly.

Remove closed captions from a video

If you need to edit a closed caption file that is inserted in a video in PowerPoint, you can first remove the file, modify it, and then add it back to the video. Before removing the file from the PowerPoint video, make sure you have the original copy of the closed caption file stored on your PC.

If you have added more than one caption file to a video, the following process removes all caption files assigned to the video.

1. In PowerPoint, in the Normal view, open the slide that has the video containing the captions.

2. On the Playback tab, select Insert Captions, and then select Remove All Captions.

Remove all captions for a video in PowerPoint.

Additional ways to add closed captions

Beginning with version 2111, you can also insert closed captions from the Accessibility ribbon using the Insert Captions button.

Beginning with version 2208, you can also insert closed captions from the context menu that appears when you right-click on a video.

Tip:  If you're using Microsoft 365, you can also show live subtitles, including live translation to another language if you like, of your speech as you present. For more information, refer to Present with real-time, automatic captions or subtitles in PowerPoint .

Add captions to an audio file

Beginning with version 2303, you can insert closed captions for audio in the same way as for video.

Tip:  The closed captions will only be displayed on the slide that the audio file is inserted in even if the audio continues playing on other slides because the Play Across Slides setting is on.  

Related topics

Make your PowerPoint presentations accessible to people with disabilities

Create closed captions for a video

Beginning with version 16.63, PowerPoint for Mac supports closed captions in the WebVTT format. 

The closed captions are stored in a text-based file with a .vtt filename extension. You can create a closed caption file on your own or use a caption creation tool. To search online for available tools and detailed instructions, type "create vtt file" in your search engine .

For instructions on showing captions when watching a video in the supported versions of PowerPoint, refer to the section "Turn on closed captions or subtitles by using the keyboard" in the article Accessibility features in video and audio playback on PowerPoint .

Prepare a text-based caption file with a .vtt filename extension before adding captions. For instructions on how to create the caption file, refer to  Create closed captions for a video .

Insert captions for a video in PowerPoint.

In the Insert Captions dialog box, browse to your caption file. Select the file or files, and then select Insert.

If you need to edit a closed caption file that is inserted in a video in PowerPoint, you have to first remove the file, modify it, and then add it back to the video. Before removing the file from the PowerPoint video, make sure you have the original copy of the closed caption file stored on your computer.

Remove all captions for a video in PowerPoint.

You can also insert closed captions from the Accessibility ribbon using the Insert Captions button.

Beginning with version 16.64, you can also insert closed captions from the context menu that appears when you Control-click on a video.

Beginning with version 16.71, you can insert closed captions for audio in the same way as for video.

Tip:  The closed captions will only be displayed on the slide that the audio file is inserted in even if the audio continues playing on other slides because the Play Across Slides setting is on.   

Beginning with build 16.0.17201.40500, you can insert closed captions for embedded videos in PowerPoint for the web. Closed captions stored in a text-based file in WebVTT format with a .vtt filename extension are supported.

You can create a closed caption file on your own or use a caption-creation tool. To search online for available tools and detailed instructions, type "create vtt file" in your search engine .

On the Video tab, select Insert Captions , and then select Insert Captions .

Insert captions for a video in PowerPoint

In the Insert Captions dialog, select the file or files and then click Insert .

On the Video tab, select Insert Captions , and then select Remove All Captions .

Remove all captions for a video in PowerPoint

You can also insert closed captions from the context menu that appears when you right-click on a video.


Disney Channel Closed Caption Bumpers


1st Bumper (April 18, 1983-1986) [ ]

Bumper: On a black background a blue "TV speech balloon" symbol rotates and zooms in towards the top portion of the screen. Afterward, the white text "CLOSED CAPTIONED FOR THE HEARING IMPAIRED" swiftly zooms in with a curve and stops under the symbol, leaving behind a blue-colored trail of the text that soon catches up to it.

FX/SFX: The "TV speech balloon" symbol rotating and zooming in, and the text zooming/swinging in with a blue "trail" effect.

Music/Sounds: A violin sting is heard throughout, accompanied by an '80s-sounding fanfare. It's actually an excerpt from the track "Cloudburst" by Craig Palmer.

Availability: Extinct. The Disney Channel aired closed-captioned programs from its launch, and this bumper was seen before select movies like Pollyanna, Pete's Dragon, Robin Hood, Alice in Wonderland, and The Sword in the Stone. It was also seen on Request Television from 1985 to 1988, as well as Embassy Night at the Movies (1983-1988) and especially the 1986 VHS of A Chorus Line. Check your old tapes.

2nd ID (1986-1991) [ ]


Bumper: On a magenta/purple gradient background, we see three pink spheres that form the shape of Mickey Mouse's head. In front of them is the text "The following presentation is closed captioned for the hearing impaired."

FX/SFX: The screen fading in and out.

Music/Sounds/Voiceover: An announcer (Jerry Bishop) reads the onscreen text.

Availability: Same as the previous bumper.

3rd ID (1991-1994) [ ]


Bumper: On a shiny purple background with The Disney Channel's "Mickey TV" logo embossed on it, we see the white text "THE FOLLOWING PRESENTATION HAS BEEN CLOSED-CAPTIONED FOR THE HEARING IMPAIRED." Below the text is a silver "CC" logo.

Music/Sounds/Voiceover: The same Bishop voiceover reading the on-screen text.

Availability: Same as the previous bumpers.

4th ID ( Adventures in Wonderland variant) (March 1992-1994) [ ]

Bumper: On a grainy blue/orange background, a white "CC" symbol zooms into the center of the screen from the top-right. Afterward, the logo for the American Federation of Teachers slides in from behind it toward the top-left, followed by the logo for the National Education Association doing the same toward the bottom-right.

FX/SFX: The "CC" logo zooming out, and the AFT and NEA logos sliding in from behind it.

Music/Sounds/Voiceover: An announcer (Bishop again) says the following: "The following program is closed-captioned, and has been recommended for viewing by the American Federation of Teachers and the National Education Association."

Availability: Extinct; was seen only before Adventures in Wonderland . Check your old tapes.

5th ID (Kids variant) (1988-1994) [ ]


The Kids variant logo, used for kids' programs.

Logo: We see a purple background on the left with a black background on the right. The purple/orange Disney Channel logo appears at the top right. On the purple background, we see a green spinning square typing the words "The following presentation has been closed captioned for the hearing impaired". After the text is completed, the green square moves to the black background and turns into the "CC" logo.

Technique: The screen fading in and out.

Music/Sounds: An announcer (a younger-sounding voice this time) reads the text.

Availability: Same as the last bumper.

6th ID (1994-April 1997) [ ]

Bumper: On a black background, we see a yellow/orange "CC" logo positioned toward the left. To the right of it are the words "The following presentation has been CLOSED CAPTIONED". Below all that are the words "The Disney Channel" (with "Disney" colored blue and in its familiar script logotype).

Early Variant: The early version of the logo had a yellow/orange "CC" logo positioned towards the right. On the left, the text reads "The following presentation has been CLOSED CAPTIONED for the deaf and hard of hearing". Below all that are the words "The Disney Channel" (with "Disney" colored blue and in its familiar script logotype).

Music/Sounds/Voiceover: An announcer (either Jerry Bishop or a younger-sounding voice) reads the text.

Availability: Extinct; was seen before primetime programs during the mid-1990s. Check your old tapes.

7th ID (Kids variant) (1994-April 1997) [ ]

Bumper: On a black background, we see the white text "The following presentation has been CLOSED CAPTIONED" in the center of the screen. Above it is a multicolored box with a blue Mickey Mouse icon placed diagonally in it. Below the text is an orange "CC" logo.

Music/Sounds/Voiceover: An announcer (a younger-sounding voice this time) reads the text.

Availability: Extinct; was seen before afternoon children's programs during the mid-1990s. Check your old tapes.

8th ID (TKO variant) (1994-1997) [ ]


Bumper: On a white background, the words "The following presentation has been CLOSED CAPTIONED" are in the center of the screen, written as a child would with multicolored crayons. Above the text is the TKO block's logo, consisting of a cube, pyramid, and cylinder with the letters "T", "K", and "O", respectively, animated by rotating on alternating axes. Below the text, a ball of green construction paper grows and spreads out to form a "CC" logo (similar to Rugrats title cards).

FX/SFX: The "TKO" logo rotating and the "CC" logo materializing, all done in stop-motion.

Music/Sounds/Voiceover: Same as the last ID.

Availability: Extinct; was only seen during The Disney Channel's "TKO" (Totally Kids Only) weekday morning block.



Closed captions not embedded in exported video.

I've created a presentation with an inserted video using PowerPoint (Office 365). I added closed captions using VTT files, which works successfully; however, when I export the presentation as a video, the closed captions are not embedded.

Is this a known issue? Is there a workaround?


Use a screen reader to add closed captions to recorded PowerPoint presentations

This article is for people with visual or cognitive impairments who use a screen reader program such as Windows Narrator, JAWS, or NVDA with Microsoft 365 products. This article is part of the Microsoft 365 screen reader support  content set where you can find more accessibility information on our apps. For general help, visit  Microsoft Support .

Use PowerPoint with your keyboard and a screen reader to add closed captions to videos. We have tested it with Narrator, JAWS, and NVDA, but it might work with other screen readers as long as they follow common accessibility standards and techniques. With closed captions, you can open up your presentation to a larger audience, for example, people with hearing disabilities or those who speak languages other than the one in your video.

The video player in PowerPoint shows the captions when you play the video. For instructions, refer to the section "Turn on closed captions or subtitles by using the keyboard" in Accessibility features in video and audio playback on PowerPoint .

Closed captions are stored in a text-based file with a .vtt filename extension. You can create a closed caption file on your own or use a caption creation tool. For more info, refer to Create closed captions for a video . To search online for available tools and detailed instructions, type "create vtt file" in your search engine.

To learn which caption file types are supported, refer to Closed Caption file types supported by PowerPoint .

New Microsoft 365 features are released gradually to Microsoft 365 subscribers, so your app might not have these features yet. To learn how you can get new features faster, join the Office Insider program .

To learn more about screen readers, go to How screen readers work with Microsoft 365 .

In this topic

  • Add closed captions to a video
  • Remove captions from a video

You can add captions to presentations that you've recorded with video narration, screen recordings, and any other video except online videos that you insert into PowerPoint. Adding captions to a recorded presentation that has only audio narration is currently not supported.

Prepare a text-based captions file with a .vtt filename extension before adding captions. For instructions on how to create closed captions, refer to  Create closed captions for a video .

In PowerPoint, in the Normal view, navigate to the slide that has the video you want to add captions to. For instructions, refer to Use a screen reader to explore and navigate PowerPoint .

On the slide, press the Tab key until you hear the video announced.

With the focus on the video, press Alt+J, N, C, and then 2. You hear: "Captions options, Insert captions menu item."

Press Enter. The Insert Captions dialog box opens. The focus is on the File name: text field.

In the Insert Captions dialog box, press the Tab key and the arrow keys until you locate the captions file, and then press Spacebar.

To insert the captions file, press the Tab key until you hear "Insert, collapsed split button," and then press Enter. If you need to add another captions file, repeat steps 2 through 6.

If you need to edit a closed captions file that has been inserted into a video in PowerPoint, first remove the captions file, modify it, and then add it back to the video. Before removing the file from the PowerPoint video, make sure you have the original copy of the closed captions file stored on your PC or in online storage.

Note:  If you have added more than one captions file to a video, the following process removes all captions files from the video.

In PowerPoint, in the Normal view, navigate to the slide that has the video you want to remove captions from. For instructions, refer to Use a screen reader to explore and navigate PowerPoint .

With the focus on the video, press Alt+J, N, C, and then 2 to open the captions options menu. To remove the captions, press the Down arrow key until you hear "Remove all captions," and then press Enter. The closed captions are removed from the video.

Use a screen reader to create a presentation from a template in PowerPoint

Use a screen reader to insert and edit pictures and tables in PowerPoint

Use keyboard shortcuts to create PowerPoint presentations

Use keyboard shortcuts to deliver PowerPoint presentations

Basic tasks to create a presentation in PowerPoint with a screen reader

Set up your device to work with accessibility in Microsoft 365

Use a screen reader to explore and navigate PowerPoint

Technical support for customers with disabilities

Microsoft wants to provide the best possible experience for all our customers. If you have a disability or questions related to accessibility, please contact the Microsoft Disability Answer Desk for technical assistance. The Disability Answer Desk support team is trained in using many popular assistive technologies and can offer assistance in English, Spanish, French, and American Sign Language. Please go to the Microsoft Disability Answer Desk site to find out the contact details for your region.

If you are a government, commercial, or enterprise user, please contact the enterprise Disability Answer Desk .



Closed Captioning Guidelines for TV, Movies, and Video Platforms

Closed captioning is an essential way for over 6% of people worldwide to experience sound. These 466 million people, who are deaf or hard of hearing, rely on captions ( https://www.who.int/ ). Captions articulate sounds like dialogue, background noise, music, and other non-speech elements. With the help of captions, millions of people with hearing disabilities can watch educational, entertainment, and news content.

In the early 1970s, captioning technology became available and was tested at Gallaudet University. It was initially utilized by ABC News and PBS. By 1979, the National Captioning Institute had been formed, with a mission to provide captions for the deaf and hard of hearing.

Captioning in the United States is required for TV, movies, online media, and VOD services. 

What Is Closed Captioning? 

Let’s unpack the key differences between closed captioning, open captioning, and subtitles. Subtitles translate dialogue and spoken audio from another language. Unlike captions, subtitles don’t articulate extra sounds like background noises. Subtitles, then, are for audiences who can hear but don’t understand the on-screen language.

Open and closed captioning serve the same goal: they render the audio as text for viewers who are deaf or hard of hearing. The difference is that closed captions can be switched on or off, while open captions are burned into the video itself. In other words, closed captioning gives viewers the option to show or hide captions; open captioning imprints the text permanently on the screen.

Guidelines for Captioning

We covered the importance of captioning; now let’s look at the guidelines behind it. Guidelines published by the DCMP, FCC, and WCAG provide principles for successful captioning, and by following these standards you’ll make sure your captions meet the requirements. Don’t worry if this all seems daunting; we outline the key takeaways for each organization below.

DCMP Guidelines

The DCMP is completely funded by the Department of Education. Its “elements of quality captioning” include accuracy, consistency, clarity, readability, and equal access. These elements are also referenced in mandates by the FCC in 2014. The guidelines listed appear in the Captioning Key , published by the DCMP. 

  • Accurate – The goal is to provide captions without errors for each production.  
  • Consistent – Maintain a uniform presentation and style to accommodate viewer comprehension. 
  • Clear – All forms of audio must be represented in captioning. Audio should include dialogue, noises, and other non-speech sound elements.   
  • Readable – Captions must mirror the audio as it unfolds on screen. Viewers should have enough time to read the text, and it should be presented in a way that doesn’t conceal visual content. 
  • Equal – Material must convey its original message for equal access. Content should also appear in its entirety.  

FCC Guidelines

Since the DCMP sources its elements from the FCC’s 2014 mandate, the two share similar models. The FCC’s closed captioning guidelines have adapted with the times: the 21st Century Communications and Video Accessibility Act expanded them to cover online content. Programs that appear on TV in the U.S., even when divided into “video clips,” must include captions. The FCC’s website lists the following standards.

  • Accurate – Captions must describe accurate audio, including dialogue, background noises, and other sounds.  
  • Synchronous – The audio described in captioning should sync up with the content’s pacing. Captions must also appear long enough for viewers to read. 
  • Complete – The captions should cover the entirety of content from beginning to end.  
  • Properly placed – The captions should not obstruct the visuals on the screen. They also shouldn’t appear off-screen or overlap other captions.    

The FCC enforces Title II of the Americans with Disabilities Act (ADA). The ADA states , “A public entity shall take appropriate steps to ensure that communications with applicants, participants, members of the public, and companions with disabilities are as effective as communications with others.” 

WCAG Guidelines

Since 2013, ADA-related Title III lawsuits have surged 182%. Non-government entities aren’t legally subject to WCAG guidelines. That said, eCommerce websites should err on the side of caution: Target faced a lawsuit, settled in 2008, because its website was not ADA compliant.

The WCAG guidelines exist in three versions: WCAG 1.0, WCAG 2.0, and WCAG 2.1. The U.S. Access Board updated its standards for online captioning in 2017; the update is detailed in Section 508 of the Rehabilitation Act, which reflects WCAG 2.0. Keep in mind that WCAG 2.1 is backward compatible: if you follow the newest set of guidelines, you won’t also need to check the previous versions. The guidelines have three levels of compliance: A, AA, and AAA. More information on WCAG practices can be found at the World Wide Web Consortium (W3C).

  • Perceivable – Content should adapt to various forms of presentation, including assistive technologies. Multimedia must be accompanied by captions and text alternatives.
  • Operable – Content must be operable by keyboard, with usable inputs for other devices. Users must be given enough time to experience content. Visual aids and identifiers should make content easy to locate. 
  • Understandable – The text on the screen must be legible. Appearance and operations must be predictable.  
  • Robust – Ensure current and future compatibility with user tools. 

Standards for Media Platforms

Different forms of media are subject to different guidelines. These formats include live TV, movies, and VOD services like Netflix and YouTube.

Live TV 

Captions for live broadcasts like news programs briefly trail behind the action on the screen, and the FCC acknowledges the reality of this delay. Stenographers type in real time to render the audio as text for deaf and hard-of-hearing viewers. Under the WCAG 2.0 “Success Criterion” 1.2.4 Captions (Live), any content that’s broadcast live on TV must carry captions. The FCC echoes these requirements while also covering near-live programming.

Yet online-only web apps are not required to create captions; the FCC notes that “responsibility for providing captions would fall to the content providers.” For clips of live programming posted online, the FCC allows a delay of 12 hours after the content initially appears on TV.

Movies 

The FCC requires movies that appear on television to include captions. Captions aren’t mandatory for movies online that haven’t appeared on TV. If you go to your local movie theater, chances are you haven’t encountered captions on the big screen. That’s because there are no regulations that compel captioning for first-run cinema. The DCMP lists systems for captioning movies and theaters that offer captions. 

Presentation rates for adults and children differ under DCMP requirements. Cultural movies for adults “should be captioned at a near-constant rate,” but captions should remain on screen for at least two seconds and not exceed 225 wpm. The DCMP advises that captioning for children’s cultural films should occur “at a rate of 150 wpm.”
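These presentation-rate limits can be checked mechanically. The sketch below is illustrative, not part of the DCMP guidelines; the helper name `caption_ok` and its default thresholds simply encode the two adult limits quoted above (at least two seconds on screen, at most 225 wpm):

```python
def caption_ok(text: str, start: float, end: float,
               min_duration: float = 2.0, max_wpm: float = 225.0) -> bool:
    """Check one caption cue against the quoted DCMP adult limits.

    start and end are cue times in seconds. The cue passes only if it
    stays on screen at least min_duration seconds and its reading rate
    does not exceed max_wpm words per minute.
    """
    duration = end - start
    if duration < min_duration:
        return False
    words = len(text.split())
    wpm = words / (duration / 60.0)  # words per minute
    return wpm <= max_wpm

# A 10-word cue shown for 3 seconds runs at 200 wpm: within the limit.
print(caption_ok("This is a ten word caption used for the demo", 0.0, 3.0))
```

For children’s content, the same helper could be called with max_wpm=150 to reflect the stricter rate the DCMP advises.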

Streaming Services 

VOD services like Amazon Video, Hulu, and YouTube are subject to captioning law. Like television, streaming services must provide closed captions for licensed and original content. On its website, the W3C states that, under WCAG 2.0, media outlets should include closed captioning. Some precedents have been set by class-action lawsuits rather than by recommended guidelines.

The Rise of Netflix Captioning

The National Association of the Deaf sued Netflix in 2011 for “failing to provide adequate closed captioning,” a violation of the ADA. Since then, Netflix has come to furnish captions in over 20 languages. Following the lawsuit, Netflix’s captioning services embraced a higher standard of quality, and the company now strictly follows a detailed style guide. Netflix recently violated DCMP guidelines when it censored captions in its Queer Eye series, but it has since acknowledged and corrected the mistake and now conforms with ADA policies.

YouTube’s Answer to Captioning

Over 72 hours of content is uploaded to YouTube every minute. YouTube offers two ways to caption your videos. First, you can create captions manually: open your video, navigate to the Advanced tab, and either write your captions on a timeline or upload a completed caption file. The second way is the easier of the two but less effective: you can enable automatic captioning, which YouTube generates with voice recognition technology. Because of YouTube’s low accuracy rate, automatic captions will most likely fall short of the required WCAG 2.0 AA standards; the errors can change the meaning of on-screen content, which violates DCMP guidelines.

Captioning with Amazon Video  

Amazon had captioned its entire Prime Video library by 2015, and its new original content now offers captioning at launch. The company works with studios to provide audio descriptions for the content it receives. In all, Amazon adheres to WCAG 2.0 guidelines at AA compliance level.

Hulu Captioning

Hulu provides closed captioning for its licensed and original content in accordance with FCC guidelines. In 2016, Hulu experienced its own legal woes: in a settlement with the NAD, Hulu agreed to update its content with captions to comply with ADA policy.

Captions are customizable on Hulu: you can resize the captioning text and modify its color and font style. Hulu offers a Netflix-like standard service and a Hulu + Live TV option. Since Hulu + Live TV broadcasts live content, including news and sporting events, it must also provide closed captions as required by the FCC.

It’s important for the millions of people with hearing disabilities to have equal access to content. By complying with ADA standards, we’re able to help a growing populace that relies on captions. VOD services are overtaking traditional TV providers in popularity, and as they continue to grow, so will their responsibility for accurate captions.

Want captions added directly to your videos? Rev now offers burned-in captions (open captions).  Just check the “burned-in captions” box at checkout and you’ll receive a video with permanent, hard-coded captions added straight to your videos . Also available for foreign language subtitles!


Video Captions Benefit Everyone

Morton Ann Gernsbacher

1 University of Wisconsin–Madison, USA

Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions.

Introduction

Imagine a technique that can improve children’s reading skills ( Linebarger, Piotrowski, & Greenwood, 2010 ), boost adolescents’ written and spoken vocabulary ( Davey & Parkhill, 2012 ), increase college students’ attention to lectures ( Steinfeld, 1998 ), enhance second-language learners’ pronunciation ( Mitterer & McQueen, 2009 ), and raise literacy rates in developing countries ( Kothari, Takeda, Joshi, & Pandey, 2002 ). The technique is simple: Display captions on videos.

Captions are like foreign-language subtitles; they translate a spoken language into a written language ( Garza, 1991 ). Like foreign-language subtitles, captions appear at the bottom of the screen. Unlike foreign-language subtitles, captions translate into writing the same language that is heard in speaking, which is why captions are also called same-language subtitles. Captions also translate sound effects (“raindrops falling,” “footsteps approaching,” “horses galloping”); captions transcribe song lyrics, and captions offer other helpful clues, such as identifying conversational partners by their name and indicating off-screen voices with italics.

More than 100 empirical studies, listed in the appendix , document the benefits of captions. These studies report benefits to a wide swath of participants as measured by a wide swath of criteria: summarizing main ideas ( Markham, 2000–2001 ), recalling facts ( Brasel & Gips, 2014 ), drawing inferences ( Linebarger et al., 2010 ), defining words ( Griffin & Dumestre, 1992–1993 ), identifying emotions ( Murphy-Berman & Whobrey, 1983 ), and of course, answering multiple-choice comprehension questions ( Hinkin, Harris, & Miranda, 2014 ; Markham & Peter, 2002–2003 ; Murphy-Berman & Jorgensen, 1980 ).

Eye-movement studies document that captions are read easily ( d’Ydewalle & de Bruycker, 2007 ), attended to effortlessly ( d’Ydewalle, Praet, Verfaillie, & van Rensbergen, 1991 ), and integrated smoothly with the soundtrack of the video ( d’Ydewalle & Gielen, 1992 ). Standard verbatim captions are as effective as more detailed or elaborated captions ( Anderson-Inman, Terrazas-Arellanes, & Slabin, 2009 ; Murphy-Berman & Jorgensen, 1980 ).

The numerous empirical studies referenced in the appendix demonstrate that captions benefit everyone who watches videos, from younger children to older adults. Captions are particularly beneficial to persons watching videos in their non-native language, children and adults learning to read, and persons who are D/deaf or hard of hearing, as illustrated below.

Captions Benefit Persons Who Are D/deaf or Hard of Hearing

The early 20th century’s golden age of cinema had created a level playing field for D/deaf and hard of hearing viewers. Silent films, with their interwoven screens of captions (called intertitles), created “the one brief time that deaf and hard of hearing citizens had comparatively equal access to motion pictures” ( Schuchman, 2004 , p. 231). But in the late 1920s, as talkies (films with synchronized speech) pushed out silent films, the D/deaf community was shut out.

In response, the D/deaf community created captions ( Downey, 2010 ), first by recapitulating the intertitles of the silent film era and then by reconfiguring the bottom-of-the-screen foreign-language subtitles that carried U.S. films across the world. In the late 1950s, U.S. President Eisenhower authorized a federal Captioned Films for the Deaf agency (as “part of the post-Sputnik, cold war education boom,” Downey, 2008 , p. 193).

Captions began appearing on television shows in the 1970s (with their earliest appearances on ABC’s Mod Squad and PBS’s The French Chef ; Withrow, 1994 ). In the 1980s, a handful of television shows began displaying captions in real time (e.g., the launch of the space shuttle Columbia and the acceptance speeches at the Academy Awards; Block & Okrand, 1983 ). By the 1990s, captions on TV shows were mandated by the U.S. law ( Erath & Larkin, 2004 ). The Twenty-First Century Communications and Video Accessibility Act of 2010 requires that captioned TV shows also be captioned when displayed on the Internet.

It is unsurprising that captions benefit persons who are D/deaf or hard of hearing. But early experiments demonstrating that captions benefit D/deaf persons demonstrated something further: Captions also benefit hearing persons. For example, Figure 1 displays the results of a study by Nugent (1983) . More than 30 D/deaf children and nearly 100 hearing children (9–14 years old) were randomly assigned to one of four conditions: watch a video with audio but without captions; read only the captions; watch the video with audio and with captions; or read and watch nothing, thereby serving as a control group.

Figure 1. Data from Nugent (1983).

The children’s scores on a 23-item comprehension test are illustrated in Figure 1 . Statistical analyses identified two main effects: a main effect of hearing status (hearing children scored higher on the comprehension test than D/deaf children) and a second, even more powerful, main effect of captioning. A lack of a statistical interaction between hearing status and captioning indicated that captions were as beneficial to the hearing children as they were to the D/deaf children.

Several other studies demonstrate the same effect: Video with audio and with captions leads to the highest levels of comprehension, both for D/deaf children and for hearing children ( Anderson-Inman et al., 2009 ; Boyd & Vader, 1972 ; Cambra, Leal, & Silvestre, 2010 ; Fischer, 1971 ; Gulliver & Ghinea, 2003 ; Hertzog, Stinson, & Keiffer, 1989 ; Murphy-Berman & Jorgensen, 1980 ; Murphy-Berman & Whobrey, 1983 ; Nugent, 1983 ; Steinfeld, 1998 ; Yoon & Choi, 2010 ).

Captions Benefit Hearing Children Learning to Read

Even for hearing children, learning to read is a complex process, which requires learning to map sound and meaning onto text ( Linebarger, 2001 ). Soon after captions began appearing on TV shows for D/deaf audiences, educators of hearing children made a striking discovery: Because captions explicitly illustrate the mapping among sound, meaning, and text, captions could also benefit hearing children learning to read ( Adler, 1985 ; Kirkland, Byrom, MacDougall, & Corcoran, 1995 ; Koskinen, Wilson, & Jensema, 1986 ; Parkhill, Johnson, & Bates, 2011 ).

For example, Figure 2 displays the results of a study of 70 hearing children learning to read ( Linebarger et al., 2010 ). Second and third graders were randomly assigned either to watch videos with audio but without captions or to watch videos with audio and with captions. The children watched six ½-hr videos, which were episodes of PBS children’s shows (e.g., Arthur & Friends, Magic School Bus, Zoom ).

Figure 2. Data from Linebarger, Piotrowski, and Greenwood (2010).

As Figure 2 illustrates, watching videos with audio and captions leads to significantly better reading skills. Children who watch captioned videos are better able to define content words that were heard in the videos, pronounce novel words, recognize vocabulary items (which may or may not have been heard in the videos), and draw inferences about what happened in the videos. Other studies demonstrate cumulative benefits from watching videos with captions, for example, cumulative growth in vocabulary both for hearing children ( Koskinen et al., 1986 ) and for hearing adults ( Griffin & Dumestre, 1992–1993 ).

Captions Benefit Hearing Adults

After discovering that captions benefit hearing children learning to read, researchers investigated whether captions also benefit hearing adults learning to read. They do ( Koskinen, Knable, Markham, Jensema, & Kane, 1995–1996 ; Kothari, Pandey, & Chudgar, 2004 ; Kruger, Kruger, & Verhoef, 2007 ).

For example, in the late 1990s, researchers encouraged India’s national television network to begin captioning popular Bollywood music videos, which were sung and captioned in Hindi. The literacy of thousands of adults was assessed before the captioned music videos began airing and several years later. The literacy of adults who frequently watched the captioned videos increased at a much greater pace than the literacy of adults who rarely or never watched the captioned videos ( Kothari & Bandyopadhyay, 2014 ).

Even highly literate adults benefit from captions. For example, when highly literate adults watch television commercials that are captioned, they remember brand names better ( Brasel & Gips, 2014 ), and when highly literate college students watch course lectures that are captioned, they remember course content better ( Steinfeld, 1998 ). Captions benefit hearing adults, just as captions benefit hearing children.

Captions Benefit Hearing Persons Learning a Second Language

Captions for D/deaf persons were co-opted from foreign-language subtitles for hearing persons. In the early 1980s, as captions for D/deaf persons became more prominent, second-language instructors began re-co-opting captions for hearing persons, to improve second-language literacy ( Price, 1983 ; Vanderplank, 2013 ). Scores of studies demonstrate that captions in a second language benefit hearing persons learning that second language; indeed, captions in a second language benefit hearing persons learning that second language even more than captions in the persons’ native language.

For example, Figure 3 displays the results from nearly 150 Japanese junior college and university students learning English as a second language ( Yoshino, Kano, & Akahori, 2000 ). The students watched three types of videos: videos with English audio but without any captions, videos with English audio and Japanese captions, and videos with English audio and English captions. In a fourth condition, the students listened to only the English audio.

Figure 3. Data from Yoshino, Kano, and Akahori (2000).

After watching each type of video (or listening to only the audio) twice, in counterbalanced order, the students recalled as much content as they could using either Japanese or English. The students recalled substantially more content after they watched the videos with English captions than after they watched the same videos with Japanese captions. In fact, after watching the videos with Japanese captions, the students recalled as little as they recalled after not watching the videos at all (the audio-only condition).

Captions (same-language subtitles) also improve second-language learners’ listening comprehension. Figure 4 displays data from University of Southern California students learning English as a second language ( Huang & Eskey, 1999–2000 ). The students were randomly assigned to watch videos with English audio and English captions or with English audio but without captions. Watching videos with English captions not only improved the students’ performance when tested with a written comprehension test, but also improved the students’ performance when tested with an auditory, listening, comprehension test.

Figure 4. Data from Huang and Eskey (1999–2000).

Captions benefit hearing persons learning a second language, regardless of genre. Figure 5a displays data from 70 college students learning English as a second language, and Figure 5b displays data from 40 English-speaking college students learning Russian as a second language ( Garza, 1991 ). The students learning English were randomly assigned to watch videos with English audio, with or without English captions. The students learning Russian were randomly assigned to watch videos with Russian audio, with or without Russian captions.

Figure 5. Data from Garza (1991).

As both Figures 5a and 5b illustrate, watching videos with same-language captions leads to significantly better comprehension. Captions benefit comprehension, regardless of the language being learned (Russian or English) and regardless of the genre being watched, from documentaries (The Sharks) to dramas (Hoosiers) to animations (An American Tail) to comedies (The Secret of My Success) to music videos (The Authority Song).

What Are the Policy Implications?

The empirical evidence is clear: Captions, also known as same-language subtitles, benefit everyone who watches videos. More than 100 studies document that captioning a video improves comprehension of, memory for, and attention to videos, for children, adolescents, college students, and adults. Although captions particularly benefit persons watching videos in their non-native language, children and adults learning to read, and persons who are D/deaf or hard of hearing, captions also benefit highly literate, hearing adults.

With so many studies documenting the benefits of captions, why does everyone not always turn on the captions every time they watch a video? Regrettably, the benefits of captions are not widely known. Some researchers are unaware of the wide-ranging benefits of captions because the empirical evidence is published across separate literatures (deaf education, second-language learning, adult literacy, and reading acquisition). Bringing together these separate literatures is the primary purpose of this article.

Reaping the benefits of captions is also impeded by erroneous attitudes (e.g., Weasenforth, 1994 ). Many people think captions are intended for, and therefore only beneficial to, persons who are D/deaf. For example, in a survey of several hundred K-12 educators across 45 U.S. states, almost all of whom were experienced teachers who frequently showed videos in their classroom, the majority had never turned on the captions on those videos. The minority who had, reported their students having reaped benefits from the captions ( Bowe & Kaufman, 2001 ).

Similarly, faculty and administrators in higher education are unlikely to be aware of the benefits of captions for university students, despite the fact that captions perfectly illustrate the fundamental principle of Universal Design. Like curb cuts and elevators, captions were initially developed for persons with disabilities, and, like curb cuts and elevators, captions benefit persons with and without disabilities. Indeed, the overwhelmingly vast majority of persons who benefit from curb cuts and elevators are not persons with disabilities, and the same could be true for captions.

The Institute of International Education reports that international students are enrolling in U.S. colleges and universities at an all-time high, a whopping 72% increase in only the past decade. Nearly a third of the international students studying in the United States are from China (Redden, 2014). Given the increasing number of students in U.S. institutions of higher education who are not native English speakers and given the powerful benefits of captions to non-native speakers, it would behoove professors to turn on captions.

Unfortunately, a primary reason that everyone who watches videos is not benefitting from captions is that not all videos are captioned. Despite U.S. laws, which cover many workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions. Some organizations rely solely on automatically generated captions (e.g., the auto-generated captions found on many YouTube videos).

However, as recent litigation (Orzeck, 2015) as well as empirical data (Pan, Jiang, Yao, Picheny, & Qin, 2010) demonstrate, captions generated via automated speech recognition are not yet without interfering error. When auto-generated captions reach parity with human-transcribed captions, further technologies, including real-time captioning of lectures for all students (Bain, Basson, Faisman, & Kanevsky, 2005), will be able to harness the power of captions for the broadest population ever.

  • Captions benefit everyone who watches videos, from younger children to older adults.
  • Captions are particularly beneficial to persons watching videos in their non-native language, children and adults learning to read, and persons who are D/deaf or hard of hearing.
  • Captions generated via automated speech recognition are not yet without interfering error, but when auto-generated captions reach parity with human-transcribed captions, technology will be able to harness the power of captions.
  • Despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions.

Acknowledgments

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by Vilas Research Trust.

Benefits of Captions: D/Deaf and Hard of Hearing Children, Adolescents, and Adults

  • Anderson-Inman L, Terrazas-Arellanes FE, Slabin U. Supported eText in captioned videos: A comparison of expanded versus standard captions on student comprehension of educational content. Journal of Special Education Technology. 2009; 24 :21–34. [ Google Scholar ]
  • Austin BA. The deaf audience for television. Journal of Communication. 1980; 30 :25–30. [ PubMed ] [ Google Scholar ]
  • Bain K, Basson S, Faisman A, Kanevsky D. Accessibility, transcription, and access everywhere. IBM Systems Journal. 2005; 44 :589–603. [ Google Scholar ]
  • Boyd J, Vader EA. Captioned television for the deaf. American Annals of the Deaf. 1972; 117 :34–37. [ PubMed ] [ Google Scholar ]
  • Braverman BB, Harrison MF, Bowker DO, Hertzog M. Effects of language level and visual display on learning from captioned instruction. Educational Communication and Technology Journal. 1981; 29 :147–154. [ Google Scholar ]
  • Burnham D, Leigh G, Noble W, Jones C, Tyler M, Grebennikov L, Varley A. Parameters in television captioning for deaf and hard-of-hearing adults: Effects of caption rate versus text reduction on comprehension. Journal of Deaf Studies and Deaf Education. 2008; 13 :391–404. [ PubMed ] [ Google Scholar ]
  • Caldwell DC. Use of graded captions with instructional television for deaf learners. American Annals of the Deaf. 1973; 118 :500–507. [ PubMed ] [ Google Scholar ]
  • Cambra C, Leal A, Silvestre N. How deaf and hearing adolescents comprehend a televised story. Deafness & Education International. 2010; 12 :34–51. [ Google Scholar ]
  • Cambra C, Silvestre N, Leal A. Comprehension of television messages by deaf students at various stages of education. American Annals of the Deaf. 2009; 153 :425–434. [ PubMed ] [ Google Scholar ]
  • Carney E, Verlinde R. Caption decoders: Expanding options for hearing impaired children and adults. American Annals of the Deaf. 1987; 132 :73–77. [ PubMed ] [ Google Scholar ]
  • Dowaliby FJ, Enders M, Schragle P, Verlinde R. A comparison of captioned, classroom, and prose instruction for hearing-impaired learners. American Annals of the Deaf. 1984; 129 :375–377. [ PubMed ] [ Google Scholar ]
  • Fischer DC. Unpublished doctoral dissertation. University of Nebraska; Lincoln: 1971. Improvement in the utilization of captioned films for the deaf. [ Google Scholar ]
  • Franco EPC, Araújo VLS. Reading television: Checking deaf people’s reactions to closed subtitling in Fortaleza, Brazil. The Translator. 2003; 9 :249–267. [ Google Scholar ]
  • Gulliver SR, Ghinea G. How level and type of deafness affect user perception of multimedia video clips. Universal Access in the Information Society. 2003; 2 :374–386. [ Google Scholar ]
  • Hertzog M, Stinson MS, Keiffer R. Effects of caption modification and instructor intervention on comprehension of a technical film. Educational Technology Research & Development. 1989; 37 :59–68. [ Google Scholar ]
  • Jelinek Lewis MS, Jackson DW. Television literacy: Comprehension of program content using closed captions for the deaf. Journal of Deaf Studies and Deaf Education. 2001; 6 :43–53. [ PubMed ] [ Google Scholar ]
  • Jensema CJ, El Sharkawy S, Danturthi RS, Burch R, Hsu D. Eye movement patterns of captioned television viewers. American Annals of the Deaf. 2000; 145 :275–285. [ PubMed ] [ Google Scholar ]
  • Kirkland CE. Evaluation of captioning features to inform development of digital television captioning capabilities. American Annals of the Deaf. 1999; 144 :250–260. [ PubMed ] [ Google Scholar ]
  • Koskinen PS, Wilson RM, Jensema CJ. Using closed-captioned television in the teaching of reading to deaf students. American Annals of the Deaf. 1986; 131 :43–46. [ PubMed ] [ Google Scholar ]
  • Lang HG, Steely D. Web-based science instruction for deaf students: What research says to the teacher. Instructional Science. 2003; 31 :277–298. [ Google Scholar ]
  • Loeterman M, Kelly RR, Samar VJ, Parasnis I, Berent GP. Personal captioning for students with language-related learning needs. Paper presented at the annual meeting of the American Educational Research Association; New Orleans, LA. 1994. Apr, [ Google Scholar ]
  • Marschark M, Leigh G, Sapere P, Burnham D, Convertino C, Stinson M, … Noble W. Benefits of sign language interpreting and text alternatives for deaf students’ classroom learning. Journal of Deaf Studies and Deaf Education. 2006; 11 :421–437. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • McCoy E, Shumway R. Real-time captioning: Promise for the future. American Annals of the Deaf. 1979; 124 :681–690. [ PubMed ] [ Google Scholar ]
  • Murphy-Berman V, Jorgensen J. Evaluation of a multi-level linguistic approach to captioning television for hearing impaired children. American Annals of the Deaf. 1980; 125 :1072–1081. [ PubMed ] [ Google Scholar ]
  • Murphy-Berman V, Whobrey L. The impact of captions on hearing-impaired children’s affective reactions to television. The Journal of Special Education. 1983; 17 :47–62. [ Google Scholar ]
  • Norwood MJ. Captioned films for the deaf. Exceptional Children. 1976; 43 :164–166. [ PubMed ] [ Google Scholar ]
  • Norwood MJ. Just don’t scramble the wrong egg. In: Braverman B, Cronin BJ, editors. Captioning: Shared perspectives. Rochester, NY: National Technical Institute for the Deaf; 1980. pp. 1–9. [ Google Scholar ]
  • Nugent GC. Deaf students’ learning from captioned instruction: The relationship between the visual and caption display. The Journal of Special Education. 1983; 17 :227–234. [ Google Scholar ]
  • Orzeck K. Deaf advocates sue Harvard, MIT for better webcast captions. Law360. 2015. Retrieved from http://www.law360.com/articles/621255/deaf-advocates-sue-harvard-mit-for-better-webcast-captions
  • Schuchman JS. The silent film era: Silent films, NAD films, and the deaf community’s response. Sign Language Studies. 2004; 4 :231–238. [ Google Scholar ]
  • Shroyer EH, Birch J. Captions and reading rates of hearing-impaired students. American Annals of the Deaf. 1980; 125 :916–922. [ PubMed ] [ Google Scholar ]
  • Steinfeld A. The benefit of real-time captioning in a mainstream classroom as measured by working memory. Volta Review. 1998; 100 :29–44. [ Google Scholar ]
  • Stinson MS, Elliot LB, Kelly RR, Liu Y. Deaf and hard-of-hearing students’ memory of lectures with speech-to-text and interpreting/notetaking services. The Journal of Special Education. 2009; 43 :52–64. [ Google Scholar ]
  • Strassman BK, O’Dell K. Using open captions to revise writing in digital stories composed by D/deaf and hard of hearing students. American Annals of the Deaf. 2012; 157 :340–357. [ PubMed ] [ Google Scholar ]
  • Thorn F, Thorn S. Television captions for hearing-impaired people: A study of key factors that affect reading performance. Human Factors. 1996; 38 :452–463. [ PubMed ] [ Google Scholar ]
  • Yoon J-O, Choi H. The effects of captions on deaf students’ contents comprehension, cognitive load and motivation in online learning. Paper presented at the Technology and Deaf Education Symposium: Exploring Instructional and Access Technologies; Rochester, NY. 2010. Jun, [ PubMed ] [ Google Scholar ]

Benefits of Captions: Hearing Children and Adolescents

  • Adler R. Using closed-captioned television in the classroom. In: Gambrell L, McLaughlin E, editors. New directions in reading: Research and practice. Silver Spring, MD: Yearbook of the State of Maryland International Reading Association; 1985. pp. 11–18. [ Google Scholar ]
  • Bowe FG, Kaufman A. Captioned media: Teacher perceptions of potential value for students with no hearing impairments: A national survey of special educators. Spartanburg, SC: Described and Captioned Media Program; 2001. [ Google Scholar ]
  • Davey R, Parkhill F. Raising adolescent reading achievement: The use of sub-titled popular movies and high interest literacy activities. English in Aotearoa. 2012; 78 :61–71. [ Google Scholar ]
  • Goldman M, Goldman S. Reading with close-captioned TV. Journal of Reading. 1988; 31 :458–461. [ Google Scholar ]
  • Kirkland CE, Byrom EM, MacDougall MA, Corcoran MD. The effectiveness of television captioning on comprehension and preference. Paper presented at the annual meeting of the American Educational Research Association; San Francisco, CA. 1995. Apr, [ Google Scholar ]
  • Koskinen PS, Wilson RM, Gambrell LB, Neuman SB. Captioned video and vocabulary learning: An innovative practice in literacy instruction. The Reading Teacher. 1993; 47 :36–43. [ Google Scholar ]
  • Koskinen PS, Wilson RM, Jensema CJ. Closed-captioned television: A new tool for reading instruction. Reading World. 1985; 24 :1–7. [ Google Scholar ]
  • Koskinen P, Wilson RM, Gambrell LB, Jensema C. Using closed captioned television to enhance reading skills of learning disabled students. National Reading Conference Yearbook. 1986; 35 :61–65. [ Google Scholar ]
  • Kothari B, Bandyopadhyay T. Same language subtitling of Bollywood film songs on TV: Effects on literacy. Information Technologies & International Development. 2014; 10 :31–47. [ Google Scholar ]
  • Kothari B, Takeda J. Same language subtitling for literacy: Small change for colossal gains. In: Bhatnagar SC, Schware R, editors. Information and communication technology in development. New Delhi, India: SAGE; 2000. pp. 176–186. [ Google Scholar ]
  • Kothari B, Takeda J, Joshi A, Pandey A. Same language subtitling: A butterfly for literacy? International Journal of Lifelong Education. 2002; 21 :55–66. [ Google Scholar ]
  • Linebarger DL. Learning to read from television: The effects of using captions and narration. Journal of Educational Psychology. 2001; 93 :288–298. [ Google Scholar ]
  • Linebarger D, Piotrowski JT, Greenwood CR. On-screen print: The role of captions as a supplemental literacy tool. Journal of Research in Reading. 2010; 33 :148–167. [ Google Scholar ]
  • Mechling L. The effect of instructor-created video programs to teach students with disabilities: A literature review. Journal of Special Education Technology. 2005; 20 :25–36. [ Google Scholar ]
  • Parkhill F, Davey R. We enjoyed it and we learned at the same time! Practically Primary. 2012; 17 :8–11. [ Google Scholar ]
  • Parkhill F, Johnson J, Bates J. Capturing literacy learners: Evaluating a reading programme using popular novels and films with subtitles. Digital Culture & Education. 2011; 3 :140–156. [ Google Scholar ]
  • Rickelman RJ, Henk WA, Layton K. Closed-captioned television: A viable technology for the reading teacher. The Reading Teacher. 1991; 44 :598–599. [ Google Scholar ]

Benefits of Captions: Hearing Adults

  • Bean RM, Wilson RM. Using closed captioned television to teach reading to adults. Reading Research and Instruction. 1989; 28 :27–37. [ Google Scholar ]
  • Brasel SA, Gips J. Enhancing television advertising: Same-language subtitles can improve brand recall, verbal memory, and behavioral intent. Journal of the Academy of Marketing Science. 2014; 42 :322–336. [ Google Scholar ]
  • d’Ydewalle G, de Bruycker W. Eye movements of children and adults while reading television subtitles. European Psychologist. 2007; 12 :196–205. [ Google Scholar ]
  • d’Ydewalle G, Gielen I. Attention allocation with overlapping sound, image, and text. In: Rayner K, editor. Eye movements and visual cognition: Scene perception and reading. New York, NY: Springer; 1992. pp. 415–427. [ Google Scholar ]
  • d’Ydewalle G, Praet C, Verfaillie K, van Rensbergen J. Watching subtitled television: Automatic reading behavior. Communication Research. 1991; 18 :650–666. [ Google Scholar ]
  • Findlater L, Balakrishnan R, Toyama K. Comparing semiliterate and illiterate users’ ability to transition from audio + text to text-only interaction. Paper presented at CHI; 2009; Boston, MA. 2009. Apr, [ Google Scholar ]
  • Griffin R, Dumestre J. An initial evaluation of the use of captioned television to improve the vocabulary and reading comprehension of navy sailors. Journal of Educational Technology Systems. 1992–1993; 21 :193–206. [ Google Scholar ]
  • Hinkin MP, Harris RJ, Miranda AT. Verbal redundancy aids memory for filmed entertainment dialogue. The Journal of Psychology. 2014; 148 :161–176. [ PubMed ] [ Google Scholar ]
  • Kothari B. Let a billion readers bloom: Same language subtitling (SLS) on television for mass literacy. International Review of Education. 2008; 54 :773–780. [ Google Scholar ]
  • Kothari B, Pandey A, Chudgar AR. Reading out of the “idiot box”: Same-language subtitling on television in India. Information Technologies & International Development. 2004; 2 :23–44. [ Google Scholar ]
  • Kruger JL, Kruger H, Verhoef M. Subtitling and the promotion of multilingualism: The case of marginalised languages in South Africa. Linguistica Antverpiensia. 2007; 6 :35–49. [ Google Scholar ]

Benefits of Second-Language Captions: Hearing College Students

  • Alkhatnai M. The effect of TV captions on the comprehension of non-native Saudi learners of English. Sino-US English Teaching. 2012; 9 :1573–1579. [ Google Scholar ]
  • Al-Seghayer K. The effect of multimedia annotation modes on L2 vocabulary acquisition: A comparative study. Language Learning & Technology. 2001; 5 :202–232. [ Google Scholar ]
  • Berwald JP. Teaching foreign languages by means of subtitled visuals. Foreign Language Annals. 1979; 12 :375–378. [ Google Scholar ]
  • Blane S. Interlingual subtitling in the languages degree. In: Sewell P, Higgins I, editors. Teaching translation in universities: Present and future perspectives. London, England: Association for French Language Studies and Centre for International Language Teaching Research; 1996. pp. 183–208. [ Google Scholar ]
  • Borrás I, Lafayette RC. Effects of multimedia course-ware subtitling on the speaking performance of college students of French. The Modern Language Journal. 1994; 78 :61–75. [ Google Scholar ]
  • Chang CC, Tseng KH, Tseng JS. Is single or dual channel with different English proficiencies better for English listening comprehension, cognitive load and attitude in ubiquitous learning environment? Computers & Education. 2011; 57 :2313–2321. [ Google Scholar ]
  • Chung JM. The effects of using video texts supported with advance organizers and captions on Chinese college students’ listening comprehension: An empirical study. Foreign Language Annals. 1999; 32 :295–308. [ Google Scholar ]
  • Danan M. Reversed subtitling and dual coding theory: New directions for foreign language instruction. Language Learning. 1992; 42 :497–527. [ Google Scholar ]
  • d’Ydewalle G, Van Rensbergen J, Pollet J. Reading a message when the same message is available auditorily in another language: The case of subtitling. In: O’Regan JK, Lévy-Schoen A, editors. Eye movements: From physiology to cognition. Amsterdam, The Netherlands: Elsevier Science; 1987. pp. 313–321. [ Google Scholar ]
  • Etemadi A. Effects of bimodal subtitling of English movies on content comprehension and vocabulary recognition. International Journal of English Linguistics. 2012; 2 :239–248. [ Google Scholar ]
  • Fazilatfar AM, Ghorbani S, Samavarchi L. The effect of standard and reversed subtitling versus no subtitling mode on L2 vocabulary learning. The Journal of Teaching Language Skills. 2011; 3 :43–64. [ Google Scholar ]
  • Garza TJ. Evaluating the use of captioned video materials in advanced foreign language learning. Foreign Language Annals. 1991; 24 :239–258. [ Google Scholar ]
  • Ghasemboland F, Nafissi Z. The effects of using English captions on Iranian EFL students’ listening comprehension. Procedia—Social and Behavioral Sciences. 2012; 64 :105–112. [ Google Scholar ]
  • Gorjian B. The effect of movie subtitling on incidental vocabulary learning among EFL learners. International Journal of Asian Social Science. 2014; 4 :1013–1026. [ Google Scholar ]
  • Grgurović M, Hegelheimer V. Help options and mul-timedia listening: Students’ use of subtitles and the transcript. Language Learning & Technology. 2007; 11 :45–66. [ Google Scholar ]
  • Guillory HG. The effects of keyword captions to authentic French video on learner comprehension. CALICO Journal. 1998; 15 :89–108. [ Google Scholar ]
  • Harji MB, Woods PC, Alavi ZK. The effect of viewing subtitled videos on vocabulary learning. Journal of College Teaching and Learning. 2010; 7 :37–42. [ Google Scholar ]
  • Hayati A, Mohmedi F. The effect of films with and without subtitles on listening comprehension of EFL learners. British Journal of Educational Technology. 2011; 42 :181–192. [ Google Scholar ]
  • Huang HC, Eskey DE. The effects of closed-captioned television on the listening comprehension of intermediate English as a second language (ESL) students. Journal of Educational Technology Systems. 1999–2000; 28 :75–96. [ Google Scholar ]
  • Hui W. The effects of captions on Chinese EFL students’ incidental vocabulary acquisition. CELEA Journal. 2007; 30 :9–16. [ Google Scholar ]
  • Markham P. The influence of culture-specific background knowledge and captions on second language comprehension. Journal of Educational Technology Systems. 2000–2001; 29 :331–343. [ Google Scholar ]
  • Markham PL. Captioned television videotapes: Effects of visual support on second language comprehension. Journal of Educational Technology Systems. 1992–1993; 21 :183–191. [ Google Scholar ]
  • Markham PL. Captioned videotapes and second-language listening word recognition. Foreign Language Annals. 1999; 32 :321–328. [ Google Scholar ]
  • Markham P, Peter L. The influence of English language and Spanish language captions on foreign language listening/reading comprehension. Journal of Educational Technology Systems. 2002–2003; 31 :331–341. [ Google Scholar ]
  • Montero Pérez MM, Peters E, Desmet P. Is less more? Effectiveness and perceived usefulness of keyword and full captioned video for L2 listening comprehension. ReCALL. 2013; 26 :21–43. [ Google Scholar ]
  • Price K. Closed-captioned TV: An untapped resource. MATSOL Newsletter. 1983; 12 :1–8. [ Google Scholar ]
  • Redden E. Teaching international students. Inside Higher Ed. 2014 Dec 1. Retrieved from https://www.insidehighered.com/news/2014/12/01/increasing-international-enrollments-faculty-grapple-implications-classrooms
  • Shea P. Leveling the playing field: A study of captioned interactive video for second language learning. Journal of Educational Computing Research. 2000; 22 :243–263. [ Google Scholar ]
  • Stewart MA, Pertusa I. Gains to language learners from viewing target closed-captioned films. Foreign Language Annals. 2004; 37 :438–442. [ Google Scholar ]
  • Taylor G. Perceived processing strategies of students watching captioned video. Foreign Language Annals. 2005; 38 :422–427. [ Google Scholar ]
  • Winke P, Gass S, Sydorenko T. The effects of captioning videos used for foreign language listening activities. Language Learning & Technology. 2010; 14 :65–86. [ Google Scholar ]
  • Yoshino S, Kano N, Akahori K. The effects of English and Japanese captions on the listening comprehension of Japanese EFL students. Language Laboratory. 2000; 37 :111–130. [ Google Scholar ]
  • Yüksel D, Tanriverdi B. Effects of watching captioned movie clip on vocabulary development of EFL learners. The Turkish Online Journal of Educational Technology. 2009; 8 :48–54. [ Google Scholar ]
  • Zarei AA, Rashvand Z. The effect of interlingual and intralingual, verbatim and nonverbatim subtitles on L2 vocabulary comprehension and production. Journal of Language Teaching and Research. 2011; 2 :618–625. [ Google Scholar ]

Benefits of Second-Language Captions: Hearing Children and Adults

  • Hsu CK, Hwang GJ, Chang YT, Chang CK. Effects of video caption modes on English listening comprehension and vocabulary acquisition using handheld devices. Journal of Educational Technology & Society. 2013; 16 :403–414. [ Google Scholar ]
  • Kadoyama T. An overview of closed captions research in the United States and its implications to EFL classrooms in Japan. Studies in the Humanities and Sciences. 1996; 37 :257–279. [ Google Scholar ]
  • Koolstra CM, Beentjes JWJ. Children’s vocabulary acquisition in a foreign language through watching subtitled television programs at home. Educational Technology Research & Development. 1999; 47 :51–60. [ Google Scholar ]
  • Koskinen P, Knable JE, Markham PL, Jensema CJ, Kane KW. Captioned television and the vocabulary acquisition of adult second language correctional facility residents. Journal of Educational Technology Systems. 1995–1996; 24 :359–373. [ Google Scholar ]
  • Kruger JL, Steyn F. Subtitles and eye tracking: Reading and performance. Reading Research Quarterly. 2013; 49 :105–120. [ Google Scholar ]
  • Mitterer H, McQueen JM. Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE. 2009; 4 :e7785. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Neuman SB, Koskinen P. Captioned television as comprehensible input: Effects of incidental word learning from context for language minority students. Reading Research Quarterly. 1992; 27 :95–106. [ Google Scholar ]
  • Pan Y-X, Jiang D-N, Yao L, Picheny M, Qin Y. Effects of automated transcription quality on non-native speakers’ comprehension in real-time computer-mediated communication. Paper presented at CHI 2010: Sound and Speech; Atlanta, GA. 2010. Apr, [ Google Scholar ]
  • Vanderplank R. Déjà vu? A decade of research on language laboratories, television and video in language learning. Language Teaching. 2010; 43 :1–37. [ Google Scholar ]
  • Vanderplank R. “Effects of” and “effects with” captions: How exactly does watching a TV programme with same-language subtitles make a difference to language learners? Language Teaching. 2013; 48 :1–16. [ Google Scholar ]
  • Weasenforth DL. Closed captioning: Students’ responses. Paper presented at the Annual Meeting of the Teachers of English to Speakers of Other Languages; Baltimore, MD. 1994. Mar, [ Google Scholar ]

History and Theory of Captions

  • Bird SA, Williams JN. The effect of bimodal input on implicit and explicit memory: An investigation into the benefits of within-language subtitling. Applied Psycholinguistics. 2002; 23 :509–533. [ Google Scholar ]
  • Block MH, Okrand M. Real-time closed-captioned television as an educational tool. American Annals of the Deaf. 1983; 128 :636–641. [ PubMed ] [ Google Scholar ]
  • Caldwell DC. Closed-captioned television: Educational and sociological implications for hearing impaired learners. American Annals of the Deaf. 1981; 126 :627–630. [ PubMed ] [ Google Scholar ]
  • Cronin BJ. Closed-caption television: Today and tomorrow. American Annals of the Deaf. 1980; 125 :726–728. [ PubMed ] [ Google Scholar ]
  • Downey G. Teaching reading with television: Constructing closed captioning using the rhetoric of literacy. In: Nelson AR, Rudolph JL, editors. Education and the culture of print in modern America. Madison: University of Wisconsin Press; 2010. pp. 191–214. [ Google Scholar ]
  • Downey GJ. Closed captioning: Subtitling, stenography, and the digital convergence of text with television. Baltimore, MD: Johns Hopkins University Press; 2008. [ Google Scholar ]
  • Erath AS, Larkin VM. Making distance education accessible for students who are deaf and hard-of-hearing. Assistive Technology: The Official Journal of RESNA. 2004; 16 :116–123. [ PubMed ] [ Google Scholar ]
  • King J. Using DVD feature films in the EFL classroom. Computer Assisted Language Learning. 2002; 15 :509–523. [ Google Scholar ]
  • Mayer RE, Anderson RB. Animations need narrations: An experimental test of a dual-coding hypothesis. Journal of Educational Psychology. 1991; 83 :484–490. [ Google Scholar ]
  • Withrow FB. Jericho: The walls come tumbling down! American Annals of the Deaf. 1994; 139 :18–21. [ PubMed ] [ Google Scholar ]

Everyone should turn on video captions; captions improve comprehension, memory, and attention, for everyone.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Video Captions: How to Add Closed Captioning to a Video

March 1, 2024

  • Accessibility

Lisa Marinelli

Want to make your videos accessible to everyone? Start by adding captions to them! It should be a core step in your video distribution process — and it’s super easy to do.

In this post, we’ll explain what closed captions are, why they’re important, and how to add captions to videos. Let’s dive in!

What are video captions?

Video captions are text overlays that show up at the bottom of the screen when you’re watching a video. They show you the spoken words in text form right as they’re being said. But it’s not just the dialogue; captions also clue you in on other sounds happening in the video, like background noises, music, and sound effects.

Captions are a great addition to any video because they make videos accessible to everyone, whether they’re deaf or hard of hearing, a non-native speaker, or simply watching videos without sound.

Closed captions vs. open captions

There are two types of video captions: closed captions and open captions.

Closed captions can be turned on or off by the viewer, which is especially useful for folks who don’t need captions all the time.

Open captions are permanently burned into the video, so they cannot be turned off. They can be pretty handy for big events like conferences where lots of people are watching the same screen. Plus, they’re a great choice for videos hosted on platforms that don’t support closed captioning.

The choice between the two boils down to the video platform you’re using and what works best for your audience.

Captions vs. transcripts

Some people use captions and transcripts interchangeably, but they’re actually two different things.

As we mentioned earlier, captions show up in the video itself, and they cover almost everything you can hear, including dialogue and sound effects.

A transcript is simply a written log of all dialogue that happens in a video, and sometimes it comes with time stamps. You can get a transcript for anything — a video, a podcast, or even your conversation at lunch (if it was recorded).

Transcripts are offered alongside the video, typically as a separate document or text file, for folks who prefer to read the content rather than watch or listen to it.

To caption a video, you start by transcribing it. This gives you a transcript, which you can then refine for accuracy and detail (like adding sound effects). Once edited, this transcript can be uploaded to become the captions that viewers see on the screen.

What’s more, you can turn a transcript into a descriptive transcript, which describes relevant visual elements in the video, and then provide it with the video. Since transcripts are compatible with screen readers, a descriptive one will come in super useful for folks who rely on screen readers when consuming videos.

So if you want to make your content fully accessible to your viewers, you should include both captions and descriptive transcripts.

Why do videos need captions?

It’s simply because captions benefit both your audience and your business in many different ways:

  • Accessibility: Captions open your videos up to a much wider audience. Not only will more people get to enjoy your content, but this expanded reach can also help you net more leads and strengthen your brand’s presence in the market.
  • Compliance with video accessibility laws: Did you know there are several laws that require businesses to add captions to videos? To make sure your videos get a thumbs up from Uncle Sam, all you need to do is give them good, accurate captions.
  • SEO boost: Search engines can’t watch videos, but they can read transcripts and captions. By adding these to your videos, you’re helping them rank for relevant keywords and attract more search traffic.
  • Increased time spent on your site: Since captions make your videos watchable in noisy (or silent) environments and for a wider audience, they can get folks to stick around on your site longer. And the longer viewers stay interested and engaged, the more your dwell times rise too.
  • Increased engagement on social media: Digiday found that 75% of people watch mobile videos with the sound off. Throw in some captions, and these folks will probably keep watching your video beyond the first few seconds.

In short, adding captions to your videos not only makes them more accessible, discoverable, and engaging but also ensures compliance with accessibility laws. Plus, it encourages longer visits to your site.

How do I add captions to my videos?

Now, we’ll explore a couple of options for adding captions to your video content, including how to do it right in Wistia.

Adding captions in Wistia

Wistia makes adding captions to your videos a walk in the park. All you need to do is toggle a switch in the Customize panel — yep, really!

When you upload a video to Wistia, we’ll automatically transcribe it for you. You can choose between:

  • Automated transcripts: free of charge (depending on your plan type), ready in minutes, and rated at 92% accuracy (but you can edit the transcript to bring the accuracy to 100%)
  • Professional, human-generated transcripts: a default wait time of four business days (or one business day for an additional cost) and rated at 99% accuracy

When you have the transcript, it’s time to hop into the Customize panel, open up the Controls tab, switch on the captions, and voilà! The captions button will appear in the video playbar, and your viewers can turn captions on if they want.

But wait, there’s more! We also inject captions into the structured data of your video, which helps search engines better understand the video’s contents and rank it higher. We’ve got an article on video metadata if you want to learn more about video SEO.

If you already have an SRT file ready to upload to your video, no problem. With Wistia, you can upload as many transcript files as you’d like to your media to accommodate different languages. If you’re working with multilingual captions, we’ll supply captions that match the language of your viewer’s browser. If those captions aren’t available, we’ll serve English captions by default.

Ordering caption files

A simpler way to manually add captions to your videos is to order caption files from a provider like 3Play Media or Rev. These services offer highly accurate human transcription at a higher rate, or artificial intelligence (AI) transcription for a lower price per minute.

Next, you’ll want to download your captions as an SRT (SubRip Subtitle) file. When uploading your video to a platform like Wistia or YouTube, you can upload the SRT file along with the video.

Creating captions manually

If you want full control of the captioning process, you can always create your own SRT file. You should include information like the start and end times of each subtitle, along with the corresponding text.
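To make the format concrete, here’s what a minimal SRT file might look like (the dialogue lines are invented for illustration). Each cue has a sequence number, a start and end timestamp in `HH:MM:SS,mmm` format separated by `-->`, the caption text, and a blank line before the next cue:

```
1
00:00:01,000 --> 00:00:04,000
Welcome to the video!

2
00:00:04,500 --> 00:00:08,000
Today we're talking about closed captioning.
```

Note that SRT uses a comma (not a period) before the milliseconds, and timing overlaps between cues should be avoided.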

This process can be very time consuming and requires you to type up caption files by hand, which leaves room for errors. You also might not know the captioning standards a trained captioning professional (or AI) would follow. Save yourself time by getting captions through Wistia or a third-party provider!

Reviewing your captions for accuracy

Checking your video captions for accuracy is key to making content that’s inclusive, professional, and on the right side of the law. It’s all about making sure the captions match what’s said and heard in the video and that they work for everyone. Here are some reasons why giving your captions a once-over really matters:

  • Inaccurate captions can confuse your viewers and actually make your content less accessible — particularly for folks who are deaf or hard of hearing.
  • With accurate captions, you can convey the intended message of your video correctly.
  • Accurate captions help your business comply with media accessibility laws.
  • Captions that are on-point reflect well on your brand! Viewers are more likely to trust content that’s error-free and easy to understand.
  • Accurate captions make your videos more discoverable. Search engines may use the text in captions to understand and index the content, potentially improving your rank in search results.

Making corrections to your captions in Wistia

If you spot an error while reviewing your captions, fixing them in Wistia is no big deal. All you have to do is edit the transcript file from your media page and hit “Save.” That’s all!

Get captions for your videos with Wistia

It’s as easy as 1, 2, 3:

  1. Upload your video to Wistia.
  2. Sit back as Wistia automatically transcribes your video.
  3. Turn on the captions.



