Text transcripts, captions and sign language

  • Published 1 Oct 2024
  • These accessibility solutions are designed to support people with auditory impairments. This chapter presents their main characteristics.
    Accessibility website of the Publications Office of the European Union: op.europa.eu/e...
    TRANSCRIPT (shortened)
    Text transcripts, captions and sign language are the accessibility solutions designed to support people with auditory impairments. The basic idea is to convert audio information into text.
    TEXT TRANSCRIPTS provide a textual version of the content that can be accessed by anyone. They also allow the content of your multimedia to be searchable, both by search engines and by end-users. Transcripts do not have to be the verbatim text of the spoken word in a video. They should also contain additional descriptions, explanations or comments that could be useful, such as indications of sound effects or audience laughter.
    The transcript should be displayed close to the audio or video content to be easily found.
    The audio content may be presented on the web page in text form, unabridged and unaltered, or the media may be embedded in the web page with the text transcript displayed below it.
    The advantages of transcripts: Users can quickly scan a transcript to learn about the media subject before pressing play. Transcripts can be printed or converted to braille. They can be read offline on any desktop or mobile device. This can be useful for users whose connection to the internet is limited. They can be used as a basis for foreign-language translation.
    CAPTIONS are text versions of the spoken words presented within multimedia (like subtitles in films). Though captioning is primarily intended for those who cannot hear the audio, it can also support people who are not fluent in the language in which the audio is presented.
    The abbreviation CC (a small icon on the screen) stands for closed captions.
    Closed captions are captions that are not visible until they are activated by the viewer. In contrast, open, burned-in or hard-coded captions are visible at all times: they are part of the video and cannot be switched off.
    Most of the world does not distinguish captions from subtitles, but in the United States and Canada these terms have different meanings.
    SUBTITLES assume the viewer can hear but requires assistance with the language, so only the dialogue and some on-screen text are transcribed. CAPTIONS aim to describe all significant audio content: dialogue and non-speech information, such as audience laughter, background noises or dramatic music.
    When using closed captions, caption tracks are often stored separately from the video file. There are several competing FILE FORMATS for adding captions to digital media: SubRip text (SRT) is the most common caption-file format. SubRip is a free software program for Windows that extracts subtitles and their timings from videos. The result is stored using the SubRip subtitle text-file format. Web Video Text Tracks (WebVTT) is a standard of the World Wide Web Consortium (W3C) for displaying timed text in connection with HTML5 elements.
    There are other file formats that are less commonly used. Most are human-readable text formats that can be converted from one format to another.
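    As an illustration, here is one hypothetical caption cue shown first in SRT and then in the equivalent WebVTT form; the timing values and wording are invented for this example. SRT numbers each cue and uses a comma before the milliseconds, while a WebVTT file starts with a WEBVTT header and uses a full stop in its timestamps:

    1
    00:00:01,000 --> 00:00:04,500
    Welcome to this short presentation.

    WEBVTT

    00:00:01.000 --> 00:00:04.500
    Welcome to this short presentation.

    Because both formats are plain text, converting between them largely amounts to adjusting the header, the cue numbers and the timestamp punctuation.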
    SIGN LANGUAGES use the visual-manual modality to convey meaning. They are fully-fledged natural languages with their own grammar and vocabulary.
    When signing, a person uses their hands, arms and body to articulate single letters, words or sentences.
    Different natural languages use different sign languages to express the same meaning. Therefore, people with auditory impairments who are capable of signing in one sign language might have problems communicating in a different sign language.
    Sign language can be embedded in a video. In this case, part of the video might be overlapped by a frame showing the sign language interpreter. This approach has two disadvantages. First, the sign language interpreter is too small for every gesture to be recognised, and the position of a single finger can make a difference.
    Second, the overlay may block an important part of the video, which creates the risk of losing content. A better approach is to split the screen. In one part, the sign language interpreter is shown at a sufficient size, and in the other part, the video is shown at a reduced size.
    As screen sizes for personal computers and TVs have increased significantly in recent years, this is the preferred approach.
    COSTS
    Transcripts can easily be created as simple texts and positioned close to the original content. This is the cheapest solution, and a good one for many types of users and application scenarios.
    Captions are more difficult to create. Even though they are also just texts, they need to be synchronized with the original media.
    Sign language is the most complex and expensive solution: a second video needs to be recorded, cut and synchronized with the original video. This duplicates the required storage. The sign language video and the original video need to be combined in a new video.
