
Blind and sighted readers have sharply different takes on what content is most useful to include in a chart caption


Three columns containing various graphics. The first contains the canonical Flatten the Curve coronavirus chart and two textual descriptions of that chart, color-coded according to the four levels of the semantic content model presented in the paper. The second contains a corpus visualization of 2,147 sentences describing charts, also color-coded, and faceted by chart type and difficulty. The third contains two heat maps, corresponding to blind and sighted readers' ranked preferences for the four levels of semantic content, indicating that blind and sighted readers have sharply diverging preferences. Credit: Massachusetts Institute of Technology

In the early days of the COVID-19 pandemic, the Centers for Disease Control and Prevention produced a simple chart to illustrate how measures like mask wearing and social distancing could "flatten the curve" and reduce the peak of infections.

The chart was amplified by news sites and shared on social media platforms, but it often lacked a corresponding text description to make it accessible for blind individuals who use a screen reader to navigate the web, shutting out many of the 253 million people worldwide who have visual disabilities.

This alternative text is often missing from online charts, and even when it is included, it is frequently uninformative or even incorrect, according to qualitative data gathered by scientists at MIT.

These researchers conducted a study with blind and sighted readers to determine which text is useful to include in a chart description, which text is not, and why. Ultimately, they found that captions for blind readers should focus on the overall trends and statistics in the chart, not its design elements or higher-level insights.

They also created a conceptual model that can be used to evaluate a chart description, whether the text was generated automatically by software or manually by a human author. Their work could help journalists, academics, and communicators create descriptions that are more effective for blind individuals, and guide researchers as they develop better tools to automatically generate captions.

"Ninety-nine-point-nine percent of images on Twitter lack any kind of description—and that is not hyperbole, that is the actual statistic," says Alan Lundgard, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper. "Having people manually author those descriptions seems to be difficult for a variety of reasons. Perhaps semiautonomous tools could help with that. But it is crucial to do this preliminary participatory design work to figure out what is the target for these tools, so we are not generating content that is either not useful to its intended audience or, in the worst case, erroneous."

Lundgard wrote the paper with senior author Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group in CSAIL. The research will be presented at the Institute of Electrical and Electronics Engineers Visualization Conference in October.

Evaluating visualizations

To develop the conceptual model, the researchers planned to start by studying graphs featured by popular online publications such as FiveThirtyEight and NYTimes.com, but they ran into a problem: these charts mostly lacked any textual descriptions. So instead, they collected descriptions for these charts from graduate students in an MIT data visualization class and through an online survey, then grouped the captions into four categories.

Level 1 descriptions focus on the elements of the chart, such as its title, legend, and colors. Level 2 descriptions describe statistical content, like the minimum, maximum, or correlations. Level 3 descriptions cover perceptual interpretations of the data, like complex trends or clusters. Level 4 descriptions include subjective interpretations that go beyond the data and draw on the author's knowledge.
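The paper presents this taxonomy as a conceptual model rather than code, but a minimal Python sketch may help make it concrete. The class names, level labels, and example sentences below are illustrative assumptions, not material from the paper:

```python
from dataclasses import dataclass
from enum import IntEnum


class SemanticLevel(IntEnum):
    """The four levels of semantic content in the researchers' model
    (level names here are paraphrases, not the paper's exact terms)."""
    ELEMENTAL = 1    # chart elements: title, legend, axes, colors
    STATISTICAL = 2  # statistical content: minima, maxima, correlations
    PERCEPTUAL = 3   # perceptual interpretations: complex trends, clusters
    CONTEXTUAL = 4   # subjective interpretations that go beyond the data


@dataclass
class CaptionSentence:
    text: str
    level: SemanticLevel


# Hypothetical sentences describing the "Flatten the Curve" chart,
# one per level; these examples are illustrative, not from the study.
caption = [
    CaptionSentence("A line chart titled 'Flatten the Curve' shows two curves.",
                    SemanticLevel.ELEMENTAL),
    CaptionSentence("The no-measures curve peaks roughly twice as high as the "
                    "protective-measures curve.", SemanticLevel.STATISTICAL),
    CaptionSentence("Under both scenarios, cases rise, peak, and then decline.",
                    SemanticLevel.PERCEPTUAL),
    CaptionSentence("Protective measures are critical to keeping caseloads "
                    "below hospital capacity.", SemanticLevel.CONTEXTUAL),
]

# Per the study's findings, a description aimed at blind readers might
# surface the level 2 and 3 sentences and omit level 4 editorializing.
screen_reader_text = " ".join(
    s.text for s in caption
    if s.level in (SemanticLevel.STATISTICAL, SemanticLevel.PERCEPTUAL)
)
print(screen_reader_text)
```

A structure like this also suggests how the model could be used to evaluate an existing caption: tag each sentence with a level, then check the distribution against a target audience's preferences.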

In a study with blind and sighted readers, the researchers presented visualizations with descriptions at different levels and asked participants to rate how useful they were. While both groups agreed that level 1 content by itself was not very useful, sighted readers gave level 4 content the highest marks while blind readers ranked that content among the least useful.

Survey results revealed that a majority of blind readers were emphatic that descriptions should not contain an author's editorialization, but rather stick to straight facts about the data. On the other hand, most sighted readers preferred a description that told a story about the data.

“For me, a surprising finding about the lack of utility for the highest-level content is that it ties very closely to feelings about agency and control as a disabled person. In our research, blind readers specifically didn’t want the descriptions to tell them what to think about the data. They want the data to be accessible in a way that allows them to interpret it for themselves, and they want to have the agency to do that interpretation,” Lundgard says.

A more inclusive future

This work could have implications as data scientists continue to develop and refine machine learning methods for autogenerating captions and alternative text.

“We are not able to do it yet, but it is not inconceivable to imagine that in the future we would be able to automate the creation of some of this higher-level content and build models that target level 2 or level 3 in our framework. And now we know what the research questions are. If we want to produce these automated captions, what should those captions say? We are able to be a bit more directed in our future research because we have these four levels,” Satyanarayan says.

In the future, the four-level framework could also help researchers develop machine learning models that can automatically suggest effective visualizations as part of the data analysis process, or models that can extract the most useful information from a chart.

This research could also inform future work in Satyanarayan's group that seeks to make interactive visualizations more accessible for blind readers who use a screen reader to access and interpret the information.

"The question of how to ensure that charts and graphs are accessible to screen reader users is both a socially important equity issue and a challenge that can advance the state-of-the-art in AI," says Meredith Ringel Morris, director and principal scientist of the People + AI Research team at Google Research, who was not involved with this study. "By introducing a framework for conceptualizing natural language descriptions of information graphics that is grounded in end-user needs, this work helps ensure that future AI researchers will focus their efforts on problems aligned with end-users' values."

Morris adds: "Rich natural-language descriptions of data graphics will not only expand access to critical information for people who are blind, but will also benefit a much wider audience as eyes-free interactions via smart speakers, chatbots, and other AI-powered agents become increasingly commonplace."




More information:
Alan Lundgard et al, Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content, IEEE Transactions on Visualization and Computer Graphics (2021). DOI: 10.1109/TVCG.2021.3114770

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Blind and sighted readers have sharply different takes on what content is most useful to include in a chart caption (2021, October 12)
retrieved 12 October 2021
from https://techxplore.com/news/2021-10-sighted-readers-sharply-content.html



