My Model is Unfair, Do People Even Care?
Visual Design Affects Trust and Perceived Bias in Machine Learning

Aimen Gaba*, Zhanna Kaufman*, Jason Cheung, Marie Shvakel, Kyle Wm. Hall, Yuriy Brun, and Cindy Xiong Bearfield

Suppose you are picking between two robot advisors: one has historically made a 10% return for its users, while the other has made 8%. Which robot would you choose? What if you learn that the first robot has exhibited sexist behavior, making higher returns for men than for women? Does that change your choice? This paper aims to understand how the bias and effectiveness of data-driven systems affect users' trust in those systems. We find that describing system behavior using text rather than bar charts leads people to put more weight on bias, and that explicitly labeling a system as biased has more effect than showing a history of biased behavior.




Polarizing Political Polls: How Visualization Design Choices Can Shape Public Opinion and Increase Political Polarization

Eli Holder and Cindy Xiong Bearfield

Could design or framing choices interact with viewers’ social cognitive biases in ways that promote political polarization? When reporting on U.S. attitudes toward public policies, it is popular to highlight the gap between Democrats and Republicans (e.g., with blue vs. red connected dot plots). But these charts may encourage social-normative conformity, influencing viewers’ attitudes to match the divided opinions shown in the visualization. We found that data visualizations can induce social conformity and accelerate political polarization. Choosing to visualize partisan divisions can divide us further.




Vistrust: a Multidimensional Framework and Empirical Study of Trust in Data Visualizations

Hamza Elhamdadi, Adam Stefkovics, Johanna Beyer, Eric Moerth, Hanspeter Pfister, Cindy Xiong Bearfield, Carolina Nobre

Trust is an essential aspect of data visualization, as it plays a crucial role in the interpretation and decision-making processes of users. We propose a multidimensional operationalization of trust in visualization by applying general theories of trust from social sciences, as well as synthesizing and extending earlier work and factors identified by studies in the visualization field. Our study provides empirical evidence for several aspects of our proposed theoretical framework, most notably the impact of cognition, affective responses, and individual differences when establishing trust in visualizations.



Reasoning Affordances with Tables and Bar Charts

Cindy Xiong, Elsie Lee-Robbins, Icy Zhang, Aimen Gaba, Steven Franconeri

We tested whether confirmation bias exists when people reason with visualized data and whether specific visualization designs can elicit less biased reasoning strategies. Confirmation bias was not significantly present when beliefs were primed, but it was present when beliefs were pre-existing. The tabular presentation format was more likely to afford the ratio reasoning strategy, and the use of the ratio strategy was more likely to lead to the correct answer.



IEEE Computer Graphics and Applications 2022
🏆 VisHikers’ Guide to Evaluation: Competing Considerations in Study Design

Emily Wall, Cindy Xiong, Yea-Seul Kim

We describe the persistent tensions between various camps on the “right” way to conduct evaluations in visualization. *Best Paper Runner-Up for IEEE CG&A*



From Invisible to Visible: Impacts of Metadata in Communicative Data Visualization

Alyxander Burns, Christiana Lee, Thai On, Cindy Xiong, Evan Peck, Narges Mahyar

Leaving the context of visualizations invisible can have negative impacts on understanding and transparency. Recontextualizing visualizations with metadata (e.g., disclosing the data source or providing instructions for decoding the visualizations' encodings) may counter these effects. Our experiments suggested that visualizations with metadata were perceived as more thorough than those without. Metadata also influenced which information participants remembered as important or interesting.


[PDF] [Video]

How Do Viewers Synthesize Conflicting Information from Data Visualizations?

Prateek Mantri, Hariharan Subramonyam, Audrey L. Michal, and Cindy Xiong

Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. We conducted a series of experiments to address this gap.



Comparison Conundrum and the Chamber of Visualizations: An Exploration of How Language Influences Visual Design

Aimen Gaba, Vidya Setlur, Arjun Srinivasan, Jane Hoffswell, and Cindy Xiong

The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly).



How Do We Measure Trust in Visual Data Communication?

Hamza Elhamdadi, Aimen Gaba, Yea-Seul Kim, Cindy Xiong

Trust is fundamental to effective visual data communication between the visualization designer and the reader. Although personal experience and preference influence readers’ trust in visualizations, visualization designers can leverage design techniques to create visualizations that evoke a “calibrated trust,” at which readers arrive after critically evaluating the information presented. To systematically understand what drives readers to engage in “calibrated trust,” we must first equip ourselves with reliable and valid methods for measuring trust.


[PDF] [Video]

Dispersion vs Disparity: Hiding Variability Can Encourage Stereotyping When Visualizing Social Outcomes

Eli Holder and Cindy Xiong

Visualization research often focuses on perceptual accuracy or helping readers interpret key messages. But when designed poorly, data visualizations about social inequity can be misinterpreted in harmful ways and lead to stereotyping. Design choices can influence these biases: Hiding variability tends to increase stereotyping while emphasizing variability reduces it.


[PDF] [Video]

Seeing What You Believe or Believing What You See? Belief Biases Correlation Estimation

Cindy Xiong, Chase Stokes, Yea-Seul Kim, Steven Franconeri

When an analyst or scientist has a belief about how the world works, their thinking can be biased in favor of that belief. When viewers believed that two variables should have a strong relationship, they overestimated correlations between those variables by an r-value of about 0.1. When they believed that the variables should be unrelated, they underestimated the correlations by an r-value of about 0.1.


[PDF] [Video]

ACM CHI 2022
🏆 Investigating Perceptual Biases in Icon Arrays

Cindy Xiong, Ali Sarvghad, Çağatay Demiralp, Jake M. Hofman, Daniel G. Goldstein

Icon arrays are graphical displays in which a subset of identical shapes are filled to convey probabilities. They are widely used for communicating probabilities to the general public. We investigated the effect of different arrangements in icon arrays on probability perception. *Honorable Mention at CHI 2022*



Explaining with Examples: Lessons Learned from Crowdsourced Introductory Description of Information Visualizations

Leni Yang, Cindy Xiong, Jason K. Wong, Aoyu Wu, Huamin Qu

Data visualizations have been increasingly used in oral presentations to communicate data patterns to the general public. Clear verbal introductions of visualizations to explain how to interpret the visually encoded information are essential to convey the takeaways and avoid misunderstandings. We investigate how to effectively introduce visualizations to the audience with varying degrees of visualization literacy.


[PDF] [Video]

Visual Arrangements of Bar Charts Influence Comparisons in Viewer Takeaways

Cindy Xiong, Vidya Setlur, Benjamin Bach, Kylie Lin, Eunyee Koh, and Steven Franconeri

To help a viewer intuitively compare values to quickly generate key takeaways, visualization designers can manipulate how data values are arranged in a chart to afford particular comparisons.



Declutter and Focus: Empirically Evaluating Design Guidelines for Effective Data Communication

Kiran Ajani, Elsie Lee, Cindy Xiong, Cole Nussbaumer Knaflic, William Kemper, and Steven Franconeri

To more clearly communicate data, consider including a clear headline that describes the relevant data pattern, highlighting a subset of relevant data values with a unique color, and connecting those values to written annotations that contextualize them in a broader argument.


🏆 A Design Space of Vision Science Methods for Visualization Research

Madison A. Elliott, Christine Nothelfer, Cindy Xiong, Danielle Albers Szafir

We introduce a design space of experimental methods for empirically investigating the perceptual processes involved in viewing data visualizations, to inform visualization design guidelines. We advocate for a deeper relationship between human perception and visualization research to extend the methodological design space for understanding visualization and human vision.




How to Evaluate Data Visualizations across Different Levels of Understanding

Alyxander Burns, Cindy Xiong, Steven Franconeri, Alberto Cairo, Narges Mahyar

Understanding a visualization is a multi-level process. A reader must extract and extrapolate from numeric facts, understand how those facts apply to both the context of the data and other potential contexts, and draw or evaluate conclusions from the data. We diagnose levels of understanding of visualized data by adapting a common framework from the education literature.




Truth or Square: Aspect Ratio Biases Recall of Position Encodings

Cristina Ceja, Caitlyn McColeman, Cindy Xiong, Steven Franconeri

Bar charts are among the most frequently used visualizations, yet people's recall of bar marks' position can be biased. Viewers are biased to remember a bar mark as being more similar to a prototypical square, leading to an overestimation of bars with a wide aspect ratio, and an underestimation of bars with a tall aspect ratio.


[PDF] [Video]

The Curse of Knowledge in Data Visualizations

Cindy Xiong, Lisanne van Weelden, Steven Franconeri

Two people can see different patterns in the same visualization, potentially leading to miscommunication. We show that when people are primed to see one pattern in the data as visually salient, they believe that naïve viewers will experience the same visual salience.


[PDF] [Video]


Illusion of Causality in Visualized Data

Cindy Xiong, Joel Shapiro, Jessica Hullman, Steven Franconeri

Students who eat breakfast more frequently tend to have a higher grade point average. From this data, many people might confidently state that a before-school breakfast program would lead to higher grades. This is a reasoning error because correlation does not necessarily indicate causation – X and Y can be correlated without one directly causing the other. While this error is pervasive, its prevalence might be amplified or mitigated by the way that the data is presented to a viewer.


[PDF] [Video]


Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull

Cindy Xiong, Cristina R. Ceja, Casimir J.H. Ludwig, Steven Franconeri

In visual depictions of data, position (i.e., the vertical height of a line or a bar) is believed to be the most precise way to encode information. We show that reports of average position across a short delay can be biased such that line positions are underestimated and bar positions overestimated.



EuroVis TrustVIS 2019

Examining the Components of Trust in Map-Based Visualizations

Cindy Xiong, Lace Padilla, Kent Grayson, Steven Franconeri

Perceived transparency is often associated with perceived trust, and greater transparency in data visualization can be associated with an increase in the amount of information. We showed that perceived clarity, amount of disclosure, and thoroughness significantly predicted individuals’ selection of a Google Maps-like application with either less or more information.



International Music Perception and Cognition Conference 2016

Perceptual learning of abstract musical patterns: Recognizing composer style

Carolyn A. Bufford, Khanh-Phuong Thai, Joselyn Ho, Cindy Xiong, Carly A. Hines, Philip J. Kellman

How can we improve abstract pattern recognition in music? Can principles enhancing visual learning be extended to auditory stimuli, such as music? Perceptual learning, improvements in the pickup of information from experience, is well-established in both vision and audition. We showed that perceptual learning training can improve participants’ recognition of composers’ styles, demonstrating that composer style can be learned, and perceptual-learning-based interventions are effective in complex auditory domains.