Misleading or Misinterpreted?
When things go astray, is it more the reader’s or the designer’s fault?
I have been struggling with this question for quite a while. Should we talk more about the ways visualization can be misleading or misinterpreted? The distinction may seem inconsequential, but think about it for a moment:
Misleading → Onus on the sender
Misinterpreted → Onus on the receiver
It’s not an insignificant detail. When we label a visualization as misleading, we implicitly suggest that something is wrong with how the data is presented. Conversely, when we label a visualization as misinterpreted, we imply that the reader is at fault for misinterpreting the data.
While the two concepts seem separate, they are more closely linked than they appear. A visualization can lead readers to draw incorrect conclusions, but if a reader is attentive and sufficiently skilled, they can readily identify the problem. Conversely, a visualization may be fundamentally legitimate, yet the reader may draw incorrect conclusions due to a lack of necessary skills.
In visualization research, we tend to treat these two aspects separately. On the one hand, research on misleading visualization focuses on the mistakes designers make. On the other hand, research on visualization literacy focuses mostly on measuring people’s ability to interpret data from charts correctly.
This dichotomy between misleadingness and misinterpretation is important because, depending on where we focus, the interventions differ. Misleadingness leads us to focus on the information producer. How can we prevent or detect misleadingness? Misinterpretation leads us to focus on the consumer. How can we empower readers with the skills to reason effectively with charts?
If you think about it, an extreme view of misleadingness is that it’s nothing more than a lack of awareness and skills on the reader’s part: if something is misleading, the reader should be able to catch it. Of course, this is absurd, because we can’t expect everyone to have the highest possible data-reading skills. Moreover, designers and communicators have an ethical duty to convey information as objectively and transparently as possible.
Another extreme view is that if misinterpretation exists, it’s always the designer’s fault. This is also problematic for at least three main reasons. First, designers can’t possibly anticipate all the ways in which interpretation can go wrong. Second, they can’t design for a million different profiles. One solution could work for one person but not for another, so the “perfect” solution may not even exist. Third, every time a choice is made about what to represent and how, designers implicitly exclude other solutions, thereby potentially concealing information. There is no such thing as representing all the information there is. Designing is the act of choosing, and when you choose, you exclude.
There is more.
Let’s dig deeper into designers and readers.
Designers
Over many years of working in this space, I have become convinced that most misleading visualizations do not stem from malevolent intent. They stem from several factors that can co-occur and self-reinforce:
Lack of skills/awareness. The designer is simply unaware that there is a problem with their visualization.
Lack of time and other constraints. Many designers work in a fast-paced environment with many constraints. Above all, this is true in data journalism, where graphic editors must find a solution to a given data communication problem within a limited timeframe.
Narrative-first thinking (motivated reasoning). Many people approach data visualization with a preconceived notion of what they want to show. Sometimes even before they have dug into the data!
I have also become convinced that many misleading visualizations stem more from specific choices of data set, statistics, and framing than from the visual representation one chooses to use. This indicates that an excessive focus on visual representation does not fully capture the problem, and both designers and readers should be aware of this.
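To make this concrete, here is a small sketch with invented numbers showing how the choice of summary statistic alone, before any visual encoding is chosen, can change the story a dataset tells. The salary figures are entirely hypothetical:

```python
# Hypothetical salaries at a small company (invented numbers).
# The same data supports two very different headlines depending
# on which summary statistic the communicator chooses to frame.
salaries = [32_000, 34_000, 35_000, 36_000, 38_000, 40_000, 250_000]

mean = sum(salaries) / len(salaries)
median = sorted(salaries)[len(salaries) // 2]

print(f"'Average salary is ${mean:,.0f}'")    # inflated by a single outlier
print(f"'Typical salary is ${median:,.0f}'")  # closer to what most employees earn
```

Both numbers are computed correctly, and either could be plotted on a flawless, perfectly labeled bar chart; the potential to mislead is introduced upstream, in the choice of what to summarize and how to frame it.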
There is a great paper that discusses exactly this idea, and I encourage everyone to read it. It’s titled “Misleading Beyond Visual Tricks: How People Actually Lie with Charts,” and it’s one of my favorite papers of the last few years.
Readers
It is evident that a lack of awareness and skills is a significant factor in both misleadingness and misinterpretation. Many experiments have shown that people struggle to read even basic charts correctly. If readers cannot understand what a scatter plot is, how can we expect them to catch subtle misleadingness stemming from how the data was collected or the specific angle the author proposes?
Another important aspect, however, is the extent to which the reader is aware of the problem and willing to act on it. What is often lacking is 1) the notion that data is not necessarily objective, and 2) the skeptical attitude necessary to become a more critical thinker about data and charts.
In other words, while skills are important, awareness and attitude are even more important. I have no idea how to address this, but I think that visualization educators should create more materials and learning opportunities to cultivate this attitude and to understand how data can easily mislead people.
In my own little corner, this is exactly what I am trying to do with my research and my online courses. My course on “Thinking Effectively with Data Visualization” is designed to develop that awareness and the skills needed to become a better data thinker. My hope is that I will be able to reach more and more people with this kind of material.
—
Of course, I can’t end this post without talking at least briefly about AI! I have been interested in how AI could play a positive role in this domain. As our information consumption becomes more and more mediated by dialogues with various types of AIs, will it be possible to have an AI warn us when our reasoning goes astray? Will LLMs be capable of warning or guiding us? Maybe it’s far-fetched, maybe not. If you recall, I have been experimenting with LLM capabilities in this space. This is the post I wrote on this topic, and the results after only a few minutes of testing were not bad.
That was approximately 10 months ago. I can imagine that reasoning capabilities might already have improved enormously since then.
—
And you? What do you think about this whole idea? If this rather philosophical yet practical exploration sparked any interesting thoughts, please leave a comment below. I am interested in hearing from you.



Great breakdown. I tend to come down on the side of “good charts should make it hard to draw bad conclusions,” just as good tools should make it hard to make known mistakes (a diesel nozzle does not fit in a gasoline car’s filler).
Despite that, I also see a world where we have tools that non-experts can reach for to critically interrogate charts. Agents!