FILWD Chat: The "Human-in-the-Loop" Issue, with Amanda Makulec

I talk with Amanda about the role of AI in data analysis and visualization

Hi readers, I have been experimenting with a new format where I record a quick chat with someone and post it here and on my YouTube channel. I like to call these “quick chats.” They are completely unstructured in the sense that I have no script. They are motivated by something I have observed from a person (typically on social media, but not necessarily) that I am curious to explore further. As I publish more of these video chats, I’d be curious to hear your feedback. I hope you like them!


For this quick chat, I spoke with Amanda Makulec. Amanda is a data visualization designer, teacher, speaker, and executive director of the Data Visualization Society (DVS).

The main impetus behind this chat is something she posted recently on LinkedIn regarding the use of the terms “human-in-the-loop” and “data-driven.” You can see her post below.

The main problem with these terms is the idea that data analysis and communication can be automated and that the human can be relegated to being a component of the system rather than the central actor. With Amanda, we explored quite a few themes. Here is a summary of our chat with a few additional reflections.

We start by discussing “data-driven” language, which seems to remove all the context we bring as humans and organizations. Amanda suggests using the terms “data-enabled” or “data-informed,” emphasizing the idea of leaving the human in the driver’s seat. One term Amanda kept using during our chat is “AI enablement.” I had never heard it before, and I think it captures the spirit of our discussion very well. I don’t know if this terminology is commonly used elsewhere, but I think I will adopt it in future conversations.

We agreed that AI is fantastic for dealing with the most menial tasks. It’s amazing how many “boring” steps you can skip when it works well. Many data preparation and cleaning tasks are good candidates, both because AI is potentially very capable there and because they are extremely tedious. This is confirmed by the DVS survey Amanda mentioned: she said data preparation and cleaning come out on top when data visualization professionals are asked what they use AI for.

Around ten minutes in, Amanda showed an example of a chart produced by an LLM with many issues. In this context, she made a very relevant comment: a skilled designer can instruct the LLM to produce a better chart when the results are not optimal, but that person needs the skills to give appropriate instructions. In other words, no AI can compensate for a lack of skills, at least at the moment. In the future, more capable AIs may recommend ways to improve the design of a data visualization, but it is not evident to me that this will be possible soon. Or maybe it’s just around the corner? Given the speed of development, predicting how these tools will evolve is very hard.

In this same context, I remarked on something I have been thinking about for a while: the role of “verification” with these tools (I already mentioned this issue when I recorded myself performing data analysis with ChatGPT). While detecting issues with the visual representation is relatively easy, detecting misinterpretations or miscalculations is much harder. The question is: how does one verify data issues with LLMs?

Toward the end, we discussed the attribution problem, potential biases, and the “data-reality gap,” that is, the gap between our interpretation of the reality described by the data and reality itself. AI tools seem to exacerbate this problem.

One last idea we discussed is whether AI tools can help us ask better questions. I am unsure of the answer, but the question itself is fascinating.

That’s all for now! Let me know what you think of this new format and if you enjoyed the chat.
