Episodes
Audio
Chapters (AI generated)
Speakers
Transcript
Uncertainty and Trumpery with Alberto Cairo
Data Stories is brought to you by Qlik. Are you missing out on meaningful relationships hidden in your data? Unlock the whole story with Qlik Sense through personalized visualizations and dynamic dashboards.
Alberto CairoSo what I would like to insert is a little bit of uncertainty in people. Right? Don't be so sure about your own opinions. Don't be so sure about your own ideology. We can fight against that instinct in a conscious way.
Enrico BertiniData Stories is brought to you by Qlik. Are you missing out on meaningful relationships hidden in your data? Unlock the whole story with Qlik Sense through personalized visualizations and dynamic dashboards, which you can download for free from Qlik. Hey, everyone. Welcome to a new episode of Data Stories. Hey, Moritz.
Moritz StefanerHey, Enrico.
Enrico BertiniWhat's up?
Moritz StefanerLots of things. As usual. The year is kicking in.
Enrico BertiniBelated happy birthday.
Moritz StefanerYeah, thank you. I just turned 40. Yeah, it's horrible.
Enrico BertiniYou crossed the chasm.
Moritz StefanerOld people. Yeah, yeah, no, it's fine. I had a good day, so I won't complain.
Enrico BertiniGood.
Moritz StefanerAnd it's just a number, you know.
Enrico BertiniThat's what people say.
Moritz StefanerThat's what I like to believe.
Enrico BertiniThat's what people over 40 say.
Moritz StefanerThen suddenly it's just a number.
Enrico BertiniThen suddenly it's just a number.
Moritz StefanerYeah. So, yeah, I just came back from Kosovo. I had another nice Data Cuisine workshop. We did something with the United Nations on corruption, and that was really interesting. And it was the first time we had a real topic for the workshop. Usually it's just about the place, but this time it was about corruption specifically. So we had like six, seven data dishes about corruption. And you can check them out. I will link them from the blog post. But it was a nice little thing again.
Enrico BertiniYeah, very nice. I saw some pictures already on Facebook. Yeah. It's always fun to see these pictures. Yeah.
Moritz StefanerIt's always a unique experience. The people bring in their perspectives, the local cuisine, the local data. It's always.
Enrico BertiniIt's great. Great. So one thing that I want to talk about briefly is also our Patreon crowdfunding initiative. Just to make sure that it's still in people's minds. We are still collecting pledges on Patreon, and if you want to read more about it, just go to www.patreon.com/datastories. So very briefly, we are trying to switch to a crowdfunding model for the show. Of course, as you may know, we have some expenses, so we really need your help. So far we have 35 patrons, and we reached about, what is that, $225. Once we get to $600, which is not a crazy, crazy amount, then we would directly switch to crowdfunding.
Crowdfunding for the show AI generated chapter summary:
We are still collecting pledges on Patreon, and if you want to read more about it, just go to www.patreon.com/datastories. Once we get to $600, which is not a crazy, crazy amount, then we would directly switch to crowdfunding. We would love that.
Moritz StefanerYeah. And that would be super nice.
Enrico BertiniWe would love that. We still have a backup with our advertisements, but we really want to switch to that. So please help us.
Moritz StefanerYeah. And if you enjoy the show, which we assume because you're still listening, if you're still listening, then $2 or $3 per episode is not a problem either, I guess.
Enrico BertiniYeah.
Moritz StefanerYeah. Exactly.
TIP: Conference season AI generated chapter summary:
Tapestry conference just posted the recorded videos. Eyeo Festival is legendary by now. If you can pull the money together and have time in June, you totally need to go there. There's more good conferences coming up.
Enrico BertiniWhat else?
Moritz StefanerYeah, lots of stuff happening. Conference season is kicking in. Conference season. You've been to Tapestry, right?
Enrico BertiniI just came back. I had a very nice time at Tapestry. I've been there already three or four times. I loved it, as usual. And plus, this year it was in Florida, in St. Augustine, a lovely, lovely place that I had never heard of before going. And of course, it still being winter, spending some time in the warmth of Florida was very nice. And they had an amazing set of speakers, as usual, a mix of very different people with very different backgrounds, and a lot of fun. So if you are curious about the conference and couldn't attend, I suggest you go to the website. I think it's tapestryconference.com, and they just posted the recorded videos. So if you missed it, you can still at least watch the video.
Moritz StefanerYeah, it's amazing. Like one week after the conference, you have the videos already. And it's great.
Enrico BertiniYeah, really good. Yeah.
Moritz StefanerAnd there's more good conferences coming up. So if you're in Europe, there's Resonate festival in April. I can really recommend that; it's in Belgrade. It's a very nice mixture of media, arts, design, but there's also like music festivals in the evening. Really good. And I can totally recommend that. And of course, there's Eyeo Festival, which is legendary by now. I think it's the fifth or sixth edition. And it's honestly really the best conference series I've ever been to. I'm really honest about that. It's really, really good. And for some people it's even been totally career changing, or like life trajectory changing. No, I'm serious. It's quite a unique thing. And this year there's great people again. There's Nicholas Felton, there's Gene Kogan, whom I always wanted to have on the show. There's Manuel Lima, who wrote the amazing books and ran the Visual Complexity site.
Enrico BertiniHe has a new book out, right.
Moritz StefanerYes, a book on circles. So we should maybe talk to him at some point as well. Yeah, there's Albert-László Barabási, the amazing network science researcher. Yeah, we had Kim Albrecht on the show, who was working in his lab. And yeah, he's the real deal. Of course, there's Marcin Ignac from Variable.io, an amazing data artist, whom we could also talk to at some point. Lots of great people. And there's still tickets available, so check out the site. If you have any chance of going there, definitely do it. If you can somehow pull the money together and have time in June, you totally need to go there. That's my recommendation.
Enrico BertiniYeah, maybe I should do it.
Moritz StefanerAbsolutely. And our guest today also has a conference coming up, so it's a perfect segue. It's almost as if we had planned that. So our guest is Alberto Cairo. Hi, Alberto, good to meet you, Alberto.
Interview AI generated chapter summary:
Our guest today also has a conference coming up. It's called the Digital Humanities and Data Journalism conference. It intends to bring together those two communities which have a lot in common.
Alberto CairoHey, Moritz. Hey, Enrico, how are you? Nice talking to you again.
Enrico BertiniYeah, great to have you on the show again.
Alberto CairoYeah, it's a pleasure. It's been a while. I don't even remember when. When was the last time?
Enrico BertiniYeah, yeah, I just checked. You've been on episode 35.
Alberto CairoYeah, that was ages ago.
Moritz StefanerWe were babies at that time.
Enrico BertiniYeah, that's 2014.
Alberto CairoMoritz just mentioned that he just turned 40. He made me feel a little bit old, because I'm 42, on my way to 43. So yeah, probably the last time that I was on the show was when I was 15 and Moritz was twelve or something like that.
Moritz StefanerWe were talking about dinosaurs anyway. Pokemon dinosaurs.
Alberto CairoYeah. So yeah, I have a conference too, coming in September, between the 14th and the 16th. It's called the Digital Humanities and Data Journalism conference. So it intends to bring together those two communities, which have a lot in common. I mean, it's the second time that we are doing this conference. The first time was last year, and it was a blast. It was very interesting. We had very good speakers, great conversations between people who use data in journalism and people who use data in the humanities. So we decided to repeat it this year, and we invited people like Steve Duenes from the New York Times, the head of the graphics department. We have Lena Groeger from ProPublica, we have Mona Chalabi from the Guardian. We have Aron Pilhofer, formerly of the Guardian, Lynn Cherny, whom you know really well, and many other people coming also from the humanities. So it should be quite interesting. The website is dashdjmiami.com, and Miami is really beautiful at that time of the year. So that's another excuse to come down to Miami at that point.
Moritz StefanerIt sounds amazing, Alberto. So the topic we want to talk about today, however, is something I need to introduce a bit more, because it started when we found a really interesting online course from the University of Washington. So Carl Bergstrom and Jevin West, whom I actually know from another project, but it doesn't really matter. So they put out this really interesting syllabus for a course called Calling Bullshit in the Age of Big Data. And once I read that title, I was like, okay, that's a good topic to talk about. It's really amazing. It's a very timely issue. And so I checked out the syllabus. I was like, oh, that's amazing. Yeah. So it's all about spotting bullshit: the natural ecology of bullshit, causality, statistical traps. So I think it's a great topic to talk about. And I looked into the readings a bit. So there's even a philosophical essay by Harry Frankfurt from 1986. So he was a clear visionary on bullshit, where he defines the term a bit and talks about how bullshit is different from lying or other forms of deception. So it really takes that word apart. And, yeah, there's lots of great readings on that syllabus site. I just want to quote a bit from the Harry Frankfurt text, because I think it's so good. So he says: it is impossible for someone to lie unless he thinks he knows the truth. Right? So to lie, you relate to the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is, to that extent, respectful of it. When an honest man speaks, he says only what he believes to be true. And for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off. He is neither on the side of the true nor on the side of the false. He just picks them out, or makes them up, to suit his purposes. And I think that's a very interesting characterization of that term. 
And once you think about bullshitting, I mean, we're in the tech industry, so we haven't encountered a lot of it, right? And it's everywhere. Bullshit is everywhere. And we need to talk about how to spot it and how to also disprove bullshit claims. So, a very timely topic. And just the same day, basically, I ran across Alberto tweeting that he will start a lecture series on the topic of visual trumpery, or trumpery in general, which I find an interesting term. And so, yeah, this is how it came about, and this is why we have him on today. So, Alberto, what is trumpery? That's such a nice word. And where did you find that word? I think it's an actual English word, right? You didn't make it up.
Calling Bullshit in the Age of Big Data AI generated chapter summary:
An online course at the University of Washington focuses on spotting "bullshit". The term is different from lying or other forms of deception, Moritz says. And we need to talk about how to spot it and how to also disprove it.
What is Trumpery? AI generated chapter summary:
Trumpery is an actual English word that comes from Old French. It's related to trickery in some sense. It got imported into late medieval or Middle English as trumpery. It is also so valuable and so useful to define what we are experiencing nowadays.
Alberto CairoNo, I didn't make it up. It's an actual English word that comes from Old French; you can look up the definition on Google. So, by the way, when I begin the lecture series, and also when I write the corresponding book, because I'm planning to write a book with the same title, Trumpery, I need to credit the person who tweeted about it. It was not me, it was someone else. I saw the word and I said, this is the word that we need to use, because it's so perfect for the time that we are living through. And it is also so valuable and so useful to define what we are experiencing nowadays. Because the definition of trumpery is something that is attractive but of little use. Something that is worthless, something that is delusive, something that is shallow, and something that is misleading. Right. So it's related to trickery in some sense. It comes from the French tromper. I'm not very good at pronouncing French. And then it got imported into late medieval or Middle English, and then it evolved and became "trumpery" with a u. So, since I wrote The Truthful Art, my previous book, there is a little bit about calling out bullshit in there. But since I finished writing it, I have been thinking that there was a lot more that I could have said about the topic of calling out bullshit, and how to fight against bullshit, and how to create a world that is better informed, etcetera. And I was looking for a title for the talk, and also for a book that I want to write. And I was planning to use the word bullshit, but that has already been taken. Right. There is, as you mentioned, this wonderful little book on the subject; it's called On Bullshit. And as you said, bullshit means a complete disregard for the truth. 
Basically, it's just using misleading statements to push your own agenda, but without even considering what the truth might be, you don't even care. In order to lie, you need to know what the truth is. But in order to bullshit, you don't need to know what the truth really is. You just don't care.
Calling Out Bullshit AI generated chapter summary:
"Trumpery" is a much more appropriate word for these lectures. In order to lie, you need to know what the truth is. But in order to bullshit, you just don't care. Whatever pushes your agenda. I'm writing a book proposal to submit to publishers.
Moritz StefanerYeah, you just want to achieve something and you say whatever brings you there, if it's true or not.
Alberto CairoExactly. Whatever pushes your agenda. So it's so timely, it's perfect. Right. But I believe that trumpery is a much more appropriate word for these lectures. So, yeah, I'm doing the lecture series, and I'm writing a book proposal to submit to publishers. So let's see where this leads me.
How to Fight Data-based Lies and Stupidity AI generated chapter summary:
The book focuses on how we disregard uncertainty when we make an assertion. It involves using logic and using probability and uncertainty thinking. One of the challenges in data visualization is that we are not good at showing the uncertainty in models.
Moritz StefanerYeah, so I think there's two ways we can relate that to data visualization, just to get that angle in. Of course, there's loads of ways you can lie with data or use data to, or like, tweak reality, or. And on the other hand, maybe data and data visualization can play a role in refuting lies and bullshit. Who knows? I mean, shall we talk about the first approach first? Like, what are typical ways people misinform with data?
Alberto CairoMisinform with data? Well, there are plenty of books about that. There is one that was published recently, titled Weaponized Lies, which is quite interesting. It's very basic, but it's quite interesting. And you have Naked Statistics, which is not exclusively about lying with data, but it teaches you how to use data, etcetera. I think that we need to go beyond that. We need to go beyond merely calling out specific instances of trumpery, and we need to help people get a better understanding not just of statistical thinking or quantitative thinking, but of logic and probability in general. And that is what the book and what the lecture are going to be about. I certainly will call out instances of bad use of data, but the topic will be broader: it will be how we completely disregard uncertainty when we make an assertion. So people, human beings, we like things that are black and white. Something is either true or is completely untrue. And if we decide that something is true, we go for it, particularly if it helps us push our own agenda. And we don't even consider alternative explanations for a particular topic. That is natural, that is human, that is what we do intuitively, at least most people, we do it all the time. So we need to fight against that instinct, we need a method to fight against that instinct, and we need to teach that method. And it involves using logic and using probability and uncertainty thinking, which is something that I have been thinking quite a lot about in the past few years. And this relates to data visualization, because one of the challenges in data visualization, I believe, is that we are not good at showing the uncertainty that is built naturally into the models that we present to the public. We present a time series chart, and that is a very crisp and sharp looking line, and it looks very precise and very accurate. 
But what perhaps we should be showing is a fuzzy cloud surrounding that line, to show people that the model is uncertain. We have a certain confidence that the actual value is within the boundaries of that cloud, but the line could basically be anywhere within that cloud. And that kind of certainty, I believe, indirectly leads people to be too certain about their own opinions. So what I would like to insert is a little bit of uncertainty in people. Don't be so sure about your own opinions. Don't be so sure about your own ideology. Don't get isolated in your own ideological bubble, which is something really easy to do. Everybody does it. We all do it. Right. But we can fight against that instinct in a conscious way, by making our media diet more varied. If you are a liberal, you should read rational conservative publications. And the other way around: if you are conservative, you should read liberal publications, and assume that the people who write for those publications are rational human beings and they are well meaning people. They mean well. They want to inform the public too. So if you approach things with this frame of mind, you will plant a seed of doubt in your own mind, a seed of uncertainty, that later on can help you approach the world in a more rational way. So I took a detour in the answer to the question of how data visualization can help with this process of showing uncertainty, and then help you overcome, at least partially, that uncertainty so you can make a decision about what to think. I certainly believe that data visualization plays a role, and obviously I will write a little bit about visualization, and I will talk about visualization in the lectures as well, and how we can use it. It's only that it's not the only tool that we should use. That's the key thing. 
Visualization is just one of the many tools that we should use to help people understand the world in a more rational way.
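[Editor's note] The "fuzzy cloud" Alberto describes, a band around a trend line instead of a single crisp line, can be sketched in a few lines of matplotlib. This is a hypothetical illustration with synthetic data; the band width here (plus or minus two standard deviations of the residuals) is one simple choice among many, not a method discussed in the episode.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

# Synthetic noisy time series: a 0.5 slope trend plus Gaussian noise
rng = np.random.default_rng(0)
x = np.arange(50)
y = 0.5 * x + rng.normal(0, 4, size=x.size)

# Fit a simple linear model and estimate the residual spread
coeffs = np.polyfit(x, y, 1)
fit = np.polyval(coeffs, x)
sigma = np.std(y - fit)

fig, ax = plt.subplots()
ax.scatter(x, y, s=10, color="gray", label="observations")
ax.plot(x, fit, color="steelblue", label="model")
# The "fuzzy cloud": a shaded uncertainty band instead of only a crisp line
ax.fill_between(x, fit - 2 * sigma, fit + 2 * sigma,
                color="steelblue", alpha=0.2, label="uncertainty band")
ax.legend()
fig.savefig("uncertain_trend.png")
```

The point of the shaded region is exactly what Alberto argues for: the reader sees a range the line could plausibly fall in, rather than one precise-looking path.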
Data visualization and the politics of science AI generated chapter summary:
Data visualization is just one of the many tools that we should use to help people understand the world in a more rational way. The same graphic can be interpreted in multiple ways. The visualization alone doesn't give you the truth of the story.
Enrico BertiniYeah. Alberto, I think what you just said opens so many doors that I don't know where to start.
Alberto CairoYeah, I know. I'm trying to tie too many things together, perhaps, but I believe that there is a connection between all these things.
Enrico BertiniOh, yeah.
Alberto CairoNo, there is something. There's a rational argument to be built connecting the fact that people tend to fall victim to confirmation bias, the fact that people tend to create ideological bubbles around themselves, the fact that that makes them feel too sure about their own opinions, and the fact that we should pierce that bubble from the outside. So other people should pierce our bubbles, but we can also work from the inside of our own bubbles, piercing the bubble from the inside ourselves, rationally.
Enrico BertiniYeah.
Moritz StefanerI think one of the problems in the US, especially right now, is this big partisanship: whatever you say, you're taking a side, like either you're for or against us, and this seems to happen on every topic. And then, of course, you have much more rhetoric going on, or everything is suddenly rhetoric, and you can't have a neutral or a two-sided conversation very easily anymore if everything is an argument for something. Right?
Enrico BertiniYeah. And I think there is even more than that. I think there are lots of people just disengaging from the whole game, because it has become very hard to speak rationally. Even if you try to do it, there's just no way, because you get lots of strong reactions. So that's another issue there.
Alberto CairoYeah. And we must not forget that visualization is also rhetoric.
Enrico BertiniOh, yeah, absolutely.
Alberto CairoAnd statistical reasoning is, too. So one of the problems is that people who know little or just a little bit about numbers and about graphics, I believe, either consciously or unconsciously approach numbers and visualizations as if they were the only explanation for the phenomena they want to understand. And those are just models, models of reality, and they need to be interpreted. So the same graphic, and I'm going to give you a specific example, can be interpreted in multiple ways. If you take a county-level map of the United States and you map the results of the past election, the November elections, you will get, obviously, a lot of red, because most counties went for Donald Trump, and just a little bit of blue, because fewer counties went for Hillary Clinton. You can interpret that map in two different ways. You can just say, well, Trump won in a landslide, because he won in a huge majority of counties. Or you can take into account the population of those counties, and if you do that, you will see that it was actually Hillary Clinton who won the popular vote. Now, I would say that both stories are true. It's only that you need to interpret them correctly, and the visualization alone doesn't give you the truth of the story. The visualization is just one tool that can help you with the rational argument that you need to make. Obviously, I tend to side with the second reading; I believe that we need to take the population of those counties into account. But if you rely only on the visualization, it is true that it shows you that Trump won in a landslide, because that's what the graphic shows. But it's not what the interpretation of the data should show. So there's a difference between those two things.
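The two readings of that county map can be sketched as a toy computation; the numbers below are invented for illustration, not real 2016 results. Counting counties won gives one answer; summing votes, which weights by population, gives the other.

```python
# Toy county-level election data (invented numbers, not real 2016 results).
# Each tuple holds (votes_for_A, votes_for_B) in one county.
counties = [
    (6_000, 4_000),      # many small counties lean toward candidate A...
    (5_500, 4_500),
    (7_000, 3_000),
    (6_500, 3_500),
    (5_200, 4_800),
    (400_000, 700_000),  # ...while a few populous counties lean toward B
    (300_000, 550_000),
]

# Reading 1: count counties won by each candidate.
counties_a = sum(1 for a, b in counties if a > b)
counties_b = len(counties) - counties_a

# Reading 2: sum the actual votes, so each county counts by its population.
popular_a = sum(a for a, _ in counties)
popular_b = sum(b for _, b in counties)

print(f"counties won: A={counties_a}, B={counties_b}")    # A=5, B=2
print(f"popular vote: A={popular_a:,}, B={popular_b:,}")  # B wins
```

Both printouts are "true"; they just answer different questions, which is exactly why the map alone cannot settle the interpretation.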
Moritz StefanerYeah, and of course there's a difference. I mean, you can do a lot of stuff that is technically, theoretically true, and you can do that in visualization, but you can also do that with words or numbers, however you like. But you need to take into account what it evokes for people, whether you achieve the right effect, basically: whether somebody who looks at your map gets the right idea. I think you covered this really well in your books, this whole journalistic aspect of data visualization: that you're actually communicating something, and that this has to be well researched and truthful. Of course, you know, otherwise we don't.
Alberto CairoEven need to start.
Moritz StefanerYou know, if this is not our maxim, why even start? But can I ask something about uncertainty? I think that's an interesting point. So before, you said the problem is often too-simple messaging and the need for a simple bottom line or a catchy headline, if I understood you right, and that we should communicate more of the scientific thinking, or the uncertainty behind the numbers. But can I ask a tough question in this context? How about climate change? We have really good evidence that climate change is real; we just don't know exactly how real it is, or whether the earth will warm three degrees or two. And funnily, this actually leads a lot of people to think the whole phenomenon is not there. Something like 98% or 95% of scientists, I don't remember the exact number, agree climate change is real; it's just not clear to which degree. But less than half of people in the US believe it's real. So wouldn't that be a case for communicating maybe less detail? What's your take on that?
Understanding the uncertainty of climate science AI generated chapter summary:
Less than half of people in the US believe climate change is real. The problem will be better understood if you have a better grasp of what uncertainty means. It is hard to communicate uncertainty well, to make people understand how to make decisions under that uncertainty.
Alberto CairoNo, I don't think so. I think you can take the problem two ways. Understanding that climate change is real will come more easily if you have a better grasp of what uncertainty means. It's mainly the people who don't understand uncertainty or confidence levels who tend to deny climate change, because they say, well, this is completely uncertain, so we should not do anything. But that is not what I mean by understanding uncertainty; that's the opposite of understanding uncertainty in a model. Let me give you an example. I talked a little bit about uncertainty at the recent NICAR conference, the computer-assisted reporting, investigative reporting conference. That was right after Tapestry, by the way, which Enrico mentioned before. So I attended this other conference, NICAR, and I gave a lecture with some preliminary thoughts about how hard it is, first of all, to communicate uncertainty well, and second, to make people understand how to make decisions under that uncertainty. And one of the examples I used was the cone of uncertainty that scientists usually draw around hurricane predictions, a kind of map that I see every single year living here in Miami. If you have ever seen those maps: scientists give you an estimate of where the hurricane will go as a data point, so they show a little dot on the map, and then surrounding that dot you have a cone of increasing size. Now, there is research already on how people read that map, and perhaps we can link it from the podcast page; I will send you the links. And it shows that many people think that the cone of uncertainty is actually the size of the hurricane.
Moritz StefanerOh, my God.
Alberto CairoSo they believe that the hurricane gets bigger and bigger. But just think about it: it's the visual metaphor that unconsciously leads you there, because it looks like a hurricane. And not only that, there is an even bigger problem, and this is something that many people are not aware of. The cone of uncertainty is only a 66% confidence level. That means that two out of three times the hurricane path will be inside the cone of uncertainty, but one out of three times the path of the hurricane will be outside it. So if you wanted, for example, a 95% confidence level, the cone of uncertainty would be huge; it would be super wide. But then the problem is the opposite: if you show such a wide cone of uncertainty, many people will say, well, scientists don't have a clue where the hurricane is going.
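As a rough sketch of why the 95% cone would balloon, assume the track error is normally distributed (an idealization for illustration; the actual hurricane error model is more involved). Python's standard library gives the interval widths:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, standard deviation 1

def central_halfwidth(confidence):
    """Half-width (in standard deviations) of the central interval
    covering `confidence` of the probability mass."""
    upper_tail = (1.0 - confidence) / 2.0
    return std_normal.inv_cdf(1.0 - upper_tail)

w66 = central_halfwidth(0.66)  # the cone's stated confidence level
w95 = central_halfwidth(0.95)

print(round(w66, 2))        # ~0.95 standard deviations
print(round(w95, 2))        # ~1.96 standard deviations
print(round(w95 / w66, 2))  # the 95% cone is roughly twice as wide
```

So even under this simple model, honest 95% coverage would more than double the cone's width, which is exactly the trade-off Cairo describes: wider honesty reads as "they have no clue."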
Moritz StefanerThat's what I mean. People will think: I should not prepare.
Alberto CairoYeah. So what is the solution for that? I don't have a good one other than showing both: perhaps showing people a specific recommendation, and I explained that in the talk. Show the map with an area highlighted in red saying get out of here now, and perhaps an area in orange saying you should think about leaving. That would be one level. But then also show the level of uncertainty. So you can show both the specific recommendation and the level of uncertainty, particularly for the portion of the public who understands uncertainty, because it's important to show it in order to make that decision. So I think that we need both. We need better and clearer communication, but at the same time we also need to help the public in general increase their knowledge of how to interpret uncertainty and how to make decisions in the face of uncertainty. Better probabilistic thinking, that's the bottom line. So we should aim for both.
Binary Thinking about Hurricane Risk AI generated chapter summary:
We need to get rid of that binary thinking, and we can only do that with better education about probabilities and probabilistic thinking, and more thinking about uncertainty. It's not only uncertainty, it's also risk. Visualization itself is not something that alone will help us overcome all these problems.
Moritz StefanerYeah. And again, it's probably this false binary: either you know everything or you know nothing at all. And reality is always in between, in the sense that just because you don't know exactly what's going to happen, it doesn't mean you don't have a best bet, or a rough sense of what the space of possibilities is, or what's more of a problem or not. So maybe the problem is, again, this wrong dichotomy of either somebody's brilliant or a total idiot.
Alberto CairoYeah. That's a problem with binary thinking, which is what I was referring to first. It's like: I know something or I don't know something. So people who know very little about a topic tend to be delusional and start to think that they know everything there is to know about that topic. That's a problem of binary thinking. We need to get rid of that binary thinking, and we can only do that with better education about probabilities, probabilistic thinking, and uncertainty. So all these elements are in place, right?
Enrico BertiniYeah, absolutely. And I think there is another parameter to take into account, which is risk. Every time you have uncertainty, you also have to weigh what kind of risk you are facing. You might actually have a lot of uncertainty, but you still just don't want to risk the worst-case scenario. So even if it's highly improbable, since it's a disaster, you may want to give it much more weight than you would give to other situations. I think that's another very, very important element, and I'm surprised it's almost never put into the equation. It's not only uncertainty, it's also risk.
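Enrico's point is essentially the expected-loss calculation from decision theory: weight each outcome's loss by its probability. A minimal sketch with invented numbers:

```python
# Invented scenarios: probability of occurring and loss if it occurs.
scenarios = {
    "frequent nuisance": {"p": 0.30, "loss": 10_000},
    "rare catastrophe":  {"p": 0.02, "loss": 5_000_000},
}

# Expected loss = probability * loss, computed per scenario.
expected = {name: s["p"] * s["loss"] for name, s in scenarios.items()}

for name, loss in expected.items():
    print(f"{name}: expected loss = {loss:,.0f}")

# The rare catastrophe dominates (100,000 vs 3,000) even though it is
# highly improbable: the stakes enter the equation, not just the odds.
```

This is why "it probably won't happen" is not, on its own, a reason to ignore an outcome.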
Moritz StefanerAnd some outcomes might be worse than others. People might be very upset if you miss a bad prediction, if you say things are going to be fine and then bad things happen; they might be less upset if you say bad things are going to happen and then something good happens, you know.
Enrico BertiniRight.
Moritz StefanerSo in weather forecasting, there's the notion of wet bias: for a public announcement of rain probability, they usually boost it a bit, especially if it's just ten or 20%, so people are not disappointed in case it really rains.
Alberto CairoYeah. And the problem, connecting to what Enrico was saying, is that there is uncertainty that we can visualize, and there is uncertainty that we cannot visualize, that we can only explain. The uncertainty that you can visualize is the uncertainty that you can compute, and we can call that risk. What is the risk of being hit? You have a 70% risk, or a one-out-of-ten probability, of being hit by the hurricane. That's uncertainty that you can compute, and therefore uncertainty that you can show in a visualization. But in the case of hurricanes, going back to that theme, there are other sources of risk. Another thing that researchers have seen is that when people see the cone of uncertainty, they tend to focus too much on the risks posed by the hurricane itself, by the wind speed, and not so much on storm surge, for example, the risk of flooding in your household, because that is not visualized. But that needs to be explained as well. So again, visualization itself is not something that alone will help us overcome all these problems. It's just one piece of the equation. Another one is verbal or written explanations: a bullet-point list. Do this, do that. You have a risk of this, a risk of that. You should be ready for this, be ready for that. That annotation layer, to steal the term from Amanda Cox from the New York Times, that annotation layer in the visualization is essential. I think that many times, we visualization designers forget how important it is to write things in our graphics, to point out important features of the data, explain them, highlight them, etcetera.
Moritz StefanerOne technique that popped up recently, in the last one or two years, I know Jessica Hullman is working on it, but many others too, is this: rather than showing just the distributions, saying this is the confidence interval or this is the range of expected values, which is a very summarizing, scientific way of thinking about it, this alternative approach is to show actual instances of things happening. In the hurricane case, it would be: this is one path the hurricane might take, here's another path, here's another path, roll the dice again, see what happens. So instead of showing the distribution straight away, you show a lot of instances of what's possible. And I think that's a very interesting technique. The New York Times had something like this for the election predictions last year in the US, with a little gauge display where the needle was jumping around all the time. I know it drove people a bit crazy, but I think it was also an interesting way to approach this idea that our measurement is uncertain.
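That idea can be sketched in a few lines, in the spirit of these hypothetical-outcome displays (the forecast mean and spread below are invented for illustration): instead of reporting one summary interval, you repeatedly draw individual outcomes from the forecast distribution and show them one at a time.

```python
import random

random.seed(0)  # reproducible draws

# Invented forecast, summarized by a mean and a standard deviation.
mean, sd = 2.5, 1.0

# Summary view: a single 95% interval (normal approximation).
low, high = mean - 1.96 * sd, mean + 1.96 * sd
print(f"summary interval: [{low:.2f}, {high:.2f}]")

# Hypothetical-outcome view: individual draws, presented one at a time,
# so a viewer experiences the variability instead of decoding an interval.
outcomes = [random.gauss(mean, sd) for _ in range(8)]
for outcome in outcomes:
    print(f"one possible outcome: {outcome:.2f}")
```

An animated version would simply show each draw in sequence, which is what made the jumping election needle both vivid and unnerving.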
Alberto CairoYeah, it was super controversial. But I believe that it was controversial precisely because many people aren't able to deal with uncertainty. We just want black and white. We just want, you know, a 0% or a 100% probability of something happening. We want that kind of precision, which is completely bogus, because predictions or estimates are never precise in that sense. Again, there is a cloud of uncertainty surrounding them.
Enrico BertiniYeah. Alberto, I'm wondering, speaking a little more technically: do you have any favorite visualization techniques for expressing uncertainty, or anything brilliant that you've noticed out there? I'm pretty sure you've been reviewing some of the material that exists. Or even just suggestions for visualization designers on how to deal with this problem in general.
No easy answer to how to visualize uncertainty AI generated chapter summary:
The general public doesn't have the same understanding of uncertainty. Not many new methods of visualization were developed until the middle of the 20th century. The problem is that the general public has adopted methods that were devised in the 19th century, and doesn't even understand them well.
Alberto CairoYeah. All right. So there's not an easy answer to that. It's true that I have been reviewing the literature on how to visualize uncertainty and how to convey that level of uncertainty to the public, and there are no good answers. There are techniques, obviously, to show uncertainty; there are papers explaining different techniques, using fuzzy boundaries or bean plots and so on. The problem is that those techniques work relatively well with specialists. People who already know how to deal with uncertainty are able to read those charts and say, okay, so this is the level of uncertainty around the point estimate. The general public doesn't have the same understanding of uncertainty, therefore they cannot read those charts well, and they may need captions explaining that the line chart is showing this, and we are pretty certain that this is the truth, but there is a little bit of uncertainty around it, and this is what the uncertainty means. And that is where you explain standard deviations or confidence intervals or whatever measure you are using. So the textual explanation here is crucial. One of the problems we deal with, by the way, and this is something that I touched upon in the talk I gave at NICAR, and again, these are all preliminary thoughts, because I'm still reading and thinking about them, is a historical problem. The most popular visualization forms, like the line chart, the pie chart, the bubble chart, data maps, etcetera, were created between the 16th and 17th century in the case of data maps, and then, in the case of statistical charts, well, the golden age of statistical charts began at the end of the 18th century.
And the actual golden age, according to Michael Friendly, who wrote a paper about these, was in the middle of the 19th century when you had, you know, people like Jon Snow or Florence Nightingale, etcetera, developing new way, or Francis Galton, who, you know, they did scatter plots, etcetera. So all these methods were created between the end of the 18th century and the end of the nineties century. But then what friendly explains, and this is actually something that made my mind click a little bit when I was reading all these papers, was that the first golden age of data visualization? Because there is a second one later on. So the first golden age of data visualization ends at the end of the 19th century and beginning of the 20th century, which is exactly when the golden age of prediction and uncertainty and experimental methods, etcetera, begun. Right? So people like Fisher, etcetera, developed experimental testing, etcetera, at the beginning of the 20th century. At the same time that we were living through the first dark ages of data visualization, we were living through the golden age of a predictive and uncertainty, predictive statistics and uncertainty in statistics, etcetera, etcetera, etcetera. So there's a mismatch, because not many new methods of visualization were developed until the middle of the 20th century, the last third of the 20th century, when the second golden age of data visualization began, with John Tuckey and all the statisticians who started writing about exploratory data analysis. Right? So what is the problem that the general public has adopted and understands methods of visualization that were devised in the 19th century, and they don't even understand it really, really well. I mean, there was a Pew Research center review a few months back that showed that among the readers, only two thirds could read a scatter plot correctly, it was a majority, but not a huge majority. 
And the scatter plot is a method of representation that was created in the first golden age of visualization, in the 19th century; Francis Galton used them. So even the public today doesn't understand many methods that were devised in the 19th century. Now try to imagine the general public trying to understand methods of visualization that were created at the end of the 20th century to display uncertainty. So there is a trickle-down effect in data visualization. Methods of representation are first created by specialists: statisticians, computer scientists, etcetera. Then the media start adopting them, and they start appearing in newspapers and websites and textbooks. And as a consequence of that, the general public also starts understanding those methods. We are in that third stage for methods that were created in the 19th century, but we are still in the first, or perhaps beginning the second, stage for methods of representing uncertainty that were created at the end of the 20th century or the beginning of the 21st. So the historical problem is also crucial.
Enrico BertiniYeah. And I have to say that, in my view, it's even more complicated than that, because at the same time, something similar is still happening in science and research in general. There are lots of studies out there showing that even scientists who regularly use certain statistical techniques or certain types of charts can be literally wrong in their decisions when you ask them to estimate the probability of something by looking at a chart, even charts that they developed on their own. There is an amazing book that I read a couple of years back by a guy named Geoff Cumming, I don't know if you are aware of his work, and he's been pushing very hard for switching from null hypothesis significance testing, the way it's traditionally done, to confidence intervals, which offer a much more, say, visually oriented analysis of the results and ways to inspect a given hypothesis. And this is still happening right now. There are even studies showing how resistant scientists are to these new techniques, even though they've been demonstrated to be pretty powerful. So I just want to point out that this is happening not only in communicating information to the general public; even people with a lot of expertise are struggling, while also developing new methods. I don't want to give the impression that I'm too pessimistic, but it looks to me that there is such a long way to go here, because if even a person who has been trained for years at this particular kind of work can't give correct answers, it looks to me that we have a long way to go. Again, it sounds a little pessimistic; I'm actually not that pessimistic, but I want to point this out and see what your reaction is.
Alberto CairoWell, my reaction is that if scientists themselves, not all scientists, but some, don't understand those methods well, and sometimes don't even understand how to deal with uncertainty, try to imagine the general public. It's even worse. And this is where the thinking about uncertainty and probability connects with the thinking about bullshit and trumpery. Because the fact that people don't understand, and in some cases don't want to deal with, uncertainty may lead them to form opinions that are completely unfounded. And because of confirmation bias, we tend to stick to those convictions. We don't want those convictions to be challenged by other people. And then we create agendas around those convictions and opinions, we enclose ourselves in ideological bubbles that reinforce those convictions, and as a result, people start using bad data to push their own agendas. And that's the point where trumpery comes in. So it's a self-reinforcing process in which all these elements are somehow connected to each other. And this is the path that I would like to trace: somehow show the connections between all these elements, and then try to find a place where we can break the cycle, either ourselves or with the help of other people.
Moritz StefanerAnd from my view, this is actually the bigger issue, because uncertainty and prediction are important to understand, and I think it's great if more work is being done there, but maybe it will always stay sort of a nerd's or a specialist's activity to figure out whether the exact probability that something will happen is 60 or 65%. It's very theoretical. But what's really striking is also what you mentioned now: how much our existing convictions can shape what we even consume in terms of information, and what we even let into our minds in the first place. This is a huge topic. I think we could do a whole other episode on cognitive biases, all these little mechanisms that lead us to wrong decisions or wrong readings of the same objective pieces of information.
Alberto CairoYeah, there's research about that. There is research on how reading the same information may lead two different groups of people to form completely different opinions, depending on where their starting point is. So again, what I would like to help people who attend the lecture, and by extension people who hear from people attending the lecture, to do is to insert uncertainty into their own thinking. Uncertainty, by the way, understood in a very broad sense, not just mathematical uncertainty. There are two kinds of uncertainty. You have, again, uncertainty that you can compute, that you can estimate and predict, at least to a limited extent. That's uncertainty that you can calculate. But there is also a broader kind of uncertainty that is more qualitative. If you think about surveys, for example, the way the questions are worded can also add uncertainty to the estimations you draw from that survey. So again, I want to sow doubt in people's minds: you should not be so certain about your own opinions. You can and should have your own opinions, but you should be forced to think deeply about your own convictions and try to reason about them. And if you cannot explain them to other people, then your conviction is probably wrong, or you should just go back to the drawing board and try to come up with a better explanation. The research, by the way, that you mentioned, showing that the same message shown to two different groups of people makes them form different opinions: there is a way to fight against that. There's a wonderful book that was published very recently titled The Knowledge Illusion. It's very interesting.
The uncertainty in our thinking AI generated chapter summary:
Uncertainty is not just mathematical uncertainty. There is also a broader kind of uncertainty that is more qualitative. I want to sow doubt in the minds of people saying, you know, you should not be so certain about your own opinions. The way you change minds is not to ask people to think.
Moritz StefanerThat's an encouraging title.
Alberto CairoYeah, The Knowledge Illusion. It explains part of this problem, and it describes the research that you mentioned. It also points to additional research on how to make people think and how to change minds. And the way you change minds is not to ask people to think. Because if you ask people to think, even using visualization, they will stick to their own positions and reinforce their opinions even further; they will use the evidence presented to them to solidify their own opinions even more. What you need to do is ask those people to explain their opinion to other people, to outline the reasons why they think that some statement or opinion is true, and explain that to others. Because when they do that, people start realizing that their opinions are not reasonably founded, that they don't have very good foundations. And by the way, when I say people, I mean everybody, including me, including you, including everybody else. That is the sense in which I say that we should add more uncertainty to our own thinking. Right. If we cannot explain something clearly, perhaps we should not publish that particular opinion on Twitter, which is something that I sometimes do myself.
Moritz StefanerYeah. I mean, overall, maybe a bit more humility on all sides is probably a good way to go.
Alberto CairoExactly. Exactly.
Moritz StefanerSo I understand you have a lecture series about this coming up. Can you tell us a bit more about that, for people who are interested in hearing more about these topics?
Alberto CairoYeah. So, as you can notice, I still don't have well-formed ideas about any of this. I have a very broad understanding of how all these elements connect to each other, and that is what I would like to explain in the lecture series. So the lecture series will not be exclusively about bad visualization and how to create more truthful visualization. That will be a core component, obviously, but it will be a little bit broader than that. The goal is to give the people who attend tools for better thinking, and then reading materials and other resources that they can use to inform themselves further, and then spread the word to help other people educate themselves. I would like this lecture series to be like the seed that will later expand into something bigger, with the help of dozens or hundreds of volunteers who may come to the lectures. The way it happened is that I read about the word trumpery and said, well, this is exactly the word that I was looking for, for the project that I would like to undertake in the future, either a lecture series or a book or both. So let's use the word trumpery. I announced this on my blog, saying, you know, I am going to do this series of lectures, initially called Visual Trumpery but now just called Trumpery: how to fight against bad data, fake data, and alternative facts from the left and from the right. Because it's going to be bipartisan, and I will try not to mention any specific politician if possible. And then I said, you know, I am going to waive my fee. So the only thing that you need to do if you want to host a trumpery lecture in your city is to contact me. You send me an email saying, I live in this city, and I would like to host your talk. And the only requirements that I have are, first of all, that you pay for my flight, in coach class.
I don't need to fly business or anything. Then a bed to sleep in, which could be a hotel or could be at your home, and a glass of wine and a good dinner for the night.
Lectures on Trumpery AI generated chapter summary:
A new series of lectures will focus on how to fight against bad data or fake data. The lectures will be free and open to anybody. More than 30 cities have already confirmed to host the lectures.
Moritz StefanerA bottle of water, maybe?
Alberto CairoYeah, a bottle of water. A couple of bottles of water. And then that you make the lecture open to anybody, and free. Those are the only requirements. I have already received requests from more than 30 cities, and I have started organizing the schedule. Some cities are already confirmed. So I am going to San Francisco in October. I will probably go to Chicago during the summer, etcetera. Mexico City also, I will go there in the summer. But for the other 25 or 28 cities, I still need to figure out what the dates will be, and I will announce them publicly as soon as I can confirm them. I couldn't fit them all in the fall semester of 2017; I had to push a few of them to the spring of 2018 because I didn't have enough room in my schedule to fit them all. So let's see what happens. It's a crazy project, but I think that it's a worthwhile project.
Moritz StefanerSounds great. Let's hope you get a few more invitations after this podcast, so you can go on until 2018, 2019. We can make t-shirts: the big Trumpery world tour.
Alberto CairoExactly. Like Bruce Springsteen, right? The Trumpery tour. We'll see. It's a crazy project, but as I said before, I think it's something that can perhaps ignite people to engage more with the information they receive, and that can help us all be a little bit more wary about how we handle data and how we visualize those data correctly and truthfully. And it's a contribution to the field as well, because in the end I may get some profit out of it, particularly if I write a book about it, but that is not the goal. The main goal is to do the lectures and make those lectures an instrument that people can use later on to help their relatives or friends also deal with data and with graphics better, and become a little bit more skeptical about the things they see in the media. Because another problem that we have is media organizations that engage in trumpery. These are organizations on the ideological fringe. In the United States, for example, we have organizations like Breitbart or Infowars that constantly engage in trumpery: they use data in a misleading way on a regular basis. Those are the kinds of media organizations that we must avoid at all costs, because they are ideologically driven. We should stick to media organizations, on the right and on the left, that have a record of good information and, more importantly, a record of verification and of acknowledging mistakes. When they make a mistake, they issue a correction. And there are many organizations like that all across the ideological spectrum.
Enrico BertiniYeah. Alberto, I have a kind of hard and maybe provoking question here. One thing I noticed is that people like us, who are working in visualization or in data in general, tend to think that almost everyone thinks the same way we do, and we tend to preach to the choir a little bit. And it looks to me that a big challenge, one that is clearly also related to your trumpery initiative, is how to deal with that. I'm pretty sure that many people who will attend your lectures will already be sold on the message that you want to give. Right? I may be wrong, I don't know. That's my hypothesis.
The challenge of communicating your opinion AI generated chapter summary:
Alberto: How do we actually reach out to the people who need this message the most? He says the message needs to be delivered by someone whom you trust. Other strategies include asking people to explain their own opinions.
Moritz StefanerI have a problem.
Enrico BertiniSo how do we actually reach out to the people who actually need this message the most? Or shouldn't we even try to do that? What's your opinion?
Alberto CairoOh, yeah, we should absolutely try, but it's not something that I will do myself. Remember that I said I want the lecture, and perhaps the book, to be an instrument that other people can use in their own communities. So I am not going to do that myself. I mean, I have my own bubble. I have tried very hard to pierce my own bubble and expand my own media diet recently, but I am not the one to talk to your uncle or to your sister. You need to do that yourself. And this connects, by the way, to some strategies that I am thinking about including, and again, this is just a preliminary thought, about how to make messages, visualization itself but also other kinds of messages, more persuasive and more convincing. One of them is obviously to make the message not only effective but also affective. And by that, I mean that the message needs to be delivered by someone whom you trust. Right? That's the reason why it should be you who talks to your uncle, or you who talks to your sister, etcetera. It is different if a liberal elite from a university delivers the message than if you deliver it; the message will be more credible if it is you who delivers it. People tend to trust messages that come from people they already know. So that's one of the first things. But there are also strategies like the one that I outlined before, about asking people to explain their own opinions. So try to explain your own opinions, and if you can't, then perhaps the foundations of your opinion are not as solid as you thought. But also ask other people to do the same. So when you have a discussion, very openly ask people, well, why do you oppose this policy, or why do you favor it? Explain it to me, because I really want to know. And this is something that I have done myself in the past few months.
I have engaged with people from the opposite side of the ideological spectrum. I'm not going to give names, but people who work in media organizations that I have started reading in the past year or so, because I really want to understand where they come from. And this is a very important thing: if you want to challenge your own assumptions, always assume, or in most cases, with some exceptions, assume that people mean well. That is the first thing. In general, most people don't want to tell lies. So if you begin with that assumption, you open yourself up to being persuaded and convinced by the reasons that other people may give you. Then you can weigh those reasons and try to understand the explanations the other person is giving you, and you can demand those explanations: explain your position to me. If they cannot do it, perhaps their opinion doesn't have very good foundations. But if they can explain it clearly and well, perhaps then it will be you who is persuaded of the quality of their position. And that is also enriching. It's super important to do that as well.
Enrico BertiniYeah, I have to say, it's really hard. Really, really hard.
Alberto CairoWell, but the fact that it is hard doesn't mean that we should not.
Enrico BertiniYeah, I agree. I totally agree.
Alberto CairoYeah. Going to the moon is hard, but we went to the moon, so.
How to support data stories! AI generated chapter summary:
Alberto: I'm learning a lot about how we think. And again, visualization plays a role. Here are a few ways you can support the show and get in touch with us. See you next time.
Enrico BertiniOkay, Alberto, well, I think we have to wrap it up here. As we said at the beginning, we could go on forever on this topic. There are so many different facets, and it's such a hard topic to cover completely. So maybe we should have you on again once you are done with the tour. It would be nice to hear what you have learned. Right?
Alberto CairoYeah, of course. I'm learning a lot. I'm learning a lot about how we think, and I'm learning a lot about doubting my own opinions in many cases, and I believe that that is great. Is it hard? It's super hard. Is it painful? It's super painful. But it's necessary. This is what can help us overcome many of the challenges that we are facing today, including ideological polarization. And again, visualization plays a role. It's an important component of all this.
Enrico BertiniOkay, well, thanks a lot, Alberto. Best of luck with all your amazing plans and your tour. I hope to actually see you in New York. I've heard there are plans to have you here as well. And, yeah, see you next time.
Moritz StefanerThanks, Alberto.
Alberto CairoSee you. Thank you so much for having me again. Bye-bye.
Enrico BertiniHey, guys, thanks for listening to data stories again. Before you leave, here are a few ways you can support the show and get in touch with us.
Moritz StefanerFirst, we have a page on Patreon where you can contribute an amount of your choosing per episode. As you can imagine, we have some costs for running the show, and we would love to make it a community-driven project. You can find the page at patreon.com/datastories.
Enrico BertiniIf you can spend a couple of minutes rating us on iTunes, that would be extremely helpful for the show. Just search us in iTunes store or follow the link in our website.
Moritz StefanerAnd we also want to give you some information on the many ways you can get news directly from us. We're, of course, on Twitter at twitter.com/datastories. But we also have a Facebook page at facebook.com/datastoriespodcast. And we also have a newsletter, so if you want to get news directly into your inbox, go to our homepage and look for the link in the footer.
Enrico BertiniAnd finally, you can also chat directly with us and other listeners using Slack. Again, you can find a button to sign up at the bottom of our page. We do love to get in touch with our listeners, so if you want to suggest a way to improve the show, or if you know amazing people you want us to invite or projects you want us to talk about, let us know.
Moritz StefanerThat's all for now. See you next time, and thanks for listening to data stories.
Enrico BertiniData Stories is brought to you by Qlik. Are you missing out on meaningful relationships hidden in your data? Unlock the whole story with Qlik Sense through personalized visualizations and dynamic dashboards, which you can download for free at Qlik Datastories.