Calling Bullshit with Carl Bergstrom and Jevin West
Jevin West: You know, bullshit. Once it gets out there, once falsehoods get out there, it's really hard to clean up the system.
Moritz Stefaner: Are you missing out on meaningful relationships hidden in your data? Unlock the whole story with Qlik Sense through personalized visualizations and dynamic dashboards, which you can download for free at Qlik Datastories. That's Qlik Datastories. Hey, everyone, it's a new Data Stories. Hi, Enrico.
Enrico Bertini: Hey, Moritz. How's it going?
Moritz Stefaner: Good, very good.
Enrico Bertini: Excited for this episode.
Moritz Stefaner: Yeah, it's another good one, I think. So shall we dive right in? Yeah, we should. So there's a couple of connections here, and it's a really interesting one. It's the second one in a series, basically. Some of you might recall we had Alberto Cairo on the show to talk about trumpery and misinformation, and now we have a second one in this series on the relation of data and truth and lying, and maybe bullshitting, too. And we have two real experts here in the latter area, the area of bullshitting: Carl Bergstrom and Jevin West. Hi, guys!
In the Elevator With Data and Truth AI generated chapter summary:
Carl Bergstrom and Jevin West talk about the relation of data and truth and lying. Their core area is data science around social issues. The course is central to what they try to do in their lab and also on the education front.
Jevin West: Hi, Moritz.
Carl Bergstrom: Hi, Moritz. How are you doing?
Enrico Bertini: Hi, guys.
Carl Bergstrom: Hi, Enrico.
Moritz Stefaner: Great to have you on. Can you briefly introduce yourselves? Who are you, what are you doing, and what's your area of expertise, besides bullshitting, maybe?
Jevin West: So this is Jevin. I'm an assistant professor at the University of Washington. I co-direct what's called the DataLab in the Information School. Our core area there is data science around social issues: we work in developmental economics, we work in the science of science, we work in computational social science, we work in data curation. So, everything around the social issues of data science. And we have ten PhD students, other faculty, postdocs, et cetera. This particular course that Carl and I are going to talk to you about is, I think, central to what we try to do in our lab, and also on the education front.
Carl Bergstrom: And I'm Carl Bergstrom. I'm a professor of biology at the University of Washington. I trained as an evolutionary biologist, but I do a lot of work on the structure of science and how that impacts the conclusions that we draw these days. Jevin and I have worked together for a number of years on things around data visualization, data analysis, and the like.
Moritz Stefaner: Yeah, and that's also how we know each other, coincidentally, because we worked together eight years ago on a project called Well-Formed Eigenfactor, where we looked at information flow in science. An exciting project, and, yeah, it's great to connect again.
Jevin West: Yeah, even today, after that many years, people are still looking at that project, and using it as a model for many other projects that have spun out of the science of science. So, yeah, it's a really fun connection that's being made right now. The audience doesn't know how special it is, but I hope they do now.
Moritz Stefaner: And there's another connection, even, because last episode we had Jarke van Wijk on the show, and he invented the hierarchical edge bundling technique that we used in that project. So lots of connections all over.
Enrico Bertini: It's all connected.
Jevin West: The universe is connected. It's all connected.
Moritz Stefaner: Exactly. But today we talk about a different topic. The two of you launched a course together at the University of Washington, and it's called Calling Bullshit. It's a very bold title, and I hear the course raised a lot of attention before it even started, right? So why don't you tell us a bit about it, how you came up with the idea, and what you cover in the course?
Calling BS: The New Science Course AI generated chapter summary:
The University of Washington's new course is called Calling Bullshit. The focus of the class will be the ways in which data can be manipulated. The class aims to teach students how to spot and call bullshit. It has already attracted tens of thousands of registered students.
Jevin West: Sure. So Carl and I have for several years been sharing examples of bullshit that we find in science and things that we find in social media. And pretty soon the file was getting so big, and the examples were getting so egregious, that we thought we should teach a course on this and transfer some of the skills that we use in our daily lives of peer reviewing papers and reports and proposals. It could be probably the most important skill that students learn. And today of all times, people are throwing data everywhere: on their own websites, in news stories. Somehow data almost looks more truthy than even rhetorical arguments. There have been lots of courses in the humanities and philosophy about critical reasoning, but the focus of this class will be the ways in which data can be manipulated.
Carl Bergstrom: Yeah. So just to take you through what the class does, in a brief summary: we start out by thinking about what bullshit is in the first place. There's actually a reasonable amount of philosophy written on what bullshit is, and some argumentation about whether bullshit is in the eye of the producer or the eye of the beholder; there has been a nice debate in the philosophy literature. Then we think about the way we're talking about calling bullshit, and how calling bullshit is itself a speech act. It's not a proposition; it's an active thing that you do that has a certain force. So we talk about that, and we talk about how to do that without being an asshole, which is, I think, kind of important, too.
Moritz Stefaner: Yeah, yeah.
Carl Bergstrom: From there, we look at how you spot bullshit in the first place. How can you be on the lookout for it? We try to instill these habits of mind where you're constantly thinking as you're reading stuff, whether it's in the New York Times or on Facebook or wherever: is this right? Does this make sense? Is this even feasible? And getting in the habit of digging deeper into things that you actually care about, digging deeper before you share something stupid, for example. Then we talk about correlation and causation, because that's a common area for bullshit. We talk about statistical traps and the kinds of things that can trip people up. As we're going forward, we're going to be looking at visualization and the way that people mislead others with visualizations. And we're going to talk about big data, and the way that the cult of big data is used to promulgate extreme bullshit.
Jevin West: Well, the other thing is, we want to talk about the ethics of calling bullshit, because in a lot of cultures it's really scary for an individual, especially the younger students that we talk a lot with, to call bullshit. But we want them to be able to call bullshit, and also to receive the calling of bullshit. So we'll talk about the ethics of calling bullshit. We'll also talk about what we call the natural ecology of bullshit: the environments in which these kinds of falsehoods fly faster. So we'll talk about social media environments, and about some of the papers being written by researchers in this area. But what's crazy is, as you mentioned, how this class took off. When we released the course on January 11, we went to bed that night thinking, well, I hope a couple of our friends think this is cool, because we put a lot of effort into it. And we woke up, and there were just tens of thousands of people from all over the world. And when we opened registration for the class, it filled within a minute. So I think it's really resonating, and we want to take advantage of this moment where critical reasoning and data reasoning is now cool with students. But the other thing, since you guys are on the radio: here's a question I'm going to flip back at you, because you guys know languages better than Carl and I do. When we in the English language call bullshit, it carries a certain weight, and there are very few synonyms that mirror that particular calling. So what about other languages? Do you know how it's used there? I think it would be useful, because you have such an international audience for this show: are there different ways in which it's used in these different languages?
"Callous" in English vs German AI generated chapter summary:
In German, we actually use the english word. I don't think there is an equivalent in Italian. We're working with some individuals to translate our course material into other languages. We want to be sensitive to how it's being used.
Enrico Bertini: That's a great question. I have to think about it. I don't think there is an equivalent in Italian, but maybe in the slang from the region I come from there's something similar. I'm not sure I can say it, though. Yeah, it's pretty rude.
Moritz Stefaner: It's a worse word than bullshit. It's really bad.
Enrico Bertini: Yeah, yeah.
Moritz Stefaner: In German, we actually use the English word.
Jevin West: Oh, really?
Moritz Stefaner: I don't think there is a German word for it, as far as I can say.
Jevin West: I know in Swedish there's skitbra, and there are very different versions of it, but over time it has lost its sweariness. It's actually a quite neutral term that still means something; at least from what my Swedish friends tell me, if you said it online, it wouldn't be a problem. But it sounds like in Italian it's different, and then there's German. I'm already learning something on the show about how it's used. The reason we care about this is that we're working with some individuals to translate our course material into other languages, and we want to be sensitive to how it's being used. A lot of our content deals with statistics, machine learning, logical fallacies, and that's translatable from one language to another. But there's this calling part: if we want to make this okay in society, we need to be careful about the different languages.
Moritz Stefaner: Yeah, that's true. So maybe we start with the meaning of the word. How, in your view, or in the view of researchers, is bullshit different from lying, or from hoaxes, or pranks, or just misinformation, propaganda? There are a lot of words that all deal with misinformation or wrong information. How is bullshit special?
Philosophers of Bullshit: What Is BS? AI generated chapter summary:
How is "bullying" different from lying or from hoaxes or pranks or just misinformation, propaganda? Our idea is that it's what someone produces when they are indifferent to the truth. One of the things that Carl and I have been talking about in our class as an assignment is to help students become bullied neutral.
Carl Bergstrom: That's a great question. We've thought about that a fair bit. Our idea is that bullshit is what someone produces when they are indifferent to the truth. They're not trying to trick you. They don't care one way or another; they're just indifferent. They're just blustering. They just want you to think that they know what they're talking about. So they use big words, they make stuff up, and they just kind of babble on. And this is what makes it so dangerous, according to the philosophers of bullshit: it is this indifference to the truth. With a liar, you know where that liar stands; with the bullshitter, there's just no connection at all. And I think that kind of bullshit can be very common. It's what we train students to do if they're running out of time and they haven't started their term paper yet: they aren't actually trying to tell you anything, they're just trying to make you think that they've read the book and know what they're talking about. And we do that a lot ourselves, I believe, in normal interactions with people, because we're trying to make a good impression. We're not actually interested in the message we're sending; we're trying to say: I'm smart, I'm really cool, I did this amazing thing, whatever. And that's when we start making bullshit.
Moritz Stefaner: So there's a fine line between just, let's say, trying to impress somebody, or maybe exaggerating a bit, and at some point, if you do that too much, you enter bullshit territory, is what you say.
Carl Bergstrom: I think that's basically right. The thing that we really stress in the course is that bullshit and calling bullshit are really different things. Bullshit is kind of narrow, but calling bullshit is really broad. You can call bullshit on a lie. You can also call bullshit on injustice: something's wrong about the way the laws are set up, and you say, that's bullshit. What we want to stress in the class is not only to look at this narrow-sense bullshit, because I do think that social media is a particularly virulent place for that to promulgate, but also, even for things that are lies, where people are deliberately trying to deceive you: how do you detect that, and how do you refute it?
Enrico Bertini: Yeah. So do you think that, in order to bullshit, you also need some carelessness? You just don't care, right? You're talking about something, and you don't even think about whether you're right or wrong; you just pretend. I'm wondering if it takes some specific kind of personality to do that.
Carl Bergstrom: There's certainly a personality spectrum.
Jevin West: So we're all bullshitters.
Carl Bergstrom: Yeah. I'd be very surprised if there were people that never bullshitted. But maybe.
Jevin West: Well, in regards to that, I would actually assume that everyone, to some extent, bullshits, because we have limited resources of time for whatever we're trying to go after. And one of the things that Carl and I have been talking about for our class, as an assignment, is to help students become bullshit neutral. We live in Seattle, and everyone's trying to be carbon neutral. Could we teach people to start to clean up as much of the bullshit as they create? We're all creating bullshit. Can we figure out ways of refuting bullshit throughout the day and cleaning it up? That starts with the assumption that everyone is, for the most part, creating bullshit.
Enrico Bertini: So that's kind of like a sustainable bullshit thing. Yeah, sustainable.
Jevin West: Got it. Yeah, exactly.
Carl Bergstrom: You know, it's interesting, Jevin keeps coming back to these conservation metaphors, and I'm coming around to this. In the 1970s, we had to really get people to care about their physical environment in the United States. People just threw their trash anywhere and nobody cared, and we had these pretty heavy public information and public relations campaigns to say: let's not litter our environment, this is our country, it's beautiful, let's not make it full of crap. And Jevin keeps coming back to this idea, which I think is not completely crazy, that we need to do the same thing about our information environment in the late 2010s.
Moritz Stefaner: Yeah. We live in information ecologies, and we have to clean them up ourselves. I totally agree.
Jevin West: Yeah, we are. And actually, a big part of that, and you guys are the experts on this, is information visualization. Most of the public might not deeply understand a p-value, or a nearest-neighbor algorithm, or whatever they're hearing about in some technical science paper. But they do think they understand visualization, and they do think they can create visualizations well. You guys know this better than we do, but there's just so much manipulation, sometimes nefarious and sometimes non-nefarious, on the visualization side. So you two are responsible for helping clean that up, too.
Enrico Bertini: I don't know if we can take on this responsibility; it's a big job.
Jevin West: Partly with the show, and the other things you're writing about.
Data Visualization, Decorated or Unreadable AI generated chapter summary:
Moritz had an article about the problems with a lot of data visualization, one of which is number decoration. The idea is that there's all this decoration on there, and none of it is serving to actually convey any extra information. How you select numbers, and in which context, can also be a problem.
Carl Bergstrom: Moritz, you had an article that I read recently that I really liked, where you talked about the problems with a lot of data visualization. You broke things up into two categories, and one of the categories you talked about was number decoration: these dashboards that essentially provide five or six numbers, or maybe eight, but with all these fancy pictures and speed dials and all of this kind of stuff on them. And this strikes me as really classic bullshit, because there's all this decoration on there, and none of it is serving to actually convey any extra information. All it's doing is making you think: wow, that designer is really good, she's got a wonderful eye for color, and she can really use that program, or whatever it is. Which is bullshit: it's communicating about the communicator in the guise of communicating information. And I thought that was really interesting.
Moritz Stefaner: No, there's definitely something to it. But there's also the other side: if you just put out numbers and say, well, the numbers speak for themselves, I don't do anything to them. How you select those numbers, in which context you drop the right numbers or not, and which ones you leave out, can be bullshitting too. So a totally undecorated dataset might be bullshitting too, right? So, you must have studied a lot of different types of bullshit. Are there any tips on how you can detect it? I feel that at some point you develop a certain sense of: ah, this might be bullshit, even if you know nothing about the domain, because there are certain communication patterns, maybe, or some tricks people play. Are there some patterns you have identified that can help detect bullshit?
Can You Tell When Stuff's Just BS? AI generated chapter summary:
There are certain communication patterns, maybe, or some tricks people play. One of the most important ones is just to look at a claim. If it seems too good or too bad to be true, it probably is. Other tips: Ask where the source is coming from, why are they telling me this?
Carl Bergstrom: Definitely. We've done some introspection, trying to think about what it is that we're actually doing when we sense that something is bullshit. One of the most important things is just to look at a claim, and if it seems too good or too bad to be true, it probably is. Even if the claim supports whatever side you like, whatever beliefs you have: if the effect size is really big, maybe implausibly big, it's probably bullshit, and you need to look into what's going on there. So that's one of the things we're really teaching our students: always question that, and be well aware of confirmation bias, the idea that if somebody says something I don't believe is true, I'll say bullshit, but if someone says something I do believe is true, I'm likely to just say: oh, yeah, sure, that's got to be right. Checking all of those kinds of things is super important. One of the things we're really stressing in the class, since we're thinking about the way people bullshit with facts and figures and algorithms and statistical analyses, is that you don't have to necessarily understand the statistical analysis at the heart of a claim. If you think of the statistical analysis as a black box, you can think carefully about what goes in: what are the data? Are they reasonable? Are they selected in a fair way? Are they even capable of supporting the kind of claims that are going to be made? And then you look at what comes out, and you ask: is this reasonable? Is it the right order of magnitude? Is it fairly presented? So we really think we can teach people to call bullshit without having to unpack the big data algorithm, or whatever it is, at the center driving things through.
Jevin West: Yeah, just to give you a quick, easy example: when we talk to freshman students at a university, they hear averages all the time. And if they understand, and I know it sounds so basic, something you learn in middle school, the difference between mean, median, and mode, they'll get a sense of when people are manipulating these very simple metrics to tell the story that they want. If you ask a lot of people what the mean income in the United States is, that tells a much different story than something closer to the median. It's these kinds of things. We really want wide reach with this content at first, and then we'll probably splinter into more sophisticated material, but we can get a lot just by teaching people to do what Carl said: question it when it sounds too good to be true. That's probably one of the biggest ones. The other thing in spotting bullshit is just asking where the source is coming from: who is telling me this, why are they telling me this, and what do they have to gain from it? It's kind of a cynical view, but we need to be a little cynical in today's digital environments.
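The mean-versus-median point can be made concrete with a few lines of Python. The income figures below are invented purely for illustration; the shape of the distribution (many modest incomes, one very large one) is what matters:

```python
# Skewed "income" data: a single very high earner pulls the mean far above
# the median, so the two metrics tell very different stories.
incomes = [25_000, 30_000, 35_000, 40_000, 45_000, 50_000, 60_000, 1_500_000]

mean = sum(incomes) / len(incomes)

def median(xs):
    """Middle value of a sorted list (average of the two middle values if even)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

print(f"mean:   {mean:,.0f}")              # 223,125 -- inflated by the outlier
print(f"median: {median(incomes):,.0f}")   # 42,500 -- closer to a typical income
```

Quoting the mean here makes the population look five times richer than quoting the median, even though both are legitimate "averages"; which one a source picks can itself be a tell.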
Enrico Bertini: Yeah, I have to say, one can almost imagine this as a pipeline, and bullshit can happen at every stage of this pipeline.
Jevin West: That's right.
Enrico Bertini: Right, so one could almost draw this pipeline and figure out what kind of bullshit can be created at each stage.
Jevin West: Yeah, there is a food chain, and bullshit comes out.
Enrico Bertini: Another case that comes to my mind when I'm reading scientific articles is when the question seems reasonable, and the outcome seems to confirm the initial hypothesis, but the way it has been done is so unnecessarily complex that accepting it requires a huge leap of faith. Would you call this bullshit as well?
Carl Bergstrom: That's a hard question. It depends a lot on whether the complexity is there because I don't know the subject area, or because the person has actually done something kind of stupid. If I'm reading a quantum entanglement paper, it's going to be exactly like you described, but it's not going to be bullshit; it's going to be the fact that I don't understand quantum entanglement, despite my best effort.
Enrico Bertini: Yeah, yeah.
Carl Bergstrom: If I'm reading a paper in evolutionary theory and it's like that, then something's very wrong, and yes, there's probably some bullshit there. There's no reason we need to go through all of this messy apparatus to show that these data lead to this claim. And if someone is using all this messy apparatus, they've got to be hiding something, or they're really incompetent.
Enrico Bertini: More specifically, I was thinking about those cases where it's clearly overly complex, and it seems to hide something behind the complexity.
Carl Bergstrom: Right, I think there's a fair bit of that that goes on, and in the big data arena it seems to be particularly common now. We're really vulnerable with big data, because you take something like a deep learning algorithm, and even the authors don't understand what the hell it's actually doing. You give it this input, and then it does some stuff, and you don't know why it does that; no one knows why it does that. But then it does a thing on the output: it can, all of a sudden, take a photograph and paint these beautiful pictures in the style of different artists, or whatever, and nobody really understands how it's working. So I think we need to be particularly skeptical there, even of stuff we're doing ourselves, to make sure that it's actually doing what we think, and that we're not ascribing more power to it than it actually has.
Jevin West: Yeah. And I just want to build on what you're saying, Enrico, because you brought up the scholarly communication bit. Carl has been writing a lot recently about publication bias, along with other people in the literature right now. This is really important, because it's a form of bullshit going on in science: we tend to only publish things that cross this arbitrary value of 0.05, and if that's all we're seeing, it's a really clouded picture of reality. I think it's a dangerous rut that we're in right now in science. Not to say that science is broken; we're very careful when we talk about this, because we don't want the general public to think science is completely broken. This is something within science.
Enrico Bertini: And yet it's the only hope we have.
Jevin West: Yeah, that's right.
Enrico Bertini: That's the way I see it. We haven't invented any alternatives so far.
Moritz Stefaner: Like democracy, right? Pretty bad, but better than the alternatives.
Enrico Bertini: Exactly. Right.
Carl Bergstrom: Exactly like democracy: the worst system, except for all the alternatives. One of the things we're going to talk about a lot in the class is this issue of publication bias, and how bullshit comes, perhaps unintentionally, into science. That may be bullshit more broadly construed, rather than the narrow definition we were talking about before. But we teach people to be skeptical of things even in the published literature, even in a really good journal, just by understanding the fact that you can't publish something unless you get a positive result. So if a lot of people are working on something that isn't actually true, you'll still see false positives coming up in the literature, and you won't necessarily see the negatives, because those are very hard to get published. And that can give you a really, really confusing picture. There was a remarkable study, in the New England Journal of Medicine, I believe, done about ten years ago, where somebody at the FDA looked at all of the clinical trials of antidepressants. If you look at the published literature, you see that almost all of the clinical trials of antidepressants were successful. But if you look at what had actually been registered with the FDA, more than half of the trials had been unsuccessful, and some of the ones reported as successful were reported that way only by manipulating the outcomes being measured, and things like that. So if you had the whole picture that the FDA had, you'd get a very, very different story, at least half of these things aren't working, than the one in the published literature that scientists had access to. I think this kind of thing is quite common; it's just that usually we don't have that background information. Here the FDA knows, and we could look at that.
Usually, we don't know what people have sitting in their desk drawers. It's called the desk drawer effect. Right.
The Source of BS in Science AI generated chapter summary:
Teaching people how to be skeptical of things even that they see in the published literature. The source of BS that we have to be most concerned with is ourselves. One of the many reasons that Carl and I wanted to put together this course is to get better at calling BS on ourselves.
Moritz StefanerYeah, it's super interesting. I mean, you mentioned biases before. I was thinking that maybe we are bullshitting ourselves quite often as well. Like, we come into the world with preconceived notions, and we just collect more information that confirms our preconceived notions. And so we're in these bubbles, and we think everything's fine. So, basically, that's like self-bullshitting, right? Does that happen to you?
Jevin WestNeil Postman, the American sociologist, his third law of bullshit turns out to be that at any given time, the source of bullshit that we have to be most concerned with is ourselves.
Enrico BertiniAbsolutely.
Jevin WestI think it's spot on. I mean, it's one of the many reasons that Carl and I wanted to put together this course: we wanted to get better at calling bullshit on ourselves. There are so many times when we're working on some project or some paper, and we're human, so we want something to show up, to land in Nature. But we have to be careful of that. We've got to do a better job of calling bullshit on ourselves as much as on the things that we see in our digital environments.
Enrico BertiniSo there is also the famous sentence from Feynman, that you are the easiest person to fool, or something like that. Richard Feynman.
Carl BergstromYeah, no, I kind of recollect it. I don't know. I can't think of it verbatim, but, yeah, no, I think that's spot on.
The Biggest BS AI generated chapter summary:
"Once falsehoods get out there, it's really hard to clean up the system," he says. These kinds of examples are real. These are affecting people's lives. It's sort of life or death.
Moritz StefanerDo you have any striking examples of what's the biggest bullshit you've encountered? Or, like, the weirdest stories people made other people believe? Do you have any. Any good anecdotes?
Carl BergstromYeah, so, I mean, this is more of a US-centric example, and it's not within science, but it's a pretty sad story about just a ridiculous spreading of bullshit. You guys have probably heard of Pizzagate. This is the situation where there were some false news stories about a restaurant in Washington, DC, housing a child sex ring that was being run by Hillary Clinton. Of course, this was completely false, and even those that were promulgating it later on said, okay, it was false. But there was a young man from the Carolinas who came with a gun, guns ablazing, into this restaurant. Well, he found it wasn't true, and he, of course, felt bad. He was trying to do what was right; he was trying to save some kids. The worst part is that not too long ago, a couple of weeks after this had even been refuted by those that were pushing the falsehood, there were protests outside the White House with t-shirts saying, Pizzagate is real, this is a conspiracy to cover it up. People still believe this. So this goes along with Brandolini's bullshit asymmetry principle: once falsehoods get out there, it's really hard to clean up the system. Once falsehood flies, the truth comes limping after, sort of thing. So these kinds of examples are real. These are affecting people's lives; it's sort of life or death. We even see these saber-rattling tweets from the Pakistani defense minister, tweeting back to Israel, saying, you know, don't forget we have nukes, too, because they were responding to a fake news article. So national security is at stake here, as well as democracy and everything else. Some of these start out silly, but then they can really move up to some serious consequences.
So this is something we're concerned about.
Callousness in Science AI generated chapter summary:
An important part of the course is to teach students how to actually call bullshit. One of my favorite ways of calling bullshit is reductio ad absurdum. There has to be some sort of respect and civility in the room. The only thing you can do is dialogue.
Enrico BertiniSo another thing that we wanted to ask you. I think at the beginning, you mentioned that an important part of the course is to teach students how to actually call bullshit. So can you tell us more about what the strategies are there, how to do properly? I really like the fact that you are considering the ethics side of this problem, and I'm curious to hear, how do you do it properly? So teach us how to call bullshit in the right way.
Carl BergstromWell, I'll tell you a little bit about technique, and maybe Jevin can tell you a little bit about ethics if he wants to add anything. One of my favorite ways of calling bullshit, because it's effective both with popular audiences and with technical audiences, is reductio ad absurdum. The idea is, you say, okay, if I follow from the same premises you're using to get your result, then these ridiculous results also follow. One of my favorite examples: there was a paper published, I think kind of as a joke, but no one's admitted that it was a joke, in Nature a little over ten years ago, where a group of statisticians did linear regressions on the 100-meter dash times of men and women. The 100-meter dash times of women were getting better faster than the 100-meter dash times of men. And so the conclusion of the Nature paper was that, based on these linear regressions, women would start running faster than men around 2156. And a bunch of people wrote in to try to debunk this. There's a beautiful letter from a Texas high school statistics class that explains what's wrong with this. But my favorite response is the reductio, where somebody says, well, you know, that's really cool, it's really interesting to see that. But you've missed the really exciting thing, because your model predicts that there's going to be a really fantastic race around 2650, when people will actually record negative times in the 100 meters.
Jevin WestYeah, it's perfect. On the ethics side, I mean, we keep it pretty simple for the students in our class. We say that you can't have ad hominem attacks, so you don't attack the person, you attack the argument. There are a lot of other minor things, but that's one of the biggest ones. And also there has to be some sort of respect and civility in a room where there are going to be people with lots of different backgrounds and expertise, so there needs to be a respect for that kind of expertise. One of the things that we're, again, being very careful about, because we do attack things in science, is that we don't want students to have their grounding completely shaken, to feel that nothing out there is true, that it's all bullshit. So we're doing that balancing act. I think that goes along with the ethics of refutation, too: you have to have some grounding in what's true. And philosophers for centuries have been talking about this problem of when you can't reconcile on a third party. And right now, at least in the US, we can't really reconcile even on media anymore. The only thing you can do is dialogue. Basically, the ethical thing to do is to agree to talk, and to talk in a civil way. But the main one is not attacking character, and instead attacking the argument.
Carl BergstromAnd giving people the idea, look, if I call bullshit on you, Jevin, that's not a diss. I'm not saying that you're stupid or something like that, even though I may be. And then you have to interpret that as not being an insult, and you say, oh, he thinks that this argument doesn't hold. And we do this to each other all the time, of course, and so far so good. Jevin was referring to this issue of confusion, or just the bad state of affairs we have in the United States, where people can't even agree on the baseline for how to talk about things. And this, I think, has become a really enormous problem in the US. It's been described as epistemic tribalism, where people not only disagree on what to do given the facts, but they disagree on the facts, and they even disagree on the procedures by which we should find the facts.
Moritz StefanerTalk about facts.
Enrico BertiniYeah.
Moritz StefanerNo, and that's also the bringing-a-knife-to-a-gunfight problem, in a sense: a very capable bullshitter has a much bigger arsenal of things they can do than somebody who is sincere. Right? They can do maneuvers to always, you know, it's like a wet piece of soap, or like a fish; it always jumps away, whatever you do.
Jevin WestI was involved in the efforts to sort of refute intelligent design creationism in the US in the early two thousands.
Moritz StefanerThese are hard discussions. Right.
Jevin WestIt's really hard, very dangerous to debate these guys. You have a sort of world renowned evolutionary biologist on one side and some snake who's willing to lie through his teeth on the other side, and the snake wins every time.
Enrico BertiniIt's not that hard for a professional bullshitter to play with people's minds, right? If you know how to do it, it's not that hard. Right, right. So that's also an issue there.
Jevin WestRight?
Enrico BertiniYeah. I just wanted to ask you something related to calling bullshit again, to the ethics and how to do it properly. I think that's a very fine line, and it's very hard to walk. So one thing that I noticed in the past is that, for instance, one problem is whether you do it in public or in private. That's one issue. And another aspect is whether you give the person the benefit of the doubt. My rule of thumb is that I always give the benefit of the doubt, right? It's possible that the person had their reasons. I think that's a very important thing, even in those cases where it turns out to be pure bullshit. Right?
Callousness in Science AI generated chapter summary:
Carl: Give the person the benefit of the doubt and then move on from there. Kids are pretty good at cutting through our BS just by asking why. Creating an open way for people to talk and having enough respect to give somebody the chance to respond is remarkably important.
Moritz StefanerBut.
Enrico BertiniOkay, I just want to give you the benefit of the doubt, right? So tell me, what's your story? Why did you do that? Why do you say that? And I think that's very important, because we are always tempted to just say, come on, that's bullshit. It's so easy to say that, right? That's our gut reaction. And, yeah, I don't know.
Carl BergstromI agree 100%, Enrico. And I think we're going to bring you in as a guest speaker in our class when we get to this section, because I think you've really got it right. That's something we should tell the students: that's sort of rule number two, give the person the benefit of the doubt, and then move on from there. I think that's a great way to do it. And also, one thing we do in science, if we see something wrong, at least a lot of people do, is they'll email the author directly, so that if they then have a public conversation, at least the author isn't blindsided.
Jevin WestWe do this on the website, where we have a bunch of case studies. If we're talking about an academic paper, we always write the author and offer them space on our webpage. Before we post, we say, here's what we're going to put up, and you can have space to say anything you want. You can call us jerks; you can do anything you want in that space. And it also gives them a chance to say to us, hey, you've totally misunderstood what I did, go back and look carefully. We haven't had that happen yet. But just that notion of creating an open way for people to be able to talk, and having enough respect to give somebody the chance to respond, I think is remarkably important. Yeah.
Moritz StefanerOne good strategy: so, I have kids, Enrico, you have kids too, and Carl, I know you have kids too. Kids are pretty good at cutting through our bullshit just by asking why. And if you keep asking why long enough, the raw bullshit dissolves.
Carl BergstromI can tell you, my five-year-old, even just this week, was wondering about the tooth fairy. He wasn't so sure about it. He wanted to know, how is it that the tooth fairy could go to every house in one night and pay for all the teeth? And the Easter bunny he's been questioning, too. You're right, they do a good job of questioning those things.
Moritz StefanerThat's true. The tooth fairy is the first bullshit we encountered.
Enrico BertiniAnd Santa, of course. They go hand in hand.
Moritz StefanerThis is very dull start.
Calling Bull: The Blanket AI generated chapter summary:
We're trying to be as active as we can on our Twitter account and on the callingbull.org website. We really want to aim this at high schoolers. Will you also post the videos? There's a few video lectures already.
Enrico BertiniWell, I could go on forever if I wanted. It's such a fun topic, and so tricky, right? Because I think the real problem is that both on the side of the bullshitter and on the side of the person who has to call bullshit, you have to go against your natural inclination. It looks to me that our brains are not very good at doing both things: resisting bullshit and criticizing bullshit in a proper way. These are two hard things to do.
Carl BergstromThey definitely are. They definitely are. And, you know, you have an audience here, and if people have ideas and they want to share them, we're trying to be as active as we can on our Calling Bull Twitter account. They can go to our website, callingbullshit.org, or we have a sanitized version, too, if people don't like the use of bullshit; it's called callingbull.org. But what we're trying to do is just, yeah.
Jevin WestWrote a script to take out all the bad words.
Moritz StefanerWonderful.
Carl BergstromThat was fun. So, yeah, because we really want to aim this at high schoolers, and like you said, Moritz, even our kids are good at this. Maybe we should be teaching these kinds of skills, even just real basic digital skills, doing a little bit of filtering themselves. And I think they do; I think we should give them more credit than we do. So the idea is just trying to create a discussion around this. It's sort of one of our big missions, and anytime people have ideas, we'll pass other people's tools through the channel. So just, basically, let's keep the conversation going.
Moritz StefanerYeah, that's wonderful. Thanks for collecting all this material. The website already has really great articles, lots of links to all the philosophical papers. I was enjoying those quite a bit, so make sure to check those out. And will you also post the videos? I've seen there's a few video lectures already. Will you keep posting more?
Jevin WestYeah, we will. We're going to be posting the lectures as they occur. Right now we've done four lectures in the class. We've got the first one up, and the second one's coming up. As you know, there's a lot of editing and production and such to do, so we're lagging by a couple of weeks.
Carl BergstromAnd yeah, we'll be releasing week number two in a couple of days, and then week three will be after that. Yep. We'll be posting these. And the other thing is, we'll have a full version of the course in the fall, a three-credit version, with even more content. And we'll just keep pushing content based on ideas that other people have and things that we find in the literature.
Moritz StefanerYeah, it's wonderful. It's such a simple idea, but nobody has tackled it that head-on, I think, in teaching, in academia. I think it's a great initiative on your part.
Carl BergstromWell, thank you so much, guys. Thanks for having us. This has been fun. I'm glad that you sent this invitation, because it gave me a chance to look at the other people that you guys are interviewing on the show. You've got something good going yourselves. So thanks for having us.
Moritz StefanerThanks for coming.
Enrico BertiniThank you.
Moritz StefanerYeah, thanks a lot.
Jevin WestAlright, take care.
Moritz StefanerBye.
Enrico BertiniBye, guys.
Carl BergstromBye bye.
Enrico BertiniHey guys, thanks for listening to data stories again. Before you leave, here are a few ways you can support the show and get in touch with us.
Data Stories AI generated chapter summary:
We have a page on Patreon where you can contribute an amount of your true per episode. If you can spend a couple of minutes rating us on iTunes, that would be extremely helpful. We also want to give you some information on the many ways you can get news directly from us. And we do love to get in touch with our listeners.
Moritz StefanerFirst, we have a page on Patreon where you can contribute an amount of your choosing per episode. As you can imagine, we have some costs for running the show, and we would love to make it a community-driven project. You can find the page at patreon.com/datastories.
Enrico BertiniAnd if you can spend a couple of minutes rating us on iTunes, that would be extremely helpful for the show. Just search for us in the iTunes store or follow the link on our website.
Moritz StefanerAnd we also want to give you some information on the many ways you can get news directly from us. We're of course on Twitter at twitter.com/datastories, but we also have a Facebook page at facebook.com/datastoriespodcast, and we also have a newsletter. So if you want to get news directly into your inbox, go to our Data Stories homepage and look for the link that you find in the footer.
Enrico BertiniAnd finally, you can also chat directly with us and other listeners using Slack. Again, you can find a button to sign up at the bottom of our page. And we do love to get in touch with our listeners. So if you want to suggest a way to improve the show, or know amazing people you want us to invite, or projects you want us to talk about, let us know.
Moritz StefanerThat's all for now. See you next time, and thanks for listening to Data Stories. Data Stories is brought to you by Qlik. Are you missing out on meaningful relationships hidden in your data? Unlock the whole story with Qlik Sense through personalized visualizations and dynamic dashboards, which you can download for free at Qlik Datastories. That's Qlik Datastories.