Visualizing Uncertainty with Jessica Hullman and Matthew Kay
Enrico Bertini and Moritz Stefaner talk about data visualization, analysis, and the role that data plays in our lives. As usual, our podcast is listener supported, so there are no ads. If you find you enjoy the show or you're a frequent listener, please consider supporting us.
Jessica HullmanI heard Lyndon B. Johnson was once given something with an interval, some information, and he was like, oh, ranges are for cattle. Give me a number.
Enrico BertiniHi, everyone. Welcome to a new episode of data stories. My name is Enrico Bertini, and I am a professor at New York University, where I do research in data visualization.
Moritz StefanerYeah, and I'm Moritz Stefaner, and I'm an independent designer of data visualizations. And actually, I work as a self employed truth and beauty operator out of my office here at the countryside in the beautiful north of Germany.
Enrico BertiniYes. And on this podcast, we typically talk about data visualization, analysis, and generally the role that data plays in our lives. And usually we do that together with a guest we invite on the show.
Moritz StefanerYeah. But before we start, just a quick note. As usual, our podcast is listener supported, so there are no ads. We used to have ads, but now the community actually finances this podcast. So if you find you enjoy the show or you're a frequent listener, please consider supporting us. You can do this with either recurring payments on patreon.com/datastories, so you can set up a certain amount that you will give us every time we publish a new episode, or you can also send us one time donations on paypal.me/datastories. And we always love when a new email comes in with a donation from somebody around the world. It's just fantastic.
Enrico BertiniYeah, that's just perfect. And yeah, also thanks to everyone who is already participating and donating. That's great. Thanks so much. Before we start, I just wanted to briefly talk about the information is beautiful awards. I think that's been a lot of fun. So Moritz won another award, but he couldn't really attend. So me and Destry, our producer, went there because the event was organized in New York and that was a lot of fun. So first of all, congratulations, Moritz.
Information is Beautiful Awards AI generated chapter summary:
The Information is Beautiful Awards take place every year. This year, Moritz won in the Unusual category. Go to the website and check out all the shortlisted items. There's lots of great data visualization work there.
Moritz StefanerThanks. I was really happy to win. I just had one horse in the competition. This time it was my only project that would qualify, so I was.
Enrico BertiniSo they created a specific kind of award for you. Right?
Moritz StefanerThis year? The truth and beauty award, you mean? What's it called... no, it's called Unusual, which is, like, ambiguous. For me to get an award in the unusual category, it sounds like it could be a good thing or a bad thing, but I'll take it.
Enrico BertiniSo. Yeah, yeah, yeah.
Moritz StefanerBut anyways, the awards, I think it's been the 7th or 8th year, and I think every year the quality has been rising of what is being handed in. And the long list is amazing already. The shortlist is actually all great works like 50, 60 maybe 100 different projects. Definitely. Go to the website and check out all the shortlisted items, not just the winners, because there's lots of great data visualization work there. I'm really happy these awards exist, even if I don't win. Of course I'm even happier when I win. But just to document what's happening in the field, it's a really great resource. So we'll put a link into the show notes. And you looked great on stage, like accepting my prize. So I think you did a good job of representing me.
Enrico BertiniSo, yeah, that was a lot of fun. Okay, so let's start. Today we are talking about a very important general topic in visualization, which is uncertainty visualization. How do you visualize uncertainty? I know that a lot of people are interested in that and it's a very hard topic. And there are quite a few people out there who are trying to make some progress in this direction. And I think today we got some of the best people out there. We have not one, but two professors. We have Jessica Hullman and Matt Kay. Hi, Jessica and Matt.
In the Elevator: Uncertainty visualization AI generated chapter summary:
Today we are talking about a very important general topic in visualization, which is uncertainty visualization. We have Jessica Hullman and Matt Kay. This past fall, they started a lab together called the Midwest Uncertainty collective.
Jessica HullmanHey.
Matthew KayHi.
Enrico BertiniSo, as usual, we ask our guests to introduce themselves. So if you can briefly tell us about what is your background and interests, and then we can dive right in the topic of today.
Jessica HullmanSure. Yeah. So I'll start. I'm Jessica. I'm an assistant professor in computer science, but also in journalism at Northwestern University. Before this, I was at University of Washington in an information school, but I was kind of drawn to be both in cs and journalism. I care a lot about how data is presented in the public as well as in science. And so my research is pretty focused on information visualization. But in the last few years, I've really spent most of my time thinking about how we can create visualizations and visualization tools that better support statistical reasoning. So things like how do we visualize uncertainty, but also things like, how do we help people kind of become better at forming rational beliefs when they see data? So how are people sort of bringing expectations to data? Can we elicit those? Can we make them sort of better at reasoning? So one thing I wanted to mention is that this past fall, Matt and I started a lab together called the Midwest Uncertainty collective, or MU collective. So it's a cross institution lab between northwestern and Michigan with us and our students, and we do a lot of collaborative work on uncertainty, visualization and closely related topics.
Matthew KayI'm Matt Kay. So I'm an assistant professor in the School of information at the University of Michigan and I do work in human computer interaction and information visualization. And I actually kind of came to information visualization via human computer interaction. So I was doing work on personal health informatics. So trying to communicate things like sleep quality or possible factors that influence sleep quality to end users, and then realizing how difficult it was to do that. And that ended up involving a lot of things related to uncertainty and kind of then moved me in that direction. So now I do a fair amount of work in information visualization on communicating uncertainty and communicating statistical results in ways that people can understand. And now also kind of building tools to help people who are building models generate effective visualizations. And, yeah, so I did my PhD at the University of Washington, which is where I ended up doing some work with Jessica. And then now that I've moved here, we've started this cross institutional lab, as Jessica said, which has actually been a lot of fun putting together and seeing our students kind of also start to talk to each other and do things which I think is really exciting.
Enrico BertiniI like the fact that you called it a collective.
Jessica HullmanYeah, I think that was Matt's suggestion, actually.
Matthew KayYeah. I don't know where that word came from.
Enrico BertiniDoes it just sound. Sound good or it just sounds cool?
Jessica HullmanYeah, it makes us sound more like artists.
Matthew KayYeah.
Enrico BertiniWell done. It does. It does sound cool.
Moritz StefanerYeah, it's great. So the topic today, why we invited you, is, as you mentioned already, your main focus right now in research: visualizing uncertainty. So just to get started, what do we talk about when we talk about uncertainty visualization, Jessica?
Uncertainty in data visualization AI generated chapter summary:
What we're usually talking about is visualizing error that's been calculated relative to some model. But there's another less narrow framing, which is thinking about uncertainty in terms of probability. There's always this tension in visualization, we like our data to be crisp and factual.
Jessica HullmanSo I would say we talk about something pretty narrow when we talk about uncertainty visualization among visualization researchers. So what we're usually talking about is visualizing error that's been calculated relative to some model. You could call it risk. A lot of people think of it as risk, but basically, you're making some sort of estimate. You're taking the average of something. You're predicting maybe average unemployment for next year. And you know that there's error. Most commonly measurement error, sampling error. So random error. And so that form of error, random error, is usually what we're talking about. Sometimes, I mean, I've seen work in uncertainty visualization that looks a little bit at more systematic forms of error, like bias. But I would say for the most part, it's measurement error. It's random. So this is just kind of one definition of uncertainty or one form of uncertainty. There's a lot of work outside visualization, I would say, that talks about uncertainty in broader terms. There's things like ambiguity, there's things like missing data that sometimes come into what we talk about, but more often than not, I think we're using a pretty narrow notion of error. But maybe, I don't know if Matt has a different view. That's my sense.
Matthew KayYeah, I think that's true. And I think it's also in some ways kind of unfortunate, but.
Jessica HullmanYeah. Right.
Matthew KayI would also say there's another slightly less narrow but also slightly less common framing, which is thinking about uncertainty in terms of probability. Right. So.
Jessica HullmanYeah, that's what I meant by risk, basically.
Matthew KayRight, exactly. And I think that it still ends up being narrow because you miss a lot of the more qualitative forms of uncertainty, but you take error and think about it instead as what, say, the sampling distribution or a posterior distribution from a bayesian model looks like. But you're still essentially talking about the same idea, just with more fidelity.
Enrico BertiniBut isn't there a difference also between, say, the uncertainty in how something is measured, because the instrument doesn't really measure things precisely, and uncertainty coming from the fact that the reality itself is uncertain? I don't know if it makes a difference, but, yeah, I mean, I think.
Jessica HullmanOne form that we don't often think about, that me and Matt have been talking about with another collaborator, Lace Padilla, is something like: you have a model, and your model has a way of estimating measurement error, so random kind of error from sampling, but you know that your model doesn't even include all the relevant data. So maybe you just don't have all the factors you need, and you know it. And so I think of it sometimes as, like, small world uncertainty. So my model is like the small world of my data set and what I know about it, whereas often there's these sort of bigger world uncertainties where, you know, we know our model probably itself is missing things. So, yeah, that kind of thing we don't often talk about in visualization, which, I kind of agree with Matt, is kind of unfortunate.
Matthew KayYeah. It's this idea of, like, model uncertainty or model specification uncertainty. Right. Like, you don't know you have the right model.
Moritz StefanerI mean, there's always this tension in visualization, I think because data visualization, we like our data to be crisp and factual. Right. And often, like any method evaluation of what works well for visualizing a certain type of data assumes sort of. Yeah, the data is crisp and factual, and then we just vary the different ways we can display it. But now, if the data starts to be fuzzy to begin with. And so it makes everybody a bit nervous.
Jessica HullmanThen I think it makes sense. Yeah. The reason we probably focus on things like measurement error is it's very easy to think about visualization once you have that set of errors or whatever that you want to show. Once you have that distribution, it's just a data visualization problem on some level. And so I think that has made us focus on what's easy to visualize.
Moritz StefanerMaybe stepping back a bit, why should we visualize uncertainty at all? I mean, we could also say we just show the best estimate that we have and we show that in a nice and crisp way. Maybe put an asterisk there and say it's based on a model which might be true or not. Why is there value in showing the nature of the uncertainty itself?
Matthew KayI think that at the base, simple answer to that question is just because you can allow people to make better decisions. I mean, so Jessica and I have done this work where we were looking at visualizing uncertainty in transit prediction. So this isn't like a huge high risk context, but we can at least say, okay, so you have, say, predictions for when a bus is going to show up at a bus stop. Well, we've run experiments showing that given better uncertainty visualizations, you can actually lead people to make better decisions. Right. So you set up these sort of incentivized experiments where you give people payoffs for different outcomes and then you can demonstrate that they actually make choices that lead to more optimal decisions. So I think that's the simplest one. And I think that when you look at the proliferation of predictions and kind of machine learning kind of going out into the world, giving people most of the times just that best estimate, it has error associated with it. It has uncertainty associated with it. And we should be communicating that to people because it allows them to make better decisions.
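To make the incentive idea concrete, here is a minimal Python sketch (not from their study; the arrival distribution and payoff numbers are made up) of how a predictive distribution, rather than a single best estimate, feeds into a decision about when to leave for the bus:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical predictive distribution of when the bus arrives (minutes from now)
arrival = rng.lognormal(mean=2.3, sigma=0.25, size=10_000)

def expected_payoff(leave_after, walk=3, reward=10, wait_cost=0.2):
    """Made-up payoff: a reward if you catch the bus, minus a small cost per minute waited."""
    at_stop = leave_after + walk                    # when you reach the stop
    caught = arrival >= at_stop                     # you catch it only if it hasn't left yet
    waiting = np.clip(arrival - at_stop, 0, None)   # minutes spent standing around
    return np.mean(caught * (reward - wait_cost * waiting))

# the whole distribution, not just the mean arrival time, determines the best choice
for leave_after in range(0, 13, 2):
    print(leave_after, round(expected_payoff(leave_after), 2))
```

A point estimate alone can't tell you how quickly the risk of missing the bus grows as you delay; the spread of the distribution does.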
Jessica HullmanYeah, yeah, I would. Yeah, I agree with that. And just to add a few other cases, you know, besides, like when you have model predictions, helping people make these like everyday choices, like, you know, what route do I take? There's also a lot kind of at the sort of societal level where, you know, for policies to be created, they're often relying on, you know, different types of estimates that these government agencies are putting out. So, you know, should we do anything about gun control? Maybe there's analyses about that. But, you know, if we're not really showing uncertainty, which people have written about how even these government organizations putting out, like GDP estimates, et cetera, often don't show uncertainty, we're really, like, just as a society making bad decisions. I think some of the other compelling use cases include things like experimental science, which Matt and I have both thought about a lot. So, you know, if we're presenting results from experiments that we've run, there's a lot of uncertainty there. But if we don't really emphasize that, further or subsequent researchers might think, oh, I'm going to keep doing research on this thing, this weird treatment, because they found an effect, when in reality, a lot of, I think, what we're seeing with experimental results from certain fields not replicating is probably because we're not really emphasizing or visualizing the uncertainty in a way that people are getting it. And then there's other sorts of life and death situations, like, you know, weather. Like, one of our collaborators here, Lace Padilla, works a lot on things like disasters. So when people have to make these, like, really important life or death decisions, like, should I evacuate if a hurricane's coming?
Moritz StefanerSure.
Jessica HullmanYou know, like, there's all sorts of reasons why you need the uncertainty there, but I think. Yeah, there's also lots of rationales for why, like, Moritz was bringing up that we don't need it.
Moritz StefanerAnd I was playing devil's advocate.
Jessica HullmanNo, but I think that's the problem.
Enrico BertiniYeah.
Jessica HullmanMaybe we can talk about that later or something, because I have lots of thoughts on that.
Economists on Uncertainty AI generated chapter summary:
People tend to be persuaded quite a lot by numbers. And I think a lot of what is important about uncertainty visualization is to force people to confront uncertainty. It's cognitively complex to be trying to reason with two intervals rather than two points.
Enrico BertiniYeah, I was wondering. I think maybe another aspect here is the fact that I think we already know from some experimental evidence that people tend to be persuaded quite a lot by numbers. Right. So if you present something and you have some numbers attached, people tend to think that this is more, I don't know, credible. Right. But then we have also the problem that most of the communication that you get from the media is always without this uncertainty information. Right. There is always a median of something. Right. And then people start debating about it as if that's the ultimate truth. Right, right. And I think that's a much bigger problem.
Moritz StefanerThey say the unemployment rate goes up 0.2%, right. But you never get the actual error rate, which is probably 0.5 or something. And then it's a meaningless statement. Who knows what's going on, right?
Matthew KayAnd you get these. I mean, as humans, we're kind of like aggressive dichotomizers, right? We love to put things into buckets and say, this. This went up, or it didn't go up, or there is an effect here, and I can publish it or there isn't an effect here, and I can't publish it. And I think a lot of what I think is important about uncertainty visualization, is that if you do it well, you can try to prevent that dichotomania and force people to confront uncertainty. And I think that's a lot of what we try to do.
Jessica HullmanYeah, I have all sorts of thoughts about forcing people, because I think on some level, like people, it's just like, it's just really hard problem. It's cognitively complex to be trying to reason with two intervals rather than two points. It's just hard. I think I heard Lyndon B. Johnson was once given something with an interval, some information, and he was like, oh, ranges are for cattle. Give me a number. So I think that attitude is like, people just really don't want any sort of distribution because it's complicated. But so, yeah, Matt and I have done a lot of thinking about, you know, how do you kind of bring that information in in a way that they end up taking it into account?
Enrico BertiniYeah, yeah, yeah. Okay. But so let's say that we do want to visualize uncertainty, right? So now, how do you do that? I think there's a very large space of solutions out there, and it's not totally clear to me what the state of the art is either. So I'm wondering if you can walk us through what exists already, what the options are.
How to visualize uncertainty in science AI generated chapter summary:
But so let's say that we do want to visualize uncertainty, right? So now, how do you do that? I think there is a very large space of solutions out there, and it's not totally clear to me what the state of the art is either.
Jessica HullmanSure. Yeah, I guess I'll start and Matt can follow up. The way I think about it, one kind of distinction would be: you could show the entire distribution at once. So you have some sort of distribution showing or representing your error, and you could show that all at once in something like an interval, which is kind of a summary of the distribution, or you could show something like a histogram or a probability density function, and there are other sorts of static plots you could use. But then, on the other hand, you could show people a set of draws from that distribution that you're trying to show them. So those two big buckets of techniques, I think, are very different. Within the first bucket, you're showing people the entire distribution at once. There I mentioned things like intervals. Intervals themselves are complicated. Matt and I have talked a lot in our work about how things like confidence intervals are not usually interpreted correctly by people. So a confidence interval is describing the sampling distribution. It's trying to give you a sense of the error in whatever you're estimating, a mean or whatever. But the way in which it's constructed is just very difficult to describe. It's using kind of frequentist processes that make people want to think that the confidence interval is actually the interval that's going to contain the true population mean 95% of the time, when in reality, that's not the right interpretation.
Confidence intervals and epistemic uncertainty AI generated chapter summary:
Intervals are not usually interpreted correctly by people. It can be useful to distinguish between predictive (aleatory) uncertainty and error in an estimate. This is where you also start getting into sort of philosophical debates.
Moritz StefanerOh, that's what I thought.
Enrico BertiniDamn.
Jessica HullmanIt's not. It's actually that if you constructed intervals in that way over and over, 95% of those intervals would contain the true mean. So there's just weird misinterpretations. On the other hand, you could have intervals that just, you know, are kind of more like a coverage interval. So 95% of the data itself, the actual observations, will be in here. Yeah, Matt, go ahead.
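A quick way to see the interpretation Jessica is correcting is to simulate the procedure itself. This is an illustrative numpy sketch, not anything from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sd, n, reps = 10.0, 2.0, 30, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sd, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se  # one 95% CI
    covered += (lo <= true_mean <= hi)

# the "95%" is a property of the procedure: roughly 95% of intervals built this
# way contain the true mean; any single interval either contains it or doesn't
print(covered / reps)  # ~0.95
```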
Matthew KayI think this is actually, it's worth mentioning that this is a spot where we're talking about uncertainty that isn't necessarily error. Right. When you start talking about coverage intervals or predictive intervals. Right. So it can be useful to distinguish between maybe predictive uncertainty, or sometimes we make this distinction between aleatory and epistemic uncertainty. But I don't like terms like that because they're, I don't know. I think statisticians often use words that are more complex than they need to be. Right. So you might think of error as something like the uncertainty in a parameter or a mean or some statistic you're estimating.
Moritz StefanerIf I have, like, a temperature forecast for tomorrow, and we don't know if it's going to be 12, 13 or 14 degrees Celsius, what type of uncertainty would that be?
Matthew KaySo I would think about that as predictive uncertainty. One way of setting it up is you have a model that gives you a predicted distribution of possible temperatures, and then you construct an interval by taking, say, the central 95% of that predicted distribution. And I would call that a predictive interval. And so you'd call that aleatory uncertainty. Aleatory comes from something latin about rolling dice or gambling or something like that. So you're taking another draw from some predicted distribution, and the model will already.
Moritz StefanerGive you a full distribution, or maybe you let it run 1000 times, so you have a thousand results and so on, and. And then the challenge is more, how can I summarize or visually present now this distribution?
Matthew KayRight, exactly. And this is where you also start getting into sort of philosophical debates. You know, are you fitting a bayesian model? Are you fitting a frequentist model. Are you doing something else? I tend to mostly just get annoyed by those philosophical debates, but they exist.
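For the temperature example, a predictive interval of the kind Matt describes is just the central slice of the model's simulated outcomes. A minimal sketch, with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
# pretend these are 1,000 simulated temperatures for tomorrow from some model
draws = rng.normal(13.0, 1.5, 1000)

lo, hi = np.percentile(draws, [2.5, 97.5])  # central 95% of the predictive distribution
print(f"95% predictive interval: {lo:.1f} to {hi:.1f} degrees")
```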
Jessica HullmanYeah, I think that's one important point. Well, one thing that I feel or have seen people not do so well, we actually have a paper on it with confidence intervals in particular, is people don't really understand the relationship between your sample and then the distribution. So your sample distribution, like your actual measurements, and then the distribution of whatever you're estimating, say, a mean, which is much lower variance. So I think there's a lot of confusion even about what's the underlying distribution we're talking about, or the fact that we even have a model sometimes. I think people don't have a very clear sense of that. And intervals, I think part of the problem with them is they're used across the board for all these things, and they look the same in all of these cases, whether they're a 95% confidence interval, or whether they're just showing you a standard deviation, or whether they're showing you, like, a coverage interval, which is just the underlying data distribution, not a sampling distribution.
Confidence intervals and sampling distribution AI generated chapter summary:
People don't really understand the relationship between your sample and the distribution of whatever you're estimating. There's a lot of confusion even about what the underlying distribution is. People don't have a very clear sense of what intervals are showing.
Enrico BertiniYeah, sure. So what are other options?
Jessica HullmanRight. Yeah. So we can talk about, I guess, what we've been doing. So we've been kind of focusing on a couple different, I would say, outcome oriented or sample based techniques. So rather than showing you some representation of your distribution, you're going to draw samples from it and show those. So the technique that I've been working on for a while is hypothetical outcome plots, where we're basically going to take the distribution and draw samples from it and then visualize those samples one at a time as frames in an animation, so that you're basically seeing the uncertainty kind of play out. You're not able to just focus on the whole distribution at once. So it's kind of a way of, I think of it as more of a way of sort of forcing people to take into consideration the variance, because you're actually, if you're not going to actually add the mean as a mark to it, then all they can see is kind of the, you know, draws from the distribution one at a time. They have to kind of use the visual system to kind of try to estimate, like, what the full distribution looks like. And so I think we've been doing some research on these, and there's certain reasons why, once you move to this sampling paradigm, where you're showing samples drawn from a distribution one at a time, there are some benefits. Like if you're trying to show any sort of joint probabilities or multivariate probabilities, it becomes a lot clearer, or you're able to basically estimate those much better. So if you imagine the typical thing would be, you could show two bars in a bar chart, like maybe somebody did a scientific experiment, and they have two bars. One is the control group and one is the treatment group in their study, and they put error bars on both of them. So you can see the mean whatever blood pressure in the control, the mean blood pressure in the treatment, and the error in both of those estimates. It can be hard to answer certain questions about that data from a visualization that's showing you both distributions like that. So if I wanted to know, for instance, if I ran this study again, what's the probability that I would still see a treatment effect that's larger than the control? It's really hard to answer, basically, from a static depiction, because that data is just not expressed. So if you were to show someone the same bar chart, but where the bars are changing height, each frame in an animation, you can begin to estimate some of these multivariate probabilities. So, like the hypothetical outcome plot stuff, I think there's been a few cases in the media that maybe people would recognize. So things like the New York Times has done various types of election predictions using where they're showing you actual animated samples, draws from a distribution. I think the most notorious one, which is kind of a conversation topic in itself, is the New York Times needle that they used on 2016 presidential election night.
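A hypothetical outcome plot is simple to prototype: draw one sample per group per frame and animate the frames. This is only an illustrative matplotlib sketch with invented numbers, not the published implementation:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(2)
n_frames = 50
# hypothetical draws for the two group means (e.g. blood pressure), made up
control = rng.normal(120, 4, n_frames)
treatment = rng.normal(114, 4, n_frames)

fig, ax = plt.subplots()
bars = ax.bar(["control", "treatment"], [control[0], treatment[0]])
ax.set_ylim(100, 135)
ax.set_ylabel("blood pressure")

def update(i):
    # each frame is one hypothetical outcome: one draw from each distribution
    for bar, h in zip(bars, (control[i], treatment[i])):
        bar.set_height(h)
    return bars

anim = FuncAnimation(fig, update, frames=n_frames, interval=400)
plt.show()

# with the draws in hand, joint questions become counting frames:
print("treatment below control in", (treatment < control).mean(), "of frames")
```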
Noise and Uncertainty: Political Analysis AI generated chapter summary:
The technique that I've been working on for a while is hypothetical outcome plots. Instead of showing you some representation of your distribution, you're going to draw samples from it and show those. The most notorious one is the New York Times needle that they used on 2016 presidential election night. It forces people to confront uncertainty.
Moritz StefanerOr many people.
Jessica HullmanRight. Yeah, I know Matt has thoughts on this.
Matthew KaySo, I mean, I think that. So the thing about the needle is it forces people to confront uncertainty. And I think this actually goes back to that question of, why do we need to do uncertainty visualization in the first place. Well, a lot of people got anxious about the needle because they were forced to confront uncertainty in something that they actually cared about. And I think the thing about effective uncertainty visualization is when you do it well, you can make people anxious, but they should be anxious if it's something they care about.
Jessica HullmanExactly.
Moritz StefanerMaybe. Let's explain briefly what it did. So it was the prediction of who will win the election 2016.
Jessica HullmanIt was a little confusing, actually. Yeah. Do you want to explain it?
Matthew KayYeah. So, I mean, I ended up talking to Kevin Quealy when I was at OpenVis about how the thing worked, and also read a blog post that they wrote that I never saved the URL for, so I can never find it again. But basically, the way that the thing works, from my understanding, is this. It was a live prediction of the difference in the vote margin between Hillary and Trump as the returns were coming in. So as the returns are coming in, they're updating some model to predict this vote margin. The model generates a distribution describing their uncertainty in the margin. The margin is some number between negative 100 and 100, where zero means that they both get the same number of votes. What it's doing is every 30 seconds the model gets updated and it generates a new distribution describing the uncertainty. That is, I believe, just summarized as a mean and a standard error or standard deviation of a predictive distribution that's then being shoved into people's browsers, and then some JavaScript in the background is drawing samples from this distribution, which makes a needle kind of jitter on this scale, from Hillary wins to Trump wins. Right.
Moritz StefanerRight. But the needle was jumping much faster than the data was updating.
Matthew KayRight.
Moritz StefanerSo the needle did not reflect the data updates, but it reflected the uncertainty metaphor, more or less, or like a hypothetical outcome plot, basically.
Matthew KayRight. It was a hypothetical outcome plot, and it was being updated every 30 seconds and then also was jittering in between to demonstrate the uncertainty. Although, the interesting thing about the needle is it wasn't actually faithfully representing the uncertainty. It was not jittering enough. It was only jittering in the central 50% interval. It should have actually been jittering a whole lot more. Which is one of the things that I don't particularly like about it. I do really like the needle, but there are things I would have changed. One of them is that; the other one is exactly this other problem, which is something that we call a deterministic construal error, which is when people misinterpret something that's representing uncertainty as something that's representing something deterministic, in this case, thinking that the jitter was representing real time updates.
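Reading that description back as code: the client receives a mean and a spread and jitters by sampling. An illustrative sketch with made-up margin numbers, comparing the central-50% restriction Matt mentions with a jitter that uses the full distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
mean, sd = -1.5, 3.0                              # hypothetical predicted vote margin and spread
q25, q75 = mean - 0.674 * sd, mean + 0.674 * sd   # central 50% interval of a normal

draws = rng.normal(mean, sd, 5000)
restricted = draws[(draws >= q25) & (draws <= q75)]  # roughly what the needle's jitter covered
faithful = draws                                      # what a faithful jitter would cover

print("restricted jitter range:", restricted.min().round(1), "to", restricted.max().round(1))
print("faithful jitter range:  ", faithful.min().round(1), "to", faithful.max().round(1))
```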
Jessica HullmanRight. Yeah, that was a big part of it.
Moritz StefanerBut you could also read it as sort of a faulty device. So I was thinking about, like, it's a sensor that doesn't quite work, or it's a machine that's under pressure and lots of steam, you know. And so I think in a way it gave a good metaphor to, okay, it's a very, like, a vivid process that goes in all kinds of directions and we see, like, a shaky picture of that. So I think from that end, maybe it was a good metaphor overall.
Jessica HullmanI think it was a great metaphor. Yeah.
Matthew KayYeah. I think that's the metaphor they're trying for. But I think the problem is some people got that and some didn't.
Jessica HullmanRight?
Moritz StefanerRight. Yeah. As it is with metaphors, I think.
Jessica HullmanThat one is kind of perhaps a symptom or the way that was received is perhaps a symptom of the fact that even leading up to the election, we weren't seeing things like that. If we'd seen the dial all along and had been forced to think about the uncertainty in these predictions all along, maybe it wouldn't have been as big a deal. Pretty sure it wouldn't have been. But, you know, the fact that we rarely see uncertainty in a way that we have to take it into account, I think is also a big part of the problem. There's kind of a norm to not show it or to show it in a way that it's easy to just dichotomize. So for me, I think you would see these predictions leading up to the election. You could go to whatever it was, the upshot page, and it would say, you know, there's like a 75% chance that Clinton's gonna win or it was even higher than that for a while. But I think it's the very notion of probability is hard. So, you know, people just wanna round that up. You know, it was like, you know, looks like Hillary Clinton's gonna win because she has a reasonably higher than 50% chance.
Moritz StefanerAnd so I think maybe percent in people's mind.
Jessica HullmanExactly right. Yeah. And so I think that's why uncertainty visualization kind of needs to be a policy, but I think we're pretty far from that. So that people become better at just using it and more comfortable with it. So it's not like something like this gets rolled out and, like, half the people don't even seem to get that it's uncertainty, you know, like the deterministic construal error. I think that was what you were talking about. Right, Matt? They don't even. They think that's the actual latest prediction, like poll results are coming in every second, or every, you know, 500 milliseconds.
Post-processing techniques AI generated chapter summary:
You need to be careful to hit the right metaphor. We all agree uncertainty should be visualized, but it depends a bit on the task.
Matthew KayYeah. Or whatever. Right? Yeah.
Jessica HullmanRight.
Moritz StefanerBut let's talk more about techniques because I think we all agree, okay, it should be visualized, but we also agree, okay. It depends a bit on the task. You need to be careful to hit the right metaphor. So what else is out there? What other things can people do to show uncertainty or make it accessible?
Other Ways to Show Uncertainty AI generated chapter summary:
There are other discrete outcome techniques. Like an icon array, which is often used in medical risk communication. When you express things in this way, you can reason about conditional probability more easily. And I think just that re expression is one way of getting people to more easily understand uncertainty.
Enrico BertiniSo I think that.
Jessica HullmanYeah, Matt, maybe should talk about this, the counterpoint to hypothetical outcome plots. Like, if you don't want to use like, if you think animated uncertainty is too much, like people are going to freak out the way they did for the dial, then there are other discrete outcome techniques.
Matthew KayYeah. Or I mean, if you're in a situation where animation just isn't possible.
Jessica HullmanRight?
Matthew KaySo there's two that I think I would mention. One actually follows right from the election prediction examples. There was an interesting example of what we normally call something like an icon array, which is often used in medical risk communication. And the idea there is, you give a grid of, say, 1000 possible outcomes and you just color them or use little icons or something to indicate the different possible values. Actually, I'll use the election example. So you might show a grid of 1000 possible outcomes and color them one color, let's say blue if Hillary wins and red if Trump wins. Right? There was a nice example of this. There was a Washington Post article after the election showing probabilistic predictions from a couple of different poll aggregators, I think Huffington Post, the Upshot, and 538, where 538 had predicted maybe an 80% chance, or maybe it was a 75% chance, that Hillary was going to win, and the other ones were more certain, so it was maybe like 90 and 95%. Right? Well, if you look at that as a grid, they were actually doing it in a fancier version called a risk communication theater, so you look at it as like a seat map to a theater, where I color some of the seats one color if Hillary's gonna win, some of them another color if Trump's gonna win. And then I say, I've given you a random ticket to a seat in the theater. If you end up in a red seat, Trump wins the election. But when you look at that kind of visualization, the 85% is no longer easy to round up to 100%. It actually looks fairly clear that there's a decent chance Trump will win.
Moritz StefanerThat's interesting.
Matthew KayYeah. And I think just that re expression is one way of getting people to more easily understand uncertainty.
Jessica HullmanIt's weird. Yeah.
Moritz StefanerI have the same with false positives. So I always have this hard time to understand conditional probabilities unless I see it. It's hard.
Enrico BertiniYeah, same for me.
Matthew KayAnd that's where this idea comes from, really. This frequency framing or discrete outcome, or however you call it, there was this research on bayesian reasoning, so reasoning about conditional probability. And when you express things in this way, you can actually reason about conditional probability more easily. Right? So I give you an icon array of possible outcomes of this procedure you're going to undergo to treat some health problem you have. But then I can start doing things like coloring them differently, depending on if you get a complication or something. And then you can start reasoning about, well, how many of these dots where the procedure is successful still have some complication, which is a question of conditional probability that you could do with arithmetic, but is hard to do.
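The arithmetic Matt is pointing at becomes counting once probabilities are re-expressed as outcomes. A tiny sketch with invented counts:

```python
# frequency framing: 1,000 hypothetical outcomes instead of percentages (made-up numbers)
total = 1000
success = 800                   # cells where the procedure works
success_and_complication = 120  # of those, cells that also show a complication

# the conditional probability question is just counting cells in the icon array
p = success_and_complication / success
print(f"P(complication | success) = {success_and_complication}/{success} = {p:.2f}")
```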
Jessica HullmanYeah, I think also on the frequency framing, that's kind of the motivation behind the techniques we've been developing. Like hypothetical outcome plots and then some of the other discrete ones. This work in bayesian reasoning. But I would add that there's kind of another distinction between the animated versions where you're seeing one draw at a time and the ones that aggregate. So like an icon array is showing you everything at once. There's been work, I think it's. We still haven't really brought it into vis or talked about it explicitly. But there's work in JDM also talking about how there's a difference in experiencing distributional information yourself. Kind of actually experiencing it versus having it described to you. So people tend to make decisions a bit differently based on whether something like probabilities are described versus experienced. There's a few kind of these underlying differences with some of the outcome based stuff compared to the more traditional showing you a density plot, showing you error bars. So I think we're kind of still learning about them.
Moritz StefanerI'd like to move the discussion to more methods because you don't have terribly much time left as well, I think.
Quantile Dot Plot AI generated chapter summary:
A quantile dot plot is basically an icon array, but for a continuous variable. It's more perceptually effective than a density. So it's sort of a mixture of a countable, discrete representation and a distribution.
Matthew KayWell, I was going to mention one other static approach that we've developed: this idea of a quantile dot plot, which is basically an icon array, but for a continuous variable. So you're taking these possible outcomes and you're stacking them up on, say, a number line. And it allows you to do similar sorts of things as with an icon array, but you can estimate things like intervals kind of arbitrarily. And it's more perceptually effective than a density.
Moritz StefanerSo it's sort of a mixture of a countable, discrete representation and, like, a distribution.
Matthew KayYes, exactly.
Jessica HullmanExactly.
Moritz StefanerOh, best of both worlds, potentially.
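A rough way to build the quantile dot plot Matt describes: take a fixed number of quantiles of the predictive distribution and stack them as dots. This sketch uses an invented bus-arrival distribution and a plain matplotlib scatter, not any published implementation:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
draws = rng.lognormal(mean=2.3, sigma=0.3, size=10_000)  # e.g. minutes until the bus

k = 50
quantiles = np.quantile(draws, (np.arange(k) + 0.5) / k)  # 50 representative outcomes

# stack dots that fall into the same bin on top of each other
bins = np.linspace(quantiles.min(), quantiles.max(), 16)
idx = np.digitize(quantiles, bins)
fig, ax = plt.subplots()
for b in np.unique(idx):
    xs = quantiles[idx == b]
    ax.scatter(xs, np.arange(len(xs)) + 0.5, s=60)
ax.set_xlabel("minutes until the bus arrives")
ax.set_yticks([])
plt.show()

# because the dots are countable, tail questions become counting
print((quantiles < 8).sum(), "of 50 dots are below 8 minutes")
```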
Visualization with Value suppressing uncertainty palettes AI generated chapter summary:
There is some interesting work on what's called value-suppressing uncertainty palettes to show uncertainty. You need a principled way of deciding how much to suppress, and that's actually a hard problem. We've been discussing alternative techniques.
Jessica HullmanI think there's one other technique worth mentioning that, along the same lines, is trying to force people to take into account the uncertainty. There is some interesting work on what's called value-suppressing uncertainty palettes to show uncertainty. Matt might know the actual algorithm better than I do. I haven't read the paper in a while, but it's basically like you're trying to hide the colors. So if color is representing value in some heat map type style visualization, you're actually going to mix gray with the colors when they're more uncertain so that your eye actually has a harder time of reading the value back from the color.
Moritz StefanerThat's something I also did visualizing wind prediction data. And I had big debates with the scientists, because they had 51 different models and they were like, you cannot summarize them in a meaningful way, we have to show the full distribution. I was fighting for showing the best guess and then using opacity to make it harder to read the ones that are more uncertain. Because we wanted to show a map, right? And so there was a limit of how much you can do at once. Right. And it's maybe not really reflecting the full distributions that well, you know, as the other techniques, but just making the more uncertain stuff harder to read. It's a very basic technique, but also one, I think, that's kind of smart. I think in some cases it does exactly the right job.
Matthew KayRight, right, exactly. I think it can be good. The issue I had with value suppressing uncertainty palettes is I think that you need a principled way of deciding how much to suppress, and that's actually a hard problem. I think you can go to these kind of models of how people perceive probabilities. There's this one model called the linear in log odds model. I'm not going to try to describe it right now, but basically there's some work in cognitive science that you might be able to take and say, oh, here's a principled way we could adjust probabilities. It's not something that anyone has done yet. It's something I'm interested in doing, but we haven't gotten there yet.
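The gray-mixing idea Jessica describes can be sketched as a simple blend. The published value-suppressing uncertainty palettes discretize value and uncertainty into a small set of bins rather than blending continuously, and how much to suppress is exactly the open question Matt raises; this is illustrative only:

```python
import numpy as np

def suppress(rgb, uncertainty, gray=(0.5, 0.5, 0.5)):
    """Blend a value color toward gray as uncertainty grows (0 = certain, 1 = maximal)."""
    rgb, gray = np.asarray(rgb, float), np.asarray(gray, float)
    return (1 - uncertainty) * rgb + uncertainty * gray

blue = (0.12, 0.47, 0.71)
for u in (0.0, 0.5, 1.0):
    print(u, suppress(blue, u).round(2))
```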
Moritz StefanerYeah, but maybe these techniques also like using blur or using sketchy rendering, which some people do to make something more sketchy, that is, more uncertainty, maybe more helpful for just indicating, oh, there's more uncertainty here and less over there.
Jessica HullmanRight.
Moritz StefanerI think if you're really interested in the exact nature of the uncertainty, then.
Jessica HullmanYeah, I think the task there matters a lot. Like sometimes maybe all you wanted to say is like, these are kind of uncertain.
Moritz StefanerYeah. And sometimes that's totally fine.
Jessica HullmanBut I think we default in visualization to thinking that, or the techniques we've used for a long time really only support, that kind of task, where you can kind of tell what's more uncertain and less uncertain. But, like, when you're using these visual encodings that just aren't very effective, like, hard to read, you know, you're kind of throwing away information. Like, if you have the full distribution and you're showing it in a way that people can only pick out three different levels of probability, then, you know, like, it's, I don't know. To me, it goes against the whole reason for visualizing things. You want to actually allow people to draw inferences on that data that you have.
Enrico BertiniWe've been discussing alternative techniques, but ultimately, if you want to say that one technique is better than another in a certain context, you have to figure out how to evaluate them, which sounds really complicated to me. So I'm wondering, I know that you've been doing some work in this space as well. So how do you actually know that one technique is better than another, and in what type of context?
Is One Uncertainty Visualization Better Than Another? AI generated chapter summary:
How do you know which visualization techniques are better? The real hard problem or the reasons why evaluation is so difficult with uncertainty visualization is that people do want to avoid it. How do you actually set up experiments to test these things under realistic conditions?
Jessica HullmanI think, yeah, so we've been talking about this, and I mean, I think there's kind of a distinction between how do we currently say that we know which visualization techniques are better and how should we, like, what evidence should we be looking for? And so, I mean, for context, we did a study where we looked at like about 90 papers that had published uncertainty visualization techniques and then done some sort of study to figure out which ones worked better. And what we found was that the vast majority are either just asking people to basically read data back, so asking them to report some probabilities that they saw in the visualization and then scoring them on how accurate they are. How close are you to the true probability? So kind of like, how well do people read the data? And then there are also a number of others that were asking people basically for some sort of self reported kind of sense of how satisfied they were. So that might be like, how confident do you feel about your judgments looking at this visualization? Or, you know, how much do you like using this visualization? How helpful does this seem to you? So that was like, the vast majority of what we saw was doing that. And I think there's problems with both of those. So I think the real hard problem or the reasons why evaluation is so difficult with uncertainty visualization is that people do want to avoid it. So there's kind of an obvious moral reason to show uncertainty because it's giving people more information and they can make more informed decisions. But the fact that people try to just evade uncertainty so they try to round things up makes it, I think, really hard to know if uncertainty is helping unless you're actually looking at how they make some sort of action or how they make some sort of decision where there's actual kind of incentives. So a lot of, I think what we've seen in the study literature is just kind of looking at, well, when you ask people to read the data back, can they do it? And then do they like the feeling of this visualization? Like, does it. Does it seem helpful to them? And I think those are totally the wrong thing, and I guess not totally the wrong thing in that, like, uncertainty cognition is very complex. There are aspects of it that, of course, depend on whether you can read the probabilities. There are aspects of it that do depend on this kind of effective or emotional reaction you have. But I think the really core stuff that Matt and I have been working on, but still, we're still kind of early in it, too. It's just how do you actually set up experiments to test these things where you're looking at decision making under realistic conditions? So I think it's the uncertainty visualization evaluation is just fundamentally hard because uncertainty, comprehension, or cognition is fundamentally difficult even to understand, but also something that people avoid. So it's like, how do you tell when something's actually going to work in the real world, given that people might want to just look at the mean? And I feel like that's where we haven't evaluated much at all, so. I'm sure.
Moritz StefanerBut then it comes back to tasks, right? And I think that's a very interesting perspective, to not just say we need to visualize uncertainty, because the term is very broad, as we have seen. There are many types of uncertainty, and people will want to do different things with it. It's hard to figure out exactly what people want to do, but it will make it easier, maybe.
Jessica HullmanRight. Yeah.
Moritz StefanerEnd of the day.
Matthew KayYeah, yeah. I mean, I think all of those things, like the perceptual aspects, are necessary but not sufficient. Right, exactly. And the other way I usually try to put it is: just because you can demonstrate that someone can pull the probability out of a visualization doesn't mean that they know how to use it to make the right decision.
Moritz StefanerRight. Yeah.
Matthew KayAnd that's really the hard problem. And so, yeah, as Jessica said, we've been doing some incentivized decision-making experiments, and I think that's where a lot of our work is heading. And I hope that a lot of other people's will as well.
Jessica HullmanYeah, yeah.
Moritz StefanerIt's been a hot topic. I mean, we mentioned it in last year's review as one of the hot topics of 2018, and I have a feeling it's not going to go away in 2019.
Enrico BertiniYeah.
Jessica HullmanI hope not.
Matthew KayYeah.
Jessica HullmanYeah. I think sometimes it's hard to define the task. As a designer, you're creating a visualization, and it's hard to know or define what people should do if I show them the error information. And I think that's part of why we don't see people using uncertainty, or showing it, as much as we might like. It's just that it's rare that there's a very clear decision to be made from a visualization. I think there often is a decision, but it can be hard to define, and sometimes what people should do with uncertainty is not clear, and that's why you need these kinds of formal decision frameworks to think about it. So I think it matters, but it's hard to evaluate that way, at least in practice, when you're designing something and you don't have the ability to run a controlled experiment.
Feeling Certain About a Design Decision AI generated chapter summary:
I think there are some things that people shouldn't feel too certain about. Maybe they may feel certain just because the way this is represented doesn't take into account uncertainty. There's a weird relationship, I think, between what we like and what's good for us.
Enrico BertiniI'm now curious about another aspect: how certain people feel about something after seeing different representations. I guess that's subjective, but it's important. I think there are some things that people shouldn't feel too certain about.
Jessica HullmanRight.
Enrico BertiniAnd maybe they feel certain just because the way it's represented doesn't take uncertainty into account.
Matthew KayRight, right. This is sort of like, you might do something like elicit subjective confidence in the decision, but if you don't show them the uncertainty, they might be really confident in a decision that is actually a bad one.
Jessica HullmanRight. There's a weird relationship, I think, between what we like, or what makes us feel good, and what's good for us.
Matthew KayYes.
Enrico BertiniRight. In this domain, we want to believe some things. Right. And.
Jessica HullmanRight.
Enrico BertiniIt's hard.
Matthew KayThis is why I always get back to this idea of anxiety being proportional to uncertainty. Right? If it's something you care about and you're uncertain about it, sometimes you should feel bad.
Jessica HullmanRight. Which is weird for designers.
Enrico BertiniYeah.
Jessica HullmanI think that's why there's this fundamental tension.
No Textbook on Uncertainty Visualization AI generated chapter summary:
There's no established textbook, I would say, and the material is very scattered. So how do you actually learn all of these things? And, similarly, how do practitioners get a better sense of which tools they can use? Thanks so much for all these good tips and thanks for coming on the show.
Enrico BertiniI'm wondering if we can conclude by giving some practical advice to people who want to learn more about visualizing uncertainty. As far as I can tell, there's no established textbook, I would say, and the material is very scattered. So how do you actually learn all of these things? And then, similarly, for practitioners: how do they get a better sense of which tools and methods they can use? I think there's so much to do in this space.
Jessica HullmanYeah, that's a great question. I'm just trying to rack my brain. Yeah, yeah, go ahead, Matt. You can start.
Matthew KayYeah, I mean, one thing that has happened very recently: there's a visualization book by an author whose name I'm going to mispronounce, Claus Wilke.
Jessica HullmanYeah, that's what I was going to say.
Matthew KayYeah. So he wrote a chapter on uncertainty visualization in a visualization book that he's currently writing, and it talks about a lot of the more modern techniques that we've been discussing here.
Enrico BertiniOkay.
Matthew KayAnd I think that's probably one of the better references that exists. I had this ambitious plan of starting to write a book on uncertainty visualization, which is probably at some point still going to happen in my copious free time. But, you know.
Jessica HullmanActually, The Grammar of Graphics. I really liked the chapter on uncertainty in that book, which is kind of a classic visualization book. It's a little researchy or scholarly, but I feel like the treatment of uncertainty in it is very clear. On some level, just reviewing basic inferential stats is probably not what people want to do, but the notion of a sample and a population, how you estimate, and what sampling error is, I feel like maybe someone needs to create an accessible resource that's like three pages of what you should know.
Matthew KayThere's that Seeing Theory, the kind of animated stats textbook thing, which is pretty good if you want the frequentist perspective on it. I tend to end up working in a Bayesian mode, and if you want the Bayesian perspective, the first two chapters of Richard McElreath's Statistical Rethinking are another good resource, or he has some online lectures, because it's designed as a class. So that's another place to go for the basic ideas of probability.
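For readers who want the sample-versus-population idea mentioned a moment ago in runnable form, here is a minimal Python sketch with a made-up population: it shows how sampling error (the spread of sample means, i.e. the standard error) shrinks as the sample size grows, which is the uncertainty an error bar or interval is encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
# A hypothetical population we normally never get to see in full.
population = rng.normal(loc=50, scale=10, size=1_000_000)

for n in (10, 100, 1000):
    # Draw many samples of size n and look at how much their means vary.
    sample_means = [rng.choice(population, size=n).mean() for _ in range(2000)]
    observed_se = np.std(sample_means)
    theoretical_se = 10 / np.sqrt(n)   # sigma / sqrt(n)
    print(f"n={n:5d}  observed SE={observed_se:.2f}  theoretical SE={theoretical_se:.2f}")
```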
Jessica HullmanYeah. I was also going to say Howard Wainer has written a few books that are almost Tufte-esque. I don't know if he would appreciate that comparison, but they walk you through examples, and I think.
Enrico BertiniLots of case studies. Yeah.
Jessica HullmanMuch more accessible than some of the other stuff we're suggesting. Although the McElreath book, I think, is really fun to read.
Matthew KayYeah.
Jessica HullmanI like to think that you don't have to care about stats, but I don't know for sure.
Matthew KayYeah, I guess another thing I've been doing lately: I often end up answering uncertainty visualization questions on Twitter, and my approach now has been to compile the various things I've constructed into a repository on GitHub. So that's another possible resource. There are a lot of examples of HOPs and some quantile dot plot stuff on there, and a few other even weirder things. I did one on uncertainty in correlation heat maps, which uses a kind of dithering technique for uncertainty, which is kind of cool and interesting.
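As a rough illustration of one of the techniques just mentioned, here is a minimal Python sketch of a quantile dot plot; the normal distribution, its parameters, and the arrival-time framing are assumptions made up for the example, not code from the repository being discussed.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from collections import Counter

# Quantile dot plot: represent a predictive distribution as a fixed number of
# equally likely dots, so probabilities can be estimated by counting dots.
mu, sigma = 12.0, 3.0                        # hypothetical predicted arrival time (minutes)
n_dots = 20                                  # each dot stands for a 1/20 = 5% chance
probs = (np.arange(n_dots) + 0.5) / n_dots   # evenly spaced quantile levels
quantiles = stats.norm.ppf(probs, loc=mu, scale=sigma)

# Bin each quantile to the nearest minute and stack the dots within each bin.
bins = np.round(quantiles).astype(int)
heights = Counter()
fig, ax = plt.subplots()
for b in bins:
    heights[b] += 1
    ax.plot(b, heights[b], "o", markersize=12, color="steelblue")

ax.set_xlabel("Predicted arrival time (minutes)")
ax.set_yticks([])
ax.set_title("Quantile dot plot: each dot = 5% probability")
plt.show()
```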
Enrico BertiniOkay, so we're going to add all these links to our show notes, so make sure to go to the blog post and check them out. Thanks so much for all these good tips, and thanks for coming on the show. Again, we could go on forever, I guess; it's such a complex and interesting topic. Thanks so much, Jessica and Matt, for coming on and explaining some of it.
Jessica HullmanThank you.
Matthew KayThank you.
Jessica HullmanYeah, anytime. Welcome back.
Matthew KayYeah, yeah. Love to be back for sure.
Enrico BertiniYeah. Bye bye.
Moritz StefanerThank you. Bye bye.
Jessica HullmanBye bye.
How to Subscribe to Data Stories on iTunes AI generated chapter summary:
This show is now completely crowdfunded. You can support us by going on patreon. com Datastories. And if you can spend a couple of minutes rating us on iTunes, that would be extremely helpful for the show. Don't hesitate to get in touch with us.
Enrico BertiniHey, folks, thanks for listening to Data Stories again. Before you leave, a few last notes: this show is now completely crowdfunded, so you can support us by going on Patreon. That's patreon.com/datastories. And if you can spend a couple of minutes rating us on iTunes, that would be extremely helpful for the show.
Moritz StefanerAnd here's also some information on the many ways you can get news directly from us. We are, of course, on Twitter at twitter.com/datastories. We have a Facebook page at facebook.com/datastoriespodcast, all in one word. And we also have a Slack channel where you can chat with us directly. To sign up, you can go to our homepage, datastori.es, and there is a button at the bottom of the page.
Enrico BertiniAnd we also have an email newsletter. So if you want to get news directly into your inbox and be notified whenever we publish an episode, you can go to our homepage, datastori.es, and look for the link at the bottom, in the footer.
Moritz StefanerSo one last thing we want to tell you is that we love to get in touch with our listeners, especially if you want to suggest a way to improve the show or amazing people you want us to invite or even projects you want us to talk about.
Enrico BertiniYeah, absolutely. And don't hesitate to get in touch with us. It's always a great thing to hear from you. So see you next time, and thanks for listening to Data Stories.