Then Do Better


David Spiegelhalter: COVID statistics, thinking about risk in life and medicine | Podcast

David Spiegelhalter is an expert on statistics. He was the president of the Royal Statistical Society and is Chair of the Winton Centre for Risk and Evidence Communication. He is also a world champion in a version of pool called Loop, and hosts his own podcast, Risky Talk. David has a new book out (with Anthony Masters), COVID by Numbers (Amazon link), which is an excellent book on COVID statistics. This follows his previous bestseller, The Art of Statistics (Amazon link). His Twitter is here. We had a chat in mid December 2021.

David discusses what was most surprising and misunderstood about COVID statistics. He emphasises how numbers can be emotional and weaponised, and what we can do to protect ourselves.

We chat about the techniques for thinking about risk that we should teach children and use in everyday life: ideas such as baseline risk and absolute vs relative risk.

We think about unintended consequences, the agency challenges of regulators and how to think of a range, tolerance or gradation of risk.

David explains fat tails and extreme values, and suggests that, for instance, AI risk is an extreme existential risk but perhaps overrated.

I learn about the “Rose paradox” and “Cromwell’s law” in statistics. The Rose paradox suggests a policy might be useful at a general population level but not on an individual basis. For instance, government messages about drinking less and things like that can be rational at the population level, and yet it's also rational for individuals to take no notice of them.

Cromwell’s law implies many life events are neither 0% nor 100% certain, and you should take that into account in decision making - or, in plain English, you should always imagine there's something you haven't thought of.

We discuss the risks of alcohol and touch on air pollution and cholesterol (statin drugs), and how to think about medical statistics.

David explains the attraction and beauty of stained glass art.

David ends with life advice about enjoying life and taking (good, well-managed) risks in order to have a fulfilling life.

We/I mention previous podcasts with epidemiologist Meaghan Kall and philosopher Jonathan Wolff, and the work of Tim Harford, Nate Silver and Full Fact.

Transcript below; you can listen below or wherever you get podcasts. Video above or link here.


Transcript (only lightly edited, so expect typos etc.)

Ben (00:00): Hello, and welcome to Ben Yeoh Chats. If you're curious about the world, this show is for you. How should we think about risk in the world? On this episode, I speak to David. David is a statistician and former president of the Royal Statistical Society. We speak on COVID statistics, alcohol risk, and how we should think and communicate about risk. I also learn about the Rose paradox and Cromwell's law. If you enjoy the show, please like and subscribe as it helps others find the podcast. Thank you, be well. Hey, everyone, I'm super excited to speak to David. David has a new book out, 'COVID By Numbers', which is an excellent book on COVID statistics. This follows his previous bestseller, 'The Art of Statistics'. David is an expert on medical statistics, was the president of the Royal Statistical Society and is chair of the Winton Centre for Risk and Evidence Communication. He is also a world champion in a version of pool called Loop and hosts his own podcast, Risky Talk. David, welcome.


David (01:10): Oh, wow. I'm very pleased to be here and great to get all those good plugs.


Ben (01:15): Thank you. So when writing your book, 'COVID By Numbers', what did you find as perhaps the most surprising or most misunderstood of the numbers? For me in the UK at least, I think it was the data on what people would perhaps call the social determinants of health. This was some of the data you had on professions such as chefs and bus drivers, which I think came out at an age-standardised mortality of around 80 to 85 per 100,000, compared to an average closer to 30 to 35. And as I discussed on an earlier podcast with Meaghan Kall, an epidemiologist, the data around poverty and deprivation and how that was affecting the numbers actually surprised me. But I don't know what perhaps surprised you most or what you thought was most intriguing coming out of the book.


David (02:05): Okay. That's good. Now, first of all I should say I'm only a co-author with Anthony Masters. We've been working together for a year now writing a weekly article in the Observer, which has been an interesting discipline. We've done about 50 of them now. We're about to stop, which will be a great relief, but yeah, we get on very well because we're both statisticians. We tend to agree with each other very much about the stats and about what we find interesting and surprising, and we disagree sometimes about writing style. I am slightly more journalistic and he's more formal, but never mind, we manage to sort that out. But in terms of surprise, we make quite a thing about the surprising stuff. Okay, we'll come onto the social determinants in a minute, but one of the ones perhaps I shouldn't have been so surprised about was the fact that the pandemic has been a net life saver for young people: in 2020, 300 fewer 15 to 30 year olds died in England and Wales than we would've expected, and a hundred died from COVID. That means 400 fewer non-COVID deaths over the year.
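A quick sketch of the arithmetic here, using the numbers as quoted in the conversation:

```python
# Numbers as quoted in the conversation (England and Wales, 2020, ages 15-30).
total_change_vs_expected = -300   # 300 fewer deaths overall than expected
covid_deaths = 100                # deaths from COVID that did occur

# For the total to come in 300 below expectation despite 100 COVID deaths,
# non-COVID deaths must have fallen by 400.
non_covid_change = total_change_vs_expected - covid_deaths
print(non_covid_change)  # -400
```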


David (03:16): If you think about it… young people were locked up, they weren't driving and having accidents. They weren't going out getting drunk and getting into trouble. And so far fewer of them died. But the interesting thing was the asymmetry, because there were a hundred families over that year who were mourning the death of a young person from COVID, terribly tragic events, but there were 300 families who were not mourning the death of a young person, and they don't know who they are. It's this asymmetry between lives saved, which are statistical lives and nobody knows who they are, and lives lost, which are identifiable lives, and that means they have a very different emotional impact on people. And that comes into all sorts of things like new forms of motorways and so on. So that was interesting. The ones that get misunderstood, I think, it's the usual stuff about just the daily data that gets reported. It's great that we get that daily data. I'm clicking on the dashboard at four o'clock to see what happened, like hundreds of thousands of others, but it's not the best data, because it comes incredibly rapidly and it's only about what's been reported.


David (04:34): And so with cases, we shouldn't look at the number reported each day. We should look at the numbers by specimen date. To get a pretty good idea of what's going on, we should look at when the samples were actually taken. And even then we have to go back a few days to know when people were infected. So, we are recording this at a time when there were 90,000 cases reported yesterday. Well, in fact, if you look back at the data a few days ago, last Wednesday, there were more than a hundred thousand positive specimens taken. There was a huge threshold passed that the media completely missed, because they're only looking at the daily data that gets reported. And then of course, the number of deaths. Well, this is the number of deaths of people within 28 days of a positive COVID test, because they might not have died of COVID, COVID might be incidental, and all sorts of reasons. People might die of COVID after 28 days if they'd been in intensive care or whatever.


David (05:34): So it's not a great number. It's not a bad number. It's sort of useful, but we have to really wait a couple of weeks for the death certificate data from the Office for National Statistics. But worst of all about that 28 day number, it varies hugely depending on the day of the week. In the book, we illustrate that if you just look on Sundays, it's about half the number reported on Wednesdays. So, it's just ridiculous, and the Evening Standard has been [poor] three times. They've had a headline, "Cases Soar!", because of a change from Monday to Tuesday, that's the biggest jump usually, cases soar, and [that's] hopeless, just hopeless. This is after a year and a half. Don't you know this is not the number of people who died yesterday? The deaths could go back a long way, days and days, possibly weeks. And so, it just drives me mental. The health reporters are great, we can come on maybe to reporting later, but when it gets in the hands of general reporters, they are completely hopeless. The social determinants, it's fascinating, and in the book we cover occupation to some extent because it's a big interest. You mentioned coach drivers, very well known, security guards, chefs: public-facing roles, people working closely in teams, have been at much higher risk, it's well established. But also the interesting thing is the other factors that we know have been at risk, which is to do with deprivation, living in multiple occupancy housing, certain occupations and ethnicity.
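A minimal sketch of the two fixes David describes, using pandas on a hypothetical line list of positive tests: count by specimen date rather than report date, and smooth away the day-of-week reporting cycle with a seven-day average.

```python
import pandas as pd

# Hypothetical line list: one row per positive test.
tests = pd.DataFrame({
    "specimen_date": pd.to_datetime(["2021-12-14", "2021-12-15", "2021-12-15"]),
    "report_date":   pd.to_datetime(["2021-12-16", "2021-12-17", "2021-12-18"]),
})

# Counting by report date tracks lab and reporting throughput, not infections.
by_report = tests.groupby("report_date").size()

# Counting by specimen date is closer to when people actually tested positive
# (the most recent days are incomplete, so the last few points should be ignored).
by_specimen = tests.groupby("specimen_date").size()

# A centred seven-day rolling mean removes the weekday cycle David describes,
# where Sunday figures run at roughly half of Wednesday figures.
smoothed = (by_specimen.asfreq("D", fill_value=0)
                       .rolling(7, center=True).mean())
print(smoothed)
```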


David (07:21): Now, the real problem is all those are hugely correlated, so trying to separate them out -- oh, and of course medical conditions and obesity, they're all utterly tied up with each other. And so the Office for National Statistics and others have made a real effort to try to disentangle those, using statistical methods where you try to keep one thing fixed and then see what happens when another one varies. Those statistical regression methods are always inadequate, they never quite work, but they are helpful, because they do make you realise that if you allow for medical conditions and deprivation and multiple occupancy and so on, then almost all the effect of ethnicity is explained. Not quite all of it. Unfortunately there's no database that could put occupation in with that as well, so I don't think anyone's ever been able to look at all of those things together to try to tease them out, very difficult to do. Cause with vaccines, there's a big ethnicity effect happening at the moment, because there's a big difference in the take up of vaccines by ethnicity. So again, just saying, oh, well, certain communities have got higher death rates is actually not very helpful, cause it sort of suggests somehow there's some genetic effect, and I don't think there's any evidence for that. You need to try to drill down, work out just why, and that's what statistics is about: trying to give a suggestion of why, while being unbelievably cautious about saying why. We don't do causality very much at all. We don't like it.
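A minimal sketch of the kind of statistical adjustment David describes, with synthetic data and made-up variable names, using statsmodels: a raw group gap in mortality largely disappears once the confounder that drives it is included in the regression.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000

# Synthetic data: deprivation drives both group membership and the outcome,
# so the crude group difference is confounded.
deprivation = rng.normal(size=n)
group = (deprivation + rng.normal(size=n) > 0.5).astype(int)
p_death = 1 / (1 + np.exp(-(-4 + deprivation)))  # risk depends on deprivation only
df = pd.DataFrame({
    "died": rng.binomial(1, p_death),
    "group": group,
    "deprivation": deprivation,
})

crude = smf.logit("died ~ group", data=df).fit(disp=0)
adjusted = smf.logit("died ~ group + deprivation", data=df).fit(disp=0)

# The crude coefficient is large; the adjusted one shrinks towards zero,
# i.e. the group effect is 'explained' by deprivation, with the usual caveat
# that such adjustment is never complete.
print(crude.params["group"], adjusted.params["group"])
```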


Ben (09:09): Yeah. I mean, those confounding effects, like you say, deprivation, and then even just the country differences: like you were alluding to, countries define death very differently, so it's often not an apples to apples comparison, and all of these other confounders. I recommend your Twitter feed as well, and one element I picked up from that is how numbers and statistics often carry emotion, and how a lot of parties have essentially weaponised them in some way, either defensively or offensively, where seemingly big numbers and big letters are used to make a point. And we come across this idea that numbers are kind of neutral, but actually when humans use numbers, they're not neutral at all, and although the data might be true, there's a lot of emotional value and it can be very manipulative. Do you think this is a particular problem today, or are we just more acutely aware of it? And what perhaps should we do about it, to defend ourselves from these emotional numbers and get a clearer sense of what might be more true or not true?


David (10:18): This is such an interesting area, and I think it's something everyone, with COVID in particular, has become very aware of indeed. I grew up as a mathematician and then a statistician, and I kind of thought that numbers were cold, hard facts, statements about the world, and they were sort of sacrosanct things. I'm afraid I've had to completely change my view on this. Firstly, through trying to do communication: you realise, once you're in the media world, which I have been, how storytelling and emotional impact are so valuable, and how I use them all the time. I use humour, I use storytelling, I use my own self all the time to illustrate what I feel is important about statistics. And secondly, the fact that my work now involves working with psychologists and social scientists, who have taught me so much about this. So I always go back, like in my book, 'The Art of Statistics', plug, plug, which thank you very much, to this quote from Nate Silver, where he says, "The numbers do not speak for themselves, we imbue them with meaning." And I think that's such an important point. Well, it says what it says: the numbers don't speak for themselves. They don't give up their knowledge and their lessons automatically, there's no algorithm that will tell you what to learn from data. This is always mediated by human influence.


David (11:46): And so often, that mediation involves in a way the exploitation of those numbers to make an argument, and I feel I've got this after years of practice: I can make any number look big or small. I can make any number look frightening or reassuring. Just tell me what impression you want. There are many people with these skills, and it's extremely dangerous because of the way the numbers get used. I think I invented the term number theatre last May, which was an accusation about the way the numbers were being used in the government press briefings: lots of big numbers being thrown out regardless of their validity or their meaning or their context. And I thought, this is pure performance art. You're pulling a number out of thin air, and I actually accused the health secretary of that last week, when he pulled out this number of 200,000 [COVID] infections today to announce to parliament. Well, it's a nice big number. It could very well be true, but we don't know, he didn't give any justification for it, and the method wasn't published until days afterwards. So I think that's really not an acceptable way to communicate numbers to the public. And many people have picked up on this. Tim Harford, I'm a huge admirer of his. [He] has a set of questions to ask about numbers, and his first one is: how does the number make you feel? The very first question. Before you ask whether you believe it, how does it make you feel? And I thought that's such an interesting thing. And actually that's also "Full Fact", a fact-checking organisation. They also have a little three point check about numbers, and they always say: how does it make you feel?


David (13:33): So, in other words, by looking at your own feelings and your emotional responses, you can start understanding what the provider of that number is in fact trying to induce in their audience. What are they trying to do? They're like a writer in theatre, they're trying to produce an emotional impact, usually either fear or reassurance, and so we have to identify when that's being done. And you do that by looking at context, by asking: is this really a big number? By looking at trends and so on, and by trying to communicate those numbers in a balanced way. Now, I love numbers. They're so powerful and they're so useful, and I want them all the time, but I know that they've got to be interpreted with great care and great caution. So when someone is using a number as a rhetorical device to bash me over the head, to change my feelings about something, I'm deeply suspicious of that, deeply suspicious, and I think that needs to be countered, but it's very difficult to do. If you can quote some number with lots of zeros on it, everyone says, whoa, that's important. Well, maybe it isn't. So I think it's a continuing problem. I'm sure it has always existed in the past, but now, with social media and particularly with the pandemic, where there are so many numbers around, it's become extremely apparent, extremely important.


David (15:05): Well, to me it's just completely revitalised my interest in statistics, because it's turned from what used to be a sort of technical mathematical interest into much more about their social role, and people have been talking about this for ages. And I never want to suggest a real relativist position that, oh, well, there's no such thing as truth, that every number is just what people want to make of it. No, numbers do tell us something incredibly valuable, but let's be very careful about what somebody is trying to do with that number.


Ben (15:37): That's a very good tip, how numbers make you feel in terms of emotion, and I really like that phrase, the theatre of numbers, essentially using the numbers like a dramatic performance. And I see that all the time: use a decimal point, or use a really large number, or say something with millions and millions where actually the order of magnitude is billions, so it means nothing. And the thing you talk about, the big numbers on the side of a bus in big letters and things like that. The general public seems increasingly interested in risk and data visualisation and all of this. What do you think are the basic life skills in terms of thinking about risk that we should be teaching people, maybe particularly teaching our children, at the 10 year old or 12 year old type of level? Apart from buying your book, The Art of Statistics, which maybe is a little bit old for the younger children. I mean, the one I probably use the most is trying to transform risk into an absolute statistic and then use the baseline. So put it out of a hundred: for instance, 25 cancer patients out of a hundred are alive after six months on a drug, against 20 cancer patients out of a hundred who receive no drug. So you can compare absolute measures against one another. But what are the kinds of skills or thoughts that you would emphasise for everyday risk thinking, and maybe one which children can try and get a grasp of now?


David (17:03): Yeah, that one you mentioned is easily, by far, I think the most important one to grasp. Jenny Gage and I wrote a book with Cambridge University Press called 'Teaching Probability', which is almost completely in terms of what you've described, which is expected frequencies. If we did this a hundred times, what would we expect to happen? The nice thing about that is that it's so easy to visualise. You can draw it on a little grid of people, you get kids to do this and things like that. So rather than think in words like chance or probability, or even percentages, it's been shown in experiment after experiment that if you do think in terms of expected frequencies, then that really enhances understanding. And so that is the absolute first lesson, I think, for everybody of every age: what does it mean for a thousand people or 10,000 people? It's a bit tricky when you get very tiny risks, because you kind of have to visualise a million people or something like that. Like the risks of side effects from the vaccines, about one in 100,000 or whatever. Well, you can draw a hundred thousand dots with one little dot picked out, and actually I think it is quite useful to do, but that's quite tricky. And the other area where it gets interesting is things like climate change.
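A minimal sketch of the reframing described here, using Ben's illustrative cancer-drug numbers: re-express probabilities as expected frequencies out of a fixed denominator, and keep absolute and relative comparisons separate.

```python
def expected_frequency(prob: float, denominator: int = 100) -> float:
    """Re-express a probability as 'x out of `denominator` people'."""
    return prob * denominator

# Ben's illustrative example: survival at six months with and without a drug.
with_drug, without_drug = 0.25, 0.20

print(expected_frequency(with_drug))     # 25.0 out of 100 alive with the drug
print(expected_frequency(without_drug))  # 20.0 out of 100 alive without it

relative_risk = with_drug / without_drug   # 1.25, i.e. "25% more likely to survive"
absolute_gain = with_drug - without_drug   # 0.05, i.e. 5 extra survivors per 100
print(relative_risk, absolute_gain)
```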


David (18:20): Now, we haven't got a hundred thousand planets, in some of which this is going to happen and in some not. When we talk about probabilities of the future of the world, we're thinking about possible futures, all those different ways in which the future might work out; those little branches, those little tentacles that happen from our current state into the future. So, a multiverse idea, I love it. I love that. In the book, The Art of Statistics, I really think about this as a sort of metaphorical way to think about probability. It's not that we necessarily think this is happening, but it's extremely useful, I think, to think that the world may work out in lots of different ways: in how many of those am I going to live another 10 years? In how many of those is the planet going to warm up by two degrees, et cetera, et cetera. I think it's an extremely useful metaphor, which I've tried teaching kids in classrooms and things like that. I've got a nice exercise to do with kids, which I've done: okay, sit and close your eyes. It's four o'clock… What are you going to be doing in an hour's time? Think about that, or the possible things you might be doing. Okay. What are you going to be doing in 24 hours' time? Think about those, which ones are more likely, think about the possible scenarios. What are you going to be doing in a year's time? What are you going to be doing in 10 years' time? And then, just get this visualisation of these potential futures. It's scenario planning, but you also attach some idea of likelihood to them. I think it's a really good mental exercise, and one that I find quite challenging to do for myself. But when I do try it, it's always very valuable indeed.


Ben (19:57): Interesting, scenario planning, because that brings up two thoughts. One is thinking about climate. One is this idea of the risks of not doing things, which maybe we'll come to, which I think is really interesting, cause some of the climate argument is, well, if we don't do this, then this is going to happen. And the other is that people in science often fling around the words risk and also uncertainty, and when you ask people about them, they define them differently. Risk is often when you know the rules of the game, like you're playing [backgammon] and you've got dice and you kind of know what they would do. Whereas uncertainty is often where you don't quite know what the rules of the game are, what the distribution is. And climate is quite a good example, as in, we don't really know quite how to model the complexity of the earth. We have an idea, and our models seem to do reasonably well at a reasonable range of things in certain cases, but there's a lot of uncertainty about that.


Ben (20:55): So I guess I'm interested in asking how we should think about risk and uncertainty, and particularly uncertainty, cause a lot of our world is like that. I think one technique you talked about that's very interesting is scenario analysis, thinking about what might happen or what might not happen. But do you have any thoughts about how we should deal with uncertainty, and whether we should think more carefully about when something is a case where you can apply risk, a probability, a frequency, and feel fairly certain, versus when it's something so uncertain, particularly these small risks or uncertainties which are very hard to get your head around?


David (21:32): Yeah. I mean, I spent ages trying to think about definitions of risk and uncertainty. I've given up, really. I don't bother. I try not to use the word risk anymore for anything, cause people have got their own interpretation of it and immediately think of something, a bit like you said. I talk about risk and people say, oh, well, come and talk to us about pensions. I don't know anything about bloody pensions, but some people who work in pensions think that if you know about risk, you know about pensions. I know absolutely nothing, except for the fact I get a cheque every month, that's it. But the terms I quite like, cause risks don't necessarily have to be quantified either: broadly speaking, I like to think in terms of quantified risks, things you do feel happy putting numbers on; then situations where you might be uncertain about the numbers; and then you've got deeper uncertainty, when you can't even list the options, you can't even list the possibilities, you feel there are things that could occur that you'd never even thought of, and that's the unknown unknowns. So I prefer to think in terms of an almost gradual scale, from fully quantified risks, like flipping a coin, though even that assumes the coin's balanced, right through to areas where you actually haven't got a clue what's going on. You're almost blindfolded in a dark room. You just don't know what's there.


David (22:53): So that's what I regard as deeper uncertainties. So there's a continuum right across there, and I think it's unbelievably important to try to move things up towards the quantification stage where possible, except never believe you've done it. You've always got to allow the possibility that something might occur you never even thought of. There is Cromwell's law. Do you know about Cromwell's law?


Ben (23:16): No, tell me.


David (23:16): Cromwell's law is that you should always imagine there's something you haven't thought of, and it's from Oliver Cromwell. He was ready to fight [in] Scotland… the Church of Scotland was coming out against him, and he was trying to say, we don't need to battle, we don't need to fight, just don't… and he wrote a letter to the Church of Scotland saying, "I beseech you, in the bowels of Christ, think it possible you may be mistaken."


Ben (23:48): Very good.


David (23:49): Just have that little bit of humility and doubt that you actually might be completely wrong and something else might be right. Just always retain that. Anyway, they didn't admit it, because the Church of Scotland was notoriously stubborn, and of course he thrashed them. So, Cromwell's law… What's fascinating at the moment is what's going on with this Omicron variant, because we are exactly faced with that situation in the UK. There are all sorts of possible scenarios. Modellers put out different scenarios, and they've been criticised because they haven't put probabilities on those scenarios. In other words, they haven't explored the whole range of possibilities; the more optimistic ones they haven't been doing. And so this has led to a lot of criticism, and I think some doubts from the cabinet when they're making decisions about what should be done. So, very interestingly, this discussion about envisaging futures, can we list the possibilities, can we put probabilities on them, is absolutely at the root of government policy at the moment, affecting all our lives. It's riveting to watch, and of course we will find out a bit more later. And also, as you said, the range of options being considered does include, not exactly do nothing, but just stick with what they're doing at the moment, no extra measures, and that is in there as an option. But even within those policy decisions you might make, you then have to consider the things you're not in control of, and to explore that, I feel strongly, you should explore the whole range of potential plausible outcomes with some idea, very difficult to put a probability on them all, but some idea of what that range might be.
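In Bayesian statistics, Cromwell's law (also called Cromwell's rule) is usually glossed as: never give a hypothesis prior probability exactly 0 or 1, because Bayes' rule can then never move you, whatever the evidence. A minimal sketch:

```python
def posterior(prior: float, lik_h: float, lik_not_h: float) -> float:
    """Posterior P(H | data) from Bayes' rule for a binary hypothesis H."""
    numerator = prior * lik_h
    return numerator / (numerator + (1 - prior) * lik_not_h)

# Strong evidence: the data is 99 times more likely under H than under not-H.
print(posterior(0.0, 0.99, 0.01))    # 0.0 -- a zero prior can never be updated
print(posterior(0.01, 0.99, 0.01))   # 0.5 -- a tiny but nonzero prior can be
```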


David (25:43): So, it's a very delicate matter, but then of course you're left at the point in decision making where, if you can't put numbers on everything, which you never can, what's the appropriate strategy when you've got a lot of uncertainty and the possibility of really bad things happening? Do you take a precautionary principle? A lot of government strategy is based on looking at reasonable worst case scenarios. It's official, that's what you're supposed to do: plan for the worst and hope for the best. So in other words, the planning is based on pessimistic assumptions. Now, that sort of makes sense, that sort of weak precautionary principle: you don't have to prove something's dangerous before you take precautions against it. But just how pessimistic? Where should that point be? What are you planning for? Are you planning not for the expectation, but for, I could talk about this for ages, something that's 99.5% pessimistic, or 70% pessimistic? We're in this distribution of possibilities, and if you could imagine it, well, what are you planning for? So this sort of reasoning suddenly becomes public and becomes really controversial, that they are planning for pessimistic situations. That's what the modellers are doing, and that looks like it's actually being slightly rejected by the government for various reasons. It's riveting to watch this going on. Risk management in practice, it's really red in tooth and claw.


Ben (27:20): It's really interesting, because typically when I hear experts say, well, we are even more uncertain, they tend to go more cautiously: they say, well, our point estimate, our number here, we are not sure, so let's wait and see. But in this scenario, because you may have exponential growth, the waiting, the doing nothing, is also potentially costing you something. Whereas with the precautionary principle, the planet's quite a good example, because there is no planet B, as people say, and because you can be reasonably certain that human actions are causing this warming, and there's a decent chance that something really bad will happen, you should probably do something about it. But in the case of the pandemic, there are so many other complicated tradeoffs that are not as clear. Also, your Cromwell's law reminded me of something close to, or adjacent to, what I think Nassim Taleb has suggested with these black swan ideas: there's always something you need to be aware of that is not in your universe of calculations. And he often talks about, in everyday parlance, these fat tails, these small chances of extreme events which seem to happen a lot more than you'd think. He also suggests that a lot of the models we use don't work that well in the real world, which is just to say there's a lot higher uncertainty, and I think that's what you said, that you have to be a little bit more humble in the face of that. But with something like pandemic planning, where no decision actually is a decision, it just seems to be a very complicated trade off to think about.


David (29:11): Yeah. I mean, I think the fat tail business, that's standard in statistics: power tails. I was appalled when I looked at the apparent naivety of some of the financial models and this stuff, because if you're working in extreme value theory, you are used to very fat tails indeed. In other words, you're modelling events of a size you've never seen before, and that's what you should be doing with extremes, cause that's the whole point; they're extreme values, and the methods are all developed mathematically, and anybody working on catastrophes would be very familiar with all those ideas. So, it's not like it's some massive innovation, I don't think, but it is overlooked by some people, and that's really unfortunate. So there's the modelling point of view, and you can really have a go at that by having very, very big tails on your distributions. From the risk management point of view, I think again, lots of people have written about this stuff, but I really like this idea of the difference between robustness and resilience. A robust strategy is one that should work under a wide range of envisaged scenarios: you've thought of a lot of things, and you've made sure that what you're doing should hold up under those things. But a resilient strategy is one that should hold up even if things happen that you never thought of.
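A minimal sketch of the fat-tail point, using scipy: for the same threshold, a fat-tailed Student-t distribution puts vastly more probability on extreme events than a normal distribution does.

```python
from scipy import stats

threshold = 6.0  # a "six sigma" event, in standardised units

p_normal = stats.norm.sf(threshold)   # ~1e-9: essentially never happens
p_fat = stats.t(df=3).sf(threshold)   # ~5e-3: rare, but it will happen

# Under the fat-tailed model the extreme event is millions of times more
# likely, which is why thin-tailed financial models looked so naive.
print(p_normal, p_fat, p_fat / p_normal)
```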


David (30:32): The idea is that you know you can't list all the stuff that's going to hit you. And that means resilience is expensive. It's like supply chains: even if you haven't thought of exactly why something might be disrupted in a certain way, you make sure you've got resilience in your supply chain that could stand up to almost any [Inaudible:00:30:56]. Obviously, if an asteroid destroys the earth, that's not going to be much good, but never mind; for things that you might plausibly want to deal with, you should be resilient to a wide range of them, including those you haven't thought of. And I quite like that idea, which I think people can grasp: you can make a list, but you can always guarantee whatever happens won't be on the list.


Ben (31:29): There must be another law for that, yeah.


David (31:32): And that happened with this pandemic. The risk planning for the country, there's a very good national risk register. I've worked on some of that, the volcanic stuff. It's never publicised, it's never scrutinised, the media don't cover it, and the flu pandemic has always been in the top right hand corner of the risk matrix as the most likely harmful thing. We didn't have a flu pandemic. An emerging virus pandemic was way down and not considered a huge threat, and the crucial thing is that the uncertainty about that emerging virus was not fully taken into account. It was plugged into the middle of the matrix as something that might cost a few hundred lives, [Inaudible:00:32:20]. The planning seems to have been around one scenario. What? What? So, I think, again, it is these restrictions in the imagination about the potential possible futures that is one of the big issues there.


Ben (32:38): So, there's a group of thinkers often associated with effective altruism or longtermism, and they talk a lot about existential risks. These are small risks which could end really badly for humans: climate, I guess, nuclear, a lab-made pandemic and things like that. Everyone can agree that the risks are quite small; no one can really agree what the numbers are. And there's probably this category you talked about, the risks that we can't even envisage, which might impact us. Do you think we should maybe be spending less time thinking about these things, cause we can't do that much about them? Or should we be spending more time thinking about them, to try and develop technology to handle them, or, I guess for something like nuclear war, having so many safeguards that we don't really go anywhere near it? Is existential risk a category that you've come across and thought about much?


David (33:38): Yeah, I was on the advisory board for the Cambridge Centre for Existential Risk at one point, but I left that. I think they're doing great. Somebody should be doing this. I'm really glad the Cambridge centre exists; they're very good people. Somebody really should be looking at this stuff. What you do about it is a different matter. For example, gain of function: I remember years ago having talks with people really warning about the danger of gain-of-function experiments. Now, I'm not saying where this virus came from, but it is plausible that it escaped from a lab in China. I'm not saying it did or it didn't or anything like that, I don't know.


Ben (34:13): Plausible scenario.


David (34:14): I don't know but it is a plausible scenario and so what that suggests--


Ben (34:17): And it's certainly a plausible future scenario from any lab somewhere.


David (34:20): Exactly. So, I remember reading people's concerns about gain-of-function research and saying, ah, come on, we're all right, we can control this. So I think my alert levels there were completely wrong. I was wrong. And so I'm glad I'm not in charge of any of this stuff. But it's too easy, at the same time, to make a big fuss about stuff, because something can always happen. You can always imagine all sorts of horrible things that might happen. So, it's assessing the level of plausibility, I think, which is a matter of judgment, because by definition you're talking about things that have never happened, and so the statistical models can only go so far. So, I think somebody should be looking at this, but they need to be very cautious about what they build up as being a real risk or not. I think a lot of the claims about AI are a bit overblown, to be honest.


Ben (35:23): Sure. That makes a lot of sense. I mean, just to circle back to what we've been discussing a little bit, this risk of not doing anything, because you can choose so many things to do and you've got resources for not everything. I think about this a little bit in regulatory terms. In fact, we're facing this situation now: there's an antiviral drug which is sitting with the regulators. I think the US regulator is looking at it, and if you ask most observers, they are 99% sure that this drug is going to pass through. And some people who are running some of the stats on it say, well, assuming this is true and there's infinite supply, which there isn't, every day you delay you're costing statistical lives from that delay. So the risk of not doing something is costing you. And they make the further point that the regulators don't get that much reputation or incentive from quick approvals, because those lives lost from not having the drug are not easily countable. It's a classic kind of regulatory or split incentive problem. Do you think we should be paying more attention to these risks of not doing something, or is it just a little bit too complicated given the resources that we have and where we have them?


David (36:41): Well, again, this is risk management, a topic which I am desperate to steer away from, but everyone always asks about it, so I suppose I've got to talk about it. It's very important when it comes to regulation, because the inertia of regulation can be a real dead hand on innovation, which can be costly. I mean, we've been through this before. We went through this with AIDS in the 1980s and the early 1990s, and the accusation was that the FDA was far too slow in approving AIDS treatments. And there was big activism, real serious activism, but also an absolute commitment of the AIDS community to engage with that process, and they changed the whole way in which drugs were approved, with experimental release and so on. So they had a big impact, because they felt it was so important not to have such a huge, inert system. We've seen with the vaccine just how fast things can be done in terms of approvals, far quicker than would normally be done. So, what all that has shown is that when there is genuine public interest and concern, things can change; the bureaucracies and inertia can alter. At the same time, it's absolutely vital that you have these things. When you think of the absolute crap that's been suggested as wonderful drugs for COVID, whether it's hydroxychloroquine or [ivermectin], all this sort of stuff, someone's got to look at this stuff quite seriously, both in terms of safety and also whether you want to pay for it or not.


Ben (38:27): Sure. No, that makes a lot of sense. So I mean, there is a dead hand, but there is definitely a tradeoff and I guess that's something for our politicians or our social scientists to look at.


David (38:38): Yeah. But that's why I want to come back-- It shouldn't just be left up to them. This is something where I think public interest and pressure if necessary from certain groups, well, there'll be pressure from all sorts of groups. But I think that it should be a matter for public discourse. It should not be just a matter for experts, government and bureaucrats. It's quite reasonable for regulation to be a matter of public discourse. That means it would be more contested because there will be different sides and different arguments. Well, that's the way it is.


Ben (39:12): Yeah. Well, and your analogy of HIV is actually quite a good one there. So there's a couple of medical statistics I've picked up which I'd be interested in your opinion of. One is on alcohol. The World Health Organization (WHO) writes that alcohol consumption contributes to 3 million deaths globally, and that harmful use is responsible for 5% of the global burden of disease. But they're quite careful, if you read their press release, to use the word contribute, because in the footnotes they say, well, it has complex associations with conditions like cancer, it probably makes things like cancer worse, but it might have some cardioprotective effect, and it depends on excessive use. And then I've looked at some plots of actual data, of small data sets on populations.


Ben (40:08): So again, a lot of this is modelled and doesn't use actual data, but when I look at some actual data, it seems to me that there is a smaller percentage, maybe five in a hundred up to maybe 25 in a hundred people, who have much higher alcohol use, and it's this population which seems to contribute a lot of the negative statistics that we see, for instance on hospital admissions relating to alcohol. But that means the vast majority on the other side, anywhere between 75 and 95 out of a hundred people, are really seeing limited to no effects from their alcohol use at all. And I've been flip flopping around that, and you get papers written by one type of funder or another, with quite biased use of these emotional numbers. And then there's the difference between the personal risk versus the statistical life risk, which you've mentioned. So I would be interested in how you think we should think about alcohol statistics and risk overall.


David (41:09): Oh, I got waves of nostalgia, because before this damn virus thing came along, I spent a lot of time arguing about the effects of low dose alcohol and criticising so much of the studies and the communication that was done around it, cause it really annoyed me. Oh well, let's get back to it. What fun. So the first thing is the population level stuff that WHO is saying about contributing, and you've got to be careful with your words, because, for example, if you look at air pollution, I think one person in the UK has ever had air pollution on their death certificate. Nobody else has it on their death certificate, and yet the estimates suggest it leads to tens of thousands of excess deaths a year.


Ben (42:00): Actually, that was my second question. So, I'll give you the stats: the WHO claim 3 million deaths are attributed to outdoor air pollution, and they use the word contribution again. That's 9%, almost one in 10 deaths, which seems an extraordinarily high number, but must be due to the stats--


David (42:18): So, that's all based on modelling. What happens is that a decision is made from the epidemiological work that 2% or something of asthma deaths are due to air pollution, and then you look at asthma deaths and 2% of them count towards that total. So, it's all based on modelling, not actual counting. And it's mainly the same with alcohol as well. You get a certain number of deaths where alcohol is actually on the death certificate as the cause of death, but not that many; but alcohol does raise the risk, I think, of breast cancer and other things. So, it does contribute to a lot of ill health. Now, the crucial thing, I suppose, first of all, is the curve… In other words, there is a benefit of a low dose in terms of all-cause mortality, or cardiovascular, I think. People can argue till the cows come home on that, and actually it's unbelievably difficult to tell, because of the comparison you're trying to make. What's the difference between people who drink a little versus people who don't drink anything? But the real problem is that people who don't drink anything have got really bad health. They're really unhealthy people on the whole, people who don't drink. If you look just at the risk of [death], they've got a risk comparable with fairly heavy drinkers, because there's a reason people don't drink. Even if you get rid of the ex-drinkers, the people who never drank, it's often because of various reasons.


David (43:53): I mean, one quite plausible possibility is that because of their constitution, they just don't like alcohol. They can't take it. So they're actually not very robust people. There's all sorts of reasons. But for whatever reason, non-drinkers have high mortality, no doubt about that at all, which I think is interesting; nobody ever seems interested in that. So, because you can't use non-drinkers, you're trying to distinguish the difference between the effect on people who drink almost nothing and people who drink very little. You can't do it, you can't do it. It's hopeless, absolutely hopeless. So we don't know. Whatever it is, small amounts of alcohol do not make a big difference either way, really. And people enjoy drinking. Well, that's fine. So I think there's been far too much bickering about this, and what's important is that the curve starts going up pretty soon. Once you start slugging it down, the curve goes up quite steeply as to harm. And that's what causes the harm, the harm at the individual level. The problem is you've got this-- Do you know the Rose paradox, after Geoffrey Rose?


Ben (45:00): No. Tell me.


David (45:01): It's about the fact that if you want to make a big difference to public health, because people drink too much or don't exercise enough or are overweight or eat too much fat, whatever, then the normal way you'd think of doing it is, oh, let's find the really high-risk people and try to get them down. But actually, if you want to maximise the overall benefit, what you should do is move the whole curve to the left. Even the people who are moderate or low, just get everyone to reduce their consumption by a certain percentage; everyone moves. And that has the biggest impact in terms of overall public health. But in terms of individual health, it means you're aiming the health message at people who are unlikely to be harmed anyway. They won't notice any effect of their change of behaviour. All these low-risk people are being moved into a slightly lower-risk category, but because there are so many of them, that move can actually produce a bigger public health impact, even though at the individual level it's completely negligible. So you have this point that government messages about drinking less and things like that can be rational at the population level, and yet it's also rational for individuals to take no notice of them.


Ben (46:19): Okay. So that--


David (46:19): Unless they're heavy drinkers.


Ben (46:21): Okay. I hadn't heard that articulation before, but it makes a lot of sense and that's why…


Ben (46:26): Yeah. The same one, the cholesterol and the statins. That same idea.


David (46:30): Yeah, yeah, yeah.


Ben (46:30): Leave the curve whereas--


David (46:31): Everybody comes down. I mean, that's why somebody would say everyone over 60 or so should just take a statin. Statins just move you down. Most people won't notice. I pop my statin every morning. I have absolutely no idea, and I never will, whether it's done me any good whatsoever, but it has lowered my cholesterol, and so, I reckon, it has reduced my estimated risk of a heart attack or stroke over the next 10 years. But I've got no idea whether that actually happens for me. Absolutely not. But at the population level, it's very effective.
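A minimal sketch of the Rose paradox arithmetic, with made-up numbers in the spirit of the statin example: a small shift applied to everyone prevents many events at population scale while remaining imperceptible to any individual.

```python
population = 10_000_000     # hypothetical adults offered a statin-like intervention
baseline_risk = 0.10        # illustrative 10-year risk of heart attack or stroke
relative_reduction = 0.25   # illustrative effect of the intervention

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk      # 0.025 per person

events_prevented = population * absolute_reduction     # 250,000 across the population
number_needed_to_treat = 1 / absolute_reduction        # 40 treated per event avoided

# Rational at the population level; yet any one person taking the pill
# will almost certainly never know whether it did them any good.
print(events_prevented, number_needed_to_treat)
```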


Ben (47:04): Rose paradox. Very good.


David (47:06): Rose paradox.


Ben (47:07): Yeah. So those air pollution stats; do you think they're fair, or fair from the model but a little bit misleading? When I think about it, and I was looking at the so-called bills of mortality, which were written in the 1600s and 1700s, what came through was that there were some things on there which seemed to me like true causes but which never get put on a death certificate. One of the classic ones is grief. I know a lot of people who essentially probably died of grief, but obviously at the end it was a heart attack or whatever it was. Or, going back to our very first question, a lot of people underlying really died of deprivation, they died of poverty, but it was something else that was recorded as the causal part. And air pollution seems to go maybe the other way, although again, it's not on the death certificate. So some things that were the real underlying cause never get recorded, and some things go the other way. So I guess that's two questions. One is, are those air pollution stats fair? And second, do we sometimes really miss the underlying cause, like something like grief, because that's hardly ever going to be put on a death certificate?


David (48:19): Yeah. I mean, it's loads of things. Poverty does not get put on the death certificate, and yet we know at a population level, and an individual level, that raising people out of poverty improves their health. That's the sort of standard thing, and air pollution is one of many such things, because death certificates are to do with immediate causes. The doctor has to sign what is the cause of this death. There's a bit of a chain there, but you have to have an underlying cause, and often smoking won't go on the certificate. So, those deeper underlying causes have to be inferred at the level of populations using epidemiological methods, but that makes them no less real, I think, in terms of what's going on. It just means you can't count them, which for a statistician is rather frustrating. In the end, you have to model these things, which makes them a bit easier to contest, to be honest.


Ben (49:22): So do you think the order of magnitude for air pollution is roughly right then?


David (49:27): Well, what does that mean? I think it is. The figures don't look implausible. In other words, if you [think about the scenario] where we all live in wonderful clean air, then you would have fewer deaths happening each year. I don't think that's implausible, because of the accentuation of other underlying conditions that you might have. But of course you can't test it experimentally, particularly since it's all based on models. It's never testable, essentially, except in a way when suddenly you get a great big bout of air pollution, like we used to get smogs in London, and still do in places like Delhi when they're burning the crops outside, and suddenly people are really sick.


Ben (50:17): But that would suggest that maybe governments should potentially be doing a little bit more about something like air pollution, even though they'll never be able to quantify it as easily as, well, these are the actual number of lives saved. It's one of those unquantifiable but big areas that we could potentially improve.


David (50:37): Yeah. And a lot of people are putting pressure on for things like that, quite reasonably. But very often, as you said, all these things go together within a cycle of deprivation and poverty and, say, rather more nebulous things. You mentioned grief, but also lack of ambition, a feeling of, well, I suppose deprivation, but in a broader sense than just not having much disposable income. It's a much bigger idea than that. Those are very nebulous ideas, and we could look at them statistically, but actually we know personally, we feel, how important those issues are: someone's feelings about their environment and their feelings about their potential, in fact the wellbeing in their life, are enormously important, let alone for their mental health of course, but also for their physical health. So I think that is pretty well appreciated now. Now, what you do about it is a different matter.


Ben (51:49): Okay, one more on the stats. I'm interested in your thinking on expected value, and in particular on the concept of transforming the value of a statistical life into a dollar value. We have this in healthcare quite a lot, where you think about disability-adjusted life years, or the cost of a quality-adjusted life year, the nickname is QALY. How useful a statistical concept is it in terms of valuing life? Should we be doing more of it or less of it, and how do we best use this technique? Cause a lot of people kind of think, oh wow, a statistical life is going to be valued at whatever it is, a million pounds or 50,000. But I was talking to a public philosopher the other day, Jonathan Wolff, Jo Wolff, who made the point that if you don't do these types of calculations, then by default you are accepting something, but not knowing the inputs or choices that you are using. So I'm just interested in where you think expected value sits as a concept and how much value it has in statistical thinking.


David (52:56): Yeah, I'm glad you talked to Jonathan Wolff. He's excellent. The stuff he did on rail safety I used as a constant source, and the idea of shame and things like that was so important to me. I've learned so much from what I would call jobbing philosophers like Jonathan Wolff, who roll their sleeves up and say, well, okay, how am I going to use my philosophy on this practical problem? And the other one is Onora O'Neill (philosopher, especially on Kant), who has had a huge impact on statistical communication. By the way, I'm just expressing my enormous respect for roll-your-sleeves-up philosophers. I love them. But where was I? Oh yes, putting a value on a human life. I think it's a brilliant idea, provided that you treat it appropriately. It's done in two main areas. In transport, the value of a statistical life, VOSL, used to be about 1.6 million in the Department for Transport; it's probably gone up a bit now, and that's used broadly to decide about road improvements and things like that. And in health it's used all the time by organisations like NICE, who value a quality-adjusted life year at between 20 and 30,000 pounds. It sounds like it should have gone up a bit recently; it's been stuck at that for some time. It's used as a broad benchmark to decide which treatments are going to be paid for within the National Health Service, and when they started doing that it caused a lot of fuss: oh, you can't put a value on a human life. Of course you've got to do that. I mean, how else are you going to decide? Otherwise you're left to the loudest voice saying, oh, we need our treatment for this. We need the NHS to pay for this. We need this.


David (54:35): Everyone would love that, of course. But actually, in an organisation with a limited budget, if you spend money on one thing, you're depriving somebody else of something else. So while it's not perfect in any sense at all, acknowledging that that's an appropriate framework is, I think, a very good step indeed, and I was involved in doing that kind of stuff, for, oh God, vaccines for something or other, vaccine cost-effectiveness. Now, there's all sorts of let-outs for it and it gets bent: you pay more for cancer treatment now, you pay more for end-of-life care, because if someone's not going to live very long, you're not going to get that benefit from it. Clearly, within panics and pandemics, that all went out the window. I mean, what's the cost-effectiveness of all these lateral flow tests we're taking, all these freebies we're getting and throwing away? Who knows. All that cost-effectiveness goes out the window in a crisis, essentially. But when you're in a more stable situation, when you can kind of do the numbers, then I think it's an incredibly valuable exercise to do, while admitting it's only a guideline. It's not third decimal place stuff.
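A minimal sketch of the cost-per-QALY benchmark David describes, with hypothetical treatment numbers set against the 20 to 30,000 pound threshold he quotes:

```python
# Hypothetical appraisal of a new treatment versus standard care.
cost_new, cost_old = 48_000.0, 20_000.0   # per-patient cost in pounds
qalys_new, qalys_old = 6.4, 5.2           # expected quality-adjusted life years

# Incremental cost-effectiveness ratio: extra pounds paid per extra QALY gained.
icer = (cost_new - cost_old) / (qalys_new - qalys_old)

threshold = 30_000.0  # upper end of the benchmark David mentions
print(round(icer), icer <= threshold)  # 23333 per QALY: inside the benchmark
```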


Ben (55:53): So, a lot of behavioral economic theories at the moment have this idea of rational expectations, and then they point out that people don't behave like these models a lot of the time. But there's a kind of countervailing backlash to that, suggesting that maybe the models are wrong rather than the people being wrong; this idea of rationality is only something that comes out of a model. So I'm interested in what you think about rational expectations and whether it's helpful or not, because this plays into nudging, and it maybe also plays into behavioral change for health and these things, because they use rational expectations at their core. So I'd be interested in how you view that.


David (56:38): Yeah, I think rationality is very important, but you've got to distinguish between our behavior as individuals and policy-level stuff at a societal level. We've already talked about the fact that if there are a lot of unknowns, deeper unknowns, then it may be more reasonable to take a precautionary approach to guard against the worst. If there aren't, if you really know what's going on and you can establish the parameters, the structure of your decision, then expected gain must be the right thing to do, because you're maximizing the return on your investment. In NICE's case, you've got so much to spend on the health service. This is pure utilitarian ethics stuff. Now, I don't know much philosophy, I wish I did, but to me that is a very utilitarian approach. There's also a role for a much more duty-ethics, Kantian approach: in some circumstances, for example with a named individual who is suffering, suddenly you'll spend a lot more on them, because you have a duty to protect vulnerable lives, those individual people. So it's not a simple thing, but basically that seems extremely reasonable at the big level, essentially, when everything averages out and you know the structure.


David (57:58): It's like insurance; that's how that works. Now, at an individual level, there's absolutely no reason why we should be rational, for a number of reasons. First of all, we don't know the values and the probabilities, so how could we maximize expected utility? Secondly, the whole idea of expectation really carries some idea of there being the opportunity to repeat, or to average over a context. Whereas if you've only got one life, or one planet, then maybe you don't; maybe you do want to just protect against the worst, maybe you want to be cautious, or maybe you want to have adventures and take risks and things like that. Although you could think of it as making repeated decisions throughout your life and wanting to maximize your expected enjoyment over your whole life, I wouldn't expect anyone to act in that way particularly, largely because we just don't know. We don't know how we'll feel about things at the time, and so on. At the same time, in certain circumstances we should stop and try to be rational. This is the thinking fast and slow idea of Daniel (Danny) Kahneman, which is very powerful indeed. So I disagree with the idea that somehow this is a fault in us, the fact that we're not entirely rational people.


David (59:26): I actually think this is not a fault. This is part of being human: we operate with our guts, with our emotions, we respond in ways that are not formulaic, partly because it'd be impossible to do anyway. At the same time, in certain circumstances, we shouldn't operate with our guts. We should be questioning what to do, and in particular, when anyone's trying to manipulate us, we should be sitting back and trying to take the argument apart and weigh things up. And certainly when we have difficult, important decisions, financial decisions or health decisions or risk-taking decisions, we really should be trying to think about what the possibilities are, how likely they are, and how bad it would be if certain things happened, and operate accordingly: maybe still not by expectation, but at least to protect ourselves from the maximum losses in that sense.
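As a rough illustration of the distinction David draws, here is a small sketch contrasting maximizing expected value with protecting against the worst case (a maximin rule). The options, payoffs, and probabilities are entirely made up.

```python
# Sketch contrasting expected-value maximization with protecting
# against the worst case (maximin). Numbers are invented.

options = {
    # option: list of (probability, payoff)
    "bold":     [(0.9, 100), (0.1, -500)],
    "cautious": [(1.0, 20)],
}

def expected_value(outcomes):
    return sum(p * x for p, x in outcomes)

def worst_case(outcomes):
    return min(x for _, x in outcomes)

# Expected value favours "bold" (EV = 40 vs 20); maximin favours
# "cautious" (worst case 20 vs -500). Neither rule is "the" rational
# answer; they encode different attitudes to a one-shot decision.
print(max(options, key=lambda o: expected_value(options[o])))  # bold
print(max(options, key=lambda o: worst_case(options[o])))      # cautious
```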


Ben (01:00:15): Okay, well, coming up to the last couple of questions, but that's really interesting. I like the way you articulate the difference between the population level and the individual level; but even so, at an individual level, at certain decision points, taking a step back and trying to engage a different part of the brain might be worth it. I do sometimes think about the unintended consequences of some of this population-level stuff. The silly example I have is some statistics I read. It might well be false, probably done by an egg person, but it's something like 720 million eggs thrown away each year in the UK, and about 3 in 10 Brits are doing this solely on a best-before date, because they're worried about bad eggs. It's quite hard to get the data, but I looked into it, and death from food poisoning by egg, so this would be salmonella, is extremely rare in the UK and Europe due to regulation. In fact, in most of the last five years we haven't had any at all. You can get hospitalized, but there are no deaths recorded in the UK. It's a little worse in the US; I think there are about 30 deaths in a year. But on the flip side, particularly if you're worried about the planet: take the UK's 720 million eggs, and the 30% thrown away just on the best-before date when the eggs are probably not off yet. That is an enormous amount of cost in a different part of the system. And when you look at the food agency, they're never really going to take the food waste part into account in their mandate, because they're always, probably quite rightly, going to be worried about the food safety bit. And then this goes into your absolute risk.


Ben (01:02:02): I mean, the other one I think about is different countries' advice for pregnant women. I find it really funny, because in Japan they never advise against raw fish, and in France they don't advise against blue cheese, because they know there's no point doing that; people are going to eat it anyway. And the absolute risk is very, very low, even though the relative risk might be higher. But I don't think that's a conundrum that we can– [easily answer at the population level]. But I do think, with some of those unintended consequences of the system as a whole, it'd be really nice if someone in government or policy was trying to put that together and think about it: okay, well, actually, maybe we should tweak the mandate or the advice here. Is this something you come across often?
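A quick sketch of the absolute-versus-relative-risk point, with invented numbers: a headline that an exposure "doubles the risk" can still correspond to a tiny absolute change when the baseline risk is low.

```python
# Sketch of absolute vs relative risk. Numbers are made up for
# illustration, not taken from any real study.

baseline_risk = 1 / 100_000   # assumed baseline: 1 in 100,000 per year
relative_risk = 2.0           # headline: exposure "doubles the risk"

exposed_risk = baseline_risk * relative_risk
absolute_increase = exposed_risk - baseline_risk

print(f"Relative risk: {relative_risk:.0f}x")
print(f"Absolute increase: {absolute_increase:.6%}")  # 0.001000%
# Roughly 10 extra cases per million people exposed:
print(f"Extra cases per million: {absolute_increase * 1_000_000:.0f}")
```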


David (01:02:43): Yeah, well, again, I try to avoid it. As you said, it's to do with regulators and the precautionary principle and where you draw thresholds. For me, the biggest misunderstanding is people thinking there's a single threshold, and something suddenly goes from being safe to unsafe, which is complete nonsense. So, like they do with the best-before date, everything should have two dates. The first date is "ideally eaten before this date" and the second is "don't eat after this date", and in between there's a range. This is known as the tolerability of risk, which is absolutely entrenched in the legislation for the Health and Safety Executive in the UK. It's been fantastically successful, and essentially health and safety philosophy in the UK is not based on a single risk threshold. It's always two. So you've got a level below which something is considered safe, and a level above which things are considered unsafe, and in between there's this grey area where you should be trying to reduce the risk as low as reasonably practicable. There are tradeoffs, and maybe you can't do it now, but you should try, and it's not the end of the world if you don't, and so on.


David (01:04:12): It's the same with alcohol and all this stuff: trying to produce a single threshold is such nonsense, because you're drawing a line in the sand. The group setting the threshold has got an incentive to be very cautious, and then the consumer, as you said with throwing away the eggs, thinks anything above that is dangerous. No, nobody ever said that; they only know it's safe if it's below that. So you've got a conflict of incentives there, with people acting without thinking of the full consequences, and I think it's crucially to do with naivety about trying to put a threshold on something that hasn't got a threshold.
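Here is a minimal sketch of the two-threshold idea David describes, loosely in the style of the Health and Safety Executive's tolerability-of-risk framework (often called ALARP, "as low as reasonably practicable"). The band boundaries below are invented for illustration, not the HSE's actual figures.

```python
# Sketch of the two-threshold "tolerability of risk" idea.
# The band boundaries are illustrative, not official HSE values.

BROADLY_ACCEPTABLE = 1e-6  # assumed: below this annual risk, treat as safe
INTOLERABLE = 1e-3         # assumed: above this, the risk must be removed

def classify(annual_risk: float) -> str:
    if annual_risk < BROADLY_ACCEPTABLE:
        return "broadly acceptable: no action needed"
    if annual_risk > INTOLERABLE:
        return "intolerable: stop the activity or redesign it"
    return "tolerable if ALARP: reduce as low as reasonably practicable"

for r in (1e-7, 1e-4, 1e-2):
    print(f"{r:.0e}: {classify(r)}")
```

The point of the middle band is exactly the grey area David describes: between the two lines you are neither "safe" nor "unsafe", just obliged to keep reducing the risk where it is reasonably practicable to do so.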


Ben (01:05:02): Yeah. I hadn't heard of [range and tolerability] of risk and I didn't know that about--


David (01:05:05): It's really powerful. It's such a powerful-- like God, I just bang on about it all the time. You need two thresholds for everything.


Ben (01:05:15): Well, because I was speaking to a climate scientist the other day, and that was one of the points he made: for communication, a lot of people use 1.5 or 2 or 2.5, but actually that point estimate is not at all what the models are saying. It's a spectrum, and it's not a threshold. So, last two questions. One is: what do you like about stained glass art, and what maybe should we know about stained glass? Because I believe that you did some.


David (01:05:43): I do some, yeah. I've got some in the shed at the moment that I've got to finish off for Christmas. So yeah, I do stained glass stuff. I like doing it because I haven't got any artistic skill whatsoever, but I can copy sort of mathematical patterns, and I love the impression of stained glass. I love churches. I spend my time going around churches and cathedrals with my binoculars, because the first rule is always take binoculars into a church; some of the best stuff is hidden away at the top. Some of the best windows in York Minster you can hardly see from the ground unless you've got very good binoculars. It's way up, and it's brilliant stuff.


Ben (01:06:24): Is it the detail of the patterning and the shapes or is it how the light goes through or all of them?


David (01:06:28): The light as well, but basically it's the detail. I love old glass. But what I really like about it is partly that it's quite a forgiving technology. I'm not very skilled, but if you look at it with a light behind it, you don't see the poor quality leading and the poor quality soldering. So that's really good, although I always look at it very carefully now. What I really like is both the fact that one can produce things that are quite nice without a huge amount of skill, and that the technology is basically unchanged since the 1200s, the stained glass …, which is just the most wonderful thing. Everyone should go [to] see the glass …., it's staggering. So that's why I love it. You're using lead, using coloured glass. Okay, I have got a diamond-tipped glass cutter, which they didn't have, but then you're using solder, which is exactly the same. And what's nice is that the flux used when you're soldering the lead is a tallow candle. You buy them from the shop, and it's a candle, a tallow candle.


And that's what you use to do it. Well, that's what they were using in 1200.


Ben (01:07:45): That's amazing.


David (01:07:46): Isn't that lovely, the technology. I also got a grinder for finishing off my stuff. So that's two cheats. But apart from that... oh, actually, I got a really nice lead cutter.


Ben (01:07:58): Three cheats maybe.


David (01:07:59): Three cheats, and I got a solder temperature control. So there are a few cheats, but the basic materials have not changed since the 1200s.


Ben (01:08:09): Actually, that reminds me. I think I was reading something Nassim Taleb said, although I don't think it was his concept originally: this idea of something being Lindy, which I think referred to a restaurant or a bar in New York. The idea is that the longer something survives, the more likely it is to keep surviving. So with stained glass: it's survived so long that there's something valuable to it, and surviving in this form means that the longer it goes on, the more likely it is to continue to survive.


David (01:08:40): It's resilient. Yeah.


Ben (01:08:41): Yeah. It's resilient. Exactly. Great. So my last question is, I guess, asking for your life advice in general: what would you advise people to think about? We touched on this; you can think about it in terms of risk. Obviously there's Omicron at the moment, but what would you generally advise people to think about in life or risk, or any advice that you'd like to--


David (01:09:05): I mean, nobody should ask me this stuff, but all I do is say what I said when I used to give a lot of school talks and talk to kids about risk and things like that: take risks, be bold, but don't be reckless. In other words, think through the possibilities and make sure you really strongly mitigate the worst possible outcomes, but go for it, because the upside of taking risks is the experience, the lesson even if it goes wrong, and the joy and the excitement and the fun. So I put a huge, huge value, both monetary and personal, on fun, which doesn't tend to get into the equations very much at all. That's why I do idiotic things like going on Winter Wipeout, and I like cold water swimming and I jump off cliffs and things like that. I love all that kind of stuff, but I'm always careful. If I'm going to jump off a cliff, I make sure Johnson's jumped off just before.


Ben (01:10:18): Okay. That makes a lot of sense and that's the kind of idea that a life too cautious is not a life well lived.


[Crosstalking:01:10:25].


Ben (01:10:28): Mitigate them obviously with sensible things, but don't be put off.


David (01:10:32): Yeah. That's my philosophy. And so I just hope nobody ends up getting injured because of that philosophy. So I say no, be careful.


Ben (01:10:40): Yeah. Great. Well on that note, I thank you very much. And also everyone listening do look out for the books that David's written. So thank you very much.


David (01:10:52): Thanks. It was a great conversation. Thank you. Okay, great.


Ben (01:10:56): Bye.


David (01:10:56): Bye.


Ben (01:10:57): If you appreciate the show, please like and subscribe as it helps others find the podcast.