Saloni Dattani: improving science, important questions in science, open science, reforming peer review | Podcast

Saloni Dattani is a founding editor at Works in Progress, a researcher at Our World in Data and a commissioning editor at Stripe Press. She has recently been profiled by Vox as part of the Future Perfect 50. Saloni is an excellent thinker on progress and science with recent articles for Wired (on making science better) and the Guardian (on challenge trials). Her Substack is here and her Twitter here.

Saloni tells me what she thinks are the most important questions in science that we should be working on.

We discuss making science better: challenge trials, making science more open source, reforming peer review, and experimental clinical trial design.

We talk about vaccines, why Saloni has room for optimism, and the risks and opportunities around science that she is thinking about.

…there are two points that I have that I think about when it comes to optimism. The first is that we already have lots of tools that can make the world better that just aren't being used as much as they could be. Global health is a really good example of that. So we have lots of vaccines and treatments that have huge benefits to people who take them, but much of the world doesn't take them yet. One example is influenza vaccines which have varying efficacy, but they're usually quite effective and yet most countries don't routinely give them out to people even though they could massively reduce the burden of hospitalizations and respiratory disease and so on. There are many other examples of treatments like that where we already have these things but they're just not implemented widely. That gives you some optimism but also makes me a bit cynical about why we aren't even using what we have.

Then the second part is more about the frontier and having new technology developed. I think on that it's kind of difficult to predict how much progress is possible in those areas, but it's also useful to see how things have changed over time. So genome sequencing is one example where the cost of sequencing has dropped from millions of dollars in the early 2000s to just a few hundred now. What that means is that we can collect much more data than we could in the past. We can understand things that we couldn't before. Sequencing is hugely important for understanding how our cells work but also how viruses and bacteria work.

It just means that we have much more scope to make progress on treatments and vaccines than we did in the past. So that's something that I'm hopeful about. So there are various technologies that drive the ability of researchers to make advances in these fields. But I also kind of avoid taking any stance on whether I'm an optimist or not. I think it's helpful to just see how things are, see how things have been, and treat everything as potentially different from other examples and just look at the facts of what's happening in that particular area. Sometimes there's room for optimism and sometimes there isn't…

Borrowing from Tyler Cowen, I ask: 

How ambitious are you?

Which of your beliefs are you least rational about? (Or what is she most irrational about?)

What is something esoteric you do?

We play overrated/underrated on:

  • Substack

  • Misinformation

  • Doing a PhD

  • Women in Science

  • Vaccines and Drugs

We end on Saloni’s current projects and advice.

Podcast available wherever you get podcasts, or below.  Transcript follows.

Transcript: Saloni Dattani and Ben Yeoh (only lightly edited)

Ben

Hey everybody. I'm super excited to be speaking to Saloni Dattani. Saloni is a founding editor at Works in Progress, a researcher at Our World in Data, and a commissioning editor at Stripe Press. She's an excellent thinker on progress and science and has recently been profiled by Vox as part of the Future Perfect 50. Saloni, welcome.


Saloni

Hello. Thank you for inviting me onto your podcast.



Ben (00:29):

You're most welcome. So to start off, what do you think are the most important questions in science or meta science that we should be working on today?



Saloni (00:39):

That's a really good question. I think the most important questions are thinking about what scientific publishing is going to look like in the future. So currently, a lot of academic research gets published in journals, and journals still work much as they did before the internet. So they're published in a static format, they don't get updated very easily, their citations are kind of difficult to follow, and it's also hard to put together research with comments from other experts and criticism. All of that means that things are happening quite slowly in a lot of academic fields. And I think what we're seeing now is that people are trying to think of other ways of publishing. So for example, publishing on GitHub or on forums, and you can often see comments between different experts on the same topic. It's a huge change, but I still feel like it's just the beginning of what could be done. I think how to develop that further in a way that allows people to contribute to science but also has some kind of moderation or some ability to filter out the good and the bad will be really important.



Ben (02:15):

I had Alec Stapp on the podcast who made a similar kind of point about think tanks or policy pieces in general; that you put out a white paper and it's a very old school way of doing it. And policy makers as well haven't adapted for the internet age. I guess you had the recent article out in Wired talking about how to make science better. I feel this comes under maybe one of your items of making science more open source. What do you think about expanding on that idea and maybe we can talk about three or four of your other ones as well. But I guess this is the making science open source. Is it just building on things we have like GitHub and things like that? Or do you think there should be some other system or something we should develop to help that open source ecosystem?



Saloni (03:04):

Yeah. So I think GitHub is a really good starting point. So what GitHub does is it allows you to share your data and your code for whatever analysis you've done. It means other people can comment on those. They can also branch out so they can kind of reuse your code and change it slightly to adapt it to whatever they're doing. It's a really cool way to publish science because it means that people can contribute from anywhere around the world. That's something that was done during Covid. So a lot of research groups published their data on GitHub and then had hundreds or thousands of people from across the world commenting and pointing out that like, "This country had missing data for these three weeks and something's gone wrong." Or they've contributed in a way that helps people automatically update their data sets.



That's a very good way to publish science, I think. But it also doesn't really work for a lot of fields; a lot of fields that don't have analyses that are written in computer scripts. It also doesn't fit some of the things that we want. So it doesn't make it easy for people to add a kind of review, comments from other experts. It doesn't allow you to connect different research pieces together. So I think there's a lot of scope for new tools and platforms to be created and I want to see how that happens. Then the second part is I also think that there are lots of new tools that could be created in terms of making sure that people aren't plagiarizing or fabricating their data or manipulating their images and things like that. And that has just started to be developed. So I think some kinds of platforms that allow you to check for errors and things like that could also be much bigger in the future.
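As an aside: the kind of contribution described above, spotting that a country has three weeks of missing data, is easy to automate once data is published openly. Here is a minimal sketch in Python, assuming a hypothetical CSV with date, country and new_cases columns; the file and column names are illustrative, not from any real repository.

```python
# Minimal sketch of an automated completeness check on an openly
# published dataset. File name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("covid_cases_by_country.csv", parse_dates=["date"])

for country, group in df.groupby("country"):
    # Reindex onto a full daily calendar so reporting gaps show up as NaNs.
    full_range = pd.date_range(group["date"].min(), group["date"].max(), freq="D")
    reported = group.set_index("date").reindex(full_range)
    gaps = int(reported["new_cases"].isna().sum())
    if gaps >= 21:  # roughly "three weeks" of missing data
        print(f"{country}: {gaps} days with no reported cases")
```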



Ben (05:09):

I guess that rolls round into your idea on reforming peer review. That's some way you could do that check and balance. Like if the image is too perfect, which is often a sign that someone has manipulated it. I'm sure AI or some sort of computational manner can pick that up more easily. But is there anything more to your idea of reforming peer review which isn't just about the system?



Saloni (05:33):

So in terms of the system, do you just mean what gets checked or...?



Ben (05:39):

Yeah. I guess the system of journals which you're kind of indicating is not fit for purpose and then you've got new things like GitHub. I guess is there anything around the particular peer review idea which you're kind of thinking about? You highlighted some of the problems like checking for errors or false data and things like that. I think in your article you make the point that if you ask-- Papers always claim that you can ask for the data, but if you actually go and ask the academics, “Do you have your data?” then a lot of the time they've lost it or they didn't really have it or it's moved systems. It's not even necessarily in bad faith; they just didn't keep it properly. So there are these kinds of checks and balances which aren't really great, I guess, in terms of thinking about peer review.



Saloni (06:24):

Yeah. I mean, that is such a strange issue. I just find it so strange that email is the way that we are supposed to get people's data instead of it just being stored in some repository where anyone can access it. But in terms of how else we can reform peer review, I think one great idea that is starting to get traction is this idea of having some kind of centralized platform where you submit your research to this central platform which connects to various reviewers. They comment on the paper, help you improve it, and then it gets published in a journal. I recently just today saw that that is happening with registered reports. So registered reports are this type of academic paper where you submit your methods and your introduction or hypothesis for what you want to find, people comment on it and make sure that the whole analysis is done correctly. And then you start actually analyzing or collecting your data, and then you publish the results no matter what they show. So that kind of avoids the issue of publication bias where people decide not to publish certain things because they didn't like the results or they change the analysis after they've found the results or so on.



So that's something that I've just seen today has been developed by Chris Chambers who is a meta scientist, who's one of the editors at Cortex, this neuroscience journal. So what they plan to do is have people submit their papers to the central system where reviewers review the paper and then after that it gets sent to a different neuroscience journal. That seems like a great way to do it. Not just because you are kind of avoiding the publication bias issue, but also because it means you only have to submit your paper to a journal once and then it just gets connected to the journals. Whereas currently it's the journals themselves who are trying to find reviewers and doing it very badly because each of them don't have that many connections. They can't really track how much time people have to review papers. It's just a very strange system. So having the central platform which is connected to loads of different researchers would be a huge benefit.



Ben (09:05):

That seems to me to be dividing up the labor on the journal side of the system. But you also seem to have some ideas about dividing up labor within the research ecosystem itself. Would you like to touch upon that and how that occurred to you? In terms of like, well actually, if you could divide up labor in research, that might also be more effective for science?



Saloni (09:27):

Yeah, definitely. For me, it's quite a personal experience where I realized that I really enjoy some parts of the scientific endeavor. I like reading past research. I like summarizing it. I like writing manuscripts. I don't really like coding so much and I don't really like going out and presenting my work and finding people to collaborate with. What I've found is that other people really enjoy those things and don't enjoy writing or reading very much, and other people are really good at running lab experiments. It's very strange because each of these skills takes so long to develop. You have this steep learning curve when you're starting out on the process, but you also have to spend a lot of time to maintain those skills and keep them up to date with the latest developments in the field. And it just doesn't seem very efficient to be trying to do that for every single role that's part of the scientific industry.



So what I think is that if we allow people to have much more focus on the things that they're interested in or the things that they're skilled at, it just means everything becomes more efficient and also improves in quality. So I can spend time on the things that I'm very good at and team up with someone else who can manage the data or run scripts for me, and that's something that they enjoy as well. It's very similar to one of the ideas from Adam Smith's story of the pin factory, where people specialize in different parts of manufacturing a pin. You increase the production of pins enormously just by putting these different pieces together instead of having one person do everything. So I think that kind of approach to science would allow much more science to be done at a higher quality, at a higher speed. And also it would mean that we could open up the scientific process to people not just in academia, but also software engineers, data people, writers and so on.



Ben (11:57):

Yeah. It was really obvious to me when I was at university that you had some lecturers who just weren't very good at lecturing or teaching, or they didn't want to, but they were brilliant scientists. Their papers were brilliant. It always seemed like, "Well, why do you have to do the teaching?" And vice versa; some people teaching maybe didn't want to do as much lab work. And we have these fields, I guess they're not so much fledgling fields anymore, but they sort of are, like science communication or even climate change science communication. It's obvious to me they are separate disciplines. You can do that really well. You don't have to be an atmospheric scientist to be able to put together complex systems work. Actually, that gets to Adam Smith's point about comparative advantage. So I do think that's quite interesting.



And a lot of other industries obviously do that. Would that flow into your ideas of also streamlining experiments? You can get people doing their little bit and then perhaps improving on that bit. I guess the other bit you talk about is collecting routine data, which seems a little bit different. But it does occur to me that the kind of people or organizations collecting some of that data may not be what you would traditionally have thought of as a science organization, right? There could be some public body or even a private body. There are some very interesting data sets. They should just collect them, or maybe be paid to collect them somehow, and then that's a resource that people can use.

Saloni (13:23):

Yeah. So streamlining is this idea that you have lots of experiments running in a parallel way. So the aspects of experiments that are routine or shared can be managed by a different group, in the same way that you have specialization within a field. One of the examples that I give there is the RECOVERY trial in the UK. So what happened there was this was a very large randomized controlled trial that had around 20 different treatments. Patients in the NHS were connected to this trial and could volunteer for it. And as the pandemic went on, they would be randomized to different treatments but within the same trial. So you wouldn't need to set up the whole organization, you wouldn't need to recruit patients for each one separately.



You would analyze their data in the same way, look at the same outcomes, and it would mean that you could make better comparisons between them, but it would also save a lot of time and costs and effort in setting these things up. So the difficulty there is actually setting it up in the right way and making sure that you're not having this badly organized thing that affects every experiment. But I think there's a lot of scope for that to be done well if there are people who are able to learn from experience and do these-- Running a trial is very difficult. Operating a trial in practice is very difficult and I'm sure there's people who have been doing that for a very long time. It just makes much more sense to give them the ability to run the practical aspects of it while the smaller research groups are able to specify what specifically needs to be done in their part of the trial.
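To make the mechanics concrete, here is a toy sketch of platform-trial allocation in Python: one shared pipeline randomizes each incoming patient across whichever arms are currently open, with a common control arm, and arms can be added or dropped without restarting the trial. The arm names are invented, and real trials use far more careful allocation and governance than this.

```python
# Toy sketch of platform-trial allocation. Arm names are invented.
import random

open_arms = ["standard care (control)", "drug A", "drug B", "drug C"]

def enroll(patient_id):
    """Randomize one consenting patient to a currently open arm."""
    return patient_id, random.choice(open_arms)

# Arms can close or open mid-stream without setting up a new trial:
open_arms.remove("drug B")   # e.g. an interim analysis shows no benefit
open_arms.append("drug D")   # a new candidate joins the same machinery

for pid in range(1, 6):
    print(enroll(pid))
```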



Ben (15:25):

I reflect that actually I currently think that the classic, standard, so-called RCT -- the double blind placebo controlled trial -- is a little bit overrated. Obviously it's extremely good, but it means that people haven't thought about these other ways of doing trials. Trials which can change over time, and these more up to date statistical methods. So I was wondering whether you think experimental design has actually maybe moved ahead a little bit further but we don't do enough. One very classical element of this, and I think you've written about it as well, is the so-called challenge trial, particularly for infectious type things, where there was a reasonable amount of pushback from a certain bioethical point of view. But if you looked at it in terms of total population utility and other things, there were a lot of people saying, "Well, we would potentially save a lot of lives or get a lot of other information quicker."



And there was perhaps a little bit. But again, it hasn't really developed as I thought it might. So I'd be interested in whether you think experimental design might be-- whether we should perhaps look at challenge trials or other parts of design more? I guess we had a recent Nobel Prize winner in terms of discontinuity designs and other sorts of designs as well, which have obviously really helped the field. And I kind of feel there is just so much there but it hasn't filtered down. I'm quite a faraway observer of it. So I'd be interested in your view on challenge trials and general experimental design.



Saloni (16:58):

Yeah. I would say I'm a huge fan of randomized controlled trials just because they're very helpful when you don't know how the mechanism of something works. Usually when you have a study where you're just trying to use data that's already observed, where you're not running an experiment or manipulating something, that requires you to have some understanding of what the confounders are or how people might select into receiving the treatment and so on. What a randomized controlled trial lets you do is kind of avoid a lot of those questions and just use the process of randomly allocating someone to the placebo or treatment to see how that affects their outcomes. But at the same time, randomized controlled trials are not very old. The first one was run in the 1940s and with time they've had various developments along the way.



So a lot of the procedures that we consider part of the traditional randomized controlled trial have only been developed in the last 20 years or so. So for example, clinical trials are now all registered online. So the methods and the things that will be measured are declared before the trial is run. That didn't happen in the past. And also blinding, for example, is relatively new. So I expect that these things will continue to advance and change in the future. And so with this example of what's called a platform trial where you can allocate people to different drugs at different times, that's also quite new. I think it's just another part of the evolution of these trials and it's a cool thing to see how they develop.



In terms of challenge trials, I think challenge trials are a bit tricky. Sometimes they're very useful and sometimes they're not. So with a usual randomized control trial, what you do is you give people a vaccine or a placebo for example, and then you just kind of wait until they get infected on their own by the infectious disease, by the pathogen and you see how those rates vary between the two groups who were randomly given the vaccine or not. That can often take a really long time. So because you're just waiting and watching for them to get infected, if the disease is rare or if it doesn't cause symptoms in many people, you might miss a lot of the people who get infected. In contrast with that, what a challenge trial does is you deliberately give people the pathogen and you can then see the effects of the vaccine if there is an effect in a very short amount of time because you're controlling when you give it to them, you're controlling the doses that you're giving and the routes, and you're monitoring them in a much more controlled environment.



So the main benefits of challenge trials are that they can be much faster. They can involve far fewer participants, sometimes a hundred versus thousands in a usual trial. It just means that you get this data much quicker. But the problems are that you have to actually be able to develop the pathogen in the lab and be able to give it to each of these participants. And that can be hard. So you might not be able to culture the pathogen in the lab. You might not be able to give it to them in the same way that they would receive it in the outside world. You also have to figure out the right dosing. So if you give people too much of the pathogen then they might just get infected regardless of whether they got the vaccine or placebo, even if the vaccine is usually helpful in an outside setting.



So those kinds of questions are difficult and they make challenge trials currently not as useful as they could be. But at the same time, they're very helpful for some diseases. So I've recently written about how they're going to be the best way to test vaccines for Zika virus disease because the disease is now in this trough and there just aren't that many cases worldwide and we don't know when the next wave will come. So if we want to prepare for that, we have to try out these new methods at least making sure that people are informed about what the risks are and willing to take those risks and also kept in this safe environment where they can easily be treated if anything happens.
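To put rough numbers on the "hundred versus thousands" point above, here is a back-of-the-envelope sample size calculation using the standard normal-approximation formula for comparing two proportions. The attack rates and the 60% efficacy below are illustrative assumptions, not figures from any particular trial.

```python
# Back-of-the-envelope sample size per arm for comparing two infection
# rates (normal approximation). All rates are illustrative assumptions.
from scipy.stats import norm

def n_per_arm(p_control, p_vaccine, alpha=0.05, power=0.80):
    """Participants per arm to detect p_control vs p_vaccine."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_vaccine) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p_control * (1 - p_control)
                          + p_vaccine * (1 - p_vaccine)) ** 0.5) ** 2
    return numerator / (p_control - p_vaccine) ** 2

# Field trial: suppose 2% of the placebo arm gets infected during
# follow-up, and a 60%-efficacy vaccine cuts that to 0.8%.
print(round(n_per_arm(0.02, 0.008)))  # ~1,500 per arm, i.e. thousands overall

# Challenge trial: nearly all exposed controls get infected, so the same
# 60% efficacy separates the arms with a handful of volunteers.
print(round(n_per_arm(0.90, 0.36)))   # ~11 per arm, i.e. tens overall
```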



Ben (21:56):

So they have a role; there are downsides, but there are upsides too. So in specific circumstances maybe we should be looking more closely at them. You've written a really great piece on depression, I think on Our World in Data. So I was interested in your view as to maybe what's most misunderstood about depression? I guess there are various types. And whether maybe overall depression drugs are overrated or underrated, or I'm sure you're going to talk about the nuance that actually it probably depends. But I'd be interested in your research on looking at the overall depression field and what you think.



Saloni (22:32):

Yeah. Good question. So I think what's most underrated about depression is that it's not that recognizable. We sometimes have this image of someone who's just sad and no matter what they do they're unhappy. They just don't react positively to anything. The main quality that we associate with depression is just sadness. It can be a lot more than that. It can be different from that. It can involve just lots of tiredness and anxiety, feeling very guilty about themselves, or having thoughts of death and so on. Sometimes it can involve people who are sad overall but can react positively to good news. It can last several months and sometimes people don't realize that they're depressed because they don't have those classic symptoms of just pure sadness.



I think those symptoms are one of the least understood, the least known parts of the condition. So it's not the case that just because it involves other symptoms that it's like a very loose criteria. To get a diagnosis of depression you need to have these symptoms almost every day for two weeks at least. It involves two main symptoms. So either sadness or a loss of interest in your usual activities along with at least five other symptoms. So that can be quite a high threshold, but it's also not that uncommon for people to have it.



Ben (24:28):

And those five other symptoms you can almost draw from quite a large pool. Reading your work it was kind of like, "Wow, there are these hundred other symptoms" -- that might be slightly overstating it. But the mix of the five can be almost any sort of mix, which is why you might miss it. You've obviously got the sadness and loss of interest. But you often miss it because it's like, "Well, it could be these five or it could be a combination of these five and they all kind of make sense." I hadn't understood that kind of complexity or broadness of it. And then the fact that depression drugs, your sort of first line of treatment, a lot of them kind of work for some people and then they often stop working and you sort of cycle through them. So I think in general they're probably quite useful for a whole bunch of people but obviously they don't work for everyone. I didn't know if you had any thoughts on looking at the use of the drugs.





Saloni (25:19):

Yeah. What's difficult with that is it's actually not easy to tell whether they've worked at an individual level. If you think about maybe your own experience of being happy or sad during different weeks, you have various changes. So some days you'll feel happier, some days you'll feel worse, and these things also continue over weeks or months. And because there's so much variation within each person, even if a treatment is reducing your symptoms by 30% or whatever that means-- Even if it is reducing your symptoms by 30%, you still might be having some really bad days at the end of a trial, for example. So that's why we need to look at these kind of group differences. So the difference between the whole group who's taking the antidepressants and the whole group who's taking the placebo.



And when we do that, we usually do see a sort of moderate effect of antidepressants, where they tend to reduce the probability that people will still have a diagnosable depression at the end by about 30% or so. I would say that is underrated by some and overrated by others. So clearly it could be a lot better than that. But that's still a big effect considering these are just small pills and we don't really have a great understanding of how depression even works. Despite that, we have seen many studies confirm that these treatments actually do work. They work about as well as behavioral therapy or talking therapies and so on. I think they're quite useful especially for people who don't have the time to go out and get therapy or are just in these desperate situations where they need something to work quickly. So I think they have an important use but they're also quite misunderstood. I think it's also a mistake to necessarily think that because one person doesn't feel much better at the end of it, it didn't work for them. But also there is variation between how different people experience them.
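As a small illustration of this group-versus-individual point, here is a toy simulation. All the numbers (baseline severity, week-to-week noise, a 30% underlying reduction) are invented purely to show why a real average effect can be hard to see in one person's experience.

```python
# Toy simulation: a genuine 30% average reduction in symptoms can still
# leave treated individuals with bad weeks. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_weeks = 200, 12

# Each person's weekly symptom score fluctuates around their own baseline.
baseline = rng.normal(20, 4, size=(n_people, 1))
noise = rng.normal(0, 5, size=(n_people, n_weeks))

placebo = baseline + noise
treated = 0.7 * baseline + noise  # treatment cuts the underlying level by 30%

# The group difference is clear...
print(placebo.mean(), treated.mean())  # roughly 20 vs 14

# ...but plenty of treated person-weeks are still worse than the
# placebo-group average, so "it didn't work for me" is hard to judge alone.
share_bad = (treated > placebo.mean()).mean()
print(f"{share_bad:.0%} of treated person-weeks exceed the placebo mean")
```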



Ben (27:46):

It's the challenge of the kind of individual dynamic versus the population statistics. I was speaking to David Spiegelhalter, who looks at a lot of stats, and he makes the same point. Classically there's alcohol and also statins, big population things. You can see a population effect but you might not see it at the individual level. In fact, the individual might have a preference to go the other way, which you wouldn't pick up in the population stats. I hadn't understood that it's a sort of well understood statistical paradox, almost. So thinking maybe on a higher level about optimism and opportunities and then maybe also risks. When I read your work, one of the things I like about it is it's-- I guess I would say cautiously optimistic. There's a lot of things you sort of say like, "Well, yes, there's a lot of things wrong, but we could do things better." And I like that.



So I was wondering what you were maybe most optimistic about, or things generally that you see in the world. Obviously there's the improving science part, but do you have room for optimism? And then we can flip to the other side: other kinds of risks that you think are underrated, that maybe we have to watch out for and mitigate. But I'm kind of interested in your optimism side because I see that as a very nice thread through your work. Actually, it probably goes through some of the Our World in Data work and Works in Progress work in general. But this feeling that we can do better and we perhaps are doing better.



Saloni (29:14):

I guess there are two points that I have that I think about when it comes to optimism. The first is that we already have lots of tools that can make the world better that just aren't being used as much as they could be. Global health is a really good example of that. So we have lots of vaccines and treatments that have huge benefits to people who take them, but much of the world doesn't take them yet. One example is influenza vaccines which have varying efficacy, but they're usually quite effective and yet most countries don't routinely give them out to people even though they could massively reduce the burden of hospitalizations and respiratory disease and so on. There are many other examples of treatments like that where we already have these things but they're just not implemented widely. That gives you some optimism but also makes me a bit cynical about why we aren't even using what we have.

Then the second part is more about the frontier and having new technology developed. I think on that it's kind of difficult to predict how much progress is possible in those areas, but it's also useful to see how things have changed over time. So genome sequencing is one example where the cost of sequencing has dropped from millions of dollars in the early 2000s to just a few hundred now. What that means is that we can collect much more data than we could in the past. We can understand things that we couldn't before. Sequencing is hugely important for understanding how our cells work but also how viruses and bacteria work.



It just means that we have much more scope to make progress on treatments and vaccines than we did in the past. So that's something that I'm hopeful about. So there are various technologies that drive the ability of researchers to make advances in these fields. But I also kind of avoid taking any stance on whether I'm an optimist or not. I think it's helpful to just see how things are, see how things have been, and treat everything as potentially different from other examples and just look at the facts of what's happening in that particular area. Sometimes there's room for optimism and sometimes there isn't.



Ben (32:03):

So neither a techno optimist nor a doomster. I would add on the genome sequencing, which I think is a big thing, and then I would pair it with essentially what DeepMind and others have now done on protein folding and AI and computational biology. When I speak to computational biologists, versus 10 years ago, they kind of think this is magic. It's almost equivalent to magic; not quite, but it's a definite step change. Are you worried about anything on the risk side? I guess you've got some people thinking about broad existential risks; so manmade pandemics, nuclear war, those types of things. Or I guess sort of rogue AI, or there are more sort of mundane risks like antibiotic resistance, climate, war currently. I guess you sort of said it was in a balance. Maybe there's something around science. Is there anything you think is particularly underrated or is on your radar screen that you kind of worry about somewhat?

Saloni (33:09):

Pandemics are a big one. So the risks of pandemics come from various different places. There's the risk that a disease emerges in the first place and then the risk that it turns into a large epidemic and then turns into a pandemic. The fact that we have so much more travel and globalization means that these things can spread much faster than they could in the past. And because of how we use land, how we farm animals, how we sort of reuse land for various purposes, we're encountering animals and plants and various pathogens that we wouldn't have in the past. So that's a kind of growing risk.



There's also the risk of biosafety failures from labs, and I think one thing that's very underrated is bio risks from industrial use. So if we have these much larger factories that are using cultured products or other biomaterials, that has quite a large risk of turning into a big safety issue, much more than small research labs. So that's something that I worry about, but I also think we have better tools than we did in the past to stop them. We have better tools to understand what they are as soon as they emerge, with genomics and so on. It feels like we're kind of playing catch up with the technology versus the risks, but there are also various ways that we can reduce the risks while still having growth and development. So for example, having much more biodiversity conservation, having much more urbanism, having people live in places that are not so risky that we have to worry about these spillovers and so on. Having cultured meat, for example, makes a big difference as well. One of the big drivers of spillovers is factory farming, for example. We have avian flu very often breaking out around the world. That's something that could be avoided with technology and so on. So yeah, I think that's a big underrated risk.



I also think in general new technologies do often carry risks. So if we look at the 20th century and we look at lead use for example, leaded gasoline and so on, those were big breakthroughs in technology but they also had risks of pollution that people didn't necessarily see at first or didn't pay much attention to. And I think we should also be thinking about those happening now. So what are the inventions or tools that we're creating now that have big negative effects that we're not seeing yet? Usually people who say that are kind of anti-growth and afraid of new technology in general, but I think if we look at history we have to recognize that these things have happened before, with lead, with x-rays for example, and so on. And there are many risks that we wouldn't have foreseen at the beginning that we should still be aware of.



Ben (36:52):

That's very balanced. Do you have a view on any of the ideas of effective altruism then? Because it seems that some effective altruists or EAs are very worried about these existential risks, or the risks of going too fast in AI so something bad happens, and some of them are very worried about biosecurity. On the other hand, you can say that a lot of this technology is going to happen anyway, and it's about how we use antibiotics, how we do cultured meat or whatever it will be. Where do you think they might be most right or wrong? Or do you have no particular view?



Saloni (37:28):

It's hard to say. I haven't really thought very much about AI in particular. It just seems so out of field for me that I've kind of avoided it and stuck with what I think I'm better at. But I think in general, we should take these risks seriously. It's not necessarily the idea that we should just stop technological development in some areas, but that we should try to align it with what would be best for us. In the past there have been lots of examples of these kinds of new technologies just being banned. For example, CFCs, which were responsible for the hole in the ozone layer, and there were replacement technologies that we could use instead of them. I think that's one thing that I think is underrated: having this kind of backup or contingency technology that could be used instead if there are growing risks with something that we don't recognize yet. So trying out different things, comparing them, not being satisfied with something that works just because we've looked at how effective it is, but also thinking about whether there are other options that are safer and effective at the same time, and not treating it as a tradeoff between growth and safety. I think you can often just do both.



Ben (38:55):

That seems fair. Okay. I have a couple of, I guess, Tyler Cowen inspired questions that he asks on this type of thing. Maybe I'll start with one about sort of ambition and the future. So one of the things he asks is, I guess, how ambitious are you?



Saloni (39:20):

I think people find it annoying when I tell them that I just don't have ambitions. I don't think I've ever really had ambitions. I have short term goals of projects that I want to complete and things I want to do. I generally want to improve myself and be a better person and so on. But I don't think there are big goals of things that I want to achieve. It's just a path that will kind of emerge and I'll see how things go in my life. I didn't really plan to be where I was. I just kind of tried out different subjects that I was interested in and fell into various jobs along the way. I think it's nice to have an open mind about what the possibilities are just because you don't know what opportunities will arise, but you also don't really know where you'll be when you're able to achieve them.



Ben (40:19):

That's very modest of you because I think you are doing a lot to both improve science and meta science and speaking in public about it, which perhaps is not a stated ambition, but I think is a pretty big thing. My next question in this trio is: which of your beliefs are you least rational about? Another way he sometimes asks it is: what views do you hold almost irrationally, or like a religion?



Saloni (40:51):

Okay. So I think the main one is-- And I feel like this is not really a religious belief. But I think the motivation is what you're getting at. I generally think that it's very important to be honest and transparent in general just because-- My reasoning for that is that you don't know what the consequences of those facts are. So if you're trying to kind of lie or misrepresent something because you want someone to believe a certain version of the fact, you don't know what other consequences that might have, how other people might interpret it and misuse this kind of distortion and how that might affect other research and other views or policies. So I think it's better to just be very transparent and honest with what you're doing just because of that.



There's so many different ways to use this information that you can't really predict how it should be used in each situation. But at the same time, there are examples of when those things might be more effective if they were slightly distorted. If you kind of downplayed the risks of something that was in general very good, people might be better off in some ways. I think it's very hard to know those things and I think it's better to stick with a rule. Mainly, I feel like it prevents you from going down routes that you wouldn't want to.



Ben (42:44):

That's very fair. I'd interpret that as -- to use Tyler Cowen's term -- generally don't be Straussian. He's got this idea of reading between the lines. I think we saw that in the pandemic. A lot of people, even in good faith, knew a certain thing but said a certain other thing because they thought that's what was going to help them achieve the original thing, which is very meta. I see it actually at the moment on climate. I try and do quite a lot of work in this broad sustainability climate space and I don't quite know what to do about it. But the sort of median scientist or the median paper makes the point that we've come some way, but they also make the point that there's a really big gap between that and where we should be, could be, want to be.

And a lot within the climate community, some who know this, don't particularly want to talk about how far we've come, because they want to focus on a kind of fear or alarm raising, or they have a particular theory of change that they are using, which means that they intentionally-- They don't normally misinform, although I think sometimes they do. But they misinform by omission instead. So it's even sneakier. They've got a definite theory of change about it. But I'm a little bit like you. I worry that it first of all isn't correct, and second of all actually has bigger second and third order impacts. Like, well, you've said literally the world will end in 12 years and some people have taken you at those literal words, and actually the median scientist is not saying that at all. So 12 years will come around and then some people I'm sure will say, "Wow, look..." Like Nostradamus, the world didn't literally end like that. And it's like, "Well, that's not quite what we're saying."



Saloni (44:43):

It's hard because I feel like when you frame things as something to worry about, some people will take action because of that and other people completely turn off and they just say, "Well, we're doomed anyway. I might as well just not try to improve things." I think there's like a third important factor here as well that when you are truthful and accurate about something, it means that we can understand where exactly the problems still are. If you are being transparent about how much progress has been made in X and how much lack of progress has been made in Y, that's really important because it means that people know where they should focus their attention. Having the ability to compare things in a way that's neutral and not distorted is very important.



Ben (45:34):

I agree. I think it raises trust and I think trust is a very important sort of meta layer. But also, when you get it right, you can say, "Look, this is what I really don't know." That's very useful information: knowing when trusted people really don't know something, versus when they have a reasonable chance of being right or are reasonably informed about where things stand. Okay. The last one in these types of questions goes along the lines of: what is something esoteric that you do? So I guess this is, do you have an unusual habit, hobby, belief or action, something that is perhaps a little bit esoteric or weird?



Saloni (46:22):

I'm trying to think of one. I think the only surprising thing that people find about me is that I really love really silly game shows. I recently watched this show called "Is It Cake?" where there are these bakers who are trying to make these hyper realistic cakes and fool the judges into thinking they're real objects. I like silly things like that. I really love detective novels and crime shows and things like that. I like film noir, the 1940s and fifties genre of crime and detective films. That also seems odd because I think people often think that if you're in this kind of progress studies space, you prefer the modern version of anything that exists. There are often genres that have just faded out because of fashion. But when they existed they were really good. There are also different things that we focus on now. We have a much better ability to do very good cinematography, have lots of cool special effects and stuff like that. What I like about old movies is that because people didn't have that, the only way to really get people's attention or keep them entertained was by focusing on the dialogue and the acting. So those are things that I care about much more when I watch a film. And so it means that I enjoy those kinds of films much more even though they're much less developed in a lot of ways.



Ben (47:58):

I agree. I guess there's a thinker, Nassim Taleb, although it's not just his idea, who has called certain things that last "Lindy", which is the idea that things which last are quite important. He tends to go back to the ancient Greeks and beyond. But I think the same is true of film and books and culture. Books which have actually lasted from the 1800s are usually pretty good even if they're not in fashion. And obviously, movies and films are a little bit of a younger art, so those which have survived from the forties, fifties and sixties are actually really good. Mine is children's books. As I like to say, they shouldn't really be called children's books, because they're simply books that everyone can read, as opposed to books only for adults. I think we're in a kind of golden age of children's book writing. It's never been better. There are amazing stories and literature which children as well as adults can read. But people usually think: quite serious investor, podcaster, sustainability person, why are you reading children's books? But they're really wonderful.



Saloni (49:05):

I mean, children's movies are also quite underrated because people who make them have this very difficult task of keeping children entertained but also keeping their parents entertained. I sometimes watch kids’ shows and stuff and I notice that there are lots of silly jokes and very plain dialogue, but at the same time they also have these more sophisticated jokes that I assume they're just there because parents might be watching them. It's cool to see-- I think it's not the case that they're only targeted at kids.



Ben (49:39):

For sure. You see that in a bunch of Pixar type movies but you also see it in, for instance, some Japanese anime or the Studio Ghibli work, which actually doesn't always have adult in-jokes. It's just very beautifully done storytelling which can then appeal to everyone from the age of four to a hundred and beyond. I actually think in some ways that's the pinnacle of storytelling, when it appeals to everyone and not just certain groups. But yes, reasonable people can disagree on that. Okay. So I thought maybe we have a short section on overrated and underrated and then we'll come to the end. Although I think a couple of these might be obvious from where the conversation is going. But overrated or underrated, vaccines in general?



Saloni (50:31):

Very underrated, I think. People don't realize how much progress we've had because of vaccines. In many countries they're so widely used that we don't actually see the counterfactual. We don't see how much disease would be spreading without them. It's hard to see how they could be overrated. I think people often don't understand how vaccines have worked historically. They don't realize that many of these vaccines get better with taking several doses. They don't realize that they don't have super high efficacy, but that isn't necessarily a reason to down rate them, because if you have very high coverage, and also depending on how the vaccine works, you could reduce the transmission of the disease to a much greater extent. Say polio vaccines have a 60% efficacy, reducing the disease by that much in a single person who takes them. But if everyone takes them, the disease can't really transmit between people, and also because of how a vaccine works, it can reduce transmission much more than it reduces infection. So it prevents the spread of disease in a much bigger way. And I think that's also something that people tend to not realize and tend to not put into the calculation.
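A rough way to see this coverage point is the effective reproduction number under vaccination, in the simplest homogeneous-mixing approximation. The R0 value and the 90% efficacy against transmission below are assumptions for illustration, not real polio figures.

```python
# Sketch of why coverage can matter as much as individual efficacy.
# R0 and efficacy-against-transmission values are assumed, not real data.
def r_effective(r0, coverage, efficacy_vs_transmission):
    """Average onward infections once part of the population is vaccinated."""
    return r0 * (1 - coverage * efficacy_vs_transmission)

r0 = 5.0  # assumed basic reproduction number
for coverage in (0.0, 0.5, 0.8, 0.95):
    print(f"coverage {coverage:.0%}: R_eff = {r_effective(r0, coverage, 0.9):.2f}")

# A vaccine with modest efficacy against disease in a single person can
# still push R below 1 at high coverage, if it sharply cuts transmission.
```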



Ben (52:05):

And would you say the same on pharmaceutical drugs in general?



Saloni (52:13):

It's such a wide range. I think many drugs are very underrated. Especially in psychiatry, I think people tend to talk about psychiatry as an area where we haven't had that much progress, or as if things are just in the same place they were 50 or 60 years ago. And that's just not true. We have so many treatments that work. We have so many treatments now that focus much more on their safety profiles, where they're just as effective as they were before but they're much less toxic and have far fewer side effects. That's also really important because if people are going to take medications on a long term basis, you really don't want them to be developing other problems as well. So I think in general that's quite underrated. But at the same time, we also haven't had that much experimentation on new types of treatments that could be made, and so there's a lot of scope for development there.



Ben (53:16):

Great. Overrated or underrated then, doing a PhD?



Saloni (53:22):

Good question. I mean, it really depends on what you want out of it. For me, I haven't really been interested in academia from the beginning. I did my PhD because I wanted to develop the skills that I would get from a PhD, like being able to read the research and learning about various methods and so on. But I think many people-- If you're planning on being a full-time researcher in a specific field, then it totally makes sense to do a PhD, and it also kind of allows you to focus on this one big project for a few years in a way that you often can't when you're older, because you have other students to manage and grants to apply for and so on. But I think it really depends on the person. If that's not something you want, then it doesn't really make sense to do it. But there are also other benefits of doing a PhD just because of the social structure of how things work. PhDs tend to just put you on track for a higher salary, or they make it easier to migrate to certain countries and so on, and people don't factor those in. Not that things should be that way, but they are beneficial in other ways apart from academia.



Ben (54:44):

Yeah. The second order benefits from all of that. Women in the sciences?





Saloni (54:53):

I don't understand how underrated and overrated can apply.



Ben (54:59):

Well, I guess we could go: do you think there's significant underrepresentation and that's a problem? Or are you more kind of like, "Well, maybe it's not such a problem?"



Saloni (55:13):

I think underrated. I think there could be a lot more women doing science. I think what's difficult about science is we have different gender roles and expectations in terms of what we expect from careers and how much time people spend at home or with children. But also academia is very high pressure and you're expected to be continuously doing research for a very long time at a very high rate of publication and so on. And what that means is that even though there are lots of brilliant women who are great scientists, they often drop out along the career progression just because it's difficult to flexibly work on research and also care for their children or whatever they want to do. I think there needs to be a lot more flexibility in what we expect from people and how we structure career progression in academia just to allow for that. And for that reason, I think it's underrated and women are highly underrepresented, especially at the later career stages.



Ben (56:31):

Yeah, I agree. I'm quite distant from it, but I see it as generally a problem for science, because if there's a big pool of people who on average should be adding a lot and you're not accessing that talent pool, then your science is going to be significantly worse than it could be, for reasons which, to your point, you should be able to at least mitigate a lot more. We seem to have the same problem in economics as well. I mean, maybe you could roll that into science under the social science banner, and that seems to have flared up. Again, I'm quite distant from it but it seems quite problematic if you don't have these talent pools. I guess you could extend that to the whole of Africa to some degree as well. But I think it's obvious even within that. Great. So last couple on here. Overrated or underrated, misinformation?



Saloni (57:32):

I'm going to assume that means the impact of misinformation.



Ben (57:35):

I guess you could take it there. Impacts of misinformation or whether perhaps there isn't as much misinformation out there as might be deemed. That's another way you could take it.



Saloni (57:46):

Yeah. I think underrated. Maybe overrated in the sense that people don't just believe anything that they hear. But at the same time, there are memes and things that get shared much more widely and much more frequently than they did in the past just because we have different ways to communicate than we used to. Having all of these WhatsApp forwarding groups and things like that means that communication is very different from what it used to be. So I generally think it's underrated. I feel like it's something that we haven't yet learned how to tackle or moderate correctly and it's something that we're still kind of catching up with. I don't think that there needs to be a trade off again between how much we communicate and whether misinformation occurs, but I think we're just not very good at putting it into context and helping people understand things accurately.





Ben (58:55):

Yeah. I think I agree. That vaccine issue, some of it during the pandemic certainly seemed to be misinformation, or some people would say disinformation. And then we referred to the climate thing. I think there might be an issue there as well, maybe on both sides. So I think it is probably a bigger problem than some people think. But yeah, not exactly sure. The last one on this: writing a newsletter, which everyone is doing nowadays via Substack. So underrated or overrated endeavor?



Saloni (59:28):

I think underrated. I think Substack is really cool because blogs had this problem that people didn't really follow updates when they came out. So for example, I'd love somebody's blog but I'm not going to check their Twitter feed all the time to see if they have a new blog post. I don't really use RSS. And even RSS doesn't really give me the information at once. It just tells me that there's something new. The thing that's different about newsletters is that it just gives you a lot of information on a platform that you're already using like email. So it means that you can keep up with writers that you really like but also you don't have to be reading all of their stuff. You're just kind of having this feed in your inbox. And obviously some people just use it as a blog. So they just have their usual blog post type formats and then you don't have to subscribe. The fact that you can do both on these platforms I think is very cool.



Ben (01:00:30):

Great. And on that note, you should subscribe to Saloni's Substack which I think is Scientific Discovery. Is that what you named it in the end?



Saloni (01:00:37):

Yeah.

Ben (01:00:38):

Great. So last couple of questions. What are the projects you're working on now or future projects or things that you are excited about?



Saloni (01:00:48):

I'm currently working on this big-- I've just started this big project at Our World in Data on the history of pandemics. So what I'm looking at is essentially just putting together good estimates of the mortality of various pandemics in history. The reason for that is that currently the collections that you see online when you try to Google for how many deaths were caused by each pandemic come from different sources, but often they're also not even cited. So you have these infographics where it shows the size of each pandemic and then there's just no references at all or the references are very dated and you don't know where they come from. So what I'm trying to do is put together good estimates for each big pandemic that we know of and that will hopefully be useful for many people. So I'm currently just going through each one, one at a time, looking for good data sources and methods and putting together these numbers.



Ben (01:01:50):

That sounds really exciting. I was speaking to Mark Koyama the other week, who does economic history. He was looking at economic history crossed with the Black Death and there was a lot which came from that. But there are issues with the data and I don't think there's been so much study of other pandemics. So that could be a really exciting data project for a lot of people. And then the last question is, do you have any advice or thoughts for listeners out there? Maybe young, independent researchers thinking about wanting to make an impact in the world, or any other sort of life advice or thoughts that you might have?





Saloni (01:02:29):

I feel like it's so hard for me to give life advice because I often feel like a lot of things that I'm doing just kind of happened by chance. Like I just knew the right people, things were happening at the right time. I also have been doing lots of different things at the same time. That means that I've had many more opportunities than people would usually have. I think if you're someone who doesn't really know what they want to do in the future, doesn't have a particular interest, then my advice is to just keep your options open. Just try out different things and see how they go and take opportunities as they arise. Whereas if you really do know what you want to do, then focus on a good way to do that. Focus on what you're good at.



Ben (01:03:18):

That makes a lot of sense. But I guess that was partly your early career tactic: to do a lot of things, and then you get cross links between those or different networks. So although you seem to have fallen into it by accident under one reading, another reading is that you kind of made your own luck by trying out lots of things.



Saloni (01:03:38):

Yeah. I mean, I feel very fortunate. I do work hard, but it still feels so weird that I'm doing all these different things and it feels very much up to chance.



Ben (01:03:51):

Great. I saw you did some travel this year. You've been to India, you went to Italy. Other places that you still have on your bucket list that you really want to go to? I kind of assume that you think travel is underrated still. Or somewhere you've been in the last few years you thought was really brilliant and you'd want to go back?

Saloni (01:04:12):

So I was in Florence for a conference in Italy and it was amazing. I've never felt so happy to see old architecture but also how... The history of Florence is really interesting. I listened to loads of podcasts while I was there while just walking around the city. I really like places that are very walkable, where there's lots of history, and there's lots of culture. There are so many places that I haven't been to. I think lots of Europe I haven't really been to. I also want to go to those big national parks in America. I haven't been to America at all in my life except for once when I was two years old and I don't remember that obviously. So I would love to go sometime. I think I should have some time to go next year and so hopefully that'll be good.



Ben (01:05:15):

Great. Well on that note, Saloni, thank you very much.



Saloni (01:05:19):

Thank you.