Kanjun Qiu: AI, metascience, institutional knowledge, trauma models, creativity and dance | Podcast

January 17, 2023 Ben Yeoh

Kanjun is co-founder and CEO of Generally Intelligent, an AI research company. She works on metascience ideas, often with Michael Nielsen, a previous podcast guest. She's a VC investor and co-hosts her own podcast for Generally Intelligent. She is part of building the Neighborhood, which is an intergenerational campus in a square mile of central San Francisco. Generally Intelligent (as of the podcast date) are looking for great talent to work on AI.

We get a little nerdy on the podcast, but we cover AI thinking, fears of rogue AI, and the breakthroughs of chat AI. We discuss some of her latest ideas in metascience based on the work she has done with Michael Nielsen (previous podcast here) and what important questions we should be looking at.

We chat about the challenge of old institutions, the value of dance and creativity, and why her friends use “to kanjun” as a verb.

We cover her ideas on models of trauma and why EMDR (Eye Movement Desensitization and Reprocessing therapy) and cognitive therapies might work.

We discuss why dinosaurs didn’t develop more.

We chat around “what is meaning” and “what is the structure of knowledge”; the strengths and weaknesses of old institutions; culture vs knowledge vs history; and other confusing questions.

Kanjun gives her advice on how to think about dance (dance like you are moving through molasses).

Dance is inside of you. It just needs to be unlocked.

We play underrated/overrated on: having agency, city planning, the death of institutions, innovation agencies, high frequency trading, and diversity.

Kanjun reflects on how capitalism might want to be augmented and what excites her about AI and complex systems.

Kanjun asks me questions and I offer my critique of Effective Altruism. (Although Tyler Cowen more recently (link here, Dec 2022) and philosopher Larry Temkin (podcast link here, mid 2022) have deeper comments on this.)

This is a quirky long-form conversation on a range of fascinating topics.

Available wherever you get podcasts. Video above and transcript below.

PODCAST INFO

  • Apple Podcasts: https://apple.co/3gJTSuo

  • Spotify: https://sptfy.com/benyeoh

  • Anchor: https://anchor.fm/benjamin-yeoh

Transcript (only lightly edited)

I am super excited to be speaking to Kanjun Qiu. Kanjun is co-founder and CEO of Generally Intelligent, an AI research company. She works on metascience ideas, often with Michael Nielsen, a previous podcast guest. She's a VC investor and co-hosts her own podcast for Generally Intelligent. She's part of building the Neighborhood, which is an intergenerational campus in a square mile of central San Francisco. And she is all-round amazing. Kanjun, welcome.


Kanjun (00:30):

Thank you. I'm really excited to be here.


Ben (00:33):

Everyone is talking about OpenAI's chatbot, ChatGPT. The bot has some major flaws but can also produce some amazing writing and code. You can add on top of that the recent advances in AI art generation, also via OpenAI or Stable Diffusion, and then other more technical advances such as DeepMind's work on protein folding and the like. I'd be interested: what do you make of the current state of AI? Where do you see AI language models going, and where does Generally Intelligent and its own mission fit into this ecosystem?


Kanjun (01:10):

Yeah. It's a good time to be asking this question because I think when I first started Generally Intelligent with Josh, my co-founder, a few years ago, we definitely didn't expect things to go this fast. I mean, we expected things to go quite fast, but this is faster in terms of progress than we've ever seen. It's really exciting and a little bit scary. We can get into the safety stuff later. But I think ChatGPT is something we've roughly known would be coming for a while. What's really remarkable about it is that you change the interface from something freeform and unconstrained that humans have to prompt into a chat interface, and suddenly tons of people figure out how to do much more creative things.


Something I've been thinking about is like the Xerox PARC component of AI where we have all of this interesting development in capability. So one question is, "Okay, what are the interfaces that might allow humans to be able to get a lot more out of even existing models?" In terms of where models are going, I think-- At Generally Intelligent, we work on building general purpose agents that can be safely deployed in the real world. I think we kind of timed it well. We think we will have general purpose agents that can be hopefully safely deployed in the real world in the not too distant future.


So I can go into that: our focus really is about studying generalization and reinforcement learning, and how these models are able to generalize. And whether we can get a better, clearer theoretical understanding of how to construct these models in a way that is more predictive. Like, can we know ahead of time, given this training data, these parameters, this training procedure, this type of model, that you'll end up with a model that has these behaviors? So that really is the hope: that we can get to something that is more controllable. It's kind of like building bridges or nuclear power plants. Neither of these things is exactly safe, but we've made them safe because we understand how they work. So I think the same thing has to be true of neural networks.


Ben (03:38):

And so for a person outside the tech world or even outside the AI world, what do you think is most misunderstood about where we are with AI? So one impression I have is what you alluded to; it has gone a lot quicker than expected. Another element we can segue into the sort of rogue powerful AI, the so-called alignment problem on safety which I think the person in the street is generally not something which comes across their minds. And obviously there's a lot of strange and detailed technical elements to some of this. But I'd be interested in what you think is perhaps most misunderstood when you speak to the person in the street.


Kanjun (04:18):

I have a few. One is we tend to use words that we use to describe humans in order to describe AI. For example, the word 'understand.' So when I talk about whether or not a human understands something, I'm kind of referring to a mental process in their head, and I have some sense of, "It happens in my head too." So I have some model of what's going on in their head. A lot of people say, "Oh, these models are just statistical. They don't really understand anything." In some sense that's true. Certainly they're not understanding in the way that humans are understanding. They don't have the same mental process in their head as we have in ours. But I think it is maybe foolish to expect these models to have the same mental processes and to say, "Okay, unless they have the same mental processes as humans, they can't be intelligent or capable and be able to do things that humans can do." So I think just be careful about using human descriptors to describe these models as a way to say, "They can't do X because they're not like humans in X." What else is misunderstood?


I think on the flip side some people say, "Oh, we already have something that's general. Everything is solved." I think that's probably not true. It seems like as we scale up models we get a lot more capabilities for free, but I think if you ask any researcher in the field, there are a few results still needed to get to something that's a lot more general.


Ben (06:08):

That makes a lot of sense. I think that initial observation you make about how we humanize things has been quite a human trend for a long time. So we humanize trees, and we have built whole religions and cultures around that. So I can understand why we would do it with AI. But perhaps, in the same way it's a mistake to think that trees are like humans, it could be a similar type of thing here. How worried are you about the alignment problem and rogue AI? How real a thing is AI safety, and should we be working on it? There are some people who are dedicating their whole careers to it, and others who seem quite blasé and just say, "Look, this is seemingly a human construct." Is there a small but real risk, or is it overstated?


Kanjun (07:00):

I think the risk is that we don't know. None of us really know how these models are going to evolve or what kind of capabilities they're going to express. The issue with complex systems--and I would say a neural network is a complex system--is that they end up with emergent behavior. So will they end up with behavior where they're trying to deceive us at some point? Maybe. It's hard to say, "No, absolutely they're not going to do that." So you can construct the problem in ways where you can study whether or not they're likely to end up having that behavior. And I think it's really good for people to study that.


Ben (07:42):

I guess that makes sense, because I'm never quite sure how much we really understand the brain itself--maybe below 50%, maybe even as little as 10 or 20%. I'm going to segue into something I've been reading: some work that you've done, and something which isn't very well understood in the world, which I guess is around trauma or anxiety. I'm interested in neural networks and how the brain might be thought of in those terms, because we don't really understand it. I'm particularly interested in this technique called EMDR, which works on eye movement and essentially seems to have a reprogramming effect. It stands for Eye Movement Desensitization and Reprocessing therapy. If you look at controlled trials, it really seems to work for some people--not everyone, but actually pretty good results in trauma and things like that. As with quite a lot of neuroscience, we don't really know how it works. So I'd be interested in what you think AI or neural networks have perhaps told us about trauma or anxiety or the brain, and maybe vice versa, in terms of learnings or thoughts you have about how it might be helpful or not in this kind of area.


Kanjun (09:01):

Yeah. So get ready for the rabbit hole. I have a bunch of thoughts on trauma. I think maybe the core idea is--and I'll write an essay on this at some point--this idea that trauma is overfitting and therapy is actually a process of retraining. So let me give some examples. I've done a ton of therapy on myself, I think. I grew up in China. I moved here when I was six by myself; my parents had moved here when I was much younger. I had a lot of abandonment issues that resulted in all sorts of weird behaviors that were not well suited to my current environment. Those behaviors were rooted in fear. I think basically as young children we copy a lot of beliefs from our caretakers, and in particular we copy beliefs where our caretakers are scared.


The theory behind social copying like this is that copying is a really efficient pre-filter on potential beliefs or potential behaviors because presumably other people around you have those beliefs and behaviors because they're adaptive. That's kind of one theory why we copy so much. So we copy all of this into our brains and so now we have all these fears that our parents and grandparents and family have. I had internalized a lot of those fears. We call something trauma for really two reasons. One is something that seems really bad happened. That's kind of the medical term of trauma. But I want to use trauma to refer to this other phenomenon where we call it trauma when it doesn't seem like it's well suited to our daily life or our current life.


So post-traumatic stress disorder is a trauma and PTSD is fine if you're at war. You do actually want to be super jumpy and really careful about whether or not a shell is coming. But then you migrate back into the real world which is out of the data distribution of the environment you were just in and now you're out of distribution. So you've been overfit to the previous environment-- and I'm abusing some terms here, but now you're not generalizing to this new environment that you're in. I think this is true of most people. We don't actually generalize that well from childhood to adulthood. We freeze a lot of these old beliefs because they're adaptive and in an environment that is much more dangerous than our current environment that worked really well. But our current life, this environment is quite safe and having these fears is actually not very useful and causes a lot of maladaptive behavior. And so trauma is overfitting.

So therapy is retraining. I think what I've observed in therapy is that there are really three processes going on. There is the process of activating or accessing a network; a sub network of some sort. There's the process of giving it new training data from your current life and then there's the process of updating it. So actually having that memory reconsolidation process happen. It seems to me that all therapy techniques are good at one to three of these three elements of retraining. So EMDR as you mentioned is actually quite good at all three. I think that's part of why it's so effective. It's that with EMDR, you are flicking your eyes back and forth or you're doing this bilateral tapping where you're tapping different sides of your body. Somehow that seems to reduce fear response so that you can more easily access a network.


With EMDR, I would say the place where it's not as effective is in giving your mind new training data from your modern day. Your therapist actually has to prompt you to do that. So if you're working with a therapist who doesn't understand this frame of, "Okay, now you need to see how this old memory is not adaptive and you can update it"--if they're not giving you training data from your modern day--then you may not see updating. And so that might be why. I've done EMDR with a lot of people, and fairly consistently, it seems, if you do show them the training data and their modern life is good, then they'll update.


Ben (13:23):

This is fascinating to know and also that's a sort of learning that you get from understanding ideas of neural networks and training data. It might in some ways bring us closer to some animal behavior type model. But I hadn't heard it articulated that way and also makes sense of why some of these things may fail or not.


Kanjun (13:44):

I kind of have this whole theory of why therapies work or don't work, because I've experimented so much on myself and designed new techniques combining existing ones, because they're not all good at all three things. So cognitive behavioral therapy is fantastic at getting new training data from your modern day, because it's asking you things like, "What are the costs of what you're doing, or what are the costs of that belief?" So it's basically a data accumulation mechanism. You can acquire more data using CBT procedures. But it's not very good at accessing these very old beliefs. The kind of complaint about CBT is that it often works at the surface level; it's actually really hard to get to these deep old beliefs. And then it's pretty good at helping you update. It has a bunch of different methods to help you update.


Like, you write down the costs, you look at the list of costs, you implement habits into your life, et cetera. So it's particularly good for getting training data, but it's not good for access. Now you look at something like Internal Family Systems, where you're talking to different parts inside your body, and these different parts are kind of-- The way I think about it is they're like old memories that are stored. IFS is really good for access. The whole point is that you're accessing some of these parts and able to go into them, but it's not very good at getting training data from your modern day. So often people do IFS, they go into those parts, and if they're not prompted properly or they haven't prepared their conscious mind with new beliefs from today, then they'll just believe whatever that part is believing. They'll believe, "Oh, I'm three years old and I'm really scared and it's good to be scared." And you're like, "Well, actually, no. You're not three years old anymore." You kind of have to bring them out of that purposefully.


So combining IFS and CBT, or somatic therapy and CBT--these are effective methods. It makes a therapy method much more effective. I think this framework actually makes therapy debuggable. So if a person is for some reason not able to update in a situation that they're feeling frustrated by, you can say, "Okay, well, is the problem in access? Is it in training data? Or is it in updating?" There are techniques that you can combine to get all three to happen. Then to your point about neural networks and how that influences the mind: I think basically both neural networks and the mind are learning systems, and learning systems--like modeling systems--have some shared properties. Overfitting is one of them.
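
As a toy sketch of this "trauma is overfitting, therapy is retraining" analogy, here is what it might look like in code. Everything here is invented for illustration--the data, the little fear model, and the mapping of access / new training data / update onto code steps--so treat it as a sketch of the analogy, not a model of therapy.

```python
# Toy sketch: "trauma is overfitting, therapy is retraining".
# All data and parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, x, y, lr, epochs):
    # Plain gradient descent on logistic log-loss.
    for _ in range(epochs):
        p = sigmoid(w * x + b)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

# "Childhood" environment: a cue (x=1) really did predict danger (y=1).
x_old = rng.integers(0, 2, 500)
y_old = (x_old == 1).astype(float)

# Step 1, "access": activate the old belief network (here, its weights).
w, b = train(0.0, 0.0, x_old, y_old, lr=1.0, epochs=200)  # overfit to the past
print("fear response to cue, before:", sigmoid(w + b))    # high, near 1

# "Modern" environment: the same cue is now almost always safe.
x_new = rng.integers(0, 2, 500)
y_new = ((x_new == 1) & (rng.random(500) < 0.02)).astype(float)

# Steps 2 and 3, "new training data" + "update": retrain on modern life.
# The small learning rate mimics the conservative updating Kanjun mentions.
w, b = train(w, b, x_new, y_new, lr=0.05, epochs=200)
print("fear response to cue, after: ", sigmoid(w + b))    # much lower
```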


Ben (16:33):

That's amazing. So you definitely have to write that as an essay. It sounds like you should actually have a whole new startup investigating that. You may not get to AGI, but you'd have solved the therapy problem, which seems to me almost as great.


Kanjun (16:50):

Actually, I think an important part of safety is figuring out what values to align to. So understanding humans is an important part of that.


Ben (16:59):

Exactly. It leads me to think that... Do you think animals go through trauma then? Some form?



Kanjun (17:06):

Almost certainly. Yeah. So you'll see some dogs that were abused when they were young by a previous owner, and now they're really jumpy even though they're with a new owner. So these learning systems update very conservatively in humans and living creatures, which makes sense because the real world is fairly unforgiving. If you update too quickly, then you might die.


Ben (17:32):

Sure. So this is the paired learning response. It's really strong. Okay. So a slight left field pivot then is, do you think dinosaurs felt trauma?


Kanjun (17:44):

Almost certainly. Again, it's a learning system. Unless dinosaurs never learned anything new, that's plausible, but unlikely.


Ben (17:56):

That's quite a good one, because you posed a question on your website to which I guess we have no answer, which is: dinosaurs existed for 165 million years or so, give or take, and they did not seem to advance to the levels that we have. But obviously they seem to have felt trauma, and they have some of these learning mechanisms. I guess we can extend this to other animals which have been around for a long time. Maybe you could do mammals like rats, or you could do insects as well. What's your current thinking on this?


Kanjun (18:30):

So the question is basically-- I was reading a lot about dinosaurs and I was like, "Why is it that for hundreds of millions of years we had these creatures that evolved a little bit but didn't seem to improve or change dramatically? And then, in the last 1 or 2 million years on earth, we have had this crazy exponential change in the nature of the intelligence of animals on this planet. Is there something that causes this kind of change in evolution?" My current best hypothesis is that environmental constraints force the evolution of new skills or capabilities. So basically you want to be in an environment where intelligence is rewarded. If the environment is too abundant-- at that time there was a lot of oxygen in the air; maybe it was a very abundant environment in terms of food. If the environment's too abundant, there's no benefit to being smarter. And here now today, the environment's not as abundant-- I think; we think. I'm not sure, no one knows. But if that were true, then constraints, maybe.


Ben (19:46):

Yeah. Okay. I like that theory. I've got one slightly downstream from that, which is that dinosaurs didn't have hands as much. I think the second thing which comes alongside is language, and that didn't develop. But why didn't those things develop? There are some theories, and some evidence, that the human animal was forced--for instance in ice ages or when there was scarcity--to develop these things to survive. Therefore that would happen with a bit of luck. So they kind of intertwine. I guess the follow-on from that, though, is I was pondering: when you go back just a little bit in time, on this timescale, to the Romans and the Greeks, why did they not invent more of the advances we have today? They certainly seem to have had some of the capabilities--and they did invent some things, like Roman cement, which we still can't seem to copy and which seems to be a really good material.


But they didn't invent some of those things. You can see this is going to segue into metascience at some point, because they also didn't invent some things which didn't really need other types of technology. The one I always think about is the randomized control trial. You test one arm and you test another arm and you compare them. That didn't need any extra science, and they certainly had, seemingly, the capabilities for it. In fact, you could have gone back 2000 years earlier and they would've had the capabilities for it. But it didn't develop--or maybe it did and it just didn't hold, which is also kind of interesting. Have you ever thought about that? Why did Greek and...


Kanjun (21:28):

Yeah. I thought about it. Quite a lot.


Ben (21:31):

Is that essentially a meta science question? Because some of these ideas like randomized control trial or how they did it wasn't holding and so it didn't transmit.




Kanjun (21:41):

Right. So I'll talk about randomized control trials first and then we can go to the tie to evolution. So I think randomized control trials--the reason we have them now is because we have the statistics, the mathematical foundation, to be able to evaluate these two groups. Are they actually the same or not the same? I tried this trial, I randomized--one is the control group and one is the test group. Did the test group outperform the control group? There are a lot of statistical techniques that are needed to really answer that question. So I could see that maybe they would have done randomized control trials for really small effects a long time ago, but maybe it's not very deterministic: for some people in this group it works, and for some people in the other group it works. Then they might throw up their arms and be like, "I don't know what to conclude about this." So I could see that the reason RCTs didn't exist before is because we didn't have the mathematical foundation to be able to look at the results, say something about them, and get information out of them. And I think that's true.
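
As a minimal sketch of the statistical machinery Kanjun is pointing at, here is the kind of test a modern trial would run; the numbers are invented, and the point is only that without tools like this you cannot tell a small real effect from noise.

```python
# Minimal sketch: the statistics needed to read an RCT's result.
# Invented data: a small true effect buried in individual variation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50.0, scale=10.0, size=200)
treated = rng.normal(loc=53.0, scale=10.0, size=200)

# Without this machinery, an observer just sees "it worked for some
# people in each group" and can't conclude much. Welch's t-test asks:
# how surprising is this gap if the treatment did nothing?
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"mean difference: {treated.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p -> gap unlikely to be chance
```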


Ben (22:50):

Geometry I find a lot harder, but maybe that's just me. Euclid invented geometry, which seems to be quite a bit harder than the stats behind certain RCTs. But it's true, they hadn't seemingly invented the stats. But I'm not sure-- like, the ability to see that.


Kanjun (23:11):

Trigonometry doesn't build on so many other fields of mathematics, whereas statistics does. But coming back to the question of, "Why did the Romans and Greeks--why didn't their civilization accumulate technologically the way ours does?" I think it actually comes back to this process of variation and selection, which is true in evolution and also true in science. So in evolution, we were just talking about constraints. I think the reason why constraints are interesting is because in evolution you're doing a lot of variation, and then what the constraints do is enable selection. The tougher the constraints are, the narrower the selection is. In science at that time, in the Roman and Greek era, maybe the way that people thought about knowledge is that it came more from authorities.


There was not this idea of evidence being a thing. It was not until the Royal Society in the 1600s, I think, that they had this motto, nullius in verba, which means take no one's word for it--which implies that before that, people took other people's word for it. So people weren't varying and evaluating ideas, and they weren't able to test and select new ideas to adopt as a culture. The church had some top-down ideas--many of which were wrong--and no one was able to change them. But now we have this process of science, which is quite remarkable. In science, if any grad student is more correct than a Nobel Prize winner, the field actually acknowledges that they're more correct. So it's quite a remarkable thing to have ideas change not from the establishment, the authority, but from the outside; from people with no authority at all.


Ben (25:22):

It can take them some time but it does eventually happen. I think about ulcers and how they figured that out, but they weren't believed for some time. But eventually the science does seem to win out which like you mentioned is a kind of remarkable thing.


Kanjun (25:40):

Actually, in some fields it happens really quickly. Like in physics, there's this idea of superconductivity, where Brian Josephson was this 22 year old who had published work on superconductivity. John Bardeen, who's the only person to have won two Nobel Prizes in physics, said, "No, you're totally wrong." People pretty quickly realized Bardeen was wrong and Josephson, the 22 year old, was right. And the physics community pretty quickly, I think, came to this conclusion. So it's not true in all communities--some rely a little bit more on authority--but it happens sometimes.


Ben (26:21):

Isn't that true about-- was it Linus Pauling and DNA as well?


Kanjun (26:24):

Right. Pauling was wrong.


Ben (26:27):

He was wrong and he admitted it quite quickly, and so did the community, saying, "Look, this is obviously right." So I think that's true as well. You speaking about the fields of mathematics in ancient Greece and all of these other fields leads me to something I've just been reading in the last couple of days, and I believe you were part of this conversation, as, actually, was ChatGPT. And that was Michael Nielsen's recent notes on the idea of fields or communities as a unit of advancing progress, or of thinking about progress. I thought this was a really interesting idea. This reflects on the fact that you said stats has to build on quite a lot of other fields, whereas some fields may develop, not quite out of nowhere, but without having to build on so many other things.


I think of this in creative literature and performing arts fields. To what extent are you building on what came before, and to what extent do you take from a whole sideways field and make a kind of new field of it? It seems to me that is actually one interesting way of thinking about how advanced we are: any individual human will now find it very difficult to have even a surface knowledge of all of the fields, and certainly not an in-depth knowledge of more than 5, 10 or 20 of them--which is something an AI could start to do. So I was interested in what you think about fields as a unit, and whether an AI or some development putting these fields together to create new fields is maybe one of the key questions we should look at.


Kanjun (28:12):

Yeah. I'm actually quite confused about the-- I think the underlying question here is kind of, "What is the structure of knowledge?" or something like that. There's this one model of knowledge as being like a tower block: you've got some blocks at the bottom and then you put some... This is not a good analogy; there's maybe a better one. Like a computer program where you're calling a lot of previous functions. So you have a function that calls another function, and that function calls an older function, and that function calls an older function, and each of those functions is like a discovery. In that model, by definition, the outermost function is dependent on all of the discoveries that came before it and can't be simplified. It has to call all of these preexisting functions. That's one model, but that model is not really true.


It often feels like when an idea is first developed in a new field, the person who developed it actually doesn't understand it as well as the people who follow them. So I think there was a Feynman quote--or maybe a Hamming quote--that talked about how Einstein didn't understand his own ideas as well as the people who came after him. There's this kind of reconsolidation process where the understanding is simplified, and it may be simplified even more. So it's no longer true that this function is dependent on all the previous discoveries. In fact, you might end up with a fresh function that doesn't depend on any previous discoveries and that still captures all of the information. So it's actually not clear to me that humans can't--in some fields at least--understand everything. Maybe the reason we can't understand it yet is because it's not yet simplified to its final form.
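
Kanjun's computer-program analogy, written out literally as a minimal sketch; every function name here is hypothetical. The outermost "discovery" first depends on the whole chain below it, and the reconsolidated version captures the same behavior with no dependencies at all.

```python
# Knowledge as a chain of function calls: each "discovery" depends on
# earlier ones. All names are hypothetical.

def discovery_a(x):        # an early, foundational result
    return x + 1

def discovery_b(x):        # builds directly on discovery_a
    return discovery_a(x) * 2

def frontier_idea(x):      # the outermost idea, dependent on the whole chain
    return discovery_b(x) + 3

# The "reconsolidation" Kanjun describes: later workers simplify the idea
# until it no longer depends on the chain at all, yet gives the same answers.
def frontier_idea_simplified(x):
    return 2 * x + 5

assert all(frontier_idea(x) == frontier_idea_simplified(x) for x in range(100))
```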


Ben (30:18):

I guess that might be true, because I'm going to segue from the creative arts as well, and it does seem to be true there. Although it's argued about, the very simplified precis is that you might have a play or a poem or whatever it is, and the audience or later artists make much more of that creative work than the original creator. The original creator obviously has their view and vision. But actually those who come after make it even more than what it was. It's out of the original creator's hands, particularly once time has passed-- I guess you can think about this with Shakespeare today. Obviously he had a view, and we don't quite know what his view was on all of his work. But it has been taken to another level by so many more artists and creators, and I think it is arguably greater today than it was in his own time because of that. And that feels--although it's still argued about--quite well established within art: that your creation is not just your own, that the very greatest creations become bigger than what the original creator thinks, and that they might be interpreted more deeply than even the original creator can conceive. Therefore, you actually aim [inaudible 31:30] who at the peak of their art will often slightly step away from commenting on what they think their art is, because they realize that their answer may not be the best answer and might actually narrow the interpretation by imposing this idea that the author knows best.


Kanjun (31:50):

That's really interesting. I'm really curious about this process. So here's what's happening. In your view, what's happening-- let's take Shakespeare. These people who have built upon it, is it that they've kind of added additional meaning to what-- They're interpreting it in different ways. And so a single sentence if you read it yourself might only have a little bit of meaning, but if you read it in the context of everyone else's interpretations, it has a lot more meaning. Is it that they've added more meaning? Or is it that the people who come after have found simplifying patterns in his work that mimic this reconsolidation process in science that I was talking about of ideas. What is it that's making it richer?


Ben (32:36):

So actually, for the most complex work it's both elements. And I would actually add a third, to segue into something like cave art. So with cave art, we find new... Obviously, we don't actually even know the original creators of that art, or whether they even viewed it as art. But obviously we find patterns now. So young children put their hands in the mud or in wet cement, and so we riff on the now of the culture, and then obviously we add our own meaning into what we see in the now. Then, because so many people have commented on cave art and made their own mud art and, I guess, a kind of meta art sense from that, it's also made the whole field richer. So you've definitely got those elements added together. Then you've got people who draw on those various things to create more meta pieces.


So it definitely builds that way as well as wide. I'm only thinking out loud. I'm sure there must be more, particularly in something as rich as Shakespeare, because it then becomes so pivotal to other things and actually might segue into things like culture. So it's now a key element in how British people think about themselves. If it wasn't for Shakespeare, we wouldn't think the way we do, I'm pretty sure. Obviously that would be arguable; there's probably a PhD in that. And there are even words and catchphrases from Shakespeare's plays which have now gone into different nations' vernaculars, which didn't exist beforehand but have also made alive something which was already there--crystallized it, potentially, in a simplifying form, or in some form that makes people go, "Yes, that is what that was about and that's what that phrase means."


Kanjun (34:28):

That's really interesting. Yeah. This is another thing I'm quite confused about, which is, "What is meaning?" I think an interesting thought experiment that my co-founder, Josh, gave is: "Let's look up at the sky, pick any star, and imagine that a hundred thousand years from now, humans or some descendants of humans live on that star." Now, we have two scenarios. In one scenario, they have no memory of Earth or where they came from. They don't know how they ended up on that star, but they're there and they don't know anything about their history. In the other scenario, they look up at the night sky and point at Earth and say, "Hey..." I mean, maybe they can't see Earth, but in its general direction. Like, "Hey, that's where we came from." There's some merchant on that star who's like, "I know that my distant, distant ancestors came from Earth." In one of those scenarios it feels like there's a lot more meaning. That merchant feels a much deeper sense of meaning than in the other scenario, where they're kind of disconnected from where they came from. I don't really understand why. Somehow meaning is tied to this sense of context and history, of where things came from and why they are the way they are. But I don't know what it is.


Ben (35:50):

I can't answer the why. If I could, I would've made some genius breakthrough in the human condition, I guess. But it is definitely true. So you think about money or cultural symbolism or particularly art. But take your example of the star. Say I said, "Oh, I went onto the internet the other day and I sold Kanjun that star via one of these internet star naming things, and 10 million descendants later, Kanjun's descendants arrived on the star which supposedly she owned." Again, you would've imbued that with so much more meaning. Except in today's day and age, what does it mean, selling someone a piece of paper saying, "You sponsored such and such a star"? There is no ownership in our legal culture of what that star might be.


That's actually why I am not so worried about what some people are worried about in terms of some aspects of AI generated art, because--I don't know what the proportion is, but for some significant portion of art, the value is in the so-called eye of the beholder. So the meaning we give it, and its time and place, and that's part of its value. Obviously part of the value is in the techniques and everything which went into it, which is obviously going to be different in AI art. But there's definitely this part that humans bring; this kind of human value part, which is not seemingly part of the natural laws of physics, but seems to be part of the natural laws of humans.


Kanjun (37:21):

Yeah.


Ben (37:22):

That leads me to think, then: of all of the things you are confused about or interested in, what do you think are the most important questions in science or metascience that we should be seeking to understand at the moment?


Kanjun (37:38):

We wrote a whole essay on that.


Ben (37:43):

Glad you noticed.


Kanjun (37:47):

Yeah. Talking about the essay, I want to come back to this idea of meaning later because maybe we can riff on it.


Ben (37:52):

Yeah. Essay back to meaning. Something might...


Kanjun (37:57):

So talking about the metascience essay, originally some of the questions that motivated us were questions like, "Why is it that a new funder says they want to do high-reward research and yet they end up doing--kind of funding--relatively low-risk incremental work? What is causing that?" Clearly there's an intention to do something different, and yet every new funder kind of gets sucked back into the existing ecosystem. They don't diverge very much from the existing processes and norms and results. So why? That's so weird, right? Isn't it weird that you try to do something totally different and you end up being exactly the same? In many fields that's not true. In art, if I try to do something totally different, presumably at some point I'll end up with something totally different.


So that really was the beginning of us trying to understand, "What is going on here? Why is it that we can't fund things in totally different ways?" There are lots of different kinds of ideas expressed in the essay, but one is the idea that the space of social processes of science is very underexplored, in that most funders have very similar social processes to existing funders. Even if you start a new funder, you might still have peer review; you still don't anonymize the names or anything. You approve grants based on how good other people think those grants are. You approve them based on how successful they seem like they might be.


Michael in his head already had a giant list, which I started to add to, of potentially totally different programs that you could run as a funder. For example, high variance funding: you only fund things where the peer reviewers really disagree with each other. Or funding an open source institute--this is something that typically wouldn't be considered science, but if you're a funder, you could fund a whole institute that's working on open source projects, and that is important infrastructure for science. I think at some point we talked about a traveling institute of scientists where you go around the world... A young genius immigration program, where you find people across the world who seem like they might be really capable in some particular way and immigrate them into the country. So it was like, "This is infinite." We just kept going. And so we were like, "Okay, well, that's interesting." Eventually we found a frame for this, which is the beginning of the essay: let's say you come across some aliens and they do science. Would those aliens have discovered mathematics? Seems likely. Would they have discovered atoms? Unless we're totally wrong about atoms, seems likely. But would they have PhD programs or tenure or...?
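
As a toy sketch of the "high variance funding" example just mentioned: a funder that ranks proposals by reviewer disagreement rather than by average score. The proposals and scores below are invented for illustration.

```python
# Toy "high variance funding" rule: fund where reviewers most disagree,
# instead of where the average score is highest. Scores are invented.
import statistics

reviews = {                     # proposal -> peer review scores (1-10)
    "consensus-safe":    [7, 7, 8, 7],
    "divisive-moonshot": [2, 9, 10, 1],
    "middling":          [5, 6, 5, 6],
}

# A conventional funder picks the best mean score.
by_mean = max(reviews, key=lambda p: statistics.mean(reviews[p]))

# A high-variance funder picks the largest disagreement.
by_disagreement = max(reviews, key=lambda p: statistics.stdev(reviews[p]))

print("conventional pick: ", by_mean)          # consensus-safe
print("high-variance pick:", by_disagreement)  # divisive-moonshot
```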

Ben (41:21):

And have randomized controlled trials.


Kanjun (41:23):

They might have randomized controlled trials, or we may discover that actually... One thing about RCTs... Sorry, go ahead.


Ben (41:31):

We haven't discovered it.


Kanjun (41:34):

Well, one thing about RCTs that's interesting is that we only do RCTs in certain situations. We don't do RCTs in physics, because we can understand mechanistically why the phenomenon is happening. We only do RCTs in situations that are so complex we've kind of given up on explaining the result mechanistically. And so we just divide into two different camps and see which one is more probabilistically likely, which to me... Maybe it is a method that we'll always have, because there are some things that are always beyond our understanding. But to me it's very unsatisfying.


Ben (42:16):

Yeah. I hadn't heard it described like that, but you're right. Complex sciences, biological sciences, anything to do with humans--so social sciences--that's true. I will try and ask: what do you think is the best idea? Or you can do both--what was the worst idea in your metascience paper? Because maybe that's the one we should go for, on the grounds that actually it should be a lottery and you don't know. You might say there was also the failure audit, 10 year insurance, you had your traveling scientists, you had long-short prizes, the anti-portfolio, the interdisciplinary institute, you had the funder as detector, and a variety more. What's the one which you think was probably Michael going, "Oh, that's just really awful, but we should keep it in there because you never know. It might be right."? Or otherwise, "This is mine and it should be right at the top, because it's the best idea in there." What do you think?



Kanjun (43:10):

Maybe I'll answer a slightly different question which is like, "What are the most interesting ideas?" And then...


Ben (43:19):

Or you could go, what's the most high risk idea? What's the idea most unlikely that anyone's going to go for, but probably is the one they want to try?


Kanjun (43:27):

Well, all of the ideas are a little bit out there. I think the most interesting ideas are a little bit less about the funding programs because those programs are just given as examples of underlying generating functions. What's much more interesting is looking at the generating functions. So we have this general idea of latent potential. As a funder you're assuming that you-- When you allocate money, you can give it to people where there's latent potential; where their potential is not yet unlocked by another funder or another way of doing discovery. So your goal should be to find latent potential. I think this is actually a relatively new idea. Most funders don't think about their job as finding latent potential. Just like in finance, your job is finding edge.


Also as a funder, your job is finding edge. You shouldn't be allocating more money--and it's not very effective, I guess--just where everyone else is allocating money. So this idea of latent potential is kind of the core idea. Then we have all of these-- The way we got to a lot of this part of the essay is that we were trying to understand, "What are different ways of excavating latent potential? How do you find it?" So the anti-portfolio idea, for example, came from Bessemer Venture Partners, which has on their website a list of all of the great ideas that they missed. If a funder had a list of all the great ideas that they missed--the theory about latent potential here is that there's something in the incentives or motivation of the program manager, or the people making the funding decisions, that maybe is overly risk averse or doesn't have a feedback loop, and there's latent potential in that feedback loop and in shifting that incentive structure.


I think maybe another idea, that is on the list or that we don't have, is a Nobel Prize for funders. It gets at a similar generator of latent potential, which is changing the way that the program manager behaves. There's a totally different way of thinking about latent potential, which is around how you might construct new fields, or how we might 10x the rate of field production. That is a question that generates a lot of potential program ideas. The communities idea is maybe one of them. I used to run this big group house, and one idea I had was, "What if you started lots of group houses and funded lots of group houses and just got interesting people to live in them?" You end up with these interesting communities, and maybe that would increase the rate of field production. Or another thing: maybe there's a field shutdown process, where at some point a field feels too incremental and you have to shuffle people into a different field. So you get cross-disciplinary work as well as this kind of death of fields, and maybe that would result in new fields. So there are lots of ideas: if you think about this question, you'll end up with lots of different ones.


Ben (46:48):

Well, people should definitely read the essay--or rather, as I like to say, open source book.


Kanjun (46:53):

Right.


Ben (46:54):

I'm going to riff on two of those things. So one you mentioned latterly, and we might come back to the Neighborhood idea as well. I think that's really interesting, because surely 10, 20, 30, 50, maybe even 80% of the value in universities is the social capital of bringing people together. Why not recreate that and see what happens? The other one you mentioned is investing edge, because that's really interesting in financial markets. There is this school of thought where you should try and identify where you have skill, and then play where you have skill and don't play where you don't have skill. This is true of a lot of sports and other games. Therefore, even if you are lucky, it will have been viewed as a mistake if you played where you didn't have skill. And actually, if you won, that is also judged to be a mistake. Whereas if you have skill and you played a good bet and it didn't go your way--which in markets happens a lot--that is actually the correct process, because you played where you had skill; where you had edge, the edge just didn't come off. I think this is very true of funders. They probably have idiosyncratic skill--because it's a social science--and they should play to that, and not where they feel they don't. I think that's a really...




Kanjun (48:14):

Every funder is started by a different person, and that person has different networks, et cetera. So I think their edge could be different. They could end up investing in really different programs, but they don't. Another idea that we don't talk about that much in the essay is this idea of institutional antibodies. So another reason why--and this is the entire second part of the essay; it's about bottlenecks to change--"Why does change not happen? We have all these program ideas. We just came up with a hundred of them off the top of our heads. Why doesn't anyone do them? That's so strange." That really got into the second question of, "Okay, well, actually there are a lot of bottlenecks, and one is institutional antibodies, where you try to do something different and existing institutions actually lambaste you for it." They really don't like that. There are reasons for it. One is maybe they feel threatened. Maybe Harvard feels threatened by a new funding institution. I think Harvard said something really negative about the Thiel Fellowship when it first came out, because it was a threat. And by definition, if someone's finding edge somewhere else and it's a little bit competitive, it will be somewhat of a threat to some existing powerful institution.


Ben (49:29):

Don't we call those types of things-- So you call them antibodies, but it strikes me that what we're really talking about, or what a layman might think about, is culture, or at least part of that culture. There's something interesting about institutions which have a very long history, about the culture that they develop. I know you're very interested in institutions and culture broadly. So I guess my question is: is one of the problems with old institutions the way that their culture has ossified, whether it's competition or not? And therefore you need new institutions, or maybe new arms of old institutions. And maybe you could look at this through the lens of--or you might want to comment on--the culture that you are trying to build at your firm, or in startups in general, because that seems to be potentially one of the competitive edges that startups have: they don't have to deal with a legacy culture, whatever that might be. Except there's this observation that old institutions seemingly have this problem, this bottleneck, which you see in funders.


You can see it in a lot of old institutions, and in fact in startup language. Jeff Bezos calls companies on this: there are two kinds of companies. He wants to be a day one company, and he says if you're a day two company, you're dead. Dead is probably slightly overused, but it's saying that your culture and everything has ossified, and you can't do all of these things that you want to do for innovation as a startup. But it strikes me that that seems to be a very human thing. On the other hand, cultural and institutional knowledge, when you take the long cycle of history, has been incredibly valuable for making progress, at least up until this point. So I'm not entirely sure: maybe something about it has been preserved for very good reasons. So yeah, your thoughts on culture.


Kanjun (51:18):

A lot of thoughts on this. Okay, there are a few categories of thoughts. One is this idea of old institutions overfitting. Second is, I think the phenomenon of institutional antibodies speaks to something broader than culture. The way I think about culture is that culture is a set of beliefs held by the people in that culture, and it is also reinforced by a set of systems. So kind of just going back to the point of institutions-- I think there was a third thing that you talked about around-- I forgot. Maybe we should cut out the...


Ben (52:11):

The history of culture also being important for traditional knowledge as well on that flip side. But knowledge over time.


Kanjun (52:20):

That's right. So knowledge versus culture and history. So I'll talk about the institutional antibodies thing first. I think that goes beyond just the beliefs. I think that's actually more about the competitive system dynamic. So it's about the broader ecosystem of institutions, and what happens when there's a competitive dynamic in general. Whenever there's a competitive dynamic, you're kind of taking away the power of an existing institution, and that institution is going to retaliate, because they're old, and part of why they're old and still exist is because they have some kind of power. So I think that's actually more about the institution as an organism, and that organism being threatened, and less about the culture of that institution in particular.


I think there are some environments, like the startup environment--and we were inspired a lot by the startup environment--where threats happen all the time, and the existing institutions retaliate, but it doesn't matter that much. There are some mechanisms in place, like antitrust, that prevent existing institutions from retaliating too much. So some ecosystems of institutions, like the startup ecosystem, are relatively healthy, because new institutions form all the time, old ones die when they need to die and don't die when they don't need to, and you have this outside party that enforces antitrust laws. So that's addressing how institutional antibodies are broader than the culture of the institution.

But now if we talk about institutional culture: I think your question is something like, "Why is it that existing institutions can't implement new ideas?" Like Harvard--Harvard can't implement the Thiel Fellowship. I think it's because these two institutions have beliefs that are directly in conflict with each other. So that's one reason why the existing institution can't do it. The Thiel Fellowship says, "University or college is not useful for some people." And Harvard says, "I am a college, you should come. It's clearly useful for you." They have core underlying beliefs that just can't coexist. So in this case, it's really hard for those two beliefs to exist under the same institution, unless you have two really different cultures.


Ben (55:04):

I get that. But Harvard, I guess, could do a failure audit or a ten year insurance or high variance funding, which also seem to be bottlenecked in this culture. But I can see that sometimes you just can't take it on, because it's not part of your set of beliefs.


Kanjun (55:19):

Yeah. I think Harvard could do a lot of these other things. We'd love for them to do those things. We just feel like people have not been very imaginative in the types of things that they could try.


Ben (55:30):

Why do you think we've lacked imagination?


Kanjun (55:34):

I think it's not very safe.


Ben (55:38):

I guess that harks back to culture. I understand your point about antibodies maybe being wider, but I do wonder about this culture thing. You're not safe to have maverick ideas--maybe they're maverick, maybe they're not even maverick. I'm jumping around, but I guess that leads me to think: how do you create a positive maverick, then, if you've got institutional antibodies as a problem? Maybe you've also got culture as a problem, although maybe you've got these new institutions. And you might-- I cut you off before thinking about the knowledge question as well. We can return to that one too.

Kanjun (56:14):

That's okay. We'll come back to that one. So this problem of safety I think is really interesting, both from an institutional perspective and an individual perspective. So humans: if we feel unsafe as individuals, we generally won't take risks. There's some survival instinct that prevents us from doing that. Institutions: I think a lot of funders don't feel safe, because their funding is coming from some outside party. It's coming from the government, or it's coming from someone who's wealthy, or someone else who wants to see success. They want to see that their funding is going to a good place. So if I ran a funding institution, I would need to be like, "Okay, I need to be able to show success, because I'm not the source of this funding." But you see funders run by high-net-worth individuals--for example, the Arnold Foundation, which funded Brian Nosek, who started the Center for Open Science, which we also talk about in the second part of our essay. Brian was rejected by the NSF, the NIH--literally everyone, I guess--for years, until the Arnold Foundation found him.


And I think part of why Arnold Foundation was able to do something strange or fund something strange is because John Arnold was involved in it and he was the supplier of all the money so he felt very safe. When I interact with funders of different types where the source of their money is held by the decision makers versus not held by the decision makers, they actually end up behaving in really different ways. When a funder's money is held by the decision maker, then the funder takes a lot more risk because they feel safer. They can spend their own money however they want. They have no one to report to.


Ben (58:10):

That's really interesting. So, slightly adjacent to that: it suggests you might believe in all of the work done on psychological safety, which is work on teams--Google sponsored research on it, building on Amy Edmondson's work. So this idea that when you're feeling safe, you will point out bad decisions and also venture riskier decisions. And when you're not, you don't point out something which you think is obviously bad, because you're worried it sounds stupid, and you also don't take more risks on new ideas, because you're just not feeling safe in your team.


Kanjun (58:42):

Yeah. I think psychological safety is really useful in situations where you want higher variance-- where you want people to take risks. And you may actually not want it in situations where you don't want people to take risks.


Ben (58:54):

Maybe. Although they also point out stupid things you're doing, so it's not just the risk side. That adds to your point: a team has been doing something for ten years, and a new member joins and thinks, "Well, they've been doing it for ten years," so even when they can see an obviously better way, they don't suggest it. Another thing, then: I observe a strong streak of creativity in your work and life. There's setting up the Neighborhood...


Kanjun (59:22):

Sorry, we haven't talked about knowledge yet.


Ben (59:25):

Okay. Let's go on knowledge and then we'll go-- Maybe knowledge into creativity is a good way of going. So holding knowledge...


Kanjun (59:30):

Okay. Yeah, we can do that.


Ben (59:32):

Holding knowledge on an institutional level. Or let's hold onto knowledge and then see where this goes as well. So: a strong streak of creativity in your life. Setting up the Neighborhood-- you obviously had those feelings before. I really wanted to talk about all of the dancing as well. The dancing comes in because I was going to loop back to somewhere early in the conversation, about meaning and value, because I think dancing has that. Actually, dancing might be quite a good example, because there's also knowledge held in dances. The way that we dance, and that talent, is developed through time. And, to segue, sometimes institutions hold onto that knowledge, although sometimes it's groups and community groups. I think of things like Brazilian dance, or Capoeira, which is held in a community and then has come out. Maybe not exactly a dance, though you could call it that. So how important is creativity? And you can talk about knowledge and institutions as well.




Kanjun (01:00:37):

Yeah. I guess to answer that question directly: I think for me, creativity drives basically everything I do. I only realized this a few years ago. I never really identified as creative personally. I had a really close group of friends in high school, and a few years ago, talking to one of them, I was like, "I think I might be a creative person." And he was like, "Duh." Everyone knew that. It was really obvious. I was like, "That's strange." But anyway, I think like...


Ben (01:01:11):

All of those people understand you better than you understand yourself.


Kanjun (01:01:19):

Yeah. I guess it points to how powerful stories of yourself can be, and how limiting they are. I believe in human creativity: if we can unlock human creativity, then there's this extraordinary potential in humanity. I think that's why I'm so interested in artificial intelligence. It seems to me like fire, or like electricity-- one of the greatest tools for unlocking human creativity that we've ever encountered. So I think all of my projects-- AI, working on metascience, the Neighborhood, the fund, the podcast-- are all about human potential: understanding human potential, unlocking human potential. Can we set up systems and excavate ideas that can unlock human potential?


I guess to your point on knowledge: there's something about knowledge I'm very confused about. It seems like there's a benefit from institutions and cultures holding old knowledge, and then there's a point at which, in many situations, it's not useful to hold that knowledge. So institutions like Harvard might be holding really outdated knowledge or beliefs. And this is why I said maybe there's an analogy to overfitting, where times have changed and you actually should be dropping some of your beliefs. But I'm also confused, because it seems actually good for some institutions to hold the belief that "universities are good"-- because they are good for some people-- and for other institutions, like the Thiel Fellowship or New Science, which is a new funder out here, to hold the belief that universities are not useful, because they're not useful for some people.


So now you have institutions that hold different beliefs, cultures that hold different beliefs, and so you get a lot more variance or diversity, and you're able to 'service' many more people, because different beliefs and different institutions are a good fit for different people. But I'm kind of confused about this dynamic of what causes an institution to grow. Based on these beliefs, how come the Thiel Fellowship is still so small? I'm sure there are more people it could service-- I don't know. I'm a bit puzzled about this relationship between knowledge and culture and diversity.
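
Kanjun's overfitting analogy can be made concrete with a toy model. A rough sketch, with made-up data and assuming numpy is available (this is illustrative, not anything from the conversation): an "institution" that memorizes every wrinkle of the old environment (a degree-15 polynomial) typically transfers worse to a shifted environment than one that keeps only the robust core (a line).

```python
import numpy as np

rng = np.random.default_rng(0)

# "Old world" observations: a simple underlying trend plus noise.
x_old = np.linspace(0, 1, 20)
y_old = 2 * x_old + rng.normal(0, 0.1, size=x_old.size)

# An institution that memorizes its history: a degree-15 polynomial.
overfit = np.polynomial.Polynomial.fit(x_old, y_old, deg=15)
# An institution that keeps only the robust core belief: a line.
simple = np.polynomial.Polynomial.fit(x_old, y_old, deg=1)

# "Times change": the same trend, observed at new points without noise.
x_new = np.linspace(0, 1, 200)
y_new = 2 * x_new

def mse(model):
    return float(np.mean((model(x_new) - y_new) ** 2))

print(f"overfit institution, error in the new world: {mse(overfit):.4f}")
print(f"simple institution, error in the new world:  {mse(simple):.4f}")
```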


Ben (01:04:01):

Sure. So my reflection on that is that that form of knowledge is actually much closer to art than we would like to acknowledge. As we were riffing on earlier, art has a time and place and culture, and can be interpreted by one group of people, in one time and place, one way; and adjacent to that, in another country or another time, people can interpret that very same piece very differently-- plausibly both getting a lot of value from it, or one group getting more value and the other less. But we pretend to ourselves that knowledge is closer to the physical natural sciences-- like you said, the physics, how the atom works. Well, actually, institutional knowledge is not like how the atom works. To your point, I don't think the alien would view the Thiel Fellowship or Harvard the same way.


To the alien it might be, "Well, you might do it that way or you might not." It's not like the atom. Therefore it is closer to art, with all of the complexities around art. That's probably why you would need a randomized controlled trial to ascertain it. But even then, I actually think it would give you a false read, because as time changes and as people change, the value of that will also change-- which riffs back to your earlier thing about creativity and art. It seems to me to be a lot closer to that, but I'm not sure.


Kanjun (01:05:21):

That's really interesting. Yeah, this idea that the beliefs expressed in art are dependent on particular things in order to be effective or useful or meaningful. So they're dependent on how everyone else is interacting with each other; some of the other norms and beliefs that exist. You can't take a belief fully out of context because it's part of this dependent system of beliefs and behaviors.


Ben (01:05:49):

Correct. And therefore the same is true of institutional knowledge like that. I'm going to finish off on the creativity and dance element, because I'd be interested to know what you think non-dancers do not understand about people who dance.

Kanjun (01:06:10):

I guess I was a non-dancer until college. Then I started doing competitive Ballroom dance, which is a very structured dance style. That was very comforting to me, because before doing Ballroom I was like, "I can't dance. I'm super clumsy." I'm still super clumsy. I don't have any spatial awareness. I'm not paying attention. I'm always in my head. So Ballroom was this really interesting way of getting to understand connection-- with a partner, with my body, with the floor, with the air and the space around me-- in an environment where it didn't feel like I was just failing the entire time and doing a bad job. I think there's something interesting here about non-dancers, where a lot of people say, "I can't dance."


I think it's more because of this lack of positive feedback-- this feedback cycle where people are not getting any positive feedback for their attempts to dance. So they end up with this belief that they can't dance. For me, I did Ballroom first, and then after 2012 I started to strip away some of the structure of Ballroom. I got tired of it and I was like, "I really want to do something where I can be a lot more expressive and still have this partner connection, because that adds so much complexity to the dance. But where I can make things up, or improv, or try new things, discover new things." And so I slowly got into, first, West Coast Swing and then Fusion. Fusion is this really interesting partner dance style where it's literally what it says: a fusion of all styles. You're making things up the entire time, combining contact improv with Ballroom, with swing, with all of these different styles, and making up new styles all the time.


It's basically constructivist dance with a partner to music-- constructivist movement with a partner to music-- and I love that. Some people come to Fusion and they say, "I'm not very good at it. How do you get good?" I do think the structured training of Ballroom, and this understanding of connection and how to hold my body and how to connect my body to itself, was really helpful. I guess a piece of advice I give people for dance, or for new dancers: an easy way to hack this sense of connection is to imagine that you are moving through molasses. So air is no longer air; it's molasses, and it's very, very viscous. Now you're basically always pushing against some force that's pushing back. This imitates a lot of the sense of internal connection that you get in dance when you've done it for a long time.


Ben (01:09:22):

Wow. That's really fascinating. So think of it like molasses, or I guess like those slow Tai Chi movements. And also: if you are a non-dancer and you think you can't dance, that's obviously false in some fundamental way. I reflect that every three or four year old can dance in some form. Therefore it's not something you must lose the ability to do; you must somehow decide that you can't. I've learned a lot.


Kanjun (01:09:53):

Dance is inside of you. It just needs to be unlocked.


Ben (01:09:57):

And somehow-- I'm not quite sure-- the Fusion dance seems to me a kind of long-loop analogy to general intelligence. This idea of mixing a lot of things: it's kind of everything that came before it, but it helps to have structure, and you are forming new things in real time with a partner or the environment or everything around you. Obviously that's a very human thing, but somehow in this whole conversation it seems to me a distant cousin of what we were talking about with the connections between all of these other types of things.


Kanjun (01:10:30):

That's actually really interesting, because in AI we often talk about-- So if you're training an AI system in some simulated environment, we often talk about multi-agent simulations: you want multiple AIs in the environment. Why do you want multi-agent simulations? Well, some people say, "Okay, well, you maybe get interesting emergent behavior." But I think that's actually much less interesting. What other agents cause is a more diverse set of data and outcomes in the environment. Otherwise the environment is static; other agents are modifying it. So you're much more likely to encounter new states of the environment when other agents are modifying it than when they're not. It's why I don't like dancing by myself: I kind of get stuck in local minima of movement. Whereas if there's somebody else, they're always introducing new states of the environment that cause me to figure out how to react in new ways and discover new things about myself and my own sense of movement. So I do think there is this parallel to multi-agent simulations.
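
Her point about multi-agent simulations can be illustrated with a minimal sketch (toy code, not Generally Intelligent's setup): a learner random-walking alone on a grid can only ever meet each cell in one configuration, while a second agent that modifies cells as it moves multiplies the states the learner encounters.

```python
import random

def states_seen(other_agents, steps=5000, size=8, seed=1):
    """Count distinct (cell, contents) states a random-walking learner
    experiences; other agents flip cell contents as they move."""
    rng = random.Random(seed)
    contents = [[0] * size for _ in range(size)]
    learner = [0, 0]
    others = [[size // 2, size // 2] for _ in range(other_agents)]
    seen = set()
    for _ in range(steps):
        for agent in [learner] + others:          # everyone takes a step
            axis = rng.randrange(2)
            agent[axis] = (agent[axis] + rng.choice([-1, 1])) % size
        for x, y in others:                       # others modify the world
            contents[x][y] ^= 1
        x, y = learner
        seen.add((x, y, contents[x][y]))
    return len(seen)

print("states seen alone:       ", states_seen(0))  # at most 8*8 = 64
print("states seen with partner:", states_seen(1))  # up to twice as many
```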


Ben (01:11:34):

To draw another, maybe tenuous, parallel: that friction or dance between two entities-- in dance or something else-- seems more likely to create a new field, or a field where we don't even know it's a field, because you've had a novel interaction within dance. Plausibly you do it enough times and people like it, and that's actually a new dance form. At that moment in time no one would have thought it was a new dance form, because it was the first time it had ever happened between those parties. And if that analogy holds, that would be other AI agents, or whatever, making those new fields start to form.


Kanjun (01:12:08):

Yeah, that's interesting.


Ben (01:12:12):

So how about we play a short version of underrated/overrated, and then you can talk about a couple of your current projects? You can make a comment, you can pass, or you can just say underrated or overrated. We have some of these. Underrated or overrated: having agency?


Kanjun (01:12:36):

Depends on what environment you're in. I think it's vastly underrated in the vast majority of the world, and slightly overrated in the rationality community-- speaking as someone who is very close to it; I met my co-founder at a rationality workshop. Agency is kind of the underlying description of human capability. I think of it as, "What is our ability to change the world from one state to any arbitrary state?" The farther away that state is, the more agency we have, or something like that.


Ben (01:13:15):

I can definitely see that. It's like whenever I meet a long-termist or something like that: I definitely think they overrate existential risks, but that's probably because everyone else underrates existential risks, so you probably meet somewhere in the middle. Although on that one, that's why I can never quite get my head around the totality of EA, or effective altruism: they seem to put so little weight on art and creativity, which just cannot jibe with my worldview, even though they try to make exceptions for it. So they vastly underrate it, though that might mean I overrate it a little bit. Okay, next one: city planning; overrated, underrated?


Kanjun (01:13:55):

Dramatically underrated. I think all cities should be razed, except for some, and then built from scratch.

Ben (01:14:02):

Would it be from a centralized planner, or would you somehow let-- I guess you could call it market forces, or people choosing where to be? How much zoning would you have, from, I guess, nothing to everything? Are you somewhere in the middle?


Kanjun (01:14:19):

Yeah, I would basically... Okay, maybe not raze cities. I live in San Francisco, which is the most frustrating city in the world. A good friend of mine studied zoning in lots of different cities. My vote would be basically almost no zoning. Tokyo is one of the most interesting cities in the world partly because its zoning laws are very, very loose. So basically anyone can build anything anywhere. That's not fully true, but people have a lot of agency over what they're able to do with their space, and so you see really crazy stuff; really interesting outgrowths. To me, Tokyo is one of the most beautiful places on earth. It's this melting pot of humanity, in a way, where people really can express themselves in the environment.


I think San Francisco is the opposite of that. Everyone's expression is completely shut down because you're not allowed to do anything. So I wouldn't do central planning necessarily-- maybe a little bit of it. And much narrower streets: human-scale planning, human-scale cities. I think we could actually convert existing cities, if there were enough motivation, into much more human-scale cities, where on the roads we have parks and restaurants and pop-up shops and things like that.


Ben (01:15:47):

Yeah, I can see that. And San Francisco is ridiculous like that, isn't it? It must be one of the five or so highest GDP-per-capita places in the world. But one quirky thing about Tokyo which I'm sure you know-- in fact about Japanese planning in general-- is that they think about their buildings as having only 30 to 80 year lifespans. Hence this idea of renewal. You might not think of it that way, because you think of the really old temples and wooden buildings, which have obviously lasted a thousand or even two thousand years. But actually those are the exceptions.


Kanjun (01:16:22):

The Shinto shrine also gets rebuilt. So that's just really interesting. I think this is actually a very underrated thing: death. In Western culture we really underrate death. Institutions should die. It's part of why we have all these problems in science-- because institutions can't die. Buildings should die.


Ben (01:16:42):

For those listening before the 12th of January 2023: my next performance lecture is all about death-- about how in modern society we don't really talk about it enough. Whereas even going back just 50 years, and certainly one or two hundred years, the death of everything-- whether buildings, institutions or particularly people-- was a much talked about thing.


Kanjun (01:17:08):

Actually, I just made a connection, which is that there are lots of things with tradeoffs. Longevity versus death has very clear tradeoffs. Similarly, institutional history versus a brand new institution with no knowledge has very clear tradeoffs. So I just wanted to make the point that there are different parts of the space you can choose, with different tradeoffs.


Ben (01:17:32):

Yeah. I hadn't thought of them as opposites, but that's exactly right in terms of what we talked about-- institutions, death and renewal and all of that. Okay. Underrated, overrated; a couple more left. Innovation labs-- I'm thinking of ARPA, or here in the UK we now have ARIA. Underrated or overrated: innovation agencies?


Kanjun (01:18:00):

Probably neutrally rated. It's not really rated. They're useful and they're not that useful if you don't do new things.


Ben (01:18:14):

Okay. Very good. All right, last two. High frequency trading or in particular high frequency trading algorithms?



Kanjun (01:18:24):

That's hilarious. I guess the context is that I was a high frequency trader in college, to pay for college. People are like, "What's the purpose? Are you doing something good for the world?" Everybody in high frequency trading says, "We're providing liquidity to the markets. That's the answer. That's the good that we're doing in the world." So I think that purpose is probably overrated from within the community. I think most likely we should have much less high frequency trading.


Ben (01:18:59):

This is interesting, because there's a debate among market economists. One of the functions of markets, supposedly, is to find prices-- whatever the correct price is. And they have no idea why we need so many trades to actually find the correct price, because you don't need that in a lot of other forms of markets; but you seem to need it in stocks. There is also a debate as to whether high frequency trading actually provides liquidity or not. But if it funded your college, then in some ways it must be massively underrated, because at the second or third order it has allowed you to produce all of this other amazing stuff. So that was definitely a bet worth taking, if that line of thinking holds. Okay, the last one is broad-ranging: diversity, in tech or in any domain. I guess I'm interested in diversity in technology, as it always comes up.


Kanjun (01:19:58):

What exactly do you mean by diversity?


Ben (01:20:02):

So, to clarify: I'm probably thinking of people diversity, but you could take it further. And, riffing back on our tradeoffs, there's this idea of lots of different things obviously being good for ideas and other things, versus a narrow focus which potentially might let you go faster on one thing. But you could take the women-in-tech angle in any domain as well. Or you could take it broadly.


Kanjun (01:20:32):

Yeah. I think diversity in general is probably underrated-- I suspect especially in Silicon Valley style cultures. The Silicon Valley style culture is very maximizing, and capitalism in general is rather maximizing. So there are probably lots of things that don't get funded or don't get done, even though they might be really interesting or useful. The arts, I think, are a good example: there's not very much art here, I think for that reason. It's very maximizing, very utilitarian. People are like, "Why do art?"


Ben (01:21:12):

[Inaudible 01:21:12] in San Francisco. There can be great people as well, but not that much into art. In fact, I was reading just recently: Julian Gough wrote the end narrative in the original Minecraft, but supposedly didn't sign any contracts and has now made it open source, or essentially given it a creative commons license. He has this whole long post, a riff about how he has nothing against the people who did try to make him sign contracts-- but that was the capitalist system. And his system, his art, clash with it in these very interesting ways when you want to make or create things, whether you're talking about financial capital or, to use finance speak, these other bits of capital: social capital, intellectual capital, relationship capital, human capital, and all of those types of things.


Kanjun (01:22:04):

That's really interesting. I'll definitely read that. There are a lot of clashes, and I like capitalism. It's very good at an important thing-- this idea is Michael's-- which is aligning what is true with what is good with power. When those things are misaligned, you end up with very corrupt states. Capitalism actually does this moderately well, but it also has a lot of tradeoffs. I wonder if there actually exist much better systems-- once we have more intelligence to do the aligning-- that could support a lot more diversity of ideas and types of things people can be doing, in addition to aligning what is true, what is good, and power. So maybe aligning what is true, what is good, power, and beauty-- something like that. That beauty piece is definitely missing.


Ben (01:23:01):

Yeah, that's the next system, obviously. My day job is obviously well within the capitalist system as well. I spoke on an earlier podcast with someone called Jacob Soll-- Jake Soll-- who traced the history of these ideas: the early capitalists like Adam Smith traced a lot of their thought back to Cicero and these Roman thinkers. But one of the ways the early capitalists thought about the world, a pre-industrialized world, was that states and governments were also meant to make and create markets. And they also had this idea of aligning the good and the incentive. I remember Amartya Sen has this anecdote, or analogy, for how he thought the early capitalists saw it working: if you are being chased down the street by someone who wants to knife you for your money, or because you look wrong, and you suddenly throw a lot of money in the air behind you, they stop and go for the money instead.

So it is actually meant to align people to something for the greater good-- it's actually a moral cause. The early capitalists thought it saved us from our baser nature, by aligning us through incentives. I had never really heard that. Then I went back and read some of these early capitalists, back to Cicero, and it is seemingly true-- at least in my interpretation of what they were saying and thinking at the time. And we've evolved it to this state. So it's interesting that, to your point, it will likely evolve again, and how it evolves is still very open.


Kanjun (01:24:38):

I think that's really interesting. How are we doing? Are we doing okay on time? Can I do a slight diversion?


Ben (01:24:46):

Go for your divergent thing.


Kanjun (01:24:49):

Okay. I think this is really interesting in that capitalism has successfully aligned us toward satisfying people's desires and wants, and somewhat away from our baser nature. Violence has decreased a lot. There are goals to go after that are not tribe-against-tribe kinds of goals. It actually feels like we're pretty ripe for some kind of transformation, because AI is getting better really fast-- way faster than most people in the world can see. People can see ChatGPT and see all its flaws, but ChatGPT is just the tip of the iceberg, even of research that we've done already. There's still a ton of low hanging fruit in terms of capabilities. So I think all of this stuff will happen faster than we think it will. And maybe that means a system like capitalism needs to be augmented somehow.


Ben (01:25:57)

That's an agree from me. So I actually work a lot in the biological sciences, and I bore everyone-- I can't give the technical details because I'm not really a coder. But what DeepMind have done with protein folding in computational biology-- and this is, I mean, only a couple of generations in; it's early days-- is so mind-boggling, having studied it 10, 20 years ago. It is the equivalent of magic, or a form of magic; it seems that far ahead. Therefore the downstream effects are just very hard for us to imagine, because even people in the field can only just start imagining what it will do. Related to your point: typically we've had to use controlled trials because we don't really know the mechanism. This is some of the beginning of actually starting to understand-- of having a link to the biological mechanisms of something. It's not that we're at zero; but between zero and a hundred, we're closer to 10 or 20 than to a hundred, for sure. You can just see, on the edges of where I have domain expertise, that these things are opening up in ways which are very hard to imagine, and therefore I think it could be both very exciting and scary at the same time. Riffing on the capitalism...


Kanjun (01:27:17):

Just one quick thing there, which is one of the things I'm most excited about with AI. Most people talk about automation-- AI doing what people are doing. That's maybe interesting, but I'm most interested in systems that are able to monitor very complex systems. Because the model is interacting with the economy, or with all of the people-- it can interact with all of these people simultaneously-- it's building up some model of the system that we can then interpret and inspect. That might lead us to a much deeper mechanistic understanding of how human complex systems work, in the way that AlphaFold allows us to better understand mechanistically how protein folding works. But you were going to say: riffing on capitalism.


Ben (01:28:00):

I'm going to riff on that and I'll come back to capitalism. Are you most interested in human systems like markets and social stuff, or things like weather systems or other complex physical phenomena or essentially both because you think they will probably apply to both?


Kanjun (01:28:16):

More human systems. Weather and other systems like that we can evaluate and measure using instruments. We can measure the weather somewhere, and then we can make better algorithms and collect more data. That doesn't require as much 'intelligence.' What's interesting about human systems is that humans are full agents that need to be modeled. You need to model the human agent, and also somehow take what they're saying-- communicating in language-- and turn it into something useful. That's huge amounts of data. So you need something that's able to understand all of that data linguistically, and maybe beyond linguistically-- what is their body language, et cetera-- and then turn it into some useful behavior. For a system to be able to do that, it needs to have a pretty good model not just of individual humans, but also of groups of humans interacting with each other. And that kind of model is a really interesting model to inspect, from the perspective of: how does a complex human system work? How does an institution or a culture work? I can imagine deploying some AI chatbot within an institution. If it's actually learning from the people inside it, then it will end up learning a lot about that institution. It would be really fascinating to inspect the fingerprint of this institution versus that institution.


Ben (01:29:39):

Yeah. And because group behavior is different from individual behavior, it is this unique and very interesting place to assess that.


Kanjun (01:29:48):

Exactly. You can also think about democracy. Democracy is an algorithm that has really terrible inputs right now: it's just binary for every person. We use that terrible input because it's the highest resolution we could reasonably aggregate from people so far. But with AI systems, maybe that doesn't have to be true. It doesn't have to be the highest resolution we can aggregate.
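
A toy illustration of the resolution point (the numbers here are invented): binary majority voting keeps only the sign of each person's preference, so it can pick the option that a richer input-- here, summed preference strengths-- would reject.

```python
# Each voter's input: utility of policy A minus utility of policy B.
preferences = {
    "voter1": +0.1,   # mildly prefers A
    "voter2": +0.2,   # mildly prefers A
    "voter3": -5.0,   # strongly prefers B
}

# Democracy's current input: one bit per person (the sign only).
binary_tally = sum(1 if v > 0 else -1 for v in preferences.values())
# A higher-resolution input: aggregate the full preference strengths.
utility_tally = sum(preferences.values())

print("majority vote picks:", "A" if binary_tally > 0 else "B")    # A, 2-1
print("summed utility picks:", "A" if utility_tally > 0 else "B")  # B
```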


Ben (01:30:11):

That's the best worst system we have or something. What is the quote? "Democracy's really bad. It's just better than anything else we have" or something like that.


Kanjun (01:30:21):

Right, right. Exactly.


Ben (01:30:23):

Okay. One final riff on capitalism, and then I'll ask you about your current projects and advice. If you go back to the seventeen hundreds and eighteen hundreds-- at least in my reading of it-- you had thinkers like Malthus who basically thought we could never really get rich as a large population. And if you looked at the preceding one, two, four, ten thousand years of human history, you would have broadly agreed with him, as many people did. There were seemingly these limits, whether physical limits or others.


And then we essentially had the industrial revolution-- you can talk about corporate labs, innovation, corporate forms, industrialization, energy, technology-- which seemed to break the Malthusian trap. And so we are, at least in aggregate, really wealthy, in a way that a person in 1850 or 1800 or even 1900 would have been completely astonished by. It seems like magic. But on the other hand, I think they would also be very surprised-- in fact, my reading of what they were writing at the time is that they would be very surprised-- that we cannot split the pie better. They would basically say, "There's no way we could have grown the pie this big. That's incredible. What did you guys do? That's like magic."


But on the other hand: "How dumb could you be, that you could grow this pie but you haven't-- you've got more than enough, but there's no way you can split it? Surely splitting it was the easier problem, and growing it was the hard problem." That's really interesting. You read the work and you go, "Wow, they just thought the really difficult thing was growing the pie. They thought splitting the pie was going to be really easy." They assumed that once we were really wealthy, no one would need to work, or we would split it, all of that. It seems to be the reverse-- and we'll see this with the AI to come. It looks like we are going to be rich, in whatever form, at least in aggregate. But at splitting the pie we have not got any closer, really. Maybe at the edges: some ideas, welfare states, some insurance and all of that. But it's nowhere near what the thinkers of 1850 thought we would get. And yet they had no belief that we would ever get this wealthy.


I find that a really interesting dichotomy about how we got it wrong-- and I think we have probably got everything about the future wrong too. But I always think this about capitalism: they didn't think we would be this rich, but they just didn't think we'd be this stupid in terms of splitting the pie. It's just incredible.


Kanjun (01:32:58):

I think this is a really big and important problem, especially when it comes to AI being deployed. I want to harken back to our points about death. There was a study about how, if you increase the estate tax to basically a hundred percent, or even 95%, then a lot of our income inequality problems would be solved. That's quite interesting. One hypothesis they had is that accumulation effects are what create the wealth gap, not what you're able to earn in a single generation. Once you accumulate a lot of wealth-- once it's sitting in investments-- it's a little bit harder to lose than it is to build from scratch. And once your rate of increase gets much higher than someone could ever earn in a lifetime of work, you're accumulating wealth at a much higher rate than anyone starting from zero could gain it. So yeah, this point about death, or the estate tax-- maybe the hypothesis here would be that it's not actually about splitting the pie; it's about disrupting accumulation. It's not a pie, it's actually...

Ben (01:34:24):

Pies that you continuously make-- because obviously the analogy breaks down. But you're right in that.


Kanjun (01:34:29):

The pies that are always growing; self-growing pies.
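
The accumulation hypothesis is easy to see in a toy compounding model (illustrative numbers only: an assumed 7% return and a fixed saving rate, nothing from the study she mentions):

```python
years, r = 30, 0.07              # assumed horizon and investment return
inherited = 1_000_000.0          # capital already accumulated at the start
saved_per_year = 20_000.0        # what a saver can put aside from wages

capital, saver = inherited, 0.0
for _ in range(years):
    capital *= 1 + r                          # existing wealth compounds
    saver = saver * (1 + r) + saved_per_year  # saving from zero compounds too

print(f"heir after {years} years:  {capital:,.0f}")   # ~7.6 million
print(f"saver after {years} years: {saver:,.0f}")     # ~1.9 million
```

Late in the run, the heir's pie grows by more each year than the saver can add at all-- the "self-growing pie" point.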


Ben (01:34:34):

Great. Like, "We've invented self-growing pies-- magic-- but we haven't..." And there is something in that, because in the UK, in Britain, the landowners have been rich for a very, very long time. Land is still basically the majority of Britain's wealth, which is very weird. So for a thousand years they've been among the richest people, and that has not had the renewal; whereas most other long-dated fortunes-- technology or merchants or whatever-- have churned through, and you see that. The landowners have not, because it's accumulated wealth which you don't give up. Okay. So one last cheeky question and then we'll finish. We had a slightly cheeky question from Twitter about how you can use your name as a verb. I have seen on your website that you can definitely use it as a kind of noun, because you have "kanjectures", of which we talked about a couple. But do you also see yourself as a verb, an action-- to "kanjun", as well as to have kanjectures?


Kanjun (01:35:36):

So this question is from Aram, one of my best friends, who was my housemate for many years. The joke is that I love to steal people's food. Food is much tastier when it's someone else's food than when it's my own. So often I'll go out with friends, I won't order very much, and I'll just steal food from them. So Aram coined the verb "to kanjun", meaning to steal someone's food. However, there's another thing that I do, which is that whenever I talk to somebody-- and you didn't encounter it that much here, although you would have if we'd had more time to riff-- I kind of intensely question them, trying to understand them. Like, "Why did you make these decisions? Why do you think this? What do you think about this?" So "to kanjun" also means-- this is my default state encountering anyone-- to intensely question someone about their life and their thoughts. It can mean either; take your pick.



Ben (01:36:44):

That's great. So actually that's two verbs and a noun. I will riff back to our earlier thing and posit that's because food from somebody else's plate has more meaning to you than food from your own.


Kanjun (01:37:00):

I love it. Yes.


Ben (01:37:02):

The calories are obviously the same; the value is more. I don't often do this, but I'm open: you can ask me a question if you would like.


Kanjun (01:37:15):

Oh, I asked you some questions, but I have lots of questions about your relationship to art. The culture I'm in-- Silicon Valley, EA-- maybe places no value on art. I'm really curious, from your perspective as somebody who's a playwright, and to whom art is clearly very important: why do you think there's this disvaluing of art, and what do you think is lost in a culture like this?


Ben (01:37:54):

So in my view, this stems quite clearly from utilitarian-led thinking, which you can trace through Peter Singer all the way back to Jeremy Bentham. You can also see it in the work of Derek Parfit, who was viewed as one of the greatest living British-- actually, global-- philosophers; he died a few years ago. It's encoded in his work "Reasons and Persons." So to cut a long answer short: their shortcut for this-- which you'll know, but for listeners-- is that you get round to expected utility theory, or expected value. This is a shortcut for trying to value something, particularly stocks or cost-benefit or things with cash flows-- things that you can count.


There are a lot of paradoxes where expected value doesn't work. The classic one is the St. Petersburg paradox. But there are also things like a 51/49 bet: 51% of the time you get a lot of value, but 49% of the time you lose everything. Say you either double the value of the world or destroy it: play the game for long enough and you are going to destroy the world. Strict expected value basically tells you to play. And that is because there are a lot of things which you can't put into an expected value calculation. Now, the-- in my view-- fudge for this in expected utility, which can work for some things, is that you have this idea of utility, which you then use to adjust the value, to bring it closer back to humanness. But this is really hard for us to do, and mostly we fail to do it except in very simple cases.
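
Ben's 51/49 game is easy to simulate, and the simulation shows exactly the gap he describes between expected value and what almost always happens (a sketch with made-up parameters):

```python
import random

# One world: each round, value doubles with p=0.51, is destroyed with p=0.49.
# The expected multiplier per round is 0.51 * 2 = 1.02, so strict expected
# value says keep playing; but the chance of surviving n rounds is 0.51**n.
def play(rounds, rng):
    value = 1.0
    for _ in range(rounds):
        value = value * 2 if rng.random() < 0.51 else 0.0
        if value == 0.0:
            break
    return value

rng = random.Random(42)
trials, rounds = 100_000, 10
outcomes = [play(rounds, rng) for _ in range(trials)]

print("theoretical EV:", round(1.02 ** rounds, 3))          # ~1.219
print("simulated mean:", round(sum(outcomes) / trials, 3))  # close to the EV
print("worlds destroyed:", f"{sum(o == 0 for o in outcomes) / trials:.1%}")
```

The average is propped up by a tiny sliver of surviving worlds holding enormous value; roughly 99.9% of runs end in ruin.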


Then even in complex cases, where you can take some idea of what humans might think, it's not clear that there's a consensus. A classic one from healthcare economics: it's very expensive to save a preterm baby-- it costs somewhere like half a million to a million dollars or more-- whereas you could spend that on saving the lives of diabetics at maybe $20,000 or $30,000 each. On first-order expected value, you always save the diabetic. Then you think about utilities-- "maybe there's more life, more value to society, in the baby"-- and you can try to fudge it. Or you can ask people, and you find that the majority of people in the street say, "No, we should save some babies." So that's just one way of getting a disparity which doesn't fit into expected value.


But because expected value is such a core part of how their thinking has grown up, they can't handle things which are hard to measure. Or, to our point, things like art or creativity, which are not only hard to measure-- potentially impossible-- but are also not time-invariant: they move across time, people, space, and culture. And because that doesn't fit into the framework-- and I guess some of them do get there if they think really hard about it, or they do fudges, or they might think of themselves as slightly pluralist-- it really doesn't fit, and therefore all of that falls down. So while you might have had enough data to do expected value over malaria nets, you definitely can't do it over art. And the second-order effects, which are just really hard to reason about-- the power of art-- you just can't put into your calculations. Yet they seem to sit at really crucial tipping points for the world.


So, two or three examples. Maybe it's emergent behavior, talking about that-- Greta: is that emergent behavior or not? But for instance, the white Texan lawyer for Martin Luther King Jr became a lawyer for minority rights because he listened to Louis Armstrong play jazz, and he said, "I've heard genius in a black man. The only thing I know is that I need to fight for equality." He's on record saying that. That's the interesting formative power of art. You think of something like...


Kanjun (01:42:01):

Art is like a way to see the humanity in anyone else.

Ben (01:42:05):

Exactly that. And at all of these important social progress points, you had an artistic narrative which changed the system or made the system better. Some of that goes to our deepest myths-- I call them myths because they're things that are true only because humans believe them to be true. Like money. The tree doesn't care about money. The dog doesn't care about money. The alien probably wouldn't care about money, depending on how we had to interact. Humans only care about money because we've made it so. That's one of our greatest myths, and essentially it's an act of creativity-- in fact, kind of an act of art, cutting across all of humanity. Because that doesn't fit into expected value theory very well without some sort of pluralism, they can't get to it so easily, and therefore it falls by the wayside. That's why you then get these critiques from outside about how they don't understand the system. They do obviously understand the power of story and art, but because you can't really weigh it up between individual stories, they tend not to invest in it very much. So that's an answer to that. I feel fairly certain this is the root of it: when I speak to a lot of EAs, that seems to be true, and if you read their influential philosophy, that also seems to be true.


Kanjun (01:43:27):

Yeah, I think that's true. Peter Singer explicitly says funding the arts is not as good as saving lives in other countries. One thing I've been puzzled by is this question of measurement. One of the things Michael and I talked a lot about is funders trying to measure research results, and how that results in short-term, incremental decisions. And measurement is something I deal with a lot in my day-to-day life. I used to run a startup. Startups are all about measurement, because it's all about the short term. But running a research lab, I can't measure anything-- there's very little that I can measure. So I've had to completely change my perspective on measurement.


So I think there's this really interesting point about utilitarianism. Startup founders can be utilitarian, because they're not actually losing much by being super measurement-oriented. But when you're doing research, or you're in the arts, or you're trying to make real long-term change in the world, then things are really hard to measure. The problem is that when I talk to an EA-- and I would consider myself somewhat EA, in terms of being interested in the philosophy while finding some things quite problematic-- they would say something like, "Well, the issue is that your utility function just hasn't captured all of the things that you're missing." There was a time when I thought, "Yeah, that seems true. We should be able to capture other things in the utility function." But now I'm at a point where I'm wondering: actually, that may not be true at all. Some of these things may not be measurable, maybe not for a very, very long time. So using expected utility as a framework is just going to lead to the wrong actions-- to non-optimal actions in the long term. So I think your point about art is super interesting. The old me would have said, "Okay, can you count art in the utility function?" The now me is like...


Ben (01:45:37):

And they believe you can; I think I'm arguing you cannot. What's the phrase? "Not everything that can be counted counts, and not everything that counts can be counted." I did quite a long podcast with a philosopher called Larry Temkin, who has this critique of EA. For those listening, it is a little bit nerdy and it does go on for three hours. But his main point is that his work on moral philosophy-- on the transitivity, or intransitivity, problem-- shows that the axioms behind expected utility theory do not hold for moral choices. They hold for maths: obviously three is bigger than two, two is bigger than one, so three will always be bigger than one. And it holds for heights. So it holds for maths; it does not hold for social or moral reasoning.


He has a 500-page book to explain all of this, but you can take it as read. When he first posited it, a lot of people thought, "Hmm, that doesn't seem right; that's crazy." And now a majority of thinkers actually think it's true. So my point of view is: if this is right-- and it seems like it is-- if one of the fundamental axioms does not hold, then you have to use the framework with caution. That doesn't mean it can't be a useful tool; it obviously is. But to apply it to all moral reasoning, you need caution, and it seems to be applied without much caution. Larry Temkin explains this in his book, and on the podcast from earlier in the year-- it's very interesting, if anyone wants to go back and listen to that.
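
A standard concrete illustration of intransitivity in social choice is the Condorcet cycle. It is not Temkin's own argument, but it shows the same structural failure Ben is pointing at: pairwise majority preference need not be transitive, even when every individual's ranking is.

```python
rankings = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if most voters rank option x above option y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}:", majority_prefers(x, y))
# All three lines print True: A beats B, B beats C, yet C beats A -- a cycle,
# so "the majority preference" violates transitivity even though every
# individual voter's ranking is perfectly transitive.
```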


Kanjun (01:47:24):

That's really interesting. I'll definitely listen to it.


Ben (01:47:27):

Yeah, I'll send you the link. Alright, so let's end on-- you could also feel free to ask me another question-- but: current projects that you are working on that you might want to mention. I think we have mentioned quite a lot of them. You could also offer any advice you have, either for startups-- it could be what you're looking for in startups-- for women founders, from your own journey, or on something in AI. So yeah: current projects, and any thoughts or advice you have.


Kanjun (01:47:58):

What would you like for me to give advice on? Advice is always very personal.


Ben (01:48:06):

Okay. So let's do advice on: if you are thinking about working in AI, or at a startup, what should you be looking to do? Because you already gave me really good [inaudible 01:48:17].


Kanjun (01:48:21):

Join Generally Intelligent.


Ben (01:48:23):

Yeah. Okay. So that's number one: you are hiring. If you're interested in that, you should look them up on the website and go join them. But maybe you are abroad, so that might be tricky. If you're not going to get hired at Generally Intelligent, what's the next best thing?


Kanjun (01:48:41):

We also hire remote engineers.


Ben (01:48:45):

The other best thing. If you want to work on AGI, join Kanjun.


Kanjun (01:48:51):

That's right.


Ben (01:48:53):

So that's that.


Kanjun (01:48:55):

What else?


Ben (01:48:58):

Well, you can maybe look at current projects.


Kanjun (01:49:01):

My current projects?


Ben (01:49:02):

Or advice.


Kanjun (01:49:04):

Sure. Yeah, I can briefly talk about current projects. Generally Intelligent we've already talked about: we are hiring, and we're very interested in inspecting systems to get a deeper understanding of what's going on, and using that to figure out whether we can get good guarantees around robustness, safety, et cetera, as capabilities increase. Other projects-- if you're a startup founder, I have a fund investing in folks, called Outside Capital. We do a fair amount of AI investment-- or talk to us if you're interested in investing in a fund. What else?


Ben (01:49:49):

You want to talk about the Neighborhood at all?


Kanjun (01:49:52):

Maybe. It's not something I advertise too much, but the Neighborhood is a really fun project. The reason we ended up doing it-- Jason Benn, my former housemate and coworker, is the one driving most of it; I'm just helping him. We used to have this house called The Archive, and it was a great house. I think it changed our lives in really important ways. Maybe this is the piece of advice I would give: you really become the people around you. I really become the people around me-- the five or so people I spend the most time with. I think choosing those people very carefully is quite important. I've chosen them in particular ways, such that they are people I really look up to and would want to become more like, because people tend to internalize the beliefs and the limitations of the people around them.


So I want people around me who are expanding my idea of what I can be capable of, not reducing it. The Archive was a 25-person house that I co-ran for five years. One of the big things-- Michael actually made this comment that The Archive seemed like a self-actualization machine. Someone comes in, and a year later they are just much more actualized. I think that was a really interesting thing that we did culturally, accidentally, just because of the people we were. We were really trying to understand how we become better versions of ourselves and discover what our potential could be. Very seriously asking that question, and having people around us asking those questions, was really transformative. We shut down The Archive in the middle of Covid.


The Neighborhood is a scaled-up, more sustainable, for-thirty-somethings version of it, where essentially we want something like a university campus for modern adult living: where you can have your best friends around you, have the lively intellectual culture we found at The Archive, and live in an area of a city where you just run into people you know and want to be around. In the Neighborhood-- I live at the southern tip of it-- I run into a lot of people, which is really interesting, and have spontaneous interactions in a way that in a normal city I wouldn't have.


Ben (01:52:26):

That's amazing. I really think that's an underrated thing. I wish in my twenties I had been within walking distance of my close friends. The only piece of advice, listening to that, is that I would design it, if at all possible-- and it might not be-- to take you all the way to the end of life. So when you're 60, 70, 80, that would be great.


Kanjun (01:52:50):

That is the goal. We'll see what kinds of institutions we need to build. In a lot of ways we're constructing a new type of institution, a new way of living. It's not really centralized; it's a set of decentralized institutions. So we're planning schools, and then eventually there will be end-of-life types of things: for when you're growing older, when you want to leave a legacy, et cetera.


Ben (01:53:15):

Mini town within the town. Well, that's great. That advice sounds really great. So choose your friends carefully.


Kanjun (01:53:26):

Yes.


Ben (01:53:27):

And with that, Kanjun, thank you very much.


Kanjun (01:53:30):

Thank you. This is super fun.

In Podcast, Science, Dance Tags Podcast, AI, Kanjun Qiu, Metascience

Michael Nielsen: metascience, how to improve science, open science | Podcast

November 15, 2022 Ben Yeoh

Michael Nielsen is a scientist at the Astera Institute. He helped pioneer quantum computing and the modern open science movement. He is a leading thinker on the topic of meta science and how to improve science, in particular, the social processes of science. His latest co-authored work is  ‘A Vision of metascience: An engine of improvement for the social processes of Science’ co-authored with Kanjun Qiu (open source book link). His website notebook is here, with further links to his books including on quantum, memory systems, deep learning, open science and the future of matter. 

I ask: What is the most important question in science or meta science we should be seeking to understand at the moment?

We discuss his vision for what a metascience ecosystem could be; what progress could be; and ideas for improving the culture of science and its social processes.

We imagine what an alien might think about our social processes and discuss failure audits, high variance funding and whether organisations really fund ‘high risk’ projects if not that many fail, and how we might measure this.

We discuss how these ideas might not work and might be wrong; the difficulty of (the lack of) language for newly forming fields; and how an interdisciplinary institute might work.

The possible importance of serendipity and agglomeration effects; what to do about attracting outsiders, and funding unusual ideas. 

We touch on the stories of Einstein, Katalin Kariko (mRNA) and Doug Prasher (molecular biologist turned van driver) and what they might tell us.

We discuss how metascience can be treated as a research field and also as an entrepreneurial discipline.

“...How good a use of the money actually is? Would it be better to repurpose that money into more conventional types of thing or not?' It's difficult to know exactly how to do that kind of evaluation, but hopefully, meta-scientists in the future will in fact think very hard and very carefully about how to do those kinds of evaluation. So that's the meta-scientist research discipline.

As an entrepreneurial discipline, somebody actually needs to go and build these things. For working scientists it's often remarkably difficult to do that, because it doesn't look like a conventional activity-- this isn't science as normally construed. Something that I found really shocking-- you may be familiar with it, and hopefully many listeners may be familiar with it: the replication crisis in social psychology. Most famously, in 2015 there was a paper published in which 100 well-known experiments in social psychology were replicated. I think about 36% of the significant findings were found to replicate, and typically the effect size was roughly halved.

So this was not a great look for social psychology as a discipline, and it raised a lot of questions about what was going on. That story I just told is quite well known. What is much less well known is that, going back many decades, people had been making essentially the same set of methodological criticisms: talking about the file drawer effect, talking about p-hacking, talking about all these kinds of things which can lead to exactly this kind of failure. And there are some very good papers written-- I think the earliest I know of is from the early sixties; certainly in the 1970s and 1980s you see these kinds of papers. They point out the problems, they point out the solutions. "Why did nothing happen?" "Well, because there's no entrepreneurial discipline which actually allows you to build out the institutions which need to be built out if anything is actually to change."
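
The file drawer effect Michael mentions can be reproduced in a few lines (a sketch with invented parameters, not from the essay): if every lab studies a true effect of zero, but only statistically significant results get published, the literature ends up reporting a solid effect that replications will then fail to confirm.

```python
import math
import random

rng = random.Random(7)
n, studies = 30, 10_000
published = []
for _ in range(studies):
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]  # true effect is zero
    mean = sum(sample) / n
    z = mean * math.sqrt(n)                           # z-test, known sd = 1
    if abs(z) > 1.96:                                 # only "significant"
        published.append(abs(mean))                   # results get published

print(f"published: {len(published)} of {studies} (~5% false positives)")
print(f"mean published |effect|: {sum(published) / len(published):.2f}")
# A same-sized replication of any published study will itself come out
# "significant" only about 5% of the time -- a built-in replication crisis.
```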


We discuss how decentralisation may help. How new institutions may help. The challenges funders face in wanting to wait until ideas become clearer.

We discuss the opportunity that developing nations such as Indonesia might have.

We chat about rationality and critical rationality.

Michael gives some insights into how AI art might be used and how we might never master certain languages, like the languages of early computing.

We end on some thoughts Michael might give his younger self:

The one thing I wish I'd understood much earlier is the extent to which there's kind of an asymmetry in what you see, which is you're always tempted not to make a jump because you see very clearly what you're giving up and you don't see very clearly what it is you're going to gain. So almost all of the interesting opportunities on the other side of that are opaque to you now. You have a very limited kind of a vision into them. You can get around it a little bit by chatting with people who maybe are doing something similar, but it's so much more limited. And yet I know when reasoning about it, I want to treat them like my views of the two are somehow parallel but they're just not.

Available wherever you get podcasts. Video above or on YouTube and transcript below.

PODCAST INFO

  • Apple Podcasts: https://apple.co/3gJTSuo

  • Spotify: https://sptfy.com/benyeoh

  • Anchor: https://anchor.fm/benjamin-yeoh


Transcript: Michael Nielsen and Ben Yeoh (only lightly edited)

Ben

Hey everyone. I'm super excited to be speaking to Michael Nielsen. Michael is a scientist at the Astera Institute. He helped pioneer quantum computing and the modern open science movement. He is a leading thinker on the topic of meta-science and how to improve science, in particular, the social processes of science. His latest co-authored work is, “A Vision of Meta-science: An Engine of Improvement for the Social Processes of Science.” Michael, welcome.

Michael

Thank you so much for having me on the podcast, Ben.

Ben (00:33):

So first question, big question of science. What do you think is the most important question we should be seeking to understand around science or meta-science today?

Michael (00:45):

Okay. Science and meta-science are such large subjects that it's impossible to give a really comprehensive answer to that, so I'll just stick with science itself. I think there's this really interesting transition happening in science at the moment. For the longest time, the way we made new objects in the world was, in some sense, by inspired tinkering. And over the last 20, 30 years we've been going through a transition where we actually have a theory of the world that's pretty darn good, and we're starting to use that theory to inform how we build new objects. What I mean by that is, sort of atom by atom, we are starting to do assembly.

We're not doing it ad hoc. We're actually able to design things from first principles. I think this is a really large change in technology. Over the next hundred or 200 years, we're going to figure out how to do this kind of first principles design whereas before it was always there were some external objects lying around in the world. Maybe you discover copper or iron in the Iron Age and you make use of it but there's no principled underlying theory. So I think that's actually a very exciting thing which is underlying a lot of changes in science over the last 20 or 30 years and I expect it will be a really large force over the next, in fact, probably centuries.

Ben (02:28):

You sketch with Kanjun a really dizzying vision of what a science ecosystem could be. And I understand your point is there's a lot of variety that we could expect to see in a really healthy ecosystem. Then right at the start of your essay or book, I think you have this interesting provocation around what an alien would think, when presumably an alien would understand the laws of gravity, the laws of physics. Although if they've time traveled, maybe there'd be something else on gravity. But you make the point that actually, around the social processes of science, there might be some elements that an alien finds the same. Maybe something like a controlled trial. You would've thought that design would also stick in an alien culture, and some things that an alien might find really very different. So when you talk about some of these first principle building blocks, are you thinking of the principles that an alien would hold to be true? And in which case, which of those principles do you think we've discovered already?

Michael (03:33):

Yeah. Scientists certainly often obsess quite a bit about things like their citations and their number of publications and their h-index and all these other kinds of things. And it's pretty difficult to imagine that an alien scientist or an alien society would've rediscovered those. They don't seem like platonic elements of the world just out there waiting to be discovered. Although it also seems likely to me that alien societies might have some pretty strange constructs of their own that we would also laugh at. In terms of what's actually fixed, I would frame that question as a question about fundamental meta-scientific principles. Are there particular ways in which it's good to organize the social processes of science to enable discovery? So an example might be for example, in the 16th century, Francis Bacon had a certain number of interesting ideas, but a couple-- One was that there should be extremely limited deference to authority.

At the time of course, there was still a great deal of deference to the church, to the state, to Aristotle, to the other great thinkers of antiquity. And Bacon said, "No, that's not right. Really, we should get away from that. In fact, nature is the final arbiter." And for anybody, if you do an experiment and it contradicts what Aristotle says, it's so much the worse for Aristotle. You don't simply defer just because the great man of 2000 or whatever, 1500 years earlier said so. There's a whole bunch of notions there. But this notion of deference to nature rather than to authority, and freedom of inquiry-- those are examples of meta-scientific principles where I think it's reasonably likely that in fact other alien civilizations, to use your framing, might well have discovered things which are quite similar, as kind of best practices for how to organize the process of science.

Ben (05:43):

And how well do you think we understand how we've made progress in science? Because there are essays out there saying, "Well, we don't really understand that much about how we've made progress. It's all been a little bit haphazard." But then some people say, "Well, no, actually we do understand some of these processes." Do you think we've understood very well the processes by which we've made progress? Or do you think it's barely touched and just been completely haphazard and we don't really quite know what we've been doing?

Michael (06:15):

We certainly have lots of rules of thumb. Economists will tell you that total factor productivity is somehow strongly related to the practice of science. People try and write papers relating patents to productivity growth and things like this. It's hard, certainly for somebody trained as a theoretical physicist, not to feel like we're still in the pre-Aristotelian phase of this. These are very gross ways of thinking; they just seem very coarse. We don't have anything like a detailed theory of how to design institutions to support discovery. But people are making little bits and pieces of progress, which is fun. I think of it as kind of... I mean, it's a proto field. It's a proto field now with a long history. You can find people proposing this sort of investigation very explicitly going back a century or so. And then you have people like Bacon, or in fact many others in the 16th and 17th century, who in some sense were doing what I would call meta-science.

Ben (07:31):

So still the early days, which is very exciting. So how should we go about improving the culture of science or these social processes? Perhaps one way of asking that-- because it's a whole big part of your book-- is that you pick up on this idea of high variance: things that funders may or may not support. So maybe I thought I would ask you, at least in that section, what do you think is the best idea and what do you think is the worst idea? What's the one that you would just not do, and the one that maybe we should do?

Michael (08:06):

So certainly the processes which we use now in universities and more broadly in the research culture, a lot of those seem pretty arbitrary and ad hoc. If you think about the NIH panel system or things like this, in many cases those kinds of systems weren't designed after a lot of careful reflection and thought. They were just done in a very ad hoc, contingent manner by people who rushed to get things done, which is terrific. But then when you don't have any process for actually upgrading those systems later on, you can get really stuck in quite a rut. So I certainly think that-- and I'm far from being alone in this-- a lot of people have considerable problems with the notion of essentially committee-based peer review of proposals for scientific projects.

You have five or 10 or sometimes more of your peers sitting in judgment of your proposal, and you get these kinds of averaging effects where the thing that is pretty acceptable to everybody is the thing that can get funded. And the idea that is maybe radically polarizing-- some people love it, some people hate it-- is very, very difficult to get funded. That will tend to die under that kind of process. And that process wasn't designed to be the best. It's just something that made quite a bit of sense early on as something to try out. But it's now a little bit frozen. I've heard so many people refer to it as the gold standard. I don't know in what sense it's the gold standard. Maybe the sense in which it's the thing more people complain about than anything else.

Ben (10:03):

Right. Okay. So that high variance would perhaps weigh against it. All right, I'm going to spit out a few of these and then we can maybe talk about why they don't happen, or which one is the worst or best on that high variance view. So I was very interested in: a failure audit, tenure insurance, an institute for traveling scientists, long/short prizes (prizes in general), having an anti-portfolio and an interdisciplinary institute. So out of that, maybe I'm just going to pick one first and then you can dwell on which of those ideas you really hate or really like. I'm really interested in why science hasn't really got a failure audit much built into it, because a lot of other industries have it. For some it's even mandated, either by regulation or law, that you have to go and look. If your bridge failed, you'd look and examine why exactly that happened.

And actually even in one of my domains, investing, you're always looking at your investing mistakes; what worked and what didn't, particularly what didn't. And yet at least on the funding part you haven't-- So like your high variance point, do they go looking back and say, "Well, committees all gave this an average score and we put it through. Was that a failure or not?" So that does raise the question of why something like a failure audit hasn't happened. You talk about some of the institutional bottlenecks maybe for that. So how good an idea do you think this is? Why hasn't it happened and what should we do?

Michael (11:33):

Just to fill in some background. We propose a large number of different ideas, more just as kind of grist for the mill than anything else, in this long essay with my collaborator, Kanjun Qiu. One of those ideas-- it's just a paragraph-- is the idea of a failure audit. It's that, for example, if a funder claims that it's serious about high risk, high reward research, then they should actually evaluate what fraction of their projects fail. And if the failure rate is not high enough there should be some kind of restructuring. Maybe the program officer gets fired, maybe the director's job is under threat, some kind of consequences if they really claim to be serious about this kind of high risk model. Whether that's a good idea or not is another question, but let's just assume for the sake of argument that it is.

So just a couple of points about that. One of many motivating factors for this was just reading a report from the European Research Council where they're doing an analysis or a retrospective on what they funded. So throughout this report, they talk very assertively about how they're engaged in such high risk work. But they also claim at the same time that 79% of the projects-- I think it's 79%, about 80%-- of the projects that they fund are extremely successful. So this is not quite a definition-- I don't know what definition of risk they're using. But when 80% of the things that you try work not just well, but really well, and you're also trying to claim that you are really engaging in highly risky behavior-- Well, there seems to me anyway like there's some kind of a mismatch. I can't speak to why they do this but it does seem a little bit confusing. This is certainly a common thing when I talk to individuals at many funders. They will talk a lot about wanting to encourage a lot of risk, but they're not assessing in any sensible way the extent to which they actually are buyers of such risk. And when that's the case, there's no real feedback mechanism which is actually encouraging them to move towards the behavior which they claim to want. So this is just sort of a simple-- There are many variations of this idea which one could try, but the idea is to have some kind of mechanism for doing that kind of alignment between people's stated intentions and the actual outcomes.

Now, when I've talked to people and individual funders about this, about trying some sort of variation of it, they're often fascinated by the idea. I've talked for hours to people at some funders about it but they won't do it. And I think for really quite a natural reason which is, yeah, they're nervous about having consequences in this kind of a way. It's the most natural thing in the world to want to avoid those consequences. Of course, I don't like having my nose rubbed in my failures either. But it does leave this gap between what they think they're doing or what they say they're doing and what they're actually doing.

Ben (15:01):

Maybe we need to pay them more to offset that but with consequences. So you don't think it is because they actually want to see success and they're not interested in high risk? You think they are interested in high risk but they just don't particularly want to be audited on it?

Michael (15:20):

At the individual level I'm quite certain many are interested in high risk. There is of course this issue around, "What exactly does one mean by this?" So you talk to individuals about what specifically they're looking for, and it turns out that what they're looking for is actually quite different from what the scientists in their potential pool of fundees think is high risk. So also having that kind of mismatch creates something of a communication problem. So I live in San Francisco in Silicon Valley and I have some friends who work in startup and technology land. It's interesting there to talk about risk seeking behavior on the part of the venture capitalists. Certainly I've chatted with friends and acquaintances who've been pitching some idea for a startup company and they eventually realize that what they're pitching is not ambitious enough to get the attention of the top VCs. So there it's a case of-- Basically, the good they're selling is not ambitious enough for the VCs. In the case of many of the funders, they're saying they want to purchase risk but they're only buying safety, because eventually the scientists just start offering safe kinds of proposals.

Ben (16:50):

That's what's happening. There's something slightly akin in the investing world, one of my primary domains. If you claim you are a value investor and you actually define value, then when you're audited at the end of the year your own end investors will ask you, "How did you make your money?" Hopefully you made some. But if you didn't make it doing value and you bought-- In investing you can buy growth and all sorts of other things-- well, they say actually you failed. "You might have made us money so you might have had successful projects, but you didn't do what you said. And we can audit and check you." And actually, you check it in very many different domains. So this goes back to your VC example. VCs are expected to take these hundred x, a thousand x return type of businesses. Typically, a café business is not going to do that even though you might have an okay return. So you wouldn't expect to see a café business in a typical VC portfolio. And if you're pitching it, you're not really likely to get funded because it doesn't match what they're looking for. But there is an audit on that. It's both a success and a failure audit. So I thought it was really interesting that it didn't seem that science funding really had that.

Michael (18:00):

Not to my knowledge. I don't think I've ever talked to a funder that said that they really do a pointy version of this. Many of them do retrospectives. That's quite common. But they tend to be very soft in terms of the consequences. Actually, for an interesting reason, which is that those documents are both evaluations for the funder in terms of how they're looking at themselves-- you can view them potentially as self-improvement documents-- but they also tend to be marketing documents as well. In particular, it may be a document which is being used to communicate with government and with the wider public. Then there's a very interesting situation where, to the extent it's a marketing document, they're trying to use it in fact to raise money or to ensure the continued supply of money. That's obviously a difficult spot from which to be really brutally honest with yourself about your own failures, sort of absent really strong controls to ensure that you are honest in that way.

Ben (19:04):

I guess it's hard to put into that language of risk and you need something maybe simple. Like the person proposing the project has got to say, "Well, I honestly believe there's only a 4% chance of this project working." And you assess, "Well, I funded 10 projects at a 4% chance each, which actually means probably all of them should fail. If I got one out at the end, then I was really lucky." But it's quite hard probably for the originator of the project to honestly say, "Well, this has only a 4% chance of working" or something like that. Interestingly, I do a lot of analysis of early stage drug development, and we actually know before you enter phase one, you are talking round about a 1% chance, give or take a little bit within the error. So you kind of know what the average is when you go into that funding without having to say it. So if you get one in a hundred right you're doing about average on that. But we have a language of risk there which I don't see when I look at a lot of projects. And maybe that's because it's hard to do. But I guess the originator of the project or the funders should probably put a percentage chance on it, and if they funded something they should say, "Well, I honestly thought this had a 10% chance of working."
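
A minimal sketch of that arithmetic, assuming the illustrative figures from the conversation (10 independent projects, each with a 4% chance of working); these are just the example numbers above, not real funding data:

    # Portfolio arithmetic for the illustrative figures above:
    # 10 independent projects, each with a 4% chance of working.
    p_success = 0.04
    n_projects = 10

    p_all_fail = (1 - p_success) ** n_projects   # 0.96**10 ~= 0.66
    expected_successes = n_projects * p_success  # 0.4

    print(f"P(all {n_projects} projects fail) = {p_all_fail:.2f}")
    print(f"Expected number of successes = {expected_successes:.1f}")

So "all of them should probably fail" checks out: the single most likely outcome of such a portfolio is zero successes.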

Michael (20:12):

So I mean, in the conversation we're having we are putting a lot of emphasis on risk and working and failure. I guess I just want to emphasize-- that's a particular thesis. You could have a particular set of views around that. Those views may or may not be right. We're sort of, for the sake of the conversation, assuming that they are.

Ben (20:31):

Yeah, this might be completely wrong.

Michael (20:33):

It might be wrong. But once you have doubled down, or decided that you want to pursue a particular thesis, having mechanisms in place to ensure it's not just words but something that's actually being done is obviously a valuable thing to have. The one really large funder that seems to do this quite well... I don't know of a really formal estimate that's been done of the failure rate of DARPA's programs. But people pretty close to the agency have estimated failure rates of sort of 80 to 85%, which is truly remarkable given the scale at which they operate. And yet, I think most people at least view DARPA as having been really quite an outstanding success.

Ben (21:25):

And one of the things I'd emphasize on reading your essay is that you just put out so many ideas, some of which you think might not be any good at all or might not be correct. But the whole point is that there seem to be so many ideas out there which we haven't really tested at all. And you make the point that these might really be the early days of it. So I might just touch on a couple more ideas and then move on to exploring this idea of the funder as a detector. But I was very interested in your interdisciplinary institute, because I sense a lot of universities or research institutions think of themselves as interdisciplinary, or they have things-- and maybe some universities have that different type of scientist meeting. But the vision you gave of something which was truly interdisciplinary, I didn't see out there in the world. And when I read that I thought, "Well, that is actually surprising, because there's a lot of talk as well about how great it is to be interdisciplinary and how much you learn from far domains and near domains and mixing of people in agglomeration effects." But to your point-- and you say this with a lot of these things-- it hasn't really been done in the real world. Maybe assess the idea. Do you think an interdisciplinary institute is actually a good idea? And what would you do if you thought it was a good idea?

Michael (22:42):

So again, this is just another-- It's sort of an amusing idea. It's simply to point out that in fact programmatically, you can try and engineer serendipity. If you've got 30 disciplines and you simply hire three people at each pairwise intersection-- 30 choose two, which is 435 intersections-- to work at the intersection of any pair of those disciplines. Most of those researchers are going to fail. A few of them though will be doing something that is not supported anywhere else in the world and that happens to actually have a lot of latent value, and they may pay for all the rest. So again, it's really sort of a mechanism design point of view where you say, "Let's take seriously the idea that maybe actually we have a systematic problem with funding interdisciplinary work. Let's simply see how to do it in a scalable way, design the mechanism to achieve that end." And then there's a question, sort of a meta-scientific question, which is, "Is the value created actually sufficient to justify the upfront investment?" We don't know the answer to that question. You probably need to do a lot of work, I think, to make a prediction about it, but it's at least plausible as kind of an interesting way of going.

Ben (24:06):

And you could test some aspects of it.

Michael (24:09):

You could certainly do it at a low level relatively easily. The standard way in which a lot of interdisciplinary work is done is by people who are pitching their particular interdisciplinary work. I was involved in quantum computing beginning in the early 1990s. At that point it was very much a field that was seen as interdisciplinary. It was at the intersection of computer science and physics, and didn't really have a natural home in either place. And so a certain number of quantum computing people would go to universities and sort of pitch essentially an interdisciplinary kind of a project. So that's kind of a bottom up approach. This thing we've been talking about is much more top down, where you're just sort of saying, "Let's seek to create it at the intersection of every possible pair of disciplines." It might be that the bottom up approach is better, to sort of look for things which are bubbling up. Again, that's something that actually needs to be tested. If you wanted to do something which didn't involve quite as many resources as funding all 435 pairwise intersections, you could simply sample from that set, for example, and you'd get some interesting information.
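
A minimal sketch of the mechanism being described, assuming 30 illustrative disciplines and a random pilot of 20 pairs (the discipline names and pilot size are placeholders, not a real programme):

    import itertools
    import math
    import random

    # 30 illustrative disciplines; every unordered pair is a potential intersection.
    disciplines = [f"discipline_{i}" for i in range(1, 31)]
    pairs = list(itertools.combinations(disciplines, 2))
    assert len(pairs) == math.comb(30, 2) == 435

    # Funding all 435 intersections is expensive, so sample a random pilot
    # subset, as suggested in the conversation.
    random.seed(0)  # fixed seed so the illustration is reproducible
    pilot = random.sample(pairs, 20)
    for a, b in pilot:
        print(f"hire 3 researchers at the intersection of {a} and {b}")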

Ben (25:19):

Yeah. Actually it would probably be pretty interesting even with fewer pairs.

Michael (25:22):

Even with 10 or 20 pairs. Just randomly chosen. You think about, actually... A good example: much of the modern deep learning revolution has of course been enabled by GPUs. GPUs were created-- basically, my understanding is, for video games. Just because modern displays had gotten such high resolution they needed dedicated chips to drive them. They're also very good at just doing linear algebra in general. And so circa sort of-- the latter part of the [inaudible 25:58] or whatever we call that decade, there was a relatively small number of people who were both familiar with AI or familiar with neural networks and also had some experience of GPU programming. And those people had an interesting competitive advantage in getting into neural nets. I just don't think that the intersection of video game programming and artificial intelligence is something that you could have predicted was going to be a good spot to be at the intersection of.

Ben (26:29):

Yeah. Hard to get that right top down. And maybe that brings me onto one of your last ones, at least in this section, which was the anti-portfolio. You gave some quite interesting examples in the essay. For instance, one of the key scientists behind mRNA pretty much didn't get funded within the public university system. And then the university kind of celebrated her and said, "Well, actually we should definitely have funded her more." And then there are these other near misses or complete misses that we can see. Sometimes a scientist now being a taxi driver rather than being in science. I had this overwhelming sense of, "Oh my gosh. How much have we missed out on?" So I'm not sure if the anti-portfolio would really help with that, but I thought it was quite an interesting idea, borrowed maybe from investing and VC: "Well, these were the mistakes; we could have invested and we didn't."

But then the next step being, "Well, why did we miss those and why didn't we invest further?" Because some of it, like in mRNA, they could have corrected, because everyone thought, "We're not really ever going to get a drug out of this." So, very low chance; let's not really fund it. But to your first point, if you are funding high risk, this is exactly what we probably should state: "Government or public funding supported it for a while precisely because it had such a low chance of success." And the people in the field were telling you, "But if we make this work it's going to have a really high impact." I just thought it was really interesting how you sketch that out, and how we missed it. Do you think that's a systems issue? Because you can always have individual misses, and you make this point. But the way you sketch it out, it seems to me that it does seem to be more of a systems problem. How certain are you of that and what do you think we should be doing about it?

Michael (28:12):

So one of the most famous stories that you learn as a young scientist is that of Albert Einstein stuck in the Swiss patent office, unable to get an academic job. It's totally sort of a funny story. In this one year, 1905, he did plausibly four or five things that are Nobel Prize worthy which is pretty good for a guy who couldn't get an academic position, including I should say, he reconceived the notions of space, time, energy and mass which is not bad for sort of a side hustle. The thing that is omitted from this story is anything about what changed in the university system as a result. Did people say, "Oh, maybe we made a bit of a mistake here? Maybe this guy should have been able to get a good job." Of course, after having done that he quickly was made a professor and had a long academic career after that. But as far as I know, there was no postmortem done sort of systematically to say, "Why did we miss this person? Why was he actually not offered jobs before that?" That's sort of a spectacular example. You mentioned this example, Katalin Kariko, the scientist or one of the key scientists behind mRNA who in fact lost her job or was demoted, I guess, by the University of Pennsylvania. Had basically a lot of problems in getting to the point where she could pursue that work. Repeatedly turned down for NIH grants.

The University of Pennsylvania and the NIH will now happily take a lot of credit for her work. But as far as I know, they haven't done a serious postmortem internally. I mean, these are very spectacular, sort of very relatable examples. But if you've worked in science, actually you will know a lot of examples which are like this. But there is no systematic reform that follows. There is no way of conducting a postmortem which actually leads to systemic changes. So of course, it's fine. Any system at all is going to miss some people. The question that I think needs to be asked is, "How do you change in response? Do you just accept that that's the price?" Maybe that's the right response to the Kariko example but maybe it's not. Doug Prasher, the scientist who discovered green fluorescent protein, which later won the Nobel Prize-- At the time the Nobel was awarded for that work, he was working as a shuttle bus driver at a car dealership, which is fine work but probably not the best use of his talents. There are many such stories and just no way for them then to feed back to cause systemic change. That's the issue, not that mistakes are made.

Ben (31:20):

Yeah. And I worry that although science is a discipline where outsiders with good ideas can be proved right by the science, it's not as strong as we would believe. And you have all of these cases, whether it's minorities or outsiders, or your CV doesn't come from a prestigious university, or your idea is just way outside the mainstream, where the system just doesn't seem to shed light. So I'm not exactly sure, but I do sense that might be a problem. Then it's interesting because you go on to make this analogy of the funder as a kind of detector system, where they're looking for intellectual dark matter out there. And that as a model is like, "Well, if it's out there, maybe there could be more ways of finding that type of dark matter." When I was thinking about that I was reflecting that in some ways you're also incentivizing more intellectual dark matter to form in ways which won't work in the current systems that we have. How useful is that analogy in thinking about, "Well, are they really just looking for stuff which is out there, and if we change some of this system, could we find more of it?" Because to your point, that dark matter, like dark matter in the universe, may just be so much wider than what we can currently see.

Michael (32:40):

Okay. Let me just give a really practical example. So a friend of mine, Adam Marblestone, has this notion of what he calls a focused research organization. A focused research organization is basically-- think sort of tens of millions of dollars-- an independent organization which typically has one very specific goal and will create a particular type of a tool or a particular type of a data set. So for example, Cultivarium is an example of a focused research organization. And what Cultivarium is trying to do is develop synthetic biology for non-model organisms. So a lot of the work that's done in synthetic biology at the moment is typically done using E. coli as the particular organism which is modified.

So we have great tools for doing it in E. coli, but we don't have great tools for doing synthetic biology in a lot of other organisms. And so what they're trying to do at Cultivarium is they have a very specific list of tools that they want to create so that they can be used in certain other types of organism. So this is a very specific kind of-- It's a crossover between engineering and science. In many ways, this is something that is extremely likely to succeed because they just have kind of a checklist. "We need to do it here, here, here, here, and here. Here are some of the bottlenecks. Here's what the outputs are going to be. We're going to exist for a certain fixed period of time. It's going to cost--" I have no idea how much it's going to cost, but let's say 30 million dollars or something like that.

This is just a type of scientific problem that has certainly been solved in the past, but it's always been solved in a very bespoke fashion. So something like the Large Hadron Collider, the Human Genome Project, LIGO, the gravitational wave detector-- these are all examples of projects where you had a very specific outcome very clearly in mind. You had a very specific process which was intended to get you to that goal. You just needed a large enough organization, the right set of resources to do it. But those were funded in a bespoke fashion. Basically, people had to go off and sort of make an individual case. The clever thing about the focused research organizations is they're trying to do it in a scalable way. They're creating this container, Convergent Research, which seeks out people who have ideas for things which fit this general template.

And it just turns out-- What they tell me is, they talk to a large number of scientists about this and most of the scientists don't immediately necessarily have anything to say. They're not used to this kind of container. There was no funding vehicle for it previously. And so if they've had an idea which would be a good match for the focused research organization, they don't necessarily-- It's not something they've developed in the past because there was no avenue for taking it further. It was a form of intellectual dark matter, this kind of very nascent thing, and you got on with doing other things. You got your NIH R01 grant or whatever was available and you did that kind of work. So it's interesting-- what they're essentially doing with the focused research organizations is building a kind of a detector to search out this intellectual dark matter.

Hopefully people have many ideas-- very nascent in most cases-- for focused research organizations that might actually be a much better use of their talents than doing more conventional projects. So that's just kind of to sketch a very specific example of a person who identified a particular type of knowledge that at the moment, most funders just have-- They have nothing they can do with that. They're not set up to do it at all, but they created this template and now they're systematically searching it out. I like to think of it as-- It's almost an antenna for eliciting this kind of information. Most people of course say, "Oh, I don't have any ideas like that." But a few people say, "Oh, actually I have this very half-baked idea." And then they may go off for three months or six months, think about it a lot more, and then actually come back with a really solid proposal. It's very early days for the focused research organization. I think the first two were funded last year. So we'll know whether it's a good model in five or 10 years. But certainly I think it's very interesting as an example of expanding the range of things which funders look to fund.

Ben (37:22):

Yeah, that makes a lot of sense to me. So correct me if I'm wrong, but I think in the essay you make two points around meta-science as a field. First, that it should be treated as a research field. And I could see that, related to philosophy of science, history of science and things like that, this particular element of meta-science could definitely be seen as a research field. And then your other point is that it should be seen as an entrepreneurial discipline. So you actually have to trial things out, scale them, try these new social processes out. I'm interested in both, but on that entrepreneurial part, it seems particularly challenging to say, "Well, we've got this new field and now we've got to try a lot of things and then scale them out, which we see at the experimental level. But are we seeing this at the meta level?" Do you think this is the only way that we should pursue it? How strongly do you feel this is an entrepreneurial field, and what is it about it which means that you think, "Okay, this is what we've got to do. We've got to trial these small things. There are loads of ideas out there we don't know about, and then we scale and do it." That's kind of what entrepreneurs do and therefore I'm saying it's an entrepreneurial field. Have I got that right in terms of what you're saying and arguing?

Michael (38:35):

Well, there's certainly at least two components-- Actually, I would say three. We might come back to the third. One is just studying the processes people use: how well do they work? How can they be improved? What's not working at all, these kinds of things? So taking an evaluative approach. I mentioned, for example, the focused research organizations before. At some point in the future-- it's going to take a little while-- it's going to be very good to do an evaluation of those and to start to think about questions like, "How good a use of the money is it actually? Would it be better to repurpose that money into more conventional types of thing or not?" It's difficult to know exactly how to do that kind of evaluation, but hopefully, meta-scientists in the future will in fact think very hard and very carefully about how to do those kinds of evaluation. So that's the meta-science research discipline.

As an entrepreneurial discipline, somebody actually needs to go and build these things. For working scientists it's often remarkably difficult to do that because it doesn't look like a conventional activity. This isn't sort of science as normally construed. Something that I found really shocking-- you may be familiar with, and hopefully many listeners may be familiar with, the replication crisis in social psychology. So, I guess most famously, in 2015 there was a paper published in which 100 well-known experiments in social psychology were replicated. I think it was 36% of the significant findings that were found to replicate, and typically the effect size was roughly halved.

So this was not a great look for social psychology as a discipline and raised a lot of questions about what was going on. That story I just told is quite well-known. What is much less well-known is that in fact going back many decades, people had been making essentially the same set of sort of methodological criticisms. Talking about the file drawer effect, talking about p-hacking, talking about all these kinds of things which can lead to exactly this kind of failure. And there are some very good papers written in-- I think the earliest I know is from the early sixties. Certainly in the 1970s and 1980s you see these kinds of papers. They point out the problems, they point out the solutions. “Why did nothing happen?” "Well, because there's no entrepreneurial discipline which actually allows you to build out the institutions which need to be built out if anything is actually to change."
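
A toy simulation of the file drawer effect just mentioned-- my own illustrative sketch, not anything from the essay, with all parameters assumed-- showing how publishing only significant results inflates effect sizes and sets up exactly this kind of replication failure:

    import random
    import statistics

    # Toy model: many labs each run a small study of a weak true effect,
    # but only "significant" results get published (the file drawer effect).
    random.seed(1)
    true_effect, n, sims = 0.2, 20, 2000

    published = []
    for _ in range(sims):
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if mean / se > 1.96:  # crude significance filter: only these get "published"
            published.append(mean)

    print(f"true effect size: {true_effect}")
    print(f"mean published effect size: {statistics.fmean(published):.2f}")  # inflated
    print(f"share of studies published: {len(published) / sims:.0%}")

Replications of the published studies would then tend to find much smaller effects, echoing the halved effect sizes the 2015 project reported.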

So it's just academics doing what academics do; studying a problem, figuring out what needs to happen, maybe. But then they don't actually have the kind of infrastructure which is necessary to go and really make the changes. Turns out there's just a lot of institution building. And fortunately, that has happened in the modern era, but it required a tremendous amount of energy and foresight and intelligence and also, I should say, actually funding from some unusual sources. A lot of the money for this work actually came from the Arnold Foundation, run by the hedge fund operator John Arnold.

Ben (42:18):

It came from this different source of funding, which I think is interesting. It struck me that these institution builders may well not be bench scientists; they might well be meta-scientists. But actually the disciplines are potentially a little bit different for institution building. Actually, you see that in some other perhaps slow-to-change or slower-to-change industries. So for instance, law firms have traditionally been run by lawyers, but now they're slowly evolving. They realize that actually, "You know what? Being really good at law does not necessarily mean you're good at running your own organization." And they've started to get in people who might be better at that. Accountants are the same. They might be very good at accounts. Is that the same as running an organization, small or large? And it struck me that science has this similar problem, perhaps even more than law or accounting. It should maybe be attracting not just those who've studied physics, but people who could have studied anything and are interested in this, to help build those institutions. And that's the kind of entrepreneurial part. Do you think that can happen-- people who understand just some of the science building those organizations, as a whole different discipline? Or do you think it's got to come from scientists themselves?

Michael (43:33):

I think, just looking at examples where this has happened successfully in the past, there are a certain number of people who largely come from outside of science who have contributed successfully. But with that said, most of the examples begin with a scientist who is very close to the particular problems, understands them well, feels them very acutely, has an existing network and begins to develop solutions from there. Often when they do it, a classic pattern is that they need to leave their home institution. Sometimes they need to leave science entirely. They're often beginning to seek very unconventional sources of funds. Actually, I should say the default thing that happens under those circumstances is that it just fails, because there is no sort of support mechanism. It's done in this very strange, very bespoke fashion.

Brian Nosek, who was the person behind some of this work on the replication crisis, started the Center for Open Science sort of as an entrepreneurial kind of an organization to build out a lot of the required infrastructure. He took a leave of absence from his university position, partially just because the kinds of people they needed to hire were quite difficult to hire conventionally in an academic environment. So that's an example where it was done successfully. But when I talk to people who have ideas like this, they don't have any model for how to do this successfully.

Ben (45:17):

Yeah. I was talking to innovation historian Anton Howes and he has this idea that back in history, you essentially get people who were disgruntled but couldn't let it go. And from that you have all of these types of things which happened. He's got a lot of historic examples, more of individual inventor-scientists forming these things, but scaled slightly differently it does seem to apply today. And you think about homes for other types of thinkers. I think of the Santa Fe Institute on complexity. There was no home for these-- I guess you'd call them slightly maverick-- scientists and thinkers, although maybe not today. They couldn't find homes in their home institutions, and so homes had to be built for them.

The other thread of work I see through the essay and from your previous work, which I feel is quite important although it doesn't necessarily touch on this exactly, is on decentralization and also open science. Entrepreneurial stuff seems to work better decentralized. We can talk about how much you can do top down versus bottom up, like we said on pairing disciplines. And then this idea of open science-- Even going back in history, how much you could build on what was public knowledge versus whether it was shared or whether you kept it as patents. But this idea that if you are thinking of just the glory of science as opposed to any profit motive, then building on other people's knowledge, or knowledge out there, might be quite useful for speeding that up, which is a kind of open science idea. How important do you think are those two threads to expanding the field of meta-science, particularly the entrepreneurial part? Or are they really separate threads which may or may not happen and don't have to intersect with how meta-science develops?

Michael (47:00):

I think that they are sort of separate. The point about decentralization is really just this point that you want change to be able to come from anywhere. So in particular, you don't want gatekeepers who are able to inhibit change. At the level of the ideas of science, we have a pretty good balance there. Certainly a large number of Nobel Prizes are awarded to people who, a long time ago when they were doing the original work, were the outsider. They were the grad student or whatever. And maybe the famous expert was saying, "No, this can't be done." Certainly there are examples where science is not so receptive to outside ideas but overall, it has a really remarkable track record of accepting that kind of thing. An example actually given in the essay which I rather like is the determination of the structure of the DNA molecule, done by Watson, Crick, Franklin and arguably her student, Gosling, as well.

They were very much kind of scrappy outsiders. They were at well-known institutions but they were almost completely unknown. The other person who was in this race at the time was Linus Pauling, who was the most famous chemist in the world. He was a Nobel Prize winner. I don't know whether he'd won his second Nobel Prize by that point, but he'd certainly won one. And it wasn't just any Nobel Prize; it was for a truly spectacular piece of work. Pauling actually announced that he'd found the structure first, before them, but he was wrong. The remarkable thing is Pauling accepted this immediately. They pointed out-- I think it was Watson who pointed out to him the error that he'd made and then showed Pauling the structure that they'd found, and Pauling just looked at it and realized that he'd goofed.

And this kind of situation where somebody with a good idea can essentially emerge almost immediately victorious over the incumbent is obviously a very healthy thing. It's possible in science at the level of ideas; it is much harder at the level of institutions. If you have a much better idea for how the NIH should be dispersing its funding, good luck. I mean, the only way you can be taken seriously is if you are already in power. If you're at the level of the director of the NIH or you have a Nobel Prize or something like that, sure, you can get taken seriously, maybe, but you're not going to do it as a grad student like Watson or Crick or somebody like that. So that ability to do decentralized upgrades in the social processes, the institutions of science, is something which we just have no mechanism for at the moment. And I think that's the reason for much of what I see as sclerosis.

Ben (50:10):

And so is your solution new institutions, or can you do it with new arms of old institutions?

Michael (50:16):

A few things. I mean, there are some patterns that work decentralized already. I won't talk about them now. One thing that would help a tremendous amount is a much more serious discipline of meta-science which is able to do evaluations which are dispositive. So I mentioned before this example of the replication crisis in social psychology. That is an example where you had an outsider who was actually really able to change an entire discipline because the strength of the evidence was so good. But it was also rather a peculiar situation where sort of the key paper-- arguably the key paper that they wrote-- had 270 authors who replicated a hundred papers over multiple years of work. So it wasn't just get a data source, write some Python scripts, generate a few nice graphs.

This was a very serious kind of a project where they had said what level of evidence would be so overwhelmingly convincing that in fact even people who hated this conclusion would be forced to accept it, or would be forced to... I think really it's change your mind; that's the relevant question. So even somebody who is initially hostile would change their mind. At the moment we just don't have very strong techniques for doing that. This is an example of where it was done, but what you would like is many more such examples and in particular, just a lot of people to work on developing techniques which are strong enough to do that. I think there are a few people who are doing that. Pierre Azoulay, an economist at MIT, has I think also done some very strong work. I don't think it quite meets this bar, but it's getting close to the point where the evidence is so strong for certain processes that you might actually start to think, "Oh, I was wrong before in what I believed was the right way to do things." So that's kind of the bar. We introduce this term, decisive meta-scientific result, meaning a result which is so compelling that it would actually cause somebody initially skeptical to change their mind.

Ben (52:46):

Skeptics changing their mind. That's a good hurdle to cross. So I'm going to see if I can pin you down then on a meta-science question or idea. What would be the one thing that you would bet on most or most want to change in meta-science, or which experiment to run do you think would be most valuable? Or we could also do it like the high variance exercise: what do you think is the best idea in meta-science and what do you think is the worst idea? And maybe we should go from the worst because then we'll be... Anyway, best and worst?

Michael (53:17):

Going to the bottom of the barrel you can always go further down. I'm certainly very fond of the idea of people thinking much more explicitly about increasing the rate at which new fields are founded. Actually, some funders do a certain amount of this work. They think explicitly in terms of trying to support fields in their relatively early days, but never that early. Something funny happens at the beginning of new fields very often. It's often very hard for people to get support for their work. I was involved in quantum computing in the relatively early days. I started working on it in 1992 just doing things like, “What journal do you publish in?” Surprisingly hard question. If you submit it to a lot of journals that you might think were relevant they'll just say, "I'm sorry, what is this?"

So it requires some editors to actually be a little bit friendly. If you try and arrange something like a scientific meeting, actually people are like, "What is this?" So there are all these sort of very interesting barriers. But at the same time if you look at the papers which were being published, retrospectively they're enormously important; not all of them, but many of those papers are incredibly important. So there's kind of this very interesting mismatch where the work that is being done is often of incredibly high value relative to the resources that are being put into it. But also the barriers are much higher than for much more conventional work. And I've used the example of quantum computing because I saw that very firsthand. But in fact, from reading histories and talking to people, it seems to be quite a common feature across many fields.

There's often a lot of really very important low hanging fruit in the early days, and yet for pioneers, surprisingly often it's very difficult to obtain even extremely minimal resources. So certainly I think there's a lot of room for funders to think about this question of how we can accelerate the rate at which new fields are being produced. One of the things that they can do is to think much more seriously about the question, "What are the very early signals which currently we're not able to detect at all?" If you have a really good story, a really compelling kind of an account of how this is all going to play out and so on, that's actually a sign that maybe you're a little bit later on in the whole process. Very few fields start that way. Alan Turing inventing computing in the 1930s-- he certainly didn't anticipate video games or something like this. He was actually trying to solve a logic problem. So many fields are born in very strange, almost entirely illegible ways. And this is one of the reasons why funders have trouble funding a lot of that work very often. But it's also an opportunity for them. So that's something I would love to see done.

Ben (56:41):

I could definitely see new fields. I've seen it a tiny bit from afar in science, but actually more in the arts, when you're talking about new ways of working. The interesting thing I've seen in the very early days-- I would call it something like a quarter-baked idea. So the person is working on something else as their main thing and they've got this quarter-baked idea, and typically almost everyone in the world, or in that room, won't understand what the person's talking about. It's quarter-baked, so they don't even have the language to describe it. Quantum computing is quite a good example. You have to invent the terms and the language, which are derived from other things but are their own unique thing. Yet what I see, at least in the art world-- and I think tangentially I see it in some science as well-- is that this quarter-baked idea is part obsession.

You still have to do other things because you've got to do other things, but it nags at you-- it nags at the person and it doesn't go away at the back of their mind. And then somehow the successful ones I see get a piece of usually chance funding or time or something, and they work it into enough of a language that someone else can then potentially get it-- someone with status or something saying, "You know what? This is worth a bet. I can barely understand it, but I think I understand enough of it to know that it's true as opposed to completely wacky." You see that more easily in art, and I guess the stakes are lower in art. Then if it develops, it develops into a language. You see this in the language of art. They develop a whole other language and go, "Okay, that is a compelling vision," which in its very early days everyone thought was crazy or couldn't even understand, and maybe wasn't art. Sometimes it is just too early for its time. So I give a lot of credence to this quarter-baked or fifth-baked idea thing, where they're trying to express something in a language that you can't quite understand. But I can completely see it from a funder's point of view: if they can't understand it, well, how are you going to fund it? Except that maybe that's the signal-- they're really obsessed about it, you can't understand it, but they're trying to invent something new. There seem to be tentative signs around it, still a very low probability of success, but you have got these kinds of meta-science signals which I think could be explored.

Michael (58:58):

I think the difficulty there, the challenge for a funder, is that the natural tendency is to want to wait to see whether or not things will become clearer. But of course they don't necessarily become clearer because, if the person is not being supported at all, well, the demands of everyday life just mean that they will mostly do other things. So that's really quite a significant barrier. To your point about sponsors: the English physicist David Deutsch wrote what is arguably the first really serious paper-- one of the first serious papers-- about quantum computing, in 1985. It was communicated to the journal by Roger Penrose, who has since won the Nobel Prize. My understanding is that actually Penrose was very skeptical of the paper, but skeptical in a friendly enough fashion to be willing to have an opinion.

And really, certainly the community of physicists had almost no opinion at all for 15 years. It was just certainly in the late nineties that the most common conversation I would have with other physicists about quantum computing was, "Is this physics at all?" There was no interest. It was just, "Is this physics at all?" Got that question from many people who now work full-time on quantum computing. But I think it's really to Penrose's credit and to your point, it's an example of how having this kind of friendly, maybe somewhat skeptical but also ultimately supportive kind of sponsor can be very helpful.

Ben (01:00:42):

Great. There are two other things in the essay which you sort of look at askance which I'm quite interested in. One is essentially science in non-European, non-American type of institutions. I guess India, China, Russia, maybe to some degree, which you only fleetingly talk about but I was quite intrigued by. Do you think there's any particular learnings from that? There's a whole other type of, I guess sibling ecosystem. Anything we can learn from that or do you think it will go on a different track and should it go on a different track rather than converge?

Michael (01:01:19):

Well, you will know from the essay certainly we hope that it will go on a different track or at least there won't just be mindless duplication of the existing-- There's obviously a large research ecosystem in the United States, in the UK, in Europe and many other countries. But for countries who have a lot of scale in terms of population and whose economies are growing very rapidly, obviously China and India are the two most obvious examples. But you also have places like Brazil.

Well, there are many other places-- Indonesia, for instance-- which match this kind of description. They have a really interesting opportunity. Historically, because their economies have been relatively small-- small relative to their population at least-- they have tended not to put that much of their GDP into developing a research ecosystem. Now they're at a point where in fact they are very rapidly developing that research ecosystem. And the question is, "Do they do the same kinds of things? Do they duplicate the NIH? Do they duplicate UKRI, or do they try and do something more adventurous?" I certainly hope that they'll take the opportunity that they have to study these systems, hopefully identify what they think are some shortcomings, and then maybe make some big bets about doing things in somewhat different ways.

They can actually probably to some extent have their cake and eat it too. Nothing says that they can't spend 70% of their research budget in a way that looks relatively similar to the UK or to the United States, but then also have a big chunk which is spent in very unconventional ways. Sort of just a way of saying, "Well, maybe we can actually do this much, much, much better." As far as I know this is not happening. I don't know of any large scale initiative to do it. It is a little bit encouraging that certainly a lot of the innovation you do see actually comes from small countries that have just decided, "Let's try a little bit of a random experiment. We're not going to beat the United States on scale. Maybe we can do it in some other way." Some of the foundations in Denmark do very interesting experiments. The New Zealand Health Research Council did, I think, the world's first large scale experiment with randomized funding, or lotteries, where you just sort of give some of the money to applicants at random. That kind of innovation I think is fairly natural in places which are already somewhat peripheral. But I certainly hope that India and China will do some experiments in that way. They have this opportunity.
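
A minimal sketch of what such a funding lottery can look like-- the applicant pool, screen and award count here are assumed placeholders, not the New Zealand scheme's actual parameters:

    import random

    # Illustrative funding lottery: screen for eligibility, then draw at random.
    applicants = [f"proposal_{i}" for i in range(1, 101)]  # 100 hypothetical proposals
    eligible = applicants  # in practice a basic quality/eligibility screen comes first
    random.seed(42)        # fixed seed so the illustration is reproducible
    awards = random.sample(eligible, k=10)  # fund 10 of the 100 at random
    print(awards)

The design choice worth noting is that randomization removes the committee-averaging effect discussed earlier: a polarizing proposal has the same chance as a safe one, once it clears the basic screen.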

Ben (01:04:13):

And I think about these mid-size countries. So Denmark's already well on its way; Singapore maybe. But take places like South Korea. I think about their social housing as an example. They largely skipped wired phones and went straight to mobile, skipping all of that infrastructure because it belonged to a generation they no longer needed. I'm simplifying the story. And I wonder whether there might be something similar in those countries which are already there, and, like you say, there's a variety. The other thing you looked at askance: you mentioned Hume's is-ought, because all of this metascience does not tell society or anyone what should be studied, whether that should be climate, physics, quantum computing, how the economy is run or anything like that. Do you have a sense of where we are on that balance? Do you have any projects where you think, "Well, these are the really important science questions"? Do you think the mechanisms by which science decides are any good at the moment? What should we be exploring here? Or is this just that much further away from metascience, which should leave it well alone and stick with trying to develop its own field?

Michael (01:05:34):

So I guess my-- This is partially a personality thing. I certainly have very strong opinions about the way the world ought to be, but I also feel that's not my personal comparative advantage professionally. It's more a personal thing; something to chat about with friends and to have opinions about as a civic actor. So, a very concrete example: in the United States, for somewhat arbitrary historical reasons, there are several big funders. There's the National Institutes of Health (NIH), which does biomedical research, typically with a human focus. There's the National Science Foundation (NSF), which does basic research. There's DARPA, which does defense-oriented work. There's the Department of Energy, which does a lot of high energy physics. For those organizations, the way their budgets change is determined largely at the political level.

So the NIH, for example, has actually grown consistently at a rate roughly 1% per annum faster than the NSF, if you look over a long enough time period. And I think this is most likely a reflection of politics and political constituencies. Taxpayers, perhaps understandably, are rather more interested in research which seems like it might be connected with improving their health than in the more speculative kind of basic research the NSF tends to fund. That might be a bad choice, and there are certainly some 'is' questions which can be asked. For example, you might ask, "What fraction of human health span extension or change has actually been due to discoveries made at the NSF versus discoveries made at the NIH?"
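[A 1% annual differential sounds small, but it compounds. A quick back-of-the-envelope, reading "1% per annum faster" in the stylized sense that one budget's growth factor is 1.01 times the other's each year; these are not actual NIH or NSF figures.]

```python
# Two budgets start equal; one's annual growth factor is 1.01 times the
# other's. The ratio between the budgets after n years is then 1.01 ** n.
for years in (10, 30, 50):
    ratio = 1.01 ** years
    print(f"after {years} years the faster-growing budget is {ratio:.2f}x the other")
# after 10 years: 1.10x; after 30 years: 1.35x; after 50 years: 1.64x
```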

It's quite plausible that that's a reasonable research question, or might be turned into one with a lot more thought than I've just put into it. And you can imagine trying to disaggregate it in a variety of ways. That might then feed into decisions at the political or values level, but I'm not going to try and wade into that. I think you want to develop good techniques which can be used to answer that kind of question. But how one reasons about it after the fact is an entirely separate question, and to some extent I want it to remain separate. You might decide it's tremendously important to have a very strong defense research establishment, in which case you would perhaps be strongly in favor of increasing DARPA's budget. But perhaps you're not, and that's not a question that can be settled by studying the way the world is. It's a question where you need to bring some of your own values. But to the extent that you can decouple those two things, I think both sides really strongly benefit.

Ben (01:08:47):

Well, maybe delving into that just a little bit more. This movement of effective altruists, or EA, does have a strong sense of this. There's an interesting part of the EA movement which is very concerned with progress and with new institutions, which is a kind of interesting overlap. You had some really interesting initial reflections on EA in a blog, and EA by its nature solicits criticism, so I thought about that. You had a conversation with some EAs, and they're obviously worried about existential risk, pandemics, biosecurity, and how you can think about utility and doing good. Having gone through all of that, which is on the record already, has it changed the way you think? Have you changed your mind at all about EA and how they think about these things? Or have you arrived at roughly the same place, still with an open mind, but it's kind of like, "Well, they have their way of thinking, which is good"?

Michael (01:09:50):

Actually, I'm not sure I'm quite understanding the question, Ben. Can you...?

Ben (01:09:54):

Yeah. I guess, having had all of these conversations around effective altruism and what they're thinking, do you think that is a good way of thinking about the world, and has your view changed since your blog and conversations on EA?

Michael (01:10:04):

Okay. So I think the connection you are drawing to your previous question is that, in some sense, they have a set of values which they're using to drive their giving in a whole lot of ways.

Ben (01:10:15):

And essentially they've decided this is how things ought to be. So that's one worldview. I'm a little bit too old; EA had not been invented when I was at university, so I've come to it later, and it's like, "Oh, I find that quite intriguing. It's not my way." And obviously you've done some work on it. So I was intrigued, partly because that's the connection: yes, they have a way, and also you've done some thinking which might have changed. So it was kind of, "Is the EA way of doing it right? How much does it resonate?" It's a way of thinking about the world, and it's a new movement, as opposed to the utilitarians who were there before. At least some of them have quite strong views on how to do this: maximizing good, or maximizing utility.

Michael (01:11:04):

Yeah. It's a really interesting question. They are using their values to drive their giving. I suppose you can break your question down into two pieces. One is, "Are they just mistaken about certain facts about the way the world is?" And actually, I probably think they are. But then there's also the question of, "How do I align with them on values?" And it just so happens that for personal reasons I don't particularly align with them on values. I like many EAs a great deal, and I have some close EA friends, but it's not for me. There's a big, long personal conversation to be had about why that's the case.

I don't know that it's necessarily so interesting why I have the particular set of values I have. That's of interest to my close friends and family and probably not to many other people. The question you might find more interesting is, "How do I think they're mistaken about the way the world is?" Certainly, I think EA as actually practiced, when oriented towards research, doesn't appreciate nearly enough the value of illegible, very early stage work. Most of the things they seem strongly oriented to support are very big, very legible goals. The one that people perhaps talk about and think about the most is AI safety.

It's a big scary monster which is very easy to describe and to think about; it is a big goal. But historically, a lot of the most significant work on progress has not been legible in advance. I mentioned Turing before as an example: he was solving a logic problem and didn't realize it was going to become maybe the most important industry of the 20th and 21st centuries. He wasn't working towards a big goal. And this is such a common pattern across research. I tend to think the EA organizations systematically undervalue that. When I talk to EA friends they're perfectly aware of this; I don't really understand why they're not more indexed on it, though.

Ben (01:14:05):

Yes, it's hard to... I guess there's a corporate truism that what you can't measure doesn't get managed. But there's an aphorism above that: not everything that can be counted counts, and not everything that counts can be counted. If you only try to do the things you can measure, you actually go even more wrong because of those uncounted things.

Michael (01:14:36):

I love the book "Seeing Like a State" by James Scott, where he talks about the way what he called the high modernists have often caused problems in governance. They will adopt some goal as being good for the locals in some part of the country they're governing, but they're ignorant of so much local knowledge which is actually important for the functioning of those systems. And so, with completely good intentions, sometimes they make things go very badly backwards. This is sort of the government version of that: figure out what you can measure, use it to manage, and then sometimes be very surprised when in fact it's a disaster.

Ben (01:15:22):

Some of the disasters in international aid can be traced back to those roots. I guess I partly ask because I see EAs building new institutions and new organizations, and some of them will be quirky, and that's probably a good thing.

Michael (01:15:38):

That I'm totally in favor of. Actually sort of...

Ben (01:15:42):

But it's also separate. In that sense it almost doesn't matter if they've got their model of the world wrong, because they're a new organization doing things in a new way, and therefore will hopefully discover something, some of that dark intellectual matter, even if it's not the thing we may or may not have thought was there.

Michael (01:15:59):

Yeah. I think there's an interesting question about scale: what fraction of all philanthropic giving do you want to be EA? The interesting thing, to my mind anyway, is that there's not really a natural regulator for that. Nothing sets the scale, or at least nothing really strongly sets the scale there. It's just a question of the extent to which these ideas become fashionable. So maybe it ends up as 50% of all philanthropic giving; maybe it ends up as 5%. I don't know what it is at the moment. I think it's maybe on the order of 1%, so actually pretty tiny. So you might say, "Well, okay, if it's tiny it's fine for it to grow." But there's nothing obviously stopping the growth, and there will certainly be many internal drivers: lots of people whose careers and self-image are now bound up with it. That doesn't seem so great. Of course, this is the story of so many things which don't have a natural mechanism limiting their scale.

Ben (01:17:15):

Yeah. And when you don't have that tradeoff audit-- You see this today in terms of, "Well, how much should we be working on climate versus deep poverty versus pandemics?" There's no necessarily natural cap on any of those, so it's really tricky.

Michael (01:17:32):

Whenever it's politics that's deciding, a natural sort of oligarchy tends to form. You would think that if you had two pretty promising approaches to solving some problem, one with, let's say, 90% of the funding and the other with 10%, and a serious evaluation concluded the 90%-funded one was only slightly more likely to succeed, you'd get a rebalancing. But that's often not what happens. What really matters is politics, and many more of the people on the 90%-funded approach are in a position to influence future flows of capital. So it actually gets even more lopsided. It's just a natural way in which oligarchies form, and you see that pattern absolutely everywhere. But it's such a problem in philanthropic funding, this rich-get-richer effect.
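[A toy model of that lopsidedness dynamic, my sketch rather than anything from the conversation: suppose each year new funding flows in proportion to each approach's current share raised to a power gamma, where gamma > 1 stands in for incumbents' extra influence over capital flows.]

```python
def funding_shares(a=0.9, b=0.1, gamma=1.2, rounds=10):
    """Superlinear preferential attachment: new money follows share**gamma.

    With gamma > 1 the better-funded approach compounds its advantage;
    with gamma == 1 shares stay put; gamma < 1 would rebalance them.
    """
    history = [(a, b)]
    for _ in range(rounds):
        wa, wb = a ** gamma, b ** gamma
        a, b = wa / (wa + wb), wb / (wa + wb)
        history.append((a, b))
    return history

for year, (a, b) in enumerate(funding_shares()):
    print(f"year {year:>2}: {a:.3f} / {b:.3f}")
# The 90/10 split drifts towards 100/0 rather than rebalancing.
```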

Ben (01:18:38):

Great. Okay. Well, let's wrap up with a couple of questions and then your current projects and advice. Maybe just a couple of quick underrated/overrated ones first. So, rogue AI risk, AI safety: underrated or overrated?

Michael (01:18:53):

I have no idea.

Ben (01:18:55):

No idea. We'll pass. Okay. Critical rationality or rationality as a movement?

Michael (01:19:03):

Well, I think those two things are used to mean very different things actually.

Ben (01:19:07):

Okay. Yeah. That's probably true.

Michael (01:19:08):

Critical rationality I tend to associate with Karl Popper, David Deutsch and a few others. Rationality, in many of its modern incarnations, is a slightly different branch of the tree.

Ben (01:19:20):

Perspective, value, probability and that type of thing.

Michael (01:19:21):

Yeah, exactly. And EA actually for that matter which is a different group of people completely. Just having clarified the terms, what was the...

Ben (01:19:32):

Let's go through critical rationality because we covered...

Michael (01:19:34):

I think probably underrated.

Ben (01:19:37):

Yeah. Maybe that's part of a...

Michael (01:19:39):

It's funny I can't read Popper-- That's not quite true. I just don't respond to him. But I still think underrated.

Ben (01:19:51):

I found Popper, and actually Deutsch, quite hard. I think I get it, but I'm not sure I do. And I speak to other people who think they get it; I'm not sure they get it either, but maybe that's where it is. So I'd probably say mildly underrated too, though I'm very uncertain, because maybe I just completely don't understand. One more, although this could be quite a long answer; you can give a short one if you'd like. Memory systems? Obviously you've done a huge amount of work on those. Maybe I'll split them, because you've got your card-based spaced repetition systems and then you've got memory palaces. I get the overall sense that you probably think memory is more important than people think, so overall underrated. That's my impression from you, but I'm not entirely sure, because you've written a lot on it. Is that your view, and what should we know about memory systems?

Michael (01:20:48):

So certainly underrated. It's a combination of, I guess, technology and science that really points out what people can do with very minimal effort... Actually, it feels like one of those products that promises you the world with no effort. You know, "Lose 25 pounds in three weeks while eating only hamburgers."

Ben (01:21:16):

Just by saying this mantra three times.

Michael (01:21:18):

Yeah, exactly. It turns out, though, that with memory systems you actually can have a much, much better long-term memory for relatively minimal investment, and it's just due to some quirks about the way the human brain works which have been known to psychologists for more than a century. There are thousands of papers about it. People have now built systems. They're a little bit aversive in some ways; they're like bicycles, in that it takes some work to get good at using them, and most people just give up because they don't see the effect. If you stick with it and master them, they can be very useful if you're doing the kind of work that benefits from a really good long-term memory. Not everybody is, but for the people who are, I think they're absolutely wonderful.
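[The quirk being exploited is the spacing effect: each successful recall lets the next review wait longer. A bare-bones scheduler in that spirit might look like the following. This is a simplified sketch, not Anki's or any other real system's exact algorithm, and the next_interval helper and ease value are invented for the example.]

```python
import math

def next_interval(interval_days, remembered, ease=2.5):
    """Grow the review interval multiplicatively on success; reset on a lapse.

    A bare-bones spaced-repetition rule exploiting the spacing effect,
    as a simplified sketch rather than any real system's exact algorithm.
    """
    return math.ceil(interval_days * ease) if remembered else 1

interval = 1
for review, remembered in enumerate([True, True, True, False, True], start=1):
    interval = next_interval(interval, remembered)
    print(f"review {review}: next review in {interval} day(s)")
# Intervals: 3, 8, 20, then a lapse resets to 1, then 3 again.
```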

Ben (01:22:06):

Yeah. I think for me personally, the spaced repetition ones are a little more useful, because they can interact more with other forms of whatever you're learning. Whereas the memory palace ones are great for long lists of things but actually less practically useful. They can be: if you've got a long list of facts for your history exam, why not learn them that way and learn them perfectly? But they're less practically relevant, though not always. Whereas with the spaced repetition ones, among other methods, you can actually form new ideas, well, maybe only new to me, just through that. But that's a personal thing. So, last couple of questions then. What are some of the current projects or things you're working on, or what are you most excited about?

Michael (01:22:59):

So I'm on vacation at the moment which is part of the reason why I'm here.

Ben (01:23:02):

Very exciting.

Michael (01:23:03):

Little break in London. I don't know what I'll do next; we'll see. I've just finished this book that took way longer than I thought. I'm just enjoying goofing around: reading about religion, trying to understand religion better, trying to understand art better, and trying to understand a little about the latest technical progress in AI. So maybe a project related to one of those, but I don't know. And then there are some follow-up things from the metascience work that I will almost certainly do. But I just want to clear my head a bit before making a decision.

Ben (01:23:35):

Sure. Well, if you're looking at art, is this particularly visual art, and do you have a view on where AI art will go? I'd say I'm less worried than some people about AI art because of my view of what art is. I don't know what the percentages are, but there's this term in the philosophy of art, the beholder's share: art is valuable partly because someone receives it. So it's not just that you've got paint and creation and then it's all great. I see this very much in my theatre practice; essentially, theatre is not really theatre unless it has an audience, and that's true of quite a lot of art. Because it's new, because we're going to generate so much stuff, and because of where it's come from, I don't think people have quite seen that part yet. That's my personal view; a lot of people take the other side. But if you're thinking about it, do you have a view or a thought there?

Michael (01:24:33):

I have many, probably far too many. What's a particularly interesting one? Okay, so maybe one concern and one observation. The concern is essentially capture of the commons. Where it used to be that anybody could buy a set of paints and just start to paint, if you now need technology which is somebody's IP, they've inserted themselves as an intermediation layer, and historically that always causes some problems. There are always some tradeoffs. I don't know how it will turn out. Most of the conversation I hear about it is from people who have some motivated reason either to be very, very keen on it, possibly because they may own some of it, or to be very, very anti, because they're worried they will be put out of a job by it.

Neither of those necessarily-- It leads to interesting perspectives, but not a particularly clear perspective, unless those people are really unusually honest. A lot of the conversation I hear about it just seems to have "I'm a VC and, wow, I'd love to own some of this" as the subtext, and you don't actually get very many clear thoughts from that. The interesting observation: I do just enjoy seeing what some of the artists working with this are doing. I admire so much people like Matisse or Cezanne or Picasso, who were able to discover new ways of seeing. Picasso probably particularly; to some extent Rembrandt, though that was a slightly different thing he was doing.

And I wonder if we're going to see the same kind of genius with the new AI art systems. That would be very exciting. If you look at the way the systems work, it's very tempting to think, "Well, they're not really going to be creating new ways of seeing in the same kind of way." Once you begin to understand cubism, it's really remarkable: you get what they were trying to do and you realize you've expanded the way you can see the world. Somebody discovered that, and it's just incredible. I guess I'm cautiously optimistic that maybe the same thing will happen with AI art. Brian Eno, the musician and composer, has this really interesting observation that it took hundreds of years for us to figure out what possibilities were latent inside the grand piano.

And today, an instrument as rich as the grand piano is being invented every day, and nobody will ever master it. That is kind of sad in some sense: to have all these latent possibilities which will never be explored. So that's also a potential outcome for the AI art systems. Maybe the systems change sufficiently rapidly that nobody ever masters them. Certainly you see this with software systems at the moment; the rate of change in something like JavaScript frameworks is so rapid that nobody ever gets really good. A friend who's a dancer and a programmer commented to me that she finds it irritating when people have been programming for five years and think they're really, really good; they're a senior software engineer at Google or Meta or whatever. She's been dancing for 28 years and feels like she's just getting the hang of it. There is something to be said for that kind of deep art, and if the AI systems are changing sufficiently rapidly, it might be that nobody ever masters them. That would be a little bit sad. That's a very long answer.

Ben (01:28:50):

Yeah. That's a really interesting observation; I hadn't thought of that. I guess I can take the other side and say it's quite nice that there's always going to be opportunity. But the other observation, then, is that we'll never even master the programming language C. There will be no real Picasso of C, nothing so elegant and beautiful, because we're beyond and past it. And I don't know whether the current languages are better. Is English really that much better than Latin? But you've got heights of Latin expression, and heights we've had and will have in English, which we might never get in these programming languages which have been around for such a short time. I hadn't thought about it like that, but that could well be true.

Michael (01:29:35):

Somebody who was at Google pretty early on, someone who knew Jeff Dean, one of the people who famously helped build Google's early systems, commented just unselfconsciously that he "poured out code." I just thought that was such a lovely way of describing somebody: a real master pouring out code.

Ben (01:30:02):

Yeah. Maybe this circles back. I think this act of creativity in the arts and humanities is much closer to where we are in science, coding and software than it seems at first glance when you speak to actual scientists, particularly in that messy layer before you make something legible and put it into an equation. All of that prior thinking seems to me to have much more in common with what dancers do, what artists do, what performers do than you might have thought. So I do think that ties around, and it might even be true of what we discover at the meta layer when we think about those. That act of imagination you need for something which isn't quite discovered yet, that part of whatever makes us human, is still quite mysterious and seems to dwell in this creative part, or blob, or however we do it. In any case, last question. Do you have any advice or thoughts for people? Maybe young people thinking about their careers, or someone wanting to make the leap into a new organization, or something within metascience or open science. What would you do; what would you tell them?

Michael (01:31:21):

I'm certainly interested... They say that the advice you give others is the advice you wish you'd given your younger self. That's probably true. Paul Buchheit, the creator of Gmail, has this lovely equation: advice equals limited life experience plus overgeneralization. That's certainly true too. The one thing I wish I'd understood much earlier is the extent to which there's an asymmetry in what you see: you're always tempted not to make a jump because you see very clearly what you're giving up, and you don't see very clearly what you're going to gain. Almost all of the interesting opportunities on the other side of that jump are opaque to you now; you have a very limited kind of vision into them. You can get around it a little by chatting with people who are doing something similar, but it's so much more limited. And yet I know that when reasoning about it, I want to treat my views of the two as somehow parallel, and they're just not.

Ben (01:32:31):

Yeah. Well, I guess that might suggest one needs to try out more things to actually know.

Michael (01:32:39):

That, I think, is probably generically true. One way in which it's not: it depends on what kind of safety net you have. Some people, and I am certainly one of them, have pretty reasonable safety nets, and that enables me to try new things. But the great majority of people in the world do not, and so they're, I think, justifiably extremely cautious. Then again, the size of the safety net people think they need does tend to expand as well. If your safety net includes driving a Mercedes and so on, you can probably get by with rather less than that. A friend of mine is a science fiction writer. Science fiction writers do not make much money; even famous science fiction writers do not make much money, for the most part. He was trying to decide whether it was fair on his daughter that he had chosen to be a science fiction writer.

Because it meant he probably couldn't afford to send her to the best high schools and things like that. Maybe she might get a scholarship; there were certainly some opportunities. He said he was on a little boat somewhere off northwestern Australia with a collection of his science fiction buddies, who are just an incredible group of people. And he realized there that while she might miss some opportunities, she would also have opportunities like that, opportunities she just would not have if he had chosen a more conventional and higher-paying line of work. So I think of it that way too: living that kind of life is a safety net of its own. It's certainly something you're providing for yourself and your family and your friends. I don't know whether that's clear or not. Hopefully it's clear.

Ben (01:34:58):

I would interpret that as saying there are these immeasurable, uncountable elements which are actually very valuable. And if you only measure in dollars, you might miss some of the vast positives, or wealth, or safety net that...

Michael (01:35:18):

Yeah. She was getting an expanded conception of the world, I guess, in some really interesting way. And I could see it would be valuable. You can't eat that, unfortunately. That's the flip side. But you actually don't need to make that much money to be able to eat.

Ben (01:35:35):

Great. Well, on that note, thank you very much.

Michael (01:35:38):

Thanks so much, Ben. This was fun.

In Life, Podcast, Writing, Science Tags Michael Nielsen, Metascience, Open Science, Podcast