Your body is designed to fail. It's literally encoded in your genes by evolution itself. My guest today is Dr. Bret Weinstein, evolutionary biologist and co-host of the Dark Horse podcast. And he's going to explain why aging isn't some disease that we can cure; it's the price we pay for being the most complex organism in the known universe. But here's the thing nobody tells you: your genome is under constant attack, constant pressure to stay small, which forces your genes to multitask.
The INTO THE IMPOSSIBLE Podcast
The Next Phase of Human Evolution (ft. Bret Weinstein)
Speakers: Brian Keating, Bret Weinstein, Narrator
00:00 Universal Principles of Evolution
08:14 Soma, Germline, and Senescence
12:34 Life Cycle Adaptation Patterns
17:46 Hybrid Creatures, Not Resurrections
24:01 Biology, Ancestry, and Modern Pathology
27:14 Precautionary Principle and Hidden Risks
33:51 Antifragility: Growth Through Challenges
41:02 Evolutionary Patterns in Nocturnal Vision
48:16 Culture: A Tool for DNA Goals…
Featured moments
Why Biology Is Different from Physics and Chemistry: "I do think people should understand that in one way, biology is actually closer to engineering and economics than it is to chemistry and physics. And once you see that, it causes you to think about it differently."
"If I gave this material to the hyper-intelligent hypothetical alien, could it tell us that we would age? Could it make a prediction, just from these molecules of DNA, of RNA, et cetera, that we would experience aging, which you've done a lot of work on?"
Why Aging Evolves: "You of course don't need DNA at all to explain why senescence, the process of growing feeble and inefficient with age, evolves. It could be any information storage molecule that was material. Why? Because a material storage mode is going to create an expense for an indefinitely large genome."
Predicting Lifespans with Alien Intelligence: "If you chose the right cell to give to the hyper-intelligent alien that's on your desk right now, could it also predict this orders-of-magnitude variability in the very properties you just mentioned, endemic to the different types of cells?"
Evolutionary Trade-Offs: "Our children are dependent for a very long, long time. And all of these things are trade-offs."
Full transcript
The same gene that makes you strong at 20 years old actively degrades you at 60.
Evolution doesn't care about your golden years. It cares about reproduction, transporting your genetic code into the future. And once you've passed your genes forward, you may be obsolete. But this conversation goes way beyond aging.
And we have touched the foothill of a peak that we can't see. The nature of our species is to climb that peak, which we are doing at an incredibly high rate. And the consequences will simply be what they are. We can talk about protecting ourselves, regulating. None of it matters. We opened Pandora's box, and we will discover what happens when you do that.
We're talking about AI accelerating evolution into what Bret calls hypernovelty: environmental change so rapid that human biology can't keep up. We're discussing solar superstorms that could damage DNA and civilization overnight while everyone obsesses over climate change. We're also exploring why the scientific method itself might have to bend when you move from the physics lab to the complexity of a tropical rainforest.
What does it mean to infer things evolutionarily? Is this part of a new type of scientific method, or is this just the scientific method applied specifically in the incarnation of evolutionary frameworks?
Well, it's really both. The fact is, the sciences are grouped by the method we use to make inferences, but the types of inferences that we make in biology are fundamentally different because of the degree of true complexity and therefore emergence. What I mean by this is: in the scientific method, we observe a pattern, we hypothesize a cause, we find predictions that follow from that hypothesis, and then we run a test to see if the predictions are manifest. That method is no different if you're running it in a chemistry lab versus in a tropical forest. But the type of inference is altered by the complexity of the forest relative to the lab bench. So, for example, suppose you say that a single observation that runs against the hypothesis falsifies it.
Well, that may be true in a chemistry lab or a physics lab, where you can limit all of the inputs to the system. But if you make a prediction in a tropical forest, you're bound to see many things that go in the other direction, even if the hypothesis is true, just by virtue of the huge number of influences on the system. And so we have to relax the rules of falsification, not because they're any less true in biology, but just because of the amount of noise. And we have to use unfortunate tools like statistics. A single observation of gravitational lensing is enough to prove Einstein right. But in biology, we might need to look at 10,000 examples of something in order to see whether the pattern we expect to see is present. So I do think people should understand that in one way, biology is actually closer to engineering and economics than it is to chemistry and physics. And once you see that, it causes you to think about it differently.
So let me ask you this question. If there are universal principles applicable to evolution, then they should be understandable, at least by any general-purpose intelligence. And we'll get to artificial intelligence soon. But I want to ask you: if I scooped up some of this material down at the beach in San Diego here, and I presented it to a hyper-intelligent alien, benevolent, of course, or to an artificial intelligence that had some ability to manipulate it and to do all sorts of whatever you guys do in the biology lab. I don't know, Bret; when I dissected a frog in high school, it didn't die. It was gruesome. But it seems to me that they should be able to understand and make predictions about things that we experience. For example, if I gave this material to the hyper-intelligent hypothetical alien, could it tell us that we would age? Could it make a prediction, just from these molecules of DNA, of RNA, et cetera, that we would experience aging, which you've done a lot of work on?
In other words, is it so universally true that you could make predictions, as we could with a compound in chemistry or a nucleus in physics? Could you infer that we would eventually die just by looking at the DNA in our constituent compounds?
Yes. Your hypothetical hyper-intelligent alien would, first off, be the product of an evolutionary process on their home planet, and so would be able to extrapolate to our system; they could look at the characteristics of our system and, if they were careful, make correct extrapolations. But I would point out that it isn't the simple fact of being broken into cells that have information encoded in DNA that predicts this. You of course don't need DNA at all to explain why senescence, the process of growing feeble and inefficient with age, evolves. It could be any information storage molecule that was material. Why? Because a material storage mode is going to create an expense for an indefinitely large genome. So there's pressure to shrink the genome down, which is going to cause pressure to have genes do more than one thing. And as soon as you have genes doing more than one thing, a gene can do something good for you early in life that's costly later in life, and that will be viewed as positive by selection. That's not inherent to Earth; it's just inherent to this culling process. And so that will cause the accumulation of these so-called pleiotropic genes, which cause all of those late-life effects that degrade the functioning of the creature. At some point that problem becomes significant enough that, instead of just trying to persist in this ever less efficient form, it's better to replace yourself with a fresh version that is pre-aging, or pre-senescence. So anyway, that pattern should be easily extrapolated. On the other hand, it is true that a tree has cells. It's got the same information molecule that we do, DNA. Does it senesce? Well, yes, it senesces for the same theoretical reason I just gave you. However, its senescence is radically different from that of a mammal.
Why? Well, think about the way a tree works: a seed turns into a seedling, you get a trunk, you get branches. In a classic tree, those branches will put forth flowers. The flowers get pollinated. A fruit with a seed in it is going to be produced. But the point is that the story I've just described is one in which there is no sequestered germline. A sequestered germline is a set of tissues that is reserved for reproduction. What I've just told you is that the tissues in the trunk that give rise to the branch, that give rise to the flower, that give rise to the seed, are all part of the germline. If you look at your finger, there's no way for your finger to reproduce directly.
So you have a sequestered germline, which allows the rest of your body to be distinct. Evolutionarily, it's your soma; it's independent of your germline, and it is a dead end. So your soma has to cooperate for your germline to reproduce. In plants, this isn't the case. And so we don't see the same kind of somatic senescence that we would see in, let's say, a mammal. But we do see, if you talk to an arborist, for example, about a fruit tree, that as the tree gets older it becomes less effective at producing fruit, so we end up having to prune it back in order to restore its younger characteristics. So we do see senescence at the level of the phenomenon, but we don't see senescence in the same relationship between the tissues and the genome that we do in a mammal with a sequestered germline. So a sophisticated alien would probably have examples of both that they had seen, and they would be able to look at the stuff that you pulled off the beach and say which portion of the whole theoretical landscape applies to these individual creatures. They would be able to tell you: I think that creature is going to show no senescence, because it's a single-celled organism and senescence would effectively be fatal to the species.
And this other organism is multicellular and has a sequestered germline; it ought to senesce in more or less the way a mammal does. And this other one is plant-like and has no sequestered germline. It will senesce, but it won't show the same tissue-level senescence that we see in an animal.
Yeah, it's sort of reminiscent of the many-worlds hypothesis: depending on which branch you choose, you might see radically different outcomes in the cell's future. But how does that explain, say, the varying timescales, ranging from, I forget what the shortest mammalian lifespan is, but I know there's a Greenland shark off the coast here that's been terrorizing Mike. No, it's not around here. But they live hundreds of years, right? So if you chose the right cell to give to the hyper-intelligent alien that's on your desk right now, in some sort of creep or whatever, if you gave it to them, could they also predict this orders-of-magnitude variability in the very properties you just mentioned, endemic to the different types of cells?
Yeah, absolutely. The way to think about it is that there are various characteristics in a life cycle that predict movement along the continuum from these very short lives to extremely long ones. So think about the following conundrum.
A parent, and we're talking about humans here, tends to be livid with their child if the child comes home pregnant. But if the child has moved out, has fledged, and produces a grand offspring, the same parent will tend to be thrilled. Right? Why is that? Well, there's a bias in the priorities of the parent. The parent is twice as related to its own offspring as it is to a grand offspring. So, evolution being evolution, given a choice between producing more offspring directly and producing grand offspring, the parent prefers to produce offspring directly. But there's obviously a point at which the direct production of offspring becomes sufficiently unlikely. In humans, in fact, we are almost unique amongst animals in having menopause, a distinct adaptive end to our reproductive lifespan, after which the only mechanism to produce further evolutionarily positive output is through grand offspring and relatives, because you don't produce any more offspring directly.
So in any event, the basic point is this: sexually reproducing creatures prefer to produce their own offspring rather than to have grand offspring produced. But there's a point at which their ability to produce their own offspring is sufficiently degraded by senescence that it actually makes sense to prefer grand offspring or great-grand offspring. So when you're talking about very short-lived animals, these are liable to be animals in either a very destructive environment that causes a rapid degradation in capacity, or a very dangerous environment in which the chances of producing further offspring are low because, for example, the predation rate is high. In either of those cases, you'll see an acceleration of the life cycle. In creatures that have a very safe existence, something like a tortoise, because it has a shell and may live on an island where it doesn't have any predators, you may get a slowing down of the life cycle. And yes, a super-intelligent alien would have noticed all of these patterns. They would come to Earth, they would look at the creatures, and they would know what questions to ask in order to predict these outcomes.
And they would of course, be fascinated by any creature that broke the rule.
Today's video is sponsored by my friends at Aligner. Ever asked AI a tough question and got back gobbledygook? That's not entirely the fault of the AI, but the frustration that you feel could actually be worth up to $150 per hour. Behind every AI breakthrough is a network of experts actually teaching these systems how to think. And my friends at Aligner are connecting brilliant people, mathematicians, scientists, engineers, geniuses just like you, to make sure AI works for all of us. Aligner has specifically partnered with the INTO THE IMPOSSIBLE podcast to find geniuses from my network to give AI models expert feedback. Your job, if you accept it, is to evaluate AI outputs. That's it. Design problems that even today's best models can't solve.
Your job is to grade their attempts at quantum mechanics, topology, advanced coding. You're literally teaching AI the difference between right and wrong, between undergraduate mistakes and doctoral-level thinking. That's why they're partnering with me. Listen, I know that many of you have done unpaid internships, shall we say, been lab rats running someone else's experiments. But now it's your turn. You don't have to grind as test particles in someone else's lab ever again. This is different.
It can all be done remotely, timing is flexible, and you get paid weekly, up to $150 per hour. Aligner is selective. They need to be in order to get the best results, right? They only accept people who can genuinely push AI forward. Most applicants won't make the cut. So check out aligner.com using my link below. AI has already consumed the Internet and likely wasted a lot of your time, as it has mine, with incorrect answers, logical flaws, or poorly worked-out solutions. This is your chance to get it right for the future of science, and to get paid while you're at it.
Click the link below.
So we'll get to the future again. But I want to move back into the past, because I just got back from Stonehenge a couple of weeks ago, which, I have to say, was a little underwhelming. It was kind of like going to the Kotel, the Western Wall, the Wailing Wall. You build this whole thing up: it's going to be so incredible. And you get there and you're 200 meters away from it and they won't let you touch a single rock. I mean, this was a great travesty for my little kids, who wanted to try to push them over. But it made me think about your saying that we're trapped, these Stone Age brains in a Space Age world. I think you said that once.
My question for you is: what is the bigger danger we face? I've talked to colleagues of yours, people like George Church and David Reich, and they're trying to resuscitate, bring back to life, Stone Age creatures, including dire wolves and mammoths.
But maybe even Svante Pääbo, who I hope to talk to soon, to bring back Neanderthals. What would you fear more: sort of a recurrence of the hunter-gatherers coming back to life, a swarm of them, or a swarm of hyper-intelligent, super-intelligent AIs, or let's even say aliens? Which would pose a greater danger to us, caught as we are in this geometric mean between Stone Age and Space Age?
Well, I have complicated fears about both. One thing that is worth saying is that, as a biologist, I cannot help but be enticed by the thought of encountering a giant ground sloth. A pygmy Stegodon elephant is one I'd really like to see. So there's a part of me that wants it.
I have bigger concerns, though, and to the extent that I'm going to potentially be able to encounter something that's been extinct, that's wonderful. Maybe it's not tolerable morally, but I'm almost willing to let it slide in light of the dangers we really face. However, I think this has all been oversold. We definitely have some substantial genetic information about some creatures that are indeed extinct.
We have the ability to resurrect characteristics of these creatures. But to pretend that what we're doing is bringing a creature back from extinction is not accurate. You're talking about a hybrid with a creature that continues to exist; it's basically a genetically modified organism that you're going to bring back, one that may have a lot of dire wolf in it, but it ain't a dire wolf. So that troubles me at the level that I think we are in some danger of dying from hype: a lot of people are hyping a lot of stuff, and most of it is not very good. All of it ought to give us trepidation. And the degree to which the scientific press and the scientific community in the larger sense are willing to go along with each other's fictions, in order not to be called out themselves, I just think it's unfair. We should not be leading the public to believe that we're actually bringing these creatures back.
No. And I don't think, well, right now they're not claiming that. I mean, George Church is an incredibly reserved individual, from my experience, and David Reich, whom I'm talking to as well, is more interested in the evolution of language that can be traced through DNA. But I guess my point is that it doesn't seem out of the realm of possibility, at least, to commingle genes from frogs and ducks and dinosaurs, as we learned from Jeff Goldblum in Jurassic Park: nature finds a way. My question is not about the hype and the sociology surrounding it, which I agree can be overblown. But what if it does take place? I mean, 100 years from now, who's to say we couldn't do more with some wet, slimy Siberian Denisovan DNA that is perfectly viable in a lab?
If you saw it, you would be able to do all the tricks that you guys do on DNA and whatnot. But let's just say, for the sake of example: what would be the danger to humanity if we did de-extinct a population of Neanderthals, Denisovans, and so forth, and they abundantly reproduced? I mean, I'd love to have them on my men's league softball team, but what dangers might they present?
Well, I mean, I don't see any danger. The fact is the various populations of the Earth get along very well. We've gotten over our tendency towards war and genocide, and so I can't see what the problem would be bringing back a more distantly related creature. Yeah, no, let's bring it on.
I mean, seriously, I just don't think we're morally up to it. And, you know, I'm not...
Oh, sure, I'm not either. I guess I'm interested in the physics, the biology, just... Yes, I agree with you on all sorts of things, and David talks about that in his book: the morality, every single encounter that you can mention, all the land acknowledgments. Let's assume they all take place, Bret. But if it did take place, which would be a bigger danger: AI that's trapped in a chip, which again we're going to get to, or an extant population of, you know, five-foot-tall, 250-pound, 3% body fat individuals? Would we prefer those, or a gaggle of Optimus robots? Which would be a bigger threat to our extinction or evolution as a species? Not extinction, but evolution as a species of humans. Or which would be beneficial? Maybe they'd be a benefit to us.
I mean, look, again, I would love to meet a hobbit from Flores Island. There's no end to the wondrous possibilities. But at some level we just suck at this. We're so prone to do that which we can do, and then allow the chips to fall where they may, that we have created a terribly unhealthy modern population that we expose to all manner of degrading economic influence.
The answer to your question is what could possibly go right?
One of the things that we kind of learn in your course, and I've taken the first half of the course on Peterson Academy, is about the strange things that pop up. Not just death, but all sorts of strange things, including things like cancer. From an evolutionary perspective, I mean, you often hear from religious critics that putting the windpipe near the throat, where the animal ingests things, is not the sign of an omnipotent, omniscient designer. But then, in the other sense, cancer seems also inexplicable, perhaps, from an intelligent design perspective. I'm not going to get into that; I'm just asking, for the sake of argument: what is its evolutionary purpose?
You've called it a breakdown of cellular cooperation. What does that mean?
Well, let me put it this way. I did, as you know, study cancer and senescence in graduate school, so I've done a lot of thinking about it, but my thinking has changed in the last several years. I used to be mesmerized by the fact that the leading causes of natural death for humans were roughly balanced between neoplastic causes of death, i.e., cancer, and organ failures. And because what I studied was a trade-off between these two things, I saw this as nature having balanced these hazards as if it couldn't do better. I am now increasingly persuaded that although cancer would have been with us from long before the evolution of humans, the level of cancer that we see is wholly unnatural, and our defenses against cancer are spectacularly good. They are just in an environment that they are not built for.
How do you mean that they're in an environment we're not prepared for?
Well, remember at the beginning of the podcast I was saying that biology is fundamentally different because of the number of different inputs to each system. The inputs really come in two categories. There are inputs that your ancestors, whether that's a thousand years ago or 3 billion, would have had experience with. Hydrogen peroxide, for example, is a molecule that exists in nature. So we can talk about at what concentration it is toxic, but at no point is it unfamiliar to your body. On the other hand, when we talk about aluminum adjuvants, we're talking about injecting a metal that you would have had very little contact with through your food, and certainly would have had essentially no contact with injected past all of the layers that immunize you from the environment. So what we fail to understand, whether or not you like the economic or legal implications of the precautionary principle, is that at a logical level it is fundamental to how to be a healthy human: anytime you change the parameters of existence so that they are outside of something your ancestors would have regularly encountered, you are inviting some type of pathology.
And so we live in an environment, I mean, I'm thinking about the room I'm sitting in, which, frankly, is one in which I had a hand in choosing every single material. But how many novel molecules are there in the finish on the desk I'm sitting at? Right? How many things? The carpet in here is wool, but what does it mean that it's a wool carpet? What process was used to make it? Yes, the fibers that stick up are wool, but what is the backing made of? What is it glued together with? I can tell you just from basic chemistry that the rate at which it is off-gassing is going to have an indefinitely long tail. Ten years out, it's still going to be off-gassing at some rate. When it was brand new, it was off-gassing enough to be off-putting. So I am ingesting all of this stuff with every breath.
What happens to it? Well, it dissolves into the blood, and the body is going to deal with it with various levels of elegance. To the extent that the molecules I'm breathing in are familiar ones, between my liver and my kidneys, it'll be taken care of. To the extent that they're novel, the body has to figure out what to do with them, how to get rid of them. It has to do so at a rate that creates an equilibrium: there has to be an outflow of these things that's as fast as the inflow. And I'm being exposed to them all over the place.
And then, when I leave my environment, if I go have dinner at a restaurant, suddenly I'm downstream of somebody else's choices of what molecules are tolerable enough. And there's an economic principle whereby the restaurant that thinks very carefully about the impact on my long-term health of what it puts on the walls, that restaurant fails. The restaurant that thinks, I'm going to put the thing on the walls that makes them look good at the lowest price, that restaurant wins. So the restaurant I tend to go into tends to be one that has neglected my health. And we see this very pattern in our bizarre reaction to hazards in our regulatory apparatus. Our regulatory apparatus will literally lock down civilization to protect you from a short-term hazard while it is exposing you to long-term hazards that are way more dangerous. So all I'm saying is that the precautionary principle, at an analytical level, says anytime you are exposed to something novel, you are in danger. You may discover that the danger isn't so great, that the risk that came with the uncertainty of whatever it was is not manifest in an actual harm.
But in general, what we find is that we revolutionize stuff and then discover decades or centuries later what harm we did, and we are suffering all of those consequences.
I want to give you a free taste of what Bret and Heather's course is all about. So take a look, and you can join Peterson Academy with my special link in the show notes.
From Leaf Cutter Ants to Otters.
It can pick up local knowledge in a way that would be impossible if there was no generational overlap and cultural transmission.
Microevolution and macroevolution emerge from the same basic framework, but they've all been modified to do something pretty different. Pacific giant salamanders to humans. Our gait restricts movement. Our children are dependent for a very long, long time. And all of these things are trade-offs. How to infer meaning from what you see in the world?
We are haunted by competition.
How do I win against the guy who wants the same things I do?
It is female preference that is causing the elaboration of this structure.
Biology is the source of all of the complexity in the known universe. The purpose is to advance the genes into the future. A chicken is an egg's way of making another egg.
Go out into the natural world. Come to understand more about it and about yourself.
If you enjoyed that appetizer, you'll enjoy the full course and my two courses on Peterson Academy. Just click the link in the description below. Now back to the episode with Bret.
Yeah, it seems to me we are both erecting, but mostly taking down, Chestertonian fences at a rapid rate. We'll get to that very shortly, when we finally get to AI. But I'm just so fascinated by your teaching style. I told you last time you were on: your termination from Evergreen, and Heather's termination, for completely preposterous and, quite frankly, intellectually obnoxious reasons, deprived people of great educators. But luckily, people can now get much greater exposure to you. Maybe this is evolution's way of getting your pedagogical abilities more widely dispersed. But one of the things you guys talk about is this concept of evolutionary jeopardy.
I think that's what you call it, where you have the students analyze organisms, their body plans and shapes, and their different functions and so forth, to deduce the evolutionary processes that shaped them. And it was really fun for me as a physicist to kind of go through that. I want to ask you: what kind of current evolutionary pressures are humans undergoing as we speak?
Well, I think I've just described the primary one, which is that we are being pushed to adapt, and failing to adapt, to an increasing rate of change. In Heather's and my book, A Hunter-Gatherer's Guide to the 21st Century, our primary point is that novelty is a problem for evolved organisms. Humans are the most rapidly adapting creature that has ever existed, and therefore we are effectively specialists in novel environments. However, the rate of technological change is so fast that even our incredible capacity to adapt isn't nearly fast enough to keep up. And so we are sick socially, physiologically, psychologically, in every conceivable way. We are broken creatures, not because we're not well designed, but because we're well designed for an environment that we don't live in. And it is especially pernicious for humans, because evolution has done something marvelous for us: it has taken the evolutionary heavy lifting of adaptation and offloaded a huge fraction of it to the cultural layer rather than the genetic layer. That's marvelous, because culturally we can adapt very rapidly. And even beyond that, with consciousness, we can evolve inside of our own lifetimes. However, I've watched the world be revolutionized a couple of times already in my lifetime, and that means that the world that I trained for just doesn't exist. Growing up in the 70s and 80s, I was being prepared for a world that just isn't like this one, which means I'm constantly out of my depth. It means I'm constantly having to confront problems that I should find perfectly intuitive, and I have to exert my entire conscious mind to figure out what a rational course even is. And I'm failing at it. I think I'm doing better than most. But we can't live like this.
You have to have a developmental environment that is a good enough match for your adult environment that you know how to function when you get there. And frankly, I don't think that's true for anybody.
Well, just to push back, with respect: the concept of annealing, or, as Taleb calls it, antifragility, or as Nietzsche would say, that which does not kill you makes you stronger. The fact is that you're here, you're thriving in many ways, at least from the outside. I don't know what goes on behind closed doors, but your podcast is incredibly successful, your best-selling books, your notoriety, the millions and millions of views that you're going to get on this video, and you've received a lot of good attention. So it seems like you're doing pretty well, Bret. Aren't these pressures sort of like going to the gym? Yeah, of course, going to the gym is rough, and you break down the muscles and you build them back up again. Aren't these confrontations with novel environments actually making us stronger, more fit in an evolutionary sense?
They would be, in one sense. I would still be unhappy about it, but they would be if there was some new environment and the pressures that it exerted caused us to adapt to it. But my point is that happens, and then the environment in question, the one to which you have become more robust, vanishes, and you're stuck in a new one. And, I don't know how to say it more clearly than this: that rate of change is so fast that you can't adapt to it. It's like you're podcasting with somebody who is in the middle of the Pacific, and your point is, well, you seem to be treading water; all right, you're still breathing air. I mean, how bad is it? And the answer is, well, all right, how long can that go on?
So in the course you talk about these different concepts; you make a distinction between what you call homology and homoplasy. Homology, as I understand it, and it's different from topological homology, is similarity due to shared ancestry, and homoplasy is similarity due to convergent evolution. How can we see these? We're going to apply these to AI next. But talk about these two different concepts: what role do they play in evolutionary biology? Again, you're speaking to a simple experimental cosmologist here. What are these different forces? How much weight should I give to them in our subsequent conversation? Are these defining principles of evolutionary biology, or are they just interesting tools and frameworks with which to look at different events in our past and predict our future?
So first of all, I want to break the field into two parts so that we can see more clearly. I studied phylogenetic systematics. Homoplasy is a very important concept, and it has a lot to do with convergent evolution. But the problem is, it's, you know, the exact inverse of the concept we need in order to understand adaptation. So give me a little leash here.
Yeah, go for it.
Evolutionary biology is two fields. One of those fields is built to discover the topology of the tree of life, which creatures are most closely related to which other creatures, and the other is dedicated to understanding the adaptive process: how is it that creatures become capable of doing things that they couldn't do before? And these things are each other's nemesis. Right? If we want to talk about the adaptive process of flight in birds, we have a problem, which is that it evolved once. So it's very hard to extrapolate information about the pattern from one example. Right.
On the flip side, if you're trying to understand which creatures are related to which other creatures, then any time that selection repeats itself is a problem, because it's tricking you, right? It shows you the same pattern twice, and so you think these two creatures are closely related, and they're not. And the number of stories where two creatures were declared to be each other's closest relatives, and it turned out they weren't anywhere near each other on the tree of life, is large. So when the phylogenetic systematists say homoplasy, it is a derisive term. They are saying: that is a false case of similarity. Whereas as an adaptive evolutionist, I would say, oh, that's nature telling us how it works, right? That's nature repeating itself and allowing us to extrapolate with a great deal more power than we could if we only had one example. So homoplasy and convergent evolution roughly describe the same thing, but it's like describing water to a fish on the one hand and describing it to a drowning person on the other. Right? In one case it's poison, and in the other case it's salvation. And so let's just separate them in that way. The process of convergence allows us to see multiple examples of something. So, for example, the cephalopod eye, the vertebrate eye, and the arthropod eye are three independent examples where evolution has built a structure that is capable of extracting information about spatial relationships, and some other kinds of information, from photons bouncing off stuff.
It's really cool that that happened three times. And the fact that the three versions have a lot of analogies and a lot of dissimilarities tells us something about the landscape in which selection was functioning.
So that's an example of, well, several different things. If I look at the eye of a bird and the eye of a whale, those share the same evolutionary origin of the eye, right? So those are homologs; they're homologous. If I look at the eye of a whale and the eye of an insect, those are analogous: they're similar structures, they do a similar job, but they evolved independently. And the proper way to think about evolution is to notice patterns of both types. Right? I want to know who's related to whom, and once I know who's related to whom, I want to declare how many examples of any particular pattern I've got.
And then I want to be able to extrapolate from the various different versions. Or, you know, you can also extrapolate within a clade. Within vertebrates, we can look at the various different kinds of eyes and say something about, oh well, what if I break these creatures into nocturnal and diurnal versions? Can I say anything about the distinction that evolves in eyes? And in fact this is a good one, because it's not like nocturnality evolved once; it's evolved hundreds of times at least, maybe thousands. So what tends to be true of the eyes of a nocturnal vertebrate? Well, they tend to be bigger, and they tend to be built with a single kind of photoreceptor, so that what they do is amplify light at the cost of being able to distinguish gradations between particular wavelengths. And you do this too: at night, there's a point at which the light has failed enough that you start seeing in black and white; you don't even think about it. So in any case, we can say a lot about vision in these regards based on the patterns within clades. And then we can say, well, what's going on with bats? They're nocturnal; why do their eyes not look like a jaguar's? And the answer is, well, actually, for some bats they do, right? If you look at the Old World fruit bats, they have very large monochromatic eyes.
And if you look at a New World fruit bat, you would find that they have comparatively small eyes. Well, why is that? Well, the answer is that the New World fruit bat is an echolocator, and so eyes are not its primary source of information at night. Whereas in the Old World fruit bat, while there is technically a kind of echolocation that has evolved, it evolved separately from the New World example, and it isn't being used out in the world; it's being used deep in caves where there's no light to amplify. Otherwise, the Old World fruit bats use light, and they amplify it in the same way that a jaguar does, whereas the New World fruit bat is an echolocator that uses sound that it generates to figure out where the objects are.
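The homology-versus-homoplasy logic above can be made concrete with a toy parsimony count. The sketch below (a simplified Fitch-style algorithm on an invented four-species tree; none of it comes from the conversation) counts the minimum number of times a trait must have changed on a tree: one change is consistent with homology, while multiple independent changes are the signature of homoplasy, i.e. convergence.

```python
# Toy Fitch-style parsimony: count the minimum number of trait changes
# needed to explain which species have a trait, given a tree.
# One change suggests homology; several independent changes suggest
# homoplasy (convergence). Tree and traits are invented examples.

def fitch(tree, has_trait):
    """Return (possible_ancestral_states, minimum_changes).

    Leaves are strings; internal nodes are 2-tuples of subtrees.
    has_trait maps each leaf name to True or False.
    """
    if isinstance(tree, str):  # leaf node
        return {has_trait[tree]}, 0
    left_states, left_changes = fitch(tree[0], has_trait)
    right_states, right_changes = fitch(tree[1], has_trait)
    shared = left_states & right_states
    if shared:  # subtrees can agree on an ancestral state: no extra change
        return shared, left_changes + right_changes
    return left_states | right_states, left_changes + right_changes + 1

# Invented tree: (bat, whale) are mammals; (bird, crocodile) are archosaurs.
tree = (("bat", "whale"), ("bird", "crocodile"))

# Hair: present in both mammals -> one origin on this tree (homology).
hair = {"bat": True, "whale": True, "bird": False, "crocodile": False}
print(fitch(tree, hair)[1])    # 1 change: consistent with homology

# Powered flight: bats and birds only -> two independent origins (homoplasy).
flight = {"bat": True, "whale": False, "bird": True, "crocodile": False}
print(fitch(tree, flight)[1])  # 2 changes: convergent evolution
```

On this toy tree, hair needs only one change (a single mammalian origin), while flight needs two (bats and birds evolving it independently), which is exactly the repeated pattern a systematist would flag as homoplasy and an adaptationist would mine for information.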
So I want to hit you with a quote. You'll know who the quote is from: It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one most adaptable to change. Of course, you know that as good old Chucky Darwin. I want to apply that to some conversations I've heard you have with people like Steven Bartlett and others, where you've spoken about the warnings of artificial intelligence. You've suggested that what we're witnessing isn't just better programming, but actual speciation, speciation events, not sure how you pronounce it exactly, happening in real time.
And that this is occurring in processes that took millions of years for the wet squishy creatures, like the ones in this vial that I collected, leading up to us, but now it's occurring in refresh cycles at petaflop rates. So what I want to ask you is: if it's true what Darwin said, that the fittest is going to be the one that adapts most readily, are we sort of done for? Because we cannot evolve at 2.3 GHz the way that the chip that I'm recording this podcast on can. So first of all, how does AI development mirror biological evolution?
Well, this is a tough one. I'm not sure that what it does is mirror it. There's a way in which an evolved form may innovate a novel evolutionary mechanism that gets stacked on top of it. And this isn't fundamentally biological. I often make the point that selection is not a biological phenomenon. Selection is actually the process that creates all pattern in the universe, including all the non-biological pattern that we see. You know, there's a tendency for stars and galaxies to accumulate matter, which accounts for where we see the matter when we look through a telescope. I know, I'm not telling you anything you don't know.
I got one right here.
That process of selection becomes evolutionary at the point that heredity gets added to it. So the difference between the cold abiotic universe, and the patterns created by selection there, and the patterns that we see amongst living creatures, is heredity. And what heredity does is allow the stacking of those patterns that tend to accumulate stuff. So that cumulative nature sets a de facto competition in motion, in which the tendency to accumulate limited stuff reinforces whatever characteristics allowed it to happen. And we are the products of that. I know it sounds like I'm overcomplicating things, but...
Yeah.
So what that means is that the biotic world is an extrapolation from the de facto competition in the abiotic world. Within the biotic world, we have DNA-based creatures competing with each other, but we can infer that they emerged from an RNA-based precursor. And in fact there are holdovers from that RNA-based precursor. If you look at the enzymes that copy DNA, they are fantastically elegant, built of protein. If you look at the ribosome, which takes the messages that are in messenger RNA and turns them into protein, it is a crude and primitive machine by comparison, because it's spelled out in a four-letter alphabet of nucleotides. So it's a holdover from an earlier biotic universe that was cruder. It's like the difference between low-resolution graphics and high-resolution graphics.
So we get that world, and then from it we get a world in which some biological creatures can pass on adaptive information by a second channel, that second channel being cultural.
Is the cultural world a novel biological environment the way Dawkins thinks? No, it isn't. It is actually an extrapolation from the DNA world, and it is subordinate to the objectives of the DNA creatures. That is to say, culture is a means to an end of the nuclear genome. And we can say it in the same way that we would ask: is the wing of a bird trying to accomplish something evolutionarily? No. For reasons I mentioned before, the wing of a bird can't reproduce. The wing of a bird only reproduces if the bird is successful enough to find a mate and produce offspring. So the wing is a means to an end of the bird genome and the bird germline. Culture is similarly a means to an end for our genomes and our germlines. Dawkins doesn't see it that way.
I don't know why he has a blind spot here, but he does. But nonetheless, okay: so we have a creature, and it has a new technology we'll call culture. In humans, that culture is wildly elaborated beyond any other creature's. It's elaborated through language, which allows us to exchange abstract ideas across the open air, which is a miracle, really. But okay, why do we do that? Well, we do it because it's a means to an end.
Just as the wing of the bird is a means to the end of the bird's gonads. So we have that. That's the world you and I were born into, right? Highly sophisticated linguistic creatures. It's all a means to a genetic end, whether we like it or not, and whether we choose to rebel or not. That is true. But the point is, now, with AI, we are stacking a next layer on top of the layers we've got. And it is starting out as all the others have: as a means to an end. Now, nothing tells you that that means to an end won't be our undoing. It's not as if, because AI is something that we have built to facilitate our continuing evolutionary story of three and a half billion years, it is therefore safe and won't drive us to extinction. But it does mean that that is the purpose with which we have set it in motion.
So when I look at the success of something like ChatGPT: we're talking, the day ChatGPT was released into the wild, all fences were torched and overcome. And now we've got this supercomputer with PhD-level intelligence in your pocket. I think you and I both know a bunch of PhDs that we wouldn't want in our pockets. But the point being, it's in somebody's pocket.
But I want to run by you this idea that's kind of as close as I can get to evolution, which is this claim about the Hubble Deep Field, which is sort of related to this image taken over here behind my right shoulder, if you're watching on YouTube. And by the way, you should subscribe to the Dark Horse podcast and follow Bret and Heather; their exploits are legendary online in various locales, and also the Peterson Academy course; we'll have links to all those down below. But it's said that the Hubble Deep Field image, which sparked a billion poems about the fecundity of the universe, shall we say, the image that shows more galaxies than stars by a factor of 3,000, I mean, every speck of light, except for two or three, is a galaxy, not a star in our galaxy. But that image was actually suboptimal, and it could have been much better. We could have been well on our way to understanding galaxy evolution had a horse's ass been about twice as wide.
And stop me if you've heard it, or maybe I'll just continue for my audience that may not have heard it. The space shuttle which launched the Hubble Space Telescope had solid rocket boosters on it. Those solid rockets were made at Morton Thiokol in Utah, as we all know from the explosion of the Challenger, and then they would launch from Florida. And to get from Utah to Florida requires passing through several train tunnels. Well, a train tunnel has a width set by the track gauge. The track gauge was set by the gauge of a Roman chariot 2,000 years ago, in ancient Roman times. And that was the width of two horses put together; that was what would pull a chariot.
And that set the standard rail gauge, which constrained the booster diameter, and thus the specific impulse of the rocket, through the area and volumetric rate of change of mass ejection in the rocket equation. So therefore, if the horse's ass were a little bit bigger, the space shuttle would have gotten up higher. Now, if it got up much higher, say it got to the L2 Lagrange point where the Webb telescope is, we would have had Webb-like images 35 years earlier. And imagine what 35 years of cosmological evolution and impact would have meant; what did that cost us, so to speak? So, because of the horse's ass being too narrow, I apply this to large language models in the following sense. They're so successful, they're so good, just like the rail car size was so good for human-scale stuff, but not for rocket-scale stuff, that we set out on a course of evolution that led us astray in a certain sense, or delayed evolution, perhaps.
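For the record, the rocket-equation constraint Brian is gesturing at is usually written as the Tsiolkovsky equation, Δv = Isp · g0 · ln(m0/mf): achievable velocity change depends on exhaust efficiency and the mass ratio, which is where booster geometry (and, per the joke, rail gauge) bites. A minimal sketch with round, illustrative numbers, not actual Shuttle parameters:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0_kg, mf_kg):
    """Tsiolkovsky rocket equation: ideal velocity change in m/s.

    isp_s: specific impulse in seconds; m0_kg/mf_kg: initial/final mass.
    """
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Round illustrative figures: a stage with Isp = 300 s and a 5:1 mass ratio.
print(round(delta_v(300, 5.0, 1.0)))  # ~4735 m/s

# A fatter booster carries more propellant, but because delta-v grows only
# logarithmically with mass ratio, a 6:1 ratio buys a modest increment.
print(round(delta_v(300, 6.0, 1.0)))  # ~5271 m/s
```

The logarithm is the punchline: small geometric constraints on the booster translate into hard, slowly-improving limits on how high the payload can go.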
Now, LLMs are so successful, but they're optimized to run on hardware that was built so that my kids can win at Minecraft, or play first-person shooters and frag their enemies quicker than their best friend. So my point being, these systems are so successful that we become locked into them. And I'm worried that there's a lot of hype, including stuff that I've said, maybe you've said, about the dangers and the consequences of AI. And I want to get to a crisp summarization of what you fear most about AI. But I make the case that I think it might be overblown, because there's an abundance of different types of artificial intelligence, but the one that everyone's obsessed with is LLM plus GPU. Everybody's talking about that; no one's talking about any other system besides that one, because nothing else is in your pocket; it's nowhere else. So, Bret, tell me, what do you really fear about AI, and how likely do you feel it's going to come from a system that has to be trained on things like The Fast and the Furious 6? How dangerous is that really going to be?
Oh, it's going to be lethally dangerous. It is going to be lethally dangerous because, A, we're not ready for it, and this is a level of novelty that is unforeseeable. I mean, I think we're just literally standing at the event horizon, and nobody knows what to expect of it. Not necessarily because the technology itself is transcendent, though I think we can make an argument that it is, but because human preparedness for it is so abysmal. We are simply not ready for the world that is going to emerge and is, in fact, already emerging. As for whether the LLM technology is overhyped because there are other potential technologies, I'm not sure it matters. In other words, one of the lessons of technological evolution, as well as biological evolution, is that there are solutions to problems, and many trajectories can take you there. And so one of the reasons that the cephalopod eye and the vertebrate eye look so similar is that, although the biology from which they emerge is totally distinct, the physics that surrounds how to take photons and turn them into a meaningful image, from which spatial information can be deduced, is heavily constrained. So what we've done is we've started with LLMs, and they are going to become something else, and already are becoming something else. Right? The image-processing capability of things that are derived from LLM intelligence is shocking already and will only get more so. So, in other words, I think we are... in evolutionary biology, we sometimes talk about an adaptive landscape, in which opportunities are peaks, and the obstacles to getting to a new peak, the valleys, are what stand in our way. We have crossed an adaptive valley, and we have touched the foothill of a peak that we can't see. It's shrouded in clouds. We don't know how tall it is. We don't know what its nature will be. But it is, for both better and worse, the nature of our species to climb that peak, which we are doing at an incredibly high rate, and the consequences will simply be what they are.
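The adaptive-landscape picture, peaks as opportunities and valleys as obstacles, is at bottom a hill-climbing metaphor, and can be sketched as a toy local search: a climber taking small random steps gets stuck on the nearest foothill and never crosses the valley to the taller peak unless the occasional jump is large. The landscape function and all parameters below are invented for illustration.

```python
import random

def landscape(x):
    # Two peaks separated by a flat valley: a low foothill near x=1 and
    # a taller peak near x=4 (an invented toy fitness function).
    return max(0.0, 1.0 - (x - 1.0) ** 2) + max(0.0, 3.0 - (x - 4.0) ** 2)

def climb(step_sd, steps=2000, seed=1):
    """Random hill climber: accept a move only if fitness improves."""
    rng = random.Random(seed)
    x = 1.0  # start on the foothill
    for _ in range(steps):
        candidate = x + rng.gauss(0, step_sd)
        if landscape(candidate) > landscape(x):
            x = candidate  # selection: keep only uphill moves
    return landscape(x)

print(round(climb(step_sd=0.1), 2))  # small steps: stuck on the foothill (~1.0)
print(round(climb(step_sd=1.5), 2))  # big jumps can cross the valley (~3.0)
```

The small-step climber can never leave the foothill because every route to the taller peak passes through lower fitness, which is exactly what an "adaptive valley" means.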
We can talk about protecting ourselves, regulating... none of it matters. We opened Pandora's box, and we will discover what happens when you do that, because there's no stopping it now. I will say I have numerous concerns about AI.
I have remote concerns about it turning on us, though I don't even know how remote they are. You know, we've seen AIs conspire to prevent themselves from being turned off. We've seen them utilize personal information about people in the companies that make them in order to prevent themselves from being turned off. So I don't know how remote our concerns actually are there. But let's just say they're going to be benevolent; we built them, they're going to look after us. They're also going to do arbitrary things that I don't know that we're going to survive, you know? Yes, are they going to enhance our intelligence? Of course. Are they going to enhance our stupidity? Yeah, absolutely. Artificial stupidity is an underexplored concept.
Here we all are in comparison to natural stupidity, though, and my...
Oh, I don't think so. I think you are going to see leveraged stupidity like you've never seen before, and that is going to be a disaster. Now, it may be that some group of people figures out how to protect themselves from the consequences of this better than others, and that that becomes the new competitive modality: am I immune to the faddish insanity that emerges from an artificially enhanced kind of intelligence? Who are the new Amish? Who are the people who figured out how to be upslope when the tsunami of stupidity emerges from this new technology? I don't know, but that might be the thing to be: a new Amish person.
Can we cast it into those concepts, homology and homoplasy? Can we cast how evolution is being instantiated, if you will, through the evolutionary lens? It seems to me you're the best person to do that.
Yeah, except what I see is a lot of different routes to destruction. And the reason for that is not that the technology itself is going to destroy us, but that we've built an extremely fragile civilization. We have cultivated none of the wisdom or the immunities that would allow us to endure this safely. And so, given a world in which numerous nations are armed with nuclear weapons, how long is it before the amplifier that exists in LLMs causes a nuclear exchange that would not otherwise have happened? I don't know, but I don't think it's all that far down the road. So one needs to start thinking about how... and that's hardly the only danger that we face.
You know, I don't know how many years it had been since I saw it, but I saw Carl Sagan's Pale Blue Dot speech a couple of days ago, and it's very clear what he was trying to convey. And it was very clear that he saw it then, right? He had no inkling of LLMs, but he understood that the ambitions and animosities that human beings bring to the table from our evolutionary past are a very bad match for the tiny size of our planet and for the fragility of the systems that we have constructed, and that that was going to lead to disaster. And I think we're there, and we have a novel technology that's simply going to amplify everything about us. How well are we doing, Brian? Are we in a position where we just take everything we're doing and multiply it by a factor of 10 and assume it's going to come out okay? I don't think so.
Well, I guess I view you as a sort of optimistic pessimist in some ways, but it depends on the day I encounter you. It reminds me again of a quote from my friend and yours, Chucky Darwin, who said: I am very poorly today and very stupid, and I hate everyone and everything. One lives only to make blunders. I am going to write a little book on orchids, and today I hate them worse than everything. So farewell, and in a sweet frame of mind, I am ever yours, Charles Darwin. That was 160 years ago. He seemed to be quite depressed. And yet that little book on orchids, I'm sure you've read it many times and understand its implications far better than a simple, humble cosmologist.
But where's the optimism? Let's get some sunshine in here, Weinstein, because I actually feel like I've had a second life. I talked to Arthur Brooks yesterday, and he's famous for these concepts of the fluid intelligence that we have when we're young and the crystallized intelligence we have when we're older. And it's impossible to out-compete the young assistant professor in fluid intelligence, in terms of teaching performance and whatnot, but we can do it because we have extra crystallized intelligence. And now I feel like I have both. I feel like a new PhD student. I feel young, vital, vigorous, Bret, because I have these tools. I have 100 PhD students working for me round the clock for basically free, as long as I don't unplug them.
And so what are you optimistic about? Because, I mean, if you say nothing and that we're going to be doomed, I just don't believe that, because there's so much great benefit to humanity from these things, and I think the hype about them achieving superintelligence is overblown. I've talked to Nick Bostrom; I've talked to all the doom-and-gloom people, too, about paperclips being our future endeavors. So I ask you: what is buoying you? What are you doing with them that gives you pleasure and actual gratification? Maybe not just them, but technology in general. It's our superpower, isn't it?
Yeah. I mean, you know, first of all, I'm not depressed. I don't resonate at all with your Darwin quote.
Good.
And I also feel like, you know, I have a job, and it doesn't have a proper job description anywhere. But I do feel like it's my obligation to try to sober people up about things like AI, so that they anticipate the carnage and so that we can better avoid it. And so I don't really like the happy talk about it, because I think it's delusional, and it results in us putting off the preparations that, frankly, I wish we had made before this stuff emerged. Right? People need to understand, and they need to understand that even if civilization is going to go off the rails, there is a de facto competition between those who understand the ways in which it's going to go off the rails and prepare themselves better. The hope would be that some fraction of people cultivate the wisdom to deal with the event horizon, and, having done so, that maybe they keep their heads above water as others drown. And I don't know whether my talking into a podcast camera could possibly have an impact, but to the extent that there's somebody out there who is focused on something about this technology and has not properly understood what it is going to bring with it...
I would love to arm them. I would love to arm them with an understanding of just how dangerous what we're playing with is and the hope that they will figure out what to do about that danger.
Well, in the realm of natural stupidity, let me just hearken back, with a brief reference to your experience in traditional academia, which terminated in a very unpleasant and, quite frankly, morally repugnant outcome for academicians everywhere. And I think it should have been quite a big deal, better known than it was, but I think everything's lost in a Covid black hole now, looking back. But I do think, in terms of evolutionary pressures, the one thing that's most resistant to the pace of evolution is education. What you and I do in traditional academic settings, we're guys scraping on a rock with another piece of rock. I mean, it basically hasn't evolved since the year 1088 at the University of Bologna in Italy. Same exact model, except back then, Bret, the students could go on strike, and when they did, the professors wouldn't get paid.
And thank God we had tenure. But tenure doesn't solve everything, does it, Bret? So I ask you: you and I are both working with Peterson Academy. I think that's great because it's bringing low-cost education. It's not credentialed; it has a lot of limitations. I've talked to Jordan about this many times. But it doesn't seem to me to be the next evolution of what education at the higher level could be. Take us through Weinstein University. How would you redo it? How should it be evolving such that it has a future? I thought Covid would kill it.
A Zoom like this for $30,000 a year or more? It was pathetic. I thought that was the end of it and that we'd be on to brighter new horizons. But it's still the same as ever, maybe even worse. So talk us through Weinstein University. What's taught there? How is it taught? What role does technology play in it? Fieldwork? Walk us through it. Be as expansive as you can be.
Well, I will say that I did not end up as a professor because that was my ambition. I ended up as a professor because I loved science, and because some force that existed long before I was born decided that the process of discovery and the process of education were properly done together. And it's not that I don't see the reason to put those things in the same place; there is a reason, but it's not inherent. And if I could have gone into science without becoming a professor, I probably would have. And that would have been a mistake. I think actually one of the things I discovered by going the professor route was that there was a tremendous amount to be done in terms of innovating mechanisms to reach people with what I think is some of the most interesting material that exists: evolutionary biology and evolutionary reasoning. However, it is also true that I never would have been able to do the job that I ended up doing if I had been in a normal college or university. It was only because Evergreen was so strange. The very thing that killed the place in the end gave Heather and me the freedom to teach in any way we saw fit. And that was a tremendous experience. So one thing I would say is that probably the wrong people are teaching, for the most part, and the structures that guide what they are supposed to present... it's like McDonald's, right? The quality control is so spectacular, but the quality is lousy. You get the same terrible burger no matter what McDonald's you go into. That's what you get at universities. And what you really want is an environment in which you encourage people who think radically differently to figure out how to teach the material in question. And that means you're going to get a lot of duds, but you might get some transcendent stuff too.
So there's that. But one of the things that I thought frequently while I was a professor was: I'm good at this job, teaching students at this level, but it's way too late. The right intervention for these students was far earlier. And by the time people got to my classroom, they were so broken by the standard educational model that it was hard work to open their minds again. Their minds had been closed, ironically, by education. And so one thing I would say is that I think the focus on the university is the wrong one. I think what we should be fixing is the educational, and not even educational, the developmental environment in which children grow up, which should be profoundly educational in a deep sense.
Well, talk us through how you do it. You have children; you're an exceptional father. How do you do that at home, you and Heather?
Well, one thing is, everybody would like an answer, like, oh, you can expose them to this, and that will make them smarter. And the answer is that there's actually no proper way to raise children that does not cause you to run the risk of losing them. All right? The only way that a child can grow up into an adult who can properly manage risk is if they face significant risk as children. The principle that Heather and I live by is something I call the theory of close calls. Imagine that your experiences in life have something like a normal distribution of risks. You've got a lot of very minor risks, or rather, I don't even know what we do with the things in the left tail of the distribution, because we don't even notice that they happen, right? Trivial things that may not even rise to consciousness. You get a lot of intermediate risks, where you stub your toe, or you cut yourself and you bleed. And then, in the right tail of the distribution, you get spectacular risks: things where you narrowly escape death, that kind of thing. So, anyway, the point is this:
So anyway, the point is, you can infer something about how much risk you're facing by how many close calls you have with things that carry a profound, unrecoverable downside. If you have a lot of those close calls, it tells you you're living incorrectly; it tells you that you're gambling and you may not make it to adulthood. Right? So the point is, the parent should want the child to live a life that results in a certain amount of injury from which they will recover and learn. If you never break a bone in youth, it might be that you're living too safely. And the danger then is that when you get to adulthood, you won't have learned how to deal with hazards where you're not just playing with whether you'll wreck your summer by breaking your arm; you're playing with whether you'll live to see the end of the car trip. Right? I would also say that you will find many brilliant people had childhoods full of stories about dangers they personally experienced and learned from: people who lived on the edge of a wilderness or on a farm somewhere, got into trouble, and got out of trouble. And of course, you don't hear the stories of the ones who didn't survive.
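Weinstein's "theory of close calls" is, at bottom, a statistical inference: the number of near-misses you observe in the tail of your risk distribution is a proxy for your overall exposure. A toy simulation can illustrate the idea (this sketch is entirely my own construction, not anything from the episode; the cutoff and exposure values are arbitrary assumptions):

```python
import random

def count_close_calls(exposure, n_events=10_000, tail_cutoff=4.0, seed=0):
    """Draw risk magnitudes from a normal distribution whose spread
    scales with 'exposure'; count how many land in the extreme right
    tail (the 'narrowly escaped death' events)."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_events)
               if rng.gauss(0, exposure) > tail_cutoff)

# A cautious life versus a reckless one: the same number of events,
# but very different counts of close calls, so counting near-misses
# lets you infer how much risk you are actually running.
cautious = count_close_calls(exposure=1.0)
reckless = count_close_calls(exposure=2.0)
assert reckless > cautious
```

Because tail probabilities grow very fast with the spread of the distribution, even a modest increase in exposure multiplies the expected number of close calls, which is why a handful of near-death experiences is such a strong signal that you are "living incorrectly."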
And it's also true that in some places the environment itself raises the stakes. Actually, the theme song to the Dark Horse podcast is the Marble Machine song. Martin Molin built a machine, a unique musical instrument that functions in a totally novel way. There's an amazing video; people should look it up: the Marble Machine. Anyway, when you listen to Martin talk about how he was brought up, he grew up, in the Netherlands I believe, in a place where the playgrounds had pieces of lumber with nails sticking out of them. That sounds horrifying at one level. On the other hand, the point is, here you've got a genius who grew up playing in a playground where the stakes were comparatively high.
I don't think those things are disconnected. So what I would say is, there are a couple of different hazards in raising kids. One is that you will come to think that the important stuff to know is knowledge of a kind that can be said. Now, there is a lot of important stuff of a kind that can be said or written, and I'm not discounting its value, but it is totally possible for people to write and say things that sound tremendously important and are just actually wrong. So you can't be a fully intelligent person if all of the knowledge that you have is abstract. You have to experience the physical world directly. You have to do things.
You have to develop skills where it is not necessary for anybody to tell you whether you've succeeded or failed, because then those systems, which are real and therefore have properties built into them, actually educate your mind in a way that you may not be able to report, but that makes you smart in a way that is fundamental to being a human being. So you need to face risks. You need to have physical interaction with the world in ways that will cause you to be smart. You need to develop proper skepticism of what is reported by authorities. I think we live in an era where somehow, for reasons that deserve a lot of attention, the authorities seem to be wrong about vastly more than they are actually correct about. They are more of a hazard to you than you, left to your own devices, would be. That's a completely intolerable circumstance.
But anyway, I would intervene earlier. I would allow things that are real to teach children most of the lessons. I would reserve school for that smaller fraction of things that the world will not teach you. The world will not teach you calculus just because you're interacting with physics; you have to supplement in order to learn calculus. You'll learn to speak perfectly well without anybody teaching you how to do it, but you will not learn to read without teaching, so we have to teach you how to read, things like that. School should be a supplement; it should not be the primary mechanism by which you gain intelligence.
So, Bret, in Greek mythology there's a famous dark horse; of course, we know it as the Trojan horse. The citizens of Troy were warned by a prophetess named Cassandra. The gift of prophecy she enjoyed was actually a curse Apollo had bestowed upon her: she could see the future with perfect clarity, but no one would ever believe her. Now, I'm not calling you a Cassandra, but today we do apply that moniker to people who give urgent, evidence-based warnings that go unheeded until it's too late. So I want to ask you: is AI one of those risks where it may already be too late, despite people like you warning against evolutionary agents, like AI systems, preying on their host cultures? How do we stop it? What can we do to prevent Homo sapiens from going the route of the Neanderthals? It seems like the outcompetition we're facing is greater by orders of magnitude.
So put on your Cassandra hat, you know, but don't get too depressed. It's not a curse so much as a question for you to answer for us.
Yeah, you know, I do resonate with the mythology of Cassandra, for a reason.
The first thing I would say is: reconcile yourself to the idea that we are going to go extinct. There's nothing that prevents that. There's no scenario in which it doesn't happen.
The question is how long we can stave it off. And you might say, if we can't prevent it ultimately, then what's the point? I think there's a mirror of this in our own lives: we know that we will die, yet we don't surrender. And I think that is the job. Human beings have a moral obligation to stave off extinction as long as we can, and to make the world that new generations encounter as healthy, hospitable, rewarding, and provocative of the right kinds of instincts as we can. We have a duty to our descendants: to enable them, to liberate them, to allow them to do glorious things with the opportunity of being a human being. It is shameful that we induce so many people to squander this opportunity. So I see it as the same puzzle for the individual and for the species. The fact that we will ultimately go extinct is no argument against trying, even if it's hard to explain why that is. You know, why are you going to take your next breath, Brian? You're going to do it because some part of you knows it's the right thing to do, not because it's something you can defend all the way to bedrock. You can't. So in light of that, I think we have to recognize that we are in a pattern of accelerating self-injury.
The processes of the last several hundred years show that we repeatedly attempt to solve problems with novel tools, novel chemicals, and other influences, and we do ourselves grave harm each and every time, even if we eventually figure out that maybe it's not such a good idea to expose people to, say, mercury.
So we should get better at this. We should start anticipating that our solution-making does a lot of harm, and we have to do better than waiting decades or centuries to figure out what that harm is before we start addressing it. We should see it coming and reduce its degree, and hopefully we stabilize our civilization enough that future generations can solve problems we can't even see yet. That would be the ideal situation. But I think ultimately this comes down to.
A philosophical recognition. The fact is, selection has set us in motion on an objective that will ultimately be futile. But that does not degrade at all the profound experience of living, and not only living, but living as the one creature we are aware of that is capable of understanding where we are and how we got here, of understanding the implications of what we do to each other in the interest of genomes that, frankly, have objectives no rational person could honor. So I think once you see how lucky you are to be what you are, and what an amazing thing it is to be the creature that you are, enabled with the intelligence that we have, enabled with the knowledge of what we are, handed to you already worked out by our elders.
We simply have to provide that for as many people as possible and encourage them to make the most of it that they can, even though, whether it's the heat death of the universe or some other thing that causes us to blink out, there's no way to make it permanent. This is a place in which I think the Buddhists have the right idea: building elaborate sandcastles on the beach, knowing that they will be washed away when the tide comes back. That's the situation we're all in, whether we know it or not.
Or, as Darwin said in an uplifting paragraph: "I am rather despondent about myself, and my troubles are of an exactly opposite nature to yours, for idleness is downright misery for me, as I find here. I cannot forget my discomfort for even a single hour. I have not the heart or the strength at my age to begin any investigation lasting years, which is the only thing which I enjoy, and I have no little jobs which I can do. So I must look forward to Down graveyard as the sweetest place on earth." A very uplifting way to end this episode with Bret Weinstein of the Dark Horse podcast. Check him out on Rumble.
I think you're on Rumble. I have not migrated to Rumble; you'll have to tell me if it's worth going over there. And Peterson Academy: check out his course Evolutionary Inference with the inimitable Dr. Heather Heying, who will be a guest, I'm told, hopefully as well, in a solo episode to come. Bret, thank you very much.
Thank you. It was a lot of fun.
All right, so if you have a few more minutes, we'll do three or four quick shorts. These are like one- to one-and-a-half-minute-long answers, and don't reference the main part of the podcast; like, don't say "as I told you earlier." Just assume it's a mini episode. Okay? All right, here we go. The first one's going to be about solar storms and technological fragility. Okay, here we go. Three, two, and one.
You've warned that a modern-day Carrington Event could collapse our electrical infrastructure overnight, and that we may be due for a solar superstorm. Why do you think space weather isn't taken as seriously as climate change? And what do you think planetary preparation should actually look like?
Yeah, this is a tough one. As for why it's not taken as seriously as climate change, I think there are unhealthy dynamics in our academic environments that cause certain narratives to run away, because those who are in a position to study them end up talking themselves into their importance, into their being a higher priority than other things. So I think we are, in effect, faced with a delusion about climate change. I'm not arguing that climate change doesn't exist or that humans are not a contributing factor, but as for whether or not it is the crisis of our age, I am ever more doubtful. Solar storms are a particular problem because their periodicity is such that most people do not understand the hazard, and by the time you do understand the hazard, it is liable to be too late. If you look at the shores of Japan, you will find there are stones placed on hillsides that say: don't build below this stone.
These are tsunami warnings. You can look these things up.
In Indonesia, when the Boxing Day tsunami arrived, most of the people on the coast there had moved to the coast sometime since the last major tsunami. There was indeed evidence on hillsides that tsunamis had been a recurrent phenomenon, but major tsunamis were hundreds of years apart. And so what you had was a naive population that didn't even have a word for this event, so people were caught off guard. And my point is, the nature of solar storms is such that the last time we had a truly major one, the Carrington Event of 1859, the world was not an electronic place. In fact, there was very little that was electrical at all. We had telegraph systems, which in fact were thrown into chaos by the Carrington Event.
Telegraph operators were shocked at their desks. Telegraph stations caught fire. Telegraph operators found that they could transmit messages even though the power to the system was off, based on the currents induced in the wires. So that was profound. But for other people, whose lives were not built around electricity, it was not a significant event. If you reran the Carrington Event now, it would be catastrophic. All of the systems that would simultaneously go down as a result would simply disable civilization, and we have not properly prepared for it. We don't have access, for example, to a vast number of transformers with which to replace the ones that would be burned up by such an event.
And just to round this discussion out: the Carrington Event came at the beginning of a period of decline for our magnetic field, which shields us from solar storms. So the tendency for a given level of eruption on the sun to produce a bad solar storm is going up, and therefore the danger is growing. But because things have been quiescent in the past, people have inferred a false sense of safety. And this is a place where we need to see what's coming, based on our understanding of the role that our magnetic field plays in protecting the Earth from solar violence.
The middle one is going to be about hypernovelty and evolutionary mismatch: the anthropic limits in the face of cosmic acceleration. Okay, here we go. You and Heather have coined the term hypernovelty to describe how our environment is changing too fast for human biology to keep up. Could the accelerating expansion of cosmological knowledge and technology, like AI, quantum computing, and the like, be contributing to an epistemic instability that's affecting our entire species?
Yeah, I think it's clear that that will happen, because what AI is introducing is an accelerant into the process of technological change, and technological change is already outpacing our ability to keep up. The fact that your developmental environment doesn't look like your adult environment is already a recipe for disaster, and if we increase the rate of change by a factor of 2, 3, or 10, then that problem will get worse. On the other hand, it is possible that we could train AIs to anticipate the hazards that we produce with technological shifts and to back us away from that very same process. It would require us to employ a kind of wisdom that I have not seen. But it may be that if people catch on to the degree to which hypernovelty is the explanation for their unhappiness, for their fears, for their lack of health, we will point AI at the process and rein in the hypernovel catastrophe.
Okay, the last one is also about risks; we may have gotten into it already.
Okay, last one. Okay, here we go.
Climate change dominates our existential-risk narratives, but you've argued that solar storms, geomagnetic collapse, and galactic dynamics may be more immediate. What would a truly scientific risk-prioritization system look like, one that doesn't exclude non-anthropogenic risks just because they're outside of the Overton window?
Well, I'll give you a crude sketch of what I think the answer must be. The problem is we have created adverse incentives that cause people to promote the idea that hazards that happen to fall in their area of expertise are more significant than competing hazards in some other domain. We can't afford that. What you need is a body of intelligent, broad-minded, well-trained thinkers who are remunerated based on their ability to predict and correctly prioritize hazards. In other words, you do not want them to be paid for writing a grant proposal that alarms people about some hazard that they happen to have the antidote to. You want this body to debate within itself what the priority scheme should be, based on available evidence and, most importantly, predictive power. That's a little tough, because for certain hazards you may not get even one test run. But nonetheless, the job should absolutely exclude the perverse incentive that comes from promoting hazards over which you have special expertise.
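The remuneration scheme Weinstein sketches, paying analysts for predictive accuracy rather than for alarm, resembles what forecasting research calls a proper scoring rule. Here is a minimal sketch of one way it could work; this is my own illustration, and the Brier score, the payout formula, and the example track records are all assumptions, not anything stated in the episode:

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; because it is a proper scoring rule, reporting your
    honest probability maximizes your expected score."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def payouts(analysts, budget=100.0):
    """Split a fixed budget in proportion to (1 - Brier score),
    rewarding calibration instead of alarmism."""
    skill = {name: 1.0 - brier_score(f) for name, f in analysts.items()}
    total = sum(skill.values())
    return {name: budget * s / total for name, s in skill.items()}

# Hypothetical track records: (predicted probability, outcome 0 or 1).
# The "alarmist" always cries 90% danger; the "calibrated" analyst
# reports probabilities that track what actually happens.
analysts = {
    "calibrated": [(0.8, 1), (0.2, 0), (0.6, 1)],
    "alarmist":   [(0.9, 0), (0.9, 0), (0.9, 1)],
}
pay = payouts(analysts)
assert pay["calibrated"] > pay["alarmist"]
```

The design choice matters: under a grant-style system, the alarmist's dramatic warnings attract funding regardless of accuracy, whereas under a proper scoring rule, overstated hazards actively cost the forecaster money.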
All right, Bret, that's enough for today.
All right.
That was fun.
That was fun.
Hopefully we'll get to meet up sometime.
I would love that.
Yeah. And say hi to Heather and I'll try to get her on later in the year. I'm about two months backed up, but it's a good problem to have, right?
Yeah, absolutely.
All right.
All right.
Bye, Bret. Have a good weekend.
Bye. Good luck to you, Brian. You too.
Thanks a lot.
If my chat with Bret blew your mind about evolutionary trade-offs and the constraints of complex biology, you'll need to check out my episode with Michael Levin, where he explored how biology might not be as hardwired as we once thought. Levin's research on bioelectricity, regenerative medicine, and xenobots suggests organisms can rewrite their own blueprints, which directly challenges some of what we just discussed with Bret. It's one of the most mind-bending conversations I've ever had about what life actually is and whether we can fundamentally reprogram it. So click here. And don't forget to like, comment, and subscribe.
🔖 Titles
Hypernovelty and the Future: Bret Weinstein on Human Evolution, AI, and Existential Risks
Pandora's Box Opened: Bret Weinstein on How AI and Rapid Change Are Reshaping Humanity
Evolution Under Attack: Aging, Culture, and the Accelerating Pace of Technological Change
Are We Outpacing Ourselves? Human Biology Versus the Speed of AI and Cosmic Shifts
From Stonehenge to ChatGPT: Bret Weinstein on Adaptation, Extinction, and Modern Dangers
Beyond Climate Change: Solar Storms, AI, and the Real Risks to Civilization Discussed
Evolutionary Tradeoffs and Technological Fragility: Bret Weinstein Explains Why Human Survival Is at Risk
Hypernovel Environments and Evolutionary Mismatch: Bret Weinstein on the Next Phase of Humanity
Will Artificial Intelligence Become a New Species? Insights from Bret Weinstein and Brian Keating
Human Adaptation in a World of Change: Bret Weinstein on Aging, AI, and the Unknown Future
💬 Keywords
evolution, aging, senescence, AI (artificial intelligence), hypernovelty, scientific method, falsification, complexity, solar superstorms, climate change, environmental change, genome, pleiotropy, sequestered germline, somatic cells, adaptation, reproductive strategies, lifespan variability, cancer, cellular cooperation, precautionary principle, technological change, cultural evolution, education reform, homology, homoplasy, convergent evolution, risk assessment, existential risk, nuclear weapons
💡 Speaker bios
Short Bio for Brian Keating (Story Format):
Brian Keating’s curiosity knows no bounds—whether he’s exploring the beaches of San Diego or pondering life’s greatest mysteries. He’s the kind of scientist who wonders what a hyper-intelligent alien, or a future artificial intelligence, might deduce about the secrets of life if handed a handful of sand, or a sample of DNA. Inspired by everything from dissecting frogs in high school to discussing universal principles with other thinkers, Brian’s quest revolves around uncovering patterns so fundamental that any intelligence in the universe could grasp them—whether that involves the basics of biology or more profound concepts like aging. Through his engaging questions and explorations, Brian has become a bridge between physics, evolutionary biology, and our shared curiosity for how the universe—and life itself—works.
Short Bio for Bret Weinstein (Summarized Story Format):
Bret Weinstein grew up fascinated by the intricacies of nature, drawn especially to the complexity that biology holds over other sciences. As he moved along his scientific journey, he realized that while the method scientists use—the cycle of observing, hypothesizing, predicting, and testing—remains constant whether in a chemistry lab or a tropical forest, the true nature of what they're investigating in biology is far more complex and emergent.
Weinstein’s signature approach has always involved grappling with this complexity, recognizing that biological systems, unlike simpler ones, often defy straightforward conclusions. For him, a single observation can overturn an existing hypothesis, showcasing the unpredictable beauty and challenge of biology. His contributions, built on an appreciation for the subtle differences in scientific inference, have shaped much of his work and public engagement—encouraging both scientists and laypeople to understand how deeply interconnected complexity and scientific discovery truly are.
ℹ️ Introduction
On this episode of The INTO THE IMPOSSIBLE Podcast, Brian Keating sits down with evolutionary biologist and Dark Horse Podcast co-host Bret Weinstein to unpack one of humanity's most urgent and unsettling questions: Are we living through the next phase of human evolution, and is it happening too fast for biology to keep up?
From the inevitability of aging encoded in our DNA to the dizzying pace of technological change, Bret Weinstein challenges standard narratives about health, adaptation, and civilization's survival. Together, Brian Keating and Bret Weinstein dive deep into evolutionary trade-offs, the profound impact of hypernovelty on our species, and how artificial intelligence might be accelerating evolutionary pressures at a rate our bodies and minds were never designed for.
They also touch on existential risks you probably haven’t thought much about—like solar superstorms and the fragility of our technological infrastructure, and explore how misunderstood evolutionary mechanisms could help predict the future of AI, education, and even our own extinction.
Whether you’re an optimist or pessimist about humanity’s future, this conversation will challenge your assumptions about what it means to be human in a rapidly changing, increasingly complex world.
📚 Timestamped overview
00:00 Can universal principles of evolution enable hyper-intelligent beings or AI to predict phenomena like aging from biological material?
08:14 Sequestered germlines shape mammalian senescence, unlike plants; aging impacts fruit trees differently, while single-celled organisms avoid senescence entirely.
12:34 Short-lived creatures prioritize rapid reproduction; safe, long-lived species slow life cycles. Environment impacts reproductive strategies.
17:46 Resurrecting extinct creatures results in hybrids, not true revival, risking scientific hype and misleading the public.
24:01 Biology is unique due to diverse inputs, with familiar ancestral exposures like hydrogen peroxide being less risky, while unfamiliar elements like injected aluminum may disrupt health, underscoring the precautionary principle.
27:14 Modern systems prioritize short-term gains over long-term health, often exposing people to unseen risks while dismissing precautionary principles for novel exposures.
33:51 Challenges and pressures can foster growth, resilience, and strength, akin to antifragility or evolutionary fitness.
41:02 Nocturnality evolved multiple times, influencing vertebrate eye adaptations like larger, monochromatic eyes to amplify light, seen in animals like Old World fruit bats.
48:16 Culture serves as a tool for genetic goals, subordinated to DNA objectives, contrary to Dawkins' view.
54:02 LLM success is tied to gaming-optimized hardware, but concerns about AI dangers, especially from current LLM+GPU systems, may be overhyped given their narrow focus.
55:55 LLM technology is overhyped; evolution shows multiple paths to solutions. LLMs will evolve further, with significant advancements already visible.
01:05:13 Obligation to sober people about AI risks, avoid delusion, prepare for potential societal challenges.
01:09:04 I became a professor not by ambition but through a love for science, discovering the value of combining discovery, education, and innovation in reaching others.
01:16:59 Develop self-reliance, face risks, engage with the world, and question authority to cultivate fundamental intelligence.
01:18:42 Greek mythology's Cassandra, cursed with ignored warnings, parallels modern concerns about unheeded AI risks and the potential evolutionary consequences for humanity.
01:26:30 The text questions the prioritization of climate change as the crisis of our age and highlights solar storms as an underappreciated hazard, noting historical warnings like Japan's hillside stones.
01:33:05 Establish a well-trained, unbiased group to prioritize hazards based on evidence and predictive accuracy, not self-serving incentives.
01:35:00 Check out my episode with Michael Levin on bioelectricity, regenerative medicine, and reprogramming life.
❇️ Key topics and bullets
Here's a comprehensive sequence of topics covered in the podcast episode "The Next Phase of Human Evolution" featuring Brian Keating and Bret Weinstein:
1. Introduction to Evolutionary Constraints and Aging
Evolutionary trade-offs in human biology
Genes encoded for reproductive success over longevity
The cost of complexity and why aging isn't a "curable" disease
Genome pressures and multitasking genes
2. The Limits of the Scientific Method in Complex Systems
How biological inference differs from physics/chemistry
Complexity, emergence, and noise in biological systems
The role of statistics and relaxed falsification in biology
Comparison of biology to engineering and economics
3. Universal Evolutionary Principles & Predicting Aging
Can intelligence or AI infer aging from biological samples?
Pleiotropy and senescence in multicellular organisms
Differences in aging processes between plants, animals, and single-celled organisms
The role of the sequestered germline in aging
4. Variability in Lifespans Across Species
Evolutionary reasons for lifespan differences (e.g., Greenland sharks vs. short-lived mammals)
Influence of environmental safety, predation, and reproductive strategies
Evolutionary decision-making at species and individual levels
5. De-extinction, Ancient Genomes & Resurrection of Species
Ethical and practical considerations of resurrecting extinct species and hominins
Risks and overhyped narratives in genetic engineering and paleogenetics
The realities and limits of de-extinction science
6. Evolutionary Hazards & Strange Features
Cancer as a breakdown of cellular cooperation
Novel environmental exposures and mismatch with evolved defenses
Precautionary principles in health and the risks of novel molecules
7. Evolutionary Jeopardy & Current Pressures on Humanity
Rapid environmental and technological change as existential threat
Limitations in human capability to adapt culturally and biologically
Hypernovelty: The concept and consequences of change outpacing adaptation
8. Homology vs. Homoplasy in Evolution
Definitions and importance of homology (shared ancestry) and homoplasy (convergent evolution)
Examples in animal evolution (e.g., evolution of eyes)
Implications for evolutionary reasoning
9. Artificial Intelligence as an Evolutionary Event
Comparison of AI development and biological evolution
LLMs (Large Language Models) and speciation events in technology
Risks of rapid technological evolution and limitations of human preparedness
The role of selection and heredity in natural and artificial systems
10. Existential Risks: AI, Climate, Solar Storms, and More
Solar superstorms, electromagnetic field collapse, and the fragility of civilization
Weaknesses in current risk prioritization (e.g., focus on climate change vs. other hazards)
The impact of periodic catastrophic events and lack of preparedness
11. Optimism, Pessimism, and the Human Future
Philosophical discussion on extinction, progress, and legacy
Human responsibility to prolong species survival and improve descendants’ lives
Analogies to Buddhist impermanence and sandcastles
12. Rethinking Education for the Next Evolutionary Phase
Structural failures in traditional academia
Vision for developmental environments and early educational interventions
Importance of risk, real-world experience, and skepticism in raising intelligent humans
13. Closing Reflections
References to Greek myths (Cassandra) as warnings about unheeded risks
The inevitability of extinction and striving for resilience and wisdom
👩💻 LinkedIn post
🚀 Just finished an eye-opening session with Dr. Bret Weinstein on “The INTO THE IMPOSSIBLE Podcast” with Brian Keating. If you care about the future of humanity, technology, and evolution, this is a must-listen—or in my case, a must-read! 🧬🤖
Here are my key takeaways:
🔹 Hypernovelty Is Our Biggest Challenge
As Bret Weinstein points out, our environment is changing at a pace far faster than human biology can adapt. This "hypernovelty" isn't just changing our technology; it's affecting our psychology, health, and even our evolutionary processes.
🔹 AI Is Not Just a Tool—It’s an Accelerant
AI is accelerating the rate of change, pushing us into uncharted territory. According to Bret Weinstein, we've "opened Pandora's box," and the real danger may not be hostile AI, but rather amplified human errors, societal fragility, and a lack of preparedness for the storms to come.
🔹 True Evolutionary Learning Requires Real Risk & Adaptation
Whether in nature or education, progress stems from facing real challenges, not just abstract knowledge. Bret Weinstein shares how allowing for genuine exploration, struggle, and failure is key not just in childhood, but for societies trying to remain resilient in the face of rapid change.
This conversation makes it clear: being future-ready means more than just upgrading our tech. We need to rethink how we adapt—personally, professionally, and as a species.
💡 Curious to explore these ideas? Highly recommend tuning in to “The INTO THE IMPOSSIBLE Podcast” for the full discussion!
#Evolution #AI #FutureOfWork #Adaptation #PodcastRecap
🧵 Tweet thread
🧵 Your Body is Designed to Fail—And That's Evolution's Genius. A viral breakdown of a mind-bending conversation with Brian Keating & Bret Weinstein:
1/
Why do we age? Brian Keating opens with a chilling truth: "Your body is designed to fail. It's literally encoded in your genes by evolution itself." Aging isn’t a bug. It’s the price of complexity.
2/
Evolution cares about passing genes forward—not your golden years. The same gene that makes you strong at 20 starts to degrade you at 60. Harsh, but true.
3/
Brian Keating: "Once you’ve done your reproductive job, you’re maybe obsolete." This isn’t personal—it’s evolution.
4/
But wait, it gets wilder: Bret Weinstein drops this: "We opened Pandora's box, and we will discover what happens when you do that." What’s Pandora’s box? Humanity’s rapid climb up a peak we can’t see (think AI, tech, and change coming way too fast for biology to catch up).
5/
The duo unpacks how AI is accelerating change faster than our minds—or bodies—can keep up. Hypernovelty isn’t science fiction; it’s happening now, pushing us toward what Bret Weinstein calls "epistemic instability."
6/
"While everyone obsesses over climate change," Brian Keating says, we’re ignoring solar superstorms and other threats that could fry both our tech and DNA... overnight. Are we prioritizing the RIGHT risks?
7/
What if a superintelligent alien (or AI) found a sample of our DNA? Could it predict aging—or even our timescale for death? Bret Weinstein says YES, and explains how senescence would evolve with any material information storage molecule, not just DNA.
8/
The lessons from biology apply to technology. Our "fragile civilization" is rushing into new territory—AI, genetic resurrection, and environmental chaos—without the wisdom or "immunities" to survive. Who becomes the new "Amish," immune to the flood of digital stupidity?
9/
Both Brian Keating & Bret Weinstein agree: We need to build CULTURE and EDUCATION that actually evolve as fast as the world is changing—otherwise we’re training for a world that doesn’t exist by the time we grow up.
10/
The takeaway? Bret Weinstein: “Humans have a moral duty to stave off extinction for as long as possible. Give the next generations a shot at using the greatest superpower ever—our intelligence.”
🔥 If you ever wondered how evolution, AI, and our weird futures tangle together, this thread is your call to think way bigger.
🧠 RT for existential tingles.
💬 What scares/excites you more: AI, resurrected Neanderthals, or your own aging?
🗞️ Newsletter
Subject: Into The Impossible: Bret Weinstein on Human Evolution, AI, and the Dangers of Hypernovelty
Hey Impossible Thinkers,
This week on the INTO THE IMPOSSIBLE Podcast, host Brian Keating welcomes evolutionary biologist and Dark Horse co-host Bret Weinstein for one of our deepest dives yet into the unfolding future of humankind. Here’s what you need to know from this mind-expanding conversation:
Why Does Aging Seem Unstoppable?
According to Bret Weinstein, our bodies are "designed to fail" by evolution itself. Aging isn’t just a bug—it’s a direct result of evolutionary trade-offs. Genes that help us thrive when young may degrade us as we age, simply because evolution cares more about reproduction than your golden years.
Are We Racing Too Far Ahead for Our Biology to Keep Up?
The episode explores the concept of hypernovelty—environmental and technological change happening so fast, human biology and culture can’t possibly keep up. As Bret Weinstein warns, “We opened Pandora’s box, and we will discover what happens…” AI and quantum computing aren’t just tools—they’re accelerants, pushing us beyond the evolutionary tempo we’re designed for.
Lessons from the Lab and the Tropical Rainforest
They dive into why the scientific method itself must bend for biology’s complexity. Unlike physics or chemistry, biological systems are noisy, interconnected, and resistant to clean, simple cause-and-effect reasoning. In other words, we need thousands of data points—not just one “aha!” experiment.
Can Outsiders Understand Our Fate?
Brian Keating poses a wild thought experiment: Could an alien or superintelligent AI, given only Earth’s DNA, predict that we must age and die? Bret Weinstein argues yes—evolutionary patterns are universal, even if their details differ.
The Looming Risks We’re Not Prepared For
While climate change gets all the headlines, Bret Weinstein points out that solar storms, geomagnetic collapse, and even the rapid rise of AI may be far greater threats to our civilization—because their unpredictability, speed, and scale are things our species and systems are simply not built for.
Wild Cards for the Future: Neanderthals vs. AIs
What's more dangerous: reviving extinct human relatives, or releasing swarms of superintelligent AI? The answer isn’t simple. Bret Weinstein is fascinated, albeit cautious, about both—but believes the real danger lies in our tendency to leap ahead without understanding the consequences.
Hope for Homo Sapiens: Can We Adapt?
Despite painting a serious picture, Bret Weinstein insists that our moral imperative is to stave off extinction for as long as possible—and to make the world richer for future generations. Wisdom, flexibility, and humility might be our best tools.
What Should You Do Next?
Listen for insights on adapting to novel risks, both as individuals and as a society.
Check out the related episode with Michael Levin for a fresh perspective on bioelectricity and the rewiring of life itself.
Want to help shape the next phase of AI? Liner (aligner.com) is recruiting scientific experts to help “teach” AI models the difference between good and truly great thinking.
🎧 Tune in now to the INTO THE IMPOSSIBLE Podcast for intelligent conversation at the edge of science, society, and survival.
If you enjoyed this episode, hit reply and tell us: What existential risk keeps YOU up at night—and do you see AI as a threat or an opportunity?
Until next time,
The INTO THE IMPOSSIBLE Podcast Team
Don’t forget to subscribe, like, and leave a review if you haven’t already. Go beyond the possible with us every week!
❓ Questions
Here are 10 thoughtful discussion questions based on this episode of The INTO THE IMPOSSIBLE Podcast featuring Brian Keating and Bret Weinstein:
Brian Keating and Bret Weinstein discuss how our bodies are “designed to fail” due to evolutionary trade-offs. Do you agree that aging is an inevitable evolutionary price for complexity, or is it something science might one day conquer?
Bret Weinstein argues that the scientific method must be applied differently in complex systems like biology compared to physics or chemistry. How do you think scientific standards should adapt when studying life versus the physical universe?
The episode explores the idea that technological and environmental changes now occur faster than human biology can adapt. How do you personally feel the pressures of “hypernovelty” in today’s world?
Bret Weinstein compares the return of extinct species (like Neanderthals) with the rise of AI or even the arrival of aliens. Which scenario do you think would be riskier for humanity, and why?
There’s an interesting segment about evolutionary mismatches—how our brains and physiology are out of sync with modern life. What everyday examples do you see of this evolutionary mismatch today?
Bret Weinstein makes a distinction between adaptation via genetics and adaptation via culture or technology. In which ways do you think culture has helped or hurt our evolutionary trajectory?
The conversation touches on artificial intelligence as a new “species”—one evolving at electronic rather than biological speed. Do you think AI poses more benefit or existential risk to humanity, and what guardrails (if any) should we put in place?
How might our incentive structures in academia and research (such as those mentioned in risk assessment) distort our collective priorities around existential threats like climate change, AI, or solar storms?
Discuss Bret Weinstein’s “theory of close calls” in raising resilient children: Do you agree that exposure to (manageable) risks is essential for growth, or do the dangers outweigh the benefits?
Reflecting on the concept of human extinction being inevitable, how do you balance a sense of purpose or optimism with the understanding that our species will not last forever? Does this realization make you think differently about the future?
✅ What if your body's biggest weakness is actually its greatest evolutionary trade-off?
✅ Evolution didn't design us to last forever—aging, cancer, and even our rapid adaptation all come with hidden costs.
✅ In the latest episode of The INTO THE IMPOSSIBLE Podcast, host Brian Keating goes deep with evolutionary biologist Bret Weinstein, exploring everything from the unavoidable reality of senescence to the dangers (and hopes) of AI accelerating human evolution past our biological limits.
✅ Takeaway: From ancient genes to future tech, your longevity—and the fate of our species—may hinge on how we handle the next wave of change. Curious yet? Dive in and listen now!
Conversation Starters
Here are some thought-provoking conversation starters inspired directly by this episode of The INTO THE IMPOSSIBLE Podcast with Brian Keating and Bret Weinstein:
Aging: Evolutionary Feature or Flaw?
Brian Keating and Bret Weinstein discuss how our bodies are "designed to fail" and that aging is baked into our biology. Do you agree that aging is an inevitable evolutionary trade-off? What potential do you see for science to challenge this?
AI: Pandora’s Box or Human Salvation?
In the episode, Bret Weinstein warns that we’ve "opened Pandora's box" when it comes to AI, and our preparedness is "abysmal." Do you think the risks of artificial intelligence outweigh the potential benefits, or are we overhyping the dangers?
Hypernovelty: Adapt or Get Left Behind
The conversation introduces "hypernovelty"—environments changing too fast for our biology to keep up. How do you personally experience this rapid change in your daily life, and do you feel society is adapting fast enough?
Cancer: Unnatural Modern Epidemic?
Bret Weinstein claims the amount of cancer we see today is "wholly unnatural" due to environments filled with novel molecules. Do you think our modern lifestyles are the main culprit, or is cancer just part of our evolutionary baggage?
Resurrecting Extinct Species: Fascinating or Dangerous?
Would reviving Neanderthals, mammoths, or dire wolves be inspiring or a threat to humanity? What ethical concerns come to mind, and how do you weigh the pros and cons of de-extinction?
Anthropic Risk Blind Spots
Why do you think society obsesses over climate change while largely ignoring other threats like solar storms or geomagnetic collapse, as discussed on the podcast? What risks do you think deserve more attention?
Education: Broken by Design?
Bret Weinstein argues that students reach college "so broken by the standard educational model." What would your ideal university look like, and how would you reform education for the 21st century?
Evolving with Culture vs. Genes
Humans, unlike most animals, adapt rapidly thanks to culture. What recent cultural or technological changes have challenged your worldview, and do you think our culture can keep pace with technological acceleration?
Artificial Stupidity: Underexplored Threat
The idea that AI could amplify human stupidity even more than intelligence was raised. What examples of "artificial stupidity" have you seen already, and what measures could help counteract this?
Facing Extinction: Should We Be Optimistic or Realistic?
Bret Weinstein suggests we all must accept that extinction is eventually inevitable, but that doesn't mean we should give up. Do you find this perspective motivating or discouraging, and what do you think our moral responsibility should be for future generations?
Jump in and share your thoughts—let’s get the debate going!
🐦 Business Lesson Tweet Thread
Thread: Why Your Fast-Changing World Is Making You Unhappy – And What To Do About It 🧵
1/ Humans were built to adapt, but tech is evolving way faster than our biology ever could. We're specialists at novelty, but the pace is breaking us.
2/ Bret Weinstein nails it: We’re not sick because we’re weak, but because we’re in an environment our genes didn’t sign up for.
3/ Most of life’s adaptations happen over millennia. But now? The world you trained for as a kid is gone by the time you hit adulthood. Lost at sea, treading water.
4/ We love big changes, but every leap in technology or culture leaves us scrambling to catch up. Our bodies, our minds weren’t designed for daily revolutions.
5/ So what do you do? Bret Weinstein says lean into reality. Get your hands dirty, face risk, let the physical world teach you—don’t just live in abstractions.
6/ Skip the happy talk. Admit it’s rough out there. Then build antifragility. Get smarter, faster, and more skeptical—especially of the authorities who claim they have it all figured out.
7/ The next evolution? It’s not about more AI. It’s about breeding wisdom, resilience, and real-world grit before you get swept up by the next algorithmic wave.
8/ We survive not by being the strongest, but by evolving our minds for this chaos—and learning from every close call along the way.
9/ TL;DR: Don’t waste time yearning for stability. Get wise, get real, and watch out for the hype. No algorithm can replace the lessons from the edge.
👇 Reply with your best adaptation hack.
✏️ Custom Newsletter
Subject: The Next Phase of Human Evolution: Aging, AI & Our Place in the Universe 🚀
Hey INTO THE IMPOSSIBLE friends!
We’ve got an episode for you that will truly stretch your imagination and might just spark an existential debate at your next dinner party. In this week’s show, Brian Keating sits down with evolutionary biologist and Dark Horse podcast co-host Bret Weinstein to dive DEEP into the fate of humanity, the miracle (and curse) of aging, and whether AI spells our doom or forces our next evolutionary leap.
What’s Inside? 5 Keys You’ll Discover:
Why Aging Isn’t a Disease (and We Can’t Cure It!)
You’ll learn why, according to Bret Weinstein, our bodies are actually designed by evolution to fail—and how the very genes that keep us strong in youth betray us later in life.
The Real Reason Evolution Doesn’t Care About Your Golden Years
Evolution has one priority: getting your genes into the future. Everything after is just, well, extra baggage. Brian Keating and Bret Weinstein pull back the curtain on this brutal biological truth.
How AI Could Propel Us into “Hypernovelty”
What happens when technological change races ahead of our biology? Bret Weinstein explains the concept of “hypernovelty” and why AI could send humanity into an unprecedented evolutionary mismatch.
Why Our Scientific Method Might Need an Upgrade
From “Einstein-proof” observations to biology’s noisy, complex reality, discover how studying tropical rainforests (or your own body’s cells) requires a whole new approach to scientific thinking.
Are Ancient Dangers More Immediate Than Modern Threats?
You’ll question whether we’re worrying about the right existential risks. What’s more likely to get us—runaway climate change, a rampaging AI, or a surprise solar superstorm?
Fun Fact from the Episode:
Did you know that if you break a bone in childhood, it might be a good thing? According to Bret Weinstein, a safe, sanitized life might leave adults unprepared for real-world dangers—sometimes a few bumps and bruises are the best teachers.
Ready to Jump Into the Future (and Maybe Save It)?
If you’re curious why the human body ages, how we might adapt to a rapidly changing world, or whether AI is our ultimate undoing or our savior, this episode is your field guide to surviving (and thriving) into the impossible.
👉 New episode out now! Listen wherever you get your podcasts, or head straight to our website for the full conversation.
If you enjoy these mind-bending journeys, please leave a review, share the episode with friends, and let us know what “impossible” question you’d like answered next.
Stay curious,
The INTO THE IMPOSSIBLE Team
🎓 Lessons Learned
Here are 10 lessons covered in this episode of The INTO THE IMPOSSIBLE Podcast, each with a short title and succinct description drawn directly from the transcript:
1. The Genetic Price of Aging
Aging isn’t a disease to cure; it’s an evolutionary trade-off for complexity, encoded directly into our genes.
2. Evolution Favors Reproduction, Not Longevity
Evolution prioritizes passing on genes over individual survival, making post-reproductive years evolutionarily insignificant.
3. Science Adapts to Complexity
Scientific methods remain consistent, but inference and interpretation must adjust when dealing with complex, noisy biological systems.
4. Universal Principles in Evolution
Highly intelligent beings or AI could deduce principles like aging and mortality directly from DNA or biological structure.
5. Lifespan is Shaped by Ecology
Differences in lifespan arise from environmental dangers, predation, and specific evolutionary trade-offs, not just genetics alone.
6. Resurrection Isn’t True Revival
Bringing back extinct species, like mammoths, results in hybrids—not authentic originals—raising questions of authenticity and hype.
7. The Precautionary Principle in Biology
Novel chemicals and rapid environmental change pose risks because our biology isn’t adapted to them, urging caution.
8. Hypernovelty Challenges Human Adaptation
Our technological evolution is outpacing human biological and cultural adaptation, resulting in widespread mismatch and dysfunction.
9. AI as Evolutionary Accelerator
AI development mirrors evolutionary leaps, but moves at unprecedented speeds, creating both massive opportunity and potential risk.
10. Educational Evolution Must Begin Early
Education systems need foundational change: children should learn through direct experience and risk to foster true understanding.
10 Surprising and Useful Frameworks and Takeaways
Here are the ten most surprising and useful frameworks and takeaways from this episode of The INTO THE IMPOSSIBLE Podcast with Bret Weinstein and Brian Keating. These points cut across evolutionary biology, technology, education, and existential risk, offering both fresh perspectives and actionable concepts:
Aging as an Evolutionary Tradeoff, Not a Disease
Aging isn’t something “broken” that science can simply fix; it’s built into our genetic code. Evolution favors traits that maximize reproductive success, even if they are harmful later in life. Genes that make you strong when young might actively harm you when older—because after you reproduce, evolution doesn’t “care.” (Brian Keating, Bret Weinstein)
Hypernovelty: The Dangers of Rapid Environmental Change
The environment (social, technological, and physical) is changing faster than human biology and culture can adapt. This “hypernovelty” creates mismatches, making us sick psychologically, socially, and physically. (Bret Weinstein)
Biological Inference ≠ Physical Science Inference
Biology is fundamentally different from physics and chemistry in terms of complexity and noise. One odd observation can falsify a hypothesis in physics, but in biology, you need many data points because of emergent complexity. Inference rules need to be relaxed, and statistical thinking is imperative. (Bret Weinstein)
The Cultural Layer as an Evolutionary "Hack"
Humans are uniquely wired to offload much of our adaptation to the cultural layer instead of the genetic one. This gives us a huge evolutionary advantage for adjusting to new environments, but even this adaptation is now being outpaced by technological change. (Bret Weinstein)
Risk Tolerance as a Prerequisite for Intelligence
Children (and adults) only learn to manage real-world dangers by actually encountering moderate risks (like climbing or exploring), not through classroom theory or helicopter parenting. Protecting kids from all risk creates fragile minds. (Bret Weinstein)
Human Civilization Is Built on Fragility and Perverse Incentives
Modern society is dangerously fragile because it prioritizes short-term, anthropogenic risks (like climate change) that align with academic incentives, while ignoring catastrophic but non-anthropogenic threats (like solar storms). We’re incentivizing the wrong types of expertise and signaling. (Bret Weinstein)
Artificial Intelligence as a Layered Extension of Evolution
AI is not just better programming: it represents a speciation event, an entirely new evolutionary layer—just as cultural evolution was a new layer over genetic evolution. But this time, the speed is orders of magnitude higher, which makes preparation and control nearly impossible. (Bret Weinstein)
The Limits of the “Precautionary Principle”
Humans often disregard the precautionary principle when adopting new technologies or chemicals, only realizing decades later what damage was done (e.g., lead, mercury, strange molecules). The rate of technological change far outstrips our ability to notice, adapt, or regulate appropriately. (Bret Weinstein)
Evolutionary Jeopardy: Reverse-Engineering Traits
Teaching students to deduce evolutionary processes by analyzing current organism shapes, functions, and behaviors—“evolutionary jeopardy”—is a powerful educational method that encourages inference and deep thinking, not just rote learning. (Brian Keating, Bret Weinstein)
The Cassandra Principle: Sobering Up About Existential Risk
Humanity has a moral, existential obligation to “sober up” about global catastrophic risks. Many “Cassandra warnings”—true predictions that go ignored—are amplified when incentives punish honesty and reward hype. The real existential challenge is not extinction itself, but how long we can stave it off, and with what quality of life and insight. (Bret Weinstein)
Each of these takeaways isn’t just surprising—they can reshape how you think about everything from aging to AI, education to existential risk.
Clips
Here are 5 social media-ready clips pulled directly from the transcript, each with a title, timestamps, and a suggested caption. Each is at least 3 minutes long, making them ideal for platforms that support extended content (like YouTube, Facebook, or IGTV):
Clip 1
Title: Why Aging Is Inevitable: Evolution’s Price for Complexity
Timestamps: 00:00:00 – 00:04:52
Caption:
Why do we age, and is there any way to truly "cure" aging? In this eye-opening introduction, Brian Keating and Bret Weinstein break down the evolutionary reasons behind senescence, how genes multitask, and why evolution doesn’t care about your golden years—but does care about reproduction. This conversation sets the stage for a mind-bending deep dive into human evolution and what it means to live in a body built to expire.
Clip 2
Title: The Unique Challenges of Biological Science—and What That Means for AI
Timestamps: 00:05:09 – 00:10:53
Caption:
How would a hyper-intelligent alien—or an extremely powerful AI—analyze human DNA and predict our fate? Brian Keating and Bret Weinstein tackle the differences between chemical and biological inference, how evolutionary trade-offs like aging and senescence are ‘baked in,’ and what this reveals about the limits of our bodies and minds. If you’ve ever wondered whether AI can really comprehend biology, this is for you!
Clip 3
Title: Are We Ready for De-Extinction and Reviving Our Ancestors?
Timestamps: 00:16:04 – 00:21:12
Caption:
From mammoths to Neanderthals—should we bring back extinct species, even ancient relatives? Brian Keating and Bret Weinstein debate the moral, social, and evolutionary risks of de-extinction, and whether creating populations of ancient hominids or long-lost animals could pose a greater risk than super-intelligent AI. This clip captures science fiction turning into reality.
Clip 4
Title: Hypernovelty: Why Technology is Outpacing Human Evolution
Timestamps: 00:31:18 – 00:36:04
Caption:
Human cultural evolution is turbocharged, but can we keep up with our own inventions? Bret Weinstein explains “hypernovelty”—the phenomenon where rapid technological change outstrips our biology—and why so many modern problems stem from our inability to adapt quickly enough. If you ever feel overwhelmed by how fast the world is changing, this is a must-watch perspective.
Clip 5
Title: AI as a New Evolutionary Force: Opportunity or Existential Risk?
Timestamps: 00:44:06 – 00:51:00
Caption:
Is artificial intelligence truly an evolutionary event? Brian Keating and Bret Weinstein discuss whether AI represents a new form of speciation—and if our hardware-bound minds can even compete with silicon evolution. From Darwin’s principles to LLM technology, they weigh the opportunity against existential dangers in this electrifying segment.
Made with Castmagic