Today, we're exploring whether the Earth itself is developing intelligence through us. Soon, you'll discover why the world's first computer wasn't built to crunch numbers, but to map our place in the cosmos. My guest today is Ben Bratton, a professor, a visionary philosopher of technology, and a man whose work has redefined intelligence at planetary scales. He's the perfect person to explore this mind-bending concept. Ben argues that AI isn't just another tool; it's the planet evolving a nervous system. But here's the twist: are we training artificial intelligence? Or, terrifyingly, is AI training us? First, we'll unpack computation as a natural force of the universe.
The INTO THE IMPOSSIBLE Podcast
How Humans Created Earth's AI Brain | Ben Bratton
00:00 "Philosophy, Science, and Emerging Technologies"
03:58 "Planetary Computation Philosophy"
10:03 AI vs. Human Identity Debate
13:23 AI, Infinity, and Iteration Dilemma
14:12 Approaching an AGI Threshold Moment
19:08 Terraforming's Inevitable and Planned Futures
20:33 AI's Limitations in Theory Discovery
26:22 "Can Machines Experience Emotions?"
29:08 AI Collaboration and Ethical…
Then, we'll ask, is Earth itself becoming conscious? Join us as we reimagine humanity's role in the greatest evolutionary leap since life began.
One of
the best parts about doing a podcast is that you get to invite your friends and people that you are so inspired by, and that's no exception for you. But I wanna start with this new project that dovetails so beautifully with what I do, which is, you know, possibly the existence of planetary scale computing and so forth. And, I thought we could start there with this concept that I'm trying to learn how to pronounce Antikythera. Antikythera. Antikythera. So what is Antikythera, and why should we care about this as you know, speaking to the most brilliant minds in the multiverse, a lot of them are astronomers.
I should say, first of all, that, you know, I call myself a philosopher of technology. Right? I'm coming from the humanities side
of the house, but Visual arts. Right? That's your
In the Department of Visual Arts, very easy to see. Right? But I'm a little bit unusual in this regard. That is, I take a very intense and sincere interest in what science is doing, and in the way emerging technologies not only allow us to do new things, but to know new things. The relationship between philosophy and science has been variously contentious, but at times it has also been a very tight genealogical relationship. Right? And so you may have people like, I don't know, Lawrence Krauss, who sort of see them as strongly opposed to one another, whereas others, like myself, may recognize that all the sciences that we recognize today, one way or another, began as philosophies. Right. So, like you were saying, when philosophy learns to ask the right questions, a new science is born. But then where do new philosophies come from? One might say: when new technologies force us to realize that the languages and concepts that we've been using to understand the world are inadequate or anachronistic.
It's moments like these when philosophy, I think, becomes most useful in a creative and generative sense. Mhmm. Right? So it's less about, how do I take these concepts and apply them to new things? What would Hegel think about driverless cars? What would Kant say about this telescope? But rather, how do we generate the concepts that allow us to bring something new into view? So, Antikythera. Antikythera is named after what is, probably apocryphally, the first known computer. Mhmm. It was from around 200 BC. It was discovered off the island of Antikythera in Greece at the beginning of the twentieth century, and it took a really long time to figure out what it was, because it was this intricate, complex, geared mechanism that, in the course of technological evolution, and I should say I actually believe that technology evolves in a literal sense, was completely anomalous.
There wasn't a, you know, a predecessor technology. Yes. No predecessor species before or after, the way there was an iPhone version one, you know. Exactly. Exactly. So what was eventually figured out is that it was, in fact, a computational device. But in addition to being a device that allowed you to do basic mathematics, it was also an astronomical device. It allowed you to map your position in relationship to the stars and planets, not only in the present, but also in the past and future.
So it was a kind of, you know, temporal mapping device as well. Interesting. And so for us, thinking about what should be the foundation for a twenty-first-century philosophy of computation, the idea that computation, or artificial computation, and I'll also make the argument that we discovered computation as much as we invented it, really begins with a device that orients intelligence in relationship to its planetary condition, was a starting point for not only what computation is, but ultimately what it's for. It was sort of the inspiration for this. And I can talk a bit about the research program that we're developing there. But to your question about what we mean by planetary computation: more recently, I mean, zooming out from 200 BC to the present time, the way in which we look at it is that we need to think about computation not only in terms of algorithms and mathematics and forms of calculation and inference structures and so forth, and not only in terms of what we recognize as computers, the relatively energy-inefficient appliances that we have managed to construct, but to understand computation as a natural force in the universe, an informational system by which the universe computes, and which we essentially have learned how to discover and tap into.
But a computer is no more computation than a light bulb is electricity. As to planetary computation: over the last fifty years, arguably, you know, really beginning in the nineteen sixties, artificial computational infrastructure has wrapped the globe, from subterranean data centers to suboceanic fiber-optic cables to the, you know, miraculous glowing glass rectangles that we all carry around in our pockets and still call phones for some reason, to the satellites overhead. You could think of it this way. You've seen, you know, the famous blue marble image, right? I was just speaking with Stewart Brand a few weeks ago. Imagine the blue marble image, not as a still image, but rather as a kind of super fast-forward movie that begins by showing the whole several billion years of the evolution of Earth. What you'd see is it sort of goes from red to green to blue: the ocean, you know, oxygenated life, the appearance of an atmosphere, Pangaea, all the rest. And in the very last few seconds, you would see something kind of remarkable: this glowing blue orb would sprout
It would sprout this sort of sensory exoskeleton, spraying into being. The way we see this is that's not only something that humans did very recently. Ultimately, we need to see it as something that the planet has done. Mhmm. That we have one planet, at least, that we know of, that has evolved a mineral-based sensory organ that exists not only on its surface, but in its low Earth orbit, and that it uses to know things really fundamental about itself. So, for example, I would argue that when we ask the question, what contribution can planetary computation make to the mitigation of climate change, which is a perfectly reasonable question, we have to understand that the very idea of climate change as a scientific concept is an epistemological accomplishment of planetary computation in the first place. Of the very
entities that can comprehend and contribute to it.
That's right. It's symbiotic.
That's right. And so planetary computation, as we see it, is both something that allows intelligent life, both human and, arguably, non-human in terms of AI, to do things that it wouldn't have been able to do, and something that allows us to know things that we wouldn't have known before. Mhmm. So for us, those are the stakes. Those are the stakes of this. And just as astronomy is, arguably, the way in which the planet has folded itself into the shape of astronomers in order to know things about itself Mhmm. it's all part of a similar kind of process: the emergence of what we may think of as a kind of planetary sapience.
This, you know, stimulates seven podcasts' worth of ideas in my mind. But, you know, one of them that I love to tweet out every couple of months, kind of aping Einstein: I don't know what hardware, you know, ChatGPT-5 will run on, but ChatGPT-6 will run on an abacus, you know. Or, basically, an Antikythera, you know, whatever. Oh, I can't pronounce it. Yeah. But the point being, you know, when we think about what AI is capable of, it seems to me there's almost this AI of the gaps. Like, whatever we think that AI or computation or planetary-scale things can do, it can do everything that we don't understand. And that's kind of paradoxical, because we don't actually fully understand the limits of this technology, of the, you know, computational limits, except via physics.
And the first guest on the podcast was Freeman Dyson. Mhmm. And, you know, you're sitting in the seat basically that he was in. And the point that I'm, you know, kinda getting at is: are there ultimate limits, you know, to things like the singularity, to things like the simulation hypothesis, to global, not just planetary-scale, but universal-scale or at least galaxy-scale computation? Are there limitations that we can actually set right now on the probability, or, let me say, on the idea that this is an evolving system and that we're just one genus, you know, on the way to who knows where we're gonna evolve to? But is it not true that we may be, you know, both the creator of this tool and, effectively, its last users?
To the first one, let me talk about the AI of the gaps. And I think there is a way in which trying to determine the status of AI, is it intelligent, is it not intelligent, does mirror the kind of god-of-the-gaps argument. Think back to Turing's famous 1950 experiment. What Turing sets up is a highly functional definition of intelligence. Right? But he also sets up one in which he asks: if you cannot tell whether player A is a person or a computer, or actually he says whether it's a woman, which has an interesting valence in terms of his own biography, then we have to grant that there's something going on. Right? And so what he's positing is a sufficient condition of intelligence. Unfortunately, I think a lot of the ways in which the Turing Test has been incorporated at a cultural level treat it more as a necessary condition.
That is, unless the AI can perform thinking the way that we think that we think, then it's disqualified. It has to be a mirror of our way. And so what that means is that there is a way in which the question is framed in a kind of either-or opposition: that it's either human or AI, that the relationship between humans and AI is one of a boundary condition. That is, what humans are is what AIs are not; what AIs are is what humans are not. Mhmm. And so what you're beginning to see is, first, a kind of receding border: more and more of the things that we may have said AI will never be able to do, it does. Right? And so, you know, people keep moving the goalposts as fast as they can possibly run to keep this going. But it also means that what it means to be human is defined in relationship to what we think AI is.
Right. AI might be training us as much as we're training it. Right?
I don't think we've ever been in charge of our technologies. I think, you know, we've co-evolved with them from the very beginning. Right. Now, to the other question, are there limits, you know, limits of this as a larger computational thing? I don't know. It's a sort of open question. It's like, is there a limit to how large and complex an informational structure can possibly become? Is there a limit to how much of a kind of, you know, intensive energy event you can, you know, artificially produce? And then you get back to, you know, Dyson swarm kinds of questions. Right? And you begin to scale things out to a solar or even galactic scale, and you're asking, you know, what is Kardashev 2? How far are we from Kardashev 1? Mhmm. Kardashev 2, and so on.
I wanna connect this with the gap-and-opposition problem. I don't see it as opposition. I don't see humans and technologies as opposed. I see the evolution of both as deeply intertwined. And so when I think about this, I don't know if you read James Lovelock's last book, Novacene, where Lovelock, of course, you know, who posited the original Gaia hypothesis, by which the emergence of simple life terraformed the planet to make the conditions for complex life, argues that this terraforming is an ongoing process. He concludes the book, looking at the emerging and accelerating evolution of machine intelligence, by saying this is a similar kind of phenomenon: it's going to terraform the Earth in its own image. But, you know, the book doesn't end on a pessimistic note.
And it's not one of, well, we've had a good run, you know, we might as well drink the hemlock and be done with it. I think humans will survive, will continue to survive, much longer, but they won't necessarily be what we imagine them to be today. Mhmm. You know, I just think that all biological systems, you know, I know you've had Sara Walker on the show many times. Yeah. All forms of, you know, complex biological systems become components in yet more complex systems.
That's right.
Humans will persist. They've always been part of systems larger than themselves, and they will continue to persist in systems larger than themselves, some of which are of
their own making. So one thing I've brought up in the past is that there's sort of this fallacy of iteration: that, you know, AGI will come upon us, and then it will do things that will be detrimental to humans. I'm just painting with a very Absolutely. very broad brush. And I think this fear is sort of misplaced. First of all, to be superintelligent, it has to be at least as smart as us; it has to be smarter than us. Right? But then it will know at least that fact, that we're concerned about AI replacing us. Right? So it will know that it's not the final evolution of its own technological advancement.
Right? I mean, presumably, ChatGPT, you know, infinity could exist, the same way iPhone 18 is sure enough to come over the horizon as, you know, the Earth makes another lap around the sun, we hope. But the point being: if we're smart enough to conceive of this as a challenge, surely AI would say, well, look, the next iteration can either be a threat to me as an AI, or it could be something so beneficial that I have to let it be created, in which case it's not a threat to humans to begin with. Right? Because otherwise, the AI wouldn't allow it to be created. How do you react to this kind of solipsistic definition, where, you know, the infinite regress of iteration eventually hits a final iteration? Right? I mean, I would say a computer chip wasn't created with a computer chip, right, by definition.
An
operating system wasn't created with an operating system. They are now. No. No. But the original one, the first one. So I'm saying the first AGI will not be created with an AGI. I mean, there'll be some boundary if we get there. I'm not committed to
sound a bit right. This is, you know, I. J. Good's thesis back in the sixties: if we make true AGI, it'll be the last technology we ever make. Again, if you zoom out far enough, I see this really as a kind of process that is continuous, but one that is marked by forms of, you know, threshold events in the Maynard Smith sort of sense, punctuated equilibrium. There are step functions in which complex forms of biological-technological convergence, you know, have these moments where they radically increase in their information-processing capacity, which was Maynard Smith's key argument. I think we're in the middle of one. I don't think we've caught up to it paradigmatically. I think we're in a kind of pre-paradigmatic moment, where the terms and concepts we use to even describe this lag behind. The first step is to recognize technological evolution as something we're part of. I think we have to grow into that very, very quickly.
Mhmm. I don't think there's a final event. Right? I'm not an eschatologist. Right. You know, I think the future is much more contingent than that. There are certain path dependencies that are built into forms of convergent evolution, but for the most part, you know, it can go a lot of different ways. Mhmm. I think in the short term, the emergence of AGI, and you know, I think we're seeing a kind of slow takeoff of that over the last few years, is going to be highly disruptive, really disruptive, in terms of what an economy is, what a society is, how we relate to AI agents that are trained on us, and our relationships with simulations of ourselves as a fundamental tool.
This is going to be beyond future shock. It's going to be a Copernican existential trauma, and it's gonna be a really bumpy ride for a while. Yeah.
Especially for certain populations. Right?
Yeah. For, yes. But as you zoom out, I really kind of think of this as part of a larger process that can be mapped, that we can see ourselves inside of Mhmm. to sort of understand that, as a sapient species, we have a certain degree of agency. I mean, one lesson of the discovery of the Anthropocene is that we actually can terraform the planet in a relatively short period of time. Yeah. And we need to do it again, in a way, to keep things going. We discovered forms of agency we didn't know that we had, but it's gonna take a long time for us to mature into the subjectivities that are needed to actually have some degree of control over those things.
So I'm optimistic in the long run and buckling up in the short run. We'll need the answer to that question.
As a professor and a podcaster and a scientist exploring the boundaries of knowledge in the lab, I crave independent sources of knowledge and wisdom wherever I can find them. One of my go to sources is The Economist. I've been reading it literally for decades. The Economist consistently delivers independent journalism for independent thinkers like you and me. But the best part for me is that their science coverage is nothing short of extraordinary. Just this past week, I was captivated by their deep analysis of the recent claims of quantum computing breakthroughs that could revolutionize how we process information. Exactly the kind of forward thinking coverage that I discuss on the podcast and that I crave to expand my own intellectual horizons. The Economist's rigorous approach cuts through the noise and BS in a way that you can truly trust.
And what sets The Economist apart is their Simply Science weekly newsletter. It's a subscriber only exclusive benefit. Each weekly edition delivers beautifully written explorations of nature, technology, the cosmos, the human body, and all this comes alongside stunning scientific photography
selected and curated by their editors. I don't just do science. I love
to explore complex ideas. The Economist's analysis is a perfect complement. It provides the insight and clarity needed to navigate challenging intellectual terrain with confidence. And to get everything The Economist offers, they're offering a special 20% discount just for viewers of the Into the Impossible podcast. To claim the special 20% discount, visit economist.com/keating. Your pursuit of knowledge shouldn't end when this podcast ends. The Economist ensures that it doesn't have to. I think you've written about terraforming, in the way and the context that you've described it, as sort of an imperative.
What did you mean by that? What does that mean?
Yes. A book I wrote. It was really an essay that became a book that got translated very quickly. There's a longer version of this book that's gonna be coming out with MIT Press next year, called The Terraforming. Mhmm. And the argument, in a nutshell, is that we need to think of the terraforming of Earth: not terraforming Mars or something to make those other planets habitable for Earth-like life, but the terraforming of Earth to ensure that Earth remains a viable host for Earth-like life. Right. And so you could think of it, really, as three terraformings that are relevant here. One is thinking about what Crutzen and others called the Anthropocene as a kind of planless, headless terraforming initiative through which we fundamentally transformed the planet.
Now, you know, now we call it a Pyrocene event; it's supposed to be an era of fire. But it was a terraforming-scale event, and you were born inside of it. We were all born inside of a terraforming event. And, like, welcome.
Here you are. In meteorites. Right?
The second terraforming is the one that's inevitably going to happen no matter what we do. We could all go full Unabomber tomorrow, and the ongoing effects on everything from oceans to land use to planetary geochemistry, the momentum, will outlast us. And the third one is the potentially more deliberate process, governed by open-ended, non-falsifiable questions: what kind of planet should we make? Mhmm. Where should the people go? What should the cycles be, of, as you were saying Income. Dystopia or utopia needs to be defined, but it also needs to be designed and composed. Mhmm. And, to your question of what the role of machine intelligence is within that, I think it's probably quite clear that there's a lot, but it also may be quite nonlinear. But, anyway, that's what I mean by the terraforming.
How much is AI, at least these LLMs, I mean, the L, you know, is for language, and it seems to me that is incredibly powerful. Humans are language machines. It's how we, you know, establish dominance and how we preserve technology, science, culture: through language. But it's not all we do. Right? You could say that you might be able to encode mathematics and physics in terms of a language, but it's different from the types of language that these LLMs are trained on, for example. Yes, they can be trained on my PhD thesis. But what I'm getting at is a certain type of science, say, just restricting it to the search for a so-called theory of everything.
Yeah.
How AI is or is not going to be capable of that. For example, I don't think it's a matter of prompting. I don't think it's a matter of the corpus of training data that is precluding AI from coming up, right now, with a theory of everything that Edward Witten and, you know, all of our friends that we know and love, here and beyond, have been unable to find. I don't think it relies on the same, you know, vector-space, you know, dot products and matrix manipulations that have been locked in, effectively, by the confluence of GPUs and, you know, matrix algebra and neural networks. In other words, there could be, I'm not saying there is, but what if there is, a fundamental difference between the types of language, if you will, that we use to communicate and the ones these LLMs are uniquely, you know, well qualified for. They already score, you know, an IQ of 120 in every subject matter from law to medicine and so forth. But so far, you know, there's the stubbornness of, say, finding a theory of everything or discovering the constituent of missing matter. Is that going to permanently evade at least LLMs? And because we're so heavily invested in them, we're locked into LLMs as far as the eye can see.
I mean, there's almost no, you know, there are alternatives, but they're not at the scale of $300,000,000,000, you know, valuations, etcetera. And I just don't think that the thing preventing us from getting a theory of everything is the fact that the latest, greatest generation of Grok or whatever hasn't been trained on, you know, The Fast and the Furious 12. In other words, is there something uniquely different about some of the types of things humans can do, which will make us preserve what we do uniquely in perpetuity?
To the last question, for sure. To the last question, for sure. Like, again, there's a limit to thinking about this as a replacement dynamic. Right? I think that in the long run, AI will teach us what thinking is as much as we will teach it how to think. The demonstration of another substrate of intelligence, you know, like the planet Solaris was another substrate of intelligence, that we can compare ourselves to, and understand where we fit within this larger spectrum, I think, is part of the understanding that'll come from this. But in terms of the more specific questions you raise there, like, you're right, I think there are good reasons why it turned out language was the path. Right.
You know, Hassabis had bet on games, because he was a chess prodigy, and so for him, games were going to be the path. And it turned out, actually, language is a repository of all different kinds of intelligences. It's something that evolves over time. It's something that speaks its users more than the users speak it. Mhmm. And that producing a kind of model that is able to think and to respond to the world generatively using language was a shortcut shouldn't really be that surprising in hindsight. First of all, the LLMs are evolving very, very quickly.
Right? I mean, the kind of reasoning models that we see today just didn't exist a short while ago. And so I think LLMs will evolve into other things that'll be unrecognizable compared to what they are today. But you're right, language isn't the only thing. I think, in a way, you know, whether models can be trained, I mean, anything can be tokenized. Right? Models are constructed on that which can be tokenized, which could be anything, in principle. Right? And that is made into something, a kind of language, in a way. And so I think there's movement there: you know, Fei-Fei Li's new startup is about spatial intelligence.
Yann LeCun at Meta is focusing on spatial intelligence. And
Yeah. I should say he's quite pessimistic about, you know He is. tokenization.
He is. I'm just focusing particularly on the spatial dynamic of the question Mhmm. which suggests the powerful implications of spatial intelligence models for robotics. Mhmm. And, you know, Hubert Dreyfus' old critique of AI was that it can't really work in the world, because it's not embodied; it doesn't have embodied phenomenological experience. Well, like, that's no longer the case. Mhmm.
With the physicalization of AI, where you've got spatial models and linguistic models that are together interacting with the world, and, ultimately, in the long run, through forms of active inference where their experience is retraining the model almost in real time, then you have a different kind of takeoff that may take this in a really different direction. Now, the second thing, about the limit. Yeah. While you were speaking, I was reminded of a, I should say Antikythera also has a journal that we edit with MIT Press Mhmm. and a book series. The first book in the series is a book called What Is Intelligence? by Blaise Agüera y Arcas, who I work with also at Google Research. Fantastic book. Another piece that we're publishing in the journal later this year is a piece by, who you may know, Risa Wechsler, at Stanford.
Stand for her name.
Yeah. She's on the Wall Assignment Observatory in Calamari.
That's right. And so Risa's doing a piece for us on the role of computational simulations in her work, on simulations as a kind of epistemic technology. And I don't wanna, like, you know, oversimplify the point, and I'm sure I will. But, I mean, Risa would say that one of the reasons that we, you know, can't figure out exactly what dark matter and dark energy are is because we don't have the compute. Think about all the available computation on Earth at this moment: if you networked everything together into one giant machine, let's call that one unit of planetary compute as of 2025. How many units of planetary compute would be necessary to produce a simulation at a resolution granular enough to make, you know, important conclusions about the nature of the planet?
People say the same thing about the weather, you know, simulating
climate, the weather. Yeah. So Risa would say, like, no, there actually is, like, a brute-force scale to scientific ontology. Mhmm. The more compute, the more you know. Right. That would be one answer to your question.
Like, are there things that we can't know even if we had a lot of compute? Probably. Yep. Are there things that we don't know that, if we had orders of magnitude more compute, we could know? For sure.
Yeah. Mhmm. One thing that is, you know, sort of fascinating to me, and you did bring it up, and I brought it up many times, is, you know, the famous statement by Einstein that an observer in free fall would experience no gravitational force. Right?
Such a lovely thought.
It is. And he called it his happiest thought. He called it the thought that titillated him more than any other thought he ever had. But that brings up, you know, the exact topic that you hinted at before, which is embodiment. To what extent can a computer have happy thoughts? You know? And to what extent can it experience things viscerally, or not? Even a robot with, you know, a gyroscopic sensor and attitude indicators and an inertial reference, you know, a laser gyro, even such a device, can it feel what, you know, my toddler can feel going on a roller coaster, or driving over a bump and that feeling in the pit of your stomach? And was that not, you know, the sine qua non that allowed Einstein to have the happiest thought, as he called it? So these two things. Can they have happy thoughts? If so, can they have painful thoughts? Do we have ethical requirements imposed on us, yeah, as their programmers, and then when they succeed us? And then, second of all, can they do it truly without embodiment? Are there things that are, you know, to use, I forget the gentleman that you mentioned, but this is something I thought about a lot. Do we need this embodiment?
Hubert Dreyfus, was it? A philosopher —
He was at Berkeley in the seventies. Yeah. He wrote a famous book called What Computers Can't Do, which was a really important critique of sort of Simon-era, good old-fashioned AI — Interesting. — and kind of laid the philosophical foundations for what Brooks picked up later as a kind of bottom-up model. So your question about — can a sufficiently advanced form of machine intelligence have — Yeah. Enough tokens. — valence?
Mhmm. Yeah. Can it feel bad or good for something to happen to it? — Qualia, what we experience. Right? — I wouldn't say qualia. You said valence. Okay. I'm more on the Dennett side than the — Mhmm. — the Chalmers side of that one.
Dennett — that was, like, his final interview. Yeah. That was almost a year ago.
His latest autobiography — Oh, yeah. — is, like, fantastic. But can it have valence? I think it's an interesting question, and I think it would be a mistake to presume too quickly that it can't. The lesson to learn there, I think, is how badly Western philosophy underestimated the complexity of animal intelligence. Mhmm. You know, as late as the twentieth century, you had people saying animals can't feel pain.
Pain. Right? — Bentham. — And, you know — right? Mhmm. — which even today we would think is insane. Right? And so — just in the twentieth century, you had this very interesting convergence of human neuroscience and AI, where they're sort of bouncing back between each other — even neuroscience and philosophy, the Churchlands here, for example — where this feedback loop — Right. — between AI models and the way we thought about the mind was very important. I think now you've got one between studies in animal cognition and studies in machine cognition, a kind of comparative non-human mind studies that's very, very interesting. So you've got people like Peter Godfrey-Smith, who are looking at, you know, cephalopod intelligence. Mhmm.
I think you have to see this as a kind of parallel discourse with: how would we know? How would we test? Like, this is a real question.
Right.
There are a couple of people that I have collaborated with — Winnie Street, who was part of the Antikythera program, and her colleagues in this group, Paradigms of Intelligence, where I'm a visiting faculty researcher at Google. Another is Jeff Keeling, a philosopher there. All their work is on how we would know, exactly, the answer to this question. Right? And if we don't know, what are the rules — what we may have learned from animal welfare, sort of standards — by which we might apply this as well? It's an open question, but I think it's one that will become increasingly relevant. But I think it's a different question to the real gist of this. First of all, I think AI is embodied. It's just embodied not in a tetrapodal body plan.
It doesn't have a stomach, so it doesn't have a pit in its stomach. But I think there's a limit to anthropomorphization, and the extent to which we are frustrated in our attempts to fully anthropomorphize it is not the same thing as suggesting that, across this different substrate, it may not have other kinds of things. Like, there may be forms of pain and pleasure that it can perceive that we cannot perceive, and vice versa, and that's fine. That sensation may not be the critical quantity.
It may just be one part of the description. Right. Yeah. I think it's important. — And many of our scientist colleagues don't have any sort of, you know, emotional embodiment either. That's what I'm saying. Yeah. Yeah.
I mean, some of the theories of mind — before we go on to, you know, ask whether artificial intelligences feel pain or whatever, we have to look at our own colleagues and understand, you know, how do they feel? I mean, what is painful about failing in an experiment, or not getting some data, or predicting something that doesn't pan out, you
know — even, at a more simplistic level, Damasio's work on the importance of emotion in cognition, and constructions of that sort. I think the distinction between emotional and rational intelligence is probably overblown, and I don't see any reason why forms of valence would not be able to operate across different kinds of substrate, even beyond simple knee-jerk reward systems. So you
mentioned, just in some correspondence we had before this — we were talking about planetary data centers and computation, some work you're doing with Google. Can you explain a little more for the audience: what does that involve, and what interest does Google have in other
planets, or big issues? I mean, there are some things being worked on that I probably can't talk too much about. But I would say: think about the idea I suggested before. If you think about all of the aggregate computation that exists on the planet now, versus how much existed ten years ago, versus fifty years ago — clearly we're in a kind of exponential spike in the amount of capacity for artificial information processing, which is something that, it turns out, we're very good at. As I say, the fire apes have figured out how to make the rocks think. — Right. Smart rocks. — Right? We take bits of rocks, of minerals, and we electrocute them just so, and then the rocks can do things that only primates could do, which is, you know, big news. Yeah.
But if you begin to think about where this goes in the long run — what is 10x planetary compute, a hundred x, a thousand x, all of which are things we may see in our lifetime — you probably will see orders-of-magnitude increases in efficiency, from neuromorphic chips to other kinds of algorithms, that make the amount of energy that goes into a certain unit of compute drop dramatically. But you also see all the big platforms buying up every available electricity contract they can find — Yeah. — starting at Three Mile Island. And so it turns out AI may be the economic solution to nuclear energy. Mhmm. So you think: okay, AI is what's driving the economy of nuclear energy; GPUs are what's driving the economy of AI; and Fortnite is what's driving GPUs.
And so, by transitivity, Fortnite is saving — It's saving nuclear energy.
Yes. But if you really get to it — like, okay, we're not quite Kardashev one, but we're getting closer — it becomes a relevant question: how would a planet make functional use of the available energy that it has?
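The "not quite Kardashev one" aside can be made concrete with Carl Sagan's interpolated version of the Kardashev scale — an illustration added here, not a formula either speaker cites — where a civilization's rating K grows with the logarithm of its power use P in watts:

```python
import math

def kardashev(power_watts: float) -> float:
    # Sagan's continuous interpolation of the Kardashev scale:
    # K = (log10(P) - 6) / 10, with P in watts.
    # K = 1 corresponds to ~1e16 W, a planetary ("Type I") civilization.
    return (math.log10(power_watts) - 6.0) / 10.0

# Humanity's total primary power use is a rough public estimate of ~2e13 W,
# not a figure from the conversation.
print(f"{kardashev(2e13):.2f}")  # -> 0.73, i.e. "not quite Kardashev one"
print(f"{kardashev(1e16):.2f}")  # -> 1.00, full Type I
```

On this sketch, the jump from today's compute-driven energy demand to a true Type I civilization is still roughly a factor of a thousand in power use, which is why "10x, 100x, 1000x planetary compute" maps neatly onto the question of where the energy comes from.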
And not only energy — minerals and so forth. So it brings up a chance for me to give you your gift for appearing in person on the podcast. This is a genuine meteorite — a fragment of the early solar system, about 4,300,000,000 years old, older than the Earth. I give them out on my website, briankeating.com/yt or, uh, /list. But you're guaranteed to win one if you have a dot-edu email address, like you do — and you're guaranteed one because you are such a gracious intellect and friend.
If you're a platform that operates at the scale of a state, which is what Google does — Yeah. — and you're thinking fifty years out, a hundred years out, about how you would in fact power a thousand-x or ten-thousand-x version of this sort of thing, you begin to have crazy ideas about where you get the energy from. And I'll leave it at that. Okay.
Yeah. Fair enough. And those are some of the most exciting things from a physicist's perspective. Right? I mean, we'd love to tie into the late, great Freeman Dyson's ideas. But it does bring up not only energy but also minerals and so forth. The average enrichment of the metals in this is nearly a hundred percent — it's basically pure iron, nickel, cobalt — very rare in terms of the overall crustal composition of the Earth, but by no means unique. And so some of the questions I have, at least, are about the planetary scale. Maybe we'll have to do a part two eventually about this. Yep.
But on the trajectory of Earth's evolution — the planetary sapience, so to speak, that you talk about: is that meant literally, or is it more metaphoric? I'm asking for the following reason. We have a very Earth-like planet, a habitable-zone planet called Mars, and it's very close to us. It's completely barren, sterile. It had water on it — it had water on it when life existed on the Earth. And there's a phenomenon called panspermia — Fred Hoyle popularized it — where planets exchange material. And, as you know —
Biologically or minerally, planets are not closed systems.
That's right. They're exchanging continuously. And yet, pretty much everywhere we look — we've dug, we've stuck probes in, we've violated Mars in basically every way we could, for now — we see no life. Now, that's not proof that life never existed, and it's not proof that life couldn't exist in the future.
That we recognize as life. — As we recognize as life. And Sara Walker,
you know, will always correct me — Right.
It's not "life as we know it." Exactly. A wonderful new book. So my question is: what would you look for? My colleagues upstairs use the Keck telescopes; the Thirty Meter Telescope is coming in the next few years, the UC project. What would we look for to see planetary compute? My friend Adam Frank has been on many times — I remember well. Yeah. He talks about global warming as a technosignature.
Right? Agriculture, at least. Alright? What would we look for? What sort of technosignature? Give my young viewers out there, listeners, some inspiration. What would you look for to validate or falsify this notion of planetary sapience? How could we do it?
One of the pieces that we're also publishing at Antikythera is a new piece by Sara Walker and Lee Cronin, her collaborator. It's sort of the technological version of the question of life as we don't know it. Like, how would we find forms of technology, on other worlds, that we don't recognize as being Earth-like — Or natural. Yeah. — a natural thing or
an artifact. Mhmm.
So let me make a sort of wild conjecture here, at the end, as well. One of the things about the pre-paradigmatic moment that we find ourselves in is that functional definitions of life, like the one that Sara and Lee developed — which is a kind of increasing complexity, componentization, one thing nested inside of another — and our contemporary definitions of technology, which are also kind of evolutionary — Brian Arthur and that sort of complexity built out of prior complexity, all technologies built from previous technologies, and so forth — and our definitions of intelligence, of a system that's able to, both autopoietically and allopoietically, transform the world in a certain goal-directed way, are starting to converge. One way to think about that is that what we define as life in the Aristotelian sense, what we define as technology, and what we define as intelligence are looking more like each other than they did before. I'm not saying they're all exactly the same thing, but they may in some respects be different ways of describing a set of very similar phenomena. And that may be what we arrive at.
So the difference between a technosignature and a — I mean, a biosignature is one thing; a signature of life — Yeah. — is something else. Right? I just think, in many ways — I don't think life needs to be biological. And I think the forms of technological evolution around us, which are evolutionary, that do follow these paths, are exemplary of that. I don't mean hypothetically that life doesn't need to be biological; I think we're surrounded by non-biological life and always have been. So the difference between finding a signature of life and finding a technosignature may be academic.
Maybe academic, but — what's an example of a non-biological life? — AI. — Okay. So, non-biological. — Literally. — Okay. Yeah. — No. Literally.
So you would consider that, like — It's not to be studied in
the bio department. — Life, but not alive. Right? And — as Sara would say — there was a piece published in Noema called "AI Is Life," where she makes a similar kind of argument as well. So, what would we look for? Let me think — let me try to answer this as well. One answer to the question is that it may be a kind of sampling problem, where it's not looking for a thing or a category of matter; you would be looking for a process of transformation over a longer period of time, where there's a certain kind of state change — Teleological. — from one thing to another. That may be the thing you want to find.
And something that demonstrates, sort of, moments of increasing complexity that were not there before. This intensification of localized complexity may be the sign that there's something going on there that may be technological, may be life, may be something else we don't quite recognize. And I think a lot of the discussions in astrobiology, both official and unofficial, let's say, seem to be concluding that what we think of as life is, again, not only what we recognize life to be, nor is it a particular kind of matter. Mhmm. Potentially something, like, quite alien. — It's still the same table of elements. Mhmm. Right? — Right.
There could be a dark periodic table. Right? A dark periodic table. But what
the substrate might be, we don't really know. But this is, to me, what's really wonderful about things like the Fermi paradox as a philosophical problem. Because they force you to think about what the possible boundaries are, what the possible conditions are of what a civilization might be, what communication might be, that sort of systematization. Mhmm. To me, this is where science is doing good philosophy, and hopefully philosophy can contribute to the science in interesting ways too.
Yeah. I mean, as Galileo, my hero, said — he was a natural philosopher. With his telescope — which is technology; he didn't invent it, but he perfected it for his time — he said that he had, quote, let's see if I can remember this, settled "the disputations that have for generations vexed philosophers" — in other words, by observing that the Milky Way was comprised of individual stars rather than some diffuse, flowing plasma or whatever. And certainly with the Moon's surface, he was able to reveal that it wasn't this idealized thing. He was then instantiating a new hypothesis. Right? That's what philosophy does.
Now, he was a philosopher himself — except they called physicists natural philosophers then. But his ultimate goal, as he said, was to measure what is measurable and make measurable what is not yet so. So, yeah — looking to the future, how could we develop a Drake equation for these technological sapiens? Where does that fit into your research profile? I mean, we didn't even talk about the visual arts
and speculative design and — Increasingly alienated relationships with art. — No.
That's fine. But that was originally, you know, why we brought you here — I think there's your work on how the visual arts can influence philosophy, and the symbiosis between them in the past. Yeah. And we started a new major here called speculative design. That's right.
Trying to bring together the kind of creative, open-ended, exploratory approach of the art studio, but starting with questions that are informed by what's going on in the labs around here. — Well, I'd
be remiss if I didn't at least ask one question about The Stack. You've got it there — you brought it; you're so kind to bring it in. Just tell me about The Stack and the upcoming reprint, or — That's the second edition.
I'll just put it up
here, add it to the canon. There you go, Marshall.
There's a tenth-anniversary edition of it coming out later. So this was a book that I wrote here at UCSD — it's the book that I first came here to write. It was published in 2016 originally. At that point I was making what was a kind of somewhat science-fictional argument: that what we recognize as planetary computation was transforming the entire structure of political geography in its image — that the emergence of planetary computation was transforming the relationship between technology and the state, between the political and governance — and that the forms of geopolitical dynamics we would see over the coming years would be fought over the competition to produce large stack systems and to model societies through those systems. And again, back then this was a bit kind of trippy, and now it's sort of taken for granted — I think it's just kind of obvious, from chip wars to everything else. The other argument, though, is that we can see this thing called planetary computation not as a single undifferentiated megamachine, in the way Lewis Mumford might have thought of it, but rather as something that is literally like a stack.
It is comprised of modular, functionally defined layers — Earth, Cloud, City, Address, Interface, User — each with its own dynamics, each with its own processes. Each is replaceable by different kinds of technological systems, and it becomes a kind of schema, from energy sourcing to service provision to waste excretion to the interfaces we use to make sense of it, such that it becomes a kind of discontiguous, accidental megastructure. That's what we have produced. And so the book is the story of where that came from, how you build a society with it — Mhmm. — what its integral accidents are, and, more importantly, how we think about the stack to come. The great thing about stack systems is they're designed to be replaced. — It's Theseus's infrastructure. — That's right.
So that's what the book is all about.
I was thinking yesterday — it was good that, you know, they didn't burn the ships when it came to Theseus. But also, I can't think it was an accident that the vice president recently made mention of the stack — or, you know, maybe not specifically of the book, but I
think he's been influenced by
the AI stack. — The AI stack. Yeah. Which is a component of what the book is about.
Yeah. I'm not going to draw any conclusions, but it's interesting to see how sometimes — Percolated
out to the design guys.
— how ideas have circulated. That's right. And, you know, to me — I'll just make this last point. The relationship between philosophy and technology has traditionally, at least in the twentieth century, been quite hamstrung by, let's say, the Heideggerian school of thought, which always saw technology as the enemy of deep thinking — Yeah. — as the thing that withdrew us, or alienated us, from Being. I think the Galileo example that you gave shows, like, no — it's actually through the alienation of perception, through getting outside ourselves, the allocentricity —
Mhmm.
Through technology, we know who we are, where we are, when we are — any of these kinds of things. And so this is the epistemic role of technology. Now our fundamental technologies are computational, and we need to protect that — we need to protect it at all costs. One thing I should have said: we at Antikythera are also curating an exhibition at the Venice Architecture Biennale this May. Two artifacts I'll mention there. One is an original copy of the Harmonia Macrocosmica from 1662, the seventeenth-century astronomical atlas.
And another is an image from 1966, the Lunar Orbiter image, which was the first picture of Earth taken from lunar orbit. Right? It's kind of forgotten these days — it was on the front page of every newspaper in 1966, but the Blue Marble came along and took the glory. This was also the image that was shown to the German philosopher Heidegger, who was, you know, a founding figure of twentieth-century philosophy of technology — for mostly worse, in my mind. And he looked at this image and said, in his typically hyperbolic fashion, we don't need nuclear weapons, because this image has already destroyed the world. And what he meant by this is that the world of a kind of intuitive understanding — that the horizon is flat, that I am located in this sort of egocentric frame, that everything operates according to a kind of Vitruvian scale — the basis of a kind of phenomenological essentialism, was now forever exploded. That innocence was gone. You could
not unsee it, right?
I could not unsee it.
I agree with that. I just see this more as, like, the point — a feature, not a bug, let's say. And so I would invite your viewers to come to Venice and see it for themselves.
We don't need many excuses to go out and see the original Doge, where the original Doge is, and hang out. Ben Bratton, so great to meet you — not to meet you, but to be with you in person again — and hopefully this is the first of many conversations. — I would love that. Thank you, man. — Thank you. Awesome.
More from this recording
🔖 Titles
Exploring Earth's Evolving AI Brain and the Concept of Planetary Computing
Is Earth Becoming Conscious? Planetary Scale Computing and the Future with AI
AI as Earth's Nervous System: The Philosophy of Technology with Ben Bratton
Planetary Computation: How Earth's Evolution May Involve Artificial Intelligence
Unpacking the Role of AI in the Evolution of Planetary Sapience
Ben Bratton on AI, Philosophy, and the Terraforming Imperative for Earth's Future
The Antikythera Insight: Linking Ancient Tools to Modern Planetary Scale Computing
Can AI and Planetary Computing Bring Humanity's Next Evolutionary Leap?
Brian Keating and Ben Bratton Discuss Earth's AI Integration and Climate Mitigation
How AI Might be Shaping Us: Insights from Ben Bratton on Planetary Intelligence
💬 Keywords
Earth intelligence,
Planetary computation,
Artificial intelligence,
Consciousness,
Ben Bratton,
Nervous system,
Philosophy of technology,
Antikythera,
Visual arts,
Lawrence Krauss,
Geared mechanism,
Astronomy,
Temporal mapping,
AI algorithms,
Natural force,
Informational system,
Data centers,
Fiber optic cable,
Satellite,
Blue marble image,
Climate change,
Technological evolution,
Novacene,
Gaia hypothesis,
AGI limits,
Computational simulation,
Dark matter,
LLMs,
The stack,
Embodiment.
💡 Speaker bios
Brian Keating is a renowned astrophysicist and author, known for his ability to translate complex scientific concepts into accessible narratives. His work often explores the intersection of technology and the cosmos, as demonstrated in his engaging discussions with visionary thinkers like Ben Bratton. Keating delves into thought-provoking topics such as the evolution of intelligence on a planetary scale and the provocative question of whether AI is a tool we control or a force that shapes us. Through his insightful explorations, Keating invites audiences to reconsider our place in the universe and the role of computation as a fundamental natural force.
💡 Speaker bios
Brian Keating is a distinguished physicist, podcaster, and communicator who thrives on engaging in conversations with brilliant minds that inspire him. He leverages his podcast as a platform to explore captivating ideas and invite esteemed guests, often friends and experts, to discuss topics at the forefront of science and technology. One such project that fascinates him is the concept of planetary-scale computing and its potential implications. Brian is particularly intrigued by the Antikythera mechanism, an ancient Greek analog computer, and its relevance to modern scientific inquiries. By delving into such concepts, he continues to connect with astronomers and intellectuals across the multiverse, fostering discussions that fuel his passion for understanding the universe.
💡 Speaker bios
Ben Bratton is a distinguished figure in the intersection of visual arts, science, and philosophy, known for his unique perspective on the symbiotic relationship between these fields. In his role within the Department of Visual Arts, Bratton stands out due to his intense and sincere interest in scientific progress and emerging technologies. He explores how these advancements not only enable us to create and achieve new things but also expand our understanding of the world.
Bratton recognizes the historically complex yet genealogically intertwined relationship between philosophy and science. While some, like Lawrence Krauss, view these disciplines as oppositional, Bratton argues that modern sciences have roots deeply embedded in philosophical inquiry. He believes that the evolution of science hinges on philosophy's ability to ask the right questions, thereby giving birth to new scientific disciplines.
Conversely, Bratton posits that new philosophies arise when emerging technologies challenge our existing languages and conceptual frameworks, revealing them as inadequate or outdated. This insightful perspective highlights Bratton's forward-thinking approach to understanding the evolving dynamics between technology, knowledge, and the human experience.
ℹ️ Introduction
Welcome to another thought-provoking episode of The INTO THE IMPOSSIBLE Podcast, where we delve into the frontiers of science and philosophy. Today, host Brian Keating sits down with visionary philosopher of technology Ben Bratton to explore the awe-inspiring notion that Earth itself might be developing a planetary-scale intelligence through the evolution of artificial intelligence. Together, they investigate whether AI is merely a tool crafted by humans or a colossal leap in evolutionary consciousness paving the way for the planet's own nervous system. From unraveling the mystery of the ancient Antikythera mechanism to discussing the implications of an ever-evolving AI, our conversation stretches across time and thought, breaking down the boundaries between technology and human evolution. Buckle up for a ride through the possibilities of a future where AI and humanity blend in unexpected ways, as we question if we're training AI—or if AI is, intriguingly, training us. Join us on this exhilarating journey into the impossible as we redefine humanity’s role in this unprecedented era of technological advancement.
📚 Timestamped overview
00:00 Interest in how emerging technologies bridge philosophy and science, which historically share a tight yet contentious relationship, suggests technologies prompt new philosophies by challenging old concepts.
03:58 Computation is viewed as both a discovery and invention, serving as a natural force and foundational element in understanding intelligence and its relation to the planetary condition, beyond traditional algorithms and computers.
10:03 The text argues that the boundary between humans and AI is blurring, as AI increasingly performs tasks once thought exclusive to humans, leading to a shifting definition of what it means to be human.
13:23 AI advancements, like infinity or future iPhone models, are inevitable. The perspective suggests AI will discern whether such developments are beneficial or threatening, implying AI would prevent harmful creations, ensuring they're not threats to humans. This reflects a belief in AI's self-regulating potential in technological progression.
14:12 The text discusses the idea that true AGI would be a pivotal technological development and suggests we're currently experiencing a significant evolutionary shift in technology. It emphasizes the need to recognize and adapt to technological evolution rapidly.
19:08 The second terraforming is inevitable and will impact oceans, land use, and geochemistry. The third is a deliberate design process requiring decisions on planetary shaping and human settlement, involving machine intelligence in potentially nonlinear ways.
20:33 AI may not currently possess the capabilities to develop a "theory of everything" due to potential fundamental differences in language and reasoning that aren't addressed by existing models, like LLMs.
26:22 Can computers have or need embodiment for emotions like happy thoughts?
29:08 Collaborators like Winnie Street and Jeff Keeling explore AI's embodiment and possible insights from animal welfare.
31:14 Exponential increase in computational power is transforming artificial information processing.
36:15 Definitions of life, technology, and intelligence are converging, suggesting they may be different ways of describing similar phenomena.
37:58 Focus on transformation over time as key to understanding.
41:22 Book on planetary computation, written at UCSD, argues that technology reshapes geopolitical structures and governance through large stack systems; initially science fiction-like, now widely accepted.
44:45 The 1966 Lunar Orbiter image, the first photo of Earth from the Moon, was initially significant but overshadowed by "Blue Marble." Heidegger remarked that the image symbolically destroyed the ego-centric, intuitive world view.
❇️ Key topics and bullets
1. Introduction and Concept of Earth Developing Intelligence
Discussion on the world's first computer for mapping the cosmos.
AI as the planet evolving a nervous system.
Question: Are we training AI, or is AI training us?
2. Philosophical and Scientific Intersections
The role of philosophers of technology.
The historical relationship between philosophy and science.
New philosophies emerging from technological advances.
3. The Antikythera Mechanism
History and significance of Antikythera as an early computer.
Its role in computation and astronomy.
The idea of computation as a natural force.
4. Planetary Computation Conceptualization
Computation beyond traditional computers.
Earth's evolution to a planetary scale of computing.
Computation as a physical function, analogous to a light bulb's relationship to electricity.
5. AI as Part of Natural and Technological Evolution
Comparisons between human evolution and AI.
The relationship between AI development and Earth's sensory exoskeleton.
Intelligence and climate change mitigation.
6. Disruption by AI and Human-Technology Co-Evolution
The potential impact of AI on society.
AI's role in technological and societal disruption.
The continuous evolution of AI and technology.
7. Limits and Potential of AI
Debates on AI limitations at a planetary or universal scale.
Concept of AI as both a threat and a benefit.
The idea of AGI (Artificial General Intelligence).
8. Embodied vs Non-Embodied Intelligence
Can AI have emotional experiences like happy thoughts or pain?
The necessity (or lack thereof) for AI to be embodied.
9. Terraforming as an Imperative
The argument for terraforming Earth for life sustainability.
The future planning of Earth's habitability.
Role of AI in this reimagining of Earth's environment.
10. Future of AI in Science and Belief in Compute Power
Limitations of current AI models in achieving comprehensive theories.
The need for massive compute to solve complex scientific mysteries.
11. Speculative and Visual Arts in Philosophical Context
Relationship between speculative design and philosophical exploration.
Influence of AI and planetary ideas in the arts.
12. The Stack and Planetary Computation
Explanation of 'The Stack', a conceptual framework of planetary computation.
Analysis of global networks and computation as a 'stack'.
13. Gaia Hypothesis and Philosophical Reflections
Reflection on James Lovelock's Gaia hypothesis.
Evolution of technology and life on Earth.
👩💻 LinkedIn post
🚀 🌍 Excited to share insights from the latest episode of The INTO THE IMPOSSIBLE Podcast, where I had the opportunity to delve into mind-bending questions with Ben Bratton, a renowned philosopher of technology. We explored whether AI is simply a tool or part of Earth's evolutionary journey into planetary intelligence.
🔑 Key Takeaways:
AI as Earth's Nervous System: AI isn't just another innovative tool; it's a significant part of the planet evolving a nervous system. Could it be that Earth is developing consciousness through our technological advances?
The Power of Computation: Ben discusses how computation has always been a natural force in the universe, akin to electricity. He's intrigued by how AI might redefine not just what we can do, but what we can know.
The Symphony of Evolution: There's no final iteration in the technological evolutionary process. Just as AI might be training us to think differently, we might be the bridge to an entirely new form of planetary sapience.
Join us as we reimagine humanity's role in this extraordinary evolutionary leap.
💡 Listen to the full episode for fascinating discussions on AI, planetary computation, and the future of technology as an integrated part of Earth's potential consciousness.
#ArtificialIntelligence #planetaryintelligence #Technology #Innovation #PhilosophyOfTechnology #INTOtheIMPOSSIBLE Podcast
🧵 Tweet thread
🚀 Exploring the Mind of Planet Earth: A Deep Dive with Brian Keating & Ben Bratton 🌍✨
1/ In a captivating conversation, Brian Keating and Ben Bratton unravel one of the cosmos' most fascinating concepts: Is Earth itself developing intelligence? 🤖
2/ It all starts with the Antikythera, an ancient device with the dual role of computing and cosmic mapping. It's a conceptual precursor to today's AI, challenging us to rethink what intelligence at a planetary scale could mean. 🧠
3/ Bratton suggests that AI isn’t just a tool, but Earth evolving its own nervous system. Are we training AI, or is AI training us? It's a wild thought that could redefine humanity's evolutionary path. 🤔
4/ The duo dives into the idea of planetary computation. It’s not just about algorithms and data centers; it's about computation as a natural force, wrapping around our globe like a digital exoskeleton. 🌐
5/ Bratton's intriguing view: humans and technology aren't opposed but intertwined. Our evolution is deeply connected with the tools we create, pushing us towards an unknown future where AI and humanity co-evolve. 🔄
6/ Perhaps the most tantalizing question: Could AI and machine intelligence become a new, non-biological life form? This could redefine life as we know it, sparking new philosophical debates. 🤯
7/ So, what does the future hold? Strap in, folks. The journey is nothing short of a Copernican existential trauma but also filled with possibilities that could reshape our understanding of intelligence, life, and our role in the universe. 🌌
#AI #Philosophy #Consciousness #FutureTech #BrianKeating #BenBratton
🗞️ Newsletter
Subject: Unveiling Earth's AI Brain: Are We on the Brink of a New Era?
Hello Explorers,
Welcome back to your journey into the realms of the impossible with The INTO THE IMPOSSIBLE Podcast! In our latest episode, "Earth's AI Brain: Are We on the Brink of a New Era?" we dive deep into the intriguing concept of planetary intelligence and computational evolution with our brilliant guest, Ben Bratton, a visionary philosopher of technology.
Here's a glimpse into what you can expect from this riveting conversation:
A Cosmic Perspective on Computation:
Discover how the world's first computer wasn't just about numbers, but mapping our cosmic location, setting the stage for a mind-bending dialogue on computation as a natural force.
The Emergence of Planetary Sapience:
Ben Bratton expounds on the notion that AI isn't just another tool, but potentially the Earth evolving a nervous system. Are we in control, or is AI subtly training us in return?
Interdisciplinary Insights:
As a philosopher rooted in the humanities, Ben bridges the gap between science and philosophy, addressing the ever-evolving relationship between the two realms and their impact on new technologies.
The Terraforming Imperative:
We explore the concept of terraforming: not of distant planets but of Earth itself, transforming it to ensure it remains a hospitable haven for human life.
Future Echoes:
The discussion also delves into the tantalizing possibilities of what planetary-scale computing means for the future of our species, including the transformative potential of AI as a partner in our evolution.
Art and Philosophy as Catalysts:
Brian and Ben touch on the symbiotic relationship between visual arts, speculative design, and technological advancement, championing a holistic approach to understanding our ever-changing world.
We hope this episode ignites your curiosity and provides you with fresh insights into our rapidly evolving technological universe. Whether you're interested in AI, philosophy, or the future of humanity, there's something here for all intellectual explorers.
As always, we encourage you to share your thoughts and continue the conversation with us on social media, or feel free to respond to this email with your perspectives. We love hearing from you!
Stay curious,
The INTO THE IMPOSSIBLE Podcast Team
P.S. Don't forget to subscribe to our podcast for more episodes that challenge the boundaries of what's possible!
❓ Questions
The episode explores the concept of Earth developing intelligence through humans. How might AI as a planetary nervous system contribute to this?
Ben Bratton suggests that computation is a natural force in the universe. How does this challenge our traditional understanding of computation and technology?
What are the implications of viewing AI not as just a tool, but as an entity capable of training humans, according to Bratton?
How does the historical context of the Antikythera mechanism influence our understanding of the evolution of computational devices?
Discuss the concept of planetary computation as described by Ben Bratton. How does it relate to current AI developments and environmental challenges?
The episode touches on the symbiotic relationship between planetary computation and understanding climate change. How does this relationship help us approach environmental issues?
Bratton speaks about the generative nature of philosophy in times of technological evolution. How does this idea resonate with current advancements in AI?
How can we reconcile the philosophical discourse around AI with the practical, technological advancements we witness today?
Ben Bratton discusses the potential limits of AI's evolution and its role in societal development. What are your thoughts on the potential future paths for AI?
The discussion includes the notion of AI having "valence" or experiencing feelings. How plausible do you find this idea, and what implications might it have for AI ethics and development?
✔️ Dive into a mind-bending exploration of Earth's potential consciousness!
✔️ Could AI be more than just a tool—perhaps the planet evolving a new nervous system?
✔️ Join Brian Keating and visionary philosopher of technology, Ben Bratton, as they reimagine humanity's role in the universe.
✔️ Discover if we are on the brink of a technological evolution larger than life itself on The INTO THE IMPOSSIBLE Podcast, Episode: "Earth's AI Brain: Are We on the Brink of a New Era?"
🎧 Listen now and redefine your understanding of science and philosophy! #IntoTheImpossible #PlanetaryComputation #AIRevolution
Conversation Starters
Planetary Intelligence: Ben Bratton discusses the concept of Earth developing a "nervous system" through AI. How do you see the role of AI in evolving planetary consciousness? Could the Earth itself become a conscious entity?
Antikythera Insights: In the podcast, Antikythera is referenced as the world's first computer for mapping our place in the cosmos. How do you see ancient technological innovations influencing modern computational philosophy?
Philosophy and Technology: Bratton mentions that the relationship between philosophy and science has been both contentious and tightly linked. What's your take on how new technologies are reshaping philosophical questions today?
AI of the Gaps: The notion of "AI of the gaps" was brought up by Brian Keating. Do you think this idea captures the societal perceptions and expectations of AI accurately?
Limits of AI: The conversation touches on limits of AI and the potential of AGI to surpass human intelligence. What are your thoughts on whether AI will ever reach or exceed the cognitive capability of humans?
Terraforming Imperative: Ben Bratton speaks about terraforming Earth itself to ensure it remains habitable. What are your thoughts on this approach? Can AI play a role in making this possible?
LLMs and Theory of Everything: The podcast explores the limitations of large language models in discovering a theory of everything in physics. Do you think AI will ultimately contribute to breakthroughs in this area?
Planetary Sapience Evidence: Bratton explores the idea of planetary computation being an emergent feature of Earth's evolution. What potential signs of planetary sapience should scientists be on the lookout for?
Future of Humanity with AI: The episode ends with discussions about the potential disruptions AI might bring. How do you see the evolution of AI impacting human society and our daily lives in the coming decades?
Stack Systems and Global Scale: The concept of stack systems in planetary computation was fascinating. How can we leverage these systems to address global challenges like climate change and sustainability?
🐦 Business Lesson Tweet Thread
🧠💡 Are we witnessing Earth's leap into consciousness? A unique twist on AI that flips our understanding upside down. Let's dive in. 👇
1/ Is AI just a tool, or is it Earth's evolving nervous system? 🕸️ Ben Bratton explores AI as the planet's way of developing a sensory network through us. 🌍
2/ The Antikythera mechanism taught us computation as a cosmic tool, not just crunching numbers. What if AI reshapes not just what we do, but what we know? 🚀
3/ Humanity and technology aren't at odds—they co-evolve! Our understanding of AI might redefine what it means to be human. 🤖❤️
4/ Bratton argues that the true function of computation is yet to be fully realized, potentially transforming our role on this planet. 🌀
5/ Imagine Earth as a sprouting sensory organ, powered by tech enveloping our globe. AI as a contributor to planetary intelligence, not just human. 🌐✨
6/ What if climate change understanding is a result of AI's evolution? It's symbiotic—tech helps us comprehend planetary conditions. 🌦️
7/ The real question: Are we ready for a Copernican shift in our existential perception? Brace for the ride. 🌌
8/ A new era beckons as we ponder, are we trainers of AI? Or are we being trained for something greater? 🤔
What a time to be alive and think expansively! 🌟 Let’s continue this exploration and redefine intelligence as we know it. #IntoTheImpossible #AIRevolution #TechAndHumanity
✏️ Custom Newsletter
Subject: Dive into the Future: Is Earth Evolving an AI Brain? 🌍🤖
Hello [Name],
Ready to embark on a mind-expanding journey? Our latest episode of The INTO THE IMPOSSIBLE Podcast is now live, and it’s a thought-provoking expedition into the world of AI and planetary intelligence.
Episode Title: Earth's AI Brain: Are We on the Brink of a New Era?
Introduction:
Join our host, Brian Keating, as he engages in a fascinating dialogue with Ben Bratton, a visionary philosopher of technology, to ponder the concept of the Earth evolving a "nervous system" through artificial intelligence. Are we simply training AI, or is AI, in fact, training us? Dive into these modern philosophical questions with us.
5 Keys You'll Learn:
The Genesis of Computation - Discover how computing originated not just as a number-crunching tool but as a means to understand our cosmic place.
AI as a Natural Progression - Explore Ben Bratton's notion that AI is more than a tool—it's a developmental step for the planet.
Challenges of Defining Intelligence - Understand the evolving boundaries of human and AI intelligence, and how one influences the understanding of the other.
The Three Terraformings - Learn about the transformative process of terraforming, not just across space, but more crucially, here on Earth.
Ethics of AI and Embodiment - Delve into the ethical considerations of AI embodiment and whether technology can possess emotional states.
Fun Fact from the Episode:
Did you know the Antikythera mechanism, often considered the first known computer and dating back to around 200 BC, was discovered off the Greek island of the same name? It wasn't just a calculator; it mapped the stars and our place among them!
Outtro:
This episode is a perfect amalgamation of philosophy, science, and futuristic musings, guaranteed to challenge your understanding of intelligence and planetary evolution. Tune in and join the conversation about how technology and humanity are intertwined in this thrilling epoch.
Call to Action:
Don't miss out on this captivating discussion! Head over to your favorite podcast platform and search for The INTO THE IMPOSSIBLE Podcast to listen to "Earth's AI Brain: Are We on the Brink of a New Era?" now. After tuning in, we'd love to hear your thoughts! Join the discussion on our social media channels and share your insights using #IntoTheImpossible.
Stay curious,
The INTO THE IMPOSSIBLE Podcast Team 🌌🎙️
🎓 Lessons Learned
Earth's Evolutionary Leap
Exploring AI as Earth's evolving nervous system, potentially leading to a new era of global intelligence.
Antikythera Unveiled
Discovering the world's first computer, the Antikythera mechanism, and its role in mapping the cosmos and its astronomical roots.
Planetary Computation
Examining how computation is emerging as a natural universal force, and its implications for technology and science.
Philosophy Meets Technology
Bridging philosophy and emerging tech, exploring how new technologies reshape our understanding of the world.
Complex Systems Evolution
Understanding Earth's computation not just as an algorithm but as evolving systems that redefine intelligence.
AI: Tool or Trainer?
Delving into the debate: Are we training AI, or is AI training us in unexpected ways?
Technological Evolution Insights
Assessing AI's potential impact on future technological evolution and its role in redefining our capabilities.
Limits of AI Expansion
Investigating the boundaries of AI's reach, exploring whether there's an end to its evolving capabilities.
Human-AI Symbiosis
Discussing potential collaboration where AI might redefine human roles and capabilities, leading to new synergies.
Terraforming Earth's Future
Considering deliberate actions to terraform Earth, ensuring it remains a viable host for life amid technological advancements.
10 Surprising and Useful Frameworks and Takeaways
Planetary Intelligence: Ben Bratton proposes AI as Earth's nervous system, suggesting AI is more than a tool; it’s a critical component of human evolution and planetary awareness.
Historical Context of Computing: The Antikythera mechanism, an ancient Greek device, symbolizes the inception of computational technology, bridging our current tech with cosmic exploration.
Interdisciplinary Nexus: The relationship between philosophy and science is highlighted, noting that many scientific fields were originally philosophical inquiries.
Computational Evolution: Bratton suggests computation is a natural force, not just algorithms and machinery. His notion is that we tapped into an inherent characteristic of the universe.
Planetary Computation Concept: Earth is described as having developed a "mineral based sensory organ" through technology, emphasizing computation’s planetary scope.
AI as a Symbiotic Force: AI is compared to biological evolution, suggesting our technological developments, like planetary computation, are natural extensions of our ecosystem.
Terraforming Mandate: The idea that we must terraform Earth itself to sustain life highlights a shift from domination and resource extraction to stewardship and sustainability.
Intelligence Redefined: Bratton challenges traditional notions of intelligence. He sees AI not in adversarial terms but as part of an intertwined evolutionary path with humanity.
Techno-Philosophy: Bratton’s concept of a "pre-paradigmatic" moment indicates we’re still formulating the frameworks needed to understand our technological epoch fully.
Embodiment of Intelligence: The conversation raised the intriguing question of whether AI can experience emotions or consciousness similarly to humans, emphasizing the complexity and potential of machine consciousness.
These insights collectively paint a picture of a paradigm shift in which human, technological, and planetary evolution are converging in unprecedented ways.
Clippable Segments
Five 3-minute segments from "Earth's AI Brain: Are We on the Brink of a New Era?" well suited for social media:
Clip 1: The Dawn of Planetary Computing
Title: "Antikythera: The Dawn of Planetary Computing"
Timestamp: 00:00:00 - 00:03:33
Caption: "Rediscover the origins of computation with Ben Bratton as he unpacks the Antikythera mechanism. Was it the world's first computer? Dive into its astronomical significance and how it shapes our 21st-century philosophy of computation. #PlanetaryComputation #Antikythera"
Clip 2: AI's Reciprocal Training
Title: "Are We Training AI, or is It Training Us?"
Timestamp: 00:04:00 - 00:07:50
Caption: "Ben Bratton explores the complex relationship between humans and AI. Are we teaching AI, or are we the ones being subtly trained? Discover the intertwined evolution that may redefine human culture and technology's role. #AITeaching #EvolutionaryLeap"
Clip 3: Terraforming Earth: A Call to Action
Title: "Terraforming Earth: Our Imperative Mission"
Timestamp: 00:18:09 - 00:21:55
Caption: "Ben Bratton presents a compelling vision for terraforming Earth to secure its future. Learn about the imperatives behind rebalance, survival, and innovation in planetary maintenance. #Terraform #SustainableFuture"
Clip 4: The Limits and Potentials of AI
Title: "Exploring AI's Limitations and Potentials"
Timestamp: 00:21:40 - 00:25:40
Caption: "Brian Keating and Ben Bratton discuss whether AI will ever reach the heights of human intelligence, especially in areas like theoretical physics. Dive into the limitations and untapped potentials of AI. #AIPotential #FutureIntelligence"
Clip 5: AI's Embodiment and Ethical Future
Title: "Can AI Truly Feel? Exploring Machine Embodiment"
Timestamp: 00:26:21 - 00:30:21
Caption: "What happens when AI starts to 'feel'? Brian and Ben tackle questions about embodiment, ethical programming, and the future of AI's emotional intelligence. #AIEmotion #EthicsInAI"
Made with Castmagic