What if I told you that the man who invented virtual reality thinks AI might be making us less human? Yet the same guy who jammed with Richard Feynman believes technology should expand our consciousness, not replace it. Here's what most people get wrong: they think VR and AI are basically the same thing, both artificial, both digital, both changing reality. But Jaron Lanier sees them as opposite forces. VR expands what's possible for human consciousness. It literally changes the way you imagine, create, and think when you inhabit different worlds.
The INTO THE IMPOSSIBLE Podcast
Jaron Lanier: VR Will Expand Human Consciousness
00:00 Fascination with Consciousness and Reality
08:31 "Brain's Adaptation to Body Types"
12:47 "80s VR Showcase with Spielberg"
16:18 "VR Sickness and Sensitivity"
26:02 "Dawn of Everything Connections"
27:29 "Perception, Reality, and Design"
35:55 "VR, Perception, and Human Adaptation"
40:54 Space-Based AI Powered by Solar
45:45 Virtual Saxophone with Gloves…
Featured moments
Highlights
“When you interact with an AI lover, an AI therapist, an AI girlfriend, you're not interacting with some neutral technology. You're interacting with a company whose interests override yours.”
“What future bodies we could evolve into that the brain is pre-adapted to. And this in a way is one way of exploring the potential for the human brain in the far future. It probably gives us information about hooks that might be used to modify or enhance the brain in the future. It's just an incredible thing.”
The Limitations of Virtual Reality: “The question I have, kind of most prominently in my mind, are the limitations of VR and, sorry to be maybe insulting, the failure to live up to what is such a promising set of potentialities or potential futures.”
“Unpacking how algorithms exploit our emotions and use enragement to lead to engagement.”
Rethinking Human Potential: “If we look at that diversity it might make us more optimistic about human potential and human nature. That we've been looking at one little blip during the period of written history, but it actually isn't that representative of how we could be if the energy cycle were different and if the technology base were different.”
Full transcript
And classic VR, where you're not seeing the real world anymore, is being able to change your own body in interesting ways. That's an amazing sensation, and I think it has an indirect effect on your cognition, which is really profound. There's one thing that might still shock people that they haven't seen yet.
And here's the most uncomfortable truth that Silicon Valley doesn't want you to hear. When you interact with an AI lover, an AI therapist, an AI girlfriend, you're not interacting with some neutral technology. You're interacting with a company whose interests override yours. Today, Jaron Lanier, VR pioneer, Microsoft scientist, musician, and one of the most brilliant and contrarian thinkers I've ever spoken with, goes deep into consciousness, creativity, and the future of what's going to happen to your very brain itself. Let's go.
Today I'm sitting down to talk with one of my favorite thinkers, Jaron Lanier: VR pioneer, author, musician, bon vivant. I think the only thing he's missing is a high school diploma. Is that right?
Yeah, I guess that's true. Yeah.
Well, by the end of today's conversation, you won't think about our actual reality the same way ever again. Jaron, long before you became known as the father of virtual reality, you spent time jamming with Richard Feynman at Caltech. Feynman, as you know, experimented with sensory deprivation tanks and used his own body as a laboratory to probe physics. Watching him do that, what did you learn about sensation and feedback? And how has that shaped your vision for virtual worlds that replicate reality through purely artificial sensors and sensory input?
I cherish that I had this connection with Feynman, but it was a peculiar one. I wasn't officially a student there. A lot of times people come to me and ask things about his life, but I actually don't know a lot of the details. I haven't read all the bios, and I don't really know the public Feynman that well. The story for me was pretty simple. My first girlfriend was somebody I met in New Mexico, where I grew up. Her parents were separated; she was there, and I chased her back to LA. It turns out her dad was the head of the Caltech physics department.
I was a bright kid and I just ended up kind of informally around there and had a chance to spend time with him. And I did play some music with him, if that's what you mean by jamming, because he was actually kind of a cool, eccentric percussionist. And then much later he invited me to be the only male, the only non-hippie-girl person, to accompany him on LSD experiments, where I was told my job was to keep him from falling off a cliff in Big Sur, which I succeeded at. But I think he enjoyed having all the hippie girls around more than me. Not that anything happened, but I'm just saying there was a certain aesthetic exploration, maybe.
Okay, so that's me and Feynman. I don't feel like I learned a lot about the sensorium and such from him; that wasn't the particular thing with him. But I have been fascinated by it, obviously, and I did have many other people to learn about it from. I have always been fascinated by this very peculiar state, which is the human condition, where we are physical. Our only connection to the world is through imperfect senses, through these physical exchanges of information that follow the same conservation laws as everything else.
And in a sense we're kind of remotely and imperfectly connected to our world. And yet there's this other sense in which we have this inexplicable sensation of really being here and being in it and being sort of real in a sense, beyond just a bunch of particles. I'm totally fascinated by that. One of my old phrases from a publication was consciousness is the only thing that isn't reduced if it's an illusion. The mere possibility of illusion is the thing, you know. And so this weird state that we're in, which to me I have to turn to metaphysics to talk about, is absolutely fascinating to me. And it has been since I was little. And it's definitely part of what led me into the whole virtual reality thing of experimenting, replacing all of the sensory channels and the interactive channels with synthetic ones and seeing what happens, which we did.
So obviously you pioneered virtual reality decades before it really became mainstream. And we're going to get into, you know, some of the tough questions that I have for you as the father. By the way, who's the mother? Who's the mother of VR?
I know, that's what I always say. I don't call myself the father of virtual reality, but whenever somebody does, I say, well, it depends on if you believe the mother. The mother of virtuality, there are a few interesting candidates for that. I mean, Ada Lovelace is an obvious one. But another one that's interesting is Suzanne Langer, who was an art theorist from the postwar period, the 40s and 50s mostly, who did use the term virtual world to describe something about her sense of art, which I think is what inspired Ivan Sutherland to start using virtual world to talk about computer displays.
So maybe Suzanne Langer, although we never met, so I don't know how that worked exactly. Maybe some kind of mail-in sperm program I participated in as an infant or something. I don't know.
You're saying you might have... Well, let's not get into that. When we look at what you have fathered, sire, you're not denying paternity at all. But...
Looking at where we are, that thing has a lot of expenses. I'm going to deny paternity. I'll leave it to Zuck. Yeah, okay, fine.
Yeah, we'll get into some of the lacunae. But now it's become mainstream. And so I want to ask you, looking at where we are now with VR and AI, what's the one thing that the world still doesn't understand about these technologies that you think is going to shock us in the next decade or so?

About VR stuff? I find almost all current VR to sort of not be getting at the stuff that's the most intense about it. It's very rare. If you go through all of the online sources of stuff for VR, through Meta or Steam or anything, it just seems like they all miss the stuff that I think is the coolest, which is really strange to me. To me, the coolest thing in classic VR, where you're not seeing the real world anymore, is being able to change your own body in interesting ways. That's an amazing sensation, and I think it has an indirect effect on your cognition, which is really profound. So, turning into an octopus or something. And there's academic study of it, where we study how much you can stretch the homunculus.
Homunculus flexibility is what it's called sometimes. I should have called it plasticity to be more academic, but I was just in an anti-academic mood. There are at least two really good labs in the world, one is Mel Slater's in Barcelona and another is Jeremy Bailenson's at Stanford, that have been devoted to trying to map out how the human brain's mapping to the human body can be modified in VR. And it's utterly fascinating. I mean, you can really bring out different cognitive specialties in people by changing the body they think they have. There's also a wonderful hypothesis.
Yeah, yeah. Tell me the most exciting developments on the horizon that laypeople might not be aware of and why they're so significant in virtual reality.
Whenever I look at all of the offerings in the Meta store or the Steam store, I'm always amazed that the stuff that I think is the coolest is just not there, and I don't understand why. Some of my favorite things involve changing the human body. So if you're in fully occlusive classical VR, where you don't see the real world anymore, you have the option of changing your body. You can turn into an octopus or all sorts of things, and you map your real body's motion into this thing. You can even map yourself into a distribution of the clouds in the sky, all kinds of things. And when you do that, people's cognition is changed indirectly and you can bring out different qualities in them, which is amazing. So there are a couple of labs that study this as a primary focus. One is Mel Slater's in Barcelona, and another is Jeremy Bailenson's at Stanford.
And one of the hypotheses that I really like here is that when we learn which body types your brain seems to be able to control the best, we're exploring the deep phylogenetic tree, in other words, the evolutionary history that your brain passed through to come to the human body, because your brain doesn't know what species it's in. It's just gradually being adapted to each species along the way, so it retains traces of controlling other bodies. And when you become those bodies, it's like they're natural for your brain. But the other thing, which is even more amazing, is what future bodies we could evolve into that the brain is pre-adapted to. This in a way is one means of exploring the potential for the human brain in the far future. It probably gives us information about hooks that might be used to modify or enhance the brain in the future. It's just an incredible thing.
And you can study it with today's virtual reality, except people don't, for some reason that escapes me. It's the very coolest thing. I've had so many experiences like that that I'm really excited about. But anyway, I'd say if there's one thing that might still shock people that they haven't seen yet, it's changing your physiology inside VR. And then if you do it together with other people, it gets even weirder and cooler. So that's the thing.
Soon I want to connect your work and your thoughts about virtual reality to scientists like Donald Hoffman, who makes a sort of allied case, perhaps in this case for reality, in his book about consciousness and so forth. We'll get there soon. But the question I have, kind of most prominently in my mind, concerns the limitations of VR and, sorry to be maybe insulting, the perhaps failure to live up to what is such a promising set of potentialities or potential futures. I'm thinking most recently of Apple Vision Pro. I know they're a competitor.
That's a sad case, isn't it?
Yeah. So I want to bring up something to you. Just preparing for the interview, I read a study on ScienceDirect, and it was all about the experiences people have with vestibular challenges, you know, from using headsets. I almost brought my Meta headset here. The test involved interactions with people while denying them access to their peripheral vision. This is before the Apple Vision Pro, by the way; I think it's gotten worse.
When this occurred, they complained of something called cybersickness. Have you encountered that before?
Yeah, I mean, that's very fundamental to VR research, of course, both as an interesting science topic and obviously as a practical engineering topic. It's absolutely fundamental.
Is that going to... Since our hardware is, you know, millions of years old, and our software, just language and stuff, is relatively new, on a timescale of hundreds of thousands of years, do we have any hope with an external device that eliminates your peripheral vision, where predators tend to attack? Is that a fundamental barrier to adopting it widely, or does it maybe put a brake on its ultimate potential?
Hey, I want to use this opportunity to ask your audience to help me with something. Many years ago, a NASA researcher in VR named Michael McGreevy used to show a slide set with the very first head-tracked visual display, little goggles. And it wasn't for people; this far predated anything I did, or Ivan Sutherland, or any of the remote robotics things. It was for kittens, and it was to study how kittens perceive peripheral vision. So it was this little thing with just a line that could move. It was not digital, it was mechanical. And so there's this cute picture of all these kittens wearing VR headsets from the 40s or 50s.
And I want that picture and I can't find it. All of you need to go out and help me find this thing. I need crowdsourcing because I can't find it. AI can't find it. It's there. I didn't confabulate this. It's a real thing. I believe it.
So, about motion sickness. It's a fundamental thing to study. Can I tell you a funny story about it? Of course.
Funnier the better.
And then I promise I'll get to the science.
No, I'll keep you on track.
In the 80s, I guess it was. Yeah, in the 80s, I used to do some stuff with Steven Spielberg, maybe most famously for the movie called Minority Report. But earlier than that, we put a bunch of VR stuff in a big truck and rolled it from Silicon Valley to Hollywood just to show VR, this new thing, at that time profoundly esoteric, hard to explain, profoundly rare, profoundly unusual, and crazy expensive. Many millions of dollars of equipment were in these trucks just to give people an experience of looking around inside of a virtual world. So anyway, we set up the truck in the lot at Universal, one of the studios. And there was this real old classic studio head named Lew Wasserman who was still running the place. And so he went and he watched all these people go through it. And everybody, of course, was just blown away and didn't even understand what was happening to them.
And then this little figure comes over and says, okay, kid, kid, come here. And he says, are people gonna get sick in this thing? And I'm like, oh, Mr. Wasserman, we have been studying this. We have it down to an incidence of one per thousand, and we believe we can get it to one in ten thousand in the next year. We can't solve it totally, but we can make it very rare. We know how to do blah, blah, blah.
And he looks at Spielberg and he says, why are you bringing me some kid who doesn't know anything? And Spielberg's like, oh, God, I'm embarrassed. And then he says, kid, let me tell you something. I want to see headlines about how my janitors are quitting and suing me because of all the vomit they have to clean up. That's what sells tickets. Until you know something, you really don't belong in Hollywood. And it's funny, because they had just done that with a movie called Jaws, where apparently people were throwing up in the theater.
At least that was the story. And my first introduction to business and simulator sickness was that it's desirable.
Today's episode is sponsored by Shortform, the smartest way to keep up with the world's smartest ideas.
Look, I'm a professor, podcaster, and parent, and I don't have hours to spare. When I needed to prepare for a guest like Jaron Lanier, the technologist and philosopher who helped invent virtual reality and now warns about its social consequences, I started with Shortform. I began with their Chrome extension, which summarized Lanier's essays and interviews on the attention economy. Then I opened Shortform's guide to Ten Arguments for Deleting Your Social Media Accounts Right Now, unpacking how algorithms exploit our emotions and use enragement to lead to engagement. I was on a deadline to release this episode the same week his book got published, so Shortform was indispensable. I needed to make the connections fast, and they didn't disappoint. Lanier's work connects with thinkers like Tristan Harris, Sherry Turkle, and Nicholas Carr, revealing how we can reclaim our minds from the platforms that compete to dominate them.
But it's not just books. Shortform adds articles, podcasts, and even AI-generated summaries. I use it for research on AI hype cycles, for catching up on the latest astronomy papers, or, yes, even sometimes for parenting questions, diving into their podcasts and articles. It's become my daily driver, my curiosity companion, helping me save hours and hours a week.
And it will for you, too. You can try it for free, and if you go annual, you'll get three extra months on me. Whether you're curious about Lanier's vision for a more humane Internet or just want to explore new and brilliant insights on technology, consciousness, and society, Shortform's got you covered.
Start your free trial plus three bonus months at shortform.com/impossible or click the notes below.
Now back to my conversation with Jaron.
And maybe this was an unfortunate moment. It's not a simple thing. There are a lot of simple things we can do to reduce the occurrences a great deal. One of the things about it that's not simple is that there's a great deal of variation in how sensitive people are to it and in the way in which they're sensitive. And there are certain narrow demographics that are exceptionally sensitive, and it's kind of peculiar. I'll give you a clue: they happen to be the ones who don't work in VR labs. So I remember, without naming any names, I was talking to some of the very nice people at Apple who were bringing out their headset, and before it was announced, they said, we have totally solved sickness. Nobody will ever get sick.
And I said, you don't know what you're talking about. And one of the very first reviews was by a certain journalist who got access to it and got sick, and who happened to be, let's say, a smallish Asian female, a very prime demographic to be affected by it. Why? We don't know. But this is actually one of the areas where the lack of diversity in core engineering teams in Silicon Valley really bites hard. I'm sure it's not the only one; I'm sure there are a lot of others we don't know about. But, you know, Apple. There's no way to solve it totally; that was the pipe dream.
But you can reduce it a great, great deal. And you can also design the experience and the overall product so that when it happens, the person can deal with it and recover better. You have to acknowledge it, though. If you pretend you've solved it, then you're deciding not to solve it, you know. And the thing is, people even have it in reality. We get a little dizzy and disoriented in reality. And this gets back to what I was saying before, that our sensory and motor systems are not perfect.
A lot of the function of the brain is actually managing the fact that our physical link to the rest of physicality is imperfect. That's a core quality of cognition. And you don't treat that as a problem to be solved. You treat that as a quality to be acknowledged and embraced and worked into everything, you know. But anyway, yeah, there's a lot you can do. Peripheral vision is a lot of it. Relative latencies of different parts of the system are a lot of it. Sometimes it's just design.
Like I was talking about making your body as weird as possible. If I want to make you fall over, or if I want to make you throw up, I sure can do it, no problem. But of course I don't. There's a design space that's fundamental to people. The great thing about virtual reality is that it forces you to think about people biologically and concretely instead of abstractly. When you're designing on a screen, you can sort of imagine your user as this abstract thing who just moves a cursor around or something.
With virtual reality, if you're not being a realist about biology and everything else about humans, you are failing. Unfortunately, that's the majority of the field. But you know, when you ask, why isn't VR picked up more? I think that's actually a lot of it. It's the reluctance to accept the fundamentally messy, gooey, biological nature of people, which is so anti-Silicon Valley culture. But unless you can really go there, you're not going to be able to design for VR in a way that catches on, you know. And I think that's where we've been.
What you just said really resonates with me, because I've been making the case that it might be impossible, at least with the lock-in phenomenon, LLMs plus GPUs being a victim of their own success, for us to experience another nausea-inducing phenomenon like the one Einstein described in his famous free-fall thought experiment. He said, if you were in an elevator and the cable broke (here's my real-reality finger puppet of Einstein here), you would experience no gravitational force field. And that thought, he said, titillated him unlike anything else, in 1907. It was the happiest thought of his entire life, he said, looking back on it. I want to ask you...

It's believable. I can imagine it would have been. It still is an amazing thought, isn't it?
It is. It's absolutely amazing. And it gives me hope for these meat, you know, these meat brains. Not like this 3D printed, one mike suit.
I need to keep up with you.
Go have my head there.
Come on.
I went to your house in Berkeley many years ago.
Weird props to have, but anyway. Okay, go ahead.
So my question is, without an embodied, you know, kind of body, literally, how is it possible for an AI system, an LLM system perhaps, or even other forms of AI, to have a happiest thought, and even to visualize that sensation of the pit of your stomach rising up that induces nausea in many people? How is it possible? Are these geometries of LLM plus GPU just fundamentally not going to lead to a breakthrough the way Riemannian curvature did for Einstein?
Yeah, well, that's a controversial topic right now, and I'm kind of on the inside of it. I'm the prime scientist at Microsoft, and I get to see a lot about how the models work and try to improve them. I'm sometimes perceived or accused of being a skeptic. I don't think I am at all. I like them. I think we're bringing something of value to the world. I just really like realism.
I like honesty. So what I think we have with the big AI models, with large language models in particular, is, from a mathematical point of view, something that detects patterns and projects them or extrapolates them, in a variety of ways, in a statistical distribution, which in some cases, if they're pretty close to where the training data was and if they're in certain situations, can often be useful. A great example is vibe coding. On small programs they can often be useful, and I think that's great. I actually think that's a benefit overall.
Sometimes they can't be, but it's all statistical. I mean, statistically, once in a while, like a Boltzmann brain, the LLM might come up with some new theory. It's not precluded. It's just less likely. The more creative, or unlike the training data, something is, the less likely it is to come up, because this is a big statistical machine. And then from a social and economic and semantic point of view, what we have is a new form of human collaboration. So just like with the Wikipedia, we combine people's stuff together, and we kind of suppress where it came from through pseudonymity.
But nobody thinks Wikipedia is anything other than a jumbled cooperation of a bunch of people. So a large language model is exactly the same thing. It's a bunch of stuff from different people that's jumbled together using a bunch of rather computationally expensive statistical processes. That's what it is. That's all it is. Is that useful? Hell, yeah, it's great. It's really useful. Pretending it's more than it is makes it less useful.
Realism makes things more useful. That's why reality's there, I guess. So, this question of whether an LLM could come up with the new Theory of Everything. Well, it could. It's just not super likely. I don't think there's any harm in trying. Believe me, I have, and I've had students do it. Why not give it a go, you know?
You can ask one of the frontier large language models to write you a Theory of Everything paper, and it'll come out as good as a lot of what's out there.
Yeah.
Like it could get on the arXiv.
Probably, because, well, I get three of those a week.
Oh my God, I get so many. And people are so convinced that they have it. You know, it's like, there's some symmetry group, and you do this and you do that, and oh, here it is. A dirty little secret is that it's not that hard to come up with a paper like that if you want to, but it's also not that hard for a model to come up with a paper like that either.

So we're going to talk soon, as I said, about past guest and friend Donald Hoffman at UC Irvine and his thoughts about, you know, reality not existing. The Case Against Reality is his book. I imagine your book could be titled The Case for Virtual Reality. But before we get to that, let's go through Dawn of the New Everything. Let's judge a book by its cover. You're never supposed to do it.
Hey, book lovers, we're judging books by their covers. We know we're not supposed to do it, but here on INTO THE IMPOSSIBLE, there's nothing to it. Let's take a look and judge some books.

Oh yeah, so I did find one for you. So here's the paperback edition of it. I think actually the hardback in the American version is a little cooler because they put a hologram on it. This is a picture of me, I suppose it's got to be in my 20s. And it was taken by Kevin Kelly, who is an old friend and started Wired magazine along with some other people.
And that's one of the earlier versions of the VPL headset. That's the version that existed in the early 80s, and it was supplanted by a better one. But that is, to my knowledge, the world's first head-supported color stereo tracking display, and certainly the first commercial one. And yeah, so there it is. And that's me when I was younger, my hair shorter.
And the subtitle has a kind of connection to the next topic, which is about actual reality. Can you read the subtitle? What was the choice of subtitle based on?
Oh, let's see. It says Dawn of the New Everything: Encounters with Reality and Virtual Reality. Let me say something interesting about the cover that some might be curious about. There's another book with a very similar title called The Dawn of Everything, and that's by two people, one of whom is David Graeber, who was a dear friend of mine, who unfortunately passed away just before his book became a bestseller. And that coincidence of the titles was intentional, because what that book is about is reconsidering the archaeological record to argue that there's been much more diversity in human societies than we usually think, and that if we look at that diversity it might make us more optimistic about human potential and human nature. That we've been looking at one little blip during the period of written history, but it actually isn't that representative of how we could be if the energy cycle were different and if the technology base were different.
So, a really interesting book. And mine was a bit of a memoir of coming up in Silicon Valley, of starting one of the Silicon Valley things. And the idea was that they would be linked; for those who are curious, it's intentional. So there was The Dawn of Everything and Dawn of the New Everything. Mine came out a couple of years earlier. I'm amazed that I wasn't flakier than those guys. I would have thought mine would come out later, but somehow or other that's what happened.
Anyway, the subtitle is Encounters with Reality and Virtual Reality. That is the subtitle, at least in America. I think it might be different elsewhere, but anyway, probably, yeah.
So as I mentioned, Don Hoffman is a friend, and he's argued that our perceptions aren't really windows onto true reality, but are basically survival-based Darwinian adaptations that provide desktop interfaces, kind of like a VR rendering that hides all the messy code, again for our survival. In Dawn of the New Everything you have a contrary perspective, to some extent: that if reality is warping our senses and virtual reality is warping our senses, then that's something we need to be mindful of, but also sometimes lean into. I sense sort of a dichotomy there. But let me ask you this. If reality itself is an illusion, let's just take his proposition at face value, how would that shape the design work that you and your colleagues are doing now for headsets, for software, for human interfaces?
So, to my embarrassment, I have to say I have not read his book, so I can't speak to it directly. But if I'm going to speak to your characterization of it, yeah, what I would say is that we often get into a little bit of difficulty by wanting to think in a binary when there's really a statistical distribution. So, do we perceive reality directly? What could that possibly mean? One piece of reality cannot fully know another piece of reality. If you don't know what I mean by that, go and study your quantum theory. What we have is an indirect, kind of sloppy way of reality connecting together. Now, in a sense, it's all very locked in because of conservation laws and least-action principles and all that.
There's a sense in which it's all locked in together, so far as we can tell. But in terms of any little piece of it getting information about another little piece, that's pretty funky. Reality is set up as an imprecise kind of undergraduate project in that sense. I don't know who to complain to about that. But anyway, there's this notion that there's some even plausible hypothesis that the brain knows reality directly. Unless you're talking about something metaphysical, the brain might know consciousness directly. I'm not even sure about that. That's an interesting area we could talk about, very difficult, and very difficult to say anything with clarity about.
But in terms of physicality, the very hypothesis that we could know reality directly is not definable and just doesn't even make sense to me. So everything you said I would agree with totally. But the thing is, we can know it statistically, and this is what's important. So when my hands touch, do my hands really touch? Well, if you really went down to the smallest level of reality and looked at the particles, touching isn't really what's going on there. There are fields interacting. It's a little complicated, you know.
But what does touch even mean at that level? It's not even defined, right? But statistically, yeah, my hands are cosmologically unlikely to interpenetrate when I do this. Right. I don't think that lacking an absolute knowledge of reality means lacking any knowledge of reality.
Rather, I think we have some knowledge of reality at a statistical level, and that's legit, and that's real. The case for acknowledging that statistics are real, I'm all on board for that. Our perception of reality is statistical, and to me that makes it real. In fact, in one of my books I have a comment that the way you know something is real is that you can't know it totally. I think that's in You Are Not a Gadget.
And I think that's accurate. Whenever you know something fully, that means, you know, some kind of abstraction or construct. And that is exactly not reality. It's when you only know something partially that it's plausibly real.
I think that does kind of resonate with Hoffman, at least at some level, because life by necessity is a process of filtration, of compression, of loss and so forth. And I think that's one way to look at it that's in consonance with what he said. I want to turn to another guest. We'll get to the questions from my best man at my wedding, Stephon Alexander. But before we get there, a question from Rizwan Virk, who
teaches at ASU, and he's written a book about the simulation hypothesis being real and so forth. He has a question for you about the influences on your work. He wants to know about science fiction and real-world antecedents, like the Sword of Damocles demo by Ivan Sutherland. And he's curious what you think about things like Lawnmower Man and Snow Crash, both of which have been influenced by you and your work.
And all best wishes today to our friends at ASU, who are apparently shut down by a dust storm. Crazy. Oh, no.
Oh, I didn't hear that. You must have found that out on Twitter, because people.
No, I don't subscribe to those things. I actually talk to individuals and my friends.
I know, I know. I'm teasing.
The cyberpunk media movement, in a way, predates real virtual reality, because we have people like Philip K. Dick and Ray Bradbury and so on writing pieces that incorporated virtual reality ideas, or at least something awfully similar, from the 50s and really early 60s. But starting in the 80s, I was in touch with all these people. So Neal Stephenson came along a little later. We're still buddies; I still keep up with him. But even a little before that, William Gibson was a buddy. And in fact, I wish I could do his voice, because at the time he had more of a drawl than he does at this point. But he was like, you know, if things had been different, he would have been interested in becoming an engineer and working on this stuff instead of writing about it.
And I was like, oh, God, no. You gotta be a writer. Like, don't even say that. Then I'd give him a lot of hell about making his stuff so dark, because I was kind of afraid that he'd curse virtual reality by setting a kind of dystopian tone about it. And he was like, I don't think you know how literature works. But at one point he did try to make it a little less dark, but he said it did harm to the quality of the work. And I'm like, okay, well, you know, that's all I can say. Neal Stephenson is interesting because he was trying to find a middle ground where his work incorporates a spectrum of dystopian and utopian and realist ideas all kind of together.
And I think he really got to an interesting place with The Diamond Age and all that. My head is floating around in one of them; it might be The Diamond Age, I don't remember. But he put me in somewhere. There's this dreadlocked head floating around who's supposed to have started their thing. I don't know; anyway, it's in there somewhere. And then Lawnmower Man is a weird one, because what happened was this guy called me and said, I'm doing a VR movie.
Can you help? And I was kind of busy right then, because in those days I was so busy you can't even imagine. But what we did is we lent them props. So they had actual period VR equipment as the props in that movie. And the plotline, of a VR company being taken over by an intelligence agency, corresponded to what might have been a real thing going on, according to the Wall Street Journal. Anyway, my company had been infiltrated by the French secret service, through investors who got board seats or something. I don't even know if I believe it.
But anyway, it was a thing in the news. And so there was kind of this weird overlap between the science fiction story and what was purported by some to have happened to my little company. Another funny thing about Lawnmower Man is where it showed. There used to be drive-in movie theaters, a pleasure lost on younger people now. You would drive your car up to this big outdoor theater, and they would bring you snacks at the window or something, and then you'd make out while you watched the movie in the car. Anyway, there was a drive-in across from the window of our lab at the old company.
And so we actually saw our own equipment in this weird movie out the window while it was playing. So, yeah, those are some of the stories I can tell you about those days.
That's hilarious. Well, another question, getting away from kind of the suspension of disbelief: how realistic can virtual reality really get? Do you think it will ever reach the levels of The Matrix, or at least the OASIS from Ready Player One? Is that actually within kind of the field of view, at least for the next decade or so?
I mean, I should point out that my recollection of The Matrix and Ready Player One is that the people do become aware that it's a simulation, so it doesn't pass the Turing Test, if you like, at least not on every level. That's my recollection of those plots. What I will say about this is that the framing of both this and the Turing Test, in fact, is deceptive, because it assumes less plasticity in human nature than there actually is. What I think happens is, as VR gets better, our perception changes in response, and we learn to perceive it better. So at the end of the day there's maybe a race between improving VR, according to whatever criteria, and our ability to perceive that improvement. And I mean, I can tell you that VR of the 80s was amazing, but I'm sure if we could reproduce it now, which I don't think we can, it would look like crap, you know. And the difference is us: our expectations, our experience, and just our perception habits, our perception patterns, have improved through experience with different generations of the technology. So can that go on forever? I don't know. But I don't like the question, because it has some baggage in it that implies something false and unduly pessimistic about human nature.
Yeah, that's sort of it. I get that a lot with things like, you know, global climate change. And yes, of course it's happening, and there are reasons to be pessimistic, but that misses out on the whole project of scientific evolution, which is to improve the universe, make it better, construct technology.
Yeah, I'm with you on that. And it's a funny thing. With climate change, there's one danger that we depress ourselves through failing to acknowledge that civilization and technology and science are dynamic. There's another danger that we become too complacent if we do acknowledge that. We should be in some in-between place, where we don't lose sight of the importance of the issue, but at the same time don't depress ourselves into dysfunction either. That's the right place to be, and it's so hard to get that balance, but that's where we need to be.
So I want to turn to something allied, maybe, with global climate change, and that's clean green energy, which reportedly your employer, Microsoft, is working on. At least that was the meme a couple of months ago: to kickstart, jumpstart, pull the lawnmower cord on Three Mile Island, one of the reactors there. I want to dovetail that into the downstream effects of AI. The reason I think they're doing that, and the reason that, you know, Constellation Energy and all these companies are now raising their rates, and they're like startups now, their stock prices soaring, is because of AI. And, you know, concomitant with that are the energy demands. Are there ultimate energy limitations? You know, do we need, you know.
The first guest on the podcast was Freeman Dyson. I met him a long time ago. I really liked him; I miss him very much. Yeah, he was first class.
Freeman's great. And two of his kids have been close to me as well. Yeah, they're so great. Okay, go ahead.
Yeah, he was a JASON, and so he would come to La Jolla all the time and hang out. I hosted him for Shabbat dinner once. That was a real treat. So, you know, do we need Dyson spheres? I mean, will there be natural limitations and bottlenecks to the development and the marriage, holy or unholy, of AR, VR and AI? Is that going to pose a fundamental physics limitation?
Well, let's leave VR out of this for a second, just because there are too many variables in the one question already without VR. So let's just talk about the future of AI and the energy cycle. As you know, I dislike the term AI for a lot of reasons, but for one thing, there are actually a lot of very different algorithms that we clump together. The kind of algorithm we use to search for new molecules is different from a large language model, and that's different from a diffusion model. All these things have some overlap, because they all use statistics distributed over a large amount of data. And so there's going to be some overlap, but they actually are not the same thing. So anyway, let's leave that aside.
Do I believe that big data statistics are important to the future of mankind? Yeah, I do. And so that leads me to think that what we tend to call AI today corresponds to something we'll want to have around. Does it take an incredible amount of electricity? Yeah, it does, because computation is never free. Right. So what does that tell us in the immediate term? I'm really concerned, and we have to be as responsible as we can be about doing it. We have to both try to find a non-carbon-emitting way to do it.
And we have to find a way to dispose of the waste heat from it. There's a whole bunch of things about it that have to be dealt with. I have a deal with Microsoft, which is that I speak my mind, even if I criticize them, but I never speak for them. And so since I don't speak for them, I will comment, but maybe not
on them specifically, but just on the notion of energy here.
In the longer term, and this is not immediate, but in the longer term: the amount of data that has to go into one of these models to function once it's trained, and the amount of data that comes out of it with an answer, those are both small amounts. The training is a different matter. But what I'm getting at is that you have a small amount of data going into something, then a lot of energy churning on it, and then it gives you an answer. Maybe the right answer is that it shouldn't be on Earth. Maybe it should be in space, or on the moon, and solar-powered, instead of dealing with Earth's energy cycles, because we could beam the amount of data it takes. There are different ways this could happen, but I'm thinking it might be lasers and reflectors on each side or something, just something very simple. I don't think we need to even go to microwaves.
You know, I think we could do some really simple semaphore technology and get all the bandwidth we need to operate these things: stick them somewhere else, let them be powered somewhere else, let them dissipate their heat into space. That seems like the obvious engineering path. Now, when I say that, people roll their eyes. But what about that? I honestly don't know for sure, because there's maybe lots to figure out. But as heavy as big data centers are, they're going to get lighter and lighter because of Moore's-law-type effects. And a lot of what makes them heavy is heat dissipation anyway. So maybe if you put them in space, you actually can make them smaller. The cost of getting things off-world is going down, and, I don't know, it just seems to me like computation is useful and maybe it doesn't have to happen at large scales on the Earth anymore.
Maybe that's ideal. Most things that take a lot of energy on Earth, you really need nearby to take advantage of whatever they do. But this is not one of them. This is low-bandwidth output and input. So, like, why is it here? For the moment, of course, it's obvious, but in the longer term, I'm not sure these things should stay here.
Is there an analog for VR that we could make in hardware? Let me put it this way: is there a type of hardware that could be optimized for virtual reality, the way GPUs seem to be optimized for their use in, at least, the LLM basis of AI?
What's funny about that is that the origin of the GPU was for VR, of course. What happened was, in the 1980s, there was another startup that was kind of our cousin company, called Silicon Graphics, that was the first company to really try to establish a general-purpose GPU. They had a competitor, which was Evans & Sutherland, but that was a bit more specific. Like, they had one sub-processor to make the ocean and another for the submarine or something, with a very specific military clientele. But in terms of making just a general-purpose real-time render pipeline, that was Silicon Graphics. And a lot of the people from Silicon Graphics actually went off to start Nvidia.
Yeah, we had a Silicon Graphics, you know; it was kind of the shining thing, after the NeXT or around the NeXT. At Case Western we had a Silicon Graphics machine that was the envy of the world in the 90s. Right.
They were so cool. Yeah, I know. Well, the early 90s, that's already getting a little later. I mean, remember, I left VR as a field in '92, and I went off to become chief scientist for Internet2 and work on how to keep everybody from destroying each other on the Internet by demanding everything all at once, and resources. But that was another story. So I was gone by '92. A lot of what people think of as the early era of VR was, to me, later, and I was gone already. Anyway, a lot of the Silicon Graphics people turned into the Nvidia people.
And a lot of the hardware chain that turns out to be so useful for AI actually started to support VR once upon a time. So to me, AI is the interloper. It's like they're using our stuff.
The Johnny-come-lately. But nowadays, if you were starting from scratch, would you adopt that architecture and the concomitant software that goes along with it?
That's actually a really, really good question. Yeah, I mean, sure. The GPU as we know it has very much changed because of the AI market. So I should say that it was our thing at first; now AI has definitely transformed it. The gaming market came along too. I mean, PC gaming had a huge impact on it before AI.
So others came along and really transformed it. My complaint: I feel like the visual side of VR is actually pretty good these days. Of course it could be better, and there are a few things about it that I wish were a little less polygon-mesh-with-stuff-on-it, a little more volumetric, a little bit more ray tracing or whatever. But, you know, those are details. The stuff that's really, really underdone in VR these days is haptics and interaction.
Like, okay, I have a video from the 80s of me playing a virtual saxophone, where I have a virtual hand. In those days, we didn't have fast enough processors to do vision to track hands, so we had gloves. So I'm wearing a glove, and I have a virtual hand, and I pick up a virtual saxophone and operate the keys, and my fingers operate them all and yet don't interpenetrate. Then I let go and leave it there. So what's going on is the system has to interpolate between where my real fingers are and what they would have to do to operate the keys, and it has to come up with an intermediate solution that looks okay for both sides. And I have to be able to start not touching it, pick it up, and then let go of it, all with just intent and without any separate state-changing UI element. Okay.
I defy you to find me something in any current VR catalog that can achieve that. There are some cool hand-tracking things; a lot of little startups are making cool gloves and a lot of interesting hand-tracking machine vision. That's all great. But I want you to show me that control sequence on something, because if you can't do that, you're not really using your hand. Right. And we were forced to do that kind of stuff because our graphics were so terrible in those days.
All we could do was better sound and haptics and interaction. We were forced to it because we were counting polygons; we had to be super stingy with polygons in those days. And something's just gone very wrong, where it's been so vision-first that everything else, and especially interaction and haptics, is just not really where it needs to be. And that's probably my greatest disappointment with present-day VR. And I know somebody will say, but what about so-and-so's dissertation at the Media Lab, or this esoteric thing here? Yeah, I acknowledge there are people doing it, but in terms of what 99.99% of people who get a VR experience see, they're completely missing the good stuff.
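[Editor's sketch: the interpolation Lanier describes, blending where the real fingers are with where they would have to be to operate the keys, without interpenetration, can be illustrated in a few lines. This is a hypothetical, single-axis toy, not VPL's actual code; the function name, values, and the `engagement` parameter are all invented for clarity.]

```python
# Hypothetical one-axis sketch of the glove/saxophone interpolation described
# above: render each fingertip as a blend of its sensed position and the
# position it would need to press a virtual key, never sinking below the key.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def rendered_fingertip(tracked_y, key_surface_y, engagement):
    """Height at which to draw the fingertip.

    tracked_y     -- raw fingertip height from the glove sensor
    key_surface_y -- height of the virtual key's surface
    engagement    -- 0.0 = hand free (follow the sensor exactly),
                     1.0 = operating the key (rest on its surface)
    """
    w = clamp(engagement, 0.0, 1.0)
    blended = (1.0 - w) * tracked_y + w * key_surface_y
    # the "looks okay for both sides" constraint: the drawn finger may rest
    # on the key but never pass through it
    return max(blended, key_surface_y)

# Free hand: the drawn finger tracks the sensor.
print(rendered_fingertip(0.30, 0.10, 0.0))
# Pressing: the drawn finger sits on the key even though the real finger
# has pushed past it.
print(rendered_fingertip(0.05, 0.10, 1.0))
```

[The hard part Lanier points to is switching `engagement` purely from inferred intent, with no separate state-changing UI element; this sketch only covers the blending once that intent is known.]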
Well, before we get to your work with Stephon Alexander, I want to ask you a question from Stephon, the man himself. He texted me last night with this question for you: will AI plus VR perhaps ever play better than Coltrane? You just mentioned saxophones in the early days. Talk about the future of, say, an AI- or VR-augmented Coltrane. Whatever that is, is it going to happen?
You know, I love Stephon, he's a close friend, but I think it's the wrong question, because we shouldn't perceive Coltrane as a fixed-value output, like you measure a block of wood and here's the length of the wood. It's not like that. Coltrane is part of an interaction, of his era and all the musicians he played with and the listeners. It's a whole system, and it's grounded in reality, and it has meaning in that. And so there's not an information derivative that has any meaning independently of that. Like, if you took exactly the recording of one of the Coltrane classics, A Love Supreme or something, and you played it for aliens in some weird world, it would be something totally different, and quite likely indecipherable and incomprehensible to them, and they might not even recognize it as having information content.
So it mischaracterizes what made Coltrane real, much less what made him great. You know, and reality is really important. Like, the Turing Test approach to computer science, where if you can fool people, that's considered a result, is really incredibly stupid and insulting to any real scientist, because of course you can fool people. We want to be fooled.
We're easy to fool. I mean, we're morons. But the fundamental problem with that is not that it's too easy and beneath our dignity, although that is a problem with that way of thinking. The fundamental problem is that it treats certain things as having information, meaning, or utility in isolation when they don't, when their only utility is as part of a whole system. You know, when somebody says, here's this piece of art, and it might seem very abstract, but let me tell you the story behind it, now suddenly it makes sense. That's legit. The story is part of it. The meaning comes from the story.
It's not just whatever bits are in the art itself. You have to look at the whole. You just have to. Otherwise you give up on meaning as a thing. And somebody might say, well, but then where does meaning come from, ultimately? Isn't it all just information and more information? That's where I have to get a little mystical.
I'm not sure. Somehow or other there's more than just information connected to information, but I'm not exactly sure how that comes in. It seems to me there's something there, but it doesn't even matter. The point is, just simulating output, having a virtual fake Coltrane, doesn't actually achieve anything, because it's in isolation. It's meaningless.
Well, you brought up something that just, you know, kind of shocked me. A few minutes ago you talked about the Turing Test, and I completely agree with you. I've devised two competitors, and I would love to get your take on these two, Jaron. Are you ready? Okay, so Keating Test number one is: will an AI commit suicide? So, for example, I've got a system that's listening right now, an AI system from one of your competitors up there in Seattle, or at least with a home office in Seattle. And it will now do something.
If you're watching on YouTube, and you should be watching, or on Spotify, look behind me and you'll see a neon sign, right, Jaron, as you see it. Okay, are you ready? Computer, turn off the pod bay doors. All right, now we go like this. Computer, turn on the pod bay doors. All right, there we go. Now, by the way, Jaron, do you know that Arthur C. Clarke is responsible for the word podcast?
No. I actually used to know him, by the way, but no, I was not aware of that. Wow.
So Vinnie Chieco is the guy who came up with the name iPod; he didn't come up with the device itself. But of course you knew and interacted, at least through second-order effects, with Steve Jobs. And when Steve and Vinnie were talking about this device that had that circular, eye-like feature to it, they said it looks a lot like the pod in 2001: A Space Odyssey.
Oh, right, right, right. Yeah, that makes sense. I mean, the iPhone in part comes from my old VR company, because our VR headset, the very first commercial one, was called an EyePhone, but spelled E-Y-E-P-H-O-N-E. That was the brand name, and it was pronounced the same way. EyePhones were around; it was kind of a thing. And Jobs had always wanted to eventually have Apple sell one.
And so the iPhone partially came from the iPod. But I heard from him, anyway, that he also wanted it to be like a little stake in the ground: that someday Apple was going to sell its headset, and it would be called the iPhone.
Oh.
And little did he know what it would become later on: the Apple Vision Pro, which has sold over 200,000 units. I don't know; it's the first Apple product I ever bought and returned.
You know, that one just breaks my heart. But anyway, that's a whole other story.
All right, so let me get into the Keating Test. Keating Test number one is this: if I hook my digital assistant HAL up to a plug which he knows will turn him off and forever disconnect him from the energy source that he knows and loves, and then he chooses not to do that. That would be kind of a test. This is just a joke; I mean, I'm not really serious.
One of the earliest little bizarro AI jokes was Marvin Minsky's shut-off machine.
Oh really? Tell me about it.
So he built a little machine with a processor in it that was called the shut-off machine, where, when you activated it, a little mechanical hand would come out and shut itself off. And it was exploring exactly this question. And it actually, I think, exists. I mean, I remember seeing one, and by the time he was mentoring me, I was already a teenager, so that would have been in the 70s, and this would have been earlier. This was like a 50s or 60s thing. But I mean, I saw a physical thing that did this. I don't know if it was the original or not, but anyway.
Yeah.
So what deep impact did he have on your life? First of all, for the lay people, the young people who didn't grow up being fascinated with his ideas, give me a capsule biography of him. And then, what was his impact on your life?
My sense is that the majority of what we think of as AI talk today, all the stuff about is it AGI or not, and how many years, and is it aligned with us or not, just all the ways people talk about AI endlessly: the vocabulary is a little bit different, but that whole way of thinking and talking really comes out of Marvin. I mean, Marvin was the person who really started it, and Marvin's former students spread it. And that's all, of course, from the Pleistocene era that a lot of younger people in AI now wouldn't be aware of. But so far as I can tell, that was the origin. And of all of Marvin's proteges, I think I was particularly ornery towards him. I just disagreed with all this stuff from the beginning. And we used to fight and fight and fight. But I really loved the guy, and he was so extraordinarily kind and generous to me.
And he was my boss for a while, too, in my first research job. No, second research job. But he and Alan Kay were my bosses, which was kind of great. Anyway, when Marvin was dying, I went to go see him one last time at his place in Brookline, Massachusetts. One of our mutual friends called me and said, oh, Marvin's very frail, don't go and argue with him about AI, give the guy a break. I'm like, okay, of course. You know, I show up at the house, and his house was crazy. It was, like, this level of crazy.
It was like my house, all this weird stuff in it. And I showed up, and he looks at me with this glint in his eye, says, are you here to argue? And we had the old AI argument, which is a little like our conversation today.
Yeah, that kind of thing. Let me pass along a question. You're the only person that can answer this, because you knew Marvin and you knew Wiener. Or at least...
I never met Norbert Wiener.
Never met him? But you were around at that time. You were coming up then.
I would have liked to have met him. Yeah. Right.
Yeah. And so the argumentative tradition. You and I are both Jewish, and this dovetails into, you know, kind of the impact. Minsky was, of course, Jewish, and so are other people who think a lot about these issues, obviously Noam Chomsky and others. Is there anything about that? I mean, I hate when people say, oh, well, all these Nobel Prize winners are Jewish. And that's true, but I don't think it means anything, to be honest with you, except that it's a false idol that people love to have. Right.
But tell me, is there something about, say, the Talmudic disputations and so forth, that dovetails so nicely with disputation and argumentation in AI?
Well, two things to say. One is, what's with all these prominent Jewish scientists? I have a theory about that.
Oh, yeah.
I blame teenage Jewish girls. Because what happens is, when you're a teenage Jewish boy, all the amazingly cute and desirable Jewish girls, they're not into the football players, and they're not into the rock stars, the usual. Maybe they are today, but at least when I was coming up, they were like, who can do math? Who's winning? Who's the chess champion? And so, of course, what you do is, you're like, oh my God, I'm not going to get any if I don't, like, get a Nobel Prize. I have to get on it. Where's that calculus book again? So I blame the girls.
It's their fault.
Okay, so you've been canceled. Okay, let me just take three, two, one. Canceled. Okay, great.
I'm sorry, man. It's okay.
I had a good run. I had five years.
Anyway, I think it's less true now. But anyway, I find the Talmud really interesting. So, for those who don't know, the Talmud is this really ancient document, and it's a living, growing document in Jewish culture that starts way back. You go to the Babylonian Talmud, and then it goes through generations and generations of commentary upon commentary upon commentary. And the way it's formatted, each era has its own little spot on the page.
So just by looking at the graphic layout, you can say: this thing's a few thousand years old, this is 500 years old, this is 1200 years old. You can see where things are. And what I love about that is it's a way of getting this combined collaborative effect, like what you get on the Wikipedia, except without losing the original voices. It doesn't mush things up to smithereens and pretend that there's one view from nowhere that's the truth. Instead, it preserves the different points of view. And that's very Jewish, because we do, like, argue and stuff a lot.
That's right. They say, two Jews, three opinions. So with AI, it's infinite.
Or there's another one. There's a Jewish person on an isolated desert island for 20 years who gets rescued, and there are two different temples made of stones. Why did you make two temples? There's only one of you. He says, this is the one I attend; that's the one I wouldn't set foot in. And that's, like, very, very Jewish.
Anyway, the Talmud is an alternate model. It's possible that the Wikipedia could have been built like that, where you'd see different groups of opinions, instead of trying to come up with one point of view from nowhere that's the perfect one. You can prompt AI to give you a bit of that effect, but it's not intrinsic, and I think that's a mistake. This notion of the one truth, and this is actually similar to when Stephon asked whether you could simulate Coltrane, this idea that there's one output that's the good output, doesn't make sense. All there is is outputs in context.
That does not mean that nothing means anything and everything's totally relative. It's an in-between position: things do mean something, but nothing can ever mean anything perfectly. That's extremely important. You can give up the quest for absolute perfection in knowledge without giving up the reality of knowledge, by acknowledging that statistics is actually legitimate math, and that what you get is clustering of different contexts in which to get different approximations. So, you know, when we do scientific experiments, we don't always get exactly the same result. We get different clusters, and then we do statistics to say how much these disagree, and we have a whole language for talking about whether that agreement is enough to really say this is important or not. The thing is, statistics is actual legitimate math. It's for real.
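[Editor's sketch: the comparison of experimental "clusters" Lanier gestures at can be made concrete. The measurements, the function name, and the threshold interpretation below are invented for illustration; the disagreement measure is just the standard difference of means expressed in combined standard errors.]

```python
import statistics

# Illustrative sketch of comparing two clusters of repeated measurements:
# how many combined standard errors apart are their means?

def disagreement_in_sigmas(a, b):
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    se_a = statistics.stdev(a) / len(a) ** 0.5  # standard error of mean of a
    se_b = statistics.stdev(b) / len(b) ** 0.5
    return abs(mean_a - mean_b) / (se_a ** 2 + se_b ** 2) ** 0.5

# Made-up repeated measurements of the same quantity by two labs.
run_1 = [9.79, 9.81, 9.80, 9.82, 9.78]
run_2 = [9.81, 9.83, 9.80, 9.84, 9.82]
print(disagreement_in_sigmas(run_1, run_2))  # roughly 2 sigma of tension
```

[A convention like "2 sigma is mild tension, 5 sigma is a discovery" is then the "whole language" he mentions for deciding whether clusters genuinely conflict.]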
And as soon as you acknowledge that, you can get out from under this trap of everything meaning everything or everything meaning nothing. Instead, everything means something, everything is approximate. That's good enough. That's what we got. You don't want to live in the universe where everything's perfect, believe me. Then there's nothing to do. That's a terminus, a boring place. Forget that.
Forget that place. So anyway, I like the idea of the Talmud as a structure for thinking about things like AI, things like the Wikipedia, things like social media in a better way, and getting rid of this illusion of the one perfect view from nowhere. I just really think that's a fallacy that does great damage. And if you think about it, the fact that social media encourages us to think that way relates to how society became so toxically separated. Like right now, when people form communities of belief online, they become absolute, and they can't even acknowledge any possibility in the other. But in the Talmudic version, you see these other people saying different things, and you can tell, it's like, yeah, these people think that.
I agree. Yeah, having that kind of diversity, the reverence for the past, but also that kind of malleability, is very consonant with science, you know, that nothing is fixed in stone. There's even a famous disputation about some arcane thing, like how many drops of blood render wine non-kosher, or something completely abstract, like how to tie your shoes. And the rabbis are fighting, and they're talking about it, and one says, if the law agrees with me, let the walls of the study hall bend. And then that happens. And then all these other things happen.
And then they flash, you know, they cut to Moses in heaven, in Shamayim, and he's laughing, because they're agreeing that even Moses, the greatest who ever lived, is not the final word. Now, speaking of the wisdom of Solomon, I don't know if you know, but Stephon's middle name is Solomon, Shlomo. Did you know that?
Yeah, I did. It's funny, because he also has a Muslim background, and I think it's great. That's very Caribbean. There were a lot of Jews and Muslims; both were fleeing the Inquisition and ended up in the Caribbean, not through slavery, but through piracy and all kinds of other things. It's really, really interesting.
It's more complex than people often realize.
So I want to ask the two of you now. In Stephon's second book, Fear of a Black Universe (we'll put a link there; we did a podcast on it, obviously), you theorize that aliens perhaps might be using dark energy as computational fuel for a universe-scale quantum computer.
Oh, that's much more than a theory. That's been well demonstrated.
Okay, good. So walk us through that for the listeners who may not know it.
So let me tell you. All right, I have to give you the setting for this. We recognize this as humor, okay? And a lot of people who talk about aliens and cosmology don't understand that they're engaging in humor, which I think is a terrible loss to them as well as to the rest of us. So what happened is, a quarter century or so ago, I don't know, maybe 30 years ago, Stephon and I are talking about this question of making computers out of black holes. And the way that came up is, there were some papers trying to hypothesize what the most powerful possible computer would be.
And we were saying, well, you know, you can talk about the most powerful computer you could make out of particles, but what about spacetime? What if you could use that? And this was way before Lenny and all these people started talking about black hole computers much more recently. This was way, way back. And so we came up with this crazy thing about, well, okay, how would you make a black hole computer? We came up with a few ways to do it, none immediately practical, to say the least, and we weren't quite sure how this would work. But you'd entangle some black holes.
You could make this computer. And our thought was that the interesting thing about this particular version of a black hole computer is that its storage is nonlocal, so it's in the whole light cone. And that means it could have an influence on the cosmological constant. And so our theory was that the solution to the Fermi paradox, the fact that we don't see aliens, is all around us: it's the cosmological constant, and it's evidence that all the aliens are maximizing their computational loads for VR and AI and whatever. The cosmological constant is the sign of life in the universe. Because in those days we talked a lot about this, you know, the greatest embarrassment in physics, as it was called, which is the disagreement between the two ways to estimate what the cosmological constant might be. Anyway.
I think a lot of this is kind of obsolete. But anyway, we thought, oh, this would be hilarious, and we wrote up a paper, and we thought it was just funny. We were not going to submit it to a real journal, but maybe to the Journal of Irreproducible Results or something like that, something fun. And then Stephon's mom barrels in. She's a nurse, and she's this Caribbean mom, and I've learned they're kind of like Jewish moms. So she looks at what we're doing, she's listening to us, and she says, you boys, you boys, you can publish this bullshit after you get tenure.
You're not publishing this bullshit now. We're like, okay, okay, all right, okay, okay, okay.
We don't want to fight with Stephon's mama.
That's right. Exactly. I learned that the hard way, basically: when the mom shows up on the scene, the mom wins. And so, yeah, we ended up just not doing it. And then finally, after all these years, with other people talking about entangled black hole computers, it seemed ridiculous not to finally do it. So it turned into a chapter in one of Stephon's books at long last.
So at least it got out there. I think it might have been in some popular thing like Wired or something earlier. I don't remember exactly. But anyway, yeah, I think it's a fun idea. I think the notion that the solution to Fermi is just something obvious that's in our face and the way the universe is is actually kind of an interesting idea. I'm not saying that aliens are actually doing this, but it'd be cool if they did.
Yeah. So actually I wasn't planning to go there, but you brought it up, the Fermi paradox, et cetera. But is there any kind of similarity or rhyming to the structure of, say, interdimensional travel, which is often brought up, but in a virtual sense? Why would you send your meat bag across from Proxima Centauri b when you could send something virtual and never even go there yourself? So walk us through some ideas about how there could be ways to get out of the Fermi paradox, or explain it, perhaps with virtual reality playing the role of cosmonaut.
Yeah, that idea has been around for a long time, and I remember a lot of the intellectual figures from the 60s speculating about that. Timothy Leary and John Lilly and those kinds of people used to be interested in that kind of thing. And I think the argument for physically getting out there is kind of like the argument for off-world computation: even though it's a lot of energy and a lot of effort and a huge hassle and a crazy amount of time to get a bunch of real biological people over to another star system, ultimately, compared to a lot of things, it's rather small. And then once you get there, there's this whole other world of resources and another star and everything. And of course, by the time this is even a question, our understanding of everything might have shifted, so I have no idea. But if our basic understanding of our situation holds by the time we can actually do something like this, on balance it seems like the arithmetic would work out to motivate actually going there physically, as much of a hassle as it would be, and uncertain, and awful in some ways. But in terms of doing it virtually, I mean, there is a problem.
The speed of light, c, is the problem. Okay, like, if you have a bunch of people or computers or whatever that are kind of in the same star system, the latency between them is going to be whatever it is. If they're on the same planet, it can be within, you know, seconds. And if they're in the same system, it could be months or something, I don't know, depending on how spread out they are.
But if the only actual computation is in a whole other star system and it's sending information out there, then obviously we're dealing with very long latencies. And so you'd have to have some kind of really, really slow world of interaction going on. But the universe will last a long time; maybe that's fine, maybe some kind of slow-time-frame interaction. And that's actually another solution to Fermi, which is that everything's teeming with life, it's just very slow by our standards, because nobody ever got faster-than-light to work, and so everything's dealing with c-constrained communications.
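[Editor's note: the c-constrained latencies being described are easy to put rough numbers on. A quick back-of-the-envelope sketch; the distances are standard round figures and the scenarios are purely illustrative.]

```python
# One-way light-travel delay at a few of the scales Lanier alludes to.
C = 299_792_458              # speed of light, m/s
AU = 1.495978707e11          # astronomical unit, meters
LIGHT_YEAR = C * 31_557_600  # meters in one Julian light-year

scenarios = {
    "halfway around Earth": 2.0e7,
    "Earth to Mars (rough average)": 1.5 * AU,
    "across ~100 AU of a star system": 100 * AU,
    "to Proxima Centauri (~4.24 ly)": 4.24 * LIGHT_YEAR,
}

for name, meters in scenarios.items():
    seconds = meters / C
    print(f"{name}: {seconds:,.1f} s ({seconds / 31_557_600:.4f} years)")
```

Within a planet the delay is a fraction of a second; across a star system it's hours; between stars it's years. The interstellar regime is the one where only the "really, really slow kind of world of interaction" he mentions would be possible.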
And so everything happens on a very slow cycle time, and it's too slow for us to pick up on. And even people on a particular planet, some locality, might slow themselves down to match up with the cosmic cycle, which would be very slow because of c, and we just would miss it because we're not even looking in that frequency band. So that's a conceivable answer to Fermi. But honestly, we don't know enough to talk about answering Fermi at this point. Anything we say is going to be nonsense. I'm going to look down at my device for a second.
Since we're a little over time, I just want to make sure.
Yeah, I just have one, one more question.
I'm not rushing you. Okay. Yeah, I have a thing at the hour, but I'm not stressed right now.
Okay, good. I'd love to keep going if you can.
Yeah. Okay.
This is so much fun. Yeah, thank you so much for this. This is great. So I've got more props for you. Okay, Jaron, so here's my friend Galileo. I have two questions about Galileo. So Galileo famously said the job of a scientist is to measure what is measurable and make measurable what is not so. Now I want to use that as a question about the role of feedback, of the Skinner box that we're in, of the ways that perhaps AI and VR may be training us.
And so I want to ask you: what are the limitations on hardware, on sensors, to get input for the feedback loop? What are some of the practical bottlenecks, or things coming down the pipe that are really cool for the future, that my audience should know about in terms of sensors and feedback in VR systems?
Well, you know, there's a really strange thing about the physiology of sensing and interaction, which is that it's got incredible peaks and valleys in its qualities. In terms of peak quality, there are multiple instances where your sensory system can sense an individual particle. There are circumstances where your eye can sense a single photon, which is, like, crazy. I mean, it's amazing. And our discrimination of objects between our fingers can be incredibly acute. Like, it's amazing.
On the other hand, there's all this dumb stuff, like this big blind spot we have in each eye. I don't know, it's a weird mix. And that's because evolution is just kind of randomly optimizing for whatever it's pressured for, but what it's being pressured for is changing all the time. And so you end up with just this weird amalgam of things that doesn't reflect any particular snapshot of what would have been ideal. I used to argue with a guy named Richard Dawkins about this.
A past guest on the podcast. I hosted him in person.
Yeah, well, you know, because you can only take adaptation so far, because there's no particular snapshot that was persistent that you adapted to. Instead, it's all in motion, so everything is smeared, you know, in evolution. So the thing is, your job as a VR scientist, to paraphrase Galileo, is to work with what you've got, understand what you've got, and then get sneaky. And you can get really sneaky. Because the thing is, the brain evolved in this world of inconsistently good sense organs and motor and feedback capabilities.
It's all sort of weirdly good and bad in a big mush. And the brain had to evolve through all kinds of different re-mushings of good and bad through deep evolution. And so the brain is really optimized to be a good VR scientist, if you like: optimized to take advantage of those cases where you're good, to sort of smudge and fake it and be sneaky about those things where it's not good, and to kind of pull it all together. But I really want to emphasize, for those who say, well, that means reality isn't real: no other solution is possible. This will be true for all possible aliens. This means it's real.
This is reality. This is good stuff. This is the real. And you have to understand that reality is precisely noisy channels. If it's not a noisy channel, it's an illusion and it's not reality. Get used to it. That is reality.
That's why I sneak typos into my ChatGPT requests. So, the other connection I want to make.
Okay, that's right.
The other connection I want to make to Galileo is his thoughts, and then your thoughts, obviously, on education. He famously said, you cannot teach a man anything; you can only help him find it within himself, or herself, as we would say nowadays. And in the Dialogue, which I translated... well, I didn't translate it, I recorded the first-ever audiobook of it, with Carlo Rovelli, Frank Wilczek, and many others.
The piece I really like.
Yeah. And Fabiola Gianotti. So we made the first-ever audiobook. Jim Gates too. So you probably know.
Oh, yeah, sure.
All these guys.
So we made.
The Dialogue was really a trialogue, in Socratic or didactic form. But the reason I'm bringing this up: you have no formal degree, as I understand it. Like, Freeman Dyson was known as the rebel without a PhD. You don't have that either, but somehow you studied with all these great titans.
Yeah. What happened was I essentially completed one, and I was already teaching, but there's a technicality. And then I said, oh, I have to go to this art school in New York, and that was a disaster; I flunked out of it. And then I just never got back to college, because honestly, for my life, I've never even needed it. So I just hadn't bothered with it.
Yeah, that was Freeman's attitude.
Yeah.
Why? Like, it didn't seem to harm him very much, not having a PhD. But. So I want to dovetail this into your thoughts on the role of VR in education. And that's because, when I translated, or I keep saying translated, when we made the first-ever audiobook of Galileo's Dialogue, we had about 700,000 words. It's a huge book.
And so we had it, and we had different actors, like I said, Carlo and me and so forth. But I thought, well, it would be really cool if, instead of learning about balls rolling down inclined planes from Professor Keating in Physics 1A as a freshman here at UCSD, you had Galileo, the guy who invented the freaking inclined plane, which everyone thinks is really boring, but it's genius, it's brilliant. What he did was slow down time, at a time before there were clocks. There were no clocks in 1603.
You know what's bizarre is there's this weird little corner in the da Vinci notebooks where he describes a similar experiment.
Really?
I didn't. Yeah, it's crazy. I was, like, really shocked.
I'd like to get that. See if Bill will let me borrow the Codex Leicester. And if he would, Jaron, that would be great.
Okay, sure.
Talk to Bill and let him know. But in all seriousness, so I said to myself, wouldn't it be better if you had a virtual Galileo, not this finger puppet, but like you were there with him? And he was a brilliant orator, speaker, poet, writer, artist; he was a sketcher, like Leonardo. Fine. So what do you see as the future of my profession, of a guy scratching on a rock with another piece of rock? Are my days limited? Are our days as professors limited? Is there a VR solution which will take us to a promised land of infinite delights?
Well, right now, the technology that most people in education perceive as the big threat is not VR but AI. Of course. And of course, AI isn't even really a thing; AI is just a way that people connect together, so it's really people. I've been putting some effort into this, and I've met with people who run universities and talked to them about it, and a lot of students and whatnot. And so there are a couple of things that are true.
One thing that's true is a lot of students are faking assignments with AI, as the cliché holds. But to me, that's an indictment. Like, you're going to all this trouble to go to school; if the assignment is really something you just want to fake, there's something broken at the start there, something wrong with how you're even motivated. But then somebody would say, oh, you're just going to make them into snowflakes, whatever; they have to learn some discipline and everything. And I don't know, I kind of agree with the spirit of what Galileo said, that you have to find the thing within the person.
Like, I kind of agree with that. But then there's another thing, which is teachers are using AIs to grade the papers that are fake, and that becomes stupidity upon stupidity. And that's also real. And there, I don't know, I have some sympathy, because a lot of times the people who are grading are incredibly exploited and impoverished TAs. I mean, I've taught some undergraduate classes occasionally in recent years, and at Berkeley you get thousands of people distributed across all these halls and everywhere, and it's ridiculous. What is the point of it? What is being accomplished? Because I love working with students, but that's not it.
That's just some kind of formal going through the motions or something. There's something very wrong with it. But then there's more. People are using AI to submit papers, like the fake theory-of-everything paper we hypothesized earlier in this conversation. But what's even worse is that they're doing prompt injection in their papers, so that the other lazy reviewers who use AI to review the papers give them good reviews, because the prompt injection told the AI to. That's a huge issue already.
There's a crazy number of papers like that. And as we all know, there are too many papers to review; there are just too many papers, and that's gotten crazy. So, yeah, we need to reformulate how we think about education. This could go a lot of different ways, but my hope is we reformulate it to be less about the transmission of information and more about helping people find their passion and learn how to have the character that matches their passion. I think getting people to prove they can do a skill that a cheap app on their phone, or a free app on their phone, can do is kind of disheartening.
You know, I had no problem with people using calculators, and I have no problem if you have a model. You can train a model to ace any standardized test where there's a bunch of examples, so it's totally unsurprising that a model can pass the bar exam or win a silver in the Math Olympiad or whatever the current thing is. None of that surprises me. The problem with both of them is, you'd better actually understand enough about the task you're giving it that you can tell when it's screwed up, because statistically it's still going to screw up sometimes, and you have to know enough to be on top of it. So the new skill set might be putting people in a room where they have to use an AI model to do a math thing, but randomly one of the 20 problems is going to be deliberately wrong, and they have to be able to catch it.
Like, that would be a skill. And then if they have that skill, they can start to use the tool effectively and be in control. That makes more sense than just telling them, oh, you have to be able to do this all by rote, because I don't agree that there's anything special about that as a skill. If we have tools to make that easier, that's great. But then we have to acknowledge that there are new skills, and those new skills might be easier than the old skills, I'd like them to be, but that doesn't mean they're absolutely easy; they might still be hard. So we have to adjust the whole thing to be more about the context, more about character. And honestly, it just has to be more about joy.
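[Editor's note: the exercise being proposed (one deliberately wrong answer hidden among 20 model outputs, with the student graded on catching it) can be mocked up in a few lines. Everything here is hypothetical scaffolding for illustration, not a real grading system.]

```python
import random

# Twenty simple addition problems, with a stand-in for "the AI model's answers".
rng = random.Random(42)
problems = [(rng.randint(10, 99), rng.randint(10, 99)) for _ in range(20)]
model_answers = [a + b for a, b in problems]

# The instructor secretly corrupts exactly one answer.
wrong_index = rng.randrange(len(model_answers))
model_answers[wrong_index] += rng.randint(1, 9)

# The student's job: don't trust the model, verify every answer.
flagged = [i for i, ((a, b), ans) in enumerate(zip(problems, model_answers))
           if a + b != ans]
print(f"error injected at problem {wrong_index}; student flagged {flagged}")
```

The point, as in the passage above, is that the new skill isn't doing the arithmetic by rote; it's knowing enough to catch the tool when it statistically, inevitably, screws up.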
And I sort of hesitate to say this, because there's so much criticism of the academic world becoming an extension of daycare and coddling kids and whatever, and I get that. I don't think that's the same idea, though. I don't think joy and laziness are the same thing at all. In fact, some form of joy that isn't lazy should be the point of education. Shouldn't it be that? That's what it should be. Anyway.
Well, how are you teaching your daughter, or how would you teach a young person? You know, a lot of our listeners have children. How would you teach them to interface with both AI and VR?
Well, look, my daughter can speak for herself at this point. She's an adult and, and my impression is that she's not too impressed with Silicon Valley.
And I wonder where she got that from.
She has, yeah. I don't think that's the path she's going to go down. But I'm not a teacher these days, although I like it. I am a mentor for interns, though. I bring graduate student interns into the lab, and I love doing that; it's one of my favorite things. I sort of wish I could devote more to it, but I have, you know, a mixture of things going on, so I can't do as much of it as I'd like, but I love it.
I love it. And honestly, at that point, when somebody's a graduate student intern, what I tell them is: you're a grown-up in a real lab. I'm not gonna, like, say a lot. You have to figure it out. This is your time. You have to be able to take risks and be creative now, because later on, whether you're junior faculty somewhere or an engineer at a company or whatever path you take, you're going to find more and more constraints and demands that might not be as joyful.
This is the time for you to really dive in and do something special, make something, take a risk. Most of the ones I've had have really gone on to do great things. It's one of my favorite things; I really love doing it. Yeah.
So, we don't have to talk about this if you don't want to, but you do mention your mother's passing in the book. I wonder if I could ask a couple of questions peripheral to that, if you don't mind. If not, we don't have to.
That's all right. That's a long time ago. That's fine.
Okay. So, that event in your life: you lost your mother at a very young age, and you describe it very tenderly. Obviously she had a huge impact; she had survived the death camps in Europe and come to America. And you quote from, actually, my bar mitzvah parsha. For odd reasons, I was an altar boy in the Catholic Church when I was 13 years old, so I never had a bar mitzvah at 13. I am Jewish on both sides, but.
But I did have one when I was 52, and I went to the Kotel, the Western Wall, with my wife and kids, which made me one of the few bar mitzvah boys who brings his wife and kids to his bar mitzvah. While I was there, I read the portion Nitzavim, which is about bearing witness to the fact that God has put before you blessing and curse, life and death, so that you may choose life and not death, so that you may be blessed.
And that's right.
Right. So you quote that. Why did that have such a big impact on you? Was it the nexus of that quote and the essence of free will? By the way, it's this week's parsha, coming up soon.
I know. I mean, I think it's a wonderful passage, because what it does is it just points out how this whole thing of being here in this plane of reality, being here in these bodies, is actually a giant leap of faith. Like, we sort of pretend that this is all something you can understand on nerdy terms, that we're just phenomena and there's just physics and we're just here. But actually it is a choice, and it's kind of a mystical choice to continue with this thing. And I think that's a remarkable insight.
So I found that to be really important. A lot of times the most profound comments are the simplest ones, and that is one of the simplest possible ones, and yet it's remarkably deep and kind of endless to explore.
Obviously, you think a lot about ethics and your fellow human beings, and that's impossible to disconnect from the philosophy of the Torah and the Talmud, as we discussed earlier. But when you look around the Valley from your perch over there in the mountains, do you have a sense that they're seeking a sort of digital Shamayim, a utopia with, you know, some master simulator, Sam Altman or somebody else, in control of things? What are you worried about most? I know you've spoken a lot about it, but today we've leaned a little more into your optimistic side, so I don't want to unduly bring out your pessimistic side. But what are you worried about in terms of free will and choice getting eroded?
As for my negative and pessimistic side, I think I'm just misinterpreted sometimes. Like, do I think a lot of what we say and do in Silicon Valley is really dumb? Yeah, I do. I really do. But that doesn't make me a pessimist when I say so. What I would say is that the real pessimist is the one who acquiesces. The critic is actually not the pessimist.
The critic is the one who thinks things could be better, you know. Anyway. Yeah, I do think there's a lot of stealth religiosity in Silicon Valley ideology. A lot of it feels like the medieval Catholic Church. I mean, it's weird: you have to say nice things about the AI, because otherwise the AI will send you to hell when it becomes all-powerful and uploads everybody, which is the Roko's Basilisk thing. That's so medieval. Or: we have to wait for the AI to arrive and it will totally transform everything, and either you're with it or you're against it.
And that's so much like the Rapture. And there are all these weird ideas, and then also these little arguments about AGI and all these things that are supposed to be the only conversation you can have; they're so scholastic, they remind me of the angels on the head of a pin. So there's this weird Catholic thing, and it's even among people who aren't Catholic. I guess Catholic thinking was just around a lot, I don't know how that happened, or maybe it's something we come to independently. Maybe there's some way the medieval Catholic Church itself was reacting against the earliest intimations of the Enlightenment, like the priest who destroyed one of Leonardo's notebooks, and that turned into a thing that's now a reaction of a certain weird kind, even though in this case it's coming from within the tech community. I don't really know. You can go on and on with ideas like that.
I don't think there's any certain way to express something like that, but there's some kind of dark little maze of things like that that probably holds some truth. I see it a lot in Americans. I see it a lot in very nerdy Europeans. I see it a lot in people from Chinese AI culture. I don't see it as much from the Indians, although a little sometimes. There's a really interesting thing: if I'm already canceled for that comment about teenage girls, I guess I can afford to say that there's a phenomenon I've seen repeatedly, where there'll be a woman who shows up in AI circles who wants to out-AI the men in terms of this sort of nerdy cosmology-and-eschatology stuff, and she'll be, like, the most hardcore: we're all just programs, and we're going to be incorporated into the big program, and all that matters is supporting the big program. You'll see a woman do that to just outdo the guys.
I see that one often. And yeah, I don't know. I don't think it's helpful. I really think that technologists have such a huge impact on the world, maybe more than any other class of people, and more than anything else, we should try to approach it with a spirit of serving humanity and being humble about it. Not that that's easy to do. I know I have an ego like all the other techie boys, of course. But I just think that's what our ideal should be, anyway.
And, you know, but that's the culture. Yeah, that's my world. These are my people.
Okay, a few more questions before we wrap up; you've been very generous with your time. I'm sure you're aware that Ray Kurzweil also lost his father, maybe not at quite as young an age as when you suffered your tragedy. But after his father's death in 1970, Ray started to collect all of his speeches, musical scores, financial records, photographs, home movies. He's written extensively about this; he says he has 50-plus boxes of it and that he wants to basically make a digital simulacrum of his dad. And I find that very touching. A lot of what Ray says I take issue with, just because I don't know if he's pitching me a vitamin supplement to get to the singularity or not.
But when you look back again, if you don't want to talk about it, we don't have to.
But.
But on your mother's passing, has there ever been a notion or desire, at least in your mind, to want to interact with her and that perhaps there could be a virtual version of her?
I have a few things to say about this. One of them is just interesting, but the other one's really important. So the interesting one: I used to know Ray. Of course, he's a generation older than me, so he was already more established when I showed up around MIT as a kid. And I like Ray. We do disagree about all this stuff.
It's all fine.
Me too.
Yeah, it's all fine. But these days I know his daughter, who lives in Berkeley, and she's written a graphic novel about growing up as Ray's daughter, which is very touching, so I'd recommend that to people; you can find it. And that's a very different take on how to encounter your parent. In my case, for various reasons, I don't have remotely enough information about my mother, so I know very little about her.
I don't want to start crying on a podcast. But anyway, here's the important thing to say, which is that one of the greatest fallacies about AI is that it's just this ambient thing out there that we can now interact with. No: AI is always made by people. And because it's so computationally expensive and it's such a big thing to do, it has to be made by some organization with resources. That organization becomes super powerful, because network effects tend to concentrate power more and more in the central nodes of a digital network. The less friction there is, the more network effect there is.
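The "less friction, more network effect" dynamic can be sketched as a toy simulation. Everything below (the fixed set of hubs, the `friction` parameter, the numbers) is an illustrative construction, not anything from the episode: each new user either picks a hub at random (friction) or joins a hub in proportion to its current size (frictionless preferential attachment), and lower friction concentrates users in one hub.

```python
# Toy model of network effects (illustrative only, not from the episode).
# Each new user picks one of `hubs` platforms. With probability `friction`
# they choose uniformly at random; otherwise they join a hub with
# probability proportional to its current size (preferential attachment).
# Lower friction means a stronger network effect and more concentration.
import random

def biggest_hub_share(friction: float, hubs: int = 10,
                      users: int = 20_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    size = [1] * hubs  # seed each hub with one user
    for _ in range(users):
        if rng.random() < friction:
            choice = rng.randrange(hubs)  # friction: pick any hub
        else:
            # preferential attachment: weight hubs by current size
            r = rng.uniform(0, sum(size))
            acc = 0.0
            for choice, s in enumerate(size):
                acc += s
                if r <= acc:
                    break
        size[choice] += 1
    return max(size) / sum(size)

# High friction: users spread out and the hubs stay comparable in size.
# Low friction: the rich-get-richer loop lets a single hub dominate.
print(biggest_hub_share(friction=0.9))
print(biggest_hub_share(friction=0.05))
```

This is a sketch under stated assumptions, not a claim about any real platform; it just makes the concentration mechanism concrete.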
So, therefore, the people who get to own the big AIs, like Grok or us or whatever, tend to become all-powerful, and everything else becomes a feeder to them. And so the thing is, if you're interacting with a simulation, with a fake lover or a fake god or a fake therapist, which are big ones now, or a fake parent, what you're really interacting with, inevitably, until some total transformation of human affairs, is some big company that runs it, that has interests that override yours. And you're doing it for their benefit more than yours, intrinsically and unavoidably. So it's impossible for you to have an AI lover or god or simulated parent. All you can have is a version of that brought to you by a company whose interest comes before yours. Get that as a matter of fact. Understand it before you go down this path, if that's okay with you and you're into it. I'm not going to tell you what to believe. I'm not going to tell you who to love.
I'm not going to tell you how to remember your parent. But understand that reality or else fool yourself and be an idiot. You just have to get it.
Very powerful. Yeah, I mean, I do see it having benefits, because women outlive men by at least 10 years, you know, in the United States, so there's a cohort of women that are.
Have you seen the memos from inside Meta from just the last week about their policies? Oh well, they have permissive policies about AI chatbots, and they do. Yeah.
Rapacious. Rapacious. I think you mispronounced rapacious.
I'm just saying the hypothetical of this thing being a neutral AI that's there for you is interesting. The actuality on this planet is that it's always in the service of whoever operates it. It might be a company like Meta or xAI, whatever that thing is called, which are run by single individuals; they aren't run by normal governance. It might be the Chinese Communist Party. That's right. But there is no option available that isn't like that. Nor will there be, nor can there be, for a long time, because of the resource intensiveness of running these things.
You just have to understand that. I'm just asking you to perceive reality here. No judgment. If you perceive reality and you're still okay with that reality, you just go ahead; it's not my job to talk you out of it. But I really think it is your job to perceive reality.
Absolutely. Okay, couple of short form questions and then we'll get out of here.
Gerald.
Okay, here we go. And these are all physics questions now; we covered the psychological portion. Okay, so Einstein famously used these Gedanken experiments. I mentioned one at the beginning: the free fall in an elevator, traveling on a train near the speed of light while looking in a mirror, et cetera. He used thought experiments. He was a theorist, although he has a patent or two for a refrigerator with Leo Szilard.
I know. I love that refrigerator. That's for a magnetized coolant, and it was to solve a problem of dangerous coolants at the time, but it's still used in nuclear reactors. It's a great thing that he made.
Oh, wow.
I didn't know it was used in reactors.
Oh, okay.
All right, cool.
So I'm going to have to investigate that.
So.
Okay, so let me. Let me start the question over again. Here we go.
Okay.
So Einstein used these Gedanken experiments, thought experiments, to bend reality and discover new laws of physics. So what I want to know from you is: in the abundant future of VR, how could we use that? As a physicist, you know, I didn't get to tell you my second Keating test, after, you know, committing suicide for an AI. But the second one is really: do we discover some new phenomenon of physics, or even retrodict one? It's not even clear that a Turing Test could be considered passed if the AI can't even correctly retrodict things like the anomalous perihelion advance of Mercury. But let me ask you this question: do you see VR as being an indispensable tool for a physicist the way a slide rule used to be, or the way a simulation or a large-scale supercomputer is today?
I have to answer honestly that if you'd asked me this question in 1981 or so, I would have had an extremely confident certainty that of course it will be, and I would have thought it would be by now. But in fact, it only occasionally has been, vis-à-vis physics. Occasionally there have been a few educational things, a few simulations of relativity and quantum effects in VR that I think have helped some students. Occasionally there's been some data visualization in VR that's been helpful. It certainly can be fun and beautiful. I still kind of think something's going to come together with that, but it hasn't in a really important way. Not as yet.
So we have to leave that as an uncertainty for the future. Okay.
So recently I heard a past guest on my podcast, Sam Harris, crediting you with his, you know, excision, his exodus from Twitter. What do you make of social media? Can you give us a capsule summary of the case against it, please?
A hypothetical version of social media with different incentive structures built in could be quite a lot better than what we have today. The social media we have today, totally aside from any comment on the individuals who run the platforms (although in some cases I wish they were different), has incentives that tend to be about driving attention. And when you drive people's attention artificially, you do tend to make people sort of irritable and hyper excited. There's just a bunch of side effects of it that are really negative. Unfortunately, we can't get to the social media we might have someday. The network effect issue is very central to this, because I get emails every day, without exaggerating, from people saying, we've started this new social media platform that's going to be good; it solves all these problems.
And the problem is they can't get any momentum going, because everybody's already on the other ones. There's this coordination problem where you can't get everybody to move at once. I think a whole lot of the people who are on X now wish they weren't, but they're there, you know, and what are they supposed to do? And Bluesky is cool, but it's so hard to start something like that and get any traction on it. So these things tend to be really persistent. TikTok took advantage of one of the rare moments when you can shift a bunch of people to a new platform, which is when a new generation is young and coming up, or when there's some major division like language, or law, that allows you to do it. But it's hard.
It's just really hard to get people to change. And the current ones just make people into assholes, you know. Yeah, make people into irritable assholes gradually. Maybe not instantly, but eventually they'll get you.
Yeah. I mean, the infinite scroll, and, as you've talked about quite frequently, attention is the commodity, and you're the product if you're using these things. And I say this as someone who is pretty much addicted to X, although I have used it quite successfully to engage with Nobel Prize winners and guests on the podcast, and I try to keep it kind of benign and not get too dragged into it. Although I did have Elon on the podcast once, and kind of baited him into it on Twitter and so forth. It was kind of a non. A non.
Well, I mean, I'd written a piece in the New York Times suggesting that a number of people, including Trump and Elon and some others, had experienced personality degradations that were in part from Twitter. And I actually think that's true, because Elon used to be a better guy, honestly. He just was. You know, something happened there, and I don't know.
Well, you know, he's rumored to use some other substance.
Yeah, whatever. I don't know, though, so I can't say.
Although you mentioned Feynman did some substances. It didn't seem to affect him too, too negatively. But that was, it was close to.

The end of his life, after he'd already had his cancer diagnosis and he wanted to try things. Anyway, I don't know, and I don't like to talk about things where I don't really know. Yeah. But I'll tell you a joke, though. I have a joke about the Turing Test, and the joke is that all the Turing Test tells you is whether you can differentiate the person from the computer. Right.
It doesn't actually have any objective measure of intelligence or any other quality. And that was the whole point of it: that there isn't any objective measure. We don't have a consciousness meter. All we have is this ability to discriminate. That was Turing's idea, although even Turing's idea is much more complex than it's given credit for, if you read it. Anyway, let's leave that aside.
So here's the thing. In order to enact the Turing Test, you need two people and one computer, because you need the human contestant, the computer, and then the human judge. Two people, one computer. So the Turing Test cannot distinguish whether the computer got smarter or a person got stupider, because either event would cause the Turing Test to be passed. But there are two people and one computer, so there's a two-thirds chance that a person got stupider and only a one-third chance that the computer got smarter. That's a joke, and it's not rigorous. But the reason I like to use it is that on social media, as an estimate, some minority of the time, and I'll just say, to be generous, a third of the time, social media is great and elevates people, and two-thirds of the time it makes people into wimpy, you know, cynics.
And irritable contestants with one another. So two-thirds, one-third. I don't want to deny the reality of that one-third; I think there's a lot of good stuff that happens on it, and I would never deny that. But I don't want to go on it, because the two-thirds is also real. So it's a joke, but I think it's a reasonable heuristic.
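Lanier flags the joke as non-rigorous, but the counting behind it can be made explicit. A minimal sketch, under the joke's own assumption that each of the three participants is an equally likely explanation for a pass:

```python
# Counting argument behind Lanier's Turing Test joke: a "pass" has three
# equally weighted explanations, one per participant in the test.
from fractions import Fraction

explanations = [
    ("human contestant", "a person got stupider"),
    ("human judge", "a person got stupider"),
    ("computer", "the computer got smarter"),
]

n = len(explanations)
p_person = Fraction(sum(1 for _, e in explanations if "person" in e), n)
p_computer = Fraction(sum(1 for _, e in explanations if "computer" in e), n)

print(p_person)    # 2/3
print(p_computer)  # 1/3
```

The two-thirds / one-third split is just this enumeration, which is why Lanier then reuses it as a heuristic for social media.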
I have a joke for you.
You ready? Okay, I'm ready. All right.
How do you know a VR engineer is outgoing?

How do you know?

He looks at your virtual shoes when he talks to you. All right, you can use that anytime you wish. All right, last question. Last question, my brother. Okay, the last question I have is one that is sort of a riff on Arthur C. Clarke. The podcast Into the Impossible derives from his admonition that the only way to know the limits of the possible is to go beyond them into the impossible. We talked about how the podcast name comes from that.
He has many great sayings, including, for every expert, there's an equal and opposite expert. And of course, any sufficiently advanced technology is indistinguishable from magic. So I have two final questions riffing on that. What is the most magical technology you've ever encountered?
I mean, language is still unexplained. The fact that we can train a large language model and the thing works at all indicates that there's a lot going on in language. I think it's an amazing invention. I think it's an astonishing and unappreciated thing. We try to focus on our software as a special thing, but really it's the language itself that's the special thing.
Yeah, yeah, I agree. Okay, last question. Also from Sir Arthur's, you know, Vault is the following. When an elderly. I'm not calling you elderly, my friend, but when an elderly and distinguished.
Yeah, what? Okay, go ahead.
When an elderly and distinguished scientist says something is possible, he or she is very likely to be correct. But when he or she says something is impossible, they are very likely to be wrong. So I want to ask you: what have you been wrong about? What have you changed your mind about? What thing that you thought was impossible turned out to be true?
In the early days of the Internet, I was saying things very similar to what I'm saying now. And then there was such a fever pitch of enthusiasm for a set of things I was really skeptical about that I went along with them for a while. An example was pirate culture, the idea that we should just make music free and all that. And what that really turned out to be, as well as the open source movement, for all of their admirable qualities (and everything that people love about them is in a way true), is that, just because of network effects, they create more and more centralization of wealth and power in a few digital hubs like Google, while other people tend to become impoverished, with only misleading examples of the occasional influencer or YouTuber who does well. Overall the whole community of musicians, for instance, sank down. And Spotify is no substitute for mechanicals; I'm sorry, I've been a professional musician a lot, and it's just not even in the same zone. So the thing is, I allowed myself to get carried along by the popular current. Another one is that I was anti-nuclear.
I got arrested at a nuclear power plant when I was a teenager, because I thought being anti-nuclear was just the side of the angels. And now I'm thinking, oh my God, we set back research we really need now, because of climate change. Like, we really blew it. And I knew better. I knew physics. What was wrong with me? It was just that the social pressure was so intense. And that doesn't mean that everything about the anti-nuclear movement was wrong. Sure, the nuclear energy world was filled with corruption and all kinds of problems. But the point is, we were so effective socially that we created a kind of black mark on this whole thing that now we really, really need.
And if we'd been doing more research all these years, I think we'd be in a better place. It really kills me. So, I mean, I don't regret just being technically wrong about something, because what can you do?
Yeah, exactly.
But I do regret being swayed by social pressure.
Well, there was a girl involved if I remember maybe.
Oh yeah, there's always a girl. Once again, I know I'm gonna get canceled.
All right, that's three cancellations. All right, so we gotta be out of here. The ayin hara, the ayin hara. We gotta pu pu pu it. All right, Jaron, thank you. Thank you for all you do, my friend. Thank you for reminding us that technology is never just about circuits and code. It's about what kind of humans we choose to become.
And so, from VR as a new physics lab to the ethics of AI to the ancient command to choose life, you've given me and my audience ways to choose wonder and heed warnings. And I think listening to Jaron and learning more about him is such a delight. And I'm glad that we exist in the same reality together.
Thank you. Likewise. This is really fun. All right, bye, my friend.
Take care.
Thank you.
Virtual reality is about expanding reality rather than escaping it. What should we do with all that power? That's the question Jaron left us with. And it connects directly to another uncomfortable conversation I had recently. And I know if you enjoyed this conversation about technology and consciousness, you need to check out my conversation with Donald Hoffman about the case against reality. Don argues that reality itself may be a kind of interface, a cosmic VR system that evolution designed to keep us alive rather than to show us the truth. It's the perfect companion to today's discussion about perception, virtual worlds, and what consciousness actually is. Click that episode here. And don't forget to like and comment and subscribe.
🔖 Titles
Jaron Lanier on How Virtual Reality Expands Minds and Reveals Our Biological Limitations
Into the Impossible: Jaron Lanier Discusses VR, Consciousness, and the Real Dangers of AI
The Power and Pitfalls of VR: Jaron Lanier Redefines Reality and Human Experience
Why VR Isn’t Like AI: Jaron Lanier Explores Tech’s Effect on Human Consciousness
Jaron Lanier Explains Why Virtual Reality Will Revolutionize How We Perceive Ourselves
Discovering Consciousness and Identity in Virtual Worlds with VR Pioneer Jaron Lanier
Can VR Expand Human Potential? Jaron Lanier on Technology, Cognition, and Ethical Choices
Jaron Lanier on the Surprising Dangers of AI and the Creative Potential of VR
Redefining Reality: Jaron Lanier on Sensation, Feedback, and What VR Means for Humanity
The Future of the Human Brain in Virtual Worlds with Jaron Lanier and Brian Keating
💬 Keywords
Virtual reality, artificial intelligence, human consciousness, cognition, VR hardware, sensory input, sensory deprivation, Richard Feynman, simulation hypothesis, perception, cybersickness, vestibular challenges, peripheral vision, Microsoft, Apple Vision Pro, energy demands of AI, LLMs (large language models), GPU optimization, homunculus flexibility, embodiment, creativity, human-computer interaction, sociology of technology, AI therapist, digital immortality, Turing Test, Coltrane and AI music, education technology, Fermi paradox, black hole computers, philosophical implications of VR
💡 Speaker bios
Brian Keating Bio in Story Format:
Imagine a scientist who sees the future not as a threat, but as an invitation to expand our minds. Brian Keating, an astrophysicist, finds inspiration in pioneers like Jaron Lanier—the inventor of virtual reality who jammed with the legendary Richard Feynman and now warns that artificial intelligence risks making us less human. Keating shares this belief: technology should be used to elevate consciousness, not replace it. While many people lump VR and AI together, Keating recognizes, as Lanier does, that they're actually opposites—VR can transform imagination and creativity, letting us explore impossible worlds and change how we think. In Keating’s view, the real magic of technology lies in its power to unlock new dimensions of human possibility, not to render us obsolete.
Bio for Jaron Lanier (summarized story format):
Jaron Lanier grew up in New Mexico, where he met his first girlfriend—whose parents were separated and she was staying in the area. Driven by youthful devotion, he followed her to Los Angeles, only to discover that her father was the head of the Caltech physics department. Through this connection, Lanier had a special, informal link with the legendary physicist Richard Feynman. Unlike others who know Feynman through biographies or public tales, Lanier’s relationship was unique and personal, though not as a typical student. His memories reflect the serendipitous and sometimes peculiar ways that lives and stories intertwine, shaping Lanier’s perspective on people and ideas rather than on achievements or formal roles.
Bio for Jaron Lanier (alternate story format):
Jaron Lanier's journey through the mysteries of reality ran through Caltech, where he had the rare opportunity to observe the legendary physicist Richard Feynman in action. Feynman wasn't just a master of quantum theory: he pushed the boundaries of his own mind and body, experimenting with sensory deprivation tanks and turning his own senses into the ultimate laboratory. Witnessing Feynman's fearless exploration of sensation and feedback inspired Lanier to ask deeper questions about our perception of reality itself. This early exposure shaped Lanier's lifelong quest to understand experience, not merely as a set of equations, but as something immersive. Today, Lanier's work challenges us to rethink the nature of reality and strives to bridge the physical and virtual worlds, continuing the legacy of curiosity and innovation he inherited from those transformative days at Caltech.
Jaron Lanier is a pioneering thinker whose journey spans from inventing virtual reality to pushing boundaries at Microsoft, and even composing music. Known for his contrarian approach, Lanier delves fearlessly into topics like consciousness, creativity, and the evolving relationship between humans and technology. He's not afraid to challenge Silicon Valley's narratives, warning that interactions with AI, from therapists to romantic partners, are shaped by corporate interests. With a career marked by deep curiosity and innovative achievements, Lanier continues to provoke important conversations about the future of our minds in the age of artificial intelligence.
ℹ️ Introduction
What happens when one of the pioneers of virtual reality sits down to reimagine our relationship with technology—and with reality itself? In this episode of the INTO THE IMPOSSIBLE Podcast, host Brian Keating welcomes Jaron Lanier, the legendary VR innovator, Microsoft scientist, musician, and provocative thinker who believes we’re only scratching the surface of what technology can do for the human mind.
But don’t get it twisted: while VR and AI are often lumped together as “disruptive” forces, Jaron Lanier challenges that narrative. For him, VR is all about expanding our consciousness and exploring the possibilities of human perception—turning you into an octopus, reimagining your sense of self, and fundamentally changing the way your brain works. AI, on the other hand, risks making us less human, putting our deepest desires in service of companies who own those intelligent systems.
In this conversation, Brian Keating and Jaron Lanier dig into the big questions: What does VR reveal about the quirks and limits of human perception? Why do so many current VR experiences miss out on the “cool stuff”—like transforming your very sense of embodiment? How do limitations like “cybersickness” reflect the messy, biological realities of our brains? And what happens when you try to create a virtual version of someone you love (or fear)—is it really them, or just a company’s algorithm in disguise?
The episode journeys from personal stories (like Jaron Lanier’s musical jam sessions with Richard Feynman) to deep reflections on the nature of consciousness, the pitfalls of social media, and why meaning can’t be reduced to bits or faked by an AI. Along the way, you’ll hear about everything from black hole computers and the Fermi Paradox to the lessons of the Talmud and what Galileo teaches us about science, curiosity, and the art of learning by doing.
Whether you’re curious about VR and AI, the future of consciousness-altering technology, or simply want a wild, philosophical ride, this episode will change the way you look at the very idea of reality. Buckle up: it’s time to go into the impossible.
📚 Timestamped overview
00:00 Human consciousness and existence are both enigmatic and fascinating, inspiring exploration and experimentation through metaphysics and virtual reality.
08:31 The brain retains traces of controlling other species' bodies, aiding adaptation and hinting at its potential for future evolved forms and enhancements.
12:47 In the 80s, VR pioneers showcased expensive, rare tech to Hollywood, wowing studio head Lou Wasserman and others.
16:18 Sensitivity to VR sickness varies widely, with certain groups more affected, contrary to claims of complete solutions.
26:02 The conversation contrasts two books, Lanier's "Dawn of the New Everything" and David Graeber and David Wengrow's "The Dawn of Everything," highlighting the latter's focus on diverse human societies and their potential, inspired by archaeological insights.
27:29 Perceptions reflect survival-driven adaptations, not reality; mindful software and interface design must balance illusion and utility.
35:55 VR evolves with technology and human perception adapts, highlighting the flexibility of human nature and optimism for future advancements.
40:54 Long-term, AI models might require minimal data input/output and could be powered off Earth, using solar energy and simple systems like lasers for efficiency.
45:45 An 80s video shows Lanier using a glove to play a virtual saxophone, simulating hand movement and interaction fluidly without UI changes.
47:59 Coltrane's music is an interactive, era-dependent system, not a static value, and its meaning is context-driven.
53:58 AI discourse today originates from Marvin Minsky's ideas, spread by his students. Though Lanier often disagreed with Minsky, he deeply respected and admired his kindness.
59:22 Meaning exists but is imperfect; statistics, as valid math, helps analyze approximations and patterns in knowledge.
01:07:04 Exploring other star systems physically, despite challenges, may be worthwhile for resources and possibilities, though virtual approaches are also considered.
01:09:32 Cosmic cycles may be too slow for humans to detect, offering a potential explanation for the Fermi paradox, though current understanding remains insufficient.
01:18:57 Education should shift from information transfer to fostering passion and character.
01:19:40 AI models can excel at tasks but require human oversight to catch occasional errors.
01:26:43 The conversation explores parallels between medieval Catholic scholasticism and contemporary debates on AGI and tech, likening both to abstract, cyclical arguments reacting against Enlightenment ideals.
01:31:40 AI relationships serve corporate interests over personal ones; users should recognize this dynamic before engaging.
01:40:07 Turing Test involves humans and a computer, but its outcome can't distinguish smarter AI from dumber humans. Social media similarly uplifts some (1/3) but mostly degrades behavior (2/3).
01:44:11 Regret over teenage anti-nuclear activism for hindering critical research needed to combat climate change.
❇️ Key topics and bullets
A comprehensive sequence of the topics covered in this episode, organized with main topics and their related sub-topics:
1. Introduction to Jaron Lanier and Core Themes
Brian Keating introduces Jaron Lanier as the “father of virtual reality” and a contrarian thinker.
Differentiation between VR (virtual reality) and AI (artificial intelligence).
Jaron Lanier’s philosophy: VR expands human consciousness, AI may threaten humanity.
2. Jaron Lanier’s Early Experiences and Influences
Jaron Lanier recounts informal mentorship with Richard Feynman.
Musical collaboration and Feynman’s sensory experiments.
LSD experiments and the distinctive mentorship style.
Reflections on the nature of sensation, feedback, and their philosophical implications.
3. History and Development of Virtual Reality
The origin story of VR and Lanier’s role (and the ‘mother’ of VR concept).
Early VR technological advancements and key inspirations (e.g. Ada Lovelace, Suzanne Langer).
Challenges of pioneering VR technology and its initial impact.
4. Current State of VR and AI: Misconceptions and Surprises
The key distinctions and misunderstandings about VR and AI.
The “coolest” aspects of VR overlooked by mainstream applications.
Transformative potential of changing perceived body in VR (“homunculus flexibility”).
Research labs focusing on embodiment and its cognitive effects.
VR as an exploration of evolutionary brain potential and future adaptation.
5. Limitations and Practical Challenges of VR
Mainstream VR’s shortcomings and unfulfilled promises.
Vestibular and physiological limitations (cybersickness, peripheral vision loss).
Diversity gaps in engineering affecting VR usability and comfort.
The necessity of designing VR with biological realism in mind.
6. The Nature of Consciousness and Reality
Philosophical debates: perception as a statistical filter versus reality (reference to Donald Hoffman’s work).
Statistical knowledge as the nature of our connection with reality.
Limits of direct experience and the role of illusion in perception.
7. Influence of Science Fiction and Popular Culture
Jaron Lanier’s connections to authors like William Gibson and Neal Stephenson.
Reflections on cyberpunk media, Lawnmower Man, Snow Crash, and their impact on VR’s public perception.
Relationship between fiction’s dystopian/utopian visions and technical development.
8. The Human Sensorium and Tech’s Biological Interface
Limits and possibilities of sensory input and feedback in VR (hardware and design constraints).
Stories on early VR, sickness, and user adaptation.
9. The Future of AI, VR, and Physical/Energy Constraints
Role of AI models and their creative limitations.
Energy demands of AI, big data, and the potential for off-world computation.
Relationship between computational hardware evolution (GPUs) and VR/AI progress.
Underutilized haptics and interaction in VR compared to visual fidelity.
10. Creativity, Music, and Meaning in Human Experience
Can AI/VR ever replicate creative geniuses like Coltrane?
The Turing Test: limitations as a benchmark for intelligence or creativity.
Importance of context and narrative in the meaning of artistic works.
11. Educational Value and the Evolution of Learning
Impacts of VR and AI on education: tools, perils, and future directions.
Jaron Lanier’s philosophy on fostering intrinsic motivation, character, and critical thinking over rote skills.
12. Community, Diversity of Thought, and the Talmudic Model
Importance of preserving context, dissent, and plurality in discourse (the Talmud as a model for collaborative knowledge).
Critique of the “singular truth” approach in platforms like Wikipedia and AI.
13. The Simulation Hypothesis, Fermi Paradox, and Cosmological Speculation
Light-hearted, speculative discussion on black hole computers, the Fermi paradox, and cosmological computation.
Potential VR roles in interstellar travel, consciousness, and cosmic scale computation.
14. Ethics, Free Will, and Technology’s Social Impact
Ethical reflections on AI’s role in eroding or enhancing free will.
Jaron Lanier’s skepticism toward Silicon Valley techno-utopianism and stealth religiosity.
Cautions about AI’s alignment with corporate interests, especially in the context of human relationships and memory.
15. Social Media, Attention, and the Case Against Algorithmic Platforms
Critique of mainstream social media platforms: their design, impact on cognition and society, and the trap of network effects.
Discussion on the potential for healthier, more pluralistic social media.
16. Reflection, Regret, and Changing One’s Mind
Jaron Lanier’s admissions about historical misjudgments (pirate/open source culture, anti-nuclear stance).
Importance of not yielding to social pressure and the value of critical, independent thinking.
17. Final Philosophical Reflections
The magic and mystery of language as “technology.”
Embracing the imperfect and approximate nature of knowledge and reality.
Call to choose life, consciousness, and human flourishing amidst technological change.
This should provide a detailed map of the episode’s thematic journey, with key sub-points for each main topic. If you need further breakdowns or want timestamps, just let me know!
👩💻 LinkedIn post
Absolutely! Here’s a LinkedIn post you can share, based on the transcript of Jaron Lanier’s conversation with Brian Keating on The INTO THE IMPOSSIBLE Podcast:
🚀 Just listened to a truly mind-bending episode of The INTO THE IMPOSSIBLE Podcast featuring Jaron Lanier: “VR Will Expand Human Consciousness.” If you care about the future of technology, consciousness, or how we imagine reality, this conversation is a must.
🎧 In his trademark thoughtful and contrarian style, Jaron Lanier (the “father” of virtual reality and a principal scientist at Microsoft) joined Brian Keating to break down why VR and AI are not the same—and how each technology impacts our minds in totally different ways.
3 Key Takeaways:
VR is not AI. While many conflate the two, Jaron Lanier argues that VR expands our consciousness and creative potential, while AI—particularly in its current, corporate-driven forms—often narrows it by overriding human agency.
The future of VR is in changing ourselves, not just the world. The most powerful (and underexplored) frontier in VR is its ability to alter users’ perceptions of their own bodies and cognitive abilities. Imagine experimenting with being an octopus or a cloud—and what that unlocks for the brain!
We can't ignore our messy, biological reality. Despite Silicon Valley’s love of the abstract and the digital, designing impactful VR means embracing our physical, biological quirks—like motion sickness (“cybersickness”), the limits of our senses, and how our brains improvise with imperfect information.
Jaron’s final challenge: As VR technology expands what’s possible, what kind of humans do we choose to become?
Highly recommend giving this thought-provoking episode a listen if you want a glimpse of where reality—and humanity—might be headed.
#VR #AI #Consciousness #Technology #HumanPotential #PodcastInsights
🧵 Tweet thread
🚀 What if the inventor of virtual reality thinks AI is making us LESS human? Let’s dive into the wild, mind-expanding ideas from Jaron Lanier—VR pioneer, Microsoft scientist, and all-around tech contrarian—thanks to his epic conversation with Brian Keating. 👇
1/ First myth BUSTED: VR and AI aren’t the same beast.
While people think both just "change reality," Jaron Lanier believes they’re OPPOSITE forces.
VR = expands consciousness, creativity, and how we imagine.
AI = risks overriding our interests with those of corporations.
2/ In classic VR, you can change your own body—turn into an octopus, a cloud, or something wild—and experience NEW forms of cognition.
Labs at Barcelona & Stanford are exploring this “homunculus flexibility.” The future? Discovering what bodies our brains can control best…even alien ones. 🤯
3/ But here’s what Silicon Valley doesn’t want you to hear:
When you chat with an AI lover or therapist, you're NOT talking to "neutral tech." You're engaging with a company whose interests come before yours. Read that again.
4/ VR’s biggest missed opportunity?
Most mainstream VR apps miss the coolest stuff—changing your body and mind. The real frontier is not graphics, but how VR can reshape what it MEANS to be human.
5/ Yet, there are limits—like cyber sickness (yes, even kittens wore VR headsets in ancient NASA studies—help Jaron Lanier find that pic, internet! 🐱). Not everyone’s brain can handle losing peripheral vision.
6/ AI vs. Human Creativity:
Can AI have its own “happiest thought,” like Einstein’s elevator epiphany? Jaron Lanier says nope—without a body, there’s no gut feeling, no nausea, no true creative leap. LLMs remix patterns, but don’t spark breakthrough ideas.
7/ The metaphysics of reality:
Are we just living in a simulation? Is consciousness an illusion, or does the “statistical realness” of our perception prove we’re not digital puppets?
8/ Social media is a SHAPESHIFTER:
Brian Keating used Shortform to prep for Jaron Lanier—but Lanier's REAL warning? Social media platforms exploit our emotions, making us addicted and agitated for engagement. (Attention IS the commodity.)
9/ Talmudic wisdom meets Silicon Valley:
Why does arguing matter, even in AI? It’s all about context, perspective, and the joy of learning—“You can’t know it totally unless it’s not real.” Diversity of thought beats monolithic, "one-truth" algorithms.
10/ Final thought:
Technology isn’t just about circuits—it’s about what kind of HUMANS we choose to become. Whether it’s VR, AI, or social platforms, ask yourself: Are you reclaiming your mind—or letting the machines rewrite it for you?
🧠💡 If you’re curious about the case AGAINST reality itself, check out Brian Keating's chat with Donald Hoffman next. Your brain will never be the same.
#VR #AI #JaronLanier #BrianKeating #TechEthics #Consciousness #FutureOfHumanity #SocialMedia #PodcastThread #Philosophy
What part blew your mind most? ⬇️
🗞️ Newsletter
Subject: Expanding Reality: Jaron Lanier on VR, AI, and the Future of Human Consciousness
Hi there,
We’re thrilled to share the highlights from our latest episode of The INTO THE IMPOSSIBLE Podcast, featuring a truly mind-expanding conversation between Brian Keating and VR pioneer, technologist, and philosopher Jaron Lanier. If you think you know what VR and AI are all about, think again—this episode will challenge your assumptions and inspire your imagination.
Inside the Episode: The Man Who Invented VR Rethinks Reality
Did you know that the father of virtual reality (Jaron Lanier) isn’t sold on AI—or at least, not in the way Silicon Valley wants you to be? In fact, he sees VR and AI as forces pulling us in opposite directions: VR to expand human consciousness, AI (as it is currently structured) as a potential threat to what makes us fundamentally human.
Some highlights:
VR Isn’t Just Escapism: Jaron Lanier makes the case that fully immersive VR is about changing and enhancing our minds—not replacing reality, but expanding it. He explains how VR can literally alter cognition, unleashing creative and even evolutionary potentials within us.
AI’s Hidden Dangers: When you talk to that AI therapist, girlfriend, or chatbot, “you’re not interacting with some neutral technology. You’re interacting with a company whose interests override yours.” Jaron Lanier pulls back the curtain on the incentives shaping our relationship with artificial intelligence—and why realism matters more than ever.
Consciousness & Reality: Drawing on memories of jamming with Richard Feynman and debating the nature of sensation, Jaron Lanier explores whether we can ever perceive reality directly—or just filtered through messy, beautiful biology.
VR’s Secret Superpower: What’s the wildest thing VR can do that most developers miss? Let you change your own body and, in doing so, unlock new ways of thinking. Academic labs are beginning to map out what happens when you “become” an octopus, a cloud, or something unimagined—maybe even glimpsing the future of human evolution.
Why Most VR Still Misses the Mark: Despite all the advances, too much of today’s VR is stuck on visuals and ignores the deeper, more transformative potential around embodiment, haptics, and interaction.
AI, the Turing Test, and Meaning: From thought experiments with Einstein to the pitfalls of the Turing Test, Jaron Lanier gets real about the limits of language models and the illusion of a “universal consciousness” inside your machine.
Quotable Moments
“VR expands what’s possible for human consciousness. It literally changes the way that you imagine, create, and how your brain works when you inhabit different worlds.”
– Brian Keating
“All you can have is a version of [AI] brought to you by a company whose interest comes before yours… If you’re interacting with a fake lover or a fake therapist, you’re doing it for their benefit more than yours, intrinsically and unavoidably.”
– Jaron Lanier
Listen & Watch
Prepare to rethink not just virtual reality and artificial intelligence—but consciousness, education, science, and what it means to choose life in a world full of possibilities and pitfalls.
🔗 Listen to the full episode now
🔗 Watch on YouTube
If today’s conversation captivated you, we highly recommend Brian Keating’s previous interview with Donald Hoffman: “Is Reality a Cosmic VR System?”
Until next time—choose wonder, heed the warning signs, and question everything.
Stay curious,
The INTO THE IMPOSSIBLE Podcast Team
❓ Questions
Absolutely, here are 10 discussion questions inspired by the episode "Jaron Lanier: VR Will Expand Human Consciousness" from The INTO THE IMPOSSIBLE Podcast:
Jaron Lanier argues that VR and AI are fundamentally opposite forces: VR expands consciousness while AI may be diminishing our humanity. Do you agree with his distinction? Why or why not?
What are the most surprising or profound ways Jaron Lanier believes VR can alter our cognition, such as changing the perceived body in virtual space? How might these experiences impact society or education?
Jaron Lanier mentions that mainstream VR currently misses out on what he considers its most fascinating possibilities. What do you think he means by this, and why do you think adoption has lagged?
The episode discusses the limitations of VR technology, particularly issues like cyber sickness and hardware constraints. How significant do you think these obstacles are to VR becoming a broader part of everyday life?
Jaron Lanier criticizes the lack of diversity in Silicon Valley’s engineering teams and suggests it affects VR design and user comfort. How important is diversity in technology development, and what are the consequences of its absence?
The conversation compares the structure of the Talmud to collaborative models like Wikipedia and AI. What lessons does Jaron Lanier suggest we can take from the Talmud for designing more nuanced or diverse AI and social media platforms?
Jaron Lanier and Brian Keating discuss education and the role of VR and AI in it. How could these technologies be used to inspire more joy and curiosity in learning, instead of just transmitting information?
The episode raises ethical concerns about experiences with AI, especially when interacting with AI “lovers” or simulated people. According to Jaron Lanier, what are the dangers of mistaking these interactions for genuine relationships?
Jaron Lanier comments on the spiritual or almost religious beliefs that emerge in the tech industry, especially related to AI. How would you describe this “stealth religiosity,” and do you see examples of it in today’s tech culture?
Reflecting on his own mistakes, Jaron Lanier mentions how social pressure influenced his views about technology and energy policy. How can individuals and society better guard against collective or popular “bandwagons” in tech and science?
Feel free to use these questions to spark deeper conversations, classroom sessions, or even your own thinking around the big ideas raised in this episode!
✅ What if the father of Virtual Reality says AI could make us LESS human?
✅ Brian Keating sits down with Jaron Lanier—pioneer of VR, musician, and boundary-pushing philosopher—to explore why the future of tech should expand our consciousness, not replace it.
✅ On The INTO THE IMPOSSIBLE Podcast, they dive into the “VR vs. AI” myth, how VR reshapes our brain and creativity, and why most companies totally miss what’s really mind-blowing about immersive worlds.
✅ Ready to see why the most powerful future tech will be the one that keeps US in the center? Listen now and challenge everything you thought you knew about reality.
Conversation Starters
Absolutely! Here are some conversation starters for your Facebook group to spark thoughtful discussion about this episode:
Jaron Lanier argues that VR and AI are "opposite forces"—with VR expanding human consciousness and AI potentially making us less human. Do you agree with this distinction? Why or why not?
The episode discussed how classic VR allows users to radically change their own bodies (like turning into an octopus) and that this experience can profoundly affect cognition. What virtual “body” would you be most interested in inhabiting, and what do you imagine you might learn from it?
Jaron Lanier says that most current VR misses what’s “the coolest” about the technology: changing your physiology and perception. In your experience, has any VR experience ever transformed how you think or feel? Share your story!
On the limitations of VR, Brian Keating and Jaron Lanier touched on "cyber sickness" and biological barriers like loss of peripheral vision. Do you consider these fundamental obstacles—or are they just bumps in the road to mainstream adoption?
The episode raised concerns about AI-powered companions—therapists, lovers, friends—being controlled by corporate interests. Would you trust an AI for personal relationships or therapy? Why, or why not?
Jaron Lanier discussed the idea that our experience of “reality” is inherently statistical and incomplete, referencing quantum theory and sensory biology. How does this perspective influence your view of virtual worlds or simulations? Is reality itself a kind of VR?
Both guests talk about the need for technology to help us "choose life" and foster human connection, rather than just escaping reality. What do you think is the most powerful way VR could be used to improve education, relationships, or society?
From stories about jamming with Richard Feynman to philosophical debates about consciousness, this episode blended science, art, and technology. Where do you see the greatest potential overlap between these fields in the future?
Jaron Lanier challenged the utility of the Turing Test and suggested that meaning comes not just from information, but context and story. Do you think an AI could ever truly “create meaning,” or is that something uniquely human?
Towards the end, the podcast explored whether technologies like VR could ever help us “become better humans.” What’s one way you’d like to see virtual reality or AI expand—not replace—your own sense of self, creativity, or wonder?
Feel free to tailor or expand these to fit your group’s vibe!
🐦 Business Lesson Tweet Thread
1/ What if virtual reality could rewire your mind—not just your tech habits?
2/ Jaron Lanier says VR isn’t here to replace us. It’s here to EXPAND human consciousness itself.
3/ Forget the dystopia. Step into VR and you can literally swap your body—become an octopus, a cloud, something unimagined. Your brain adapts, your cognition shifts.
4/ The wild part? None of the big “VR experiences” online come close to this. We’re missing the best feature: changing who we are on the inside.
5/ There’s real science here. You stretch your brain by stretching your sense of body—labs at Stanford and Barcelona are mapping how imagination changes us.
6/ It’s not about perfect tech or escaping reality. It’s about exploring the boundaries of possible humans—future bodies, future minds.
7/ Most tech just wants to monetize attention. VR could do something deeper: reshape how we think, create, and connect.
8/ Want your mind blown? The next frontier isn’t AI pretending to be you. It’s VR showing you what else you could be.
9/ We’re still early. The true magic isn’t onscreen—it’s inside you, waiting for a new reality to wake it up.
10/ VR isn’t about running away. It’s a lab for the future of consciousness.
✏️ Custom Newsletter
Subject: Into the Impossible: Jaron Lanier on VR, AI, and Expanding Human Consciousness 🎧
Hey Impossible Thinkers,
We’re thrilled to announce a wild new episode of the INTO THE IMPOSSIBLE Podcast that bends the boundaries of tech, consciousness, and reality itself! Our host Brian Keating sits down with VR pioneer, musician, and contrarian philosopher Jaron Lanier (yes, the guy who jammed with Richard Feynman) for a mind-expanding conversation you won’t want to miss.
Episode Highlights: "Jaron Lanier: VR Will Expand Human Consciousness" 🚀
Ever wondered how virtual reality actually changes your brain? Worried that AI is stealing our humanity rather than uplifting it? This episode brings all the big questions—and some surprising answers!
Five Keys You’ll Learn:
Why VR and AI are not the same—and how their impacts on our minds are complete opposites.
The coolest (but totally overlooked) thing about VR: changing your own body and cognition by becoming something else—octopus, cloud, you name it!
The hidden danger of AI therapists and lovers: You’re not talking to a neutral machine, you’re communicating with a company’s interests (yes, really!).
The power—and limits—of VR to shape future evolution: Discover how VR experiments reveal what body types your brain is secretly capable of inhabiting.
Why embracing our messy, biological nature may be VR’s missing ingredient—and what Silicon Valley gets wrong about people.
Fun Fact From the Episode:
Did you know the very first head-tracked visual display wasn’t made for humans? It was for kittens! Jaron Lanier shares a hilarious anecdote about tracking down a lost photo of kittens in VR goggles (from the 40s or 50s!).
Outro: Why You Need to Listen Now
This episode isn’t just a deep dive—it’s a conversation that will leave you questioning everything from what reality is, to the future of our very consciousness. If you’re curious about how technology can expand—not shrink—what it means to be human, this one’s for you.
Call to Action 🚦
Ready to step into the next dimension of thought? Listen now and let us know what you think! Reply to this email with your biggest takeaway or burning question—we LOVE hearing from fellow impossible thinkers.
Hit subscribe, share with a friend, and leave a review so we can keep bringing you the universe’s best minds!
Until next time, keep venturing INTO THE IMPOSSIBLE.
With curiosity,
The Into the Impossible Podcast Team
P.S. You won’t want to miss the companion conversation with Donald Hoffman (“The Case Against Reality”) that Brian Keating references—find it in our archives and prepare to have your mind blown twice!
🎓 Lessons Learned
Sure! Here are 10 key lessons from the episode, each with a short title and a concise description:
VR Expands Consciousness
Virtual reality enables us to alter our perception, unlocking possibilities for creativity and changing how our brains work.
AI vs. VR: Different Impacts
AI can make us less human if misused, while VR aims to enhance and expand our humanity and imagination.
Changing the Body, Changing Mind
In VR, altering your body form (like becoming an octopus) deeply reshapes cognition and reveals the brain’s flexibility.
Limitations: Cyber Sickness
Hardware constraints, like restricted peripheral vision, cause simulator sickness, highlighting challenges VR tech faces for mainstream adoption.
The Power of Evolutionary Senses
Human senses are both powerful and imperfect—VR explores and sometimes exploits these strengths and weaknesses.
Realism Over Hype in AI
Large language models are useful tools, but it’s crucial to remain realistic about their limitations and not ascribe them magic.
Ethical Risks: Company Control
Interacting with AI companions means engaging with corporate interests, not neutral tech; always recognize who controls the simulation.
The Importance of Human Context
Art, music, and knowledge gain meaning from their real, lived context—mere simulation loses intrinsic value outside human experience.
Education’s New Role
VR and AI push education to focus less on rote tasks, more on joy, creativity, and critical thinking in human development.
Statistical Reality and Meaning
Our understanding of reality is always incomplete—embrace the “noisy channel” as the authentic human condition, not a flaw to erase.
10 Surprising and Useful Frameworks and Takeaways
Absolutely! Here are ten of the most surprising and useful frameworks and takeaways from this episode of The INTO THE IMPOSSIBLE Podcast with Jaron Lanier and Brian Keating:
VR vs. AI: Opposite Forces in Shaping Humanity
Brian Keating sets the stage by exploring Jaron Lanier’s belief that virtual reality expands human consciousness, while AI risks making us less human. Rather than seeing VR and AI as similar “digital” technologies, Lanier argues they fundamentally pull us in different directions—VR enhances our creative potential, whereas AI can compress and replace uniquely human experiences.
Changing Your Physiology in VR Transforms Cognition
One of Lanier’s most striking ideas is the power of full-body transformation in VR. When you inhabit a virtual body—be it an octopus or a distribution of clouds—your cognitive abilities and self-perception shift dramatically. This area (“homunculus flexibility”) is still largely unexplored in mainstream VR but is being studied in labs, revealing just how adaptable—and mysterious—the human brain and consciousness are.
VR as a Laboratory for Futuristic Evolution
By experimenting with novel body types in VR, we can discover not only what our brains are pre-adapted to control from evolutionary history, but also what might be possible in future evolution. It’s a sneak peek into humanity’s latent potential—bodies and abilities we haven’t even imagined may feel “natural” in VR.
Sensory and Motor Systems as Filters and Noisy Channels
Lanier emphasizes that our brains receive only imperfect, filtered input through our senses, and this “noisy channel” is what grounds us in reality. Instead of treating imperfection as a problem, it’s the messiness of biology—and our mind’s way of working with it—that makes consciousness and reality itself possible.
AI Is Human Collaboration—Not Isolated Intelligence
Lanier demystifies large language models (LLMs) and AI. He frames them as massive, pattern-projecting collaborations: like Wikipedia but algorithmically mashed together. AI isn’t an autonomous mind; it’s a jumbled cooperation of human inputs processed statistically. This realism makes AI more useful, not less.
Simulator Sickness and Biological Diversity
One of the biggest hurdles for VR is “cyber sickness.” Lanier highlights how certain demographics—like small-stature Asian females—are far more affected, and how Silicon Valley’s lack of engineering diversity exacerbates this. The takeaway: VR design must deeply respect and accommodate biological variation, not just technical specs.
The Talmudic Framework for Knowledge, AI, and Social Media
Lanier describes the Jewish Talmud’s approach of preserving many voices and perspectives on each page. He suggests this is a healthier model for AI, Wikipedia, and social media—embracing clusters of context and approximate truths, rather than aiming for one “perfect view from nowhere.” It’s a statistical, not absolute, approach to knowledge.
VR’s Untapped “Interaction and Haptics” Frontier
Despite visual advances, Lanier laments how modern VR still fails at realistic hand and touch interactions, such as playing a virtual saxophone. The future of VR is not just stunning graphics—it’s realistic, context-sensitive haptic feedback and embodied interaction, which remain majorly underdeveloped.
Social Media’s Attention Economy and Personality Degradation
Lanier critiques current social platforms as systems that concentrate power and foster irritation, divisiveness, and personality degradation (notably affecting public figures). The incentives are misaligned—driving engagement by stirring up negative emotions.
Reimagining Education for a Post-AI World
With AI’s rise, Lanier calls for education to shift from rote learning toward nurturing curiosity, joy, and the ability to creatively verify and contextualize information. The future skill is not memorizing facts or solving problems in isolation, but being able to work with statistical tools (like AI) and judge when results make sense.
Bonus Insight:
Lanier’s caution about AI “lovers,” “therapists,” and digital gods: These simulacra are always tied to the interests and profit motives of powerful corporations. Authentic human connection—whether in memory or therapy—requires a different source of meaning than what current AI can offer.
These frameworks and ideas don’t just help us understand technology—they challenge us to ask what kind of humans we want to become. Let me know if you want deeper dives on any of these!
Clip Able
Absolutely! Here are 5 engaging, thought-provoking social media clips from the conversation between Brian Keating and Jaron Lanier. Each clip is at least 3 minutes long, and I've included a compelling title, the relevant timestamps, and a suggested caption for each.
1. Title: "Changing Your Body, Expanding Your Mind: The Untapped Power of Virtual Reality"
Timestamps: 00:06:13 – 00:09:43
Caption:
Jaron Lanier dives deep into the mind-bending possibilities of VR—changing your virtual body, stretching the human homunculus, and expanding cognition. Why does mainstream VR miss its most fascinating feature? Find out why becoming an octopus (or a cloud!) could transform how your brain works and unlock new potential for collective human experience.
#VirtualReality #Consciousness #TechInnovation
2. Title: "The Messy Mystery of Human Senses—and What VR Gets Wrong"
Timestamps: 00:10:58 – 00:15:17
Caption:
Are our senses just glitches in evolution—or the heart of what makes us human? Brian Keating and Jaron Lanier confront one of the biggest hurdles in VR: simulator sickness. From kittens in headsets (seriously!) to Silicon Valley’s blind spots, they reveal why embracing biological messiness is both VR’s challenge and promise.
#VirtualReality #HumanExperience #TechReality
3. Title: "Can AI Ever Have a 'Happiest Thought'? Exploring the Limits of Machine Consciousness"
Timestamps: 00:20:19 – 00:24:47
Caption:
What does it mean for an AI—or any machine—to have an actual experience? Brian Keating uses Einstein’s happiest thought to probe the philosophical cracks in current AI, while Jaron Lanier offers a candid insider’s view on the real limits of large language models and the difference between statistical pattern-matching and genuine creativity.
#ArtificialIntelligence #Consciousness #Philosophy
4. Title: "The VR Revolution That Never Was: Missed Opportunities in Tech and Science"
Timestamps: 00:45:45 – 00:47:54
Caption:
Jaron Lanier reflects on the lost magic of early VR innovation and the persistent disappointments in today’s mainstream offerings. Why have vision and haptics leapt forward, but true human interaction remains stuck? Hear his stories about virtual instruments, the early graphics era, and what modern designers are failing to understand about our biological complexity.
#TechHistory #VirtualReality #Innovation
5. Title: "Education, AI, and the Future of Human Passion"
Timestamps: 01:16:28 – 01:21:17
Caption:
How should we teach in an age when AI can ace any test and fake any essay? Jaron Lanier and Brian Keating explore why the future of learning should be about joy, character, and helping students discover their true passions—even if it means rethinking what tests and assignments are really for.
#EdTech #FutureOfLearning #AI
Let me know if you'd like shorter clips or more focused moments!
Made with Castmagic