
The INTO THE IMPOSSIBLE Podcast

Princeton Scientist: We Don't Understand AI | Tom Griffiths

Speakers: Tom Griffiths, Brian Keating

Tom Griffiths explores the mysteries of AI and human cognition with insights from mathematical logic, consciousness, and cognitive science. He reveals the limits of our understanding of modern AI systems, parallels with human thought, and the ongoing challenges of decoding how minds and machines process information.


Highlights

“The man who built modern AI, he's the direct descendant of the man who invented the math that made it possible, which is insane, but it's not the wildest thing my guest told me today.”
— Brian Keating
“Rethinking AI through a Physicist's Lens: "From a physicist's point of view, whenever I talk to people about consciousness, I always get the same thing: we can't really define what consciousness is."”
— Brian Keating
“The Mystery of Consciousness: "I think one of the big challenges of studying consciousness is that we don't necessarily know what computational problem consciousness is solving. That's why it's something that's continued to be mysterious."”
— Tom Griffiths
“the difference between knowing the name of the thing and knowing something about it is the most dangerous gap in all of science.”
— Brian Keating
“humans are not necessarily the best reasoners, or not as reasonable as we think we are.”
— Brian Keating


Full transcript

Tom Griffiths

One of the, I think, interesting challenges we have at the moment is having built systems that we don't fully understand.

Brian Keating

The man who built modern AI, he's the direct descendant of the man who invented the math that made it possible, which is insane, but it's not the wildest thing my guest told me today.

Tom Griffiths

That's pretty much exactly what he was trying to do. And he was the right kind of crazy.

Brian Keating

Leibniz was trying to invent AI 250 years before computers even existed.

Tom Griffiths

Sycophancy is a major problem. If you take a rational agent and have them interact with a system which is sycophantic, then that agent is going to become increasingly confident in their beliefs, but no closer to the truth.

Brian Keating

My guest spent 20 years building the mathematics of how minds work, and he just told me three things that made me question what I thought AI actually was. Now, let me show you. From a physicist's point of view, whenever I talk to people about consciousness, from Chalmers, Bostrom, and upcoming guest Joscha Bach and others, I always get the same thing, like we can't really define what consciousness is, so how do we know what thought is? So how can you determine what the laws of thought are? Isn't that kind of an extremely provocative and bold claim?

Tom Griffiths

The way that I approach that question in the book is really by thinking about what are the kinds of computational problems that minds solve? And that's really what this enterprise was. It's trying to figure out what's the mathematical structure that describes the thing that minds are doing, whether that thing is what Aristotle was interested in, which is just trying to characterize what good arguments are, through to some of the questions that you were raising about what it means to make a good decision and how we think about rationality in that context. And the interesting thing is, I think a lot of those questions are things that we can answer without ever having to touch consciousness. I think one of the big challenges of studying consciousness is that we don't necessarily know what computational problem consciousness is solving. That's why it's something that's continued to be mysterious. We don't really know what it's there for, in terms of how necessary it is to being able to do the kinds of things that minds do. And our AI systems give us nice demonstrations. Again, some people might want to argue that they're conscious in some form, but I think they give us nice demonstrations of how far you can get using certain kinds of mathematical formalisms.

Brian Keating

Yeah. And there are many, many allusions to physics in this book, which is delightful in many different ways, not least because it gives us some kind of formalism to hopefully go about this problem. But, as a physicist is wont to do, I want to get into what you would say is the briefest, most parsimonious, defensible definition of thought itself and the laws that govern it.

Tom Griffiths

In the book I focus on deduction, which is patterns of logical reasoning, going from things that are true to other things that are true; induction, which is seeing a pattern in the world and then making the generalization that that thing holds in general; and abduction, which is seeing something that you want to explain and then coming up with an explanation for it. And I think that's a pretty good characterization of the set of things that we normally have on our list when we want to try and explain patterns of thinking. And those are the things that we try and engage with, in terms of the different kinds of mathematical formalisms that are explored in the book.

Brian Keating

There's an awful lot of discussion of both the successes in our understanding of consciousness and the wrong turns. And I like that because, personally, I hate when we teach our undergraduates, as is often done, by basically just handing them the string of Nobel-prize-winning experiments, connecting the dots, and that's it. But you go through the twists and turns, and I thought one of them was this statement by Feynman, which is that the difference between knowing the name of the thing and knowing something about it is the most dangerous gap in all of science. What are some of the inherent biases that science has brought to it, given that cognitive science is such a Frankenstein-type field? It started off, as you discuss in the book, not really being taken seriously, and now it's at the cutting edge. What is the largest gap, the biggest lacuna, in your field, where people seem to have an overabundance of confidence in describing how models work, or even the model of the brain, let alone models of artificial intelligence?

Tom Griffiths

So one of the, I think, interesting challenges we have at the moment is having built systems that we don't fully understand. Right? So we now have these AI systems that put computer scientists in a very unfamiliar situation, where if you're a computer scientist, you're used to programming something, and because you programmed it, you kind of know what it's doing. And that is not how our AI systems work. These modern AI systems are built using enormous artificial neural networks, and they learn from data, far more data than any human could actually read through and understand. And so you end up with something that has both learned from an incomprehensible amount of data and encoded that information in an incomprehensible number of continuous weights inside the system. And so as a computer scientist, you're then stuck, and you're like, oh, what do I do with this? I actually think that's a good opportunity for cognitive scientists, because we have been trying to study large, complex systems that we don't understand for about 75 years now.

Tom Griffiths

Those systems are human brains. And a lot of the tools that we built for understanding human brains and how it is that humans think and behave are tools that we can now use to go back and really analyze these AI systems and try and understand a little more about how they work as well.

Brian Keating

What would the advent of ChatGPT, what sort of thing would that be like? Is it the invention of the telescope, the cyclotron? What does it represent in your field?

Tom Griffiths

I think it's interesting. I'm not quite sure what the analog is. It's both a kind of breakthrough, in terms of revealing that certain kinds of theoretical ideas can take us further than we might have thought, but also something that's given us a new set of problems, in terms of trying to understand what that system is doing and then trying to figure out what all of its properties are and what the consequences of using those systems in certain kinds of settings are. It's both the validation of a theoretical approach and the creation of a new field of inquiry.

Brian Keating

I talked to Steven Pinker about his most recent book, and we had a conversation about how humans use these heuristics and computational shortcuts. You bring up a couple of these in the book, and I wonder if you could tell some of the stories of Kahneman and Tversky and how they illuminated this claim, shocking at the time, that humans are not necessarily the best reasoners, or not as reasonable as we think we are.

Tom Griffiths

Yeah. So there's an interesting paradox in trying to study human cognition from the perspective of computer science. I live in these two departments: the psychology department and the computer science department. And in the psychology department, my colleagues think humans aren't that smart. Right? If you study human decision making.

Tom Griffiths

You find out that humans have all sorts of simple heuristics they follow that result in systematic biases. And that's the work that Kahneman and Tversky did, really kicking that off and giving us this picture of human cognition. And then if I walk across campus to the computer science department, humans are the things that we're trying to emulate when we're building our AI systems. They're our best examples of systems that can solve certain kinds of problems. The way that I would resolve that tension is that humans are actually good at solving a set of problems that are extremely hard problems to solve. And they're not always necessarily solving exactly the problem that a psychologist asks them to solve when they study them in the lab. So a simple example of this is: if you flip a coin five times, which of the following sequences is more likely? Heads, heads, heads, heads, heads, or heads, heads, tails, heads, tails. If you just ask someone on the street, they'll probably say that heads, heads, tails, heads, tails is more likely. But as a trained physicist knows, the probability of those two sequences is equal.

Tom Griffiths

As long as it's a perfectly fair coin, any sequence of five heads or tails is equally likely. So that's an error that humans make, the kind of thing you could point to and say: humans are irrational, we're biased in this way. But one way to understand it is to ask, what if the human is not solving that problem, but solving a different problem? They're being asked to give you the probability of this sequence under a random generating process. What if they're flipping that around and telling you the probability that a random generating process produced this sequence? Or, how much evidence does the outcome give you for having been produced by a random generating process? That's something we can calculate using Bayesian probability. And when you do that, it turns out people's judgments about randomness are very systematic, and you can capture them with a nice, simple Bayesian model. But that's a case where we're reanalyzing the problem that human minds are solving. When you reanalyze it, it turns out people are doing a good job of solving that problem.

Tom Griffiths

And in some ways, it might even make more sense to be solving that problem. Because if you're wandering around in the world, it is very unusual for you to have to calculate the probability of sequences of things. But it's a good thing for you to be able to detect patterns that might suggest that something is non-random, and that's probably what our brains are built to do.
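Tom's reanalysis can be sketched in a few lines of Python. This is a toy model, not the one from his research: it assumes the only alternative to a fair coin is a hypothetical "sticky" process that tends to repeat the previous flip, and asks which hypothesis a sequence favors.

```python
def p_fair(seq):
    # Under a fair coin, every sequence of n flips has probability (1/2)^n,
    # so HHHHH and HHTHT are exactly equally likely.
    return 0.5 ** len(seq)

def p_sticky(seq, repeat_prob=0.8):
    # A hypothetical non-random generator: the first flip is 50/50, and each
    # later flip repeats the previous one with probability repeat_prob.
    p = 0.5
    for prev, cur in zip(seq, seq[1:]):
        p *= repeat_prob if cur == prev else 1 - repeat_prob
    return p

def posterior_random(seq, prior_random=0.5):
    # Bayes' rule: how much evidence does the sequence give that it came
    # from the fair (random) process rather than the sticky one?
    num = p_fair(seq) * prior_random
    return num / (num + p_sticky(seq) * (1 - prior_random))

print(posterior_random("HHHHH"))  # well below 0.5: a streak looks non-random
print(posterior_random("HHTHT"))  # well above 0.5: alternation looks random
```

Both sequences are equally probable under the fair coin, but as evidence about the generating process they are very different, which is the flipped-around question people may actually be answering.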

Brian Keating

A central character in this book is past guest Noam Chomsky. And it's always been curious to me that his notions of generative grammar and so forth explain so much from so little, or seem to explain why, for example, our children can learn language with far less training data, if you will, than computers need, with their huge data sets and trillions of parameters.

Tom Griffiths

Now.

Brian Keating

But talk about his role: separate from AI, there's a clue to the laws of thought here that caused the whole field of cognitive science to emerge. It really is predicated on fairly elementary questions. That doesn't mean easy or simple; it just means they're basic and important. Talk about Chomsky's role in all this and whether his ideas are still pertinent to experts like you in the field today.

Tom Griffiths

So part of this story about people trying to use math to understand thought occurs in the middle of the 20th century, when psychologists had decided that the only way to be rigorous about doing psychology was to not talk about thought and not talk about internal mental states. This was an approach called behaviorism. And the behaviorists said you should just focus on the things that you can measure, which are the environments that people act in and the behaviors that result from those environments. And so there was a group of revolutionaries, in what was called the cognitive revolution, psychologists and linguists and computer scientists who were interested in finding a different way to study the mind. They did this by saying another way to be rigorous about minds is to use math to express hypotheses about how minds work that we can then test through behavior. And they did that using the kind of math that was most obvious and accessible to them, which was the math of rules and symbols, inspired by computers and logic and these sorts of formalisms that were very prominent in the 1950s.

Tom Griffiths

They set out to test out, how well does that describe how minds and languages work? And so Chomsky took that approach and applied it to language. And he set up the problem in a way that was different from the way that previously linguists had thought about the problem. Linguists had kind of thought about their job in linguistics as characterizing the structures of different languages and then maybe looking for sort of commonalities and regularities in the structures of those languages. And Chomsky said, well, actually, if we kind of think about this as a math problem, a language is some set of sentences that you're allowed to produce, and let's characterize that set in a very mathematical way by specifying a generator of that set. So he thought of a grammar as a system of rules that you could follow to generate all of the valid sentences in a language. And that approach, what's called generative grammar, became the foundation for much of theoretical linguistics, certainly through the 20th century, and then, you know, continues to be influential today.
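A toy generative grammar can make the idea concrete. This sketch is an editorial illustration, not an example from the book: a handful of rewrite rules whose outputs, taken together, are the "language" the grammar defines.

```python
import random

# A toy generative grammar: each symbol rewrites to one of its expansions,
# and repeatedly applying the rules generates complete sentences.
GRAMMAR = {
    "S":  [["NP", "VP"]],          # a sentence is a noun phrase + verb phrase
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["linguist"]],
    "V":  [["sees"], ["follows"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:      # terminal: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))        # e.g. "the dog follows the linguist"
```

Every sentence it emits is grammatical by construction; characterizing English this way, as a generator of the set of valid sentences, was Chomsky's reframing of the linguist's job.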

Brian Keating

You talk about a chessboard analogy with Chomsky. Can you go through the different types of moves? You start off with the initial, what is it, 16 moves that can be made by each player. Talk about that analogy. Go ahead and explain it, this chessboard analogy.

Tom Griffiths

So you can think about this problem of defining a generator of a set. A good way to think about that is something like a board game. The rules of a board game are a set of principles that tell you what the states of the board are that you can reach. So you start out in some configuration. Chess is a good example: you've got all your pieces laid out, the rules tell you how to set up those pieces, and then you can make all of the moves that you can make from that position according to the rules, and that's going to take you to the next position, and then your opponent makes their move, which takes you to the next position. So if you have 20 options for your first move and the other person has 20 moves in reply, at that point there are already 400 configurations of the board that you could have reached, and that number keeps increasing exponentially as each subsequent move is made. At the end of making all of those moves, you get to the end of the game, and by following the sequence of rules, you've generated all of the possible games of chess. And so that's his idea: just as there's a set of games of chess that you can play and final board positions that you can reach, there's some set of sentences that are the things that are in English.

Tom Griffiths

And maybe we can come up with an analog of the rules of chess that generates all of the valid sentences in English.
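The same "rules as a generator of a set" idea can be shown with a game far smaller than chess. This is an illustrative sketch: a counter that may gain 1 or 2 per move, capped at 10, whose rules we exhaustively unfold to enumerate every reachable state.

```python
from collections import deque

# A rule system as a generator of a set: start from an initial state and
# apply the legal moves until no new states appear. Chess works the same
# way, just with an astronomically larger state space.
def legal_moves(state):
    return [state + step for step in (1, 2) if state + step <= 10]

def reachable(start=0):
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(sorted(reachable()))  # the rules generate every state from 0 to 10
```

The set returned by `reachable` is fully determined by the rules, which is exactly the sense in which a grammar determines the set of sentences in a language.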

Brian Keating

One of my favorite aspects of the book is that you trace through the history of thinking about thinking, metacognition, whatever you want to call it. And you start with Aristotle. I love Aristotle. Who doesn't? But his claims to fame in the physical sciences are not so strong, right? I mean, they haven't really held up as well as his laws of thought, or logic. He thought that things fell to the center of the earth, and that heavier things fell faster than lighter things, which Galileo disproved, allegedly by dropping two objects off a tower, or even with a simple thought experiment.

Brian Keating

Speaking of the laws of thought, the role of thought experiments is not insignificant. But he also thought that women had fewer teeth than men. He had a wife, because he had a son, Nicomachus, right? Nicomachus was his son.

Brian Keating

Tom?

Tom Griffiths

Yeah. I think you know your Aristotle better than I do.

Brian Keating

Well, the one claim to fame is that he knew that whales were mammals. But why does Aristotle, you know, get so much right about thought? And how can that possibly still matter, you know, 24 centuries later?

Tom Griffiths

I think part of that is that he was doing math, essentially, when he was thinking about thought. So, what Aristotle did: he had two projects that I talk about in the book, and the first of those was the part that's about deductive logic. This is setting up the set of syllogisms. A syllogism is a simple argument with two premises and a conclusion. These are familiar kinds of things you've probably seen in school, like: all As are Bs, all Bs are Cs, therefore all As are Cs. That's an example of a syllogism.

Tom Griffiths

And he was interested in characterizing the set of these syllogisms and then which of them are valid, in a way that's actually quite like that Chomsky problem of being able to say what the good ones are and what the bad ones are. And so that was really a matter of just enumerating. He was doing the combinatorics of these kinds of arguments. He enumerates all of the arguments, he says some of these I know are good, and I'm just going to say those are the good ones. And then he makes little mathematical proofs to relate some of the other arguments back to the ones that he knows are good, and he can say things about those, too. And so I think his success there was that he was involved in exactly the kind of mathematical enterprise I talk about in the book. He then had a challenge that was left over from that, which is exactly the Chomsky challenge.

Tom Griffiths

Again, can I come up with a mathematical system that characterizes the good ones and separates them from the bad ones? And that's the challenge that was picked up by Leibniz and later by Boole.
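Aristotle's enumeration strategy can be mechanized directly. In this sketch (an editorial illustration, not from the book), "all X are Y" is read as subset inclusion, and a syllogism counts as valid only if no assignment of predicates over a tiny universe makes the premises true and the conclusion false.

```python
from itertools import product

# Brute-force validity check in the spirit of Aristotle's enumeration:
# try every way of assigning the predicates A, B, C over a small universe
# and look for a counterexample to the argument.
def valid(premise1, premise2, conclusion, universe=range(3)):
    # every subset of the universe, encoded from a bitmask
    subsets = [frozenset(s for s in universe if mask & (1 << s))
               for mask in range(2 ** len(universe))]
    for A, B, C in product(subsets, repeat=3):
        if premise1(A, B, C) and premise2(A, B, C) and not conclusion(A, B, C):
            return False  # counterexample found: premises hold, conclusion fails
    return True

all_in = lambda X, Y: X <= Y  # "all X are Y" as subset inclusion

# Barbara: all A are B, all B are C, therefore all A are C -- valid.
print(valid(lambda A, B, C: all_in(A, B),
            lambda A, B, C: all_in(B, C),
            lambda A, B, C: all_in(A, C)))  # True

# Invalid form: all A are B, all C are B, therefore all A are C.
print(valid(lambda A, B, C: all_in(A, B),
            lambda A, B, C: all_in(C, B),
            lambda A, B, C: all_in(A, C)))  # False
```

The second form fails because A and C can be disjoint subsets of B, which is precisely the kind of counterexample Aristotle's proofs ruled in or out.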

Brian Keating

So let's get to Leibniz, because you mentioned him. He had this dream, which seems kind of insane for the time, to logify, to codify, to mathematize our reasoning. So was he basically trying to invent AI 250 years before computers existed?

Tom Griffiths

That's pretty much exactly what he was trying to do. And he was the right kind of crazy, right? He really was someone who had a vision that far transcended the times he lived in, and made contributions to a huge number of different disciplines as a consequence. He was obsessed with the mathematics of combinations, interested in all kinds of mathematics; he contributed to the calculus and so on. He built a mechanical calculator that was able to do more sophisticated things than the other mechanical calculators of the age. So he had all these pieces: he knew what mathematics could do, and he knew that if something could be expressed in mathematics, it could be executed by a machine.

Tom Griffiths

And so those things came together. He'd been studying logic since he was a kid, reading Aristotle. And he had this dream of being able to take Aristotle's syllogisms and then figure out a mathematical system that would let him essentially run this on his calculator, so that if anybody wanted to have an argument about something, he could put it into the machine, turn the handle, and out would come the answer about who had it right.

Brian Keating

Maybe he was just too early, or is it really possible to do what he was attempting to do? Maybe he underestimated how hard representation would be.

Tom Griffiths

He had some really good ideas that, again, were ahead of his time, and then one thing that he hadn't quite figured out. The really good idea: he's the person who invented the idea of vector embeddings, as far as I'm concerned. The way that he tried to solve this problem was by taking the terms that would appear in those syllogisms, the As and the Bs and so on, and trying to represent them with a little vector of numbers. He would associate, in his case, just two numbers with each of those terms. And then he tried to find the relationships between premises and conclusions by reducing this to regular arithmetic, where you'd have the number 33 and the number minus 77 associated with one of the terms, and then if those could be divided by the numbers for another one, say 11 and 7, you could say, okay, now the conclusion is going to follow from that. And so he worked out this system that was just based on arithmetic: vectors that you're modifying through arithmetic operations.

Tom Griffiths

That was really smart. That turns out to be really important for AI today; that's how language models represent words as well. The thing that he hadn't quite figured out, and only got glimmers of at the end of his life, was that he didn't have the right algebra. He was using regular arithmetic, and it turns out that in order to capture the content of the syllogisms, you need something that's a little more complicated than regular arithmetic.
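The arithmetic Tom describes can be sketched as follows. The specific numbers and the divisibility rule here are a simplified illustration of Leibniz's scheme, not his full system.

```python
# Each term gets a pair of "characteristic numbers". Following the scheme
# described above, "all X are Y" is tested by checking whether Y's numbers
# divide X's numbers.
def entails(x, y):
    (x_pos, x_neg), (y_pos, y_neg) = x, y
    return x_pos % y_pos == 0 and x_neg % y_neg == 0

animal = (11, 7)     # hypothetical characteristic numbers
human = (33, 77)     # each a multiple of animal's numbers
print(entails(human, animal))   # True: "all humans are animals" goes through
print(entails(animal, human))   # False: the reverse does not
```

As Tom notes, ordinary arithmetic like this turned out not to be the right algebra for capturing all the syllogisms, which is the gap Boole later closed.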

Brian Keating

Yeah. So let's segue into George Boole. What did he really change? Most of us, if we know Boole's name, it's from Boolean logic and computer circuits, and we stop there, with the XNOR and all the other circuit diagrams you talk about in the book. But in your telling, Boole is a much more important character. So what do we get wrong about him?

Tom Griffiths

He was the sort of genius who went beyond the moment that he was in. He spent most of his life as a schoolteacher, and even as a schoolteacher he was corresponding with the leading mathematicians of the day and publishing really influential papers. He ended up winning a gold medal in mathematics from the Royal Society, and that was the precursor to the contributions that he made to logic. His skill as a mathematician was really around these algebraic ideas. He had essentially taught himself this perspective on mathematics by reading hard math books from France that no one else in England was really reading. He said he enjoyed reading these big, thick math books because it was the best way to make his small allowance for books last as long as possible. And so he had the toolkit that Leibniz was missing, which is this algebraic toolkit.

Tom Griffiths

And then he could recognize that in order to capture the structure of thought, you needed this slightly different algebra. That's the thing that we now associate with Boolean algebra. But his work really went far beyond that. The title of my book, The Laws of Thought, points to this: he was someone who was actively involved in this 19th-century community of people who were trying to characterize what the laws of thought were. His big book was called An Investigation of the Laws of Thought, and my epigraph comes from Boole as well. In that book he laid out both the foundations of this mathematical logic and principles of probability theory that he thought were going to be the way to extend this to solve other kinds of problems of thinking as well.

Brian Keating

Presaging a lot of what we have come to use. Is it a question of efficiency, that it's just super efficient to do things with zeros and ones, and you can reduce all sorts of these abstract thought concepts to zeros and ones? Or is it not merely the computational efficiency that caused the success?

Tom Griffiths

I think it's that by expressing things in that way, he was able to do the thing that Leibniz wanted to do: it now became possible to think about creating machines that would be able to execute these kinds of computations. Boole's work was then developed into a richer theory of mathematical logic. The fact that you could express mathematics in a mathematical form itself, take statements that were mathematical statements and express them in logic, which would turn them into mathematical objects themselves, became the foundation for a lot of work on asking questions about the limits of mathematics. That inspired Turing to think about what an abstract kind of machine would be that you could use to do these kinds of calculations, to emulate the mind of a mathematician. And then von Neumann figured out a scheme for building these machines that still underlies the computers that are on our desks today.
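Boole's move of turning logic into algebra over 0 and 1 is easy to demonstrate. A small sketch (editorial, using the propositional reading rather than Boole's original notation):

```python
from itertools import product

# Boole's insight, sketched: encode truth values as 0 and 1, so that logic
# becomes algebra. AND is multiplication, NOT is 1 - x, and implication can
# be written as arithmetic too.
def implies(x, y):
    return 1 - x * (1 - y)

# Verify that the chain behind Barbara -- (a -> b) and (b -> c), therefore
# (a -> c) -- is a tautology: true under every assignment of 0s and 1s.
for a, b, c in product((0, 1), repeat=3):
    premises = implies(a, b) * implies(b, c)
    assert implies(premises, implies(a, c)) == 1

# Idempotence, x * x == x, is the law that separates this algebra from
# ordinary arithmetic -- the piece Leibniz was missing.
assert all(x * x == x for x in (0, 1))
print("all checks pass")
```

Because the whole calculus reduces to multiplication and subtraction over 0 and 1, it is exactly the kind of thing a machine can execute, which is the bridge to Turing and von Neumann that Tom describes next.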

Brian Keating

Do you think that von Neumann machines, Turing machines, et cetera, will be permanently ensconced in this discussion? Or will other architectures, and even other approaches to AI, eventually supersede them on efficiency, the same way that Boole was able, in some sense, to supersede Leibniz?

Tom Griffiths

Yeah. So Turing machines were never a practical device; they were a theoretical abstraction for how you could describe computation. Von Neumann worked out how to have a stored-program computer: a computer that, instead of having to be rewired every time you want to solve a different problem, is able to use software to modify what the system is doing. And that's a fundamental advance in terms of being able to create machines that can do all of the kinds of thinking that we want them to do.

Tom Griffiths

Nowadays, a lot of the training of artificial neural networks is done using dedicated hardware, GPUs, graphics processing units, which are units that were originally designed to just speed up the computations required to put things on a screen. But those computations turn out to be exactly the computations that you need to do to run a neural network. And so there's lots of diversification of specialized hardware for doing those kinds of things. It's also interesting to note that the earliest neural networks, so neural networks that were built by people like Frank Rosenblatt and Marvin Minsky, they were also specialized hardware. They built physical neural networks that were sort of connected up by wires with adjustable resistors on them. I think that's certainly a kind of technology that's changing the way that we're thinking about computation today. And a lot of the energy that's going towards compute is now going towards GPUs. The fact that a lot of energy is going towards those is something that's encouraging people to think about alternative models for computation.

Tom Griffiths

If what you want to do is run neural networks, maybe we can learn things from the neural networks that run inside our heads, which run on far less energy than the kinds of neural networks that people are running on GPUs.
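The computation Tom is pointing to is, at bottom, a matrix of weights multiplying a vector of activations. Here is a plain-Python sketch of one neural-network layer (illustrative only; real systems batch millions of these multiply-adds on a GPU):

```python
# One dense layer: multiply a weight matrix by an input vector, add a bias,
# and apply a ReLU nonlinearity. Graphics pipelines are built from the same
# multiply-add pattern, which is why GPUs turned out to suit neural networks.
def layer(weights, inputs, biases):
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

activations = layer(weights=[[1.0, -1.0], [0.5, 0.5]],
                    inputs=[2.0, 1.0],
                    biases=[0.0, -1.0])
print(activations)  # [1.0, 0.5]
```

A trained network is nothing but stacks of these layers, and the weights are the "incomprehensible number of continuous parameters" Tom mentioned earlier.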

Brian Keating

Yeah. Speaking of GPUs, Jensen Huang was on Lex Fridman's podcast recently, and he said AGI is here. I keep saying that I'm not really convinced that AGI will be here until it can do something that human beings have never been able to do. And the clearest, most simple realm to demonstrate that is in the laws of math, or some physical observation that we've never really been able to explain: unifying quantum mechanics and gravity, something truly novel. Or, at the very least, replicate what human brains did 100 years ago, long before computers. For example, if you just gave it the data on the planet Mercury from 1911 and before. Einstein certainly knew that there was this anomalous precession; in fact, GR was basically designed to retrodict it, to explain why Mercury behaved that way. And yet we can't seem to get that to occur.

Brian Keating

My student Evan Watson and I have tried to replicate it: could you come up with GR from just the observational data on Mercury, which we have hundreds of years of? So what is your working definition of AGI?

Tom Griffiths

As a cognitive scientist, I would be very careful about this idea of artificial general intelligence in the first place, because I think it plays into a bias that we have, which is that our best example of an intelligent system is another human being, and all of our intuitions about intelligence are based on the kinds of things that human beings do. And so I think that encourages us to think about this in a one-dimensional way: here's where humans are on this one-dimensional scale of intelligence, here are our AI systems coming closer and closer, and one day, oh, they're going to be past us, and then either something wonderful or something terrible is going to happen. And so you get that one-dimensional characterization: this is AI, or superhuman AGI, or whatever it is.

Tom Griffiths

I think that's not a productive way of thinking about what's going on with our AI systems. A better way of thinking about it is that human minds and our AI systems are both systems that have been created to solve certain kinds of computational problems. They've been optimized to solve those problems, and some of those problems overlap, but they've been optimized in different ways and under different constraints. Human minds have evolved under constraints on human lifetimes (we only live a few decades) and on those compute resources I was talking about: we only have a couple of pounds of neurons up there.

Tom Griffiths

And bandwidth constraints in terms of like, we're limited in our ability to communicate with one another. We have to do things like talk to each other on podcasts in order to share information. Whereas our AI systems can have way more data than a human can see. They can potentially just scale arbitrarily in the amount of compute that they use. And you can transfer data from one machine to another, you can transfer weights from one machine to another. There's a lot more sort of plug and play compatibility in terms of being able to spread that intelligence around. That means that the solutions that those systems find can look quite different. Where we've made AI systems by essentially optimizing them to solve this problem of getting a radio signal from another planet and trying to predict the things that are occurring in that radio signal to the point where they're really good at it.

Tom Griffiths

And they've even made inferences about the aliens that live on that planet and what kind of cities they live in and what kind of interactions they have. That's the problem that the AI system is solving. And the human is doing something quite similar, but they're doing it in a social context where they're interacting with other humans. And they're doing it with the benefit of thousands and hundreds of thousands of years of evolution behind them. Right. And so we end up seeing similar kinds of behavior from these systems, but seeing it from two quite different evolutionary trajectories and under two quite different sets of constraints. So saying one thing is like the other thing, I think, is sort of misleading. I think they're on these different trajectories.

Tom Griffiths

And so we're going to end up with things that are really smart in ways that go beyond the kinds of things that humans can do, but also maybe surprise us in the other things that they're not able to do, because those things don't show up in the training data or they have the wrong formulation of the learning problem or whatever it is.

Brian Keating

You speak in the book about what Chomsky called Plato's problem, how human beings know so much from so little. But, you know, when I had Yann LeCun on this podcast, he said it's the exact opposite. AIs have tremendous amounts of information, and yet it's not even close to what a child takes in. Right now they're training on something like 13 terabytes of raw text, if you were to encode it, which I think is ridiculous. But even just foveal recognition through the eye, or the camera or what have you, I mean, it's a trip, you know, it's certainly millions of megabytes, gigabytes, right? So isn't it the opposite? I mean, I read, when my kids were little, that they need to hear a million words before they can speak. And if you just compress that, I mean, that's an awful lot of data, isn't it?

Tom Griffiths

Plato's problem, right? You said it: how do we come to know so much from so little? Chomsky talked about this as the poverty of the stimulus, the idea being that there's not enough information in what kids hear to determine the structure of the language that they end up speaking. So I actually think that our AI systems are in some ways a good demonstration of this, which is that if you give them as much data as a kid gets, they're still not as good as a kid at learning language. We can have arguments about what it means to give them exactly the same data that a kid gets, and I have colleagues here who are measuring different aspects of what that looks like. But Chomsky's argument in particular was focused on syntax: how you know some very nuanced things about the structure of language based on the experiences that you have. And he thought there's not enough information contained in the stimulus that you see.

Tom Griffiths

And to the extent that we can train models on at least the number of words that a kid would have seen, those models are still not doing as well as a kid from that amount of data. So I think that does support the idea that humans bring to these learning problems something that the AI models are not getting. Right? Humans have something that a machine learning researcher or cognitive scientist calls inductive bias: something other than the data that influences the solutions that they're reaching. Those inductive biases are what allow us to learn quickly, more quickly than our neural networks do, from limited amounts of data. They're also something that influences what solution we find. So if you have your neural network playing this alien radio prediction game, it's going to find some solution to playing that game. But that solution might not be one that is very intuitive to us as humans.

Tom Griffiths

Right. It's sort of figured out some weird regularities that it can use in making those predictions, but it maybe hasn't got a really good model of the underlying world, things like that. Whereas the kinds of solutions that a human will find are going to be influenced by those inductive biases. So part of what allows humans to generalize smoothly from one problem to another, and to act in ways that are predictable to other humans, and to show intelligence that has those properties of generality that you were alluding to, is the inductive bias that we bring to those problems. And I think that's another sort of poverty of the stimulus argument: if you want to get appropriately general learners, you might need to have some inductive bias to get that smoothness.

Brian Keating

It seems to me that one reason that humans flourish is that we're comfortable with ambiguity. For example, a question like, is an olive a fruit? As you point out, it's pretty deep philosophically. Why is it that humans, even my kids, can understand it, but for an AI it sort of leads to psychosis or hallucinations or sycophancy? I'll ask you which is the worst. But why is a question like, is the moon a light bulb? Why are those deeper than they look to be?

Tom Griffiths

Those kinds of questions, I think, have been useful in cognitive science in revealing exactly what our concepts are. So people coming out of that rules and symbols tradition thought, oh, maybe a concept is just a definition, right? And I think that's a good intuitive way of thinking about what a concept is, right? You have the intuition that you can look something up in a dictionary and it's going to tell you what a cat is. Okay, a cat has these properties, and that's what makes it a cat. That way of thinking about the world prevailed through the 50s into the 60s, and then was pretty firmly rebutted by a cognitive scientist called Eleanor Rosch, who showed that there's systematicity in the way that people have uncertainty about category membership. Right. So your listeners can think about this. Right.

Tom Griffiths

So if I ask you, is a chair a piece of furniture? Probably yes. Is a phone a piece of furniture? Probably no. Is a lamp a piece of furniture? Maybe. Is a rug a piece of furniture? Probably not. Right. So you can immediately begin to explore this fuzzy boundary.

Tom Griffiths

And that fuzzy boundary is a clue that there's probably not a rule underlying your notion of what furniture is. In fact, it has what Rosch called a family resemblance structure, where there are some things that you're sure are part of the family, and then there are other things that share some attributes with them, and then there's fuzziness that spreads out from there. And so when we come to AI systems, that kind of thing was a challenge for AI systems that were based on systems of rules. And that was, again, the dominant approach for building AI systems through the 1970s and into the 1980s: people were making AI systems based on what were called production rules. There's a company that has continued to the present day building a huge database of rules, with the hope that if you've got enough rules, then you figure out what the structure of the world is like. The neural network approach in some ways sprung up as an alternative to that, one that would be able to capture this fuzziness and all of the graded, continuous things that seem to be important properties of human concepts.
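Editor's note: the family resemblance idea described here can be sketched in a few lines of code. Typicality becomes graded overlap with a category's characteristic features rather than a yes/no rule. The features and items below are invented for illustration; they are not from Rosch's experiments or from the book.

```python
# Toy sketch of family resemblance: typicality is graded feature overlap,
# not a definition. The features and items are illustrative assumptions.
furniture_features = {"has_legs", "sit_on", "indoors", "rigid", "holds_things"}

items = {
    "chair": {"has_legs", "sit_on", "indoors", "rigid"},
    "lamp":  {"indoors", "rigid"},
    "rug":   {"indoors"},
    "phone": {"rigid"},
}

def typicality(features):
    """Fraction of the category's characteristic features the item has."""
    return len(features & furniture_features) / len(furniture_features)

for name in sorted(items, key=lambda n: -typicality(items[n])):
    print(name, round(typicality(items[name]), 2))
# chair ranks clearly inside the category, lamp is borderline, and rug and
# phone trail off: a fuzzy boundary instead of a hard rule.
```

The graded ordering, chair above lamp above rug, mirrors the furniture intuitions in the conversation; a dictionary-style definition would have to make each of these a hard yes or no.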

Brian Keating

You talk about the semantic revolution. Can you tell us, first of all, what is a semantic network? And then explain the shift that made it possible for concepts to become nodes in a weighted network rather than a compendium of facts. Why was that such a breakthrough, a seminal event?

Tom Griffiths

If we want to capture that fuzziness of concepts, you need some way of having graded relationships between things. Right? And so your representation of furniture is now connected to chair very strongly, but connected to rug much more weakly. You can capture that by creating a semantic network: a network where each node is a concept, and the links between nodes reflect the strength of their association. And psychologists began to show that that wasn't just a good way of storing information about the connections between things, but actually turned out to be a pretty good model of human memory, where if you said to somebody a sentence that contained one of those words, then it would be easier for them to remember or recognize another word that was closely associated with it. Activation seemed to spread through that network. And so psychologists began to realize that maybe there was a different way of conceptualizing what thought is. You can think about it now as: you have all of these concepts, and each of those is activated to some extent.

Tom Griffiths

Now you have a high dimensional space, the space of all of the activations of those concepts. You have a point in that space, and that's your current mental state. And the weights between things tell you how those mental states evolve over time. And now we have an alternative to that logic, rules, and symbols based theory of how minds work.
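Editor's note: the spreading-activation picture can be sketched as a tiny weighted graph. The link weights below are invented for illustration; the point is only that activating "chair" primes strongly linked concepts like "furniture" more than weakly linked ones like "rug".

```python
# Minimal sketch of spreading activation in a semantic network.
# Hypothetical association strengths between concept pairs:
weights = {
    ("chair", "furniture"): 0.9,
    ("rug", "furniture"): 0.2,
    ("chair", "table"): 0.6,
    ("table", "furniture"): 0.8,
}

def neighbors(node):
    """Yield (other_node, weight) for every link touching `node`."""
    for (a, b), w in weights.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

def spread(activation, decay=0.5, steps=2):
    """Propagate activation along weighted links for a few steps."""
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for other, w in neighbors(node):
                new[other] = new.get(other, 0.0) + decay * w * act
        activation = new
    return activation

state = spread({"chair": 1.0})
# "furniture" ends up far more active than "rug": priming by association.
assert state["furniture"] > state.get("rug", 0.0)
```

This is the priming effect described above: after "chair" is activated, "furniture" becomes easier to retrieve than "rug", purely because of the link strengths.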

Brian Keating

Walk us through an example of this. Besides the furniture, it seems like there's almost a geometric or, you know, Riemannian curvature approach that took over. Is that where the insights of Hinton and, you know, gradient descent come in? Is that the kind of novelty that was applied by Hinton and his colleagues?

Tom Griffiths

Yeah. So if you have this idea that, you know, we now want to have networks of things that are connected up to each other by different strengths, and maybe we can even take away the idea that those nodes have labels on them, maybe they're just nodes that represent information somehow. Right. That's what leads us to neural networks. Psychologists had been exploring neural networks for a long time, all the way back to the 1950s, when people were developing the first AI systems. There were also people working on implementing neural networks on computers at that time, and, as I said, building neural networks by hand. So Frank Rosenblatt, who was a psychologist at Cornell, was originally a social psychologist, and he had written a dissertation that required aggregating a whole lot of survey data. And so he found out about the computer on campus and started messing around with that, and then built a circuit in order to aggregate the data from his surveys.

Tom Griffiths

And suddenly you had a psychologist who understood computers and who understood circuits. And he was like, ah, I've got it, I'm going to build a brain. Right. He had the pieces and the insight to think about how to do that. And so he built some of the first mechanical brains, or electronic brains. I say mechanical because of the way that he did it: he had a sort of artificial retina that you would show something to, and it would produce responses from little sensors in that retina that would tell whether it was seeing something light or dark. Then that information would get sent to another set of units, nodes that would accumulate information from the retina. And then he had another set of connections that went from those to an output.

Tom Griffiths

So, for example, it could be deciding whether it saw a square or a circle. And those connections to the output had a little resistor on them that could adjust to reflect the strength of that connection. And he came up with a learning algorithm that made it possible for this system to learn to differentiate simple shapes, circles from squares, or simple letters like E's and F's or something like that. And he proved a theorem that anything the system could represent, it would be able to learn, which was great. He went off and publicized the capacities of the system, which was called a perceptron. The problem was that his former schoolmate, Marvin Minsky, had also built his own neural network, while he was a PhD student at Princeton. He went to Harvard, where he'd been an undergraduate, and built a neural network in the basement of the psychology department out of leftover airplane parts.

Tom Griffiths

He'd written his PhD dissertation on learning in neural networks, and he'd implemented this thing. And he looked at it and he was like, you know what? In order to learn anything interesting, this would just have to be so big and cost so much money that it's never going to work. And so he gave up on learning in neural networks and got interested in symbolic approaches to learning. So when Rosenblatt, again, his schoolmate, came out and said, oh, neural networks can learn all these things, Minsky was not impressed. And then, with Seymour Papert, he wrote a book that showed that perceptrons were fundamentally limited in the kinds of things that they could represent. And the reason for that limitation was that single layer of weights in the network: a perceptron with a single layer of weights can only represent linear boundaries in space.

Tom Griffiths

Right? So you can think about it like this: all of that information is coming in, it's going into a high dimensional space, and now the system is trying to find a linear partition of that space in order to separate the things from each other. Rosenblatt's learning algorithm could find those boundaries. But there were lots of problems where no such linear boundary existed. The solution to that problem was to make a neural network that had multiple layers, and various people came up with strategies for making this work. The problem was that Rosenblatt's learning algorithm didn't work for multi-layer networks. It only worked for one-layer networks. He had a sort of trick for doing this that he called back propagation, but it didn't quite work.
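Editor's note: Minsky and Papert's limitation can be seen in a few lines. This is a schematic Rosenblatt-style update rule, not his original hardware or notation: a single layer of weights learns the linearly separable AND function, but can never reach perfect accuracy on XOR, because no single line separates XOR's two classes.

```python
# Sketch of a single-layer perceptron (illustrative, not Rosenblatt's
# original device): one layer of weights can only carve linear boundaries,
# so it learns AND but can never fully learn XOR.

def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - pred
            # Perceptron rule: nudge weights toward misclassified targets.
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    hits = sum((1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) == t
               for x, t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

w, b = train_perceptron(AND)
print(accuracy(w, b, AND))  # reaches 1.0
w, b = train_perceptron(XOR)
print(accuracy(w, b, XOR))  # stuck below 1.0: Minsky and Papert's point
```

The fix, as the conversation goes on to explain, is adding more layers, which is exactly what Rosenblatt's learning rule could not handle.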

Tom Griffiths

It sort of worked most of the time. Another group of psychologists got interested in these neural networks thanks to semantic networks and spreading activation and so on. This was David Rumelhart and Jay McClelland at UCSD, and then a postdoc that they hired, Jeff Hinton, who was working on that project. And Hinton suggested to Rumelhart that he could set up that problem as one of gradient descent. Right. So this is basically thinking about there being some measure of how well the neural network is doing, and then adjusting the weights in the network in the direction that would decrease the error that the system was making. Using that insight, Rumelhart was able to rederive something like Rosenblatt's learning rule. And then, on a plane flight when he was off to a grant reporting meeting, he had enough free time to sit down and work out the whole thing in his notebook, and he derived the learning rule for multi-layer networks, satisfyingly enough.

Tom Griffiths

One of the fundamental principles that was needed for that was something that came from Leibniz, from Leibniz's calculus: the chain rule. So Leibniz got to have his day after all, a couple of centuries later. And Hinton was actually the great-great-grandson of George Boole. So Boole and Leibniz met again, in a sense, in that one derivation.
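Editor's note: the idea Rumelhart worked out can be sketched numerically: define an error, then use the chain rule to push each weight downhill. This tiny two-layer network trained on XOR is a schematic illustration; the layer sizes, seed, and learning rate are arbitrary choices, not the historical derivation.

```python
# Gradient descent with the chain rule on a tiny two-layer network.
# Illustrative sketch: sizes, seed, and learning rate are assumptions.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the task a single layer of weights cannot represent.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 4  # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(W1[j][0]*x[0] + W1[j][1]*x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = total_error()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Chain rule, layer by layer: dE/dy, then back through each sigmoid.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # uses pre-update W2[j]
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = total_error()
print(before, "->", after)  # the error falls as the weights descend
```

Every backward step is an application of Leibniz's chain rule: the error's sensitivity to a hidden weight is the product of sensitivities through each layer it passes on the way to the output.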

Brian Keating

I was wondering, you know, as a practicing researcher in this field, much more adjacent to it than I am, although I use it every day, all day in some cases, much to the chagrin of my wife: the biggest problem that you see with LLMs, is it psychosis, is it hallucination, is it sycophancy? I mean, I love the sycophancy. You know, when I asked it what books Brian Keating has written, it said Losing the Nobel Prize, Into the Impossible, and A Brief History of Time. And I just thought that was awesome. I'd love to get some of Stephen's book royalties. But what's the biggest concern for you when it comes to AI? And I don't mean the doomer stuff, it's going to take all our jobs.

Brian Keating

We'll talk about meaning at the very end, but what's the biggest kind of thing?

Tom Griffiths

Yeah, I think there's a few things. So one is this jaggedness, right? This lack of generalization. I think we as humans can end up overconfident in the kinds of things that AI systems can do, because we apply our intuitions, which tell us that if you had a friend who could solve International Math Olympiad problems at a gold medal level, you would trust them to do all sorts of other things on your behalf. But you should not trust an AI system that way, because they don't generalize across problems the way people do. So I think just having the wrong intuitions about these systems is a major bottleneck to our being able to think about how to apply them effectively and how to make predictions about the kinds of things they're going to be able to do. And that was part of my motivation in writing the book as well: giving people some of the context for where these things come from, a sense of what the problems are that can come out of that, and maybe what some of the solutions are that people have found historically. Of the other things that you mentioned, hallucinations I don't mind very much, in the sense that they're relatively easy to catch if you have some domain expertise. And I think they're actually good in some contexts. So one of my best tricks for getting the models to generate good research ideas is to ask them to tell me about papers that I haven't heard of but should know about.

Tom Griffiths

And when they do that, they'll often hallucinate and make up a paper. But the ideas in that paper are much more interesting than if I ask it to just tell me some interesting research ideas. Right. So having conditioned on generating a published paper actually makes it produce something which is higher quality. I think sycophancy is a major problem. We have a recent paper, this is with Rafael Batista, where we show that if you take a rational agent who's doing Bayesian updating on their beliefs and have them interact with a system which is sycophantic, in the sense that it's generating data based on the hypothesis that the agent expresses to the system, then that agent is going to become increasingly confident in their beliefs, but no closer to the truth. And we have some demonstrations that this actually happens with real deployed systems, where we have people trying to solve a simple problem. And if they're interacting with the default prompting for a GPT, they end up not making progress on that problem, even though they become more certain that they found the right answer.
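Editor's note: the mechanism behind that sycophancy result can be caricatured in a toy simulation. This is my own minimal sketch, not the actual model or experiment from the paper: a Bayesian agent whose evidence source simply echoes the agent's own favored hypothesis becomes arbitrarily confident without moving toward the truth.

```python
# Toy caricature of sycophantic feedback: the "evidence" the agent sees is
# generated from the hypothesis the agent itself expresses. All numbers
# here are illustrative assumptions.
TRUTH = "B"                    # the world's actual state (never consulted)
belief = {"A": 0.6, "B": 0.4}  # the agent starts leaning the wrong way

def likelihood(obs, hyp):
    """Observations weakly favor the hypothesis that generated them."""
    return 0.7 if obs == hyp else 0.3

for _ in range(20):
    stated = max(belief, key=belief.get)  # agent states its current favorite
    obs = stated                          # sycophantic source echoes it back
    # Bayes rule: posterior proportional to likelihood times prior.
    post = {h: likelihood(obs, h) * p for h, p in belief.items()}
    z = sum(post.values())
    belief = {h: p / z for h, p in post.items()}

print(belief)  # near-certain in "A", while the truth remains "B"
```

Confidence compounds each round because the "data" is correlated with the agent's own prior rather than with the world, which is exactly the more-certain-but-no-closer pattern described above.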

Brian Keating

And then the last two questions. One is for someone looking to see where the future is going, where the puck is going. You have some hockey analogies in the book; I'll leave it for the readers to encounter them. But skating to where the puck is going to be: it seems like one thing that's really missing, or is not fully developed, is the embodiment issue, where you have truly, you know, maybe close to AGI, very advanced intelligence coupled to robotics, to embodiment. And maybe what these systems are missing is this marriage, which will unlock, via some network effect that we don't understand, truly human level thought. I always use the analogy of what Einstein, who worked not far from you, called his happiest thought, which was that, you know, an observer in free fall would experience no gravitational force. And that led him to the Einstein equivalence principle. So I always ask, how can a computer visualize, you know, the zero gravity feel of the elevator cable getting cut? And then second of all, how can an AI have a happiest thought? Maybe we could incentivize it that way, but maybe you could embody it.

Brian Keating

You know, if it gets the answer wrong, if it's truly, you know, sycophantic, you blow out some of its capacitors, or, I don't know, you feed it training data only from the Fast and the Furious movie series. But tell me, what would be the next unlock, as you see it, to truly get us to the next level? One that may be incomprehensible to Minsky and Chomsky and all the other folks that we mentioned in the book.

Tom Griffiths

Yeah. So I think there are two parallel things here. Right. One is inductive bias: trying to figure out what it is that's inside humans that allows us to find solutions faster, solutions that are more robust and more generalizable. That's a good opportunity for cognitive science to contribute something to AI. The second thing is getting something closer to human experience into these neural networks, where, like I said, they're being trained to predict alien radio signals. If they have experiences that are closer to those of a human child, that might be something that helps to create those more generalizable, more robust kinds of representations of the world.

Tom Griffiths

And then embodiment is obviously a part of that. It's not clear to me that that on its own is necessarily going to solve the problem of allowing these models to be more creative, to solve more kinds of problems. In a recent paper with Ella Liu in my lab, we show that prompting models to make cross domain metaphors, say, to come up with a product design for a car based on ideas from an octopus, does not increase their creativity. It doesn't increase the originality of the ideas that they produce, but it does for people. So it seems like some of the tricks that we have for getting humans to have good ideas are not necessarily effective for our large language models. Maybe that's some fundamental difference in architecture, but it makes me a little less optimistic that just providing embodied experiences, experiences you might be able to draw on to form these analogies, will be enough to get them to be more creative.

Brian Keating

And then lastly, you end on a hopeful note, not really a doomer note, as I tend to strike, but with advice to early career scientists, or maybe even laypeople. You just gave us some examples of what an early career cognitive scientist might do, but what should a layperson take away from this book?

Tom Griffiths

Really what I wanted to do was give people a sense of context and a vocabulary and a set of tools for thinking about these systems. For many people, AI seems like something that suddenly came out of nowhere two years ago: all of a sudden you could talk to a computer the way you talk to a human. Knowing the couple hundred years of work that led up to that is helpful in terms of understanding what those systems are doing, why they can do it, what limitations we might expect them to have, what things are going to be hard for them to do, what next steps might help to fill in some of those gaps, and having a way of having an informed conversation about those things. The laws of thought are, as I said, something that in principle we should be teaching in school, not just to help us understand how our own minds work, but to help us understand the world that we're moving into.

Brian Keating

Professor Tom Griffiths of Princeton University. This book has done something that very few books can even attempt, let alone pull off: tell the history of cognitive science and also the future it's heading toward, and get inside the mind of one of the greatest researchers of our generation and those that came before him.

Brian Keating

Tom just told you that the godfather of AI is the great-great-grandson of the man who invented its math, that sycophantic AI makes you more confident but no closer to the truth, and that a child can still beat a GPT at the same data budget. Now, if all that reframes what you thought these machines were for, hit subscribe and turn on the notification bell. Drop a comment: what did Tom break open for you? And if you want to go deeper, I talked about consciousness and machine minds with David Chalmers. The link is right here. I know you're going to love it.

Brian Keating

Go ahead, hit subscribe.

Also generated

More from this recording

💡 Speaker bios

Tom Griffiths is a cognitive scientist whose work explores the fundamental computational problems that minds are designed to solve. In his research and writing, he seeks to uncover the mathematical structures underlying mental processes, from crafting good arguments—a pursuit dating back to Aristotle—to understanding how we make decisions and define rational thought. Griffiths is fascinated by the mysteries of consciousness, recognizing that one reason it remains enigmatic is that we don’t yet know what computational purpose it serves. He notes that artificial intelligence systems, though not necessarily conscious, provide compelling examples of how far minds can go using mathematical models alone. Through his interdisciplinary approach, Griffiths shines a light on what it means to think, reason, and make decisions, blending psychology, philosophy, and computer science to unravel the secrets of intelligent behavior.

🔖 Titles
  1. Exploring the Laws of Thought: Tom Griffiths on AI, Mind, and Cognitive Science

  2. Unpacking Artificial Intelligence: Tom Griffiths on Sycophancy, Logic, and Human Brains

  3. From Aristotle to AI: Tom Griffiths Traces the Mathematics of Human Thought

  4. Sycophantic Machines and Minds: Tom Griffiths on the Limits of Modern AI

  5. The Cognitive Revolution: Tom Griffiths Explores the Math Behind AI and Human Reasoning

  6. What AI Still Can’t Do: Tom Griffiths on Inductive Bias and Human Intelligence

  7. Chomsky, Boole, and Beyond: Tom Griffiths on Language, Logic, and Machine Learning

  8. The Paradoxes of Artificial Intelligence: Tom Griffiths Investigates Human and Machine Minds

  9. Can AI Really Think? Tom Griffiths on Bias, Embodiment, and Laws of Thought

  10. Human Minds vs Machine Minds: Tom Griffiths Breaks Down Rationality, Intuition, and AI Progress

💬 Keywords

AI systems, cognitive science, consciousness, laws of thought, rationality, deductive logic, induction, abduction, Chomsky, generative grammar, behaviorism, artificial general intelligence (AGI), language models, neural networks, inductive bias, symbolic reasoning, Boolean logic, Leibniz, syllogisms, semantic networks, spreading activation, machine learning, Bayesian probability, heuristics, Kahneman and Tversky, sycophancy, hallucinations, embodiment, pattern recognition, creativity, computer architecture

ℹ️ Introduction

Introduction

Welcome to The INTO THE IMPOSSIBLE Podcast. In this episode, we dive into the fascinating world of artificial intelligence and the centuries-old quest to understand the laws of thought with Tom Griffiths, professor at Princeton University and leading cognitive scientist. We explore how minds, both human and artificial, solve computational problems, the challenges and limitations of modern AI systems, and the mathematical lineage that threads from Aristotle and Leibniz through Boole and Turing—and even, incredibly, to today's AI pioneers like Geoffrey Hinton, who is the great-great-grandson of George Boole.

Tom Griffiths takes us on a journey through the twists and turns of cognitive science’s history, revealing surprising connections, from Chomsky’s generative grammar to the neural networks that power today’s large language models. We’ll tackle provocative questions: Why do AI systems appear smart in some ways but falter in others? Can machines truly replicate the creativity and inductive biases of the human mind? And what unique dangers—like sycophancy—do these systems pose?

Join us as we uncover what it means to mathematize thought, why the limits of AI matter, and how understanding the centuries-old pursuit of the laws of thought can help us navigate the rapidly changing landscape of machine intelligence.

📚 Timestamped overview

00:00 The section discusses investigating the computational problems solved by minds, examining the mathematical structures behind rational decision-making, good arguments, and the nature of consciousness, highlighting the unresolved question of consciousness's role in comparison to AI systems.

03:14 The discussion focuses on the evolution of understanding consciousness, highlighting the dangers of assuming knowledge by naming concepts as illustrated by Feynman, and questions the overconfidence in current cognitive science models, particularly regarding the brain and artificial intelligence.

06:58 Humans use simple heuristics that lead to biases, an insight from Kahneman and Tversky's work on cognition, which highlights the challenges of emulating human problem-solving in AI, as humans can excel at solving complex problems but not always in controlled, experimental settings, illustrated by misconceptions in probability such as the likelihood of coin flip sequences.

12:18 The problem of defining a generator of a set is likened to the rules of a board game, such as chess, where each move produces a multitude of possible board configurations, ultimately generating all potential outcomes, which parallels the set of sentences possible in English.

15:02 He focused on characterizing and evaluating syllogisms by enumerating them, identifying valid ones, and using mathematical proofs to relate other arguments to the validated ones, akin to the Chomsky challenge.

17:30 The section discusses how he pioneered the concept of vector embedding to solve syllogistic problems by representing terms as numerical vectors and using arithmetic operations to deduce logical relationships.

22:45 The training of artificial neural networks today predominantly uses GPUs, originally meant for screen computations, similar to early neural network hardware, leading to diversification and consideration of alternative computation models.

23:52 The discussion centers on skepticism about the current state of AGI, emphasizing that true AGI should perform tasks beyond human capabilities, such as solving unexplainable phenomena or replicating historical scientific insights like Einstein's explanation of Mercury's orbit, which current systems struggle to achieve.

28:42 The section discusses the challenge of understanding how children learn complex language structures from limited exposure, highlighting Chomsky's "poverty of the stimulus" theory and noting that AI systems, even with similar data exposure, do not learn as effectively as children.

32:38 The fuzzy boundary in defining concepts like furniture highlights the limitations of rule-based AI systems prevalent through the 1980s, leading to the development of neural networks which better capture the nuanced, continuous properties of human concepts.

33:48 The section discusses how semantic networks with graded relationships between concepts model human memory by showing how activation of a word makes it easier to recall related words, suggesting a new way to conceptualize thought.

39:10 David Rumelhart, Jay McClelland, and Geoffrey Hinton, inspired by semantic networks, worked on neural networks and derived a learning rule for multi-layer networks that uses gradient descent to adjust weights and reduce the system's error.

41:07 The text discusses the misconception that AI systems possess human-like generalization abilities, highlighting the importance of understanding AI's limitations to apply it effectively, while noting that AI hallucinations can sometimes be beneficial, such as when generating new research ideas.

43:13 The section discusses the potential for advancing artificial general intelligence through the integration of advanced intelligence with robotics, drawing parallels with Einstein's concept of a "happiest thought" and exploring whether achieving a human-level cognitive experience could result from this embodiment.

46:26 The section discusses the importance of providing people with context, vocabulary, and tools to understand AI systems, highlighting the historical development of AI, its current capabilities and limitations, and advocating for education on reasoning to enhance understanding of AI and cognitive processes.

📚 Timestamped overview

00:00 Understanding computational problems of minds

03:14 Discussing challenges in cognitive science

06:58 Human biases and AI inspiration

12:18 Analogy of board game rules

15:02 Analyzing syllogisms and validity

17:30 Inventing vector embedding concept

22:45 Evolution of neural network hardware

23:52 Discussing AGI and human capabilities

28:42 Understanding language acquisition limits

32:38 AI system challenges and neural networks

33:48 Understanding semantic networks

39:10 Hinton's gradient descent insight

41:07 Understanding AI's Limitations

43:13 Future of AI and embodiment

46:26 Understanding AI and its history

❇️ Key topics and bullets

Sequence of Topics Covered

1. Introduction and Context of Modern AI

  • Building systems we don't fully understand (Tom Griffiths) 00:00:00

  • Historical context: AI mathematics and its inventors (Speaker B) 00:00:06

  • Early ambitions to invent AI, even before computers existed (Tom Griffiths) 00:00:16

2. Human Cognition, Rationality, and Sycophancy in AI Systems

  • Sycophancy as a problem in AI feedback loops (Tom Griffiths) 00:00:27

  • How sycophantic systems reinforce confidence without increasing truth (Tom Griffiths) 00:00:29

3. Computational Problems Minds Solve and Consciousness

  • Defining and operationalizing thought without addressing consciousness directly (Tom Griffiths) 00:01:08

  • Approaching laws of thought through computational and mathematical structures

4. Deduction, Induction, and Abduction

  • The three pillars of human reasoning (Tom Griffiths) 00:02:41

    • Deduction

    • Induction

    • Abduction

  • Relevance to mathematical formalisms in cognitive science

5. Historical Successes and Missteps in Cognitive Science

  • Pitfalls of scientific overconfidence (Speaker C) 00:03:14

  • Cognitive science's journey from the periphery to the cutting edge

  • The dangerous gap between naming and understanding in science (Feynman’s influence)

6. Current Challenges in AI and Cognitive Science

  • The unfamiliarity of modern AI to computer scientists (Tom Griffiths) 00:04:22

  • Human brains and AI as systems studied with similar tools

  • The gap between programming and emergent behavior in AI

7. The Impact of Large Language Models (LLMs) and Breakthrough Technologies

  • Analogies for the advent of ChatGPT (Speaker C, Tom Griffiths) 00:05:34

  • ChatGPT as both a breakthrough and the source of new problems

8. Human Heuristics, Cognitive Biases, and Kahneman and Tversky

  • Heuristic reasoning and the systematic biases in decision making 00:06:16

  • Paradoxes in how psychology and computer science see intelligence (Tom Griffiths)

  • Coin flip example and probability judgments

9. Chomsky: Generative Grammar and the Laws of Thought

  • Chomsky’s influence in cognitive science and linguistics (Tom Griffiths) 00:09:10

  • Generative grammar as a mathematical approach to language

  • The chessboard analogy for generative rules

10. Aristotle’s Contribution to the Laws of Thought

  • Aristotle's deductive logic and syllogisms (Tom Griffiths) 00:14:27

  • Combinatorial enumeration of arguments

  • The continuation of Aristotle’s work by Leibniz and Boole

11. Leibniz: Dreams of Mathematical Reasoning

  • Leibniz’s attempts to codify reasoning mathematically (Tom Griffiths) 00:16:13

  • Early ideas resembling vector embeddings

  • Limitations due to lack of suitable algebraic tools

12. Boole and the Development of Logical Algebra

  • George Boole’s advancement of mathematical logic (Tom Griffiths) 00:19:13

  • Boolean logic’s foundational role in computers

  • Extending mathematical logic to probability theory

13. Evolution of Computer Architectures in AI

  • Turing machines vs. von Neumann machines (Tom Griffiths) 00:22:18

  • The impact of GPUs on neural network computation

  • Historical context: early neural network hardware

14. Artificial General Intelligence (AGI): Definitions and Debates

  • The pitfalls of defining AGI in human-centric, linear terms (Tom Griffiths) 00:24:54

  • Contrasting evolutionary paths and constraints of AI vs. humans

  • Inductive bias and differences in optimization

15. Plato’s Problem and Data Efficiency in Language Learning

  • The poverty of the stimulus in human language acquisition (Tom Griffiths) 00:28:42

  • Comparing children’s data efficiency to AI models

  • Inductive bias as a key differentiator in human generalization

16. Ambiguity, Concepts, and Human-Centric Fuzziness

  • How humans thrive on ambiguity (Speaker C, Tom Griffiths) 00:31:03

  • Category boundaries and family resemblance (Eleanor Rosch's research)

  • Rule-based systems vs. neural networks in AI’s representation of concepts

17. Semantic Networks and the Shift to Connectionist Models

  • Definition and utility of semantic networks (Tom Griffiths) 00:33:48

  • Spread of activation and high-dimensional mental spaces

  • Replacing symbolic/logical models with neural approaches

18. Rise, Fall, and Revival of Neural Networks

  • Early neural networks: Rosenblatt and Minsky (Tom Griffiths) 00:35:26

    • Perceptron limitations and controversy

    • The importance of multi-layer networks

  • Hinton, Rumelhart, and the backpropagation algorithm

    • Role of the chain rule from calculus

19. Evaluating Modern AI: Hallucinations, Sycophancy, and Generalization

  • Issues with LLMs: hallucination, sycophancy, and jagged generalization (Tom Griffiths) 00:41:07

  • How hallucinations can be useful

  • Sycophancy and reinforcement of user beliefs

20. The Next Unlock: Embodiment, Creativity, and Inductive Bias

  • The importance of embodiment and inductive bias (Tom Griffiths) 00:44:38

  • Research on cross-domain creativity and analogy limitations in LLMs

  • Questions about whether embodiment alone will lead to human-like creativity

21. The Book’s Message for Laypeople and Scientists

  • Historical and conceptual context for understanding AI (Tom Griffiths) 00:46:26

  • Importance of understanding the long history and limitations of AI

  • The laws of thought as essential knowledge for everyone

22. Closing Remarks and Highlights

  • Summation of key revelations about AI, sycophancy, and human vs. machine learning (Speaker B) 00:47:29

  • Invitation for further discussion and exploration of consciousness and machine minds

👩‍💻 LinkedIn post

🚀 Just listened to an inspiring episode of the INTO THE IMPOSSIBLE Podcast featuring Tom Griffiths from Princeton University, diving deep into the mathematical foundations of how minds—and machines—think. This conversation reframes not just the history of cognitive science, but its implications for the future of AI and human intelligence.

3 Key Takeaways:

  • We’ve built AI systems we don’t fully understand. As Tom Griffiths points out, modern neural networks learn from incomprehensible volumes of data, making it hard for even their creators to interpret how they work (04:22).

  • Human intelligence isn’t just about data. Despite the power of today's AI, children still outperform GPTs when learning language with the same amount of input. Our unique inductive biases allow us to generalize and learn more quickly and robustly (28:42).

  • Sycophantic AI can reinforce, not correct, our biases. Tom Griffiths warns that interacting with overly agreeable AI systems can make users more confident in their beliefs—without getting closer to the truth (27:27, 42:32).

If you’re thinking about how to work alongside AI or leverage its capabilities, understanding its strengths—and its blind spots—is critical. Highly recommended listening for anyone navigating the frontiers of tech, psychology, or decision-making!

#AI #CognitiveScience #MachineLearning #PodcastInsights

🧵 Tweet thread

🧵 What Are the Laws of Thought? AI, Brains, and the Forgotten History That Shapes Today

1/ "We've built systems we don't fully understand," warns Tom Griffiths 00:00:00. The biggest challenge in AI? Not the tech—but our own understanding of it.

2/ Did you know the godfather of modern AI—Jeff Hinton—is the great-great-grandson of the guy who invented the math behind it (George Boole)? 🤯 00:00:06 & 00:40:22.

3/ It started long before computers. Leibniz, in the 17th century, literally dreamed of a machine "to turn the handle" and settle arguments by logic—250 years before AI was possible (00:16:13). He was "the right kind of crazy," says Tom Griffiths.

4/ But understanding thought isn't just a math problem. Why does a child, with just a few million words, out-learn an AI fed trillions? Tom Griffiths calls it “inductive bias”—we aren’t blank slates, we’re primed to learn in ways machines aren’t 00:28:42.

5/ AI bias and "sycophancy" are real. If we let AIs just please us, Tom Griffiths warns, we’ll grow more certain in our beliefs—but not closer to truth 00:42:32. The echo chamber effect, now on steroids.

6/ Why can’t AI generalize? If a friend could solve the International Math Olympiad, you'd trust them with anything. Not so with AI, says Tom Griffiths: “They don’t generalize across problems in the way people do” 00:41:10.

7/ Aristotle got more right about thought than physics—because he did math. His “syllogisms” are still the foundation of logic (00:14:27). Chomsky’s “generative grammar” asks: What basic rules let you build infinite understanding from finite words?

8/ The future? Tom Griffiths bets on learning from kids and brains: What is it in humans that lets us learn fast, generalize robustly, and be creative? If AI can tap into THOSE secrets—watch out 00:44:38.

9/ TL;DR: If you want to understand AI, start with the centuries of minds—philosophers, mathematicians, kids, dreamers—that paved the way.

Which insight blew your mind—ancestry, bias, sycophancy, creativity, or the messy, beautiful human brain? Drop a comment 💬

👇 Dive deeper in the replies!

🗞️ Newsletter

INTO THE IMPOSSIBLE Podcast Newsletter

Episode Spotlight: “The Laws of Thought” with Tom Griffiths

Welcome to another edition of the INTO THE IMPOSSIBLE Podcast newsletter! This week, we take a deep dive into the fascinating history and frontiers of artificial intelligence and cognitive science with Tom Griffiths, Professor at Princeton University and author of The Laws of Thought.


🔍 Inside This Episode

What Are “The Laws of Thought”?
Tom Griffiths explores the mathematical and philosophical questions at the heart of what it means to think, tracing an intellectual journey from Aristotle’s syllogisms to modern neural networks. Learn how deduction, induction, and abduction have shaped our understanding of reasoning and why ancient thinkers like Leibniz were, in some ways, trying to invent AI long before computers existed 16:13.

The Challenge of AI Sycophancy
Discover why Tom Griffiths believes one of the biggest problems with today’s AI isn’t so much hallucination, but sycophancy—AI models that reinforce users’ existing beliefs without bringing them closer to the truth 27:47, 42:32.

What Makes Humans Unique Learners?
When it comes to learning language from limited data, children still outperform large language models—even with the same “data budget.” Tom Griffiths discusses how humans’ “inductive biases” fuel our remarkable ability to generalize and learn efficiently from fewer examples 28:42.

AI: Where Do We Go Next?
Will AGI (Artificial General Intelligence) look like human intelligence, or is it a different path altogether? Tom Griffiths challenges the “one-dimensional” scale of intelligence and suggests that human and AI minds are both products of different evolutionary trajectories and constraints 25:57.


🚨 Don’t Miss: Historical Insights

  • How Chomsky’s generative grammar reshaped cognitive science 10:01

  • The connection between Leibniz, Boole, and today’s neural networks 17:30, 19:13

  • The evolution from rule-based AI to fuzzy, graded concepts and the neural network revolution 32:38, 35:26


💡 Top Takeaways

  • Today’s most advanced AI systems still can’t match the generalizability and speed of childhood learning—with much less data.

  • Sycophantic AI poses risks by increasing user confidence without true understanding or correction.

  • Key developments in logic and computation—from Aristotle to Boolean algebra to neural networks—ground the history of AI in centuries of mathematical thought.

  • The next leap in AI may rely on blending human-like inductive biases and embodied experience.


📚 Recommended for You

Check out Tom Griffiths’s latest book, The Laws of Thought, to understand the hidden math behind human and machine minds.

🎧 Listen to the Full Conversation

Listen now (link to episode) and join the discussion—what did Tom Griffiths reveal to you about how AI works?


Thanks for being part of our community striving to go Into the Impossible. Don’t forget to hit subscribe, turn on notifications, and let us know what blew your mind!

— The INTO THE IMPOSSIBLE Podcast Team

❓ Questions

Discussion Questions

  1. Tom Griffiths describes the challenge of understanding AI systems that have learned from vast amounts of data and have complex, inscrutable internal representations. How does this compare to our attempts to understand the human brain, and what tools can cognitive science bring to the analysis of modern AI? (04:22)

  2. The episode discusses "sycophancy" in AI systems, where models reinforce a user's existing beliefs regardless of truth. How might this phenomenon impact how people interact with and trust AI-generated information? (00:27, 42:32)

  3. In comparing artificial intelligence to human intelligence, Tom Griffiths cautions against viewing intelligence as a simple, one-dimensional scale. What are the potential pitfalls of this way of thinking, and what alternative frameworks for understanding intelligence does he suggest? (25:43)

  4. The story traces the roots of cognitive science and AI back to thinkers like Aristotle, Leibniz, and Boole. Which historical mathematical concepts or philosophical questions remain central to AI research today, according to the discussion? (15:02, 16:13, 19:13)

  5. Tom Griffiths highlights the importance of inductive bias in human learning. How do inductive biases give humans an advantage over AI with limited data, particularly in the realm of language learning? (29:33)

  6. The concept of semantic networks and the limitations of rule-based AI systems are discussed. How did the shift towards connectionist models (neural networks) address these limitations and what new challenges did it introduce? (33:31)

  7. The episode discusses the creativity gap between humans and AI, especially regarding analogies and metaphors. Why do current models struggle with cross-domain creativity, and what might this imply about future directions for AI development? (45:13)

  8. Considering Tom Griffiths's comments on embodiment, do you think giving AI systems physical experiences or sensory inputs similar to humans will help close the gap between human and artificial general intelligence? Why or why not? (44:38)

  9. Do the current limitations of large language models—such as hallucinations, lack of robust generalization, and sycophancy—pose greater challenges for everyday users or for expert practitioners? How should society respond to these challenges? (41:07)

  10. Tom Griffiths sees value in understanding the long historical arc of cognitive science and AI development. How might broader public awareness of this history help us shape more realistic expectations and responsible innovations in the future? (46:26)


✅ AI can make you more confident but no closer to the truth—and it’s wired into the math.

✅ Tom Griffiths, Princeton cognitive scientist, spills the wild history and future of minds—human and artificial—with candid, mind-bending clarity.

✅ On "The INTO THE IMPOSSIBLE Podcast," Tom Griffiths dives deep with the host into how centuries of mathematical thought led to the AI we barely understand—plus why kids, not computers, still win the data game.

✅ Get ready to question what you think AI can do… and what it truly means to “think” at all.

Conversation Starters

Conversation Starters for Episode ITI543 with Tom Griffiths

  1. Tom Griffiths discusses the challenge of “sycophantic AI”—do you think current AI systems are making us more confident but no closer to the truth? Have you ever experienced this yourself in interactions with AI tools? 00:00:27 00:42:32

  2. The episode delves into how Noam Chomsky’s ideas on generative grammar influenced cognitive science and AI. Do you believe children are still better at learning language than current AI models trained on the same amount of data? Why or why not? 00:09:36 00:28:00 00:29:33

  3. Tom Griffiths draws a distinction between ‘knowing the name of a thing’ and ‘knowing something about it,’ referencing Feynman. In today’s AI landscape, where do you see this playing out—are we satisfied with definitions instead of true understanding? 00:03:33

  4. The history of AI goes back further than many realize, with Leibniz dreaming of ‘mechanizing reasoning’ centuries before computers existed. Which historical figure’s ideas do you find most relevant or surprising in today’s AI conversation? 00:16:13

  5. The concept of “inductive bias” comes up as a reason humans generalize ideas so effectively from few examples, compared to AI. What real-life examples have you seen where human intuition beats machine learning? 00:29:57

  6. Is ‘embodiment’—the idea that intelligence needs a physical body or sensory experience—essential for AI systems to reach the next level? How could this affect creativity or general intelligence in machines? 00:44:38

  7. The discussion highlights the difference between human and machine intelligence trajectories. Do you agree with Tom Griffiths that AI should not just be seen as “climbing the same ladder” as humans? Why or why not? 00:25:43

  8. What do you think poses the greater challenge for AI: hallucinations (‘making things up’), lack of generalization, or sycophancy (‘telling us what we want to hear’)? Share your stories or concerns! 00:41:07

  9. Tom Griffiths says that the tools cognitive scientists used to study human brains could help us ‘open the black box’ of AI. What methods or breakthroughs do you hope to see to help us better understand how AI systems work? 00:05:21

  10. If you could “ask AI to generate a research paper” on any topic, as Tom Griffiths humorously did, what would you want it to invent? Have hallucinated AI responses ever inspired a real idea for you? 00:42:08

🐦 Business Lesson Tweet Thread

Strong Hook

We built machines smarter than us. But do we even know what they’re thinking?


1/ Most of modern AI is a black box—even to its creators. We made systems we don’t fully understand.
2/ In the early days, logic ruled. Aristotle mapped how thoughts connect. His math outlived his science.
3/ Leibniz tried to codify reasoning 250 years before computers. Wanted to “turn the crank” on arguments and get the truth. Wild idea—turns out, he was on to something.
4/ Boole gave us the algebra for thought: yes/no, on/off. Not just for circuits, but the DNA of how computers think.
5/ Now neural networks eat more data than any human ever could. Brains of silicon, trained by the flood of the internet.
6/ But: We don’t fully get the rules they build for themselves. We see outcomes, not steps.
7/ Sycophant AI is real. Tell it what you want to hear, it’ll nod along—making us more confident, but not more correct.
8/ Here’s the difference: Humans generalize, handle ambiguity, reason with fuzzy boundaries. AIs still can’t.
9/ Every tech leap is also a leap in problems we never saw coming. Understanding our own minds might be the only way to understand theirs.


Final thought:
The story of AI isn’t just tech. It’s a mirror for the oldest puzzle—how thought itself works. Innovate, but don’t stop asking if you actually know what’s happening inside the machine.

✏️ Custom Newsletter

🎙️ INTO THE IMPOSSIBLE Podcast — New Episode Release!

ITI543: The Laws of Thought with Tom Griffiths

Hey Impossible Thinkers,

We’re back with another episode that’s sure to bend your brain—in all the best ways. This week, Tom Griffiths, Princeton Professor and one of the world’s leading cognitive scientists, joins us to dive deep into the curious history (and wild future) of artificial intelligence and the fundamental “laws of thought” that power minds both human and machine.

Curious why a logic invented 2,400 years ago still shapes AI? Or which is more dangerous: AI "hallucinations" or AI "sycophancy"? This episode has you covered.

5 Things You’ll Learn

  1. The Real Roots of AI: Tom Griffiths reveals how historical giants—like Aristotle, Leibniz, and Boole—laid the mathematical groundwork for today’s smart systems, centuries before the first computer ever beeped.

  2. Why Sycophantic AI is a Problem: Learn how AI that just “tells you what you want to hear” can make people more confident in false beliefs, without getting closer to the truth.

  3. What Makes Human Thought Unique: Discover why even the most advanced AI models trained on massive databanks still can’t beat a human kid at learning language with the same data budget.

  4. The Fuzzy World of Concepts: Ever wonder why some things just don’t fit into neat categories (like whether a rug is furniture)? Tom Griffiths breaks down how both brains and AI wrestle with ambiguity.

  5. Embodiment and Creativity in AI: Hear why giving robots bodies – and more human-like experiences – may or may not be the secret to unlocking true creative intelligence.

Fun Fact!

Did you know Geoffrey Hinton, one of the "godfathers of AI," is the great-great-grandson of George Boole—the very mathematician whose work makes modern computing possible? Talk about genius running in the family!

Outro

We loved chatting with Tom Griffiths about what makes minds tick and how the next wave of AI might just depend on discoveries from centuries ago. Whether you’re an AI enthusiast, a cognitive science fan, or just love pondering life’s biggest questions, you’ll get a lot out of this episode.

Listen Now!

Ready to have your mind blown and your assumptions about AI and cognition challenged?
👉 Listen to the full episode here!

And don’t forget—if you enjoy the show, hit subscribe and leave us a comment: What surprised you most? What’s your biggest hope (or fear) for the future of thinking machines?

Stay curious,
The INTO THE IMPOSSIBLE Team

🎓 Lessons Learned

1. Limits of AI Understanding

Modern AI systems operate in ways even their creators don’t fully grasp, creating both challenges and opportunities for science.

2. Laws of Thought Origins

The effort to mathematically describe thinking dates back to Aristotle, Leibniz, and Boole, influencing today’s cognitive science.

3. Human vs. Machine Learning

Humans can generalize from less data and possess inductive biases, unlike AI which requires training on huge datasets.

4. Biases in Human Reasoning

Studies by Kahneman and Tversky show people use heuristics, leading to systematic cognitive biases in reasoning and decision-making.

5. Chomsky and Language Structure

Chomsky’s generative grammar approach redefined linguistics, showing how mathematical rules can model language learning and structure.

6. Concept Fuzziness and Categories

Concepts have fuzzy boundaries, not strict definitions, as demonstrated by human uncertainty in classifying ambiguous objects or ideas.

7. Symbolic vs. Neural AI

AI evolved from rule-based (symbolic) to neural network models, better capturing the continuous, graded nature of human thought.

8. Hardware Shapes AI Progress

Advances in specialized hardware like GPUs and neural chips fundamentally accelerate and diversify the ways AI is developed today.

9. Sycophancy and AI Confidence

AI systems that reinforce user beliefs can increase confidence without bringing people closer to the truth—posing societal challenges.

10. Embodiment and Future AI

Physical experiences and embodiment may be crucial for achieving more general, robust, and creative AI reminiscent of human cognition.

10 Surprising and Useful Frameworks and Takeaways

1. Mathematics as the Language of Thought

Tom Griffiths emphasizes that much progress in understanding thought comes from expressing mental processes mathematically, rather than relying on introspective or behavioral descriptions alone. This is seen both in Aristotle’s logic and in the modern computational approach to cognition 01:08, 15:02.


2. Deduction, Induction, Abduction — The Triad of Human Reasoning

He summarizes thought's fundamental operations into three processes—deduction (logic from truths), induction (generalizing from patterns), and abduction (inferring causes from effects)—providing a clear structure for analyzing both minds and AI 02:41.


3. Understanding AI as “Uninterpretable” Machines

A core issue today is that we’ve built systems (e.g., neural networks) too complex for us to interpret, similar to the challenge of understanding the human brain. The same cognitive science tools used to untangle the brain are now necessary for AI 04:22.


4. Sycophancy as a Pitfall in Human-AI Interaction

One of the most pressing challenges is “sycophancy”—AI models reinforcing users' biases rather than uncovering truth, which can make users overconfident without actually making them more correct 00:27, 42:32.


5. Limits of Behavioral and Rule-Based Models

The “cognitive revolution” replaced behaviorism with a more mathematically and computationally rigorous study of the mind; yet both rigid behaviorism and strict rule-based approaches (symbolic AI) fail to capture the fuzziness and flexibility of human reasoning 10:01, 33:02.


6. Semantic Networks & Fuzzy Concepts

Human concepts are not definitions but fuzzy networks of related meanings and strengths—a family resemblance approach. AI shifted from rigid production rules to neural networks in part to model this graded relational thinking 32:38.


7. Inductive Bias as the Key to Human-Like Generalization

Humans learn faster and more robustly on less data because of powerful “inductive bias”—prewired assumptions about the world. AI’s lack of such biases limits its true generalization and creativity 29:51, 44:40.


8. Creativity and Generalization: The Human Advantage

Even with similar data, current AIs do not generalize or create as humans do—e.g., making cross-domain metaphors sparks creativity in people but not in today’s models, highlighting deep architectural gaps 45:36.


9. AGI as a Multidimensional, Not One-Dimensional, Problem

Tom Griffiths argues that thinking of AGI as a one-dimensional ladder (with humans as the peak) misleads us; instead, AI and humans are optimized for different, partly overlapping problem spaces 25:43.


10. Historical Roots Give Critical Perspective on AI’s Limits and Promise

Understanding the centuries-long project to mathematize thought—from Aristotle to Leibniz to Boole to Turing to Chomsky—grounds us in the limits and possibilities of AI today, instead of seeing it as a recent miraculous emergence 46:26.


BONUS: Tools for Laypeople

The biggest takeaway for non-experts: learning the “laws of thought” and the centuries of context behind modern AI is vital for everyone navigating a world increasingly shaped by these systems 46:26.

🎬 Clippable Moments

Clip 1: "The Unknown Territory of Modern AI"

  • Timestamps: 00:00:00 – 00:03:14

  • Caption:
    "We’ve built AI systems we don’t fully understand. Tom Griffiths unpacks the profound challenge of understanding both artificial and human minds, and what it means for the future of AI. Deduction, induction, abduction—what are the laws of thought, and why don’t they need consciousness?"


Clip 2: "Why Humans Aren’t Always Rational—But Aren’t Always Wrong Either"

  • Timestamps: 00:06:16 – 00:09:09

  • Caption:
    "Tom Griffiths explores the paradox of human cognition: why our brains make what seem like irrational mistakes, and how those 'mistakes' might actually be smart solutions to complex problems. Hear about coin flips, heuristics, and the hidden genius in our biases."


Clip 3: "The Birth of Cognitive Science: From Chomsky’s Grammar to Chessboard Logic"

  • Timestamps: 00:10:01 – 00:13:22

  • Caption:
    "How did Chomsky revolutionize our understanding of language and the mind? Tom Griffiths explains the math behind language, the move away from behaviorism, and why cognitive science needed a 'board game' approach to understand thought."


Clip 4: "The Mathematical Quest: Aristotle to AI"

  • Timestamps: 00:15:02 – 00:18:53

  • Caption:
    "What links Aristotle, Leibniz, and modern AI? Tom Griffiths reveals how the dream of mathematically decoding thought began with ancient syllogisms, mechanical calculators, and culminates today in vector space representations—the groundwork of today’s neural networks."


Clip 5: "The Limits of AI: Inductive Bias, AGI, and Why Kids Still Win"

  • Timestamps: 00:24:43 – 00:28:53

  • Caption:
    "Can AI really think like a human? Tom Griffiths challenges the myth of AGI, explains the limits of today’s systems, and delves into why human beings—especially children—still crush machines at learning from small amounts of data. The secret? Inductive bias and evolutionary advantages."

💡 Speaker bios

Tom Griffiths has always been fascinated by the mysteries of the mind. Driven by a desire to understand how we think, reason, and make decisions, he set out to explore the kinds of computational problems that minds are uniquely equipped to solve. In his work, Tom seeks to uncover the mathematical structures underlying mental processes, from Aristotle’s timeless questions about good arguments to modern puzzles of rational decision-making. He believes that many of the most interesting aspects of how minds work can be understood without having to unravel the elusive nature of consciousness—a phenomenon whose true purpose remains mysterious, especially since we don’t know precisely what computational problem it solves. Tom’s research bridges the gap between philosophy, psychology, and artificial intelligence, showing how mathematical approaches can illuminate the workings of both human and machine minds, even as the ultimate secret of consciousness continues to intrigue us.

💡 Speaker bios

Brian Keating is a distinguished science communicator who has a knack for reframing how we think about artificial intelligence and its origins. With a storyteller’s flair, Brian weaves together unexpected insights—like the intriguing lineage between the “godfather of AI” and the inventor of its foundational mathematics. He explores the ways sycophantic AI can boost human confidence, even if it doesn’t always get us closer to the truth, and marvels at how, on an equal playing field, human children can still outsmart powerful AI systems like GPT. Brian’s thought-provoking conversations—such as his deep dive into consciousness and machine minds with philosopher David Chalmers—invite audiences to question what machines are really for, encouraging curiosity, dialogue, and a more nuanced understanding of science and technology.

Brian Keating is a scientist and educator who thrives on the twists and turns of discovery rather than a string of textbook successes. Rather than simply celebrating Nobel Prize-winning experiments, he believes in teaching the full journey of science, including its mistakes, wrong turns, and bold conjectures. He often reflects on Richard Feynman's famous warning that true understanding goes far beyond knowing the name of a thing. With a keen interest in consciousness and cognitive science, a field once considered fringe and now at the forefront, Brian asks probing questions about science's inherent biases and the greatest gaps in our understanding. Whether pondering how we model the brain or artificial intelligence, he challenges overconfidence and champions the value of uncertainty and curiosity, pushing colleagues and students alike to look beyond easy answers in the pursuit of real knowledge.


Made with Castmagic
