
Rus · 8 min read


Transcript excerpt from episode 320 - Constructing Self and World

Sam Harris: Tackle it however you want, but what do you think is the best hypothesis at the moment describing what the brain is doing? And we might want to start by differentiating that from everyone's common-sense idea of what the science probably says about what the brain is doing.

Shamil Chandaria: Yeah, okay, that's great. So why don't we look at the brain from first principles, and then maybe we can later apply it to meditation and spirituality. So the thing is that maybe 20 years ago, the consensus on what the brain was doing was that it was taking bottom-up sensory data, sensory information, and processing it up a stack, and then eventually the brain would figure out what was going on. And that view of what the brain is doing is, in fact, precisely upside down, according to the latest theory of how the brain works.

Shamil Chandaria: And I think the way to start on this question is really from first principles. It really does help to look at it philosophically: we're an organism with this central processing unit, the brain, which is enclosed in a kind of dark cell within the skull.

Sam Harris: We are already brains in vats.

Shamil Chandaria: We are already thought experiments. Exactly. And all this brain has access to is some noisy time series data, some dots and dashes coming in sort of from the nervous system. Now, how on earth is it going to figure out what is going on in the world?

Sam Harris: Before you proceed further, I love the angle you're taking here, but let's just reiterate what is meant by that, because it can be difficult to form an intuition about just how strange our circumstance is. I mean, we open our eyes and we see the world, or we seem to see the world, and people lose sight of the significance of light energy being transduced into electrochemical energy. That is not vision. After light hits your retina, you're not dealing with light anymore, and this has to be a reconstruction. We're not going to talk about the details of that reconstruction, but to say that we're brains in vats, being piped with electrochemical signals, divorced from how experience seems out there in the world, where it just seems given to us, that's not hyperbole. It really is the case. There is a fundamental break here, at least in how we conceive of our sectioning of reality, based on what our nervous system is.

Shamil Chandaria: Yeah, I mean, in fact, I don't know how deep you want to go with this, but actually you can even start before that, with the philosophical problem that Plato and Immanuel Kant kind of pointed to, which is that we only know our appearances, our experience. We have no contact with reality. Most people's common-sense view is that "oh, look, we're looking out at the world through little windows in the front of our skulls, and we're seeing trees as they really are". Of course, that cannot be true, for precisely the reasons that you said. We're just receiving some noisy, random electrical signals coming in, and the brain has never seen reality as it is.

Shamil Chandaria: You know, the tree as it is in itself, if that makes any sense. Now, what the brain has to do is figure out the causes of its sensory data. In other words, it's trying to figure out what is causing its sensory data so it can get some grip on the environment. And that, of course, is important from an evolutionary perspective, because if we don't know what's going on in the environment, we won't know where the food is and we won't know where the tiger is. So we need to find out the causes of our sensory data.

Shamil Chandaria: And this is, ultimately, formally, exactly the statistical inference problem, the Bayesian inference problem. Bayesian inference is trying to figure out the probability that, given my sensory data, I'm seeing a tree. Okay? Now, it turns out that the brain can't solve this problem directly, because formally solving the Bayesian inference problem turns out, for technical reasons, to be computationally explosive. So what evolution has to do, and what we have to do in artificial intelligence, is use another algorithm, called approximate Bayesian inference. And the way you actually solve it, because Bayesian inference is so difficult, is by going at it backwards. You essentially have all this data come in and try to learn what you think you're seeing, and from what you think you are seeing, you then simulate the pixels that you would be seeing if your guess is correct. So if I think I'm seeing a tree, what the brain then has to do is go through something called a generative model and actually simulate the sensory data that it would be seeing if this was indeed a tree. Now, that is incredible, because the upshot, just to cut to the chase, is what's called the neurophenomenological hypothesis: what we experience, if we're aware of it, is precisely that internal simulation, that internal generative model. Now, you might then conclude, well, we're just hallucinating, we're just simulating. How do we have any grip on reality?
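The inference problem Chandaria describes can be made concrete with Bayes' rule. The sketch below is purely illustrative: the function name and every probability are invented for this example, not taken from the episode.

```python
# Toy Bayes' rule computation for the "am I seeing a tree?" inference.
# All names and numbers here are invented for illustration.

def posterior_tree(p_tree, p_data_given_tree, p_data_given_not_tree):
    """P(tree | data) = P(data | tree) * P(tree) / P(data)."""
    evidence = (p_data_given_tree * p_tree
                + p_data_given_not_tree * (1 - p_tree))
    return p_data_given_tree * p_tree / evidence

# Prior belief that a tree is present, plus how likely the noisy retinal
# data would be under each hypothesis (what a generative model simulates).
belief = posterior_tree(p_tree=0.3,
                        p_data_given_tree=0.8,
                        p_data_given_not_tree=0.1)
print(round(belief, 3))  # → 0.774
```

The evidence term in the denominator is what becomes "computationally explosive": with two hypotheses it is one sum, but with realistically many possible world-states it is an intractable sum over all of them, which is why the brain is thought to need an approximate scheme instead.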

Shamil Chandaria: And this is where the free energy principle comes in. It says that we have to simulate what we think is going on, but it's not any old simulation: it's a simulation that minimizes the prediction error between the output of your simulation and the few bits of sensory data that we get. In other words, what we actually do with the sensory data is use it to calibrate our simulation model, our generative model. And there is another part of the free energy principle, which is that minimizing prediction error isn't good enough. It turns out we also have to have some prior guesses, some prior probabilities, about what we're experiencing. In other words, as you grow up through childhood and as you're enculturated, you come to learn that there are things like trees, and so there's a high prior probability of finding trees in your environment. Now, what you want is a simulation which minimizes the prediction error with the sensory data, but also minimizes the informational distance between the output of your generative model, the simulation, and your priors. In other words, you want a simulation that is as close as possible to what you would normally expect before seeing the sensory data. So this is really what the free energy is. The free energy has two terms: the first is roughly a prediction error, and the second is an informational distance to the prior, to what you'd be expecting. So it turns out that we can actually do approximate Bayesian inference, which is the mathematically optimal thing to do, if we create a simulation of the world in such a way that it minimizes the prediction error with the sensory data that we get and also minimizes the divergence from our prior probability distribution. So that's kind of the free energy in a nutshell.
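The two-term free energy described above can be sketched for a single scalar hidden cause, assuming Gaussian noise and a point-estimate posterior (the standard textbook simplification, in which the informational-distance term reduces to a squared deviation from the prior). All function and variable names here are invented for illustration; this is not code from the episode.

```python
# Minimal free-energy sketch: one hidden cause mu, one sensory sample s,
# Gaussian noise, point-estimate posterior. Illustrative only.

def free_energy(mu, s, mu_prior, sigma_s=1.0, sigma_p=1.0):
    """The two terms described in the transcript:
    1) precision-weighted prediction error against the sensed data s,
    2) deviation of the current guess mu from the prior expectation."""
    prediction_error = (s - mu) ** 2 / (2 * sigma_s ** 2)
    prior_term = (mu - mu_prior) ** 2 / (2 * sigma_p ** 2)
    return prediction_error + prior_term

def infer(s, mu_prior, lr=0.1, steps=200):
    """Perception as gradient descent on free energy: the guess settles
    where sensory evidence and prior expectation balance."""
    mu = mu_prior
    for _ in range(steps):
        grad = -(s - mu) + (mu - mu_prior)  # dF/dmu with unit variances
        mu -= lr * grad
    return mu

# With equal precisions, the best guess lands halfway between the prior
# expectation (0.0) and the sensory sample (2.0).
print(round(infer(s=2.0, mu_prior=0.0), 3))  # → 1.0
```

Making the sensory precision larger (smaller `sigma_s`) pulls the settled guess toward the data; a tighter prior (smaller `sigma_p`) pulls it toward expectation — the calibration trade-off Chandaria describes.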

Shamil Chandaria: And as I said, it's very interesting because it helps us think about phenomenology, which is what I'm interested in. Because if we open our eyes, as you say, and we find the world just appear in front of us, what is this experience that we're having? And the answer is: we're somehow aware of our internally generated model of the world, and that model happens to be calibrated correctly against the sensory data.

<br/>

More information on Bayesian inference (from Dr Shamil Chandaria's talk "The Bayesian Brain and Meditation"):