
[Adam]: In 'Could a Quantum Computer Have Subjective Experience?' you speculate that a process has to fully participate in the arrow of time to be conscious, and that this points to decoherence. If pressed, how might you try to formalize this?

[Scott]: So I did write this kind of crazy essay five or six years ago, called 'The Ghost in the Quantum Turing Machine', where I tried to explore a position that seemed to me mysteriously under-explored in all the debates about whether a machine could be conscious. The position was: maybe we want to be thoroughgoing materialists. There's no magical ghost that defies the laws of physics; brains are physical systems that obey the laws of physics just like any others. But there is at least one very interesting difference between a brain and any digital computer that's ever been built, and that is that the state of a brain is not obviously copyable; that is, it is not obviously knowable to an outside person well enough to predict what the person will do in the future, without scanning their brain so invasively that you would kill them. So there is a sort of privacy, or opacity if you like, to a brain that there is not to a piece of code running on a digital computer. And there are all sorts of classic philosophical conundrums that play on that difference. For example, suppose human-level AI does eventually become possible, and we have simulated people running inside our computers. If I were to murder such a person, in the sense of deleting their file, is that okay as long as I keep a backup somewhere, so that I can always restore them from it? Or if I'm running two exact copies of the program on two computers next to each other, is that instantiating two consciousnesses, or is it really just one consciousness, because there's nothing to distinguish the one from the other?
Or could I blackmail an AI into doing what I want by saying: even if I don't have direct access to you, since I have your code, if you don't give me a million dollars I'm going to create a million copies of that code and torture them; and if you think about it, you are almost certain to be one of those copies, because there are far more of them than there are of you, and they're all identical? So there are all these puzzles that philosophers have wondered about for generations, about the nature of identity: how does identity persist across time, and can it be duplicated across space? Somehow, in a world with copyable AIs, they would all become much more real. So one point of view you could take is this: suppose I can predict exactly what someone is going to do. I don't mean just saying, as a philosophical matter, that I could predict your actions if I were a Laplace demon who knew the complete state of the universe (since I don't, in fact, know the complete state of the universe); imagine that as an actual practical matter I could build a machine that would perfectly predict, down to the last detail, everything you would do before you had done it. Well, then, in what sense do I still have to respect your personhood? I could just say: I have unmasked you as a machine. My simulation has every bit as much right to personhood as you do at this point; or maybe they're just two different instantiations of the same thing.
But another possibility you could entertain is that maybe what we like to think of as consciousness only resides in those physical systems that, for whatever reason, are uncopyable; that if you tried to make a perfect copy, you would ultimately run into what we call the no-cloning theorem in quantum mechanics, which says that you cannot copy the exact physical state of an unknown system, for quantum-mechanical reasons. This would suggest a view where personal identity is very much bound up with the flow of time, with things that happen and are evanescent, that can never happen again in exactly the same way, because the world will never return to exactly the same configuration.
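As an aside, the no-cloning theorem he invokes has a short standard proof; a sketch of the usual textbook argument (not from the talk itself):

```latex
% Standard no-cloning argument: suppose a single unitary U copied every state,
%     U |\psi> |0> = |\psi> |\psi>   for all |\psi>.
% Then for any two states |\psi>, |\phi>, unitarity gives
\langle\phi|\psi\rangle
  = \big(\langle\phi|\langle 0|\big)\, U^\dagger U \,\big(|\psi\rangle|0\rangle\big)
  = \big(\langle\phi|\langle\phi|\big)\big(|\psi\rangle|\psi\rangle\big)
  = \langle\phi|\psi\rangle^2 ,
% forcing <phi|psi> to be 0 or 1: impossible when the two states are neither
% identical nor orthogonal, so no such U exists.
```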
A related puzzle concerns what happens if I take an AI and run it on a reversible computer. Some people believe that any appropriate simulation brings about consciousness; that's a position you can take. But what if I ran the simulation backwards, as I can always do on a reversible computer? What if I ran the simulation, computed it, and then uncomputed it? Have I caused nothing to have happened? Or did I cause one forward consciousness and then one backward consciousness, whatever that means? And would the backward consciousness have a different character from the forward one?
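A minimal sketch of the compute-then-uncompute idea on a classical reversible machine (hypothetical toy code, not from the talk): each gate is its own inverse, so replaying the program in reverse order restores the initial state exactly, leaving no record that the computation ever happened.

```python
# Toy classical reversible computer: the state is a list of bits, and every
# gate is its own inverse, so replaying the program in reverse order exactly
# undoes the computation ("uncomputation").

def x(state, i):                  # NOT gate
    state[i] ^= 1

def cnot(state, c, t):            # controlled-NOT
    state[t] ^= state[c]

def toffoli(state, c1, c2, t):    # controlled-controlled-NOT
    state[t] ^= state[c1] & state[c2]

program = [(x, (0,)), (cnot, (0, 1)), (toffoli, (0, 1, 2))]

state = [0, 1, 0]
initial = list(state)

for gate, args in program:             # compute
    gate(state, *args)
print("computed:  ", state)

for gate, args in reversed(program):   # uncompute: same gates, reverse order
    gate(state, *args)
print("uncomputed:", state)

assert state == initial                # no trace of the computation remains
```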
Now, we know a whole class of phenomena that in practice can only ever happen in one direction in time, and these are thermodynamic phenomena: phenomena that create waste heat, that create entropy, that take tiny microscopic, unknowable degrees of freedom and amplify them to macroscopic scale. In principle, those macroscopic records could become microscopic again. If I make a measurement of a quantum state, then at least according to, let's say, many-worlds quantum mechanics, that measurement could in principle always be undone. And yet in practice we never see such things happen, for basically the same reasons we never see an egg spontaneously unscramble itself, or a shattered glass leap up onto the table and reassemble itself: these would represent vastly improbable decreases of entropy. So the speculation was that maybe this irreversibility, this increase of entropy that we see in all ordinary physical processes, and in particular in our own brains, is important to consciousness, or to what we like to think of as free will. We certainly don't have an example showing that it isn't. But the truth of the matter is, I don't know. I set out all the thoughts I had about it in that essay five years ago, and then, having written it, I decided that I'd had enough of metaphysics; it made my head hurt too much, and I was going to go back to better-defined questions in math and science.
[Adam]: In 'Is Information Physical?' you note that if a system crosses the Schwarzschild bound it collapses into a black hole. Do you think this could be used to put an upper bound on the amount of consciousness in any given physical system?
[Scott]: Well, let me decompose your question a little bit. What quantum-gravity considerations are believed to let you do today is put a universal bound on how much computation can be going on in a physical system of a given size, and on how many bits can be stored there. And the bounds are precise enough that I can just tell you what they are. It appears that a physical system enclosed by a sphere of a given surface area can store at most about 10^69 bits, or rather 10^69 qubits, per square meter of the enclosing boundary, and there is a similar limit on how many computational steps it can perform over its whole history.
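For scale, here is a back-of-the-envelope check of that figure, assuming (as a hedged reading of the conversation) that the Bekenstein-Hawking entropy of the enclosing boundary is what caps the storage:

```python
# Rough check of the ~10^69 qubits-per-square-meter figure from the
# Bekenstein-Hawking entropy S = A / (4 * l_p^2), in nats; treating that
# entropy as a storage bound is the assumption here.
import math

l_p = 1.616e-35                               # Planck length, meters
bits_per_m2 = 1 / (4 * l_p**2 * math.log(2))  # nats -> bits

print(f"~{bits_per_m2:.1e} bits per square meter")   # ~1.4e+69
```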
Now, I think your question reduces to this: can we upper-bound how much consciousness there is in a physical system, whatever that means, in terms of how much computation is going on in it, or in terms of how many bits are stored there? And that's a little hard for me to think about, because I don't know what we mean by 'amount of consciousness'. Am I ten times more conscious than a frog? A hundred times more conscious? I don't know; some of the time I feel less conscious than a frog.
But I am sympathetic to the idea that there is some minimum of computational interestingness in any system that we would like to talk about as being conscious. There is this ancient speculation of panpsychism, which would say that every electron, every atom, is conscious; and that's fine, you can speculate that if you want. We know of nothing to rule it out; there are no physical laws attached to consciousness that would tell us it's impossible. The question is just: what does it buy you to suppose that? What does it explain? In the case of the electron, I'm not sure it explains anything. Now, you could ask whether it even explains anything to suppose that we're conscious, and maybe it doesn't, at least not for anyone beyond ourselves. There's this ancient conundrum that each of us knows that we're conscious, presumably, from our own subjective experience, while as far as we know, everyone else might be an automaton; and if you really thought about that consistently, it could lead you to become a solipsist. Alan Turing, in his famous 1950 paper that proposed the Turing test, had a wonderful remark about this, which was something like: 'A' is liable to think that 'A' thinks while 'B' does not, while 'B' is liable to think that 'B' thinks but 'A' does not. But in practice it is customary to adopt the polite convention that everyone thinks. That struck me as a very British way of putting it. We adopt the polite convention that solipsism is false: that people, or any entities that can exhibit complex, goal-directed, intelligent behaviors like ours, are probably conscious like we are. That's a criterion that would apply to other people; it would not apply to electrons (I don't think); and it's plausible that there is some bare minimum of computation in any entity to which that criterion would apply.
[Adam]: Sabine Hossenfelder {Scott: Sabine Hossenfelder, yes} had a scathing review of panpsychism recently; did you read that?
[Scott]: I can't recall; if it was very recent, then I probably didn't read it. But I did read an excerpt where she seemed to be saying that panpsychism is experimentally ruled out. If she was saying that, I don't agree; I don't even see how you would experimentally rule out such a thing. You're free to postulate as much consciousness as you want on the head of a pin. I would just say: if it doesn't have an empirical consequence, if it's not affecting the world, if it's not affecting the behavior of that head of a pin in any way you can detect, then Occam's razor just itches to slice it out of our description of the world. At least, that's the way I would put it personally. Now, I did put a detailed critique of integrated information theory (IIT), Giulio Tononi's proposed theory of consciousness, on my blog.
My critique was basically this: Tononi comes up with a specific numerical measure that he calls Phi, and he claims that a system should be regarded as conscious if and only if its Phi is large. Now, the actual definition of Phi has changed over time; it's changed from one paper to another, it's not always clear how to apply it, and there are many technical objections that could be raised against the criterion. But what I respect about IIT is that at least it sticks its neck out. It proposes a very clear criterion, far clearer than competing accounts offer, to tell you which physical systems you should regard as conscious and which not. Now, the danger of sticking your neck out is that it can get cut off, and indeed I think that IIT is not only falsifiable but falsified. Because as soon as this criterion was written down, the point I was making is that it became easy to construct physical systems that have enormous values of Phi, much larger than a human's, and yet that I don't think anyone would really want to regard as intelligent, let alone conscious, or even very interesting. Basically, Phi is large if and only if your system has a lot of interconnection: if it's very hard to decompose it into two components that interact with each other only weakly, so that it has a high degree of information integration. The point of my counterexamples was to say that this cannot possibly be the sole relevant criterion, because a standard error-correcting code, as is used for example on every compact disc, also has an enormous amount of information integration. Should we therefore say that every error-correcting code implemented in some piece of electronics is conscious? Even more than that, a giant grid of logic gates just sitting there doing nothing would have a very large value of Phi, and we can multiply examples like that.
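To make the interconnection point concrete, here is a hypothetical toy score, emphatically not Tononi's actual Phi (whose definition involves a system's cause-effect structure and has shifted across papers): grade a system by the minimum number of dependency edges any bipartition must cut. A densely constrained system, loosely mimicking an error-correcting code's parity checks, scores high while doing nothing intelligent.

```python
# Toy "integration" score (NOT Tononi's actual Phi): the minimum number of
# dependency edges that any bipartition of the system must cut.
# A high score means no weakly-interacting decomposition exists.

from itertools import combinations

def min_bipartition_cut(n, edges):
    best = float("inf")
    for k in range(1, n // 2 + 1):            # smaller side of each bipartition
        for part in combinations(range(n), k):
            side = set(part)
            cut = sum((u in side) != (v in side) for u, v in edges)
            best = min(best, cut)
    return best

n = 8
chain = [(i, i + 1) for i in range(n - 1)]    # loosely coupled system
dense = list(combinations(range(n), 2))       # every bit constrained against
                                              # every other, crudely mimicking
                                              # a code's parity checks

print(min_bipartition_cut(n, chain))          # 1: decomposes with one cut
print(min_bipartition_cut(n, dense))          # 7: every split severs many links
```

On this kind of measure, "integration" is cheap to manufacture, which is the thrust of the counterexamples above.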
Tononi then posted a big response to my critique, and his response was basically: you're just relying on intuition; you're just saying these systems are not conscious because your intuition says they aren't, but that's parochial. Why should you expect a theory of consciousness to accord with your intuition? And he then just went ahead and said: yes, the error-correcting code is conscious; yes, the giant grid of XOR gates is conscious; and if they have a thousand times larger value of Phi than a brain, then they are a thousand times more conscious than a human. The way I described it was that he didn't just bite the bullet, he devoured a bullet sandwich with mustard, which was not what I was expecting. But the critique that I'm claiming 'any scientific theory has to accord with intuition', I think that is completely mistaken; it's really a mischaracterization of my view. I'll be the very first to tell you that science has overturned common-sense intuition over and over and over. For example, temperature feels like an intrinsic quality of a material; it doesn't feel like it has anything to do with motion, with atoms jiggling around at a certain speed; but we now know that it does. When scientists first arrived at that modern conception of temperature in the eighteen hundreds, though, what was essential was that the new criterion at least agreed with the old criterion that fire is hotter than ice. In the cases where we already knew what we meant by hot and cold, the new definition agreed with the old one; and then the new definition went further, telling us many counterintuitive things we didn't know before. But at least it reproduced the way in which we were using those words previously.
Even when Copernicus and Galileo discovered that the earth orbits the Sun, the new theory was able to account for our observation that we are not flying off the earth; it said that's exactly what you would expect to happen, even in the new account, because of these new principles of inertia and so on. But if a theory of consciousness says that a giant blank wall, or this giant grid of gates, is highly conscious while just sitting there doing nothing, whereas a simulated person, an AI that passes the Turing test, would not be conscious if it happens to be organized in such a way that it has a low value of Phi, then I say: the burden is on you to prove to me that this Phi notion you have defined has anything whatsoever to do with what I was calling consciousness. You haven't even shown me any cases where the two agree, from which I should then extrapolate to the hard cases, the ones where I lack an intuition: at what point is an embryo conscious? When is an AI conscious? The theory seems to have gotten wrong the only things it could possibly have gotten right, and at that point I think there is nothing to compel a skeptic to accept that this particular quantity Phi has anything to do with consciousness.