i need to stop making these posts
Individuals perceiving their own qualia is totally distinct from individuals inferring that others have qualia. “I am the only conscious being in the world” is conceivable in a way that “I am not conscious” is not conceivable.
We mustn’t be too trusting of our judgments of others’ qualia simply because we are so sure of our own. We think of a giant look-up table, like Ned Block’s Blockhead, and say, “well, surely that couldn’t be conscious.” Do we know this? Surely Blockhead would disagree.
It seems like what we are building on here is the intuition that tells me that, when I pull up a text file on my computer, I am viewing the product of the human being who wrote it (who might well be dead, say), and not a conscious process that is unfolding right at the moment. So that is one intuition: looking up pre-determined stuff doesn’t constitute consciousness. On the other hand, when I encounter someone who looks and acts human, I immediately infer that they’re conscious. That’s another intuition. Blockhead asks us to consider someone who looks and acts like a person, but is actually a look-up table. Intuition conflict.
How should we resolve the conflict? It seems to me that if something acts like a human, we should treat it like a human, for the simple reason that our intuitions about underlying processes are too vague to be of help. We think we know that my computer is not conscious when I read a text file. But how does this generalize? What exactly does it mean to not “be a look-up table”? Brains, after all, produce behavior by obeying the laws of physics; in principle (quantum stuff aside) you could say that the brain is merely “looking up” what the laws of physics have to say about its time evolution. We feel there is some difference here, but it is hard to be sure what it is, and there is the sense of scaling up a simple principle about text files and recordings in a way that may be inappropriate. If a look-up table starts acting like a person, and we cannot specify exactly what it is that makes it objectionably look-up-table-like in a way that brains are not look-up-table-like, then I say we should treat it as a person. At that point the behaviorist intuition is clear-cut, and the look-up table intuition is murky.
(rambling, not necessarily related)
I think I (and by extension everyone else) could totally be modeled by a sufficiently large look-up table! I don’t think that a LUT is the best representation of all the functions/processes of human consciousness, but let’s just treat it all as a LUT for now.
What makes a conscious human being not a LUT? What comes to mind first is that our ‘conscious’ level doesn’t have access to the underlying LUT. It’d be rad if we did! I could figure out why I don’t like things I used to like! It would make debugging problematic behaviors super easy! But I experience myself in the same way that I might experience anything external: if I’m trying to find out more about myself, I pose myself a hypothetical question, see what feelings/reactions occur, and use that to guess at the shape of perhaps a few related entries in my internal LUT. If I’m trying to self-modify, I try out new inputs that might look up a response that could result in updating another entry in my LUT. But I don’t know what my LUT actually is, so it’s all guesswork.
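(A minimal sketch of that picture in Python, with every name below made up for illustration: an agent whose behavior comes from an internal table that nothing outside the lookup reads directly, so “introspection” is just feeding in hypotheticals and watching what comes out.)

```python
# A minimal sketch (all names invented for illustration): an agent whose
# behavior is driven by an internal table it cannot read directly. The only
# "introspection" available is posing hypothetical inputs and observing
# the outputs.

class OpaqueAgent:
    def __init__(self, table):
        # The underlying LUT. Nothing outside respond() reads it directly.
        self._table = table

    def respond(self, stimulus):
        # Behavior is a pure lookup; unknown inputs get a default reaction.
        return self._table.get(stimulus, "confused shrug")

    def introspect(self, hypothetical):
        # "Self-knowledge" is just running a hypothetical through the same
        # lookup and seeing what happens -- guessing at the shape of a few
        # entries, never reading the table itself.
        return self.respond("imagine: " + hypothetical)


me = OpaqueAgent({"imagine: a food I used to like": "vague distaste"})
print(me.introspect("a food I used to like"))  # -> vague distaste
print(me.respond("something unanticipated"))   # -> confused shrug
```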
So what about a hypothetical computer that interacts based on a similarly complex LUT? Is it as simple as creating a new process within that computer, which both observes and modulates incoming and outgoing stimuli, but cannot access the LUT itself, just as it can’t natively access the laws governing its environment? Funnily enough, this seems to make a lack of the ability to natively understand oneself a requirement for consciousness. That… seems to oppose the general definition of sentience, which as I understand it is the ability to introspect? Well, not necessarily. There’d be no point to introspection if you knew everything to begin with, right?
Going a step further: why couldn’t this new process also be modeled using a look-up table? Maybe it could, with corresponding entries in our extended LUT+ being unable to access a specific range of other entries within that same LUT+. Let’s say this updated LUT+ behaves functionally the same as the process-with-LUT combo. But is the experience (the experience of consciousness) the same?
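(Side sketch, to make “behaves functionally the same” concrete; all names below are invented for illustration, not anyone’s actual design.)

```python
# A toy sketch of the claim that a monitor process wrapped around a small LUT
# and a single flat LUT+ can be indistinguishable from the outside.

BASE_LUT = {"hello": "hi there", "insult": "flat stare"}

class ProcessPlusLUT:
    def respond(self, stimulus):
        # The "process" modulates input and output but only *uses* the LUT;
        # it has no way to enumerate or inspect the entries themselves.
        raw = BASE_LUT.get(stimulus.lower(), "confused shrug")
        return raw.capitalize()

# The flat LUT+ just tabulates the process's behavior in advance
# (over the inputs we bother to test here).
INPUTS = ["hello", "HELLO", "insult", "weather"]
LUT_PLUS = {s: ProcessPlusLUT().respond(s) for s in INPUTS}

class FlatLUTPlus:
    def respond(self, stimulus):
        return LUT_PLUS.get(stimulus, "Confused shrug")

# From the outside, the two implementations agree on every tested input.
for s in INPUTS:
    assert ProcessPlusLUT().respond(s) == FlatLUTPlus().respond(s)
```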
I would say, yeah it is. Maybe I am a LUT+, maybe I’m a process with a smaller LUT. But it feels the same to me, as a consciousness that’s prohibited from native understanding of its own hardware. I am a LUT+ that is capable of experiencing consciousness. The -why- of it is irrelevant, as all different possible implementations lead to the same lived experience. Huh. Could we (using some manner of very advanced technology + brain modeling / person copying) get access to our own look-up table?
…hmmm. I think that if I could access my own LUT, then either that LUT is infinite, or it is not actually my LUT. Proof-sketch-thing:
For some finite LUT X to be a model of me, it must contain a mapping from each input I I could experience to the output O I would produce in response.
If I run into a situation represented by input I, and have access to the table, I can then do the following:
1. Look up the input I in the LUT, and find the corresponding O. If no such entry exists, the LUT has failed to describe my action, so it is not a model of me.
2. Write down the current time on a sheet of paper (so the state of the world, and hence my next input, is guaranteed to be new).
3. If O is the string “Access the LUT with input I’, where I’ = I + (new state of the world) + (read O)”, return to step 1, using I’ instead of I.
4. Otherwise, take the action described in step 3 anyways, thus showing the LUT is not a model of me.
Unless the LUT is infinite, we must eventually exit this process, since we never look up the same input I twice. The only exit steps are ones that show that the LUT does not model me; therefore it must either be infinite, or not model me.
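(Here’s the same move compressed into a toy Python sketch; the step-by-step loop above is what keeps the inputs fresh, but the core trick is just: read your own prediction, then do something else. Everything below is invented for illustration.)

```python
# A toy compression of the argument (everything here invented for
# illustration): if I can read a finite table X that claims to map each of
# my inputs to my output, I can always act against its prediction, so X
# either misses an input or mispredicts it.

X = {
    "see a dog": "say 'nice dog'",
    "read X's entry for 'see a dog'": "say 'nice dog'",
}

def me_with_table_access(situation, table):
    prediction = table.get(situation)            # step 1: look myself up
    if prediction is None:
        return "improvise"                       # the table failed to cover this input
    return prediction + ", but sarcastically"    # do anything other than the prediction

# Every prediction X makes about me-while-reading-X comes out wrong.
for situation in X:
    assert me_with_table_access(situation, X) != X[situation]
```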
The LUT would have to be able to self-modify in order to be complete, which I touched on a little in my original brain puke. Add to that the ability to add new entries. The table doesn’t have to be infinite, just not fixed in size.
A system doesn’t have to model all possible inputs in order to respond to all possible inputs.
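(One way to picture that, as a rough sketch with made-up details: a table that starts small, has a rule for unseen inputs, and caches new entries as it goes, so it covers inputs it never stored in advance without ever being infinite.)

```python
# A rough sketch (made-up details) of a LUT that is finite at any moment but
# not fixed in size: unseen inputs trigger a rule that derives a response and
# writes a new entry, so the table handles inputs it never modeled in advance.

class GrowingLUT:
    def __init__(self):
        self.table = {"hello": "hi there"}

    def respond(self, stimulus):
        if stimulus not in self.table:
            # Self-modification: add an entry for the novel input on the fly.
            self.table[stimulus] = "hm, '%s' is new to me" % stimulus
        return self.table[stimulus]


lut = GrowingLUT()
print(lut.respond("hello"))            # pre-existing entry
print(lut.respond("quantum weather"))  # entry created on demand
print(len(lut.table))                  # the table grew: now 2 entries
```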
(via eikotheblue)
snarp said: I dislike the parameters of this particular debate because humans in general do not intuitively recognize/treat all other humans as if they’re human/conscious, under pretty much any definition of any of these variables.