Alex R ([personal profile] alexr_rwx) wrote 2005-03-21 12:59 am

expect this to be a paper soon, or a part thereof?

SEARLE: 1. Programs are entirely syntactical. 2. Minds have semantics. 3. Syntax is not the same as, nor by itself sufficient for, semantics. Therefore, programs are not minds. QED. [from The Mystery of Consciousness, pg 11, paraphrase of his well-known "Chinese Room Argument"]

RUDNICK: In principle, given a powerful enough computer and a sufficient model of physics, I could simulate with arbitrary precision all of the happenings in a section of space -- assuming that this space does not contain the computer running the simulation. Now I choose to simulate a room containing a human, sitting in a chair. The simulated human in the simulated chair will, for all useful definitions of "conscious", be conscious (although he will not necessarily realize that he's part of a simulation). The brain-in-a-vat problem needs neither a brain nor a vat, just a mathematical model of a brain. If you want to say that the program is not a mind but contains one, that's analogous to saying that the physical world is not a mind but contains one. The main point is that if we believe the physical world to be mathematically modelable, we can bring about awake minds within a computer.
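
In case the shape of the claim isn't obvious, here's the skeleton of the thing I'm describing -- a toy sketch with made-up names (physics_step, simulate_room), not a real physics engine:

def physics_step(state, dt):
    """Advance every particle and field in the simulated region by one tick."""
    # ... the sufficiently good model of physics goes here ...
    return state

def simulate_room(initial_state, dt=1e-9, n_ticks=10**6):
    state = initial_state  # the chair, the air, and the human's brain, all as data
    for _ in range(n_ticks):
        state = physics_step(state, dt)
    # Nothing in this loop singles out the brain: the mind is just a pattern
    # inside `state`, the same way it's a pattern inside ordinary physics.
    return state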

Furthermore, "semantics" is an inherently shady word, loaded with connotations of some "objectively extant physical world" that one can reference. You, Professor Searle, know just as well as I do that you can't get outside of your own head -- eyes don't guarantee that you're seeing anything more real than what a robot with a video camera or an agent with a get_world_input() function gets.
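
To put the get_world_input() point in toy form (hypothetical code, not any real robot's API): the agent only ever gets handed percepts, and nothing in a percept certifies where it came from.

import random

def get_world_input():
    # Could be wired to a retina, a webcam, or the simulation sketched above;
    # from inside the agent, those cases are indistinguishable.
    return random.random()

def agent_step():
    percept = get_world_input()
    # Whatever the agent concludes, it concludes from the percept alone.
    return "bright out there" if percept > 0.5 else "dark out there"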

[identity profile] elysianboarder.livejournal.com 2005-03-21 07:08 pm (UTC)
I have a question about the Chinese Room Argument. It says that the person in the room, given rules like "if you see x, write y," would come out of the room not knowing any more Chinese than when they entered. But is that a valid claim? Seeing as this is how many people learn another language -- if I say "Hola," you say "Como esta"... Can a computer with AI learn the same way? Like the program you were writing that saw a word or phrase in Spanish and could translate it -- was what you were doing basically the same thing as placing a person in a room?

[identity profile] oniugnip.livejournal.com 2005-03-21 07:23 pm (UTC)
Hmmm... well, imagine yourself in that situation -- could you learn Chinese just by looking at a lot of Chinese text? I don't think I could; if I was around Chinese people, and they were doing things and showing me items while saying the words, then I could probably learn it. Sitting in the room, you'd probably get better at recognizing the characters, but you wouldn't have any good way to learn what they mean without a dictionary or some more context...

Maybe if you had children's books, with pictures...
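
For what it's worth, what the person in the room is doing boils down to roughly this (a toy version with made-up pinyin entries, nothing like Searle's actual rulebook):

# Made-up entries, written in pinyin here for readability; the real room
# would use characters the operator can't even pronounce.
RULEBOOK = {
    "ni hao ma": "wo hen hao",
    "ni chi le ma": "chi le",
}

def room_reply(symbols_slipped_under_door):
    # Pure lookup -- symbols in, symbols out, no meaning anywhere in between.
    return RULEBOOK.get(symbols_slipped_under_door, "qing zai shuo yi bian")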

[identity profile] zip4096.livejournal.com 2005-03-21 08:23 pm (UTC)
What is "understanding" though?

What if the experience of "understanding" is that some part of our brain produces the correct output to the given problem, and then another region is sent a signal that gives the *feeling* of "Oh yes, I understand that perfectly"? Personally I've had the experience of thinking I understand some material (e.g. physics), but when it comes test-time, I can't produce the right output :) Or an answer to something will just come to me in a flash...
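
Something like this, in toy form (totally made up, no claim about real neural wiring): one bit of machinery produces the answer, a separate bit produces the *feeling* of having understood, and nothing forces the two to agree.

def solve(problem):
    # The part that actually produces output for the given problem.
    return problem.get("answer_i_half_remember")

def felt_understanding(problem):
    # The part that produces the "oh yes, I understand this perfectly" signal.
    # Note it never checks whether solve() would get the answer right.
    return problem.get("looks_familiar", False)

problem = {"looks_familiar": True}   # e.g. physics, the night before the test
print(solve(problem), felt_understanding(problem))   # -> None True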

I dunno. I think you covered it when you referenced the Symbol Grounding Problem, Alex :) I hadn't heard of any of this stuff...

Alex's journal is educational!

[identity profile] smileydee.livejournal.com 2005-03-21 08:45 pm (UTC)
Well, if we're assuming that our brains are just receiving and responding to input, then how are we to know that the other region is sending us a signal that feels like understanding? Wouldn't we just receive it as some other nameless input? How would we be able to differentiate that from some nameless input telling us we don't know what the hell we're talking about?

[identity profile] zip4096.livejournal.com 2005-03-21 09:41 pm (UTC)
Hmm, this is tough! :)

Well... I'm a little uncomfortable defending my "understanding" signaling idea, because it's just that, an idea, and I don't know of a physical basis for it that's been discovered (I'm all about empirical evidence and science; philosophy often seems like a bunch of bullshit to me).

But anyway, let me give you an example... Regions in the brain communicate with one another all the time. Say you see something very scary -- well, the optic nerve carries the raw image to a large region of cortex (visual processing takes a lot of power!) in the occipital lobe, which interprets the image as being scary, and then a signal is sent to your amygdala, which makes you feel afraid, and that has wiring to the hippocampus, which will make you remember it (and even better than normal, because there's a strong emotion associated with it).

What I'm talking about is communication within the brain... It's not input from the outside... does that answer your question at all...?

I'm sorry, I actually had to reread your post a few times and I'm still not entirely sure if I understand what you were getting at. If it gets too abstract I have difficulty, but I love the physical, real science stuff :)

[identity profile] smileydee.livejournal.com 2005-03-21 10:58 pm (UTC)
Well, my point is that if we break all these signals in the brain down, they're still, at the core, electrical impulses, right? Like 1s and 0s. So we're still back to the main problem: how do we translate 1s and 0s into something that makes sense? How does that become consciousness? No matter how complex our brains are, we're still back to the Chinese Room. When do self-awareness and intelligence come in?

[identity profile] oniugnip.livejournal.com 2005-03-21 11:19 pm (UTC)
That, m'dear, is the big problem. Where's the subjective, conscious experience, and how does it arise out of a physical system?

Hofstadter says (and I'm inclined to believe, and he puts it much more eloquently) that it comes about from a sufficiently self-referential computational system, which is to say a thing that has a model of itself and does processing about its own state...
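
Very roughly -- and this is my own toy framing, not anything out of Hofstadter -- it's a process whose state includes a model of that same process, which it then chews on like any other input:

class SelfModelingSystem:
    def __init__(self):
        self.state = {"steps": 0, "last_input": None}
        self.self_model = {}  # the system's (always slightly stale) picture of itself

    def step(self, world_input):
        # Ordinary processing: react to something outside.
        self.state["steps"] += 1
        self.state["last_input"] = world_input
        # The self-referential part: update the model of its own state,
        # then treat that model as just more material to think about.
        self.self_model = dict(self.state)
        return "I think I've taken %d steps so far" % self.self_model["steps"]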

[identity profile] oniugnip.livejournal.com 2005-03-21 11:15 pm (UTC)
Oh my goodness. That's a really interesting thing to consider -- there's a feeling that we call "understanding". Hrm.

I think... that I couldn't competently answer that -- seems like something that needs to be empirically checked in on?

[identity profile] elysianboarder.livejournal.com 2005-03-22 03:47 pm (UTC)
Remember, Brett, there are long-term and short-term memory stores. The amygdala (*buzzword, Alex!*) converts between the two. So this further proves that if your brain is flawed as a system, then you cannot properly handle the input and output of that system. Question, then: since "understanding" is simply a function of the brain, how do you have a true sense of self? Are people who are missing sections, or have damaged sections of the whole, then lacking understanding and lacking a self?

[identity profile] oniugnip.livejournal.com 2005-03-22 04:23 pm (UTC)
Mmm... yes. I don't think anybody's going to argue against the idea that some people with damaged brains have a messed-up understanding and maybe less sense of self.

... but it's a remarkably resilient thing :)

But surely it would be possible to get an even stronger consciousness out of a few million more years of evolution or R&D? Given enough time, could we not breed people who are born with very well-controlled conscious minds?

[identity profile] elysianboarder.livejournal.com 2005-03-22 04:38 pm (UTC)
That's really very trivial. With enough monkeys in a room you can hack out Hamlet. So being able to breed people with very well-controlled conscious minds -- or *cough cough* to build the mind itself -- is only a matter of time. I mean, I think they might even come up with a name for it. I would call it something like artificial intelligence, since it would take inputs and from those determine, to an extent, an output. :D