alexr_rwx: (communist underneath)
Alex R ([personal profile] alexr_rwx) wrote, 2005-03-21 12:59 am

expect this to be a paper soon, or a part thereof?

SEARLE: 1. Programs are entirely syntactical. 2. Minds have semantics. 3. Syntax is not the same as, nor by itself sufficient for, semantics. Therefore, programs are not minds. QED. [from The Mystery of Consciousness, pg 11, paraphrase of his well-known "Chinese Room Argument"]

RUDNICK: In principle, given a powerful enough computer and a sufficient model of physics, I could simulate with arbitrary precision all of the happenings in a section of space -- assuming that this space does not contain the computer running the simulation. Now I choose to simulate a room containing a human, sitting in a chair. The simulated human in the simulated chair will, for all useful definitions of "conscious", be conscious (although he will not necessarily realize that he is part of a simulation). The brain-in-a-vat problem needs neither a brain nor a vat, just a mathematical model of a brain. If you want to say that the program is not a mind but contains one, that's analogous to saying that the physical world is not a mind but contains one. The main point is that if we believe the physical world to be mathematically modelable, we can bring about awake minds within a computer.
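(A toy stand-in for what I mean by "a sufficient model of physics" -- this is purely illustrative, a mass on a spring playing the part of a simulated world. The argument only needs that some such deterministic update rule could exist for a brain.)

```python
# A minimal sketch of a "simulated world": a state plus an update rule,
# stepped forward in time by a computer. Nothing outside the program is
# consulted once the initial conditions are set.

def step(state, dt=0.01):
    # Semi-implicit Euler for x'' = -x (unit mass, unit spring constant).
    x, v = state
    v = v - x * dt
    x = x + v * dt
    return (x, v)

state = (1.0, 0.0)        # initial position and velocity
for _ in range(1000):     # run ~10 seconds of "world time"
    state = step(state)
# The entire history of this little world follows from the rule plus the
# initial conditions -- the sense in which a simulated section of space
# is fully determined by its mathematical model.
```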

Furthermore, "semantics" is an inherently shady word, loaded with connotations of some "objectively extant physical world" that one can reference. You, Professor Searle, know just as well as I do that you can't get outside of your own head -- eyes don't guarantee that you're seeing anything more real than what a robot with a video camera or an agent with a get_world_input() function gets.
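(To make the get_world_input() point concrete -- this is a hypothetical sketch, with both "worlds" invented for illustration: perception is an interface, and nothing downstream of it can tell what is on the other side.)

```python
# Whether the source is a "real" camera or a simulated one, the agent
# receives the same kind of data through the same interface; by the data
# alone it cannot distinguish the two.

import random

def get_world_input(source):
    if source == "camera":
        return [random.randint(0, 255) for _ in range(4)]  # sensor readings
    if source == "simulation":
        return [random.randint(0, 255) for _ in range(4)]  # modeled readings
    raise ValueError(source)

real = get_world_input("camera")
simulated = get_world_input("simulation")
# Both are just lists of ints in [0, 255]; the format is identical.
```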

[identity profile] smileydee.livejournal.com 2005-03-21 10:58 pm (UTC)(link)
Well, my point is that if we parse all these signals from the brain down, they're still, at the core, electrical impulses, right? Like 1s and 0s. So we're still back to the main problem of how we translate 1s and 0s into something that makes sense. How does that become consciousness? No matter how complex our brains are, we're still back to the Chinese room. When do self-awareness and intelligence come in?
ext_110843: (coffee)

[identity profile] oniugnip.livejournal.com 2005-03-21 11:19 pm (UTC)(link)
That, m'dear, is the big problem. Where's the subjective, conscious experience, and how does it come about out of a physical system?

Hofstadter says (and I'm inclined to believe, and he puts it much more eloquently) that it comes about from a sufficiently self-referential computational system, which is to say a thing that has a model of itself and does processing about its own state...
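(A minimal sketch of what "has a model of itself and does processing about its own state" could mean computationally -- my own toy illustration, not Hofstadter's; the agent, its energy budget, and its actions are all invented for the example.)

```python
# An agent that keeps an internal model of its own state and makes
# decisions by consulting that self-model rather than the raw state.

class SelfModelingAgent:
    def __init__(self):
        self.state = {"energy": 10}
        self.self_model = {}  # the agent's beliefs about its own state

    def introspect(self):
        # Update the self-model by observing the current state.
        self.self_model = dict(self.state)

    def act(self):
        # Processing is about the self-model, not the state directly.
        if self.self_model.get("energy", 0) < 5:
            self.state["energy"] += 3   # "rest"
            return "rest"
        self.state["energy"] -= 2       # "work"
        return "work"

agent = SelfModelingAgent()
actions = []
for _ in range(5):
    agent.introspect()
    actions.append(agent.act())
# actions: work, work, work, rest, work -- the "rest" happens because the
# agent's model of itself reported low energy.
```

This is of course nowhere near consciousness; it only gestures at the structural idea of a system whose processing loops back onto a representation of that same system.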