Saturday, November 17, 2007

This post is in response to blog-trawler Mitchell’s fairly extensive comment on my previous post on AGI, which brings up some excellent points. It’s fantastic to know that others are thinking about this sort of thing. Because a post allows me to address his points more expansively than a comment would, this is not a comment reply but a full-blown post. Wow… Let's get started.

____________________________________

Mitchell: “1. We need big new ideas to understand consciousness, but not to achieve AGI. Some patchwork combination of already known algorithms and architectures will be sufficient for the latter.”

What you said is (unfortunately for AI researchers) largely true because, IMHO, intelligence and consciousness are only incidentally related. The fact that we are self-aware is only a tiny part of the picture. There are many processes considered intelligent of which we are only vaguely aware. For instance, when I run to catch a Frisbee, I do not consciously perform the calculations that would tell me the exact velocity and direction I need to take to intercept it; nevertheless, I (usually) catch the Frisbee. I’m fairly sure dogs don’t perform these calculations either. Yet we can catch and throw things with ease, with no explicit awareness of our inherent mathematical ability to do so. (This is likely because mathematics is a man-made abstraction, complicating otherwise simple phenomena, but that’s another story.) It’s significantly easier to build an intelligent system than a conscious one, which is why AGI is a realistic goal for the next 25 years, whereas artificial general consciousness (AGC... I just trademarked that) likely is not.
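
To make that concrete, here is a toy sketch (entirely my own, assuming a simple ballistic model that ignores the lift and drag that make real Frisbee flight interesting) of the kind of computation a catch implicitly solves:

```python
import math

def intercept(throw_speed, launch_angle_deg, runner_start, g=9.81):
    """Where does the disc land, and how fast must the runner move to be there?"""
    angle = math.radians(launch_angle_deg)
    vx = throw_speed * math.cos(angle)
    vy = throw_speed * math.sin(angle)
    flight_time = 2 * vy / g        # time until it returns to launch height
    landing_x = vx * flight_time    # horizontal distance traveled
    required_speed = abs(landing_x - runner_start) / flight_time
    return landing_x, flight_time, required_speed

x, t, v = intercept(throw_speed=14.0, launch_angle_deg=30.0, runner_start=5.0)
print(f"Lands at {x:.1f} m after {t:.1f} s; runner needs {v:.1f} m/s")
```

Nobody consciously runs anything like this while sprinting across a field, and yet the catch happens anyway. That is the gap between intelligence and awareness.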

Mitchell: “2. The natural sciences, as presently constituted, cannot explain consciousness, because they are reducible to physics, and you cannot even get basic sensory qualities like color out of existing physics, which is conceived only in geometric and algebraic terms.”

Agreed. Well, we’re screwed on this one for now, aren’t we? The existence of metamers (physically different light spectra that we perceive as exactly the same color) makes this pretty much irrefutable.
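
A minimal sketch of the point, using crude made-up cone sensitivities rather than real CIE data: two spectra that differ physically yet produce identical cone responses, and are therefore the same color to the eye. The physics underdetermines the percept.

```python
import numpy as np

# Rows: crude sensitivities of S, M, L cones across four wavelength bands.
# These numbers are illustrative, not measured colorimetric data.
cones = np.array([
    [0.8, 0.2, 0.0, 0.0],   # S (short wavelengths)
    [0.1, 0.7, 0.6, 0.1],   # M
    [0.0, 0.3, 0.7, 0.8],   # L (long wavelengths)
])

spectrum_a = np.array([1.0, 1.0, 1.0, 1.0])   # flat "white-ish" spectrum

# Any direction in the null space of the cone matrix is invisible to us:
# adding it changes the physics of the light but not the cone responses.
_, _, vt = np.linalg.svd(cones)
invisible = vt[-1]                  # direction the cones cannot see
spectrum_b = spectrum_a + 0.5 * invisible

print(cones @ spectrum_a)   # cone responses to spectrum A
print(cones @ spectrum_b)   # identical responses to a different spectrum
```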

Mitchell: “3. This can be overcome if the quest for the neural correlates of consciousness is combined with the study of consciousness as it presents itself to the individual. Phenomenology is a guide to ontology - whatever entities and relationships do exist in reality, we know that those which show up in consciousness must be among them. Conversely, physics can be reduced to a mathematics largely independent of ontology - all we need is for some part of the mathematics to relate to observations. Therefore, a phenomenological ontology of conscious states has a unique ability to tell us what physics is really about, by telling us what NCCs, physically described, really are. The relationships between ontology and physical formalism established in the case of NCCs could then, one hopes, be extended to the rest of physics.”

Well, wouldn’t this just solve everything? If I knew the answer to this one, I would be rich, successful, and my robot would be doing lines of coke with my Nobel Prize certificate. The problem, obviously, is that even if we were right, we would probably never know it. Unless, if we’re lucky, those radical changes and breakthroughs actually get made.

Mitchell: “4. The key to understanding consciousness (which I take to be an aspect of your concept of sentience) thus turns out to be something which must come, at least in part, from within. I might go further and suggest that the key is to be found in the concept of self. The intuitive pre-scientific experience of the world can be roughly described as an experience of things through the senses and an experience of the self through thought. The scientific experience of the world focuses very much on things, even though it employs thought to make progress. When science attempts ontology, it attempts to explain everything using concepts developed on the thing side of the thing/self dichotomy. But in fact a whole other set of concepts - such as 'sensation' and 'thought'! - can be developed by making the self the object of investigation. The synthesis of physics and ontology will lie in knowing some NCC-like thing, formally described in the sense-originated language of physics, to be the very same "thing" known introspectively as the self.”

To clarify, consciousness is *not* a part of sentience. Some definitions we will need for this kind of discussion:

Sentience (sentire, "to feel"): the utilization of sensory organs; the ability to feel or perceive subjectively, not necessarily including the faculty of self-awareness

Sapience (sapere, "to know"): usually defined as wisdom; the ability of an organism or entity to act with judgment

Self-awareness: the explicit understanding that one exists, including the concept that one exists as an individual, separate from other people, with private thoughts

Intelligence: a property of mind encompassing many related abilities, such as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn. Some definitions also include traits such as creativity, personality, character, knowledge, or wisdom, though some psychologists prefer to leave these out.

Consciousness: a characteristic of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment

So despite all the talk of “sentient machines”, consciousness is not a necessary component. To a degree, a motion sensor is sentient. However, sentience alone is just easy enough to be no fun (unless you’re DARPA). I’m personally a fan of machines that are not only sentient and intelligent, but fully sapient, self-aware, and *conscious*.
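
To show how low that bar sits, here is a sketch (my own example, nobody's shipping product) of a trivially “sentient” system: it perceives its environment and reacts, yet carries no model of itself whatsoever.

```python
class MotionSensor:
    """Sentient in the bare sense: it feels and reacts. Nothing more."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def sense(self, reading):
        # Perceiving the environment is the whole of its "inner life".
        return reading > self.threshold

    def react(self, reading):
        if self.sense(reading):
            print("Motion detected: light on.")
        else:
            print("All quiet: light off.")

sensor = MotionSensor()
sensor.react(0.9)   # responds to a stimulus, but nothing in here "knows" it exists
```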

Science does indeed focus on “things”; that is its long-established aim. While Big Science gets all the headlines now, it will only lay the foundations for AGI (AGC?) by completing the major groundwork: reverse engineering the brain, and so on. Luckily, increasing numbers of philosophically minded rogue cognitive scientists (at least at the University of Chicago) are wandering in this direction.

Mitchell: “5. However, it is not likely that this degree of insight is necessary in order to achieve AGI, nor do I think it likely that an entity must actually possess consciousness to have intelligence as presently understood - because intelligence is presently understood in terms of functional competence, the ability to solve problems or achieve goals. Earlier, I distinguished between experience of things and experience of the self, though they are both just aspects of experience as a whole. Similarly, one could talk about intelligence regarding things and intelligence regarding the self - meaning a capacity to get the facts right, solve problems, etc., involving things or self, as the case may be.”

Agreed. AGI is likely to be significantly easier to work on and understand than consciousness, if only because we so thoroughly lack tools for determining whether or not something is self-aware.

Mitchell: “6. In AI it is sometimes argued that self-intelligence is the key to consciousness - the day when Cyc knows that propositions about "Cyc" are about itself, will be the day when Cyc wakes up. I do not agree, and until one has an ontological theory of selfhood, this is actually just mysticism. I can certainly exhibit a toy ontology in which this would not be so: simply suppose that consciousness is always and only monadic (i.e., that the only things which truly have consciousness are elementary in some sense), but that we assemble groups of these elementary things into causal aggregates which have the ability to report correctly on their own properties as an aggregate. Functionally this is self-intelligence, but ontologically it is not.”

Hm. I’d agree at first glance, but I’m uncertain how parallel the situations are. If the monadic “things” can report only on the properties of the aggregate, the point stands. However, I don’t think any AI researcher – well, okay, Minsky doesn’t count – would seriously assert this. If they did, well, no wonder we don’t have intelligent machines running around already – they’re complicating the issue wildly. AGI is not the hardest goal. Consciousness is.
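
Mitchell’s toy ontology is easy enough to exhibit in code. Here is a sketch (the names are mine, not Cyc’s actual API) of an aggregate that reports correctly on its own properties, i.e. functional self-intelligence, while nothing in it is even plausibly conscious:

```python
class Aggregate:
    """A causal aggregate that can answer questions about itself."""

    def __init__(self, name, parts):
        self.name = name
        self.parts = parts   # the "monadic" elements of the toy ontology

    def report(self, query):
        # Correct self-report, built from nothing but lookup:
        facts = {
            f"how many parts does {self.name} have?": len(self.parts),
            f"is {self.name} this very system?": True,   # 'self-knowledge'
        }
        return facts.get(query.lower(), "unknown")

cyc = Aggregate("cyc", parts=["kb", "inference-engine", "parser"])
print(cyc.report("Is cyc this very system?"))        # True, functionally 'self-aware'
print(cyc.report("How many parts does cyc have?"))   # 3
```

Functionally, the system knows that propositions about “cyc” are about itself. Ontologically, nothing has woken up, which is exactly Mitchell’s point.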

Mitchell: “7. Revisiting proposition 1: we don't need big new ideas to achieve AGI, but we do need big new ideas if we are to understand what we are doing when we do it. We know enough to copy it functionally, but not enough to understand it ontologically. This has to be a dangerous situation, because even the most benevolently programmed AGI will still be capable of making a mess if its ontology is wrong.”

This is indeed the consensus when it comes to AGI/AGC. Is it possible that we will create a conscious machine and not realize it? Entirely. Is it smart to take steps to prevent such a discovery from falling through the cracks, or to refute the religious fanatics who will swear that a machine can never be conscious? Absolutely.

The situation will indeed be dangerous; no doubt the first AGIs we create will be utterly, completely, fantastically insane. God help the poor things when we first hit the “Run” button. Why? Because we’re bad programmers. However, after much vision and revision, we may well have a genuinely generally intelligent machine. The functional approach is likely to be the most successful, but it is also, unfortunately, the one we are least likely to understand.

What, then, do we do? We will likely need to pour more of our mental resources into understanding such things as consciousness overlap, the abstraction theory of intelligence, ontologies of self, and unified theories of mind. The biggest obstacle is that we have no concept of what consciousness is, in our own minds or anyone else’s. We can try to think outside the box on this one, but we haven’t even found the box, nor do we know if one exists.

Opinion Statements:

- Programmers will not approach this the right way for at least another generation; the stigma in the hard sciences against fields like psychology and phenomenology will prevent it, until people stop caring.

- People are likely to get discouraged and quit; the more we learn about the brain, the further away an understanding seems.

- There are most likely “laws” of the mind; we should not underemphasize unified theories of cognition.

- Everyone should read Aristotle. With a grain of salt handy.

- Think about the Fox and the Crow. You know, the fable your mom read to you when you were little. Why are we impressed by the Fox? Modules, prediction, and abstraction theory. It’s as old as time. (A toy sketch of what I mean follows below.)
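
Here is that toy model of the Fox (purely illustrative, my own construction): it holds an abstract, modular model of how crows behave and searches for the action whose *predicted* consequence delivers the cheese.

```python
# The fox's abstraction of crow behavior: action -> predicted outcome.
crow_model = {
    "flatter": "crow sings, cheese drops",
    "beg":     "crow ignores fox",
    "attack":  "crow flies off with cheese",
}

def plan(goal, model):
    """Pick the action whose predicted outcome contains the goal."""
    for action, predicted_outcome in model.items():
        if goal in predicted_outcome:
            return action, predicted_outcome
    return None, "no action predicted to achieve goal"

action, outcome = plan("cheese drops", crow_model)
print(f"Fox chooses to {action}: predicts '{outcome}'")
```

Prediction over an abstract model of another mind. That is why the Fox impresses us, and it really is as old as time.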

____________________________________

I leave you with an encouraging quote from that fantastic and popular (in 1997) book, Darwin Among the Machines:


“In the game of life and evolution, there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”

--George Dyson, Darwin Among the Machines

1 Comment:

At 7:10:00 PM, Blogger Mitchell said...

Hi again. A response to the response will appear any day now. :-)