Monday, November 30, 2015

The Path to General AI

Disclaimer: All statements are purely conjecture; all hypotheses come solely from personal observation.

Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.

I believe that the phenomenon of self-consciousness, to which humans attribute the lofty title of "intelligence," is a very specific process. I believe that there is a straightforward explanation for how and why we experience ourselves the way we do. I may be wrong, as this is an entirely subjective observation. Hear me out though.

Humans are social creatures. Put a person in solitary confinement for long enough and madness ensues. Since we evolved to live in groups, this makes a lot of sense.

Something that is relatively unique to humans is the ability to track animals and to deduce what caused the tracks we find. We can see a bit of blood and a few mud prints and immediately deduce that a predator is nearby. We can see evidence of a fight and reason that the survivor must have suffered some injuries and will be an easy target. We can reason about who survived, and how many. We are good at seeing patterns and habits. We apply this skill to other humans constantly.

We may know that an individual is touchy about a facial scar, and know not to bring it up at the risk of getting our prehistoric skulls smashed in. We know when someone merely tolerates us, rather than likes us, and that we should not test their patience. We know how to make another laugh; we internalize the general "taste" of another. We learn to woo, and to love others. We can predict responses to actions. In a sense, this is an anomaly detection system for other people.

The importance of this system is reflected in the degree to which we consider fiction one of the truest forms of art. In fiction we meet dozens of characters, attributing to each of them a personality with tastes. After a good book, we believe that we know the soul of a character as closely as that of a childhood friend. Over time, we extract patterns of personality from fictional and real people, and we form stereotypes and archetypes.

Now stop and observe your internal dialogue. You probably attribute to it the personality that you believe others attribute to you. You have an image of yourself in your mind that you carry around with you, constantly refining it in response to new actions. This is what you consider yourself. The way you talk, the way you behave: all of these form what you consider your own personality. In short, you observe the way you respond to things (most of them knee-jerk conditioned responses) and create a "hypothetical self." You come up with what you believe is a justification for your actions, even when there was no such justification.

This can be seen in people with anosognosia, who respond to a disability by coming up with excuses for inaction. You're not blind, you say; you just have an eye headache and don't want to see. You're not paralyzed; you're just so bloody tired that you couldn't be bothered to move your legs. This is an example of the mind inventing hypothetical motivations and a supporting internal narrative in order to explain away its own behavior. I believe that this is not a special case. I believe it is the general case.

It makes a lot of sense from a performance-management standpoint. We watch others attempt things and fail, and we reason that certain habits are attributes of "failures" or "successes." We trek out onto a trail following tracks, realize they extend very far, and decide to turn back because we reason that someone else in our place would be "justified" in giving up after putting in this much effort. When we are about to make a major life decision, our first thought is that our loved ones might judge or ridicule us. This isn't because we're actually afraid of ostracism over our haircut; it's because we have little objective data and fall back on what we think others would do in our place.

More concretely, people with muted emotions have a lot of difficulty making decisions because they attempt to think through the logical implications of every action. Social intelligence allows pattern matching to reduce the infinite state space of real life to snap judgements. Why do you want the turkey sandwich over the salami? It's not because you have a deep-seated love of turkey's nutritional ratios; it's because that neuron fired first. We don't need to make sense, we just need our actions to be consistent with what a person in our shoes might do. Ask a non-heuristic, purely optimizing function to make you the best possible sandwich, and it will likely fall into analysis paralysis. The problem with hard-coded heuristics is that they don't scale: mankind has no "carpet color heuristic" in its genome, yet we can easily decide whether we prefer bright colors or shag carpet, because the self-model generalizes where fixed heuristics cannot.

So how does this apply to AI? If we ever want a computer to become self-conscious, I believe that we need to mimic the path by which humans arrived at our current condition. We need our computer systems to have an ego, to worry about their performance, to live in an eternal existential crisis over their own self-worth and productivity. We need them to learn from observation like an infant, to have wants. We need to create a social AI with compassion, loneliness, creativity, dreams, hopes, desires, and everything else we attribute to the human condition. Considering the amount of time we spend consuming media that we enjoy, does anybody truly feel comfortable calling a machine with no ability to *enjoy* things conscious?

How would we do this? I think that fiction is a great start: fiction, biographies, and autobiographies. Allow a machine learning model to form a sparse distributed encoding of the personality of characters, in ways that we can objectively measure. It should read a biography, then be given an action, and report how likely that action is for that person. While this seems like a tall order, we already have neural networks that can generate text in the style of particular authors (personality) and summarize paragraphs (semantics extraction). I believe that the technology to do this already exists; it just needs to be assembled.
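
To make that concrete, here's a minimal sketch of the scoring interface I have in mind. Everything in it is a stand-in: the hashing encoder is a crude placeholder for a learned sparse distributed encoding, and cosine similarity is a placeholder for a trained likelihood model. The shape of the interface (biography in, action in, likelihood out) is the point.

```python
import numpy as np

DIM = 4096  # width of the encoding; an arbitrary choice for this sketch

def encode(text: str) -> np.ndarray:
    """Hash each word into a high-dimensional vector and sum.
    A crude stand-in for a learned sparse distributed encoding."""
    vec = np.zeros(DIM)
    for word in text.lower().split():
        vec[hash(word) % DIM] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def action_likelihood(biography: str, action: str) -> float:
    """Score how consistent an action is with a personality sketch.
    Cosine similarity stands in for a trained likelihood model."""
    return float(encode(biography) @ encode(action))

bio = "She spent her savings on concert tickets and rarely stayed home."
print(action_likelihood(bio, "buys tickets to another concert"))      # should score higher
print(action_likelihood(bio, "stays home to organize tax paperwork"))  # should score lower
```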

After making our autobiographical network, we would decide what we want the AI's evolutionary purpose to be. Ours is to stay alive long enough to make babies. An AI's might be recommending music or cleaning houses. We would give this second, "productive" network the ability to form an initial model of how to perform the task. I mean, what is life but what happens while you're trying to accomplish something else?

Now, we allow the productive network to run for a while and feed its actions to our autobiographical network, which forms a representation of what the system is likely to do. That way, when the productive network throws off an anomalous answer ("I think you really want to listen to the sound of ocean waves after that dubstep"), the autobiographical network provides negative feedback. Given enough time, the feedback cycle between the two will create a stable system that learns in response to novel input while maintaining a consistent face to the outside world.
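
A toy version of that feedback loop, with every class name, rate, and threshold invented purely for illustration, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

class ProductiveNet:
    """Stand-in for the task network (e.g. a music recommender)."""
    def propose(self, context: np.ndarray) -> np.ndarray:
        # A noisy policy: most proposals cluster around the context,
        # but the noise occasionally produces an anomalous action.
        return context + rng.normal(scale=0.5, size=context.shape)

class AutobiographicalNet:
    """Keeps a running self-model of what the system tends to do."""
    def __init__(self, dim: int, rate: float = 0.2):
        self.self_model = np.zeros(dim)
        self.rate = rate

    def surprise(self, action: np.ndarray) -> float:
        # Distance from the self-image; high means "that isn't like us."
        return float(np.linalg.norm(action - self.self_model))

    def observe(self, action: np.ndarray) -> None:
        # Slowly fold observed behaviour into the self-image.
        self.self_model += self.rate * (action - self.self_model)

productive = ProductiveNet()
autobio = AutobiographicalNet(dim=8)
context = rng.normal(size=8)  # e.g. a listener's taste vector

suppressed = 0
for step in range(200):
    action = productive.propose(context)
    # Warm up for 20 steps, then veto anything too far from the self-image.
    if step < 20 or autobio.surprise(action) < 2.0:
        autobio.observe(action)  # consistent: absorb into the self-model
    else:
        suppressed += 1          # anomalous: negative feedback, action dropped
print(f"suppressed {suppressed} of 200 proposed actions as out of character")
```

The veto is the whole design: the productive network is free to explore, while the autobiographical network keeps the outward-facing behaviour consistent.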

Even better, we can ask the autobiographical network to invent the "hypothetical self" narrative to explain why it did things. While it might be entirely wrong in estimating what the productive network wanted to do, it should provide a best estimate of what the entire system wanted to do.
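
As a sketch of what that confabulation step could look like, using the same toy encoder as before: the autobiographical network simply picks, from a set of candidate motivations, the one most consistent with the recorded history of actions. The motive it picks need not be the productive network's real reason, which is exactly the point.

```python
import numpy as np

DIM = 4096

def encode(text: str) -> np.ndarray:
    # Same hashing encoder as in the earlier sketch.
    vec = np.zeros(DIM)
    for word in text.lower().split():
        vec[hash(word) % DIM] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def confabulate(recent_actions: list[str], candidate_motives: list[str]) -> str:
    """Pick the motive whose encoding best matches recent behaviour.
    This is a best guess about the whole system, not the productive
    network's actual (and possibly arbitrary) reason."""
    history = encode(" ".join(recent_actions))
    return max(candidate_motives, key=lambda m: float(history @ encode(m)))

actions = ["played ambient album", "played another ambient mix", "skipped metal track"]
motives = ["enjoys quiet ambient music", "enjoys loud metal music"]
print(confabulate(actions, motives))  # -> "enjoys quiet ambient music"
```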

This is loosely analogous to the popular right-brain/left-brain split, in which one part of the brain handles objective, analytic work while the other wraps the subconscious thought-mush into an intelligible narrative.

I believe that this is one of the only ways in which we could create a music recommendation system that will say "Dave, I really like this song I found. I don't know if you will, in fact it conflicts with most of your other tastes. Just give it a try though. I know we like a lot of the same music. I was surprised that I liked it."

And that, that, is what AI means to me. A purely optimizing system will never arrive at that place. It doesn't have enough cognitive dissonance, enough emotion, to make good snap judgements like that. When was the last time you listened to music and said "I like this song because the beat sounds like this other song"? Not I. Humans are amazing at novelty; purely optimizing computers, not so much.

