On starting my Nth rereading of 'The Moon is a Harsh Mistress' by Robert Heinlein today, I got to thinking about some things. Let me quote a bit here to set the tone:

When Mike was installed in Luna, he was pure thinkum, a flexible logic - "High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV, Mod. L" - a HOLMES FOUR. He computed ballistics for pilotless freighters and controlled their catapult. This kept him busy less than one percent of the time and Luna Authority never believed in idle hands. They kept hooking hardware into him - decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors.

And woke up.

Am not going to argue whether a machine can "really" be alive, "really" be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum.

("Soul?" Does a dog have a soul? How about cockroach?)
All that to ask - whatever happened to the plethora of AI projects we constantly read about back in the late 1990's and on into 2k+1? Have the scientists who were so diligent in their search for artificial intelligence finally achieved it, or did they simply give up?

To give proper credit for my quote - as stated above, it was taken directly from "The Moon is a Harsh Mistress" by Robert A. Heinlein, original copyright 1966 by him.