December 9th, 2003, 10:20 PM
Has anyone seen any of the Terminator movies? (I've seen all 3.) In them, an advanced US defense program becomes smart and self-aware, and launches nukes against all the other countries in an effort to destroy humanity so that machines can take over. Although I think that is a little far-fetched, you have to wonder: how far are we from AI? If you look at how much computers have advanced in 50 years, from massive pieces of high-maintenance machinery that took up several rooms, cost millions of dollars, and had to be operated by only the best computer specialists, to today's standard personal computer, you have to admit that computers have evolved quite a bit. I mean, now there's also that robot dog made by Sony, which can actually learn the objects and floor plan in your house to avoid bumping into things and tripping again. I personally don't think it is that far off; perhaps 50-75 years, I'd say (if computers are still being used), until we have one that is capable of human thought. In my opinion, though, it would probably be made by a government-funded project; I really don't think we'd be seeing anything like Robin Williams in Bicentennial Man anytime soon after it was completed. What do you guys think about it?
I know there was another AI thread a few months back, but I want to bring the topic back.
December 10th, 2003, 12:01 AM
Well, I'm sure advancements in the AI field are fast and numerous. I don't see the whole 'Terminator' plot as far from becoming possible... what's worse, though, is the 'Pinocchio' syndrome [if you have seen the movie 'AI'], where a machine becomes aware of its status as a replica of humans. One wonders: is it ethical to 'play' like that, bringing to life a machine that is aware it's only second best? I guess that's one way society could get into a Terminator-like situation.
Luckily, there's one philosophical idea that gives hope to the more pessimistic. It is said that nothing can create something better than itself. If that holds true, humans cannot devise something capable of intelligence superior [I am including 'Emotional IQ' here as well] to their own. I don't know; time will bring about some answers, I guess.
December 10th, 2003, 12:24 AM
December 10th, 2003, 12:31 AM
[I'm not sure if that's a reply to my post, but anyway...]
The fact that AI is being researched is no secret. And obviously there are computers, biomechanical organisms, maybe, that have been created in this process. But there's always more to know and improve. It is hard to imagine a computer that would exceed the abilities of humans any time soon... and if it could be done, why wouldn't we want to improve ourselves first? [I'm not saying "Who's gonna code them?" because it's a learning process for the computers as well.]
December 10th, 2003, 04:35 AM
There is no code needed for AI (IMO), only sensory input. The computing power is there; we just need to figure out how to randomize abstract relations and assimilate associative data.
Every now and then, one of you won't annoy me.
December 10th, 2003, 07:00 AM
Well, humans need to figure that out, right? Hence they need to come up with some kick-ass algorithms that can yield certain results... it's hard to simulate a person's impulsivity, or teach one how to interpret certain actions a certain way given a certain set of information. Hell, most humans can't do it!
I see AI as inevitable, so don't get me wrong. I simply think true AI is still a long way off. But we never know what 'secret' projects somebody, somewhere, is working on.
December 10th, 2003, 06:06 PM
Oh not the "code" argument again.
To "process" an input you need something like a pre-coded or pre-wired mechanism to do something with the input. And not only process the input, but do things with it that we can't predetermine, like building a new connection in silicon that we never intended to design in the first place. That is true AI; anything else is just a computer with no room for consciousness.
If one just makes a machine with no instructions or code to give it a start, then all you have is a bunch of silicon or some other substance sitting there, waiting to evolve out of nothing. How can man write code that gives instructions to allow an input and stimulus to grow into something with the ability to expand, and then issue a conscious thought?
The code would be something to process, say, a touch: "Here is a pressure on a specific quadrant, from a sensor." Some instruction running in the background triggered that response. Now, the instruction might have a specific place to store that information and another instruction set to respond, possibly activating another nearby sensor, comparing the measured result, and then polling a data store to see if that same sensor activity was already recorded, and when. Maybe the same thing happened last week? Then an instruction may say, "If two or more pressure sensors record similar readings, turn visual units number 1 and 2 in the direction of the correlated sensors." Then some code could be run to analyze the varied inputs and identify a cause. Then... well, you should get where I am coming from on the code issue.
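That pre-wired instruction chain can be sketched in a few lines of Python. Everything here is hypothetical, invented for illustration (the sensor names, the 0.1 similarity threshold, the 5-second window, the log structure); it just makes the "record, correlate, then orient the cameras" rule concrete:

```python
# Hypothetical log of past sensor events: (timestamp, sensor_id, pressure)
event_log = []

def on_pressure(sensor_id, pressure, timestamp):
    """Pre-coded instruction: record the stimulus, then check for correlations."""
    event_log.append((timestamp, sensor_id, pressure))

    # Poll the data store: did a different sensor report a similar reading recently?
    similar = [
        (t, s) for (t, s, p) in event_log
        if s != sensor_id and abs(p - pressure) < 0.1 and timestamp - t < 5.0
    ]

    # Rule: two or more correlated pressure readings -> orient the cameras.
    if similar:
        return f"turn visual units 1 and 2 toward sensors {sensor_id} and {similar[0][1]}"
    return "store and wait"
```

The point of the sketch is that every reaction, even the "learned" correlation, is still driven by an instruction somebody wrote in advance.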
The machine would indeed learn and evolve through input, just like humans. But something has to be processing the information; it's not just coming out of thin air. Visual units number 1 and 2 need to be "told" to turn the head or camera and take in the information as input, then "told" where to store the input, and then another mechanism may have to "tell" the storage facility to recall those captured images at a later date.
Instructions that leave room for changes initiated by the program itself would be very difficult to conceive beyond a basic intelligence that is not conscious. Again, we would have to write code capable of writing additional code that cannot be foretold. It would be like Windows recognizing it needed a patch for some attack it could never have preconceived, and then activating its own defenses to deny further attacks, with no intervention outside of my computer, meaning its code, processor, memory, and I/O system. It has all the necessary inputs, but how do you write code that handles future unforeseen events? Especially in something that is non-biological. I have made the point that man would be designing something more complicated than his own existence.
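A toy version of "code that extends its own code paths" is easy; the hard part is doing it open-endedly. This sketch (all names made up for illustration) is a dispatcher that, when it meets an event type nobody anticipated, synthesizes and installs a new handler at runtime, which is a pale shadow of the self-patching idea above, but shows the mechanism:

```python
# Toy self-extending dispatcher: the handler table grows at runtime.
handlers = {
    "pressure": lambda data: f"logged pressure {data}",
}

def make_default_handler(event_type):
    """Synthesize a handler for an event type we never anticipated."""
    seen = []
    def handler(data):
        seen.append(data)  # the new rule accumulates its own inputs
        return f"new rule for '{event_type}' ({len(seen)} samples so far)"
    return handler

def dispatch(event_type, data):
    if event_type not in handlers:
        # Unforeseen event: the program installs a new code path for itself.
        handlers[event_type] = make_default_handler(event_type)
    return handlers[event_type](data)
```

Note the catch that the post is driving at: even the "unforeseen" branch was foreseen by the programmer who wrote `make_default_handler`. Genuinely novel responses would need the machine to invent that factory itself.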
December 11th, 2003, 03:08 PM
PsychoJester: wow, that's the weirdest line of crap I have ever heard. First, the NSA is still working mostly on 486-based systems (they are scouring eBay for spare parts). Second, computers based on neurons are not secret; I have a buddy who was part of a program at Purdue University in Indiana. They were all biochem and CEE (computer/electrical engineering, i.e. chip design) majors, working on growing computers. While they had some success, they determined that classic chip design was growing in power fast enough that their system could not compete. Remember, a universal machine (such as a computer) can emulate any other universal machine regardless of its makeup; I have seen computers made of Tinkertoys, and I have seen designs for hydraulic computers. In the end it's all the same. Unfortunately, the human mind is not a universal machine. Until we figure out how to be illogical and intuitive in our computers, until we can have some sort of self-error-correction in computers, we will never have AI. (If the human brain takes some damage, it routes around it; a computer crashes.)
If you are truly interested in this, read Danny Hillis's work. A good starting place is "The Pattern on the Stone: The Simple Ideas That Make Computers Work" (Science Masters Series).
Who is more trustworthy than all of the gurus or Buddhas?
December 12th, 2003, 05:53 PM
Parts of the NSA are also EXTREMELY advanced, replacing entire systems in a matter of months.
December 12th, 2003, 06:42 PM
If you are interested in AI, read more about neural networks and fuzzy logic.
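For a taste of what a neural network is at the smallest scale, here is a single artificial neuron (a perceptron) trained on the logical AND function. The learning rate, epoch count, and zero initialization are arbitrary choices for the sketch, not anything canonical:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single neuron (perceptron) on labeled binary inputs."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND truth table as training data
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

The neuron isn't told the AND rule; it finds weights that reproduce it from examples, which is the core idea behind the networks mentioned above (real networks just stack many such units and use smoother update rules).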