
The AI Revolution: Road to Superintelligence


Dooz:
this is the most fascinating/important shit you're ever gonna not read, cretins

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html#

Torost:
Read Nick Bostrom's book "Superintelligence" when it came out.
It is somewhat poorly written, but he gets the message across.

Fascinating to wonder if, when, and how the singularity will take place.

Not in our lifetime is my bet.

Kafein:
I have one problem with Singularity theory: intelligence cannot make a robot escape the laws of physics. With electronic computers, I don't think we'll ever have the power to reproduce even one complete human brain, which would be the precondition for reaching super-human intelligence in the first place. The author doesn't understand Moore's Law, and pretty much everything he says collapses from there.

Suppose we manage to create a computer with human intelligence capable of designing more intelligent computers. More intelligence isn't about programming but about hardware resources; otherwise we'd be able to do this on a pocket calculator, and we can't. We can only concentrate so much computing power into a small space before wasting stupidly large amounts of energy as heat that has to be evacuated, as any supercomputer in the world shows.

And that's only the beginning. Supercomputers are not (and could not be) designed as an efficient implementation of neural nets (which is the structure of the human brain and of any human-like AI program). That effectively makes the term "cps" a meaningless construct in this context, since a traditional computer incurs a lot of overhead for each "neuron operation".

On top of that, exponential intelligence requires resources growing exponentially at an even faster rate. Why? Because, as I said, you can't concentrate infinite computing power in one place; eventually you have to build larger and larger computers to keep the heat manageable. The bigger the machine gets, the longer information takes to travel from one end to the other. Signals move at around c/2 in copper wires. That's fast, but it's still a serious limitation that significantly shapes the design of all modern computers.
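To put rough numbers on that latency point, here's a quick back-of-envelope sketch in Python. The c/2 propagation speed is the figure from the paragraph above; the 3 GHz clock and the machine spans are numbers I'm assuming purely for illustration, not measurements of any real system:

    # Rough sketch of the size-vs-latency trade-off described above.
    # Assumed numbers: a 3 GHz clock and a few illustrative machine spans.

    C = 3.0e8                 # speed of light in vacuum, m/s
    signal_speed = C / 2      # ~c/2 propagation speed in copper, per the post

    clock_hz = 3.0e9          # assumed clock frequency: 3 GHz
    clock_period_s = 1.0 / clock_hz

    for span_m in (0.03, 0.3, 1.0, 10.0):   # chip, board, rack, room scale
        delay_s = span_m / signal_speed
        cycles = delay_s / clock_period_s
        print(f"{span_m:>5} m span -> {delay_s * 1e9:6.2f} ns one-way, "
              f"~{cycles:6.1f} clock cycles at 3 GHz")

Under those assumptions, a signal crossing a 10 m machine is in flight for on the order of a couple hundred clock cycles one way, which is the sense in which making the computer bigger doesn't come for free.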

Note that I can totally imagine it happening with devices specifically designed to be actual networks rather than multicore processors. That brings a host of other problems though, and we're technically nowhere near it.

Corwin:
waitbutwhy is one of my favorite places on the Internet. Awesome blog, and the superintelligence article is one of the most fascinating ones. Just want to say you should READ it.

Dooz:
Do any of the points of skepticism take into account the accelerating growth of knowledge, and that whole thing about not even being able to fathom what's to come, much like people in past centuries couldn't have fathomed the things of today? If everything being used to cast doubt on this happening stays strictly within the paradigm of human knowledge as it currently stands, it doesn't really hold up, because what is considered possible keeps shifting at an ever-increasing rate.

Just because we can't imagine something happening based on the current limits of knowledge doesn't mean it can't happen. Those limits are constantly expanding, and the world we think is flat will soon become round.
