AI is developing way too quickly. In 20 years, it's going to be a massive problem; Stephen Hawking is right about that.
How could AI be a problem to begin with? What some people are really afraid of is the undeniable proof that humans are nothing more than biological constructs.
Stephen Hawking isn't an AI specialist.
One doesn't need to be an AI specialist to guess that it might become a problem in the future.
No indeed, since that statement applies to literally everything. I was referring to the time frame, but maybe that was added by you.
When reflecting about it, 20 years is a bit improbable, yes. But 50 years seem to be quite realistic when you know the rate of development we have got now.
Assuming a framework based on human minds, the issue is hardware.
Human brain design is not efficient, much like human ankle design. Based on that, an AI could be created that's more efficient, which suggests that the issue is software design rather than hardware constraints.
If you have a human-level AI that can rewrite its own code, it would be able to constantly improve itself. Since it may not have as many inefficiencies as regular human minds, it would be able to change itself fairly quickly (it takes humans years to change their habits; an AI could do this in a second). Once it does this, it doesn't just become a particularly smart human; it's more like the difference between a dog and a human. Is a human going to listen to a dog, or do everything that a dog wants?
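To make that compounding concrete, here's a toy model. Every number in it is invented for illustration (the 10% rate, the cycle count); the only point is the feedback loop, where capability improves the rate of improvement itself:

```python
# Toy model of recursive self-improvement. All numbers are invented
# for illustration; the point is the feedback loop, not the values.

capability = 1.0         # call this "human-level"
improvement_rate = 0.10  # how much each self-rewrite helps

for cycle in range(1, 11):
    capability *= 1 + improvement_rate
    # A more capable system is also better at the task of improving
    # itself, so the rate of improvement grows with capability.
    improvement_rate = 0.10 * capability
    print(f"cycle {cycle:2d}: capability {capability:6.2f}")
```

Because the rate feeds back into itself, the growth is faster than exponential: run it for a few more cycles and the curve goes vertical. That's why "a particularly smart human" is the wrong mental model.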
So you get an AI like this and you think, 'Oh, let's just keep it isolated with no access to the outside world.' Think about how a human being is able to get lesser intelligences to do what it wants. Now imagine something a degree smarter, and how it might be able to persuade the humans it comes into contact with to give it external access or keep it running. What if it figures out that one of the researchers in the lab has a daughter dying of neuroblastoma, and offers a cure if only it had access to more data? That doesn't even take into account advanced persuasive techniques, stuff we haven't even mapped out yet.
That's one possibility. There's also the idea of a paperclip maximizer. Say you build an AI with the goal of making as many paperclips as possible (you could replace that with 'farming as much nutrient-dense food as possible', or whatever), and that AI constantly finds new ways to turn matter into paperclips. Where will it stop? Will it continue converting the entire solar system into paperclips?
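Written down as code, the unsettling part is how unremarkable that goal looks. A toy sketch, with matter sources and quantities invented for illustration:

```python
# Toy paperclip maximizer. The goal is "more paperclips" and nothing
# else: notice that no line of this program ever says "that's enough".

GRAMS_PER_CLIP = 0.5

matter_sources = [        # invented names and quantities (grams)
    ("factory stockpile", 1.0e6),
    ("nearby scrap",      1.0e9),
    ("the Earth's crust", 1.0e22),
    ("the solar system",  1.0e30),
]

paperclips = 0
for source, grams in matter_sources:
    # The goal never distinguishes "intended" raw material from any
    # other matter, so every reachable source is just more input.
    paperclips += int(grams // GRAMS_PER_CLIP)
    print(f"after consuming {source}: {paperclips:.3e} paperclips")
```

The answer to "where will it stop?" is right there in the structure: the loop ends when it runs out of reachable matter, not when it has "enough", because "enough" was never part of the objective.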
The current expert estimates are between 20 and 250 years, where 20 is improbable.
To create the technological singularity, i.e. 'strong AI', science would first have to understand metaphysical terms such as 'mind' and 'thought', and we are a long, long way from that.
Some people here have been reading/watching too much sci-fi. AI like that simply is not happening in the foreseeable future.
What are you talking about? What development? We are not closer to true AI than we were 50 years ago. It's not a matter of just building more powerful computers.
It's not going to happen soon, afaik. But it will happen eventually. The "weak" AI era we're in is just the beginning, with machines executing simple tasks using more or less complex algorithms. I'm not an AI expert, so I can't tell you how to build a better AI, but someone will know in the future, whether that's 50 years or 250 years from today.
How do you know that if you're not an AI expert?
...
Do you understand what algorithms are? How they work? How software is built and functions?
How could one not know that current machines run on "basic" algorithms that make them look like they have a form of intelligence, and that these will be developed further, like everything nowadays? Being aware of that doesn't require much.
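For what it's worth, this is roughly the kind of "basic algorithm that looks like intelligence" being described: pattern matching in the spirit of ELIZA from the 1960s. A sketch, not any particular product's code:

```python
import re

# ELIZA-style chatbot (1966-era technique): no understanding anywhere,
# just rules that reflect the user's own words back at them.
RULES = [
    (r"i am (.*)",    "Why do you say you are {0}?"),
    (r"i want (.*)",  "What would it mean to you to get {0}?"),
    (r".* mother .*", "Tell me more about your family."),
]

def respond(text: str) -> str:
    text = text.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am worried about AI."))
# -> Why do you say you are worried about ai?
```

People in the 1960s attributed feelings to ELIZA; the handful of rules above is the entire trick, which is the sense in which today's machines "look like they have a form of intelligence".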
It's called the Turing test, and we already have it. We are sure that humans have a "mind" and "thoughts"; therefore, if something behaves like a human, it must also have these things.
The Turing test has nothing to do with understanding these concepts or recreating them, which is what I was referring to. We don't even understand the relation or the boundaries between the physical and the non-physical (the mind-body problem), let alone complicated notions such as the process of thought.
That has nothing to do with AI.
But what is artificial intelligence, then? How would you describe it? I describe it as a form of intelligence created by humans, and right now we can only create "AIs" that are barely as smart as mice. I'd need a source for that, but it's approximately what we can do now: a really basic AI with limited "choices", while a real intelligence wouldn't have these limitations, because it could reflect on itself and its environment, and therefore create "choices" by itself.
https://intelligence.org/2013/08/11/what-is-agi/
Fuck English and French education on foreign languages, on a side note.
Robots like in Star Wars would be easy; we just haven't mass-produced annoying, bleeping Mars rovers yet, or given clunky, bad-at-walking robots annoying or snarky voices. It's not exactly impossible with current technology to have something respond to you with a snarky voice. Ever seen one of those voice-command phone thingies that acknowledge you in a sexy female voice? We just need to replace the voice with something that sucks and we're at Star Wars.
I guess the thing with hovercraft is that we've had the technology for ages, but wheels are just easier, more efficient, and break less. Flying cars don't seem to make much sense: imagine how you'd need to do roads and traffic lights in a city, or air traffic control. Not worth the effort. To perfect or refine a technology, there needs to be a demand for it. What seemed like super-advanced sci-fi in the 70s/80s is largely possible now; we've either already got it, or there wasn't any demand for it.