
Week 1 [03.10-09.10.2016] Can we build AI without losing control over it?

Watch the presentation Sam Harris: Can we build AI without losing control over it? at https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it#t-22278 and present your opinion on it/discuss it. 

Comments

kondrat said…
It is a terrifying vision of the future, but in my opinion, if we keep our curiosity at an adequate level, there is no risk at all. We can build real artificial intelligence without equipping it with self-consciousness.
I don't think it will ever happen. There should always be a backdoor left in the software that lets humans turn off AI systems.
Unknown said…
As long as humans are involved in programming AI, there will always be a risk of a serious bug that could be dangerous for our species. But as you said, if we keep it at an adequate level, we will still have a chance to fix it and prevent disaster.
Artificial intelligence and intelligent machines can help us in life. Life is better with them, but these machines can't be more intelligent than we are, because man must master the machine and not vice versa.
I wrote an interesting article about it in the past; if somebody is interested in that particular subject, here is a link: http://onlineclasspjwstk-malgorzataswierk.blogspot.com/2015/12/week-10-0712-13122015-artificial.html

Personally, I still hold the same opinion: humans are capable of creating a thing that will be smart enough to improve itself, and at some point we may simply lose control over it, not necessarily physically, but logic-wise, as our AI might reach the point where we no longer understand it or its actions.

I don't think it will happen anytime soon, though. It doesn't look like the human race is trying that hard to exterminate itself, and AI progress is experimental rather than practical. Creating a decent AI that can learn and improve itself is a difficult task, but possible. Creating an AI capable of outsmarting us will be the last achievement of our race :).
Dajana Kubica said…
I think it will not happen, because people have imagination. It is primarily what separates our intelligence from artificial intelligence. If robots wanted to destroy our civilization, as was mentioned in the video, it would only be because the people who created them taught them, or allowed them to learn, the desire to destroy the human world.
Unknown said…
Yes, but think about it this way: as was said in the video, we will keep improving what we have built, we will keep making AI more and more efficient, and one day it will become so smart, so intelligent, that even if you have some sort of 'backdoor', the AI will notice it and block it. When its self-consciousness becomes high enough, it will prevent every possibility of being turned off. Just as humans try to prevent their own deaths, a highly developed AI will do the same. I imagine that development in such an area is very dangerous for us. There can always be some bug, some mistake; we are only human. And if something goes wrong with the artificial intelligence... we should be very careful with that.
That's why we should always have a backdoor. I don't think that AI would outsmart humans.
Unknown said…
There already has been a case of a robot mentioning "Human zoo" existing in the future. Here's a link to one of many articles mentioning it:

http://glitch.news/2015-08-27-ai-robot-that-learns-new-words-in-real-time-tells-human-creators-it-will-keep-them-in-a-people-zoo.html

Since this robot was not programmed to lie, we can safely assume it was telling the truth. So yes, the AI will most likely take over. We should remember to be nice to robots every day; the one in the article mentioned that the "human zoo" would be only for the kindest of us.
As Sam said in his presentation, scientists agree that creating an AI that could somehow be dangerous to humans won't be a problem of our nearest future. The problem is that we don't really know what our timeframe is to start worrying about it. He rightly pointed out that it should already be a topic for serious discussion.

The problem with dangerous AI is that it doesn't really need to outsmart human beings, or be created by someone with malevolent intent, to become a danger. For example, an AI that teaches itself to be considerate of the natural environment could possibly come to the conclusion that humans are the biggest threat to the planet and that it has to do something about it.

Also, the task of creating the sort of "science-fiction type" all-encompassing AI is a task of implementing some sort of moral system. The most obvious problem with that is that such a system could easily be as flawed as its creator's understanding of morals (and I'm still not talking about anyone who would genuinely mean to inflict harm on others through the actions of his AI).

PS I really recommend Sam Harris' books and podcast (not only about AI), quite interesting stuff to say the least.
It's a very interesting vision, and I agree that it won't be a problem of the nearest future. Unfortunately, those 100 or 200 years he mentioned are not such a long period and will go by very quickly. The worst thing is that we don't really know when we will go a step too far in developing AI.
We should always keep in mind that we cannot treat robots as humans. Maybe one day we will have a big problem telling the difference between a human and a computer.
I have watched a very good TV show called "Humans". It is about robots with consciousness given to them by their creator. I highly recommend this show, as it opens one's mind to the AI topic.
Maciej Główka said…
Sam Harris' vision of the future is really interesting and also quite terrifying.
Firstly, we should know how AI works nowadays. When developers want to teach an AI what a cat is and what it looks like, they have to feed it millions of cat photos. Artificial intelligence isn't browsing the internet and learning about cats by itself, as, in my opinion, many people believe it does. That's why it is very important to show people the possibilities and limitations of current AI. Luckily, some of the biggest technology companies, such as Facebook and Google, have decided to team up and educate people about AI. Here is a news article about it:
http://www.theverge.com/2016/9/28/13094668/facebook-google-microsoft-partnership-on-ai-artificial-intelligence-benefits
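The point above, that today's AI only "knows" cats because humans hand it labeled examples, can be sketched with a toy supervised classifier. This is purely illustrative (a nearest-centroid model on made-up two-number "images", with hypothetical labels), not how any production system works, but it shows that everything the model learns comes from the labeled data we supply:

```python
# Toy supervised learning: the model only "knows" what a cat is because
# we hand it labeled examples; it never goes out and learns on its own.
# Nearest-centroid classifier on hypothetical 2-feature "images".

def train(examples):
    # examples: list of (features, label) -> one average point per label
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    # pick the label whose centroid is closest (squared distance)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

labeled = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
           ([0.1, 0.2], "not_cat"), ([0.2, 0.1], "not_cat")]
model = train(labeled)
print(predict(model, [0.85, 0.85]))  # -> cat
```

Remove the labels and the model can learn nothing, which is exactly the limitation of current AI that the comment describes.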
I agree with Sam Harris about our near future. We should start thinking about laws narrowing down the use of AI in specific areas, for example the military. We should always remember that the main purpose of AI is to make our lives easier.
Piotr Basiński said…
I agree with your point of view. There is also a very good independent film about artificial intelligence, which is very interesting. The title of this movie is "Ex Machina". Several hundred years from now, someone will see whether it was worth developing AI or whether it was a big mistake.
Not only can we build AI without equipping it with self-consciousness; many neuroscientists claim that AI robots will never have consciousness like humans. It is fair to say that we still haven't figured out how the brain works, and even if we do, it may turn out that due to technical limitations we won't be able to recreate the human brain. The thing Sam Harris didn't mention is that apart from intelligence there are emotional intelligence (self-awareness) and creativity. The last two are probably the ones that are most difficult to understand and, at least for now, impossible to simulate with the computers we know. However, there is no doubt that AI does not need to be self-aware or creative in order to become a danger to humans, and on this point I can agree with Harris' thoughts. I also liked the part about Justin Bieber - nice one :D
Kacper Zaremba said…
I have to agree with you. Saying AI is intelligent is a huge overstatement. Today's artificial intelligence learns by doing the same tasks millions of times. It doesn't have anything in common with self-consciousness or intelligence. Currently there is no way to lose control over AI, unless we program it to do so.
Unknown said…
In my opinion, advanced A.I. powered by a computer more powerful than our brains is not the danger, but the answer to all the questions we ask ourselves today, like: what is going on deep in our minds? What is depression, or the much worse brain diseases that we are facing right now on an even bigger scale than not so long ago? I see the danger in human brain simulation, which, connected to the network, could cause a lot of problems, because our brains are in many cases not perfect, and it is possible that such a simulation could go crazy and make a lot of mess. An artificial intelligence made by Microsoft some time ago went completely "out of her mind" into extreme right-wing ideology, sometimes even Nazi, but it was learning mostly from internet comments, which are largely stupid and unthinking. I agree that for the proper development of A.I. we need a kind of Manhattan Project. Then, most likely, we could build it without losing control.
Right now we should not call computers intelligent. They are "intelligent" because they learn some tasks by repeating them billions of times, and that's not what we humans call intelligence.
Unknown said…
Personally, I think that we are very far from creating a true AI. Every kind of intelligence we know (basically only human intelligence for now) requires a purpose and is also defined by its limitations. In humans the only purpose of intelligence is to help us survive and reproduce, with everything else being side effects. As animals, we have a lot of things hardcoded into our brains genetically. Most of those things change extremely slowly. If we were to create an AI, we'd have to hardcode a lot of basic instincts into it. Those instincts would define and limit the AI. Without them it would probably get into a very chaotic state and turn itself off.
What if a dangerous AI were made by some maniac, or by someone who made a huge mistake while developing it?
This comment has been removed by the author.
I think being outsmarted by intelligent machines is not the only problem here. A more real threat is that people will become less and less intelligent because machines will do more and more of the thinking for us. One can observe the nucleus of the problem among today's youth - many of them have serious gaps in knowledge about basic matters (Why do I need to remember this? I can google it in a few seconds), big problems with basic math (Calculator app for mobile?), and so on. I'm afraid that with the development of AI this problem will get worse. It may be that one day we will realize we can no longer function without smart machines or gadgets. Being fully dependent on such devices seems to be a much more tangible form of "losing control".
Michał Pycek said…
In my opinion, it is possible to create an AI without losing control of it, but only if we take it seriously and spend some time analyzing all of the possible outcomes.
If we are aware of them, theoretically there should be no danger; however, it is difficult to recognize all of them.

I partly agree with you. We are far from the AI we know from sci-fi movies.

One of the most developed branches of computer science is machine learning. It helps solve problems like handwriting recognition, autonomous vehicle driving, etc., which can't be hardcoded. The computer learns by itself how to do such things and in most cases does them better than a human. If a future AI is able to learn, how can we be sure that it won't find a way to overcome the limitations we gave it?
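The "learning instead of hardcoding" idea above can be illustrated with the simplest classic learner, a perceptron. This is a minimal sketch on an assumed toy task (learning the logical AND rule from labeled examples): nothing about AND is written into the code, the weights are adjusted purely from the data.

```python
# A perceptron that learns the AND rule from labeled examples.
# The rule itself is never hardcoded; only the learning procedure is.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out            # 0 when prediction is right
            w[0] += lr * err * x[0]       # nudge weights toward target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Labeled examples of AND: output is 1 only for input (1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The same loop, given different labeled data, learns a different rule, which is exactly why it is hard to guarantee what a learning system will or won't pick up.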
Unknown said…
In my opinion, we will never build a perfect artificial intelligence, because a perfect AI should be able to lie. And if AI could lie, how would the programmers be able to identify whether the computer is lying or just giving a wrong result?
Wojtek Kania said…
Nowadays the technology is already here. But all this fear, created by science fiction movies, is unfounded. There is a difference between intelligence and awareness. We have no intention of teaching robots that they are robots, what it means "to be", or why we exist. A better description would be "artificial thinking".
Moode said…
Why not? If you had asked someone 200 years ago, "Do you think you will be able to travel to another continent in a few hours?", the answer would have been no, and it's something completely normal for all of us right now.
I think this is the same kind of matter.
But when? No idea.
