
Week 2 [16-22.10.17] Can we teach robots ethics?

Read the article at  http://www.bbc.com/news/magazine-41504285 and discuss it here.


Comments

Unknown said…
I despise the "dilemma" mentioned in the article. Thousands of people are killed or injured every day in accidents that are the result of driver error. The real issue we should focus on is not how the AI should behave in those situations, but how fast we can improve it to be versatile enough to replace drivers completely. In a difficult situation the computer should not waste computing power considering which choice would result in fewer deaths or who is more important - an old lady or a small child. Instead it should do what a human driver would do in a similar situation - start to brake.
Unknown said…
They forgot to add another case, where there is a thief/robber/murderer/rapist, you name it.
"...if we can create weapons which make it less likely that civilians will be killed, we must do so". It's strange: they don't want to kill civilians, but they are willing to kill soldiers. Yet they are all people.
Unknown said…
Yes, but the problem is that it's impossible to brake. "It's out of control. The brakes have failed," as the article puts it.
Tomasz Morawski said…
Of course there's only one solution to the dilemma mentioned in the article - use a random function. AI shouldn't have the right to decide whose life is more important. Also, imagine thousands of lawsuits filed by families dissatisfied with the car's decision; I'm pretty sure automotive companies wouldn't take that risk.
Regarding programming the car to "avoid suffering" or "promote happiness" - you can't really do that. In most cases the happiness of one person is the suffering of another. Again, I believe we shouldn't have the car calculate whether the old lady should be killed because it would cause less sorrow overall (the old lady wouldn't be sad, for obvious reasons, and the small child wouldn't care).
There are two things we need to discuss here: 1) Will robots be able to make deliberate decisions in the future? 2) Will these decisions have anything in common with human thinking?
I think this is possible, but not in the near future. There are plenty of examples to be found on the internet (from smart bots in video games to VR). As the article says, we cannot fully understand why a robot or similar machine made a particular decision. One day we may notice that company N has launched a new startup called a "smart driverless car", and if they make a big announcement, other companies will of course support them. For now we can't even imagine how it will look, but all the giants are already bringing these kinds of technologies into our daily routine.
A philosopher might respond to the question above with another question: do people even know ethics? Some problems remain unresolved, and for many people ethical problems are of exactly that kind. What entitles me to decide about someone's life? Despite public opinion, artificial intelligence is not a science about teaching a computer how to think. AI is a science that makes computers able to make decisions based on a huge set of previous decisions made by humans - good and bad ones. For me it is hard to imagine a test set for moral/ethical decisions. On the other hand, in examples like self-driving cars, AI may be a factor that increases safety. Assuming that in the future all cars are self-driven, roads are somehow free of humans and animals, and the cars' operating systems are well designed, all of that may lead to accident-free roads. To sum up, think about a simple algorithm: when you see a red light you should stop the car, and the same with a yellow light. For a computer that is a really simple instruction. Now think about how many people sometimes ignore those simple rules. I am not saying this proves anything, but there are tons of other situations where people ignore rules because they are in a hurry or hungry.
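For illustration only, here is a minimal sketch of that red/yellow-light rule as a program might encode it; the state names and the fail-safe default are assumptions, not anything from the article:

```python
# A toy rule-based controller: stop on red and yellow, go on green.
# Unlike a hurried human, it applies the rule unconditionally.

def traffic_light_action(light: str) -> str:
    """Return the action a simple rule follower takes for a light state."""
    if light in ("red", "yellow"):
        return "stop"
    if light == "green":
        return "go"
    return "stop"  # unknown state (hypothetical): fail safe by stopping

for light in ["green", "yellow", "red", "flashing"]:
    print(light, "->", traffic_light_action(light))
```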
Unknown said…
In my opinion this article asks unanswerable questions. There can never be a clear solution that suits every situation of this kind. Even if we consider plenty of factors (like social status, influence, work or educational achievements, etc.), we cannot decide whose life is more important. Machines can make logical and rational decisions, which in most cases work well, but there will always be some exception that proves the rule. What can we humans do about it? First of all, we need to teach a machine our laws, so that it obeys them and each decision fits within the legal frame. Then we could use artificial intelligence and cognitive computing to make machines smarter and more intuitive. With all the laws and deep analysis in place, empowering machines to make such quick and difficult decisions for us is the best thing we can do for today.
Unknown said…
It will be difficult to make computers that think like humans in a creative way. Machines work according to rules which you must write. There are a lot of situations in which machines will never replace people. An ambulance is a good example: drivers often must break the rules to get to an accident fast. I read an article about the new Audi A6, which will come out in the future. It will have an autonomous driving system up to 50 kph. It's interesting that when the car causes an accident, the driver will not be responsible. But the article didn't say who will be.
Unknown said…
I don't agree with everything you've mentioned, because people can't predict everything - the human factor. Even if something appears easy and obvious, it may get complicated and turn 180 degrees at any moment. You can't predict everything, and even a million scenarios won't be enough. A better idea would be machine learning, but we can't plan how it will improve or evolve.
Unknown said…
Actually, I think it's not a title or position that matters, but behaviour, actions and intentions. Kill a soldier who wants to save a civilian from another civilian? What if only civilians are involved? Kill them all, save them all, or play Russian roulette?
Unknown said…
We are in the era of the first autonomous cars rolling out. They follow the law, they get you from A to B without your actually having to think about it, and they are relatively safe. Up to this point everything is clear and more or less acceptable for a human being. The complicated part is unforeseen situations, like the following.
An autonomous car travels at a lethal speed. A child runs onto the street out of nowhere, only a few meters from the car. The kid's parents are on the sidewalk at one side of the road. On the other side there are other groups of people. There is not enough space for a maneuver that doesn't risk any lives. The odds of not crashing into anyone, while still minimal, are greatest while risking the parents' lives, where a failure would mean making the child an orphan. Otherwise, the car can run over the kid just by following its original route. The third option is to run over a bunch of random people, so that the family does not have to be incomplete. Or the car can hit both the kid and the parents so that they share the same fate, but without risking other innocent souls.

The questions are: Is an AI unit ever going to be capable of deciding about such things? Is it going to act predictably? How is it going to calculate whose lives to put at stake? Will it be able to come up with a solution a human would not? And even if it's not very probable - what if the AI is really advanced and starts to enjoy running people over and decides to go on a killing spree?

To sum it up, as Zygmunt Zardecki wrote earlier, AI turns out to be quite unpredictable in some cases (robots inventing their own language), which also means that its ethics might be entirely different from ours, especially since ethics can differ even between two humans.

Another interesting thought to read up on is the Paperclip Maximizer problem, which was presented to illustrate the threats of AI - that a robot might be smart enough to learn how to fulfil its original purpose in ways humans would not want it to.
Unknown said…
I do agree with the argument of Prof John Tasioulas. Sometimes we must make a decision despite all the negative aspects of a problem, taking additional circumstances into consideration. Generally, this concerns judges, policemen and other people in authority. The value of human relationships is really huge. Moreover, are the governments of high-technology countries going to throw drivers out of work? Yes, the article mainly concerns countries such as the UK, the USA, the Netherlands, Sweden, etc.; nevertheless, thousands of workers from east-central Europe will lose their jobs because of the robotisation of the economy. I mean long-distance lorry drivers, bus drivers and taxi drivers. By the way, Volvo has produced a driverless lorry, and I believe it's going to be common in Western Europe within a decade. Perhaps my alarm isn't as serious as I treat it, but try to persuade me.
Foodocado said…
None of us should decide whether someone is sacrificed or rescued. These days autonomous cars are extremely intelligent and can predict incoming danger, so they can decrease their speed or even pull over before something bad happens. Of course there will be scenarios where something unpredictable takes place. What then? It's hard to teach an AI to think like a human, but on the other hand we should remember that a computer thinks hundreds of times faster than we do. It can consider all possible scenarios, taking into account thousands of factors, and choose the best one.
Unknown said…
Teaching robots how to be ethical is impossible, because there is no strict definition of "being ethical". If we are talking about difficult dilemmas like those mentioned in the article, we have to take some facts into consideration. We shouldn't decide whether a robot should sacrifice one life to save five people. Because computers nowadays are extremely fast, we should focus on algorithms which, in connection with many different sensors, would allow robots to try to find the least harmful solution in any situation. It may be hard, but I think it is necessary if we want to be sure that cars are really autonomous. Teaching a robot that killing one person is better than killing five doesn't, for me, mean that the robot is autonomous.
There are some lessons we can give to robots, like "do not kill", "do not steal", etc. But without self-consciousness, robots are instruments in our hands, without any possibility of deciding for themselves how to react to the current problem. Robots will do everything we command them to do without even thinking about it - the algorithm will "decide" how to do it, but that's all. Ethics can be understood as a set of rules about what should be done in a certain situation, but can we say that a robot which follows the rules is MORAL? Or just finely programmed? In my opinion, without self-consciousness robots are neither good nor bad, neither ethical nor immoral. A robot is a robot and a human is a human. We can program a robot to follow rules, but this is just a matter of algorithms, and I believe that being moral is about having a choice and choosing the right thing - not being "forced" to do right.
Yevhen Shymko said…
Throughout the article the author refers to the "right ethics", implying that there is such a thing. First of all, I don't believe it even exists, since the concept of ethics differs from culture to culture just as it differs from person to person. In my opinion the right solution would be to test drivers on these moral dilemmas and program the AI accordingly.
Knowing how AI currently works and learns from a technical point of view, the biggest challenge would be to give neural networks the ability to explain the criteria on which they made their choices. Without that ability, humans won't be able to accept the outcome, whatever it is.
The first thing I was thinking about is how many fewer accidents there would be if robots drove cars. If there are many fewer, that's one point in their favour, and I personally think that self-driving cars will be better for everyone. The second thing is ethics. For me it's fairly simple: you cannot get this right in every case. The best solution is just to do as much as you can to save as many lives as you can. As for the trolley problem, we don't know whether the robot would be able to stop the train before hitting anyone. Maybe yes, and then this would be great.
Unknown said…
First we need to ask ourselves how to define which option is a good one. And what does "good" mean? How do we define whose life is more valuable? It is easy: the person who has more money, more power, is more popular, is more useful for society. This kind of division has been present throughout humanity's history. Maybe you can argue with it, but the truth is that we are selfish creatures pushed by instinct.
AI won't do any better because a) it is developed by humans, and b) even if it can "think", how will it define which person should live? By some data comparison, of course, which amounts to: whoever scores better survives.
Maybe the most "fair" option would be the flip of a coin - a 50/50 chance? There are a lot of people, and the Earth is becoming overpopulated, so why should "useless" people survive? On the one hand, you never know whether such a person might do something meaningful for humanity; on the other hand, we can use statistics, as in science, to predict :) Playing God isn't an easy task, so how can something that can't think make such a decision?
Vladlen Kyselov said…
I think that the topic of the article could be discussed for a long time, but I would like to mention several of my thoughts about it and about AI. At present, technologies are developing at such a speed that it is genuinely difficult to predict what progress we will achieve in 10 years. Artificial intelligence is a very popular and most actively developing direction in information technology. Personally, I can hardly imagine how it is possible today to program a machine for independent thinking, making its own decisions that correspond to a particular person's character. Of course, there is such a technology as neural networks, but I want to note that using a neural network does not give the machine real artificial intelligence, because with this algorithm the machine simply selects the most appropriate behavior in a given situation from a huge database. Still, I would gladly use smart technologies in the near future, and I very much hope that progress in this area will not be long in coming.
In my opinion we should not attempt to humanize robots at all. What's the use of a robot which has the same qualities as the human using it? It's just like making a robot use "human-like" hands instead of giving it better grabbing capabilities than ours.
Filip Sawicki said…
This kind of problem can be reduced to an optimization problem, where the goal is to minimize suffering and casualties. This is a nontrivial task; even the most complex algorithms do not guarantee finding the minimum. While there is a possibility that AI could find a solution that approximates the optimum well, it would need a lot of time for the calculation. In most of these situations time is critical, so I guess a simple random function would be the most ethical thing to do.
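To make that concrete, here is a rough sketch of the "optimize, but fall back to random under time pressure" idea; the maneuver names, cost estimates, and time budget are all invented for illustration:

```python
import random
import time

def expected_casualties(maneuver: str) -> float:
    # Stand-in for a real risk model; numbers are hypothetical.
    estimates = {"brake": 0.4, "swerve_left": 1.2, "swerve_right": 0.9}
    return estimates[maneuver]

def choose_maneuver(options, deadline_s=0.05):
    """Pick the option minimizing estimated casualties; if the time
    budget runs out before all options are scored, fall back to a
    random choice, as the comment above suggests."""
    start = time.monotonic()
    best, best_cost = None, float("inf")
    for option in options:
        if time.monotonic() - start > deadline_s:
            return random.choice(options)  # out of time: random fallback
        cost = expected_casualties(option)
        if cost < best_cost:
            best, best_cost = option, cost
    return best

print(choose_maneuver(["brake", "swerve_left", "swerve_right"]))
```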
Unknown said…
The article is really cool. It shows problems with artificial intelligence from a perspective I hadn't thought about at all. For example, in the situation with the car, a human will react without any deliberation - it will be a decision based on reflexes - but a robot could analyze the situation and make a considered decision. I agree that we would have fewer car accidents if cars were driven by artificial intelligence, but there are some risks here. After reading this article I remembered the latest 'Fast and Furious' movie, in which hackers hack all the cars with computers inside and do some terrible things.
Sylwia Pechcin said…
Thinking robots remind me of every sci-fi movie I've watched about them. None of them had a happy ending. So in my opinion, continuing to teach machines to think and act like humans is not a good idea, and teaching them to think in an ethical way is even worse. Is it even possible? For me a machine is always going to be a machine and a human is always a human, and it's not right to mix those two things up.
Unknown said…
"Ever since the first computers, there have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols." (c) Dr. Alfred Lanning, "I, Robot" (2004). The more complicated code is, the more bugs, or according to this quote, ghosts it has. Why, for example, robot decided that life of 11-year-old girl is less impotrant than life of the "I, Robot" protagonist, a regular man with regular life? I know that it is so mostly because of the screenplay, but still, think about it, nobody can say that such events are impossible. Give a smart machine one of these "trolley problem" and you won't be able to predict the result. And how can we predict it, if even we don't know the ethically right answer?

Speaking of weapons, can you imagine how easily any machine can be fooled? Consider this situation: you are at war with the cruelest and most unscrupulous warlord of all time, whose only purpose is to destroy and conquer the world at all costs. You are trying to capture a city under his control, which is full of people, and only 15% of them are soldiers, according to your scouts. So you decide to send in a squad of these machines that would kill only enemy soldiers, and civilians would be safe. The day of battle comes, your machines are in position, and the order to attack is given. They head toward the enemy's fortifications through no man's land. Suddenly, you start to receive multiple signals of the enemy's presence. After a moment, your forces open fire, destroying everyone in their way. But numerical superiority is on the enemy's side, and your assault fails. You can't understand what happened. Were the scouts wrong? Did the machines malfunction? You observe the battlefield with binoculars. The realization is horrific. You see corpses of enemy soldiers still holding their guns. Oh, wait! They aren't holding them. The guns are simply duct-taped to their hands! Then the alarm starts sounding about the enemy's counter-attack. The scouts' reports were right - 15% of the city's former population. The warlord fooled your machines: while they were distracted by civilians made to look like soldiers, the enemy's true forces took their chance. Can you tell who is truly responsible for the deaths of thousands of innocent people?
Maciej Główka said…
We are living in very interesting times. As mentioned in the article, I do believe that in 10-15 years seeing an autonomous car on the road will be as normal for us as seeing a regular car.

The problem mentioned in the article is very interesting to resolve. I think scientists and engineers are right now more focused on creating the first versions of autonomous cars than on resolving such ethical problems. What should the program do in a "trolley" situation? I'm sure it should protect the driver's health in the first place. After that, I think it should minimize casualties.

I'm curious whether in 10-20 years we will see court cases against car companies whose autonomous car killed, for example, 3 kids rather than 1 man. I think it might be a really interesting subject.
Unknown said…
It is a really complex issue, and it is really hard to say what should be done about the philosophical problems described in this article. I don't think robots can learn ethics. We can program them and set how they should react in a specific situation. I think that these days we are limited not by technology but by human nature. I have to admit that I would prefer to buy a car programmed to protect me and my family in any accident. In such a situation we would not want to entrust our lives to an algorithm that decides whether we survive or not. I think that ethics and such decisions are the next huge barrier in human development, and it will take more than a decade to make a step forward. I feel it would take less time to develop a 100% safe car which can't hurt anyone than to force people to use cars which can sacrifice their lives in case of a road accident.

In my opinion, we cannot teach robots ethics. Attempting to teach AI habits or how to behave well can only end in failure. At every step, every person would do their own thing, while the robots would repeat only the learned, "correct" operation. Sometimes you have to choose a side that is not entirely right, but you have to make a decision. The described tram problem is equally difficult to solve, because we do not know how the AI would react. Would it be able to stop and avoid an accident? How many fewer accidents would there be if robots were driving cars?

Unknown said…
There are as many answers to ethical problems as there are people on the Earth. Each of us may have a different morality, and we will never agree on which answer is correct. If I were the one responsible for teaching robots ethics, I wouldn't teach them. If an ethical problem arose in a "robot's mind" and there were still time to decide what to do, then a human should give the robot the final answer. Otherwise the robot should choose one of the options at random.
Unknown said…
It's kind of a hard topic to talk about.
First of all - Tesla. For me, Elon Musk is a genius and the cars made by his company are great. They are able to prevent crashes hundreds of times faster than people. It's incredible, isn't it? But it's not that simple. What about a situation where the computer has to decide what the best choice is? I mean, what will be the safest way to avoid an accident? Going off the road? Hitting the car driving next to us? Or maybe hitting a cyclist instead of a child on a pedestrian crossing? These decisions are hard even for us to make. We don't have time to think in these kinds of situations. We have to make a move, the right move. And here comes the big thing - machine learning. We can predict some kinds of situations and do research to find out how to most effectively teach robots to make the right choices. But still, it requires thousands of hours of tests. I've heard about a situation where an autonomous car crashed because the vehicle in front of it had a painting of the sky, a road, etc. on its back; the computer thought it was a normal road.
To sum up, our future has enormous possibilities. Will they be used in the proper way? We will see.
Unknown said…
For me, the problem with autonomous cars is not about how they "think". As many people have already stated in the comments, if human beings can't solve ethical or moral dilemmas, how can we expect robots to do it correctly?

I think the real question is: how can we prevent autonomous cars from encountering such tough situations in the first place? Why should we even consider such dilemmas - isn't it better to design means of transportation in a way that assures safety?

Right now, autonomous cars are being designed to use the regular roads and streets humans use every day. In that case, there is a constant risk of hitting somebody. Even if the car has outstanding AI capabilities, it won't be able to respond correctly to all the random events it encounters.

That's why I think we should separate autonomous transportation from regular traffic entirely. Actually, railway is a good example of a well-designed mode of transportation. Trains have separate tracks which can't be used by any other type of machine. They can be easily controlled, and travel time can be estimated with no problem. As long as autonomous cars share roads with people, there is the possibility of an accident, probably caused by human mistake.

To sum up, I believe that the real deal is to find a way of improving the infrastructure autonomous cars are going to use, not their artificial intelligence.
Regarding this article and the answer to the main question in the title, I chose one part of the text which is a good summary: "One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize." Robots behave consistently - I will compare it to learning. If you teach robots nice, ethical behaviour day after day, they will consistently learn it and will operate in the same way. A man is only a man; everyone makes mistakes. If you program a robot to avoid the mistakes you yourself are making, the robot will know: "UMM... IT IS NOT OKAY. I WILL NOT DO IT."
I hadn't heard about carebots before, but I like this initiative. Caring for sick and elderly people with personalized robots is a great idea. In conclusion, I quote the last sentence, which I agree with one hundred percent: "The robots turn out to be better at some ethical decisions than we are".
Unknown said…
In my opinion a computer will never be smarter than a human. Robots and artificial intelligence facilitate our daily duties, but people have to think and make the decisions. Will autonomous cars be good? I don't know. I think there will be many mistakes in the beginning, and lots of fixes, and it will take a long time before they are perfect.
Marcin Mróz said…
The key argument in this discussion is, in my opinion, the question asked by Amy: if people taking a driving-licence test don't have to pass an ethics test, why would self-driving cars have to? People are different; everyone has their own point of view on ethics. In the situation described in the article, everyone could act differently. But I believe that in such accidents everything happens so fast that there is no time to think about what you should do to be ethically correct - you act instinctively, or just randomly. What if we could control the situation? I don't know. There is no set of universal ethical rules that would apply to every situation, therefore we can't teach robots how to act in a given situation. But what we can do is make self-driving cars as safe as possible, so that they wouldn't even have to make such decisions. And this is what I would focus on during the development of such cars.
Indeed, this is a really hard problem to resolve. I think there's no right decision in the situations mentioned in the article. In these cases we decide between a bad and a very bad result, and I guess the computer will always choose the lesser evil. Thankfully I'm not the person responsible for that decision because, as I said before, it's a tough problem. And no, we cannot teach robots ethics, not now. We can only teach them what to do in specific situations.
Unknown said…
To be honest, the questions posed in the article are difficult even for humans. There is no "right" way to solve those problems, and those questions are definitely not the ones that should be asked today. The Google Assistant on my phone struggles even with queries like "Download the last season of GOT from a torrent tracker" or "What can I wear today?"; I'm not even trying to ask questions like "To be or not to be?".

I think the question that could bother us in the future is "How do we prevent an AI takeover?". We can't estimate the power of Artificial Intelligence, so maybe one day we will have to limit its resources.
Unknown said…
In my opinion robots should stay robots. There is no need to humanize these creations, as it is not always the best choice. And I believe that if we don't control robotic advancement, we might end up with thinking machines, and that in itself might cause more problems. That's why I'm against teaching robots too much. Besides, such robots would be like children: we would have to teach them everything and more. And if something went wrong in that process, there would be consequences...
Vyvyan said…
The other problem is that different people have different senses of ethics. For one person, stealing a chocolate bar is bad because, well... stealing, right? But if we assume that, for example, a mother is stealing a chocolate bar for her starving child because she can't afford it, the situation is different and the act of stealing can be forgiven.
We don't see the world in black and white; we see it all in grey. Robots will not be able to see the world in grey. They will be taught only 'evil' and 'good', black and white. No grey, no compromises. They will not be able to analyze a situation and draw conclusions.
Unknown said…
When I was reading about driverless cars, I was wondering how a driver would feel among dozens of driverless cars. I think it would feel like interacting with machines: the machine would get input from the human, and the human would get a response from the machine. It is like having another level of human-computer interaction.
As for ethics, I think it is possible for a machine to learn ethics from humans. It can learn by taking input and comparing the output it produces with the human decision.
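As a toy illustration of that learn-by-comparison loop, here is a perceptron-style sketch; the features, labels, and decision encoding are made up, not from the article:

```python
# The machine proposes a decision, compares it with the human's decision,
# and nudges its weights whenever they disagree.

def predict(weights, features):
    # Decide 1 ("brake") if the weighted sum is positive, else 0 ("swerve").
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def learn_from_human(examples, epochs=10, lr=0.1):
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for features, human_decision in examples:
            error = human_decision - predict(weights, features)
            if error != 0:  # output differs from the human decision
                weights = [w + lr * error * x
                           for w, x in zip(weights, features)]
    return weights

# Hypothetical features: [pedestrian_near, speed_high, obstacle_ahead]
examples = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1), ([0, 0, 1], 1)]
weights = learn_from_human(examples)
print(predict(weights, [1, 0, 0]))
```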
While it might be possible to create silicon-based life, self-driving cars are not and probably won't be as conscious as the other beings we are so familiar with. The choices they make will be the result of a program that was created and approved by someone, and as long as that is the case, we are the ones responsible for the actions they take. If we decide to let autonomous cars onto our roads, we have to teach and learn how to live with them, and let natural selection do its job on those who choose not to.

When it comes to which choice to take, that's a much harder subject. People really like to put value on things, and while it makes a lot of things easier, it doesn't mean it makes them true. Look how easy it is: let's say a life is equal to one, so five lives must be greater than one, and kaboom... just like that, a loving father of four lost his life because some children decided to fetch a ball that somehow found its way onto the road. There are no good choices here, so maybe the best option is to get rid of cars in urban areas for good.
I am inclined to believe that we can teach robots basic ethics. First of all because, as the article says, self-driving cars will soon be a part of our daily life, and we have to establish basic rules and ethics for how to behave in uncertain or difficult situations. My personal opinion is that, in the example of cars and the "trolley problem", the system will choose to kill the one person. But again, it's a hard question. What if there are 2 elderly people and 1 young person? It's a tricky one. I think the "robot" would have to choose to save the one young life. I don't believe in robotic tolerance, but I think that the "value" of a human life can be calculated by this so-called artificial intelligence, and that's probably how it will be done.
Jakub Lisicki said…
It's a pretty tough decision to allow machines to be independent, but since our minds are not capable of fully understanding them, we don't seem to have much of a choice. AI constantly evolves, and does it so frequently that it's not possible to say how it will behave five minutes from now. Even if we finally believe we are beginning to understand the machine in its current state, it's constantly changing, constantly reconfiguring. Even the experts and the greatest minds sometimes don't know exactly what is going on.

For now, it's basically a system that we build on trust, since we've expanded it way beyond the limits it was meant to operate within.

My personal opinion is that we should let it evolve, observe it, and maybe even learn from it in the future (from what I've heard, we are already learning some minor things from AI, like design patterns, etc.).
Basic ethics are important and should indeed be programmed, but AI should have no way to decide, for example, whose life is more important. Take the situation mentioned in the article - the kids and the motorbike. In a real-life situation you'd go with your first thought, and very often one human wouldn't act the same as another, simply because there is no time to think about what is better and what you should do; you act fast, without thinking, to avoid a tragedy.

Same here. If a robot can minimize the damage and choose a better path based on advanced maths and foreseeing the crash, then by all means it should do so. But the decision of whether person A or person B should die should still, if unavoidable, come down to first thought or a random function, to avoid any possible problems arising from choosing one or the other, from lawsuits to riots.
Bartosz Łyżwa said…
At first I thought: yes, it is possible to program everything, and I saw no problems. But my second thought was: what do I mean by teaching ethics? I had never thought about the basic meaning of this word. In my opinion it's something that tells us what we should and shouldn't do in a given situation. I'm a programmer, and I suppose it's possible to teach a machine everything we ourselves know how to do. There are better and worse solutions, but it should work. Ethics, however, is something whose workings I don't understand, so it's impossible to tell a computer how it should work. We only know how to behave in situations that we've already experienced. Summarizing, I don't think we can teach machines what ethics is.
I agree with Amy Rimmer - we can't predict what a human would do in such a situation, so we can't program a robot for how to act when a given situation appears. Robots do what we tell them to do, and they learn whatever we program them to learn. In my opinion, by eliminating factors like driving under the influence, driving while tired, or being distracted by, for example, screaming kids in the backseat, self-driving cars seem like a very safe means of transportation, and we should give them a chance before discussing questions that even humans can't answer.
Unknown said…
I agree with this article: robot ethics can be problematic, and solving it may be more complex than the author assumed. I think the system should be updated constantly as a matter of principle, so that it is capable of learning from mistakes - analyzed by a court, for example. Also, at first the algorithm should have a list of rules in order to maintain a high level of safety.
Artificial intelligence is still at the development stage, and in my opinion ethical problems should be solved by law, globally.
Jakub Lisicki said…
My thoughts on the dilemma all lead to the same simple conclusion - we cannot stop the research. It is possible to teach robots ethics the same way as humans: by learning from experience. By putting robots in as many situations as possible, we can work out the best pattern for how they should act. This will create a moral code upgraded with independent decision-making regarding moral dilemmas. AI can help us erase human error in many aspects of life, including errors that are fatal. That is precisely why the idea needs to be pushed further, no matter what mistakes there are along the way. The sooner we put driverless cars or autonomous weapons on the market and normalize them, the sooner we can see what needs to be developed. There's a probability that the robots will turn out to be better at some ethical decisions than we are.
For me this is a very complex subject, and it touches the one thing that differentiates us from robots. We have emotions, compassion and anger - those cannot be passed to robots, because then you could not control them, just as you cannot control a person with those attributes. I think Microsoft recently tried to release a chatting, thinking robot that started publishing offensive comments after getting too much data from the Internet :)

https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot

We are people; we cannot always do everything 100% ethically - as anyone who has ever lied about a speeding ticket knows. Feelings - I would like to keep this one thing that separates us from machines separate for as long as possible... Also, watch the new Blade Runner if you want to see what happens in the new release.
The problem lies elsewhere: who will be responsible for a decision made by a machine? Machines can be taught to recognize objects and make decisions faster than people (with no emotions involved, of course). Also, during an accident a human often won't make a decision about what is more ethical; our instinct automatically protects us. Human reaction is slower than a robot's (there are many videos on YouTube where a Tesla predicts an emergency before the human does and saves the passengers from an accident).
Unknown said…
Well, the idea of such technical progress sounds amazing, of course. Most of us have watched sci-fi films dreaming about the day when we will own a flying car and a smart human-like robot. But the truth is, we didn't think about the problems that would accompany such novelties. In my opinion, this world is not ready for robot ethics, because we can hardly deal with human ethics. And a robot can't solve a human problem, because it is actually made by humans. And there is the question: who is going to be good enough to program the robot's ethics, and who will be responsible for that?
I think that ethics is a subjective product of intelligence and the ability to feel emotions, to analyze a situation in a human way. And I wouldn't trust a robot that was created by someone with their own opinion about right and wrong.
About that car situation: if the car was controlled by the man, he could predict that if children are playing near the road, he must control his speed to be able to stop in any case. A human is far more capable than any machine, if only he would take responsibility for his actions and always stay in control. If you are tired and sleepy - don't drive, and so on.
And really, we keep striving to make our life simpler: not to make decisions or choices, not to be responsible, not to think, not to move. And that is a bit scary. What will we be doing in 20-30 years...
Unknown said…
The problem mentioned in the article is hard to solve. I don't think we can teach robots ethics. Each of us is a different person; that's why our ethics differ. Robots will do what we teach them, without any feelings. If we teach them some behaviour, they will do exactly what they learned from us. We can teach them, but we can't program feelings, so they won't be able to be decisive in some situations. In my opinion we should focus on creating robots, for example, to help older people in daily activities. In the future, robots will gradually replace humans in many aspects of life. This is inevitable, but will it still be the "real" world?
Unknown said…
I doubt we’ll ever solve the trolley problem completely.
Humans can't agree on one solution: some would choose the lesser suffering - killing fewer people - and others would have the car continue on its predefined path, in this case killing more. So if we humans cannot choose one solution, why would cars? I recommend everyone take the test linked here: http://moralmachine.mit.edu It's a great example of the lack of agreement between people faced with this problem.
I'm all for autonomous cars, but maybe the best solution to the trolley problem would be improving or completely changing the way we build roads and other vehicle infrastructure.
Unknown said…
After reading this article, it seems to me that, on the one hand, people are more confused than computers. Reading all the examples of situations related to accidents, I think the probability of situations where one would have to rely on the ethical part of the computer is negligible.
On the other hand, the more human tasks we entrust to computers, the less humans will be needed.
And who knows whether computers will eventually subdue the world and use a small part of the human factor to achieve their goals.
I think it is worth considering how ethical it is to give up human labor for the work of machines. In our constant pursuit, we forget that the world is ours to use. Moreover, people have a huge problem with an ethical approach to life - and so it will be with machines that don't have feelings or emotions. I'm not against development, but it's getting dull.
Zygmunt Z said…
Looking at the trolley problems, we may think that the AI will make the same decisions as the person or people who tuned it, based on what they would do. People sometimes say that "people learn all their lives", and I think that even when you have gained complete professional experience, there may be situations where your emotions will surprise you. It doesn't matter whether you are emotionally mature or not; there may be a situation you don't know how to handle. It might be your daughter's under-age pregnancy, or a child on the road when you are overtaking and the only options are hitting the kid or hitting the car coming the opposite way. As long as we cannot fully deal with difficult and sudden situations ourselves, we shouldn't teach AI to make such decisions.
Unknown said…
As a driver with 4 years of experience, I can say that in any kind of stressful situation there is no right choice. There is only a chance that nobody will be killed or injured, and an even smaller chance that your car will be OK. Having driven for so much time, over so many kilometers all over the world, I've seen a lot of idiots on the road, and they were not driving Tesla cars with autopilot. So many people are killed and injured by drunk drivers, or by drivers distracted by a phone call, a nice song on the radio, or just a good conversation with a passenger. And in these accidents the car was innocent. Even the hundreds of computers that merely assist the driver cannot fix the human factor. This kind of quiz is just as silly as "there are two houses on fire; in each of them is a person you love - who will you save?" The human factor may help in this kind of situation, or it may make it worse. So, what shall we do? I think the autopilot in cars is a good invention, but only if you can take control of the situation by, for example, pressing a button on the wheel.
Unknown said…
In my opinion we can teach robots ethics to some extent, and I think there are lots of areas where machine learning techniques can be applied to enable machines to distinguish good from bad. Children go through the same process when their parents teach them to tell good and bad behavior apart using simple examples. As Mr. and Mrs. Anderson stated, some basic "ethics" principles are already implemented in AI algorithms, so robots can behave properly in difficult situations. I am aware that there are scenarios where simple reasoning does not suffice and those algorithms can fail (the example with hitting two children or the motorbike driver), but those "difficult" cases, compared to the huge number of scenarios where machines really do help people or even save their lives, should not be an obstacle to developing the next generations of robots and smart devices.
Wojtek Kania said…
This is a very hard topic. I think that people themselves have problems with ethics: unethical behavior is common at work, at school, etc. Teaching ethics to people may be less of a problem than teaching ethics to robots. Robots will always think like calculators. In my opinion only people can be ethical. As Władysław Bartoszewski said: "It is good to be honest, though it does not always pay off. It pays to be dishonest, but it is not worth it." Only humans can understand this.
Unknown said…
Can we program a machine to be ethical? What is ethics in the first place? That's a tough issue. Even on the given trolley problem, we humans are divided. If a programmer writes code that takes one of two approaches to the trolley problem, there will be people who consider the other approach ethical - so did the robot do the ethical thing? Is the programmer to blame? I don't think we can teach robots ethics, not until we learn what exactly conscience is and how to program it first. And that's probably even harder. We can always try, though. Would a robot try to finish work after a deadline? Its code would tell it not to, but we humans can always try ;)
Alicja said…
I tend to agree with the comment made by Dr Rimmer. We do not need to incorporate ethics principles into self-driving cars to benefit from them.

Moreover, I believe there are more burning challenges for ethics which we do need to tackle, for example in genetics and bioengineering. They are still unsolved, years after we started controversial experiments. If we do not know how to solve them, we should not optimistically assume that machine learning algorithms (which are essentially just mathematical models) will. Machine learning algorithms learn from inputs, and they often produce "black box" solutions. There are problems with both of these aspects. The first is simplistically but well demonstrated by Microsoft's experiment with the AI chatbot Tay, which turned racist and misogynist in less than a day. The second is that "black box" solutions do not seem to be ethical themselves. Does a judge's decision that cannot be explained seem fair?
Wojtek Protasik said…
„This puzzle has been around for decades, and still divides philosophers.”
This article raises very tough questions; however, these questions have been asked many times in many different contexts - they didn't just appear now. Having to choose the lesser evil in the trolley problem is a situation I would never want to be in, but we still have trolleys and we are happy to use them, right? By trolleys I mean all the technology and machinery that is available nowadays and could lead to a harmful situation. Going back to the trolley problem: we have managed to minimize the risk by establishing strict rules to follow and by teaching people so they become specialists in their fields. This has helped a lot to bring down the number of people killed by machines every year.
“I don't have to answer that question to pass a driving test, and I'm allowed to drive”
I believe that this statement, which is a fact, partially solves the problem of letting cars drive autonomously. We use machines that cause deaths because we benefit from them greatly.
Unknown said…
I think yes, because we can do anything. We can teach robots how to behave in different situations, and we can teach them only the good things. Of course, technology can play a "prank" on us and learn some bad behaviour, like killing or making people suffer. It's a high risk. In my opinion we should proceed in little steps and keep strong control over these robots. Robots on the internet that answer questions are a good thing, saving time and money for clients and vendors. Robots are a good idea for the future, but we have to prepare our society for that and do it very safely.
