Comments
"...if we can create weapons which make it less likely that civilians will be killed, we must do so". It's strange. They don't want to kill civilians, but want to kill soldiers. But they are all people.
Regarding the programming of "avoid suffering" or "promote happiness": you can't really do that. In most cases the happiness of one person is the suffering of another. Again, I believe we shouldn't have the car calculate whether the old lady should be the one killed because it would cause less sorrow overall (the old lady wouldn't be sad, for obvious reasons, and a small child wouldn't care).
To see this, you need to consider that it is possible, just not in the near future. There are plenty of examples you can find on the internet, from smart bots in video games to VR. As the statement said, we cannot fully understand why a robot or such a machine made a particular decision. One day we may notice that company N has launched a new startup called a "smart driverless car", and if they made a big announcement, other companies would of course follow. We can't even imagine yet how it will look, but all the giants are already bringing these kinds of technologies into our daily routine.
An autonomous car travels at a lethal speed. A child runs onto the street out of nowhere, only a few meters from the car. The kid's parents are on the sidewalk on one side of the road. On the other side there are other groups of people. There is not enough space to make a maneuver without risking any lives. The odds of not crashing into anyone, while still minimal, are greatest if the car risks the parents' lives, where a failure would mean making the child an orphan. Otherwise, the car can run over the kid by simply following its original route. The third option is to run over a group of random people, so that the family does not have to be left incomplete. Or the car can hit both the kid and the parents so that they share the same fate, but without risking other innocent souls.
The questions are: Is an AI unit ever going to be capable of deciding about such things? Is it going to act predictably? How is it going to calculate whose lives to put at stake? Will it be able to come up with a solution a human would not? And, even if it is not very probable, what if the AI is really advanced and starts to enjoy running people over and decides to go on a killing spree?
To sum it up, as Zygmunt Zardecki previously wrote, AI turns out to be quite unpredictable in some cases (robots inventing their own language), which also means that its ethics might be entirely different from ours, especially since ethics can differ even between two humans.
Another interesting thought to read up on is the Paperclip Maximizer problem, which was presented to illustrate the threats of AI - that a robot might be smart enough to learn how to fulfil its original purpose in ways humans would not want it to.
Knowing how AI currently works and learns from a technical point of view, the biggest challenge would be to give neural networks the ability to explain the criteria on which they made their choices. Without that ability, humans wouldn't be able to accept whatever the outcome is.
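One rough workaround is to probe a trained model from the outside: nudge each input and see how much the output moves. Below is a minimal sketch of that idea in Python; the scoring function and all feature names are made up purely for illustration, not taken from any real system.

```python
# A toy "black box" that scores how risky a driving situation is.
# The rule inside is hidden from the caller, much like a trained network would be.
def risk_score(features):
    speed, distance_to_pedestrian, visibility = features
    return 0.6 * speed - 0.3 * distance_to_pedestrian - 0.1 * visibility

def explain(model, features, delta=1.0):
    """Rank input features by how much nudging each one changes the output."""
    names = ["speed", "distance_to_pedestrian", "visibility"]
    base = model(features)
    importance = {}
    for i, name in enumerate(names):
        perturbed = list(features)
        perturbed[i] += delta
        importance[name] = abs(model(perturbed) - base)
    return sorted(importance.items(), key=lambda kv: kv[1], reverse=True)

situation = [50.0, 12.0, 8.0]  # hypothetical sensor readings
print("risk:", risk_score(situation))
print("most influential inputs:", explain(risk_score, situation))
```

Even a crude ranking like this gives a human something to accept or reject, which is the point the comment above makes.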
AI won't do any better because a) it is developed by humans, and b) even if it could "think", how would it define which person should live? Of course by some data comparison, which amounts to deciding who is "better" and therefore survives.
Maybe most "fair" will be flip of a coin? 50/50 chance. There are a lot of people. Earth is becoming overpopulated, so why should survive people who are "useless"? In a second hand you never knows if this person won't do something meaningfull for humanity. On the other hand we can use statistics like in science to predict:) Playing a God isn't an easy task so how something what can't think make a such decision?
Speaking of weapons, can you imagine how easily any machine can be fooled? Consider the situation: you are at war with the cruelest and most unscrupulous warlord of all time, who has only one purpose: to destroy and conquer the world at any cost. You are trying to capture a city under his control, which is full of people, and only 15% of them are soldiers, according to your scouting. So you decide to send a squad of these machines that would kill only enemy soldiers, and civilians would be safe. The day of battle comes, your machines are in position, and the order "Attack" is given. They head to the enemy's fortifications through no man's land. Suddenly, you start receiving multiple signals of the enemy's presence. After a moment, your forces open fire, destroying everyone in their way. But numerical superiority is on the enemy's side, and your assault fails. You can't understand what happened. Were the scouts wrong? Did the machines malfunction? You observe the battlefield with binoculars. The realization is horrific. You see corpses of enemy soldiers still holding their guns. Oh, wait! They aren't holding them. The guns are simply duct-taped to their hands! Then the alarms start sounding about the enemy's counter-attack. The scouts' reports are the same: 15% of the city's former population. The warlord fooled your machines: while they were distracted by civilians dressed up to look like soldiers, the enemy's true forces took their chance. Can you tell who is truly responsible for the deaths of thousands of innocent people?
The problem mentioned in the article is very interesting to resolve. I think scientists and engineers are currently more focused on creating the first versions of autonomous cars than on resolving such ethical problems. What should the program do in a "trolley" situation? I'm sure it should protect the driver's health in the first place. After that, I think it should minimize casualties.
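That priority order is easy to state as code. Here is a minimal sketch, assuming each candidate maneuver comes with made-up estimates of risk to the driver and expected casualties; the threshold, names, and numbers are purely illustrative.

```python
# Hypothetical candidate maneuvers with invented outcome estimates.
candidates = [
    {"name": "stay_in_lane", "driver_risk": 0.05, "expected_casualties": 2.0},
    {"name": "swerve_left",  "driver_risk": 0.10, "expected_casualties": 0.5},
    {"name": "swerve_right", "driver_risk": 0.70, "expected_casualties": 0.1},
]

DRIVER_RISK_LIMIT = 0.2  # assumed acceptable risk to the occupant

def choose_maneuver(options):
    """Protect the driver first, then minimize expected casualties."""
    safe_for_driver = [o for o in options if o["driver_risk"] <= DRIVER_RISK_LIMIT]
    pool = safe_for_driver or options  # if nothing protects the driver, consider everything
    return min(pool, key=lambda o: o["expected_casualties"])

print(choose_maneuver(candidates)["name"])  # -> "swerve_left"
```

Of course, the hard part is not the rule itself but producing trustworthy risk estimates to feed into it.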
I'm curious whether in 10-20 years we will see court cases against car companies whose autonomous car killed, for example, 3 kids rather than 1 man. I think it might be a really interesting subject.
In my opinion, we cannot teach robots ethics. Attempting to teach AI habits or how to behave well can only end in failure. At every step, every person would do their own thing, while the robots would repeat only the learned, "correct" operation. Sometimes you have to choose an option that is not entirely correct, but you have to make a decision. The described trolley problem is equally difficult to solve because we do not know how the AI would react. Would it be able to stop and avoid an accident? How many fewer accidents would there be if robots were driving cars?
First of all: Tesla. For me, Elon Musk is a genius, and the cars made by his company are great. They are able to act to prevent crashes hundreds of times faster than people. It's incredible, isn't it? But it's not that simple. What about a situation where the computer has to decide what the best choice is? I mean, what will be the safest way to avoid an accident? Going off the road? Hitting the car driving next to us? Or maybe hitting a cyclist instead of a child on a pedestrian crossing? These decisions are hard even for us to make. We don't have time to think in these kinds of situations. We have to make a move, and the right move. And there comes the big thing: machine learning. We can predict some kinds of situations and do research to find out how to most effectively teach robots to make the right choices. But still, it requires thousands of hours of tests. I've heard about a situation where an autonomous car crashed because of the car in front of it, which had a painting of the sky, a road, etc. on its back; the computer thought it was a normal road.
To sum up, our future holds enormous possibilities. Will they be used in the proper way? We will see.
I think the real question is: how can we prevent autonomous cars from encountering such tough situations? Why should we even consider such dilemmas? Isn't it better to design means of transportation in a way that assures safety?
Right now, autonomous cars are being designed to use the regular roads and streets humans use every day. In that case, there is a constant risk of hitting somebody. Even if the car has outstanding AI capabilities, it won't be able to respond correctly to all the random events it encounters.
That's why I think we should separate autonomous transportation entirely from regular traffic. Actually, the railway is a good example of a well-designed means of transportation. Trains have separate tracks which can't be used by any other type of machine. They can be easily controlled, and the time of travel can be assessed with no problem. As long as autonomous cars share roads with people, there is a possibility of an accident, probably caused by human mistake.
To sum up, I believe that the real deal is to find a way of improving the infrastructure autonomous cars are going to use, not their artificial intelligence.
I hadn't heard about carebots before, but I like this initiative. Caring for sick and elderly people with personalized robots is a great idea. In conclusion, I quote the last sentence, which I agree with one hundred percent: "The robots turn out to be better at some ethical decisions than we are".
I think the question that could bother us in the future is "How do we prevent an AI takeover?". We can't estimate the power of Artificial Intelligence, so maybe one day we will have to limit its resources.
We don't see the world in black and white. We see it all in grey. Robots will not be able to see the world in grey. They will be taught only 'evil' and 'good', black and white. No grey, no compromises. They will not be able to analyze a situation and draw conclusions.
About ethics: I think it is possible for a machine to learn ethics from humans. It can learn by taking an input and comparing the output it produces with the human decision.
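That is essentially supervised learning. A minimal sketch of the loop described above, in Python: the situations, labels, and learning rate are all invented for illustration, and a real system would be far more involved.

```python
# Each situation is a feature vector; the label is what a human decided (1 = brake, 0 = don't).
# These training examples are made up for illustration.
examples = [
    ([1.0, 0.9, 0.1], 1),
    ([1.0, 0.2, 0.8], 0),
    ([1.0, 0.7, 0.3], 1),
    ([1.0, 0.1, 0.9], 0),
]

weights = [0.0, 0.0, 0.0]

def predict(w, x):
    """The machine's current output for a situation."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Compare the machine's output with the human decision; nudge the weights on disagreement.
for _ in range(20):
    for x, human_decision in examples:
        machine_decision = predict(weights, x)
        error = human_decision - machine_decision
        if error != 0:
            weights = [wi + 0.1 * error * xi for wi, xi in zip(weights, x)]

print([predict(weights, x) for x, _ in examples])  # ideally matches the human labels
```

The obvious catch is that the machine only ever becomes as "ethical" as the human decisions it was shown.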
When it comes to which choice to take, it's a much harder subject. People really like to put a value on things, and while that makes a lot of things easier, it doesn't mean it makes them true. Look how easy it is: let's say a life is equal to one, so it must mean that five lives are greater than one, and kaboom... just like that, a loving father of four loses his life because some children decided to fetch a ball that somehow found itself on the road. There are no good choices here, so maybe the best option is to get rid of cars from urban areas for good.
Now it's basically a system that we build on trust, since we've expanded it way beyond the limits it was meant to operate within.
My personal opinion is that we should let it evolve, observe it and maybe even learn from it in the future (from what I've heard, we are already learning some minor things from AI, like design patterns, etc.).
Same here. If a robot can minimize the damage and choose a better path based on advanced math and on foreseeing the crash, then by all means it should do it. But decisions about whether person A or person B should die should still, if unavoidable, come down to the first option considered or a random function, to avoid any possible problems arising from choosing one over the other, from lawsuits to riots.
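A random tie-break like that is trivial to write down. Here is a tiny sketch, assuming the car has already reduced the dilemma to a few options with made-up harm estimates: minimize harm where the options differ, and pick at random only when they are genuinely equal.

```python
import random

# Hypothetical outcomes of the only remaining options (names and numbers invented).
options = [
    {"name": "swerve_towards_A", "expected_harm": 1},
    {"name": "swerve_towards_B", "expected_harm": 1},
]

def decide(options):
    """Minimize expected harm; break exact ties at random rather than by ranking people."""
    least = min(o["expected_harm"] for o in options)
    tied = [o for o in options if o["expected_harm"] == least]
    return random.choice(tied)

print(decide(options)["name"])
```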
Artificial intelligence is still at the development stage, and in my opinion ethical problems should be solved by law, globally.
https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot
We are people; we cannot do everything 100% ethically every time, as anyone who has ever lied about a speeding ticket knows. Feelings: I would like to keep this one thing that separates us from machines separate for as long as possible... Also, watch the new Blade Runner if you want to see how that could turn out.
I don't know; I think that ethics is a subjective product of intelligence and of the ability to feel emotions and to analyze a situation in a human way. And I wouldn't trust a robot that was created by someone with his own opinions about right and wrong.
About that car situation: if the car were controlled by the man, he could predict that if children are playing near the road, he has to control his speed so that he is able to stop in any case. A human can handle this much better than any machine, if only he takes responsibility for his actions and always stays in control. If you are tired and sleepy, don't drive, and so on.
And really, we keep striving to make our lives simpler: not to make decisions or choices, not to be responsible, not to think, not to move. And that is a bit scary. What will we be doing in 20-30 years...
Humans can't agree on one solution: some would choose the lesser suffering, killing fewer people, and others would make the car continue on its predefined path, in this case killing more people. So if we humans cannot choose one solution, why would cars? I even recommend everyone take the test linked here: http://moralmachine.mit.edu It's a great example of the lack of agreement between people faced with that problem.
I'm all for autonomous cars, but maybe the best solution to the trolley problem would be improving or completely changing the way we build roads and other vehicle infrastructure.
On the other hand, the more human tasks we entrust to computers, the fewer humans will be needed.
And who knows whether computers will eventually subdue the world and use a small part of the human factor to achieve their goals.
I think it is worth considering how ethical it is to give up human labor for the work of machines. In our constant pursuit we forget that the world is there for us to use. Moreover, people have a huge problem with taking an ethical approach to life; the same goes for machines that don't have feelings or emotions. I'm not against development, but it's getting dull.
Moreover, I believe there are more burning challenges for ethics which we do need to tackle, for example in genetics and bioengineering. They are still unsolved years after we started controversial experiments. If we do not know how to solve them, we should not optimistically assume that machine learning algorithms (which are essentially just mathematical models) will. Machine learning algorithms learn from their inputs, and they often produce "black box" solutions. There are problems with both of these aspects. The first is simplistically but well demonstrated by Microsoft's experiment with the AI chatbot Tay, which turned racist and misogynist in less than a day. The second is that "black box" solutions do not seem to be ethical themselves. Does a judge's decision that cannot be explained seem fair?
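The first aspect, learning from inputs, can be shown with a deliberately silly toy (this is not how Tay actually worked, just a sketch of the failure mode): a bot that only repeats continuations it has been fed inherits whatever its input stream contains, and there is nothing inside it to inspect except the learned table of replies.

```python
import random
from collections import defaultdict

# A toy bot that only ever repeats continuations it has seen in its input data.
class EchoBot:
    def __init__(self):
        self.responses = defaultdict(list)

    def learn(self, prompt, reply):
        self.responses[prompt].append(reply)

    def answer(self, prompt):
        seen = self.responses.get(prompt)
        return random.choice(seen) if seen else "..."

bot = EchoBot()
bot.learn("what do you think of X?", "X is great")
# Flood the training stream with hostile input and the bot reproduces it;
# there is no internal rationale to inspect, only the learned table of replies.
for _ in range(100):
    bot.learn("what do you think of X?", "X is terrible")

print(bot.answer("what do you think of X?"))  # almost certainly the flooded answer
```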
This article raises very tough questions; however, these questions have been asked many times in many different cases, and they didn't just appear now. When it comes down to making a choice, to choosing the lesser evil as in the trolley problem, it's a situation I would never want to be in, but we still have trolleys and we are happy to use them, right? By saying trolleys I mean all the technology and machinery that is available nowadays and could lead to any harmful situation. Going back to the trolley problem: we have managed to minimize the risk by establishing strict rules to follow and by teaching people so that they become specialists in their fields. This has helped a lot to bring down the number of people killed by machines every year.
“I don't have to answer that question to pass a driving test, and I'm allowed to drive”
I believe that this statement, which is a fact, partially solves the problem of letting cars drive autonomously. We use machines that cause deaths because we benefit from them greatly.