In Computer Science, Artificial Intelligence is definitely considered cutting-edge. We are working on Machine Learning – making our computers learn by observing the world around them and by finding patterns in data. Just like we do. But to an outside observer, this sounds almost like magic. Like we’re trying to free a genie from its bottle. It could either push us to miraculous new advances, or it might just destroy us.
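To make “finding patterns in data” a little more concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, using only NumPy): instead of being programmed with a rule, the program recovers the rule from example data.

```python
import numpy as np

# Made-up toy data: the hidden pattern is y = 2x + 1, plus a little noise.
rng = np.random.default_rng(seed=0)
x = np.arange(0.0, 10.0)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.shape)

# "Learning" here is just least-squares fitting: the program is never told
# the rule; it recovers the slope and intercept from the observations.
slope, intercept = np.polyfit(x, y, deg=1)

print(f"learned pattern: y ~ {slope:.2f}x + {intercept:.2f}")
print(f"prediction for x=100: {slope * 100 + intercept:.1f}")
```

Real machine learning systems fit far more parameters to far messier data, but the principle is the same: the pattern comes from the data, not from the programmer.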
One of the ideas often discussed in the popular science press is the possibility of an intelligence explosion – a singularity: a point after which we are unable to predict what comes next, because all the rules will change. An intelligence explosion assumes that at some point we will invent an AGI (Artificial General Intelligence) that is as smart as we are. Some conclude that this AGI will then be able to create a slightly better version of itself – after all, that’s something we were able to do. Once it’s a bit better, it will be able to repeat the trick over and over, each time building a new version that is faster, better and “more intelligent” than we are. But what comes next? What will such a superintelligent being think of us? Will it be our friend? Or our enemy?
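This argument is easy to turn into a toy model. The hypothetical simulation below (my own illustration with made-up numbers, not something from the article) shows why the naive version sounds explosive: if each generation can build a successor that is a constant 10% better, capability compounds exponentially.

```python
# A toy model of the naive "intelligence explosion" argument (hypothetical
# numbers): each generation designs a successor a fixed 10% smarter.
capability = 1.0           # generation 0: human-level, by assumption
improvement_factor = 1.10  # each generation improves itself by 10%

for generation in range(1, 51):
    capability *= improvement_factor

print(f"after 50 generations: {capability:.1f}x human-level")
# Constant multiplicative gains compound into an exponential "explosion":
# roughly 117x after 50 steps. The rest of the article questions exactly
# this assumption of a constant, friction-free improvement factor.
```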
François Chollet, creator of the Keras deep learning library for Python and author of the essay “The impossibility of intelligence explosion”, thinks this is just another myth propagated by people who don’t really know the topic. There are several problems with the idea of a singularity.
Firstly, there is no such thing as “general intelligence”. In fact, psychologists can’t even agree on what intelligence is. We have some ideas and some definitions, but they don’t cover all the cases we observe in the real world. Intelligence is often described as a “general problem-solving skill”, but we don’t really understand how it works. If we make “it” think faster, will it actually be more intelligent? Or does “more intelligence” require a change in the quality of thinking? This undermines the idea of a “slightly better AGI”.
Also, our intelligence seems to be connected with our culture – or, more broadly, with our environment. We don’t really know how much of intelligence comes from the way we teach our kids. If a genius kid were suddenly lost in the jungle and raised by wolves (like in The Jungle Book), would it still be a genius? Would it be able to invent new technologies and ideas?
Our intelligence is partially encoded in our civilization, which is, in a way, constantly improving and reinventing itself. Yet it’s not really exploding: the complexity of our civilization grows as fast as its capabilities. We can observe the same with our computers. Moore’s law says that every 18 months our processors double their processing power. But is that an explosion? Not really, because the tasks they face are also increasing in complexity. Exponential progress, meet exponential friction.
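The “exponential friction” point can be illustrated by extending the earlier toy model (again a hypothetical sketch with made-up numbers): if task complexity grows at the same exponential rate as raw capability, effective progress stays flat.

```python
# Extending the toy model above: capability doubles every step (Moore's-law
# style), but so does the complexity of the tasks it is applied to.
capability = 1.0
task_complexity = 1.0

for step in range(1, 21):
    capability *= 2.0       # exponential progress
    task_complexity *= 2.0  # exponential friction

effective_power = capability / task_complexity
print(f"raw capability after 20 steps: {capability:.0f}x")
print(f"effective power (capability / complexity): {effective_power:.1f}x")
# Raw capability is ~1,000,000x, yet effective power is still 1.0x.
# Exponential growth divided by exponential friction is no explosion at all.
```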
In conclusion, we are not really in danger of a singularity. There will be no intelligence explosion on Earth. We will observe growth, which is good news, but we shouldn’t fear becoming irrelevant overnight. That’s just the myth of the “intelligence explosion”.
Questions:
1. Are you aware of how many AI products you are already using?
2. Do you think we’ll see AGI in our lifetime?
3. Are you afraid we’ll be made irrelevant once AI is used everywhere around us?
Comments
I believe that technology is growing very fast, but to be honest I don’t think we’ll see AGI in our lifetime. I can also say that I’d be very happy not to see it :D
As I’ve said, I think I’m afraid of it, because nowadays even some real people become irrelevant to others. So…
When it comes to new technology, it is still expanding drastically, but I don’t think we will see AGI in our lifetime. I guess that once it is ready, it will still be tested many times before release, so it will take many years before we see it.
I think it is possible for humankind to become irrelevant. Nowadays a lot of workplaces are closing because humans have been swapped for machines with artificial intelligence. I think it will take some time, but I guess we will become irrelevant.
2. Do you think we’ll see AGI in our lifetime? I hope so, because it will definitely be a new page in our lives. To my mind, it may bring big changes to our world.
3. Are you afraid we’ll be made irrelevant once AI is used everywhere around us? I don’t believe in all those myths that robots will replace humans and so on, but I do strongly believe that most manual work will be done by robots.
In my opinion, human abilities are as high as AI’s. The human brain has many abilities and capabilities that have not yet been discovered. Information and knowledge are expanding in the world, and there is still a long way to go to learn everything.
Artificial Intelligence is written by humans, and the rules it follows are defined by humans in the program code, so it can only do what humans make available to it.
A real example is the use of artificial intelligence in China to give citizens a rating, which looks more like a nightmare.
It's difficult to answer the question of whether we will ever see AGI in our lifetime, simply because we still don't have a precise definition of intelligence in AI terms. Although I guess we will witness something mind-blowing in the next 10–20 years.
Certainly not; there should be someone alive in order to control those AI systems. I mean, there will still be positions which require creativity, e.g. science, art.
2. I think we will see some form of AGI in the next few years, as it is one of the fastest-growing areas in science and technology. Every tech company and major university is investing lots of money in AI research & development, so it will inevitably lead to AGI being created.
3. In my opinion, that is not going to happen, because there will still be areas where it would be difficult to replace humans, e.g. creating artworks, managing people, expressing emotions. Additionally, I think that authorities and designers of AI-driven solutions will consider implementing some kind of limitations on the self-improvement process of such devices in order to prevent machines from getting out of humans’ control.
I don't really worry about becoming irrelevant due to Artificial Intelligence. Since I work in IT, if my specialisation became automated, I would shift to another field of IT, maybe one connected with tuning this AI or something like that.
I think we will see the first AGI in the next 30–40 years, and it will change everything in this world; it will probably even start a new era in human history.
I don't think that all humans will be replaced, because there are still many roles and jobs which cannot be replaced. Humanity is making this for itself, so it's obvious that some places will have to stay.
I think we will be the generation that witnesses the introduction of AI into the daily life of the planet's population.
Even today, many processes happen automatically, and technology replaces people in production. On the one hand, this is a negative phenomenon, but such human progress can be forgiven.
On the other hand, maybe because they will be more intelligent than us, they will be less prone to violence and abuse than we are. We can only hope.
I don't like the term artificial intelligence, because at this point there is no intelligence. I prefer the terms machine learning and deep learning, because computers just calculate weights and probabilities. So I'm happy that computers are now better at predictions.
2. Do you think we’ll see AGI in our lifetime?
I think not, because the switch from learning how to play DOTA to real-world learning is not a question of a decade. From my perspective, it will take about a century.
3. Are you afraid we’ll be made irrelevant once AI is used everywhere around us?
I think people will be able to concentrate on more important tasks and also perform better with the assistance of AI.
Probably I'm not aware of their number, because for me they shouldn't be called intelligent.
2. Do you think we’ll see AGI in our lifetime?
I don't think this will happen in our lifetime. Humans are trying to invent something as smart as we are, but I'm not convinced that it will happen at all.
3. Are you afraid we’ll be made irrelevant once AI is used everywhere around us?
I'm not afraid for myself, but maybe for future generations. Nowadays many people lose their jobs in favour of robots or some kinds of systems. They have many benefits from the employer's point of view, and something like AI will have even more.
In my opinion, we’re far away from AGI now, as is the possibility of machines realizing their own existence and starting recursive self-improvement (a.k.a. the emergence of the singularity). We’re very far from that now. And I’m not afraid at all.