Remember when cell phones looked like this?
You could call, text, maybe play Snake on it … and it had about 6 megabytes of memory, which was a small miracle at the time. Then phones got faster, and every two years or so you probably upgraded yours from 8 gigs to 16 to 32, and so on. This incremental technological progress we’ve all been participating in for years hinges on one key trend, called Moore’s law.
Bunch of old cell phones
Intel co-founder Gordon Moore predicted in 1965 that integrated circuits, or chips, were the path to cheaper electronics. Moore’s law states that the number of transistors (the tiny switches that control the flow of electrical current) that can fit on an integrated circuit will double roughly every two years, while the cost per transistor falls. Chip power goes up as cost goes down. That exponential growth has brought massive advances in computing power… hence tiny computers in our pockets! A single chip today can contain billions of transistors, each about 14 nanometres across. That’s smaller than most human viruses! Now, Moore’s law isn’t a law of physics; it’s just a good hunch that’s driven companies to keep making better chips.
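To get a feel for the numbers, here’s a back-of-envelope sketch of that doubling in Python. The 1971 starting point (Intel’s 4004, roughly 2,300 transistors) is just a convenient historical reference; the one-line formula is the whole of Moore’s law.

```python
# Back-of-envelope Moore's law arithmetic: transistor counts doubling
# every two years from a 1971 reference point (Intel 4004, ~2,300
# transistors). Purely illustrative projection, not real chip data.

def projected_transistors(year, start_year=1971, start_count=2300):
    return start_count * 2 ** ((year - start_year) / 2)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# 1971: ~2,300   1991: ~2.4 million   2011: ~2.4 billion   2021: ~77 billion
```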
But experts claim this trend is slowing down. Granddaddy chipmaker Intel recently disclosed that it’s becoming harder to roll out smaller transistors on a two-year timeframe while keeping them affordable. So, to power the next wave of electronics, there are a few promising options in the works. One is quantum computing. Another, currently in the lab stage, is neuromorphic computing: computer chips modelled after our own brains! They’re basically capable of learning and remembering all at the same time, at an incredibly fast clip.
Let’s break that down, starting with the human brain. Your brain has billions of neurons, each of which forms synapses, or connections, with other neurons. Synaptic activity relies on ion channels, which control the flow of charged atoms (ions) like sodium and calcium that let your brain function and process information properly.
Neuron sketch
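One common way researchers simplify that biology is the leaky integrate-and-fire model: charge flows in, slowly leaks away, and the neuron “fires” once a threshold is crossed. Here’s a minimal, purely illustrative sketch of that idea; the parameters are made up, and no particular chip works exactly like this.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: the membrane "voltage"
# integrates incoming charge, leaks over time, and emits a spike when it
# crosses a threshold -- a standard simplification of real ion-channel
# dynamics. Threshold and leak values here are illustrative.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    voltage = 0.0
    spikes = []
    for current in inputs:
        voltage = voltage * leak + current   # leak, then integrate input
        if voltage >= threshold:             # threshold crossed: spike
            spikes.append(1)
            voltage = 0.0                    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.8, 0.6]))  # [0, 0, 1, 0, 0, 1]
```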
A neuromorphic chip copies that model by relying on a densely connected web of transistors that mimic the activity of ion channels. Each chip has a network of cores, with inputs and outputs wired to other cores, all operating in conjunction with one another. Because of this connectivity, neuromorphic chips integrate memory, computation, and communication all together.
Simplified model of a chip based on neuromorphic architecture
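To make the “cores wired to cores” picture concrete, here’s a toy sketch in which each core keeps its synaptic weight (memory) right next to its update rule (computation) and passes spikes to the cores it’s wired to (communication). All names and numbers are illustrative; no real neuromorphic chip exposes an interface like this.

```python
# Toy "network of cores": memory, computation, and communication live
# together in each core instead of being split across separate units.

class Core:
    def __init__(self, name, weight, threshold=1.0):
        self.name = name
        self.weight = weight        # local memory: synaptic weight
        self.voltage = 0.0          # local state
        self.outputs = []           # cores this core is wired to
        self.threshold = threshold

    def receive(self, spike):
        self.voltage += spike * self.weight    # local computation
        if self.voltage >= self.threshold:     # threshold crossed: fire
            print(f"core {self.name} fired")
            self.voltage = 0.0
            for core in self.outputs:          # propagate to wired cores
                core.receive(1.0)

a, b, c = Core("a", 0.6), Core("b", 0.7), Core("c", 1.0)
a.outputs, b.outputs = [b], [c]    # wire a -> b -> c
for _ in range(4):                 # feed four input spikes into core a
    a.receive(1.0)
```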
These chips are an entirely new computational design. Standard chips today are built on the von Neumann architecture, where the processor and memory are separate and data moves between them. A central processing unit runs commands fetched from memory to execute tasks. This is what has made computers very good at number crunching, but not as efficient as they could be, since every instruction and every piece of data has to travel back and forth over the same channel.
Simplified model of a chip based on von Neumann architecture
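For contrast, here’s a stripped-down sketch of a von Neumann machine: one memory holds both the program and the data, and the CPU loop fetches every instruction and operand over that same channel, which is exactly the bottleneck neuromorphic designs try to sidestep. The instruction set here is invented for illustration.

```python
# Minimal von Neumann machine: program and data share one memory, and a
# fetch-execute loop moves everything through the CPU one step at a time.

memory = {
    0: ("LOAD", 10),      # program: load the value at address 10
    1: ("ADD", 11),       # add the value at address 11
    2: ("STORE", 12),     # store the result at address 12
    3: ("HALT", None),
    10: 2, 11: 3, 12: 0,  # data lives in the same memory as the program
}

acc, pc = 0, 0                       # accumulator and program counter
while True:
    op, addr = memory[pc]            # fetch: a trip to memory
    pc += 1
    if op == "LOAD":
        acc = memory[addr]           # another trip to memory
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[12])  # 5
```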
Neuromorphic chips, however, completely change that model by keeping storage and processing together within these “neurons”, which all communicate and learn together. The hope is that neuromorphic chips could transform computers from general-purpose calculators into machines that learn from experience and make decisions. We’d leap to a future where computers wouldn’t just crunch data at breakneck speeds but could do that AND process sensory data in real time. Some future applications might include combat robots that decide how to act in the field, drones that detect changes in the environment, and your car taking you to a drive-through for ice cream. Basically, these chips could power our future robot overlords. We don’t have machines with sophisticated, brain-like chips yet, but they’re on the horizon. So get ready for a whole new meaning of the term “brain power.”
Sources:
https://www.investopedia.com/terms/m/mooreslaw.asp
https://en.wikipedia.org/wiki/Neuron
https://www.intel.pl/content/www/pl/pl/research/neuromorphic-computing.html
https://towardsdatascience.com/neuromorphic-hardware-trying-to-put-brain-into-chips-222132f7e4de
https://www.semanticscholar.org/paper/Neuromorphic-devices-and-architectures-for-Burr-Narayanan/1f170ebf03c5c269f873d2a129ba4dc4cfb08cc2
- Do you think that neuromorphic architecture could replace standard von Neumann architecture?
- What applications do you see for artificial intelligence based on this architecture?
- Are you afraid of a Skynet scenario?
Comments
2. I think it may be able to perform better on all the artificial intelligence applications being developed nowadays. Of course, it will also accelerate their development, and maybe one day cars will be fully autonomous, and software will be able to diagnose and perform surgery, or even make strategic decisions in politics or the economy.
3. Right now, no. Artificial intelligence algorithms have their limitations. They’re able to perform really well at some tasks, but they can’t link those results together to create something more complex. Maybe the development of things like neuromorphic architecture, which seems ideal for e.g. neural networks, will accelerate this process and enable more complex solutions.
It could in the distant future. Von Neumann software and hardware architecture is used almost everywhere right now, and a replacement for an architecture this entrenched will take years to develop.
2. What applications do you see for artificial intelligence based on this architecture?
Many applications nowadays use AI based on this kind of architecture because it is the way of the future. I see most applications working this way in the future.
3. Are you afraid of a Skynet scenario?
No, I’m not. Or at least I don’t believe I will be alive at that moment.
Not completely. Different architectures are suitable for different purposes. I do not think that neuromorphic architecture will be a universal solution to all problems.
2. What applications do you see for artificial intelligence based on this architecture?
Everywhere. Of course, it won’t give exact results everywhere, but it will definitely be interesting to watch. Sometimes AI does strange things.
3. Are you afraid of a Skynet scenario?
I’ll program my robots so that they can’t hurt me under any circumstances; they’ll just protect me.
Do you think that neuromorphic architecture could replace standard von Neumann architecture?
It is different, indeed. Replacing something well established with something that improves on its characteristics is a natural development cycle.
What applications do you see for artificial intelligence based on this architecture?
Neural networks have proven their capability in nearly every sphere of science and learning. Although the only thing a program can do is execute its own code, we do not know whether a human is anything more than that. This might be the answer to the human source code.
Are you afraid of a Skynet scenario?
Not at all. In fact, it is inevitable as a result of the natural development of technology.
At the moment it is only a novelty, for which we can look for new applications where it will work better than the alternatives. Replacing the current architecture will not happen soon, but it may be the future, as von Neumann architecture will eventually reach its limits.
2. What applications do you see for artificial intelligence based on this architecture?
Definitely the robotics and autonomous vehicles industry.
3. Are you afraid of a Skynet scenario?
Looking at the current development of AI, I am not afraid of it yet, but who knows what will happen one day ...
As far as I know, von Neumann software and hardware architecture is used almost everywhere today. Its successor would have to be immaculate. That can take a long time.
2. What applications do you see for artificial intelligence based on this architecture?
The car industry, mainly passenger or public use (metro, trams, buses).
3. Are you afraid of a Skynet scenario?
Looking at the current development of artificial intelligence, I’m not afraid of it yet.
2. Humans will most likely find a way to put it everywhere, but what comes to mind first is probably robots. It would surely be nice to see Boston Dynamics’ creations running on this architecture.
3. Not really. All you have to do is put in an if statement to restrict them from going evil, or remove their red lights, and problem solved :). And even if that doesn’t work, they will still need us to read captchas for them. On a serious note, though, I highly doubt that we will advance enough in my lifetime for such a scenario to appear.
2. All of those that are now based on AI. It’s hard to say what else we could use AI for.
3. Artificial intelligence is weak, and I doubt that within my lifetime it will achieve such power that it could pose a global threat.