AI vs the Human Brain
Apr 9, 2020
The question of whether AI will take over the world has been around for years, but will it? Let’s figure out the difference between AI and the human brain.
Have you watched The Terminator? That is an example of how humans think Artificial Intelligence (AI) could go wrong. But worry not, that future will not come (at least, hopefully?). By now, you are probably familiar with questions like “Will AI ever be able to replace humans?”, “What is the difference between AI and the human brain?”, or “Will AI ever turn against us?”. So let’s start off with something simple – what exactly is AI, and how does it work?
“Can machines think?” – Alan Turing (1950)
Famous for breaking the Nazis’ encrypted military codes and helping the Allied Forces win World War II, Alan Turing changed history yet again with this simple question, laying down the fundamental goal and vision of Artificial Intelligence.
AI is a branch of computer science that, at its core, answers Turing’s question – and it seems to lean on the “Yes” side. The ultimate goal of AI is to “teach machines how to think”, by replicating human intelligence in machines.
AI is “the study of agents that receive percepts from the environment and perform actions.” – Russell and Norvig, Artificial Intelligence: A Modern Approach (p. viii).
In simple terms, AI is the science and engineering of making intelligent machines, which is closely related to using computers to understand human intelligence. And by intelligence, we mean the ability to achieve goals through learning, thinking, adapting, and acting.
Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as “Algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”
There are two categories of AI: Narrow AI (weak AI) and Artificial General Intelligence (AGI – strong AI). Your friendly neighbor Alexa and your sweet Siri are examples of Narrow AI, while, well, a cyborg or the Terminator is an example of AGI – a machine with general intelligence that can act like a proper human being.
While Narrow AI focuses on performing a single task extremely well (searching the web, calculating data, performing checks, etc.), such systems operate under far stricter constraints than humans do. Much of Narrow AI is developed through machine learning and deep learning.
Artificial General Intelligence, however, is the closest we will get to “cloning a human brain”. Machines with AGI could use their general knowledge and intelligence to solve any problem. Creating such machines is the Holy Grail for many AI researchers, but the journey continues, as the problem has proven no easier despite years of progress in technology.
AI works by combining large amounts of data with intelligent algorithms, allowing machines and software to learn automatically from patterns or features in the data. AI is a broad field of study that includes many theories, methods and technologies. As explained by SAS Insights, these include:
Machine learning: uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without being explicitly programmed.
A neural network: a type of machine learning model made up of interconnected units that process information by responding to external inputs and relaying signals between one another. The process requires multiple passes over the data to find connections and derive meaning from undefined data.
Deep learning: uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
Cognitive computing: a subfield of AI that strives for a natural, human-like interaction with machines. Using AI and cognitive computing, the ultimate goal is for a machine to simulate human processes through the ability to interpret images and speech – and then speak coherently in response.
Computer vision: relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.
Natural language processing (NLP): the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
Graphics processing units (GPUs): key to AI because they provide the heavy compute power required for iterative processing. Training neural networks requires big data plus compute power.
The Internet of Things: generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.
Advanced algorithms: being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.
APIs, or application programming interfaces: portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.
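To make the machine-learning idea above concrete – finding patterns in data rather than hand-coding rules – here is a toy sketch in pure Python (no frameworks; the learning rate and epoch count are arbitrary values chosen for illustration). A single artificial neuron, the building block of the neural networks described earlier, learns the logical AND function from examples alone:

```python
import math

# Training examples for logical AND. The rule "output 1 only when
# both inputs are 1" is never written anywhere in the code below;
# the neuron has to discover it from these four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # parameters of one artificial neuron

def predict(x1, x2):
    """Weighted sum of inputs, squashed into (0, 1) by a sigmoid."""
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Gradient descent: repeatedly nudge each parameter to reduce the
# gap between the neuron's prediction and the target output.
lr = 0.5  # learning rate (illustrative value)
for _ in range(5000):
    for (x1, x2), target in data:
        error = predict(x1, x2) - target
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        b  -= lr * error

for (x1, x2), target in data:
    print(f"AND({x1}, {x2}) -> {predict(x1, x2):.2f} (target {target})")
```

Deep learning is, loosely, this same idea scaled up: millions of such units stacked in many layers, trained on far more data, which is why the GPUs mentioned above matter so much.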
Elon Musk, CEO of Tesla and SpaceX, has claimed that AI is a “fundamental risk to the existence of human civilization”, yet, ironically, he also co-founded OpenAI, a non-profit AI research company developing friendly AI that can benefit society as a whole.
Another scientist who saw AI as a potential risk is the famous Stephen Hawking, who foresaw that once AI reaches a certain level of intelligence, it will rapidly advance to the point that it outstrips human capabilities – a phenomenon known as the singularity (you have probably seen one of those movies) – and pose a potential threat to humanity.
Yet, AI researchers believe that as long as they are careful about the risks, it won’t happen. Furthermore, that shouldn’t be an issue for at least a few more decades, since we are not even close to developing real AGI yet.
Instead, a more credible near-future possibility is…
Yes and no. As technology advances rapidly, repetitive and complicated tasks are slowly being taken over by automation and AI.
However, to make sure all these programs run smoothly, employees are being retrained in newer skills that increase productivity.
Since machines are much better than humans at routine, repetitive work, and we are much better at creative and managerial tasks, why don’t we each keep doing what we are best at?
Optimism aside, millions of jobs are being replaced by AI, and this may cause some panic in the workforce. However, as with every technological shift, demand for new jobs and skills will surface.
It is up to businesses to offer employment to those who are displaced and to provide the proper training. This leads to a future where AI and humans work side by side, complementing each other’s weaknesses.