What is Artificial Intelligence (AI)? How does AI work?

 


Definition of Artificial Intelligence

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision. John McCarthy offers the following definition in his 2004 paper: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable." Artificial intelligence is a constellation of many different technologies working together to enable machines to sense, comprehend, act, and learn with human-like levels of intelligence.

AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

What is weak Artificial Intelligence or Narrow AI?


Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles. These systems are powerful, but their scope is narrow: they tend to be focused on driving efficiencies. With the right application, though, narrow AI has immense transformational power, and it continues to influence how we work and live on a global scale.

What is strong Artificial Intelligence or General AI?

General AI is more like what you see in sci-fi films, where sentient machines emulate human intelligence, thinking strategically, abstractly and creatively, with the ability to handle a range of complex tasks. While machines can perform some tasks better than humans (e.g. data processing), this fully realized vision of general AI does not yet exist outside the silver screen. That’s why human-machine collaboration is crucial—in today’s world, artificial intelligence remains an extension of human capabilities, not a replacement.

Strong AI comprises Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

What is the difference between machine learning and deep learning?

Machine learning is a subset of artificial intelligence focused on a specific goal: setting computers up to perform tasks without the need for explicit programming. Computers are fed structured data (in most cases) and ‘learn’ to become better at evaluating and acting on that data over time. Once programmed, a computer can take in new data indefinitely, sorting and acting on it without the need for further human intervention.

Supervised learning is a subset of machine learning that requires the most ongoing human participation, hence the name ‘supervised’. The computer is fed training data and a model explicitly designed to ‘teach’ it how to respond to that data.

In semi-supervised learning, the computer is fed a mixture of correctly labeled data and unlabeled data, and searches for patterns on its own. The labeled data serves as ‘guidance’ from the programmer, but the programmer does not issue ongoing corrections.

Unsupervised learning takes this a step further by using only unlabeled data. The computer is given the freedom to find patterns and associations as it sees fit, often generating results that might have been unapparent to a human data analyst.

In supervised and unsupervised learning, there is no ‘consequence’ to the computer if it fails to properly understand or categorize data. In reinforcement learning, by contrast, the computer figures out how to get a specific task done through trial and error, knowing it is on the right track when it receives a reward (for example, a score) that reinforces its ‘good behavior’.
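The supervised/unsupervised distinction above can be sketched in a few lines of code. The toy 2-D points, the ‘cat’/‘dog’ labels, and the helper names below are illustrative inventions, not drawn from any real dataset: a nearest-centroid classifier stands in for supervised learning (labels guide the model), and a naive two-cluster k-means stands in for unsupervised learning (the algorithm finds structure in unlabeled points on its own).

```python
def centroid(points):
    """Mean of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def sq_dist(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: labeled examples guide the model -----------------------
labeled = {"cat": [(1, 1), (2, 1)], "dog": [(8, 9), (9, 8)]}
centroids = {label: centroid(pts) for label, pts in labeled.items()}

def classify(point):
    # Predict the label whose centroid lies closest to the new point.
    return min(centroids, key=lambda lbl: sq_dist(point, centroids[lbl]))

# --- Unsupervised: no labels, the algorithm finds the structure ---------
def two_means(points, iters=10):
    """Naive 2-cluster k-means over unlabeled points."""
    c1, c2 = points[0], points[-1]          # arbitrary starting centers
    for _ in range(iters):
        g1 = [p for p in points if sq_dist(p, c1) <= sq_dist(p, c2)]
        g2 = [p for p in points if sq_dist(p, c1) > sq_dist(p, c2)]
        c1, c2 = centroid(g1), centroid(g2)
    return g1, g2

print(classify((1.5, 1.2)))                              # prints cat
print(two_means([(1, 1), (2, 1), (8, 9), (9, 8)]))       # two recovered groups
```

Note that the unsupervised routine recovers the same two groups the labels encode, but it never sees a label; it only sees geometry.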

The way deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning," as Lex Fridman noted in his MIT lecture. Classical, or "non-deep," machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features needed to understand the differences between data inputs, which usually requires more structured data. "Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn't necessarily require one. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features that distinguish different categories of data from one another. Unlike classical machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.
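The learning step that deep networks repeat across many stacked layers can be sketched with a single neuron: starting from raw inputs, it adjusts its own weights by gradient descent rather than relying on hand-engineered features. The toy task (learning logical AND), the learning rate, and the epoch count below are illustrative assumptions; a real deep network stacks many such units so that later layers learn features of features.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Raw inputs and targets: output 1 only when both inputs are 1 (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights the model learns for itself
b = 0.0
lr = 1.0         # illustrative learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - target
        # Gradient of the squared error nudges each weight toward the target.
        grad = err * pred * (1 - pred)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the neuron reproduces the AND table from raw inputs alone.
print([round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data])
```

No one told the model which combination of inputs mattered; the weights encoding that rule were found automatically, which is the property deep learning scales up.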

What are the applications of artificial intelligence?


There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

Speech Recognition: Also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, this is a capability that uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search (e.g. Siri) or to provide more accessibility around texting.

Computer Vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.
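The core operation behind the convolutional neural networks mentioned above can be sketched directly: a small filter slides over an image and produces a feature map that responds wherever a visual pattern appears. The 4x4 "image" and the 2x2 vertical-edge kernel below are made-up toy values, not taken from any real vision system.

```python
def convolve2d(image, kernel):
    """Slide a kernel over a 2-D grid and return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise multiply the kernel with the patch beneath it.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# Toy image: dark on the left, bright on the right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A vertical-edge detector: responds strongly where dark meets bright.
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))   # prints [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The peak in the middle column marks the edge. In a CNN these kernel values are not hand-written as here; they are learned from data, and many stacked layers of them build up from edges to shapes to whole objects.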

AI in manufacturing: Manufacturing has been at the forefront of incorporating robots into the workflow. For example, industrial robots that were at one time programmed to perform single tasks and kept separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, on factory floors and in other workspaces.

Security: AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
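The anomaly detection described above can be sketched minimally: flag observations that sit far from the baseline in standard-deviation terms. The hourly login counts and the 2.5-standard-deviation threshold below are illustrative assumptions; production SIEM tooling uses far richer models and many more signals.

```python
import statistics

def find_anomalies(values, threshold=2.5):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    # Score each observation by how many standard deviations it sits from
    # the mean; large scores suggest suspicious activity worth an alert.
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

# Nine ordinary hours of logins, then one hour with a suspicious spike.
hourly_logins = [12, 14, 11, 13, 12, 15, 13, 12, 14, 190]
print(find_anomalies(hourly_logins))   # prints [190]
```

The design choice here mirrors the article's point: no signature for the attack is needed; the alert fires because the behavior deviates from what the data itself establishes as normal.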

AI in education: AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

Recommendation Engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This is used to make relevant add-on recommendations to customers during the checkout process for online retailers.
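A co-occurrence approach like the one described here can be sketched in a few lines: recommend the items most often bought together with what is already in the cart. The order history and item names below are made-up toy data, and real recommendation engines weight signals far more carefully.

```python
from collections import Counter

# Toy purchase history: each past order is a set of items bought together.
past_orders = [
    {"laptop", "mouse", "laptop_bag"},
    {"laptop", "mouse"},
    {"laptop", "laptop_bag"},
    {"phone", "phone_case"},
    {"phone", "phone_case", "charger"},
]

def recommend(cart, orders, k=2):
    """Suggest up to k add-on items that co-occurred with the cart's contents."""
    counts = Counter()
    for order in orders:
        if cart & order:                 # this order shares an item with the cart
            for item in order - cart:    # count the other items it contained
                counts[item] += 1
    return [item for item, _ in counts.most_common(k)]

print(sorted(recommend({"laptop"}, past_orders)))   # prints ['laptop_bag', 'mouse']
```

At checkout, a cart containing a laptop surfaces the mouse and bag that past laptop buyers also purchased, which is exactly the cross-selling pattern described above.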

AI in business: Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.