Although Artificial Intelligence (AI) is not a recent concept, we, the masses, barely know anything about it. Did you know that the idea of artificial beings goes all the way back to antiquity? Greek mythology describes several robotic beings, like Talos – the formidable bronze giant who guarded the island of Crete.
However, it was only during the late 19th century that the concept was revived to shape our current perception of AI. In fact, Samuel Butler’s 1872 novel, Erewhon (an anagram of the word ‘nowhere’), is widely considered to be one of the first modern works of fiction to explore machine intelligence (and Metropolis (1927) was the first feature film to do so). Today, machine intelligence is already part of everyday life – Siri and Alexa, for example. And who knows what top AI research centers like Google and Carnegie Mellon University (CMU) are cooking up at the moment!
However, in order to understand the AI research already available to us, you need to get a hold of the technical jargon of the trade. Here, we have compiled a list of the AI terms that are commonly (and uncommonly) used in the latest technology publications (updated for 2020!). Rest assured that you won’t need to look any further!
Algorithm: It is, essentially, a finite set of rules and directions put together to let the machine know exactly what is to be done. In other words, a well-defined sequence of steps that determines the course of a program is called an algorithm.
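To make the idea concrete, here is a classic example of an algorithm – binary search, which finds a value in a sorted list by repeatedly halving the search range. (The list and target here are just illustrative numbers.)

```python
def binary_search(items, target):
    """Classic algorithm: repeatedly halve a sorted list until the target is found."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # found: return its position
        elif items[mid] < target:
            lo = mid + 1        # target must be in the upper half
        else:
            hi = mid - 1        # target must be in the lower half
    return -1                   # not found

print(binary_search([2, 5, 8, 12, 16], 12))  # prints 3 (the index of 12)
```

Every step is unambiguous and the procedure always terminates – exactly what makes it an algorithm.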
Artificial Neural Network (ANN): An ANN is a system of interconnected nodes, loosely modeled on the neurons of the human brain, that learns from examples. For instance, just as we learn that ‘water’ is a colorless, odorless, tasteless liquid, an ANN is fed many examples of water until it can recognize such a liquid on its own.
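At its smallest, an ANN is built from single artificial neurons. A minimal sketch of one neuron (the input values and weights below are arbitrary, hand-picked numbers, not a trained model):

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1) – a common activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through an activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# Two made-up input features with made-up weights
output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
print(output)  # a value between 0 and 1
```

A full network simply wires many of these neurons together in layers, and training adjusts the weights.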
Augmented Intelligence or Intelligence Augmentation (IA): Imagine any humanly possible task being performed several times faster – complex mathematical calculations, for instance. However, unlike fully autonomous AI, IA is designed to assist and enhance human intelligence rather than replace it: the human always stays in the loop.
Autonomic Computing: Consider this the immune system of a computer. Just like the white blood cells of our body, autonomic computing recognizes foreign invaders in the system and takes the required action against them on its own. It also automatically updates and reconfigures the system whenever needed.
Backpropagation: It’s basically a learning mechanism for neural networks. Humans learn from their mistakes, and so do computers. Backpropagation measures how far off a network’s output was, then passes that error backwards through the network, adjusting the weights so the same mistake is less likely next time.
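The core idea can be sketched with a single linear “neuron” learning y = 2·x by nudging its weight against the error on each example – a toy stand-in for full backpropagation through many layers (the data and learning rate are invented for illustration):

```python
# One weight, learning the rule y = 2 * x from examples
w = 0.0          # start knowing nothing
lr = 0.1         # learning rate: how big each correction is
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for epoch in range(50):
    for x, y in data:
        pred = w * x
        error = pred - y       # how wrong the "network" was
        w -= lr * error * x    # push the error back into the weight

print(round(w, 2))  # prints 2.0 – the weight has converged to the true rule
```

In a real network the same error signal is propagated backwards through every layer using the chain rule.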
Bayesian networks: These are probabilistic models – think of a set of symptoms that predict the presence of a disease in humans. With computers, they compute the probability of several different events based on certain contributing factors.
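The disease example boils down to Bayes’ rule, the arithmetic at the heart of every Bayesian network. A minimal sketch with invented numbers (a 1% prevalence, a 90%-sensitive test with a 5% false-positive rate):

```python
# Hypothetical figures for illustration only
p_disease = 0.01              # 1% of people have the disease
p_pos_given_disease = 0.90    # test catches 90% of true cases
p_pos_given_healthy = 0.05    # test wrongly flags 5% of healthy people

# Total probability of a positive test, then Bayes' rule
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # prints 0.154
```

Even with a positive test, the chance of disease is only about 15% – a Bayesian network chains many such calculations together across a whole graph of related events.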
Chatbots: Short for ‘chat robots’. They are conversational AIs that pick out the main keywords from a question and return the most probable pre-written answer.
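The keyword-matching approach can be sketched in a few lines (the keywords and canned replies below are invented for the example):

```python
# Toy keyword chatbot: match words in the question against canned replies
replies = {
    "hours": "We are open 9am to 5pm, Monday through Friday.",
    "price": "Plans start at $10 per month.",
    "refund": "Refunds are processed within 5 business days.",
}

def answer(question):
    for keyword, reply in replies.items():
        if keyword in question.lower():
            return reply
    return "Sorry, I don't understand. Could you rephrase?"

print(answer("What are your opening hours?"))
# prints: We are open 9am to 5pm, Monday through Friday.
```

Modern chatbots layer statistical language models on top of this basic idea, but keyword matching is where many of them started.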
Clustering: It is a technique for grouping similar data points together without any pre-defined labels. For example, in the restaurant industry, regular customers could be segmented into vegetarians and non-vegetarians with the aid of clustering.
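A minimal sketch of one popular clustering method, k-means, in one dimension – here grouping made-up customer spending figures into two clusters (the numbers and starting centers are arbitrary):

```python
def kmeans_1d(points, centers, iters=10):
    """Tiny 1-D k-means: assign each point to its nearest center,
    then move each center to the mean of its group; repeat."""
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centers)

# Hypothetical weekly spend per customer: two natural groups
spend = [8, 9, 10, 11, 48, 50, 52]
print(kmeans_1d(spend, centers=[0.0, 100.0]))  # prints [9.5, 50.0]
```

No one told the algorithm there were “low spenders” and “high spenders” – it discovered the two groups on its own, which is exactly what makes clustering unsupervised.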
Cognitive Computing: This is one of the most important branches of AI. It helps a system mimic the human brain, differentiating between objects through data mining, linguistics, and pattern recognition.
Convolutional neural network (CNN): Similar to ANN, but a CNN is specialized for grid-like data such as images, which makes it the go-to architecture for image recognition.
Data Mining: As the name suggests, this process analyzes large volumes of existing data and mines them for new information.
Deep Learning: ANN forms the major factor here. A deep learning system stacks many layers of artificial neurons, which lets it imitate the human thought process far more accurately than a shallow network can.
Expert System: This is a type of AI focused on a particular field of study. It encodes the knowledge of human experts as rules, then applies those rules to user input to answer questions in that field.
Explainable AI (XAI): Oftentimes, an AI delivers results that are not comprehensible to the user. XAI explains those results, presenting them in a way that can be easily understood.
Few-Shot Learning: Machine learning usually requires a large amount of training data. However, there are times when we don’t have enough data to help the system learn. Few-shot learning trains a model to generalize from just a handful of labeled examples per class – often by first learning, on other tasks, how to compare and contrast examples. That is called few-shot learning!
Forward Chaining: It is a reasoning method where new conclusions are derived from existing facts. It is the exact opposite of backward chaining, where you start from a goal and work backwards to the facts that support it. Forward chaining keeps applying rules to the known facts, adding each new conclusion to the pool, till the end goal is reached.
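The “keep applying rules until nothing new fires” loop can be sketched directly (the rules and facts here are a made-up toy knowledge base):

```python
# Each rule: (set of required facts, conclusion it produces)
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
facts = {"has_feathers", "can_fly"}  # what we know to start with

changed = True
while changed:                        # repeat until no rule adds anything new
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)     # derive a new fact from existing ones
            changed = True

print(sorted(facts))
# prints ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']
```

Note how the second rule could only fire after the first one produced “is_bird” – conclusions chain forward from the facts.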
Generative Adversarial Networks (GAN): This system pits two neural networks against each other – a generator that creates fakes and a discriminator that tries to spot them. Trained this way, a GAN can, for example, create the image of a unique, almost-real human face based on existing or user-inputted characteristics.
Heuristic Search Techniques: Rather than exploring every possibility, a heuristic search uses a rule of thumb – an educated guess about which option looks most promising – to reach a good answer quickly. Think of an experienced detective following hunches instead of interviewing every suspect in the city: it trades the guarantee of a perfect answer for speed.
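A minimal sketch of greedy best-first search on a small made-up grid, where the heuristic (straight-line “Manhattan” distance to the goal) steers the search toward promising squares first:

```python
import heapq

def heuristic(cell, goal):
    """Manhattan distance: an educated guess at how far the goal is."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def best_first(start, goal, blocked, size=5):
    """Greedy best-first search: always expand the cell the heuristic likes most."""
    frontier = [(heuristic(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        x, y = cell
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None

path = best_first((0, 0), (4, 4), blocked={(2, 2)})
print(len(path))  # prints 9 – a direct route around the blocked square
```

The heuristic never guarantees the best route, but it keeps the search from wandering into dead ends.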
Image Recognition: It is the broader task of identifying objects, people, or features within a particular image – a CNN is simply one (very successful) tool for doing it.
Knowledge Engineering: This is the craft of capturing what human experts know – exactly when and where a field of land needs plowing, say – and encoding it as rules and facts a computer can reason with. It is how expert systems get their expertise.
Limited Memory: Despite the name, this has nothing to do with Random Access Memory (RAM). Limited-memory AI refers to systems – self-driving cars, for example – that use recent past observations to inform their decisions, but store that data only temporarily.
Machine Perception: Just like humans use their five senses to understand the nature of things around them, a computer uses Machine Perception to do the same.
Machine Translation: Google Translate is the perfect example here! Data from different languages is fed into the system to enable the translation feature.
Natural Language Processing (NLP): This is one of the most critical processes of AI. It enables the system to understand a piece of text from the human point of view (instead of as raw binary data).
Pattern Recognition: The meaning lies in the name itself. Pattern recognition finds regularities in data, connecting different events or examples that share certain similarities.
Recurrent Neural Network (RNN): A type of ANN with internal loops, which lets it handle sequential data such as text, speech, or time series.
Singularity: The technological singularity is a hypothetical point in the future when machine intelligence surpasses human intelligence and begins improving itself beyond our control – just like in I, Robot!
Supervised Learning: Here, the system learns from examples that come with the correct answers attached – labeled data – so it can predict the outcome for new, unseen problems; unlike unsupervised learning, where no such guidance is provided.
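Supervised learning in miniature: a 1-nearest-neighbour classifier that “learns” from labeled examples simply by remembering them (the fruit features and labels below are invented for illustration):

```python
# Labeled training data: ([weight in grams, size in cm], label)
train = [
    ([150, 7], "apple"),
    ([160, 8], "apple"),
    ([10, 2], "grape"),
    ([12, 2], "grape"),
]

def predict(features):
    """Label a new example with the label of its closest training example."""
    def dist(example):
        return sum((a - b) ** 2 for a, b in zip(example[0], features))
    return min(train, key=dist)[1]

print(predict([155, 7]))  # prints apple
print(predict([11, 2]))   # prints grape
```

The correct answers in the training set are what make this supervised; a clustering algorithm given the same numbers without labels would only find two nameless groups.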
Transfer Learning: An approach where knowledge gained on one task gives a model a head start on a related one. For instance, if a system has been trained to recognize different types of cellphones, it can learn to recognize different types of laptops with far less new data.
Weak AI: A term used to describe narrow intelligence. A weak AI can perform one particular task better than any human but is hopeless at completing other tasks.
And now you are an AI wizard or witch! Rest assured that once you have all the aforementioned terms down, you will be able to comprehend any kind of AI-related article, be it in the latest tech magazine or a decade-old manuscript!