Perhaps you’ve heard of neural networks: software simulations of interconnected neurons that help a computer learn, recognize patterns, and make decisions in much the same way a human does.
Well, symbolic AI, also known as Good Old-Fashioned AI (GOFAI), was the dominant paradigm in the AI community from the post-war era until the late 1980s. As the name suggests, symbolic AI rests on the idea of manipulating symbols to arrive at a solution.
Building intelligent machines has always been a fascination for the human race, but throughout that history we’ve seen many approaches rise and fall. Neural networks are among AI’s best-known breakthroughs today, yet symbolic AI once played the same role. And although neural networks promise much for the future, many researchers believe that for AI to advance further, it must be able to understand the ‘why’ and the ‘what’ along with cause-and-effect relationships.
Put plainly, current deep learning is not perfect: it is hampered by a lack of model interpretability. This is one major reason researchers are keen to explore a new avenue in AI – the unification of neural networks and symbolic AI.
Neuro-symbolic AI combines deep learning architectures with symbolic reasoning techniques to identify objects in a video, analyze their movements, and reason about their behavior. For instance, a neural network can identify an object’s color, shape, and size. Apply symbolic AI on top of that, and the system can go further and tell you derived properties of the object, such as its area or volume.
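The division of labor described above can be sketched in a few lines. This is a minimal illustration, not any particular system: `detect_shape` is a hypothetical stand-in for a neural network’s perceptual output, and the symbolic layer derives new properties (area, volume) from that output using explicit rules.

```python
import math

def detect_shape(image):
    # Placeholder for a neural network: in a real system this would be a
    # trained model mapping pixels to perceptual attributes.
    return {"shape": "sphere", "color": "red", "radius_cm": 2.0}

def symbolic_properties(percept):
    # Symbolic layer: explicit rules derive properties the network never
    # saw directly, here from geometry formulas keyed on the shape symbol.
    if percept["shape"] == "sphere":
        r = percept["radius_cm"]
        return {
            "surface_area_cm2": 4 * math.pi * r ** 2,
            "volume_cm3": (4 / 3) * math.pi * r ** 3,
        }
    raise ValueError("no symbolic rule for shape: " + percept["shape"])

percept = detect_shape(image=None)   # perception step (stubbed)
props = symbolic_properties(percept) # reasoning step
print(round(props["volume_cm3"], 1)) # volume of a 2 cm-radius sphere
```

The point of the sketch is the interface: the network’s job ends at producing symbols (“sphere”, a radius), and everything after that is transparent, inspectable rule application.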
The first seeds of intelligent machines were planted in 1956 by Marvin Minsky, John McCarthy, Nathan Rochester, and Claude Shannon at the Dartmouth Conference. So concepts such as deep learning, artificial neural networks, and even neuro-symbolic AI aren’t new. Scientists have been working to teach computers to model the human brain for a long time; only now, with the technology matured, are we seeing AI systems become practically useful.
Yet irrespective of the advances in deep learning, it is still far from replicating a human brain. It is undeniable that machines can now identify skin cancer better than doctors, but many flaws remain.
The flaw lies in the deep learning algorithms and neural networks themselves: they are still too narrow.
That is why we need to focus on the middle ground: a broad AI that can multitask and cover multiple domains, reading data from multiple sources whether structured or unstructured. That is what it will take to enter the era of neuro-symbolic AI.
IBM and MIT have formed a collaboration, planning to invest USD 250 million over ten years to advance fundamental AI research, says David Cox, head of the MIT-IBM Watson AI Lab. A critical avenue of that research is neuro-symbolic AI.
Here’s what Cox told ZME Science:
“A neuro-symbolic AI system combines neural networks/deep learning with ideas from symbolic AI. A neural network is a special kind of machine learning algorithm that maps from inputs (like an image of an apple) to outputs (like the label “apple”, in the case of a neural network that recognizes objects). Symbolic AI is different; for instance, it provides a way to express all the knowledge we have about apples: an apple has parts (a stem and a body), it has properties like its color, it has an origin (it comes from an apple tree), and so on.”
“Symbolic AI allows you to use logic to reason about entities and their properties and relationships. Neuro-symbolic systems combine these two kinds of AI, using neural networks to bridge from the messiness of the real world to the world of symbols, and the two kinds of AI in many ways complement each other’s strengths and weaknesses. I think that any meaningful step toward general AI will have to include symbols or symbol-like representations.”
When you combine both approaches, you get a system in which neural pattern recognition allows it to see, while the symbolic part allows it to reason logically about the objects, the symbols, and the relationships among them.
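Cox’s apple example can be made concrete with a toy sketch. Everything here is illustrative and assumed, not drawn from any real system: `neural_classify` stands in for a trained network that bridges raw input to a symbol, and a tiny hand-written knowledge base then reasons over that symbol’s properties and relations.

```python
def neural_classify(pixels):
    # Stand-in for a trained neural network mapping an image to a symbol.
    return "apple"

# Symbolic knowledge base: facts about entities, their parts, properties,
# and relations, in the spirit of Cox's apple example.
KB = {
    "apple": {
        "has_part": ["stem", "body"],
        "color": "red",
        "origin": "apple_tree",
    },
    "apple_tree": {"is_a": "plant"},
}

def query(entity, relation):
    # Look up one relation of an entity in the knowledge base.
    return KB.get(entity, {}).get(relation)

symbol = neural_classify(pixels=None)          # perception -> symbol
origin = query(symbol, "origin")               # symbolic lookup
kind = query(origin, "is_a")                   # chaining relations
print(symbol, origin, kind)
```

The neural half handles the messy mapping from pixels to “apple”; the symbolic half lets the system answer questions the network was never trained on, such as what an apple grows from and what kind of thing that is, simply by chaining relations in the knowledge base.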
If it succeeds, neuro-symbolic AI has the potential to go beyond what deep learning is currently capable of.
Will neuro-symbolic AI be the next evolution in artificial intelligence?