AI, or Artificial Intelligence, was one of the buzzwords of the early twenty-first century. Somehow, though, the world seemed to find more interesting things to buzz about: smartphones, tablets, thinner laptops and cool hardware, gaming, social networking platforms, virtual reality... the list goes on. All of these kept us busy and obsessed, while the prospect of doing something genuinely cool with AI seemed a far cry, or not worth much, at least for the time being. Lately, however, it seems the gorgeous butterfly, done with its metamorphosis, is coming out of its cocoon.
So, what makes AI so spectacular and noteworthy? Before we can understand the significance of Artificial Intelligence, we must be able to distinguish it from the traditional ‘intelligence’ that has been part of computers since the beginning of their time. Computers have always been purely computational machines with little human-like intelligence at all. Most of their applications revolve around code written to perform pre-defined tasks which, one way or another, boil down to mathematical computation. An AI, however, doesn’t necessarily follow a fixed algorithm to solve the task at hand. Instead, it takes a human-like approach to any given problem: it analyzes existing sets of data and thereby trains itself, ‘learning’ from experience just like humans do. This might not sound like much at first, but as we will see, this ability to improve from experience opens doors to thousands of possibilities.
The Evolution of AI
In its earlier days, artificial intelligence started off with simple regression models (linear regression for prediction, logistic regression for classification) fitted to data. These regression models were at the core of early machine learning algorithms. However, such simple models failed at complex tasks with higher degrees of freedom, and machine learning soon hit a wall. A new sort of learning algorithm had to be introduced, one that approached the problem in a completely different way. Instead of searching for a direct relationship between input and output, the computation would be broken into hidden layers that capture a complex set of interrelations within the input data. The input has to pass through multiple layers of computation before finally reaching a reasonable prediction. Since these deep learning models loosely mimic the human brain and nervous system, such networks are often referred to as neural networks.
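To make the contrast concrete, here is a toy sketch of my own (nothing historical, just an illustration in plain NumPy): logistic regression, being a linear model, cannot learn the classic XOR pattern, while adding a single hidden layer lets the network pick it up easily.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: no single straight line separates the two classes,
# so a linear model like logistic regression cannot fit it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- plain logistic regression, trained by gradient descent ---
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = sigmoid(X @ w + b)
    grad = p - y                       # cross-entropy gradient
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()
lr_acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()

# --- same task, but with one hidden layer of 8 tanh units ---
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, 8);      b2 = 0.0
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    p = sigmoid(h @ W2 + b2)           # output layer
    d2 = p - y                         # backprop: output error
    dW2 = h.T @ d2 / len(y); db2 = d2.mean()
    d1 = np.outer(d2, W2) * (1 - h ** 2)
    dW1 = X.T @ d1 / len(y); db1 = d1.mean(axis=0)
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
h = np.tanh(X @ W1 + b1)
nn_acc = ((sigmoid(h @ W2 + b2) > 0.5) == y).mean()

print("logistic regression accuracy:", lr_acc)
print("one hidden layer accuracy:   ", nn_acc)
```

The linear model gets stuck at chance level on XOR, while the hidden layer gives the network the extra degrees of freedom to solve it perfectly.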
However, there were certain drawbacks to implementing deep neural nets. The algorithms themselves had mathematical limitations, such as the vanishing gradient problem that plagues backpropagation. On top of that, training such a learning algorithm takes an enormous amount of processing power which traditional CPUs could not possibly offer; it could take years simply to train a deep learning model. Luckily, GPUs turned out to excel at deep learning workloads. Over the last couple of years, GPUs have made large strides in both performance and power efficiency, quietly opening the door wider to the world of AI. Concurrently, a great number of scientists and researchers have worked relentlessly and delivered breakthrough after breakthrough to overcome the hurdles in the way of implementing deep learning networks.
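For the curious, the vanishing gradient problem is easy to see in a few lines. Backpropagation multiplies one derivative factor per layer, and a sigmoid's derivative never exceeds 0.25, so the gradient reaching the early layers of a deep stack shrinks exponentially. This little sketch takes the best case, a pre-activation of zero at every layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Backprop multiplies one local-derivative factor per layer.
# For a sigmoid unit, s'(z) = s(z) * (1 - s(z)) <= 0.25,
# so a chain of sigmoid layers shrinks the gradient exponentially.
z = 0.0          # pre-activation at each layer (the best case)
grad = 1.0
for depth in range(1, 31):
    s = sigmoid(z)
    grad *= s * (1.0 - s)   # multiply in this layer's derivative
    if depth in (5, 15, 30):
        print(f"depth {depth:2d}: gradient factor = {grad:.3g}")
```

Thirty layers in, the gradient factor is around 10^-18: the early layers effectively stop learning, which is why deeper nets needed new tricks (and a lot more compute) to train.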
AI around us
The most interesting thing is that we are already surrounded by AI. Most of it, however, may be termed ANI (Artificial Narrow Intelligence), expert systems or weak AI, depending on perspective. The terms converge on the same idea: AI with expertise in one particular field. The list of examples is countless. From the Google searches we run every day, to the anti-spam filters in our mail inboxes, to the recommendations we get on YouTube, AI is quietly present in our day-to-day lives and even on our handhelds. There are, however, certain aspects of AI which I cannot help but mention.
Thanks to AI, autonomous cars are no longer a thing of the future. Extensive research and experimentation is already underway, led by giants like Google, Tesla and Apple. The Google Self-Driving Car project has been around for a decent amount of time and we already know plenty about it. Tesla Autopilot, on the other hand, is rather new to the party, having been around only since late 2015. But the most interesting thing about Autopilot is that it learns to drive as its users drive their vehicles. Note that we are speaking about the ‘users’, not any single user. The whole fleet of thousands of Tesla cars cruising across the US is connected like a single entity, which lets the system learn at a spectacular pace. Tesla users reported dramatic improvements in Autopilot within weeks of its first rollout. Autopilot is still in beta, however, and there have been reports of accidents putting the program in jeopardy.
In 2001, three scientists, Eric Cornell, Wolfgang Ketterle and Carl Wieman, were jointly awarded the Nobel Prize in Physics for creating the Bose-Einstein condensate, an extremely rare state of matter which they obtained by using precise lasers to nudge and bump atoms, slowing their movement until they reached a very still state. Recently, a group of scientists at the Australian National University decided to recreate this experiment from scratch, this time handing full control of the lasers to an AI. Astonishingly, the AI took barely an hour to achieve what had taken the Nobel Prize-winning scientists years of hard work. So it is beyond any doubt how radical the role of AI might be in scientific research. The applications know no limits: medical science, modern physics, chemistry, astronomy, nuclear physics, nanotechnology... there is simply no field to which AI cannot contribute.
If we want to talk about the applications of AI across diverse sectors, we must talk about IBM Watson. Watson is essentially a supercomputer created by IBM, but the wizardry behind its versatility lies in its programming. Watson can skim through data and research material and educate itself, just like a human scholar would, and then use that knowledge to assist in the respective field. IBM has opened the platform so that developers can build on Watson. It is already being used in medical and healthcare applications such as cancer treatment, where it has proven highly reliable in the eyes of doctors and healthcare specialists. Watson is also likely to provide significant support to business and financial organizations, thanks to its ability to learn through research.
AI: Art, Music and Literature
Wait. What? Who would have thought that the world of art and culture would ever be a place for AI? Surprisingly though, AI doesn’t disappoint here either. If you caught my previous blog post, you might know about Prisma, the AI that creates artworks comparable to great paintings. My friend Shaer Ahmed took the liberty of digging deeper into the prospects of art and AI. Feel free to visit his site here and learn more.
AI has not stopped at artwork. Google Magenta is a great example of how AI can be used to create music. You won’t believe it’s an AI production when you listen to this:
AI enthusiasts have left no stone unturned; they have also tried to build an AI capable of literary work. There are already a number of AI-based news portals providing updates on sports, weather, stocks and so on from real-time data. Taking this to another level, NYU graduate Ross Goodwin created an AI that produced a screenplay dubbed ‘Sunspring’. The screenplay might seem pretty creepy, but hey, it’s just a start.
Most of the AIs we have talked about so far (except IBM Watson) were, one way or another, ANI, or Artificial Narrow Intelligence: systems built for expertise in one specific task. AGI, or Artificial General Intelligence, is a different story. By definition, AGI does not possess a fixed instruction set; it is general-purpose and might be applied to a variety of problems with no alteration at all. Luckily, a general-purpose AI has already surfaced in the league: Google DeepMind.
DeepMind Technologies was originally a British AI startup founded in September 2010. In 2014 it was acquired by Google, which renamed it Google DeepMind. The company has been working relentlessly since, combining techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.
A recent event took AI researchers’ breath away, when Google DeepMind’s general-purpose AI beat the best human player at the traditional Chinese board game Go. The project, dubbed Google AlphaGo, has to be seen as a major breakthrough in the world of machine learning, since this is the first time in history that a general-purpose AI has pulled off a feat like this. AIs had previously beaten the best human chess players (IBM’s Deep Blue being the famous example), but Go is far too complex for a machine to brute-force all the possible combinations. Instead of that traditional approach, DeepMind uses a technique called reinforcement learning, powered by neural networks that loosely mimic the human brain. AlphaGo takes the raw board position as input and processes it to find underlying patterns and combinations. Initially, it was fed streams of recorded games between strong amateur players. By only watching these games it learnt the basics of Go, but it was not particularly good at the game. The machine was then set to play against itself millions of times, and with each game it became better and better, learning from its own mistakes. This proves that a machine is capable of learning on its own and solidifies the claim of general-purpose AI.
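The self-play idea is simpler than it sounds. Here is a toy sketch of my own, nowhere near AlphaGo's deep networks: a tabular Q-learning agent plays a miniature Nim-style game against itself and, from wins and losses alone, discovers the optimal "leave a multiple of four" strategy.

```python
import random

random.seed(0)

# A pile of stones; each player removes 1-3; whoever takes the last
# stone wins.  Optimal play: always leave a multiple of 4 for the
# opponent.  A tabular Q-agent discovers this purely by self-play,
# the same idea behind AlphaGo (which uses deep networks instead of
# a lookup table, and a vastly harder game).
PILE = 12
Q = {(n, a): 0.0 for n in range(1, PILE + 1) for a in (1, 2, 3) if a <= n}
alpha, eps = 0.5, 0.2   # learning rate, exploration rate

def moves(n):
    return [a for a in (1, 2, 3) if a <= n]

def pick(n):
    # epsilon-greedy: mostly play the best known move, sometimes explore
    if random.random() < eps:
        return random.choice(moves(n))
    return max(moves(n), key=lambda a: Q[(n, a)])

for _ in range(20000):
    n = PILE
    while n > 0:
        a = pick(n)
        n2 = n - a
        if n2 == 0:
            target = 1.0   # we took the last stone: a win
        else:
            # the opponent moves next, so our value is minus their best
            target = -max(Q[(n2, b)] for b in moves(n2))
        Q[(n, a)] += alpha * (target - Q[(n, a)])
        n = n2

# Greedy policy after training: take (pile % 4) stones when possible.
best = lambda n: max(moves(n), key=lambda a: Q[(n, a)])
print([best(n) for n in (5, 6, 7, 9, 10, 11)])
```

Nobody told the agent the winning rule; it emerges from millions of self-played games, each win or loss nudging the value table toward optimal play, which is exactly the "learning from its own mistakes" described above.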
Artificial intelligence is by far the most significant feat of humanity in the era of information technology. However, the coin has two sides. It is true that artificial intelligence opens the world up to new possibilities; AI can offer a helping hand to excel and improve in every way humanity can or cannot imagine. But the idea of a general-purpose intelligence capable of improving itself poses a threat to humanity as well. Any self-improving system runs the risk of entering a state of accelerating returns, as some researchers predict, which might lead to a super-intelligence greater than us human beings ourselves. We are far from the point where we can judge whether this is probable or not, but in the meantime I would agree with Dagogo Altraide in saying that I am ‘Cautiously Optimistic’ about AI. Whether or not AI might pose a threat to humanity is beyond the scope of this article; maybe that’s a story for another day.