Recent developments in artificial intelligence have been under-appreciated by industry, partly because of unclear definitions and a limited understanding of machine learning. The terms machine learning, deep learning, neural networks, predictive analytics, big data analytics and artificial intelligence are used interchangeably, leading to widespread confusion. In addition, the discussion around artificial intelligence has long been led astray by Hollywood and news media looking for exciting stories. Nothing is more exciting than a machines-versus-humans story, and so important developments in artificial intelligence are lost in Terminator fantasies. Developments in AI algorithms are often hard to explain, and because the term has been misused over the years, the public associates AI with a humanoid robot of the future and dismisses stories of incremental progress in AI. This is a huge mistake.
Artificial intelligence has been improving software for a long time, often under the name of machine learning: your Google searches, Amazon recommendations and Siri requests all rely on it. When an algorithm is successful, it is embedded in software and disappears from view, making the software smarter over time. What, then, has changed to make AI so important today? The answer is threefold: big data, increased computation and parallel computing.
Vast amounts of data are being collected and stored at ever-decreasing cost, providing a wealth of training data for AI algorithms. With the rise of cloud computing and the continued progress of Moore’s Law, big data can be processed more cheaply and quickly than ever before. Finally, graphics processing units (GPUs) and application-specific integrated circuits (ASICs) provide a far more efficient way of running learning algorithms, which tended to be slow and inefficient on traditional central processing units (CPUs).
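To see why parallel hardware fits learning algorithms so well, consider that most of their work is dense linear algebra. A minimal illustrative sketch (not from the source, using NumPy): every cell of a matrix product is an independent dot product, so the whole computation can be spread across the thousands of cores in a GPU rather than run serially on a CPU.

```python
# Illustrative sketch: the core workload of learning algorithms is
# matrix arithmetic, which is naturally parallel.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

# Naive triple loop: each of the 64*64 output cells is an independent
# dot product, so in principle all could be computed simultaneously.
c_loop = np.zeros((64, 64))
for i in range(64):
    for j in range(64):
        for k in range(64):
            c_loop[i, j] += a[i, k] * b[k, j]

# Vectorised form: a single call that an optimised BLAS library
# (or a GPU) executes in parallel, with identical results.
c_vec = a @ b

assert np.allclose(c_loop, c_vec)
```

The serial loop and the vectorised call compute the same answer; the difference is that the latter exposes the parallelism that GPUs and ASICs are built to exploit.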
These three trends have provided a cheap innovation platform for developers, who are now using artificial intelligence algorithms to make progress across a range of industries. The big internet companies such as Google, Facebook and Amazon have become multi-billion-dollar businesses on the back of AI as a competitive advantage in search, social and retail. These AI techniques are now cheap enough and pervasive enough to be applied in any industry, and IBM, Google and Microsoft even offer them to any company in a package called ML-as-a-service (MLaaS). The application of AI to specific market needs will form the basis of the next billion-dollar companies. Already we can see Uber in logistics, Airbnb in hospitality and Palantir in data analysis. Human Longevity Inc, for example, is attempting to pull together genomic, microbiome, metabolome and physiological data from individuals and run machine learning algorithms over it to better understand the human ageing process.
Crucially, artificial intelligence is an exponential technology. Performance per dollar will continue to double roughly every 18 months, the costs of storage and computing will keep falling, and new neuromorphic chip designs will allow more efficient machine learning computation. Extrapolating Moore’s Law even further, Ray Kurzweil, Director of Engineering at Google, estimates that by 2023 it will cost $1,000 for a computer with 20 petaFLOPS, roughly the same processing power as the human brain.
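The scale of that compounding is easy to underestimate. A small worked sketch (illustrative only, assuming the 18-month doubling period stated above) shows how quickly it adds up:

```python
# Illustrative sketch: compound growth under a Moore's-Law-style
# doubling of performance per dollar every 18 months.

def growth_factor(years, doubling_period_years=1.5):
    """Total performance-per-dollar multiplier after `years`."""
    return 2 ** (years / doubling_period_years)

# Over a single decade, 18-month doublings compound to roughly
# a hundred-fold gain.
decade = growth_factor(10)
print(f"10-year gain: about {decade:.0f}x")
```

A steady 18-month doubling therefore turns into roughly two orders of magnitude per decade, which is why businesses extrapolating linearly from today's capabilities are caught out.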
This exponential progress will creep up on businesses that are not paying attention, and it will not be confined to the technology industry: AI will fundamentally reshape every industry. AI is the very definition of a disruptive technology.