The paradox with artificial intelligence (AI) and machine learning (ML) is that despite (or because of) the hype, it’s still hard to find practitioners who understand how to use those approaches. Perhaps the best way to characterize the AI paradox is “so near yet so far.”
AI is tantalizingly accessible for a variety of reasons.
First, there’s the data. With the explosion of big data, there’s ample data to go around, enabling machines to do the learning. Unlike the old days, when data was sparse and storage was expensive, we don’t have to rely on static rules devised by human experts.
Then there’s the infrastructure. Thanks to the cloud, there’s ample compute and storage available for the right price — you no longer have to ante up for a dedicated HPC grid.
And then finally, there’s the software. You no longer have to write the algorithms from scratch. The open source community offers a wealth of libraries.
So what could go wrong?
While we speak of intelligence that is artificial, and learning and cognition that is done by machine, humans remain very much part of the equation. Sure, AI programs can now create alien worlds on their own, but to solve useful problems, it still takes a person to ensure that the model is on target. Not surprisingly, skilled people remain the most elusive commodity. Yesterday, data scientists were in short supply. Actually, correct that: even as colleges and universities turn out new hordes of data scientists, that’s still the case. They are still being snapped up by the Global 2000. Only on this go-round, most data scientists are adding AI and ML to their calling cards.
On the outside looking in, however, is a large population of developers who would love to get a piece of the action. And just as nature abhors a vacuum, surprise, surprise, there is a growing body of services opening the doors to machine learning for developers with less formal knowledge of modeling techniques.
Amazon SageMaker, Microsoft Azure Machine Learning Studio, and Google Cloud AutoML each provide curated environments that are, essentially, IDEs for basic machine learning, updated for a world where Jupyter notebooks have become the tabula rasa. While these services simplify ML, developers still need to know the difference between classification, clustering, regression, and other basic algorithms.
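A quick way to keep those algorithm families straight is to see each one in miniature. The toy functions below are illustrative sketches in plain Python, not production code, and the data in the usage examples is invented:

```python
# Classification: predict a discrete label for a new point.
# Here, the "model" is simply the nearest labeled training point.
def classify(point, labeled_points):
    """labeled_points: list of ((x, y), label) pairs."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled_points, key=lambda p: dist2(point, p[0]))[1]

# Regression: predict a continuous value.
# Here, ordinary least squares fits a line to (x, y) observations.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

# Clustering: group unlabeled points without any target at all.
# Here, one assignment step of k-means: each point joins its
# nearest centroid.
def assign_clusters(points, centroids):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return [min(range(len(centroids)),
                key=lambda i: dist2(p, centroids[i]))
            for p in points]
```

The point of the contrast: classification and regression learn from labeled examples (discrete versus continuous targets), while clustering finds structure with no labels at all.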
For instance, SageMaker keeps you within a walled garden of roughly a dozen prebuilt algorithms; if you want to get ambitious, you can run them using the MXNet or TensorFlow deep learning frameworks. You deploy the models with a single click on a serverless infrastructure where the service automatically scales the cluster. Google’s service is at an earlier stage; it is more ambitious in being oriented toward neural nets, but for now, only machine vision models are available.
Azure Machine Learning Studio looks more like a classic IDE, which shouldn’t be surprising, as it’s from the same company that brought Visual Studio into the world. Like SageMaker, the Azure offering comes with a library of prebuilt algorithms that you build into an experiment. Alternatively, you can go to the Azure AI Gallery to take advantage of experiments already shared by other members of the community. Experiments are organized as self-contained modules that contain the algorithms. Developers go through steps such as preparing the data, defining the features (the model’s input variables), and choosing the algorithms. Then the fun starts when you drop a data set onto the canvas and run the experiment, initially to train the model, and then to apply it to the data set where you want the predictive results.
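The same prepare / define-features / choose-algorithm / train / score sequence maps directly onto code. A minimal sketch in plain Python, where the housing records and the trivial price-per-square-foot “model” are made up for illustration:

```python
# Raw input, including one incomplete record.
raw = [
    {"sqft": 1000, "rooms": 3, "price": 200},
    {"sqft": 1500, "rooms": 4, "price": 300},
    {"sqft": None, "rooms": 2, "price": 150},  # missing a value
    {"sqft": 2000, "rooms": 5, "price": 400},
]

# Step 1: prepare the data (drop incomplete records).
clean = [r for r in raw if all(v is not None for v in r.values())]

# Step 2: define the features and the target.
features = [r["sqft"] for r in clean]
target = [r["price"] for r in clean]

# Steps 3-4: choose an algorithm and train it. Here the "model" is
# just the average price per square foot over the training set.
rate = sum(target) / sum(features)

# Step 5: apply the trained model to score new data.
def predict(sqft):
    return rate * sqft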
The idea of democratizing analytics remains just as critical for big data and machine learning today as it was during the onset of modern BI 20 years ago. It follows in the tradition of the democratization of application development that IDEs like Visual Studio ushered in years ago, which broadened the field beyond computer scientists to music, philosophy, and English majors. Arguably, democratization succeeded with AppDev, but it took another decade-plus with BI until the world got its Tableau data extracts.
On one hand, the price paid for democratization was tangles of undocumented spaghetti code and a proliferation of BI visualizations that raised questions about data governance, currency, and consistency. And for machine learning, just as for data science before it, there are penalties to pay when you choose the wrong data sets or identify the wrong signals. For ML, there’s an added challenge: data sets, and the models trained on them, naturally drift (yes, there are tools for that).
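Drift-monitoring tools essentially compare the distribution a model was trained on against what it sees in production. A deliberately naive sketch of the idea, with invented data and threshold (real tools use proper statistical tests):

```python
import statistics

def drift_score(train_values, live_values):
    """How many training standard deviations the live mean has
    shifted away from the training mean for one feature."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

# Usage: flag a feature for retraining once the score crosses a
# chosen threshold.
train = [10, 11, 9, 10, 12, 10, 11, 9]
needs_retraining = drift_score(train, [17, 18, 19, 18]) > 3
```

In practice, a shifted live distribution like this one would trigger an alert or an automatic retraining job rather than a boolean flag.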
But the gates were opened for a reason. For AppDev, there just weren’t enough formally schooled programmers to satisfy the appetite for a new generation of distributed desktop and web applications. And even if there had been enough mainframe programmers to go around, they probably would not have understood the nature of these new distributed apps, or the bottom-up, rapid (and later agile) development processes required to build them.
For BI, the emergence of self-service visualization reflected the need for lines of business to become more agile in the face of changing markets. For ML, it is still early days to judge the costs of poorly designed models. Yes, there has certainly been plenty of baggage around the earlier notion of citizen data scientists. And if there is an equivalent for machine learning, these so-called citizens are not going to be tackling development of the highly sophisticated models for which Global 2000 companies are paying data science grads top dollar. But if experience with AppDev and BI is any indication, there may be no choice but to open the gates for more modest, well-bounded, everyday predictive analytics, text or image recognition, or language translation problems.