Best Practices for Selecting and Implementing Machine Learning Systems
Walk into any executive-level meeting these days, and you’ll hear plenty of chatter about machine learning (ML) and artificial intelligence (AI), but what do they really mean? ML is complex because it draws on concepts from multiple areas, such as mathematics, computer science, statistics, and logic. These are all highly technical and seldom explained in a simple manner. Let us introduce a few key ML and AI terms, describe how people apply these techniques, and share some best practices and challenges for implementing ML systems.
What are some basic terms you need to know?
ML refers to artificially intelligent, self-learning systems that use data mining, pattern recognition, and natural language processing to mimic human reasoning. Such a system includes a training phase in which it learns from training data. For instance, an ML algorithm can learn to predict health or disease by analyzing data generated by medical specialists.
Artificial intelligence is the study of how to make computers do things that people are better at, or would be better at if they could apply what they do to very large amounts of data without making mistakes.
Supervised learning systems leverage training data consisting of input values and their associated output values, also called labeled data. For example, a machine learning system can be trained using a set of images of cats and dogs, and when given a new image, the system can then predict whether it is a cat or a dog. Similarly, a supervised learning system can leverage logistic regression techniques and historical data to predict the value of a variable such as future market indices.
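To make the idea concrete, here is a minimal sketch of supervised learning with logistic regression, hand-rolled in plain Python. The training data, learning rate, and decision threshold are all illustrative assumptions, not values from any particular system:

```python
import math

def train_logistic(labeled_data, lr=0.1, epochs=1000):
    """Fit a one-feature logistic model y ~ sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in labeled_data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                     # gradient step for the weight
            b -= lr * (p - y)                         # gradient step for the bias
    return w, b

def predict(w, b, x):
    """Classify a new input using the learned weights (0.5 decision threshold)."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Labeled training data: pairs of (input value, known output label)
labeled = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]
w, b = train_logistic(labeled)
```

After training, `predict(w, b, 0.5)` returns 0 and `predict(w, b, 5.0)` returns 1: the system has generalized the boundary between the two labels from the examples it was shown.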
Unsupervised learning systems have an ability to learn and figure things out from unlabeled data. For example, an unsupervised machine learning system can learn how to group (also called clustering) a series of news articles under different categories without explicitly being told how to do it.
Reinforcement learning systems have the ability to not only learn from training data but also improve their performance by processing external feedback. For example, the system that selects your favorite music list uses your feedback from previous choices and improves its selections on an ongoing basis.
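A simple way to see learning from feedback is an epsilon-greedy bandit: the system mostly picks its best-known option but occasionally explores, and it refines its estimates from each round of (simulated) user feedback. The reward probabilities, epsilon value, and step count below are illustrative assumptions:

```python
import random

def choose(values, eps, rng):
    """Epsilon-greedy: usually exploit the best-known option, sometimes explore."""
    if rng.random() < eps:
        return rng.randrange(len(values))
    return max(range(len(values)), key=values.__getitem__)

rng = random.Random(0)                 # fixed seed so the sketch is reproducible
true_reward = [0.2, 0.8, 0.5]          # hidden "how much the user likes" each playlist
values, counts = [0.0] * 3, [0] * 3
for _ in range(2000):
    a = choose(values, eps=0.1, rng=rng)
    reward = 1 if rng.random() < true_reward[a] else 0  # simulated user feedback
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]       # incremental average update

best = max(range(3), key=values.__getitem__)
```

Over time the estimated values converge toward the true preferences, so `best` settles on option 1, the playlist the simulated user actually likes most.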
Neural networks use supervised learning and were originally developed to emulate the human brain, which uses an extremely large network of neurons to process information. A simple neural network consists of a single layer of neurons that connects input data to output data.
Deep learning systems are neural networks with many layers, and the learning performed by them is called “deep learning.”
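A single-layer network as described above can be sketched as one neuron: a weighted sum of the inputs passed through a sigmoid activation, trained by gradient descent. Here it learns the logical OR function; the training data and hyperparameters are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, lr=0.5, epochs=5000):
    """Train one neuron (two weights plus a bias) on labeled input/output pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)  # the neuron's output
            err = out - y                             # error against the label
            w[0] -= lr * err * x1                     # adjust each connection weight
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Labeled training data for logical OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
```

Deep learning systems stack many such layers of neurons, which lets them represent far more complex functions than this single-neuron sketch can.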
What are some successful applications of ML?
Now that you have some familiarity with ML and AI, let us look at a few examples of successful applications of these technologies:
• Customer profile analysis to understand and retain the most loyal customers, as well as target new customers
• Fraud and anomaly detection using clustering techniques
• Medical diagnosis, self-driving vehicles, and facial recognition using deep learning
• Recommender systems using reinforcement learning
What are some best practices for implementing ML systems?
• A successful implementation of an ML system requires a common understanding of the objectives between business and technology teams. There must be an agreed-upon set of metrics to evaluate and measure the performance of the system.
• Special attention should be paid if the machine learning systems are expected to provide an explanation of their decision-making process. Typically, machine learning systems are not very good at providing this detail. Experience suggests that in such cases, it is good to include a human expert in the overall process, so that the ML system plays a support role in identifying the best possible options, but the human experts make the final decision.
• There must be proper guidelines for when ML systems should be retrained with new data. Depending on the domain, the training data used for building an ML system may not continue to be relevant after a period of time due to changes in other external factors. Therefore, it is important to have well-defined criteria for retraining the system with new data. This can be done as frequently as daily (e.g., setting price values) or after specific events (e.g., when new products or versions are introduced into the market). It is important to ensure appropriate support from the underlying IT and governance processes.
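One way to make retraining criteria concrete is a simple performance-drift check: retrain when live accuracy slips meaningfully below the accuracy the model was validated at. The function name, thresholds, and accuracy figures below are illustrative assumptions, not prescriptions:

```python
def should_retrain(recent_accuracy, baseline_accuracy, tolerance=0.05):
    """Flag retraining when production accuracy drifts below the validated baseline
    by more than an agreed tolerance. Threshold values here are illustrative."""
    return recent_accuracy < baseline_accuracy - tolerance

# Model was validated at 92% accuracy; recent production accuracy is 85%
flag = should_retrain(0.85, 0.92)  # triggers retraining
ok = should_retrain(0.91, 0.92)    # within tolerance, no retraining needed
```

In practice the baseline, tolerance, and evaluation window would come from the agreed-upon metrics the business and technology teams defined up front.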
There are also potential organizational challenges worth watching:
• A common challenge that many enterprises face is lack of a cohesive leadership to drive ML and AI practices. Often, you find too many leaders racing against each other. It is important to have an AI strategy at an organization level that is aligned around enterprise data strategy, information security, governance, and compliance requirements.
• The next challenge is selecting the right use-case to implement. A few ideas to explore include looking for areas with high revenue but low efficiency, or business processes experiencing common errors. Additional factors to consider include the availability of relevant data, business champions, and a willingness to learn and adapt.
• The demand for AI talent will always be high compared to the availability of skilled resources within an organization. Consider bringing in outside talent by partnering with schools or universities, developing training classes and courses, and encouraging on-the-job training.
Today, there is a lot of buzz around ML and AI, and many C-level executives across many industries are very interested in building out a successful practice for these technologies. If you are just beginning your ML and AI journey, don’t fret; you’re not alone. Many of your colleagues are also just entering this burgeoning field. As you move forward with introducing ML and AI to your organization, consider starting small, identifying and building a few use-cases, and learning and growing your technologies from there.