As new opportunities arise in our markets, retaining customers has become more important than ever, and competitors are embracing technology to gain an edge. As business leaders, we are being pushed to find innovative ways to secure even the smallest of competitive advantages.
Terms such as Machine Learning, Artificial Intelligence, Deep Learning, Robotics, IoT, and Cloud are loosely thrown around in leadership sessions as “silver bullets” that will ultimately provide us with a competitive advantage.
The cornerstone of Artificial Intelligence is Machine Learning, where historical data (whatever data an organization has) is used to teach sophisticated algorithms, or machines, to solve everyday problems, providing businesses with insights into what the future may hold. Sounds like science fiction? Not as much as you would think: these “machines” are readily available to the public in tools such as Scikit-learn, TensorFlow, Jupyter Notebook, Python, and R.
Overeagerness to gain a competitive advantage can be a disaster waiting to happen. Before you go down the machine learning rabbit hole, be forewarned of the potential landmines: transparency and governance can save time, money, and possibly even your employment status.
When making Machine Learning an integral part of the decision-making process, consider the following:
Explainability of the Machine Learning algorithms
A major issue with Machine Learning is that some models are widely acknowledged to be unexplainable, while for others you would need a Ph.D. to decipher the inner workings of the algorithms. But that is not the case with all models; knowing what can be explained is often the best place to start. Using complex neural networks and deep learning methods as a starting point may be like flying a Boeing 777 when one only has a Vespa license.
So start with models that can be explained and that can yield great results from the outset. Don’t try to dive too deep too soon; that will come with time, as you grow comfortable with your data and the outcomes required.
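As a minimal sketch of such an explainable starting point, consider a logistic regression in Scikit-learn (one of the tools named above): each feature gets a coefficient that a leader can read and interrogate, unlike the weights of a deep network. The data set and the feature names here are synthetic, purely for illustration.

```python
# A logistic regression as an explainable first model: every feature
# contributes one readable coefficient. Data and names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=3, n_redundant=1,
                           random_state=42)
feature_names = ["tenure", "monthly_spend", "complaints", "logins"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient shows whether a feature pushes the prediction up or
# down, and by roughly how much -- something that can be explained in a
# leadership session.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A model like this will not always match a deep network on raw accuracy, but it gives a defensible baseline whose behaviour the business can actually discuss.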
Limited visibility into training data sets
Explainability is one thing, but trust in the model is critical. The strength of a model depends significantly on its training data. Good, clean, well-labeled data will result in good, well-performing models. That is the theory, and a good principle to follow, but it is not always the case.
We need to know the whole life cycle of how data is acquired, managed, manipulated, and stored. Trust is a big concern here: does the data fit what you want to predict, and do you believe it is representative, of at least the standard of a properly drawn random sample?
Having visibility of the entire process is of utmost importance!
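Some of that visibility can be earned with very simple checks before any model is trained: missing values, duplicated records, and label balance. The sketch below uses a tiny hypothetical churn table; the column names and values are invented for illustration.

```python
# Basic sanity checks on training data before trusting what is built
# from it. The DataFrame is a hypothetical extract.
import pandas as pd

df = pd.DataFrame({
    "age":     [34, 51, None, 29, 51],
    "spend":   [120.0, 80.5, 99.9, 120.0, 80.5],
    "churned": [0, 1, 0, 0, 1],
})

missing = df.isna().sum()                # gaps that must be explained, not ignored
duplicates = int(df.duplicated().sum())  # possible double-counted records
balance = df["churned"].value_counts(normalize=True)  # label skew

print(missing)
print("duplicate rows:", duplicates)
print(balance)
```

None of this is sophisticated, and that is the point: these questions can be asked, and answered, by anyone in the room, not only the data scientists.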
Lack of visibility into methods of data selection
Having access to your data is one thing, and although it is very important, deciding which features and dimensions matter most is where the magic happens.
Our data scientists and actuaries tinker day in and day out, looking for the optimal way to predict the future. But do we really know how they arrive at the answer that will change how we operate? Is it not crucial to know the process of finding the holy grail?
Little to no understanding of the bias in training data sets
Adding to the woes of bad data and poorly selected features and dimensions, we create the possibility of models being biased toward incorrect predictions, through what is known as over- or underfitting. Bias can be introduced in the setup of the features, in the data set itself, or even by human involvement.
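Overfitting, at least, is easy to demonstrate: compare a model's accuracy on the data it was trained on against data it has never seen. A large gap means it memorised noise rather than learning a general pattern. This sketch, on synthetic data, contrasts an unconstrained decision tree with a deliberately limited one.

```python
# Spotting overfitting by comparing training accuracy with held-out
# accuracy. Data is synthetic; the depth limit is an illustrative choice.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorise the training set perfectly...
deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while a depth-limited tree is forced to generalise.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("deep tree    train/test:",
      deep.score(X_train, y_train), deep.score(X_test, y_test))
print("shallow tree train/test:",
      shallow.score(X_train, y_train), shallow.score(X_test, y_test))
```

The deep tree scores perfectly on data it has seen; the honest number is the held-out score, and that is the one to ask your teams for.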
Very little visibility into how models are versioned
The data, and the conditions around it, change continually, whether due to seasonality, micro- and macroeconomics, fads, or even global catastrophic events such as the Covid-19 pandemic. Our data scientists should be revisiting tried and tested models to ensure they are still accurate and relevant. As models drift away from peak accuracy, they need to be tested and revised.
In the real world, new and existing models get pushed back into production haphazardly, creating further distrust with the business. Is there true confidence that the right model has been deployed, and will human error slip into the equation? Without a doubt, it will.
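Even lightweight governance removes much of that doubt. A minimal sketch, with hypothetical names and scores: every candidate model is logged with a version, a fingerprint of its training data, and a validation score, and it is only promoted to production when it beats the incumbent.

```python
# A toy model registry: every candidate is recorded for the audit trail,
# but only a better-scoring candidate replaces the production model.
# Names, scores, and data extracts are hypothetical.
import hashlib
from datetime import date

registry = []      # audit trail of every candidate, deployed or not
production = None  # the one model business decisions rely on

def register(name: str, validation_score: float, training_data: bytes):
    """Record a candidate model; promote it only if it beats production."""
    global production
    entry = {
        "version": f"{name}-v{len(registry) + 1}",
        "score": validation_score,
        "data_hash": hashlib.sha256(training_data).hexdigest()[:12],
        "date": date.today().isoformat(),
    }
    registry.append(entry)
    if production is None or entry["score"] > production["score"]:
        production = entry
    return entry

register("churn", 0.81, b"q1-extract")
register("churn", 0.78, b"q2-extract")  # worse: logged, but not deployed

print("production:", production["version"], production["score"])
```

Real teams would reach for a proper tool for this, but the principle is the same at any scale: no model reaches production without a version, a data fingerprint, and a score someone signed off on.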
As business leaders, we need to be the masters of our own destiny, not leaving the predicted future to a blind roll of the dice. My mind is blown on a daily basis by the possibilities of what these machines can do for us. We are truly in a revolutionary time where so much will be achieved. But remember: if not approached properly, the disadvantages could far outweigh the opportunities.