Reaping the rewards of explainable AI

AI chaos

There is no doubt that AI is here and that it can solve a great many problems. However, accessibility is a serious obstacle, frustrating the democratisation of the technology.

Even for a data scientist, the number of ways of working with AI is bewildering, with thousands of opinions and models touted each week as the “right” way. This is confusing and chaotic for professionals in the field, let alone for problem owners whose professional expertise does not extend to AI.

The data game

Data’s role in organisations is becoming increasingly important, with some even calling it the new oil. But it’s not just those in technical positions who will have to embrace data and the outcomes it can provide. Those with little technical experience – executive leaders or sales teams, for example – will have to use this asset to make the best and most informed decisions possible, becoming citizen data scientists.

It is therefore imperative that data science – the process of extracting knowledge from data – is made accessible to every business user. This is where explainable AI comes into play.

Explainable AI

For too long, AI has been a black-box technology, where the rationale behind an algorithm’s decisions is not explained. Such services have delivered successful results, but they are far from ideal.

To take the next step in adoption, explainable AI must be integrated into platforms and solutions. Only then can organisations and consumers fully trust this technology and let it help drive their decisions.

Data scientists and problem owners in organisations shouldn’t fully embrace an AI platform’s logic if they can’t view the data used or understand the steps the platform took to arrive at the solution. It would be irresponsible, even unethical, to allow a machine learning platform to make business decisions without first explaining the rationale behind them.
From months to minutes

If a business wants to solve a data-centric problem, it currently has to hire a data consultant or a team of data scientists to trawl through an ever-growing expanse of unstructured datasets for a solution. This could take days, weeks or, more likely, months. A mid-sized organisation may well have access to many petabytes of data spread across many forms, such as text, images or reports.

This way of working is ill-suited to the digital age. It is time-consuming, costly and, above all, avoidable. The whole process could be automated, with the pipeline of discovery executed in minutes rather than months.

Smart AI, or automation, has a telling advantage over an expert human. These systems compute billions of times faster. They don’t get bored with the number crunching, and they don’t go home for an evening off. They are computers that, in a robust and principled way, work through trillions of computational steps in the time it would take a human to complete just a few. The space of solutions that Mind Foundry’s models can investigate, refine or reject is vastly larger than any human data scientist could explore, no matter how expert and experienced they may be.

But don’t write off humans just yet: they remain a vital component. Any algorithm can deploy a solution based on the available data, but the human is needed to draw conclusions, hone the datasets and present the findings to their teams. Most importantly, the human takes the results from the solution and puts them in the context of the business, asking how the information can help solve a particular business problem.

Augmenting human intelligence

AI should not replace humans: the two should work together. It’s all about augmenting human intelligence. Society is not in an era where machines are as smart as people … yet.

If you’re a data scientist, solutions that automate the data curation process can accelerate investigations and let you solve problems much faster. At the other extreme, where individuals are not technically proficient and don’t want to use a service as a hands-on tool, smart solutions should be able to provide advisory recommendations that lead to a functional deployed model. This applies to decision makers, who can use smart AI to inform their choices before committing to a new product or investing in another company. Such people are experts in their problem domains and define the success criteria, but they can always get better. By combining their expertise with the technology, they can uncover hidden, never-before-seen insights in the analysed data, leading to new ideas and opportunities.

At the same time, if the human is made aware of the key factors that led to the machine’s solution, then they can start to understand what data is important and justify their eventual decisions to the rest of their department or company.

It’s all about transparency: engaging the user at every step by explaining the path to the solution and why the AI came to a particular conclusion – what some term “honest computing”. This is just as valuable as providing the final answer, and it is relevant all the way from the decision makers at the top of an organisation to the data scientists and the non-technical problem owners in customer-facing roles.
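For readers who want a concrete sense of what “explaining the key factors” can look like in practice, the sketch below uses permutation importance: shuffle one input at a time and measure how much the model’s accuracy suffers. It is a generic illustration built on scikit-learn and a public demo dataset, not a description of Mind Foundry’s platform.

# A minimal, illustrative sketch (not Mind Foundry's method): surfacing the
# "key factors" behind a model's predictions with permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A public demo dataset stands in for a real business problem.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops:
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

A ranked list like this gives the problem owner something to interrogate: do the factors the model leans on match their domain knowledge, and can they justify a decision built on them?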

Man and machine

There are many platforms that mercilessly exploit our silicon friends to perform scalable data analytics. But couple that computational power with a belief in continued human interaction and an emphasis on understanding the key factors that lead to the right decisions, and solutions like Mind Foundry begin to stand out.

In today’s business environment, nobody wants an unexplainable solution; users need to understand it at a forensic level. To usher in the next stage of AI and hit big with data, organisations need solutions that are explainable at every level.