Unblocking the supply chain to make data useful

The ability to put data to work for your business is the difference between success and failure, but primitive ways of unlocking data value are holding organisations back

Data’s common description as the ‘new oil’ was initially meant to accentuate its value as a new commodity, but the comparison now risks understating data’s importance in every aspect of how we work and live. Unlike oil, data is constantly replenished, and amid the slow but inevitable decline of the petroleum economy, every successful organisation is now a data company. It’s impossible to be a modern enterprise without data-driven operations and culture.

Data streamlines business processes, making organisations run more efficiently and helping companies understand – and therefore serve – their customers better. With artificial intelligence and machine learning, companies can also make better decisions faster. But data doesn’t just bring value in a core business sense: from developing drugs to irrigating farmland or accelerating innovation in health, science and energy, it is advancing society.

Many of these advances can be attributed to the cloud, which is not only where this innovation is happening but also why it is possible. For the first time in the history of business and technology, the cloud provides a limitlessly powerful, cost-effective way of capturing, storing, processing and getting value from data – one that is easy to stand up and scale.

A clog in the system

There is a problem, however. Every analytics, AI or machine learning use case relies on a supply chain: a pipeline of useful, analytics-ready data. And that pipeline is badly clogged.

“We have more data than ever and it’s growing exponentially, but it doesn’t start off in a state that’s useful for analytics, AI and machine learning,” says Matthew Scullion, CEO and founder of Matillion, a cloud-native data integration company.

“Data is like iron ore and steel. Just as you need steel to make a bridge, a factory or a car, you need analytics-ready data to gain analytical insight. Data starts off like ore and needs refining into steel: joining it together, cleaning it up, making sure it’s the right data and then embellishing it with metrics.

“That refinement process needs to happen before we can make every aspect of how we work, live and play better. But today the world’s ability to make data useful is constrained because the refinement process is done in a primitive way: by people who write code. Writing code is the slowest and hardest-to-maintain way of doing it, but most crucially it relies on a small number of highly skilled people, and the world doesn’t have enough of them.”
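To make that refinement step concrete, the sketch below shows the kind of hand-written pipeline Scullion is describing – extracting raw exports, cleaning them, joining them and embellishing them with a metric. It is a minimal, illustrative example: the file names, columns and metric are hypothetical, and real enterprise pipelines span far more sources and logic.

```python
# A minimal, hypothetical sketch of hand-coded data refinement: extract, clean,
# join and embellish raw exports into one analytics-ready table using pandas.
import pandas as pd

# Extract: raw exports from two hypothetical source systems
orders = pd.read_csv("raw/orders.csv", parse_dates=["order_date"])
customers = pd.read_csv("raw/customers.csv")

# Clean: remove duplicates and rows missing the fields the analysis needs
orders = orders.drop_duplicates(subset="order_id").dropna(subset=["customer_id", "amount"])
customers = customers.drop_duplicates(subset="customer_id")

# Join: combine the sources into a single table
enriched = orders.merge(customers, on="customer_id", how="left")

# Embellish with a metric: lifetime spend per customer (an illustrative business metric)
enriched["lifetime_spend"] = enriched.groupby("customer_id")["amount"].transform("sum")

# Publish the analytics-ready output for BI, AI and machine-learning use
enriched.to_parquet("analytics/customer_orders.parquet")
```

Even this toy version hints at why the approach struggles to scale: every new source, cleaning rule or metric means more code for a scarce group of engineers to write and maintain.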

A perfect storm

The problem is felt by every company trying to innovate with data, but it is most acute in large organisations, which have the biggest piles of data. A study of Matillion’s 200 largest customers – companies with revenues of a billion dollars or more – found they had, on average, more than 1,000 different computer systems from which they were extracting data to put to work in analytics, AI and machine learning. Some systems date back to the 1970s and 1980s, while others are contemporary SaaS and cloud-based systems. Some are commercial off-the-shelf systems bought from well-known vendors; others are built bespoke.

Enterprises have the most to win or lose by getting this right. If a large company improves a business process, understands the customer better and enhances a product – or fails to do any of these things – there are tens of millions of pounds at stake. Yet the limited pool of highly skilled engineers also hits large enterprises hardest. It is not uncommon for more than half of the employees in a tech startup to have the coding skills to make data useful. But in a large bank or manufacturing firm, it’s more like 0.5% of the workforce.

“Large organisations have the biggest pile of heterogeneous data and the most to win or lose, but the least capability per head in their workforce,” says Scullion. “We felt this problem personally because we used to build finished analytics solutions for these companies, and the biggest part of the job in any analytics, AI or machine learning use case – probably 70% of the work – is in making the data useful. So we developed a solution to solve it.”

Setting the pipeline free

By refining data in a way that doesn’t require skilled engineers to write code, Matillion enables organisations to make data useful more quickly and in greater quantities. It also means the task of making data useful is no longer the preserve of a small number of expensive specialists working in a primitive way. It can be done by far more people on the data engineering team and in other data disciplines such as data science, allowing companies to unblock the pipeline of analytics-ready data and accelerate analytics, AI and machine learning projects.

“The end of the arc is getting to a point where all aspects of making data useful – how you load, transform, synchronise and then orchestrate that process at enterprise scale – are delivered for you in a single place: a data operating system that doesn’t require high-end coding skills to use and works across clouds and cloud data platforms,” Scullion adds.

“A company’s ability to make data useful is directly correlated to its core outcomes. Being able to innovate with data at an accelerated rate is a competitive imperative. You either get good at this stuff, and real quick, or you’re not going to survive much longer. Matillion is building a platform to enable that, but it’s up to businesses to use that technology to become a data business and put data to work faster than their competition.”
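The ‘load, transform, synchronise and orchestrate’ arc Scullion describes can be sketched generically. The example below is not Matillion’s product or API – it is a hypothetical, hand-rolled stand-in that simply makes the stages and their ordering concrete.

```python
# An illustrative, hypothetical sketch of the load -> transform -> synchronise -> orchestrate
# pattern. This is not Matillion's API; it is a generic stand-in for the stages described above.
from datetime import datetime, timezone


def load(source: str) -> list[dict]:
    """Extract raw rows from a (hypothetical) source system."""
    print(f"loading from {source}")
    return [{"id": 1, "amount": 120.0}, {"id": 2, "amount": None}]


def transform(rows: list[dict]) -> list[dict]:
    """Clean and enrich: drop incomplete rows and stamp when each row was refined."""
    refined = [row for row in rows if row["amount"] is not None]
    for row in refined:
        row["refined_at"] = datetime.now(timezone.utc).isoformat()
    return refined


def synchronise(rows: list[dict], target: str) -> None:
    """Write the analytics-ready rows to a (hypothetical) cloud data platform."""
    print(f"writing {len(rows)} rows to {target}")


def orchestrate() -> None:
    """Run the stages in order; a real platform also schedules, retries and scales this."""
    raw = load("crm_system")
    ready = transform(raw)
    synchronise(ready, "warehouse.analytics.customers")


if __name__ == "__main__":
    orchestrate()
```

The point of the sketch is the shape rather than the code: the value of a data operating system, as the article frames it, lies in delivering and orchestrating these stages at enterprise scale without each one being hand-built.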

Learn more about Matillion at matillion.com/demo

Promoted by Matillion