Knowledge is power. That this aphorism should have become a generally accepted truth is ironic, since the precise origin of the phrase remains unknown. It is attributed to Francis Bacon in the 17th century, yet the phrase never appeared in any of his works.
The interpretation of information, and the assumptions we draw, lead us to create the view we have of the world. When that world is a complex network of interrelations and moving parts, our assumptions can quickly grow from slight generalisations to fundamentally erroneous premises on which we base critical decisions.
In today’s multi-nodal supply networks, knowledge really is power. Knowing the precise disposition of every part, and exactly what and where the risks are, is a genuine competitive edge in a logistics-led enterprise: an edge we need to stay ahead in today’s era of volatility, uncertainty, complexity, and ambiguity.
It was a VUCA world before Covid, and those sensitivities have since been exacerbated, sometimes to the point of collapse. But the signs have been there for a while. The good news is that the digital solutions we need to help us take control of complex supply chains in a post-Covid world have been in development for some time.
Getting to grips with the vast volumes of data involved in a global logistics network is vital for that control, and a specific set of digital technologies is making it possible to transform the mass of raw data into useful, actionable intelligence.
For instance, when Covid lockdowns first began impacting manufacturing hubs in China and South East Asia, Lennox International mapped its entire just-in-time supply chain network, using unstructured data, to identify potential disruptions that could impact customer service, fulfilment, manufacturing and transportation. Within three weeks, Lennox went live with an automated, AI-driven visual dashboard that overlays the number of COVID-19 cases, deaths and hospitals by country, state and city with the company’s internal data on customer demand, orders, inventory and logistics to predict disruptions in real time. Lennox is now incorporating a wide range of external data feeds to systematically evaluate all potential risks to its global supply chain going forward.
The key technology underpinning everything is the new paradigm in data storage.
We dive headlong into the data lake. Why is this such a big deal? Simply because the traditional means of data storage and retrieval, whilst robust, have fundamental limitations with direct knock-on effects on how agile and responsive an organisation can be.
The traditional method of storing data is like storing goods in a warehouse. The warehouse is designed on the assumption that what you are bringing in and taking out will fit on the racking and into the bins. In the digital world, you must process all your data, and get it correctly packaged on the way in, before you can use it.
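This schema-on-write constraint is easy to see in miniature. The sketch below, using an in-memory SQLite database, shows the "racking" being fixed before any data arrives; the table name and fields are purely illustrative:

```python
import sqlite3

# Schema-on-write: the "racking" is fixed before any goods arrive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (sku TEXT, qty INTEGER, dest TEXT)")

# Data that fits the racking loads cleanly.
conn.execute("INSERT INTO shipments VALUES (?, ?, ?)",
             ("AB-123", 40, "Rotterdam"))

# Data with an unexpected shape is rejected at the door.
try:
    conn.execute("INSERT INTO shipments VALUES (?, ?, ?, ?)",
                 ("AB-124", 12, "Hamburg", "delayed by weather"))
except sqlite3.OperationalError as err:
    print("rejected:", err)
```

The extra field (a free-text delay note) cannot be stored at all until someone redesigns the warehouse, which is exactly the rigidity the article is describing.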
But what happens when circumstances change? What about the data that we don’t yet know we need? If knowledge is power, how do we acquire that knowledge if our systems are not designed to handle it?
This is where the data lake provides the answer. If the old paradigm was like a warehouse, a data lake is really like a lake. Everything flows in, however it comes, and is processed into something useful on the way out.
This represents a fundamental shift in how data can be turned into actionable intelligence more rapidly and more effectively than ever before. Gather all the data you might need, from news and weather feeds, from social media, from point of sale and sensors across the supply chain, and then you can turn that data into knowledge to be more responsive, more agile and more resilient.
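The contrast with the warehouse model can be sketched in a few lines. In this toy example (the record types and field names are invented for illustration), wildly different feeds all land in the "lake" as raw JSON strings, and structure is imposed only on the way out, per question asked:

```python
import json

# Schema-on-read: everything flows into the lake as-is, however it comes.
lake = [
    '{"type": "pos", "sku": "AB-123", "units_sold": 7}',
    '{"type": "weather", "port": "Felixstowe", "wind_kts": 48}',
    '{"type": "sensor", "container": "C-881", "temp_c": 11.2}',
]

def units_sold(raw_records):
    """Impose structure on the way out: answer one question
    (total point-of-sale units) and ignore everything else."""
    total = 0
    for raw in raw_records:
        rec = json.loads(raw)
        if rec.get("type") == "pos":
            total += rec["units_sold"]
    return total

print(units_sold(lake))  # 7
```

A new question tomorrow (say, correlating wind speed at ports with sensor temperatures) needs only a new read-side function; nothing about how the data was captured has to change.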
It might sound overly simplistic to say that changing how you store your data opens up a whole new world of possibilities. And it is, because the data is only the start.
To do that processing on the way out, you need immense computing power and some very clever software indeed. And to make an impact you need something that people can use that provides the actionable intelligence, allowing decisions and changes to happen in real time.
Consequently, on top of the data lake we need layers of artificial intelligence (AI) to transform raw data into meaning, and on top of that, layers of well-designed, easy-to-use applications that provide the power to execute. This whole stack needs to be built on massive processing power and connected to every part of the supply chain.
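A deliberately toy sketch of that stack, with each layer as a few lines of Python (the thresholds, feed and function names are all illustrative, and the "AI" here is a trivial running-average rule standing in for real models):

```python
# Data lake layer: raw daily order counts, stored as-is.
RAW_FEED = [
    {"day": 1, "orders": 100}, {"day": 2, "orders": 104},
    {"day": 3, "orders": 98},  {"day": 4, "orders": 31},
]

def flag_anomalies(records, threshold=0.5):
    """'AI' layer: flag any day whose orders fall below half the
    running mean of the days seen so far."""
    flagged, seen = [], []
    for rec in records:
        if seen and rec["orders"] < threshold * (sum(seen) / len(seen)):
            flagged.append(rec["day"])
        seen.append(rec["orders"])
    return flagged

def dashboard(records):
    """Application layer: turn model output into something a planner
    can act on."""
    for day in flag_anomalies(records):
        print(f"Day {day}: order volume collapsed, investigate upstream")

dashboard(RAW_FEED)  # flags day 4
```

The point of the layering is that each level only consumes the one below it: swap the toy rule for a real forecasting model, or the print statement for a real dashboard, and the rest of the stack is untouched.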
So, not so easy after all. Well, although not easy, it’s definitely happening. This confluence of scalable cloud computing, data science, artificial intelligence and user-first application design is driving the development of some truly revolutionary next-generation systems. Digital technology that will help all companies, especially those with complex logistics demands, deal with whatever the future has in store.