As tech terms go, “continuous intelligence” is recent – very recent. The term was hardly heard before, say, 2018, and then only rarely. In truth, it’s a very 2020 term.
But it’s a concept whose time has come. In essence, continuous intelligence is akin to real-time analytics – but it’s much more. While real-time analytics typically assigns a time frame (a very short one) to the intervals at which data is analyzed, continuous intelligence takes the final, inevitable step: it uses AI and ML to create a constant – and constantly digested – stream of data.
Put simply, continuous intelligence is a faster, smarter sibling of real-time analytics. By leveraging AI for decision automation, continuous intelligence enables a never-ending flow of data integration and data mining.
To provide insight into how businesses can gain advantage from continuous intelligence, I’ll speak with three leading experts:
Sam Ramji, Chief Strategy Officer, DataStax
Simon Crosby, CTO, SWIM.AI
George Gilbert, Industry Analyst, Data Management and Analytics
Moderator: James Maguire, Managing Editor, Datamation
TOP QUOTES:
What is Continuous Intelligence?
Ramji: I’d say it picks up where continuous integration and continuous deployment left off – it’s the next layer of the stack. How can we be constantly updating our decision models? Because what we don’t want, exactly as George pointed out, is a fake real-time learning system that is really a batch job behind the curtain – we don’t look behind the curtain, but they’re just doing a batch upgrade on the model once a day, so you’re only getting smarter in increments of a day.
So the competitive basis of CI/CD was that you could shift features around and change your understanding of your engagement with your users, really, deployment by deployment, and you could push to production in minutes to hours, so you should be able to do multiple deploys per day. When you bring that to what we want to see in automated intelligence in our systems: how can this application learn from all the behaviors of all of its users? How are those things moving, and how can you do that within a bounded context that doesn’t require yet another loop of human data scientists and model builders to make those incremental adjustments?
Crosby: So these are applications that always have to have an answer. You can’t afford to wait; you can’t compute only when a human looks, or when the GUI wants to refresh. You always need the answer – if there’s a pedestrian crossing the road, you’d better know. This notion of needing an answer means that data has to drive computation, so when data is generated by sensors, you’ve got to compute and analyze it, and do your AI on the fly, so you have an answer.
And so data drives the whole computational process, as opposed to humans issuing SQL queries or whatever. And that’s a fundamental change – a big change, by the way. In the normal way, you have a data scientist who sits and builds a model, and the cloud trains it and then pushes it, maybe, to the edge. Here, we have to continuously learn and predict, and that’s quite a change.
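Crosby's inversion – computation triggered by data arrival rather than by a query – can be sketched in a few lines. The pedestrian-detection framing and the distance threshold below are illustrative assumptions, not SWIM.AI's actual API.

```python
# Sketch: each sensor event drives analysis on arrival, so the answer
# already exists by the time anyone (or anything) asks for it.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str
    distance_m: float  # distance to the nearest detected object

class ContinuousAnalyzer:
    """Keeps a live answer up to date; nothing waits for a human query."""
    def __init__(self, alert_below_m: float = 5.0):
        self.alert_below_m = alert_below_m
        self.latest_answer: dict[str, bool] = {}

    def on_event(self, event: SensorEvent) -> None:
        # Computation happens when data arrives, not when someone asks.
        self.latest_answer[event.sensor_id] = event.distance_m < self.alert_below_m

    def answer(self, sensor_id: str) -> bool:
        # Reading the answer is a constant-time lookup -- it already exists.
        return self.latest_answer.get(sensor_id, False)

analyzer = ContinuousAnalyzer()
analyzer.on_event(SensorEvent("crosswalk-1", 3.2))
print(analyzer.answer("crosswalk-1"))  # a pedestrian is close: True
```

The point of the sketch is the control flow: `on_event` does the work, and `answer` merely reads state, which is the opposite of a query-driven system.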
The Advantage of Continuous Intelligence
Gilbert: A benefit to businesses, beyond just making or informing decisions faster, is that you now need to be able to build these applications closer to where the skills of operations technologists come into play, as opposed to information technologists. That requires simplicity – a much more end-to-end, vertically integrated set of application and infrastructure components that can figure out where to distribute themselves, but also how to maintain all the application components, so that when a change happens in one place, the system knows where all the other components need to be updated.
Ramji: So [Home Depot] built an application that would dynamically dispatch the expert advice from the person who’s meeting the customer at the curb, into the big box store, to the associates who are walking around serving other needs at the same time, saying, “Hey, you’re in this aisle, pick that. You’re in this aisle, pick that.” And you’ve got 20 cars waiting in the front. So it’s a dynamic dispatching problem. It’s a fascinating problem, and it requires you to have a combination of knowledge of what’s in the store.
You need to have cloud-based intelligence about the patterns you typically see across all of the Home Depot environments, times what’s actually available right then. That actually showed up in Home Depot’s revenues: they had double-digit revenue and margin improvements during the COVID recovery. Those kinds of responses make businesses smart, and smart businesses are successful. The ones that can’t adapt fade out.
With all these data streams going every which way, can traditional databases keep up with it all?
Ramji: I think old-fashioned databases can’t, and one of the things we’re seeing – if you just want to look at gross measurements of economic activity – is market behavior. SQL databases – classical single-instance, and in some cases replicated, but usually you don’t think replication, you think scale-up – that market was about $45 billion last year, growing about 14%, which is pretty fast for a market that size. Whereas NoSQL databases were about six and a half billion dollars last year, but they’re growing at 45%.
Gilbert: I think one way to look at it is that there’s a continuum. When you’ve got a cloud-native architecture, the developer and the administrators are supplying some intelligence explicitly about how to put all the pieces together, because you have specialized pieces that are good for specific functionality, but then you have to say, “Okay, here’s what data is gonna stream in. Here’s how I’m gonna persist it. Here’s how I’m gonna prepare it and analyze it, and how it’s gonna inform a decision.”
But when you’re closer to the edge and you’ve got many orders of magnitude more entities that are supplying data and need to be informed about decisions, something’s gotta tie all those pieces together in a way that would probably be much more difficult to do manually.
Cloud Native
Gilbert: If cloud native allows you to take that operational model from the cloud and extend it beyond the cloud, then maybe the way to express the connection to continuous intelligence is that we need a new application model that complements it. That’s what we’ve been talking about: an application fabric that is much more seamlessly distributed and has enough intelligence built in that you don’t have to have the entire application architecture in your head.
You put little bits of intelligence where appropriate. And this fabric then figures out how everything runs.
The Future of Continuous Intelligence
Crosby: [Edge is] where you get to process data. Often it will be at the physical edge; sometimes it’ll be in the cloud, ’cause you want to be near the source of the data, right? It will just be the insight. So there is one key benefit that we haven’t touched on, and that is once you adopt a stateful application model, you can do data science on live data, which is totally trippy and creeps people out, right?
Gilbert: I actually think Sam distilled it in a way that makes all the pieces fit together: the model, the framework, and the platform, if I’m capturing that right. And we have seen Simon and Swim put together what sure seems like a very elegant implementation of all three of those. It’s gonna be a pioneer, and I guess we’ll see other implementations.
With those platforms, sort of the water level rises and their abstraction level rises, like when they add an application mesh. I think they’re trying to start adding that sort of distributed intelligence that holds more of the pieces together transparently. What I don’t know is: can they really bring along all the baggage they’ve got, where a fresh implementation makes it so much simpler?
Ramji: One, I think distributed data is gonna be much, much easier in three years, because we will have solved for laying out distributed data on Kubernetes. The parallel effort that’s been running for a while to re-platform the enterprise on Kubernetes will give us a standardized control plane for laying out data and doing intelligent replication, which is gonna be the infrastructural component that application frameworks and platforms swarm around.
I think the second thing – and this is maybe the one to grow on, the less obvious one – is that this is gonna create a real wave of AutoML. When we talk about data science, we’re talking about a desperately labor-locked market. There’s demand for three to four times as many data scientists as there are in the world, or as are likely to be produced.
So, in those kinds of environments, you could do one of two things: you could either make the data scientists more productive, or you could make other people more like data scientists. When you listen to Simon talking about being able to do data science on live data – lots and lots of engineers, and I mean real-world engineers, not software engineers: mechanical engineers, construction folks – they will be able to look at these things.
They’re domain experts, they’ll have access to data, the data will be well-governed, and they’ll be able to use AutoML almost like a microscope or an oscilloscope and say, “Hey, I think there’s some interesting dimensionality here; there might be a feature in here.” And as they discover it, they may hand it off to a data scientist, or there might be enough value in the model that they hand it right back into production through this framework-platform mechanism. So, I think distributed data will be easily available, and AutoML will be prevalent.
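Ramji's "AutoML as a microscope" workflow – a domain expert sweeping candidate features and letting scores reveal which one carries signal – can be sketched with a simple cross-validated feature sweep. The synthetic data and the three candidate transforms are illustrative assumptions; real AutoML tools search far larger spaces automatically.

```python
# Sketch: a domain expert compares candidate feature transforms and lets
# cross-validation surface the "interesting dimensionality".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
x = rng.uniform(-6, 6, size=(400, 1))
y = (np.sin(x[:, 0]) > 0).astype(int)  # the signal hides in sin(x), not x

candidates = {
    "raw": x,
    "squared": x ** 2,
    "sine": np.sin(x),
}

for name, feats in candidates.items():
    score = cross_val_score(LogisticRegression(), feats, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
# The "sine" feature should score highest here -- the kind of discovery
# a domain expert could hand off to a data scientist, or push back into
# production if the model is already good enough.
```

The design point is that the expert never writes a model by hand; they only nominate transforms and read the scores, which is the microscope role Ramji describes.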