We chatted with Robin Bloor (@robinbloor), co-founder and chief analyst of The Bloor Group. He has more than 30 years of experience in data and information management and is the author of several books, including “The Electronic B@zaar: From the Silk Road to the eRoad.”
What is the presiding model for enterprise application architecture today?
I don’t think there is any model emerging or any consensus yet, because we are in a time of major change. The main thing that has happened is the introduction of RESTful interfaces. While that trend has been fairly effective, my conviction is that we are moving toward an event-driven architecture and real-time operations. Real-time can be a bad word, but there are some things that, if they happened faster, would be very good for organizations. The new stuff is about getting a lot of event data in hand before the transaction occurs. The things that precede a financial transaction are being captured, and it then becomes possible to leverage that information to provide better service to customers or to upsell them. That is a customer perspective, but this can apply to every business process. You can better direct a process if you know the direction it’s headed in…by getting intelligence fast enough that you can respond in real time.
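To make the pattern concrete, here is a minimal sketch of the idea described above: pre-transaction events are captured and scored as they arrive, so the business can react before the transaction completes. It is an illustration only; the names (PreTransactionEvent, suggest_upsell) and the two-items-in-cart rule are invented for the example, not taken from any product mentioned here.

```python
from dataclasses import dataclass, field
import time

# Hypothetical event emitted while a customer is still browsing,
# i.e. before any financial transaction has occurred.
@dataclass
class PreTransactionEvent:
    customer_id: str
    action: str  # e.g. "viewed_product", "added_to_cart"
    timestamp: float = field(default_factory=time.time)

def suggest_upsell(events: list[PreTransactionEvent]) -> str | None:
    """Invented rule: two cart additions before checkout trigger an offer."""
    cart_adds = [e for e in events if e.action == "added_to_cart"]
    if len(cart_adds) >= 2:
        return "offer_bundle_discount"  # respond before the sale closes
    return None

# Simulated stream of events arriving ahead of the transaction.
stream = [
    PreTransactionEvent("c42", "viewed_product"),
    PreTransactionEvent("c42", "added_to_cart"),
    PreTransactionEvent("c42", "added_to_cart"),
]

seen: list[PreTransactionEvent] = []
for event in stream:
    seen.append(event)
    decision = suggest_upsell(seen)
    if decision:
        print(f"{event.customer_id}: {decision}")  # act while the customer is live
        break
```

The point is only the shape: decisions are driven by events that arrive ahead of the transaction, rather than by the transaction record afterward.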
“My conviction is that we are moving toward an event-driven architecture and real-time operations.”
Robin Bloor, The Bloor Group
What is the difference between transactions and events when it comes to monitoring?
Events are pretty much anything that happens that yields useful data. If someone sends a positive or negative tweet, you can either promote or suppress it. People today talk a lot about the instrumentation of the world, or the Internet of Things, such as a car updating the driver about its status all the time so that the driver knows if a failure is about to happen or if the car needs an oil change or whatever. You can extend this concept to all of transportation, utilities, the movement of oil, and there is going to be an explosion of that kind of data. Embedded processors are growing at an exponential rate. This is going to be the next revolution and the source of all the Big Data volumes. It’s already happening to a certain extent through analyzing log files. The end goal for the organization is to leverage the data, perhaps to improve customer service and be very competitive, or to boost sales. It depends on what the organization sees as the opportunity, but the opportunity is there.
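As a toy illustration of that instrumentation idea, a device event can be as simple as a timestamped reading plus a rule that turns it into an actionable signal. The field names and thresholds below are invented for the example.

```python
from dataclasses import dataclass

# Invented telemetry record: the kind of status event a connected
# car might emit continuously.
@dataclass
class TelemetryEvent:
    vehicle_id: str
    oil_life_pct: float   # remaining oil life, 0-100
    engine_temp_c: float  # coolant temperature in Celsius

def alerts_for(event: TelemetryEvent) -> list[str]:
    """Turn a raw status event into driver-facing alerts (illustrative thresholds)."""
    alerts = []
    if event.oil_life_pct < 10:
        alerts.append("oil change due")
    if event.engine_temp_c > 110:
        alerts.append("warning: engine overheating")
    return alerts

print(alerts_for(TelemetryEvent("VIN123", oil_life_pct=7.5, engine_temp_c=95.0)))
# -> ['oil change due']
```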
Where’s the IT industry at today in terms of putting Big Data practices to use?
There is a mixed story there. Most of the technology around Big Data is very recent. It’s only been in the last year or two that Hadoop has gained a lot of chatter on the Web, for instance. We are still in the first wave of experimentation and innovation, which is why there is such a big diversity of database and data management products. What happens next is that the products will become more standard, and we will have Moore’s Law for at least the next eight years. The hardware will continue to accelerate, and software will backfill. We haven’t really had efficient software until now, but the new applications take advantage of parallel operations, which pulls down latencies, and that is a positive change. It’s still very expensive, however, to organize and analyze a petabyte of information. Once the cost of storage falls low enough, we will see the advent of the Big Market. That will be when everyone is conversant with Big Data technologies and they no longer cause a lot of problems for users. This isn’t happening yet.
So what’s the timeframe for that in your prediction?
It’ll probably be another six years before we transition from pioneering technology to the mainstreaming of the technology. Then, it will be another six years after that before Big Data is common to everyone and companies all over the place are gathering lots of data and doing meaningful things with it. The big problem is that Big Data doesn’t fix on any particular size. Today, hundreds of terabytes is considered Big Data. But in six years, large companies will be processing exabytes of data, and it won’t be easy to do.
How are Big Data and performance management intersecting?
I think that is bound to happen: APM solutions will need to process huge volumes of data. In trying to handle these high volumes, companies are trying to manage points in the flow. Data in motion is the trend, so management software must be able to perform at high capacity and respond to data events in flight. We’re not really looking at discrete events anymore but at what is happening in the data flow, and that is much more onerous. It requires smarter software.
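A rough sketch of the distinction drawn here: instead of judging each event in isolation, data-in-motion monitoring evaluates the flow itself, for example over a sliding window of recent measurements. The window size and the 20% slow-response rule are assumptions made up for the illustration.

```python
import random
from collections import deque

class FlowMonitor:
    """Judge the flow, not the single event: flag sustained degradation
    across a sliding window of recent latencies (all thresholds invented)."""

    def __init__(self, window: int = 50, threshold_ms: float = 200.0,
                 max_slow_frac: float = 0.20):
        self.recent = deque(maxlen=window)
        self.threshold_ms = threshold_ms
        self.max_slow_frac = max_slow_frac

    def on_event(self, latency_ms: float) -> bool:
        """Return True once the rolling share of slow responses is too high."""
        self.recent.append(latency_ms)
        if len(self.recent) < self.recent.maxlen:
            return False  # window not yet full
        slow = sum(1 for v in self.recent if v > self.threshold_ms)
        return slow / len(self.recent) > self.max_slow_frac

monitor = FlowMonitor()
random.seed(0)
# Healthy traffic, then a sustained shift: one slow event is tolerated,
# but a change in the flow as a whole raises the flag.
latencies = [random.gauss(120, 30) for _ in range(100)] + \
            [random.gauss(220, 30) for _ in range(100)]
for i, ms in enumerate(latencies):
    if monitor.on_event(ms):
        print(f"event {i}: degradation detected in flight")
        break
```

In a real APM product a check like this would sit inside a stream-processing pipeline, but the shape of the problem is the same: the unit of analysis is the window over the flow, not the individual event.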
Will today’s “smarter” startups soon make the big players, like BMC, irrelevant?
The large companies are not innovative. CA, Cisco, BMC, HP and IBM don’t have the corporate environment to make internal startups work well and they really don’t want to do it. About 90% of startups fail and lose everything in the first year, so they’re waiting until the startups fight their way out of the swamp. And then they’ll just buy one. Even if the big companies pay top dollar for something very small, they can multiply the revenues quickly because they have a large and established sales force. So they don’t need to innovate. On the other hand, some of these startups will hold their own, such as Splunk, which is now a public, billion-dollar company.