The last year has seen a major spike in the adoption of AI models in production environments, in part driven by the need to drive digital business transformation initiatives. While it’s still early days as far as AI is concerned, it’s also clear AI in the enterprise is entering a new phase.
Rob Thomas, senior vice president for software, cloud, and data platform at IBM, explains to VentureBeat how this next era of AI will evolve as hybrid cloud computing becomes the new norm in the enterprise.
As part of that effort, Thomas reveals IBM has formed a software-defined networking group to extend AI all the way out to edge computing platforms.
This interview has been edited for brevity and clarity.
VentureBeat: Before the COVID-19 pandemic hit, there was a concern AI adoption was occurring slowly. How much has that changed in the past year?
Rob Thomas: We’ve certainly seen massive acceleration for things like Watson Assistant for customer service. That absolutely exploded. We had nearly 100 customers that started and then went live in the first 90 days after COVID hit. When you broaden it out, there are five big use cases that have come up over the last year. One is customer service. Second is financial planning and budgeting. Third is data science. There’s such a shortage of data science skills, but that is slowly changing. Fourth is compliance. Regulatory compliance is only increasing, not decreasing. And then fifth is AI Ops. We launched our first AI Ops product last June, and that’s exploded as well, which is related to COVID in that everybody was forced remote. How do we better manage our IT systems? It can’t all be through humans because we’re not on site. We’ve got to use software to do that. If you had asked me 18 months ago, I wouldn’t have given you those five. I would have said, “There’s a bunch of experimentation.” Now we see pretty clearly there are five things people are doing that represent 80% of the activity.
VentureBeat: Should organizations be in the business of building AI or should they buy it in one form or another?
Thomas: I hate to be too dramatic, but we’re probably in a permanent and secular change where people want to build. Trying to fight that is a tough discussion because people really want to build. When we first started with Watson, the idea was that it was one big platform that does everything you need. I think what we’ve discovered along the way is that if you componentize and focus where we think we’re really good, people will pick up those pieces and use them. We focused on three areas for AI. One is natural language processing (NLP). If you look at things like external benchmarks, we have the best NLP in a business context. In terms of document understanding and semantic parsing of text, we do that really well. The second is automation. We’ve got really good models for how you automate business processes. Third is trust. I don’t really think anybody is going to invest to build a data lineage model, an explainability model, or bias detection. Why would a company build that? That’s a component we can provide. If you want your models to be regulatory compliant and explainable, we provide a good answer for that.
VentureBeat: Do you think people yet understand explainability and the importance of the provenance of AI models? Or are they just kind of blowing by that issue in the wake of the pandemic?
Thomas: We launched the first version of what we built to address that about two years ago. I would say that for the first year we got a lot of social credit. This changed dramatically in the second half of last year. We won some significant deals that were specifically for model management, explainability, and lifecycle management of AI because companies have grown to the point where they have thousands of AI models. It’s pretty clear that once you get to that scale, you have no choice but to do this, so I actually think this is about to explode. I think the tipping point is once you get north of a thousand or so models in production. At that point, it’s kind of like nobody’s minding the store. Somebody has to be in charge when you have that much machine learning making decisions. I think the second half of last year will prove to be a tipping point.
Above: IBM senior VP of software, cloud, and data Rob Thomas
VentureBeat: Historically, AI models have been trained mainly in the cloud, and then inference engines are employed to push AI out to where it’d be consumed. As edge computing evolves, there will be a need to push the training of AI models out to the edge where data is being analyzed at the point of creation and consumption. Is that the next AI frontier?
Thomas: I think it’s inevitable AI is going to happen where the data is because it’s not economical to do the opposite, which is to start everything with a big data movement. Now, we haven’t really launched this formally, but two months ago I started a unit in IBM software focused on software-defined networking (SDN) and the edge. I think it’s going to be a long-term trend where we need to be able to do analytics, AI, and machine learning (ML) at the edge. We’ve actually created a unit to go after that specifically.
VentureBeat: Didn’t IBM sell a networking group to Cisco a long time ago?
Thomas: Everything that we sold in the ’90s was hardware-based networking. My view is everything that’s done in hardware from a networking at the edge perspective is going to be done in software in the next five to seven years. That’s what’s different now.
VentureBeat: What differentiates IBM when it comes to AI most these days?
Thomas: There are three major trends that we see happening in the market. One is around decentralization of IT. We went from mainframes that are centralized to client/server and mobile. The initial chapter of public cloud was very much a return to a centralized architecture that brings everything to one place. We are now riding the trend that says that we will decentralize again in the world that will become much more about multicloud and hybrid cloud.
The second is around automation. How do you automate feature engineering and data science? We’ve done a lot in the realm of automation. The third is just around getting more value out of data. There was an IDC study last year that found 90% of the data in businesses is still unutilized or underutilized. Let’s be honest. We haven’t really cracked that problem yet. I’d say those are the three megatrends that we’re investing against. How does that manifest in the IBM strategy? In three ways. One is we are building all of our software on open source. That was not the case two years ago. Now, in conjunction with the Red Hat acquisition, we think there’s room in the market for innovation in and around open source. You see the cloud providers trying to effectively pirate open source rather than contribute or commit. Everything we’re doing from a software perspective is now either open source itself or it’s built on open source.
The second is around ecosystem. For many years we thought we could do it ourselves. One of the biggest changes we’ve made in conjunction with the move to open source is we’re going to do half of our business by making partners successful. That’s a big change. That’s why you see things like the announcement with Palantir. I think most people were surprised. That’s probably not something we would have done two years ago. It’s kind of an acknowledgment that all the best innovation doesn’t have to come from IBM. If we can work with partners that have a similar philosophy in terms of open source, that’s what we’re doing.
The third is a little bit more tactical. We announced earlier this year that we’ve completely changed our go-to-market strategy, which is to be much more technical. That’s what we’ve heard customers want. They don’t want a salesperson to come in and read them the website. They want somebody to roll up their sleeves and actually build something and co-create.
VentureBeat: How do you size up the competitive landscape?
Thomas: Watson components can run anywhere. The real question is why is nobody else enabling their AI to run anywhere? IBM is the only company doing that. My thesis is that most of the other big AI players have a strategy tax. If your whole strategy is to bring everything to our cloud, the last thing you want to do is enable your AI to run other places because then you’re acknowledging that other places exist. That’s a strategy advantage for us. We’re the only ones that can truly say you can bring the AI to where the data is. I think that’s going to give us a lot of momentum. We don’t have to be the biggest compute provider, but we do have to make it incredibly easy for companies to work across cloud environments. I think that’s a pretty good bet.
VentureBeat: Today there is a lot of talk about MLOps, and we already have DevOps and traditional IT operations. Will all that converge one day, or will we continue to need a small army of specialists?
Thomas: That’s a little tough to predict. I think the reason we’ve gotten a lot of momentum with AI Ops is because we took the stuff that was really hard in terms of data virtualization, model management, and model creation, and automated 60% to 70% of that. That’s hard. I think it’s going to be harder than ever to automate 100%. I do think people will get a lot more efficient as they get more models in production. You need to manage those in an automated fashion versus a manual fashion, but I think it’s a little tough to predict that at this stage.
VentureBeat: There are a lot of different AI engines. IBM has partnered with Salesforce. Will we see more of that type of collaboration? Will the AI experience become more federated?
Thomas: I think that’s right. Let’s look at what we did with Palantir. Most people thought of Palantir as an AI company, and obviously they associate Watson with AI. Palantir does something really good, which is a low-code, no-code environment, so you don’t have to have an expert data science team. What they don’t have is an environment for the data scientist who does want to go build models. They don’t have a data catalog. If you put those two together, suddenly you’ve got an AI system that’s really designed for a business. It’s got low-code and no-code tools, Python, data virtualization, and a data catalog. Customers can use that joint stack from us and will be better off than if they had chosen one or the other and then tried to fill the gaps themselves. I think you’ll probably see more partnerships over time. We’re really looking for partnerships that are complementary to what we’re doing.
VentureBeat: If organizations are each building AI models to optimize specific processes in their favor, will this devolve into competing AI models simply warring with one another?
Thomas: I don’t know if it’ll be that straightforward. Two companies are typically using very different datasets. Maybe they’re both joining with a common external dataset, but whatever they have is first-party or third-party data that is probably unique to them. I think you get different flavors, as opposed to two things that are conflicting or head to head. I think there’s a little bit more nuance there.
VentureBeat: Do you think we’ll keep calling it AI? Or will we get to a point where we just kind of realize that it’s a combination of algorithms and statistics and math [but we] don’t have to necessarily call it AI?
Thomas: I think the term will continue for a while because there is a difference between a rules-based system and a true learning machine that gets better over time as you feed it more data. There is a real distinction.