CIO PODCAST: THE AI ADVANTAGE | EPISODE 5

The CIO’s view on starting, and restarting, AI initiatives

Enterprise AI projects often fail to deliver ROI, not because of the tech, but due to strategic misfires and "pilot purgatory." This episode offers a candid look at rescuing stalled investments by identifying early red flags and performing honest audits. We break down the non-negotiable steps CIOs must take regarding data governance and organizational alignment to transform costly science experiments into scalable business value. Learn how to stop wasting budget and pivot your AI strategy toward sustainable success.

Getting out of AI pilot purgatory
19:36

Barbara Call 00:10

The AI era has arrived, but there is no guarantee of success. Industry estimates say anywhere from 70 to 95% of AI pilots fail to get off the ground. The question is, how can you move beyond POCs, prototypes, and feasibility studies to real ROI and measurable business outcomes? What does successful AI adoption look like, and what are the steps to move from potential to payback?

Barbara Call 00:36

Welcome, everyone. I'm Barbara Call, Global Director of Content Strategy at CIO.com, and this is "The AI Advantage: Navigating Risk, Reward and Real World Deployment", created in collaboration with CIO.com and Vertesia. Today, we'll be talking about how to stop wasting budget and instead how to pivot, restart, and scale your AI success. We're going to dig into the signs of a failing initiative, how to perform an honest audit, and the non-negotiable steps CIOs must take to turn costly science experiments into sustainable enterprise value. But first, let's introduce today's speakers. First up is Sean Hauver, CIO at Alorica. Welcome, Sean. Tell me about your background and your current role.

Sean Hauver 01:25

Thanks, Barbara. It's a pleasure to meet you. I'm the CIO for Alorica, a leading company in the CX industry. I've been in the technology industry for over 30 years, and I've had the good fortune of playing on both sides, working in industry as well as on the consulting side of the world. So I'm excited to talk with you and Keith about the challenges and benefits of AI.

Barbara Call 01:49

All right. Sean, great to have you. And our second speaker is Keith Schlosser, multi time CIO in the insurance industry. Welcome Keith. Tell us a little bit about yourself.

Keith Schlosser 02:00

Thanks, Barbara, nice to be here on the podcast with you. I've been in the industry for 35 years, specifically in the insurance business and financial services. I spent 20 years as a CIO and am currently focused on AI, helping firms identify opportunities for agentic AI and helping them implement it. I'm really excited to be on this podcast with Sean; we share a lot of similar experiences in our backgrounds.

Barbara Call 02:31

All right, great. Nice to have you, Keith. Let's kick things off. So Sean, my first question is for you: how can an organization gain alignment around an AI program of work, and what are the most important things a CIO should do to set up a successful project?

Sean Hauver 02:48

I think that's a great question. The first thing to make sure you're thinking about when you're implementing an AI initiative is that you're really starting with the business first. It's all about making sure we understand what the business challenges are, or what the business problem is that we're trying to solve; working closely with our business partners to make sure we're identifying what the benefits are going to be and what the success criteria will be; and then working with the technology organization to make sure we're designing solutions that will actually meet those business objectives, so that everybody has a complete understanding of what a win looks like at the end of the initiative.

Barbara Call 03:26

All right, thank you, Sean. So my next question is, what are the most common non-technical reasons that an AI initiative fails to move from a successful proof of concept to enterprise-wide production? Is the failure point "pilot palooza," as in too many scattered projects, or is it almost always a fundamental misalignment of sponsorship and ownership? Sean, let's start with you.

Sean Hauver 03:51

I think that's a great question, Barbara. First and foremost, make sure you have that alignment up front, so that you're actually implementing and building the technology for the business challenge at hand, as we talked about in the first question. There are times when you're doing proof of concepts just to learn a technology, without thinking you're going to implement it fully across the company. But for the most part, when you're implementing a project, it needs to be in the context of the business challenge you're trying to solve, and you need true business partnership and ownership from the business leadership to help drive it from an implementation in a local environment to a full environment.

Keith Schlosser 04:34

For me, it's really around sponsorship and ownership of a program of work. It needs to be sponsored and owned by the business at the highest levels, with joint accountability. In my career, I've often seen accountability fall on the shoulders of people like Sean and the IT team, and I think the accountability really needs to be equally shared across the business, to make sure that alignment is achieved and that there is a definition of what good looks like, what success looks like. And that has to be done by both sides of the equation.

Barbara Call 05:25

Great points. Excellent. Thank you. So if you're setting up a new AI project, what are the key elements where CIOs can influence the success at every stage, whether that's visioning, buy-in, creation, setup, management, or deployment? Sean, what are your thoughts there?

Sean Hauver 05:42

When you're setting up a new AI program, again, you're not typically just setting up an AI program. You're working with the business to understand what their challenges are, and there are times when you're using AI to address those challenges. So setting up that program really is about making sure you understand the business needs and requirements, and then leveraging the technology to address those challenges. When you're using AI, there are other things you need to take into account: making sure you have the right partners involved to help address those needs, from a business and a technology perspective, and making sure you understand the total cost of ownership of AI. That's an area where teams sometimes go off track; you need to understand the total cost of ownership upfront to make sure it still supports the business case that you're making.

Keith Schlosser 06:39

One of the things that I think is really important, and I'm interested to see if Sean agrees, is education before you start an AI project. AI right now is a buzzword in every organization; of course, it's all over the news and the stock market and so on. But what is AI in the context of your organization? If you can get everybody on the same page there, you have a higher likelihood of success, because people understand the difference between generative AI and agentic AI. It sounds simple, but many don't understand it. So I would start with education, getting everybody on a foundation where they are comfortable with what you're talking about, what the effort is, and what the outcomes ought to be. The other thing I would talk about is thinking ahead. When you're solving a problem with agentic AI, you need to understand what the next set of problems is going to be, and you should be building in a way that supports the future efforts you're going to undertake, instead of just solving the problem right in front of you. Select a platform that can do that, so you don't have to keep looking for and buying new platforms just to solve the next problem. And then the other thing I would say is: be fluid and be willing to adjust. You're going to learn along the way. The projects that are very rigid ("we have our goal, we have the requirements, we're not going to change") are, in my opinion, the ones that are going to fail. Having a leadership team that's willing to adjust, I think, is going to set you up for success as well. Sean, agree?

Sean Hauver 08:28

I completely agree. At Alorica, we implemented what we call Alorica University a couple of years back, and foundations of AI is one of the key training courses we've put out there, both from a business and a technology perspective. Because, as you mentioned, it's just as important for the business to understand the terminology we're using, as well as the capabilities, and that helps them think about new opportunities to go after as well.

Barbara Call 08:59

Great, great commentary. Thank you both. So when you inherit or identify a stalled AI project, the first step is often a triage. Where do you typically begin the audit, and specifically, how do you quickly assess whether the failure is fixable? Do you engage with the business, or should this be an IT-led initiative? Sean, what are your thoughts?

Sean Hauver 09:23

As you've heard in the theme of some of the earlier questions, it always starts with the business. So the first conversation is, again, about what the objectives were. If I inherited a stalled AI project, my first conversations would be with the business sponsors and the business team, to truly understand what they were trying to accomplish with the initiative. Then I'd work backwards from there: look at the technology and the solution that was designed for them, assess whether it could meet the needs they have in the short term and the long term, identify where the gaps were in capabilities or understanding, and close those gaps to try to bring that initiative back on track.

Keith Schlosser 10:04

Yeah, I totally agree. I'm not sure how much more I can add. It starts with the business. Oftentimes a failed project, especially in the world of AI, is a disconnect between what the business thought they were getting and what is being built and delivered. So I think it's about making sure the alignment is there, and then really the basics, the things we've all had to deal with going back our entire careers: scope creep. People get excited, and they start adding things, and the next thing you know, you have a failed project. Not because the original premise of the project failed; it's usually that it has just gotten a little out of control. Go back to the business, solidify what they're trying to accomplish, and make sure you deliver in the three Ds: doable, digestible, and deliverable chunks. I think you can bring it back on track.

Sean Hauver 11:06

And if I can add a little bit more to that, too. Great thoughts, Keith. As you continue to implement capabilities and features, it should not be viewed as a bad thing if all of a sudden the business says, "Oh, I didn't realize you could do that," and wants to add more capabilities and more functionality. That shouldn't be looked at as a failure, from a technology or a business perspective. The speed with which we're able to implement technologies now, with AI and other tools, and the iterative nature of being able to do it, should only encourage that iteration. Once the business sees it and feels it, they're able to add more capabilities and more ideas. Oftentimes they don't necessarily know, or the team doesn't necessarily know, all the potential opportunities that are there until they see it, and then lots of ideas will flourish. Encouraging that, partnering together, and making sure you continue to iterate and add functionality over time is, I think, a huge benefit of the process and the success criteria. That's why it's so important to have the partnership we talked about up front, that alignment at the senior-most executive level: we're in this together, and we're going to continue to make this successful. That's what success will look like in the long run.

Barbara Call 13:08

Sean, my next question is for you: how critical is it to have a strong foundation in data quality and data governance before embarking on your AI journey? And what's your advice for how to do that?

Sean Hauver 13:22

I actually think that's a great question, Barbara, and it really comes back to data. The information you put into a solution or a technology has always been the foundation, and has always been the most critical aspect of delivering a successful solution. If you go all the way back, whether it was the mainframe or the client-server days or SOA, it's always been the quality of the information you have that accelerates and enables you to build strong solutions. So to me, data is probably one of the most important assets that you have. The question is how you get there. It's making sure you're focused on the data assets you have, whether structured or unstructured, and bringing all of that together as part of the foundation. We started many years before AI was really a buzz term, and invested heavily in our enterprise data warehouse and data lakes to pull together all the company's data assets. AI has only added capabilities around unstructured data, being able to ingest it, train models, and everything else, and that has really been a huge success for us.
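To make Sean's point concrete, here is a minimal sketch of the kind of data quality gate a team might run before records feed an AI pipeline. It is illustrative only: the column names, thresholds, and pandas-based checks are assumptions for the example, not Alorica's actual governance rules.

    # Hypothetical data-quality gate for records headed into an AI pipeline.
    # Column names and thresholds are invented for illustration.
    import pandas as pd

    REQUIRED_COLUMNS = ["record_id", "created_at", "text"]
    MAX_NULL_RATE = 0.02   # reject batches with more than 2% missing values
    MAX_DUP_RATE = 0.01    # reject batches with more than 1% duplicate IDs

    def audit_batch(df: pd.DataFrame) -> list[str]:
        """Return a list of governance violations for this batch (empty = pass)."""
        problems = []
        missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
        if missing:
            problems.append(f"missing required columns: {missing}")
            return problems  # structural failure; skip the statistical checks
        null_rate = df[REQUIRED_COLUMNS].isna().mean().max()
        if null_rate > MAX_NULL_RATE:
            problems.append(f"null rate {null_rate:.1%} exceeds {MAX_NULL_RATE:.0%}")
        dup_rate = df["record_id"].duplicated().mean()
        if dup_rate > MAX_DUP_RATE:
            problems.append(f"duplicate-ID rate {dup_rate:.1%} exceeds {MAX_DUP_RATE:.0%}")
        return problems

    if __name__ == "__main__":
        batch = pd.DataFrame({
            "record_id": [1, 2, 2],
            "created_at": ["2024-01-01", "2024-01-02", None],
            "text": ["call summary", "chat log", "email thread"],
        })
        for issue in audit_batch(batch):
            print("REJECT:", issue)

The point of a gate like this is exactly what Sean describes: quality is enforced at the foundation, before any model sees the data, rather than debugged afterward.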

Keith Schlosser 14:41

So I'm going to take a little bit of a different approach on this one and add on to Sean's point, which I'll call data in a pre-AI stage, by focusing on data post-AI, meaning the data that's coming out of the machine, if you will. I think it's really important to have a strategy where that output, in turn, feeds back to the data warehouses Sean is talking about. Have a process in place to make sure the AI processes are producing information and output consistent with what you want them to be, and then be able to go back and look at the data the model was trained on and make sure all of that is in a healthy state. That needs to be thought about before you implement an AI project. I'm sure most of the listeners out there already know this, but you can't just load it up with data, get something out of the AI process or the AI machine, and trust it. You have to constantly be monitoring it, updating your models, updating your databases, and so on, and keeping a human-in-the-loop. In my opinion, this is not about replacing a bunch of humans. This is about making people more effective, giving them the information and the data they need to do their jobs better, and taking a little bit of the noise out of their day-to-day routine. There are a lot of things that I'm sure people do that are not super valuable, and I think the work that companies like yours are doing, Sean, is really designed around helping people do more work, do it better, and make more informed decisions. So Sean, I'm actually interested: do you agree, or am I off base on that?
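One way to picture the feedback loop Keith describes is a routing step that scores every AI output, sends low-confidence results to a human reviewer, and logs everything for retraining. This is a hypothetical sketch: the confidence threshold, field names, and routing logic are assumptions, not a reference to any specific product.

    # Hypothetical post-deployment monitoring loop: every AI output is scored,
    # low-confidence results go to a human reviewer, and all outcomes are
    # logged back for model and data updates. Names and thresholds invented.
    from dataclasses import dataclass

    CONFIDENCE_FLOOR = 0.85  # below this, a human must review

    @dataclass
    class ModelOutput:
        request_id: str
        answer: str
        confidence: float

    def route_output(output: ModelOutput, review_queue: list, audit_log: list) -> str:
        """Decide whether an output ships directly or goes to human review."""
        if output.confidence >= CONFIDENCE_FLOOR:
            decision = "auto_approved"
        else:
            decision = "human_review"
            review_queue.append(output)  # a person keeps the final say
        # Everything is logged so drift can be detected and models retrained.
        audit_log.append({"id": output.request_id,
                          "confidence": output.confidence,
                          "decision": decision})
        return decision

    if __name__ == "__main__":
        queue, log = [], []
        print(route_output(ModelOutput("r1", "Refund approved", 0.93), queue, log))
        print(route_output(ModelOutput("r2", "Escalate to legal", 0.41), queue, log))
        print(f"{len(queue)} output(s) waiting for human review")

The design choice mirrors Keith's human-in-the-loop point: automation handles the routine cases, people handle the uncertain ones, and the audit log is what feeds quality back into the warehouse.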

Sean Hauver 16:51

I agree, Keith, honestly. When you talk about what the future holds, we truly believe, from an Alorica perspective, it is about enabling our employees to be more successful. We're not looking at this as replacing people; even the agentic AI and other capabilities are there to enable our teams to be more successful and to work on higher value-added solutions for the business. So I 100% agree with you.

Barbara Call 17:21

Sean, what does the future of AI hold for your organization?

Sean Hauver 17:31

We look at AI as unlocking our company's full potential. It's really about augmenting, about letting technology help all of our employees, regardless of what role they play, be more effective and more efficient, and really letting technology reduce the minutiae that bogs them down day-to-day. Whether it's an executive capturing actions, status, and transcripts from their meetings; or an agent getting additional information so they can provide the best experience for the clients they're working with; or a developer using AI tools to auto-generate code so they can focus on the business logic instead of the things that used to take a lot of time off their plate. So I'm super excited about the future. It's definitely going to help us perform better and lead to further growth for the organization.