CIO PODCAST: THE AI ADVANTAGE | EPISODE 4

Building a secure and compliant AI operating model

This episode provides CIOs and CISOs with an actionable framework to manage AI security risks—including data breaches, IP theft, and malicious prompts—through effective guardrails, data protection, and robust governance that enables secure, compliant AI innovation at enterprise scale.


What CISOs need to know about AI

Barbara Call 00:10

The AI era has arrived, but there's no guarantee of success. Industry estimates say anywhere from 70 to 95% of AI pilots fail to get off the ground. The question is, how can you move beyond POCs, prototypes, and feasibility studies to real ROI and measurable business outcomes? What does successful AI adoption look like, and what are the steps to move from potential to payback?

Barbara Call 00:37

Welcome everyone. I'm Barbara Call, Global Director of Content Strategy at CIO.com, and this is "The AI Advantage: Navigating Risk, Reward and Real World Deployment," created in collaboration with CIO.com and Vertesia. In today's episode, we'll be talking about building a secure and compliant AI operating model. As most of us know, there are plenty of security risks inherent in AI, from data loss, breaches, and intellectual property theft to model integrity and malicious prompts. Today, we'll explore strategies for effectively guardrailing your LLMs, protecting your proprietary data, and establishing robust AI governance, as well as how to build trust, mitigate risk, and scale your AI efforts securely and responsibly across the enterprise. But first, let's introduce today's speakers: two multiple-time CISOs at major, highly regulated companies. First up is Allen Wilson. Allen, welcome. Tell us a little bit about yourself.

Allen Wilson 01:42

Thank you for having me today. By background, I'm a three-time CISO with over 30 years in information risk management and cybersecurity, working at the intersection of technology, risk, and business leadership. Much of my career has been shaped by roles within leading security, technology, and global financial services organizations. Over the years, I've had the privilege of working alongside many of the veterans who helped shape modern cybersecurity, and I continue to work shoulder to shoulder with some of the most capable and thoughtful professionals in the industry today.

Barbara Call 02:14

All right, great. Nice to have you. And our second speaker is Brian Fricke. Welcome, Brian. Tell us a little bit about yourself.

Brian Fricke 02:22

Thank you. Good to be here. I started my career in the military, serving in the Marine Corps. I did 10 years of federal service after that, and I've been in the financial sector for about 15 years at different small, midsize, and large financial institutions. I'm also the chair of the Mid-Size Bank Coalition of America, about 120 member banks working on best practices, and of course AI is now an emerging technology in that space. So I'm able to lead and coordinate with those fantastic-caliber CISOs. I also serve on the board of the National Technology Security Coalition, which does a lot of policy work on Capitol Hill, again focused on emerging risks and, of course, AI. So glad to be here. Thank you.

Barbara Call 03:09

Absolutely, great to have you. All right, so let's jump right in. Allen, I'd like to start with you. Should CISOs be worried about AI in an enterprise environment? And if yes, can you name a few things that keep you up at night?

Allen Wilson 03:23

Absolutely. CISOs absolutely need to be addressing AI risk. The risk is quiet. It's fast. It's already inside the enterprise. One of the most obvious things is that AI creates an invisible path for data exfiltration. Many times, employees paste sensitive data into copilots. They use software-as-a-service AI tooling available on the internet. Now we have the proliferation of AI-based browsers and browser extensions exposing corporate data. Most organizations can't see it, which means they're not logging it and they're not safeguarding that access, which, in and of itself, is a DLP failure. The other thing I would call out is that AI also breaks the security and identity models that we've built over the course of many years. I like to equate AI to early internet security: in the early days, we secured places like the perimeter, and when that broke, we turned to securing people and devices, or identities. Now, with AI, we're redefining identity. We have models and agents and copilots and embedded ML components that are actors that can reason, make decisions, and take actions. All of these things need authentication, authorization, auditability, provenance. But the problem is that AI identities aren't static or human-bound, and managing that security with traditional identity and access management, or IAM, principles breaks down. That's a new attack surface where traditional security operations just doesn't have muscle memory around these types of threats.
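To make the non-human identity point concrete, here is a minimal Python sketch of scoped, short-lived credentials with audit logging for an agent identity. The registry, scope names, and helper functions are illustrative assumptions, not any particular IAM product's API.

import secrets
import time
import uuid

# Hypothetical registry of non-human (agent) identities.
# Unlike human accounts, each credential is scoped and short-lived.
AGENT_REGISTRY = {}
AUDIT_LOG = []

def register_agent(name, allowed_scopes):
    """Create an agent identity with an explicit, minimal set of scopes."""
    agent_id = str(uuid.uuid4())
    AGENT_REGISTRY[agent_id] = {"name": name, "scopes": set(allowed_scopes)}
    return agent_id

def issue_token(agent_id, requested_scope, ttl_seconds=300):
    """Authenticate the agent and authorize a single scope, briefly."""
    agent = AGENT_REGISTRY.get(agent_id)
    if agent is None or requested_scope not in agent["scopes"]:
        AUDIT_LOG.append((time.time(), agent_id, requested_scope, "DENIED"))
        raise PermissionError(f"scope {requested_scope!r} not granted")
    token = {"token": secrets.token_urlsafe(16),
             "scope": requested_scope,
             "expires": time.time() + ttl_seconds}
    # Provenance: every grant is logged, so actions stay attributable.
    AUDIT_LOG.append((time.time(), agent_id, requested_scope, "GRANTED"))
    return token

# Usage: a summarization agent may read documents but never write.
agent = register_agent("report-summarizer", ["documents:read"])
print(issue_token(agent, "documents:read"))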

Barbara Call 05:11

All right, thank you. Brian, what are your thoughts?

Brian Fricke 05:14

Yeah, I fully agree with Allen, and what I'll say, and he'd probably agree, is that being worried won't help anyone. We try to position this as an emerging technology risk that the CISO needs to be managing. Manage AI like cloud, right? It's here to stay, and it comes in a lot of different flavors. We have to figure out probably three key questions. How is the organization going to consume it, and how is it going to use AI with intention? How is your supply chain going to begin to use AI, with or without your approval or knowledge, including your staff? And how will the bad guys use AI to improve their capabilities, and are we going to be able to keep pace with that? Understand where the risky use cases are coming from. How are we managing the non-human identities that are required for the agents now emerging in these organizations, and limiting their access so we can limit the potential damage, the blast radius, the cascade, or the runaway conditions that might ensue? Where's the data going, like Allen mentioned, through various browsers and other little gray use cases of AI? And are we managing the MCP-type tooling that gives these agents all these new capabilities? Then there's the question of what security use cases are emerging, and how we can increase the speed of our detection and response with precision. Not that humans are always precise, but when you start to rely on AI, or agents, to do this response work, you've got to be able to give them the proper playbooks and the right context and guardrails. You also have to frame up AI and agents more precisely when you're having these conversations, externally or internally, with your stakeholders, as in the sketch that follows. There are the basic chatbots we're all familiar with. There are the assistant GPT agents, which most people are using: when they're using ChatGPT, they're structuring inputs and outputs, and they're giving it prompts. Prompt engineering is happening; they're giving it all that great context so they can get good outputs. Those are the lower-risk, sort of tier one and tier two. Then you get into tasker agents, where you're giving it agency and access to systems. And the most risky one, in my mind, is the orchestration agents, where you have agents of agents and an army of activity happening that may or may not be as well managed as an organization might believe. So the totality of all those things I covered comes down to being able to articulate that back to the internal stakeholders: articulating the risk, understanding what the best-practice design principles are around it, and what technical controls we're able to put in place to help manage it.
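As a rough illustration of that agent framing, here is a sketch that encodes those tiers and gates the higher-agency ones behind review. The tier names, risk scores, and capability labels are assumptions for illustration only, not an established standard.

# Illustrative risk tiers for the agent framing described above:
# chatbots and assistants are lower risk; taskers and orchestrators
# gain agency and system access and so demand tighter controls.
AGENT_TIERS = {
    "chatbot":      {"risk": 1, "capabilities": {"answer_questions"}},
    "assistant":    {"risk": 2, "capabilities": {"answer_questions", "draft_content"}},
    "tasker":       {"risk": 3, "capabilities": {"answer_questions", "draft_content", "call_systems"}},
    "orchestrator": {"risk": 4, "capabilities": {"answer_questions", "draft_content", "call_systems", "spawn_agents"}},
}

def require_review(tier_name):
    """Higher tiers get agency and access, so gate them behind review."""
    tier = AGENT_TIERS[tier_name]
    return tier["risk"] >= 3  # taskers and orchestrators need human sign-off

for name in AGENT_TIERS:
    print(name, "needs review:", require_review(name))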

Barbara Call 08:03

Excellent. Okay, thank you both. So, my next question: we're all hearing about malicious prompt injections. What are some of the strategies that CIOs should be employing to defend against this attack vector? Brian, let's start with you.

Brian Fricke 08:19

I think input and output validation is really a key control to mitigate these kinds of risks, as well as access management. A lot of organizations have been struggling with access management for a long time, but now it's becoming even more imperative. You can inject a malicious prompt, or that malicious prompt can be leveraged against you, but if the agent doesn't have access to the sensitive data or the systems, that reduces the risk; it reduces the ability of that agent to do harm. It also reduces the capability, so you have to make that trade-off and have a really good understanding of what you're going to allow to happen and what you're not. Simon Willison actually describes this lethal trifecta, which I love, and it's similar to the access management concept of a toxic combination. An example of a toxic combination might be: I can initiate a wire, I can review the wire, I can approve the wire, and I can send the wire. If somebody has all four of those entitlements, that's a toxic combination; something bad could potentially take place, fraud or other abuse. Basically, the lethal trifecta in the context of AI is like a three-circle Venn diagram, and it describes a dangerous combination of capabilities that we are building into these agents. The first: does it have access to private data? Is the agent connected to sensitive information, emails, documents, databases? Can documents be uploaded to it? The second, which speaks directly to prompt injection: does it have exposure to untrusted content? Is it ingesting data, prompts, or instructions that might be hidden in a file from sources that could be malicious, whether user inputs, websites, or external files? And the third piece: does an ability to communicate externally, or exfiltrate, exist, meaning the agent can send data out via maybe an HTTP request or an email or a file transfer or some other channel? If you've got these three capabilities in place around your agent, you could have basically created a very dangerous situation. So you want to start to create design principles that separate those out, so that the teams that are building these things are building them safely, and we know how to evaluate these AI systems to detect whether those three capabilities have been put in place in one agent. I think that's an emerging design principle that we're all starting to circle around.
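Here is a minimal sketch of how a design review might mechanically flag the lethal trifecta Brian describes. The capability flags and function names are hypothetical, not an established tool.

from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    """Illustrative capability flags reviewed at design time."""
    private_data_access: bool      # connected to emails, documents, databases
    untrusted_content: bool        # ingests user uploads, web pages, external files
    external_communication: bool   # can send data out (HTTP, email, file transfer)

def lethal_trifecta(caps: AgentCapabilities) -> bool:
    """Flag Simon Willison's lethal trifecta: all three together are dangerous."""
    return (caps.private_data_access
            and caps.untrusted_content
            and caps.external_communication)

# Usage: a design review would reject or redesign this agent.
proposed = AgentCapabilities(private_data_access=True,
                             untrusted_content=True,
                             external_communication=True)
if lethal_trifecta(proposed):
    print("Dangerous combination: separate these capabilities across agents.")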

Barbara Call 10:47

That's great. Thank you. Allen, what are your thoughts?

Allen Wilson 10:51

I would echo much of what Brian said there. Certainly, prompt injection is very real, and you don't defend against it like you do traditional exploits. As Brian mentioned, you design around it. You've got to never treat prompts as control boundaries: system prompts, guardrails, and policies tend to be advisory in nature, not enforcement controls from a security perspective. CISOs need to mandate hard authorization, data access controls, and deterministic business logic, all outside of the model. The other thing I would say is to isolate models from sensitive systems by default, so not having LLMs directly call APIs, databases, or workflows without some type of broker layer that can enforce concepts of identity, intent, rate limiting, logging, those types of controls. Another important one, which I think we hit on here, is treating LLM output as untrusted input to whatever is processing it. Just as attackers can manipulate the inputs to a model, they can also manipulate the output from a model. So, in a nutshell, I think a lot of this is taking a layered approach. Assume you're not going to prevent every prompt injection; focus on detecting anomalous behaviors, unusual tool calls, privilege escalation attempts, things you can then work to contain fast, and just design with a layered approach to security in mind around AI.
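Here is a small, hypothetical sketch of the broker layer Allen describes, sitting between a model and real systems: it treats the model's output as untrusted, enforces a per-agent allowlist and a rate limit, and logs every call. The allowlist, limits, and names are illustrative assumptions, not a specific product.

import json
import time

# Hypothetical per-agent allowlist: the broker, not the model, decides
# which tools an identity may call and at what rate.
TOOL_ALLOWLIST = {"support-agent": {"lookup_order", "create_ticket"}}
RATE_LIMIT_PER_MINUTE = 10
_call_history = []

def broker_tool_call(agent_id, raw_model_output):
    """Deterministic checkpoint between an LLM and real systems."""
    # 1. Treat the model's output as untrusted input: parse and validate it.
    try:
        request = json.loads(raw_model_output)
        tool, args = request["tool"], request["args"]
    except (ValueError, KeyError, TypeError):
        raise ValueError("malformed tool request from model")
    # 2. Enforce identity and intent outside the model.
    if tool not in TOOL_ALLOWLIST.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not call {tool}")
    # 3. Enforce a simple rate limit.
    now = time.time()
    recent = [t for t in _call_history if now - t < 60]
    if len(recent) >= RATE_LIMIT_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")
    _call_history.append(now)
    # 4. Log for auditability, then dispatch to the real implementation.
    print(f"AUDIT {agent_id} -> {tool}({args})")
    return {"status": "dispatched", "tool": tool}

# Usage: only allowlisted, well-formed, in-rate requests get through.
print(broker_tool_call("support-agent", '{"tool": "lookup_order", "args": {"id": 42}}'))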

Barbara Call 12:39

Okay, thank you both. My next question: you mentioned LLMs. How do you defend against data loss by employees who, probably without malicious intent, are exposing company assets to public LLMs? Allen, let's start with you.

Allen Wilson 12:57

This is, I believe, one of the highest-probability AI risks out there in most enterprises, and it's almost never malicious. I think you have to accept that blocking all AI at scale can be a fruitless effort. Employees are going to use public LLMs because they're effective, and CISOs who try to ban them create shadow AI, which is typically not a secure environment. The second thing I'd say is to move data protection closer to the user. Traditional data leakage controls were built for email and file shares; you now need controls embedded further into browsers and API layers. So you've got to move those protections closer to the user than where we've perhaps traditionally deployed controls in an enterprise environment. Third point: give employees a safe alternative that's just as good. If your approved enterprise LLM is slower, worse, or harder to use than other AI sources out there, users will go around it. There's a term out there, "BYO AI," bring your own AI, that a lot of CISOs are facing. Security adoption tends to follow usability, not necessarily policy. Enabling the workforce to have secure, approved solutions, that's really how you combat a lot of these AI threats, and obviously that all starts with a governance structure that embraces secure AI adoption.
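As a rough sketch of moving data protection closer to the user, here is a toy outbound filter that screens text before it leaves for a public LLM. The patterns are placeholders; a real deployment would rely on an actual DLP product's detectors and policies.

import re

# Illustrative patterns only; a real deployment would use the detectors,
# classifiers, and policies of your DLP product, not this list.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\bconfidential\b"),
}

def screen_prompt(text):
    """Check outbound text before it leaves for a public LLM."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    if hits:
        return False, f"blocked: matched {', '.join(hits)}; use the approved internal LLM"
    return True, "allowed"

# Usage: a browser extension or API gateway would call this per request.
print(screen_prompt("Summarize this CONFIDENTIAL merger memo"))
print(screen_prompt("Write a haiku about autumn"))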

Barbara Call 14:51

Excellent. Thank you. Brian, what are your thoughts?

Brian Fricke 14:54

I see this challenge as similar to what we had with APIs, or what you have with data management requirements: you have to have a comprehensive inventory, and you have to articulate what is going to be approved to meet the business needs. There are several hundred common public LLMs: Llama 4 and Grok and Claude and Gemini and DeepSeek and GPT-5. There are so many providers out there, so you need to understand, "Are they fit for purpose? What are you trying to achieve?" Some of them are better for coding. Some of them are better for writing copy, for maybe marketing. There are so many different ones out there, so it's about selecting the one that's going to be fit for purpose. But also, some of them have inherent biases in them. Some of them were purposely created to be malicious; they have no guardrails. And these are the underlying models that are the engine driving whatever capability you're putting in front of the users. So, are you going to whitelist? Are you going to blacklist? Or are you going to allow multiple, as many large organizations are doing, so you can select any one of them that's approved for that use case? It's a strategic decision and a tactical one. These technology teams need to evaluate the risks and understand the reason they're going to select one or the other, and I think any one of them can be used with malicious intent by the user. So it goes back to the other discussion around input and output validation. What can it actually do? What actions are you allowing the agent to take? What kind of agency do you empower it with? The underlying model can certainly change the dynamics. But are you training your staff? Do they know how to give it the prompt, how to engineer the prompt with the right context and the right techniques? For instance, retrieval-augmented generation, where it's actually looking at a specific source, so it might be looking at your policies or your standards or a manual, as opposed to searching the internet or hallucinating and coming up with information that could be harmful or that could be inaccurate. Those are the kinds of dynamics that you have to think through when you start to put together your operational strategy.
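To illustrate the retrieval-augmented generation idea Brian mentions, here is a toy Python sketch that grounds a prompt in an internal policy passage instead of letting the model guess. The passages and the keyword-overlap matching are deliberately simplistic assumptions; real systems would use a vector store, but the grounding idea is the same.

import re

# A toy RAG step over a small in-memory set of internal policy passages.
POLICY_PASSAGES = [
    "Employees must not paste customer data into unapproved AI tools.",
    "Wire transfers over 10,000 dollars require dual approval.",
    "All vendor AI models must be reviewed before production use.",
]

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, passages):
    """Return the passage sharing the most words with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

def build_grounded_prompt(question):
    context = retrieve(question, POLICY_PASSAGES)
    return (f"Answer using only this policy excerpt:\n{context}\n\n"
            f"Question: {question}")

# The grounded prompt then goes to whichever approved model fits the use case.
print(build_grounded_prompt("What is the approval rule for wire transfers?"))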

Barbara Call 17:59

My next question, I'm going to start with Brian. From the point of view of a CIO, do you see value in a single enterprise AI platform to cover the entire enterprise, you know, front office and back office, versus multiple platforms and vendors addressing singular use cases? And part two, can you elaborate on how you can help influence the drive to fewer AI vendors versus multiple providers?

Brian Fricke 18:30

Your most common productivity gains for the workforce can be achieved with a single enterprise solution, whether it be Microsoft Copilot or OpenAI or the others out there providing enterprise-level or enterprise-grade solutions today. Most productivity gains for your workforce come from that. Then you have to decide: is it better to buy, or build, or partner with companies to address your most critical use cases? And when I say critical use cases, it's the stuff that really brings business value, or maybe where there's a lot of drudgery, or very complex or cumbersome use cases that might be ripe for automation. And not every solution needs AI; it could just be an automation opportunity. Every company wants to say they're an AI company now. I mean, Adobe PDF has been using optical character recognition (OCR), which is computer vision, and that's a branch of AI. So, again, you have to be more precise in how we define these things. Do you build or partner to target the core business value-add? What does your business do, and how do you address that specifically? Then you buy for the commodity use cases, like contract reviews, or governance matters, or maybe HR things. Even security-related services could be considered more commodity, because they're not germane to the core business service delivery. That's where you want to focus your build decisions, because even if you build your own, even if it's workflow management or automation, there's overhead to that. Somebody has to manage those things, and that gets into more of the workforce-upskilling conversation. But those are some of the dimensions you have to think through. And again, put it on paper and have the conversations with your key stakeholders, so that everybody agrees on the parameters under which you'd make decisions to build or buy.

Barbara Call 20:26

That's great advice. Allen, what would you add?

Allen Wilson 20:30

I would echo much of what Brian said there. Certainly, from a CIO perspective, there can be real value in consolidation, but I believe only if it's done intentionally. A single AI platform can help reduce chaos, but not necessarily risk by default. Fewer vendors means fewer data flows, fewer integrations, and a smaller audit surface, which directly lowers security overhead and friction. But conversely, you can also introduce monoculture risk. If one platform becomes your reasoning layer, your workflow engine, and your data broker, then a security vulnerability, an outage, or a misconfiguration are all things you now carry as consolidation risk. So I think CISOs, and this is where security organizations really shine and show their value, need to push for deliberate and logical analysis and evaluation, even if the vendor is singular, to judge what those capabilities are and advise the organization on what a sound approach is.

Barbara Call 21:41

Okay, thank you both. Allen, my next question, I'll start with you. In the world of AI, how do you build a culture where security is seen as an accelerator rather than a hindrance?

Allen Wilson 21:55

This is one of the most important leadership questions in AI right now, and I think the same principle applies: security has to be included early on. You have to show up early; certainly don't come in late when it comes to solutions involving AI, whether that's the procurement of AI solutions from third parties, how third parties are using AI, or internal builds involving AI. Basically, when security helps shape AI use cases at the beginning, teams move faster because rework disappears. Late-stage security, whether you're talking about AI or involvement in any type of business project, always feels like a brake if it's brought in too late. The other point I would make is about ensuring that security is a business enabler. Developers and business teams typically don't want approval gates; they want safe defaults. Having pre-approved models and data sets and tooling and templates that you can turn into reusable patterns helps security become a shortcut and a true business enabler, rather than something that is, in essence, slowing down progress in an organization.
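A small sketch of the safe-defaults idea Allen describes: a catalog of pre-approved model, data, and tooling patterns that acts as a shortcut, routing anything off the paved road to review. All names here are hypothetical.

# A toy "paved road" catalog: pre-approved model, data, and tooling
# combinations that teams can adopt without a fresh review cycle.
# Names are illustrative, not any particular vendor's catalog.
APPROVED_PATTERNS = {
    "internal-chat": {"model": "enterprise-llm-v2",
                      "data": ["public", "internal"],
                      "review": "pre-approved"},
    "customer-summaries": {"model": "enterprise-llm-v2",
                           "data": ["customer-pii"],
                           "review": "security sign-off required"},
}

def request_pattern(name):
    """Safe default first: reuse an approved pattern or route to review."""
    pattern = APPROVED_PATTERNS.get(name)
    if pattern is None:
        return "no approved pattern; open a security design review"
    return pattern["review"]

print(request_pattern("internal-chat"))        # fast path
print(request_pattern("trading-autopilot"))    # routed to review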

Barbara Call 23:20

Excellent. Brian, would love to know your thoughts.

Brian Fricke 23:23

CIOs and chief security officers exist to enable the business, to get them where they need to be, where they want to be, safely, or at least risk-informed. Even if they want to do something really reckless, it's up to us to help articulate what that risk is in business context, in financial impact or otherwise, so they can make an informed decision. We can certainly throw the red flag, but if the CEO wants to go off and expose APIs and do all kinds of stuff in an aggressive way, it's up to us to articulate what the strategic, operational, reputational, compliance, or financial impact could be to the organization. And if we can't communicate with the business, if we can't communicate with the board in a language that they understand, basically if we can't translate the ones and zeros into dollars and cents or into business risk, we're not doing our jobs very effectively. That's really the name of the game, whether you're a highly technical CIO, or you're coming out of maybe the governance, risk, and compliance space, or you come from a consultancy or a business background. Ultimately, we're all here to support the business and their strategic objectives.

Barbara Call 24:37

Excellent. Okay, thank you both. Here's my last wrap-up question. Looking forward, what does the future hold for security around AI, and how can IT and business leaders prepare? Brian, let's start with you.

Brian Fricke 24:50

I think you're going to see a massive change in the way our workforce has been organized. The need has been for specialists in a lot of different domains, and I think you're going to see a shift to more generalists using AI that does the specialized work. So there's going to be a shift in that regard. In my own organization, we were going to do a backfill of a role, and instead of backfilling that role with a specialist in that space, we included automation and AI skill sets and capabilities. We've even renamed the role to include that piece. So I think that's what you're going to see a lot more often: a need for those skill sets.

Barbara Call 25:38

Allen, your last thoughts?

Allen Wilson 25:39

I agree with much of what Brian said there, but looking to the future, I would say specifically that folks are going to need to be more adaptable. No successful AI deployment is going to take place without people on board. It's a people transformation as much as it is a technology transformation. We are very early on in the journey, as I alluded to earlier, certainly as it relates to security's journey along with AI, and I think adaptability is key. Keeping pace with security technology in AI is a constant learning exercise, a constant journey if you will, and at a more accelerated pace than folks have traditionally been accustomed to with general technology or even cybersecurity. AI is just moving at such a rapid pace that we're all going to have to become more adaptable. The other thing I would say is that it's truly an enabler, not just for the business and security in general; you're going to have a lot of entry-level folks for whom knowing how to use AI in whatever job or role they're coming into is going to be key. So I think it's going to be part of your core skill set. Just like learning to use a computer is part of whatever your job might be, I think AI is going to be part of your core skills as well.