The Customer Success Playbook

Customer Success Playbook S3 E69 - Gayle Gorvett - Scaling AI Governance Without Killing Innovation

Kevin Metzger Season 3 Episode 69


How do you build AI governance that scales without becoming the innovation police? In our final conversation with tech lawyer Gayle Gorvett, we tackle the ultimate balancing act facing every organization: creating robust AI oversight that moves at the speed of business. From shocking federal court rulings that could force AI companies to retain all user data indefinitely, to the Trump administration's potential overhaul of copyright law, this episode reveals how rapidly the legal landscape is shifting beneath our feet. Gayle breaks down practical frameworks from NIST and Duke University that adapt to your specific business needs while avoiding the dreaded legal bottleneck. Whether you're protecting customer data or designing the future of work, this customer success playbook episode provides the roadmap for scaling governance without sacrificing innovation velocity.


Detailed Analysis

The tension between governance speed and innovation velocity represents one of the most critical challenges facing modern businesses implementing AI at scale. Gayle Gorvett's insights into adaptive risk frameworks offer a compelling alternative to the traditional "slow and thorough" legal approach that often strangles innovation in bureaucratic red tape.

The revelation about the OpenAI versus New York Times case demonstrates how quickly the legal landscape can shift with far-reaching implications. A single magistrate judge's ruling requiring OpenAI to retain all user data—regardless of contracts, enterprise agreements, or international privacy laws—illustrates the unpredictable nature of AI regulation. For customer success professionals, this uncertainty demands governance frameworks that can rapidly adapt to new legal realities without completely derailing operational efficiency.

The discussion of NIST and Duke University frameworks reveals the democratization of enterprise-level governance tools. These resources make sophisticated risk assessment accessible to organizations of all sizes, eliminating the excuse that "we're too small for proper AI governance." This democratization aligns perfectly with the customer success playbook philosophy of scalable, repeatable processes that deliver consistent outcomes regardless of organizational size.

Perhaps most intriguingly, the conversation touches on fundamental questions about intellectual property and compensation models in an AI-driven economy. Kevin's observation about automating human-designed workflows raises profound questions about fair compensation when human knowledge gets embedded into perpetual AI systems. This shift from time-based to value-based compensation models reflects broader changes in how customer success teams will need to demonstrate and capture value in an increasingly automated world.

The technical discussion about local versus hosted AI models becomes particularly relevant for customer success teams handling sensitive customer data. The ability to contain AI processing within controlled environments versus leveraging cloud-based solutions represents a strategic decision that balances capability, cost, and compliance considerations.

Gayle's emphasis on human oversight serves as a fitting close: even as checklists and deadlines get automated, humans must remain in the loop, reviewing the frameworks at least yearly and owning the data that flows into these systems.


Please Like, Comment, Share and Subscribe.

You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook

You can find Kevin at:
Metzgerbusiness.com - Kevin's personal website
Kevin Metzger on LinkedIn.

You can find Roman at:
Roman Trebon on LinkedIn.


Kevin Metzger:

Welcome back to the Customer Success Playbook. I'm Kevin Metzger. Roman is unable to join us again, but we are wrapping up our shows on AI and AI governance with Gayle Gorvett. We've explored the fundamentals and accountability of AI governance; today we're focusing on scalability. How do you do governance at speed? Whenever we have to involve legal, as somebody who's been involved in customer contracts and in working with legal teams, some legal teams are really good about speeding things up. With others it feels like, well, our job is to make sure the process slows down and everything's checked. And that is right, I mean, it's to make sure we do a good job of protecting our companies and protecting our customers. At the same time, we don't want to run into bottlenecks. How do we avoid that while working through a governance policy?

Gayle Gorvett:

Right. Well, the ideal answer people are trying to get to would be, oh yeah, we're going to automate our AI governance process, and that's unfortunately not the solution. You have to have humans in the loop. For one thing, this is complex, and it sounds weird to say it's too complex to be managed by AI, but it just is. Things are changing rapidly, not just on a technical level but on a regulatory and an operational level in this area. So I guess the answer is: you need to adapt what this looks like to your business and your particular use case, and do your best not to thwart the innovation. Again, for smaller companies it's going to look different than it does for larger companies. One of the things I've been seeing is that both large and small companies have been liking the adaptability of the risk framework model. That's what NIST has come up with, and that's what we are using in the working group that I'm working with at Duke.

Kevin Metzger:

Sorry, you mentioned NIST a few times. Yeah. I just want to make sure the audience knows: NIST is the National Institute of Standards and Technology, right? Yes. That's what you're referring to. Okay.

Gayle Gorvett:

Yes. And they are a federal agency that often comes out with guidelines in these very key areas. They come out with cybersecurity guidelines and playbooks that are very useful and free to the public, and I highly recommend that people go on their website and look at them. They've already come out with an AI risk management framework, and they're coming out with a generative AI framework as well. The various federal agencies are even encouraged to use those frameworks as sort of their baseline for how they start dealing with AI risk assessment, and then tweak them to adapt to their particular needs. That's one of the things that we used in our working group at Duke. We also looked at international regulations, and some of the regulations that are already out in Europe, to create our RAILS risk framework, which is specifically for legal teams. Either one of these types of frameworks can be useful for people who are trying to get something that is adaptive to different needs. You also want to take into account whether you have specific regulatory requirements, if you're in healthcare or financial services, or in an industry like legal, or you deal with defense contracts where there are particular requirements that you have to layer in. Now, you can potentially try to automate some of the checklists and some of the deadlines, some of those types of things within your assessments. But I would say at least yearly you want to be auditing what you're doing, and you want to have humans in the loop on these processes. And definitely, as you're creating them, you need to be involving the different teams within your organization that are going to be responsible for the data that flows into the AI systems you're using.
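As a loose illustration of the lightweight automation Gayle mentions, tracking checklists and review deadlines in software while humans still do the actual reviews, here is a minimal Python sketch of an AI risk register. The item names, owners, and yearly interval are assumptions for illustration only; this is not part of the NIST AI RMF or the Duke framework itself.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class AIRiskItem:
    """One line item in an AI risk register, loosely inspired by the idea of
    documenting systems, owners, and recurring reviews."""
    system_name: str            # the AI system or use case being assessed
    owner: str                  # team responsible for the data flowing into it
    description: str            # what is being checked (data sources, regulatory overlays)
    last_reviewed: date         # date of the last human review
    review_interval_days: int = 365       # at least a yearly audit, per the discussion
    human_signoff_required: bool = True   # automation tracks deadlines; people do the review

    def is_overdue(self, today: date | None = None) -> bool:
        """True if this item is past its review deadline."""
        today = today or date.today()
        return today > self.last_reviewed + timedelta(days=self.review_interval_days)


# Hypothetical register entries for illustration only.
register = [
    AIRiskItem("support-chat-assistant", "CS Ops",
               "Customer data sent to a hosted LLM", date(2024, 5, 1)),
    AIRiskItem("contract-summarizer", "Legal",
               "Defense-contract clauses; extra regulatory layer", date(2025, 2, 15)),
]

for item in register:
    if item.is_overdue():
        print(f"REVIEW DUE: {item.system_name} (owner: {item.owner})")
```

The point of the sketch is simply that the deadline tracking can be automated while the review itself, and the sign-off, stays with the responsible human team.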

Kevin Metzger:

Yeah. You mentioned auditing, and I know in IT there are all kinds of standard security audits that we tend to run across organizations. Have there been any changes to those types of audit processes, to start including how AI is being used within the organization, that you're aware of?

Gayle Gorvett:

Yes. Part of the risk assessment that companies go through when they're creating these governance policies is an audit of the AI processes the company is running and of the data that is flowing through those AI processes.

Kevin Metzger:

Yeah. So like the SOX compliance audits and things like that, are they being modified that you're aware of? Are you familiar with them?

Gayle Gorvett:

I am familiar with them. I'm not aware of those being modified specifically.

Kevin Metzger:

Gotcha. Yeah, I assume that'll probably start coming down the pike pretty quickly, I'm guessing, for enterprise security, because AI has such a big impact on how technology is being used. I think it'll become a piece of what comes down with the SOX compliance audits.

Gayle Gorvett:

Right, that definitely will probably be coming down the pike. But I'm not currently aware of those being modified specifically for AI.

Kevin Metzger:

I'm not either. It just occurred to me as you were talking about the audit process that there are some standards around these things, and maybe that's where it'll go. Is there anything else that you'd like to share?

Gayle Gorvett:

Well, I think one thing people should be aware of, in addition to what's happening in the US from a regulatory perspective, with the federal government introducing legislation to put a halt to state AI regulatory initiatives, which have really been the only mandatory AI regulation up until now, is that there's a very big lawsuit happening in the US that could have a very big impact. It's OpenAI versus the New York Times, in federal court in New York. The magistrate judge in charge of evidentiary rulings in that case has made an initial ruling which I find quite astonishing. About a week and a half ago, she came out with a ruling that requires OpenAI to keep all of their data outputs from their large language model, regardless of what regulations like the GDPR or the EU AI Act say, regardless of whether the customer is on an API or an enterprise version of ChatGPT or their other tools, and regardless of what any terms and conditions or contracts say about what they're supposed to be doing with those outputs. So the lawsuit is about the New York Times, but the decision is about everybody. It's something people should really be aware of, because it has massive potential privacy implications. I'm stunned this judge did this to begin with, and, I have to agree, there's no way

Kevin Metzger:

that can stand.

Gayle Gorvett:

No, there isn't. It

Kevin Metzger:

can't,

Gayle Gorvett:

No. And it just seems to me another example of judges going far beyond their purview, in terms of their jurisdictional powers and the four corners of the case they're meant to be deciding on, in the United States. I don't know what's happening, but it's something to have in the back of your mind as you're thinking about enterprise uses of these tools: the privacy consequences we now have to be technically designing around. I would suggest that vendors who are working with companies are now going to have to have an answer for how they protect against this type of incident. As a customer, I would point-blank ask that type of question if you're in a serious negotiation with an AI vendor, because this could be a concern.

Kevin Metzger:

And it's not just vendors, right? This is a really interesting thought. OpenAI is being used by all these SaaS vendors, right?

Gayle Gorvett:

Right, as the backend.

Kevin Metzger:

As the backend for that. So you've gotta be aware of that. And then if it's not OpenAI and it's Claude or somebody else, if it's a cloud-based vendor, I assume everybody's gotta be subject to the same law there, or the same ruling?

Gayle Gorvett:

No, no, no. This is exclusive to, it's only

Kevin Metzger:

for OpenAI.

Gayle Gorvett:

But my point is, they all need to have an answer for how they protect customers.

Kevin Metzger:

Yeah.

Gayle Gorvett:

What's the wrapper? What's the workaround? What's the technical solution, for anyone who's... yeah.

Kevin Metzger:

Do I have to run local models instead, so that at least it's contained within my environment and I can protect my customers that way, versus a hosted model? These are decisions you've actually got to consider as you're looking at how to design and implement AI tools within your business.
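As a rough illustration of the design decision Kevin describes, here is a minimal Python sketch that routes prompts containing customer data to a model hosted inside your own environment rather than to an external vendor API. The URLs, model names, and OpenAI-compatible request shape are assumptions for illustration, not any specific vendor's documented interface.

```python
import requests

# Hypothetical endpoints for illustration: a model served inside your own environment
# (e.g., an OpenAI-compatible local server) versus a hosted vendor API. Neither URL nor
# model name refers to a real vendor's actual endpoint.
LOCAL_URL = "http://localhost:8080/v1/chat/completions"
HOSTED_URL = "https://api.example-ai-vendor.com/v1/chat/completions"


def ask_model(prompt: str, contains_customer_data: bool) -> str:
    """Route prompts that contain sensitive customer data to the locally contained model;
    everything else may go to the hosted service."""
    url = LOCAL_URL if contains_customer_data else HOSTED_URL
    response = requests.post(
        url,
        json={
            "model": "local-model" if contains_customer_data else "hosted-model",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


# Usage: customer account details stay on the model running in our own environment.
print(ask_model("Summarize this account's renewal risk: ...", contains_customer_data=True))
```

The trade-off is the one discussed in the episode: the local path gives you containment and compliance control, while the hosted path typically gives you more capability at lower operational effort.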

Gayle Gorvett:

Right, right. I mean, this decision applied to all enterprise customers, even customers in the EU who weren't supposed to be subject to this type of thing. So it

Kevin Metzger:

Yeah, how is that even possible? Don't the, I would even say the EU data laws,

Gayle Gorvett:

Of course they apply.

Kevin Metzger:

This is

Gayle Gorvett:

Of course, yeah.

Kevin Metzger:

How did... You can't overrule that. Well,

Gayle Gorvett:

Yeah, I don't

Kevin Metzger:

understand the,

Gayle Gorvett:

It's a huge wrinkle. And then the other thing that has come into question in the last two weeks is the application of existing copyright law to AI. We've been hearing a lot of chatter from the creators of some of these models, I won't even call them inventors because they wouldn't use that term, who've been saying "delete all IP law," which I think is really ironic, because some of them hold a number of patents, in other companies or even within their own companies, and they don't seem to see the irony in suggesting we delete all IP law.

Kevin Metzger:

To me, as somebody who doesn't own any patents, I think it's a very interesting thing. These models have basically consumed all of mankind's knowledge.

Gayle Gorvett:

Right.

Kevin Metzger:

And they've basically used IP from centuries of written-down information and learned from it, and now they're producing knowledge. They're producing information through prompts and input, taking it and then reproducing new, derivative information. Right now, last I heard, if something is produced by AI, there's a certain amount that has to be human generated; only a very small amount can be AI generated for it to be considered a new copyright. I'll stop and let you talk.

Gayle Gorvett:

Yeah, no, that is true. There are two aspects to this. To get a copyright on something that's generated by AI, there's a whole series of tests in the US that the Copyright Office has set out. They've said that there is no ability to get a copyright on something that is entirely generated by AI. There has to be human involvement, there has to be originality, and it can't be based solely on a number of prompts. There has to be human interaction other than just prompting; there has to be human originality as part of it. The question is how much, and the Copyright Office is continuing to monitor other jurisdictions' perspectives on all of that and how other countries are viewing this copyrightability question. The interesting thing that's happened in the last couple of weeks concerns the inputs into the large language models, which were themselves subject to copyright; copyright only extends for a certain number of years, so things that were over a hundred years old were already no longer subject to it. The Trump administration has, in about a week, turned all of this on its head by firing some really key employees of the US Copyright Office, insinuating, although I haven't heard any real official statements on this, that they may be leaning towards the tech industry's argument that copyright won't apply to the information that was sucked into the large language models. And I'm absolutely stunned by that, because intellectual property as a field was created to promote innovation and fuel investment in our country. So it's quite astonishing to me to take that approach.

Kevin Metzger:

Yeah, it's interesting though, because the other thing is, the world we live in today is changing very rapidly, but when you go to work for a company, anything you produce is then owned by that company, right? Well, companies are taking advantage of all of that work, all of that human knowledge and capital that's been given to them. It's been given for pay at hourly rates, but now they're automating it into AI systems that move on in perpetuity and then letting the people go. And we're moving rapidly in the direction of building lots of AI agents, and there are all kinds of concepts as to whether, if you're able to produce more agents and have agents do work, you'll be able to have many people managing multiple agents, or what. But realistically, you're automating workflows that were designed by humans, through agents that have been trained by humans. Humans put invested capital into that, and now you're letting those humans go and letting those agents continue to run. I think we're going to have to move towards a new form of compensation model, where maybe IP goes into the blockchain and you somehow compensate based on usage recorded in the blockchain going forward. It's weird, because that model's already broken; you would almost have had to start with that model to make it work in perpetuity. But somehow, if you're using information produced by somebody in perpetuity, how have you fairly compensated them for that? Just their time isn't actually fair compensation.

Gayle Gorvett:

Right.

Kevin Metzger:

Going forward, we have to come up with a new model, because the alternative is taxing these companies that are building these tools and redistributing it as some kind of universal basic income, and we've seen that that doesn't work over time. So somehow we have to figure out new ways of rewarding people for the work they do produce. I don't know where it goes; this is just some wild speculation, but I keep thinking about it more as I see what's happening in the industry, especially on the tech side with coders and with how these new coding agents are working, which are really incredible. I did some work with somebody a couple of weeks ago where I had a requirements conversation, worked it into a requirements document, and then, using tools like Cursor and Replit and now Claude Code and OpenAI Codex, turned it into a proof of concept. It took an hour and a half to go from conversation to proof-of-concept code. I have some coding skills, so I don't want to say I don't have any; I have enough to figure out, hey, this is incredible and amazing, and make it work, but not enough to say I can make this work as an enterprise application and protect it, because there are specialized skills you need to make sure the right rules are in place, or to code it in a way that makes it secure. That is the path we're going down. We're taking idea to inception, we're automating those paths, we're automating workflows, and as we automate workflows, we're taking work that people do manually today. So work is going to change; how we work is going to change. One of the biggest things I've heard in the last several weeks is "taste": really bringing the human element of looking at what is being produced, judging whether it's good or not, and trying to bring it to the next level. But I think that will change too. Still, we are taking people's intellectual work and the processes they've designed and turning them into workflows. If we're going to use those workflows in perpetuity and say we no longer need the person for that role, then how do you manage that? You've had a person create a workflow to automate; that is a piece of intellectual design. I don't know whether it qualifies as intellectual property, but it's intellectual design that the company can then profit from in perpetuity, and I think we've got to figure out new compensation models. Yeah.

Gayle Gorvett:

No, definitely. I'm definitely working on new compensation models for my own work, because

Kevin Metzger:

the old ones were based on time, and time is changing; it's not the same model anymore. Well, thank you so much for your time. I really enjoyed the conversation. I love when I can get deep on AI, and I love being able to talk with somebody who knows as much as you do about the law and about what's currently happening in the law around AI. I hope you, our audience, liked and enjoyed this episode. If you did, like, subscribe, and share it with others. We'll be back with more strategies for your Customer Success Playbook, and until then, keep on.
