When the EU enacted the General Data Protection Regulation in 2018, it triggered a ripple effect of consumer privacy acts, like the California Consumer Privacy Act, that led corporate event marketing clients to add data privacy amendments to agreements with their suppliers. But a new regulatory chapter is upon us as the EU considers the AI Act, the first law of its kind from a major regulator and one expected to become the global standard for artificial intelligence policymaking.
Advertisers are already tackling AI policies around fair use and transparency, with brands adding clauses to contracts and organizations like the Association of National Advertisers releasing official guidance. And much like the adoption of data privacy practices, marketing agencies like The Opus Group say AI amendments are coming down the pike for events; in fact, a major global tech brand recently disclosed it is now including them in RFPs and agreements with suppliers.
To explore the issue further, we tapped Howie Cockrill, general counsel and evp-group operations at The Opus Group (parent company of Opus Agency, MAS, TENCUE and Verve) to discuss the impacts of AI policies and how agencies, clients and the experiential industry as a whole should prepare for “GDPR 2.0.”
Prepare to juggle different AI policies for each client.
A quick search across the landscape shows brands of all flavors adopting and promoting their AI policies. IBM has an extensive AI governance framework built around ethics and use cases, or “pillars.” KPMG has an AI framework for fair use grounded in corporate values. And PepsiCo is collaborating with Stanford to explore AI policies around supply chain, direct-to-consumer impact, organizational design and sustainability.
Cockrill says he expects that within the next six months clients will have AI policies in place and will be seeking agency frameworks as well, which makes preparing your agency’s AI “north star” critical to getting everyone on the same page ahead of time and avoiding sticky issues with agreements.
AI policy is more complicated than data privacy.
Unlike data privacy, which centers on a single topic, artificial intelligence presents an umbrella of topics for suppliers and clients to navigate. On top of obvious data privacy issues (like inputting personal attendee data or confidential client information into a “machine learning application” for analysis or measurement), client AI policies may cover, or ask suppliers to detail, ethical practices like quality control and human review, bias mitigation, and how genAI tools are being used for automated decision-making or performance reviews.
“What I’m noticing now and predicting is, this is going to be very similar to how GDPR affected data protection policies, where we start out with rigid requirements from the client that apply to all their vendors in all industries, we figure out an internal process to translate those to our business and our industry, and then when things get gray or there are questions, we may have to go to the client for approval,” Cockrill says.
The issue of intellectual property, however, is more complicated.
“In client AI policies, there are a lot more ‘thou shalt nots,’ so ‘thou shalt not use any client confidential information in the use of AI.’ And that’s fine, but when your MSA [master service agreement] broadly defines ‘confidential information’ as basically everything related to putting on your event, then your MSA and your AI policy are out of step with what you’re actually asking us to do at your events,” he says.
To take it a step further: when everything created on a project is handed over to the client, who gets to “own it” per the agreement, but some of that work is AI output governed by a tool’s own terms of use, questions arise about who the originator is, whether the work is derivative, and how it can be used.
Creating an AI chain of command.
Agency frameworks should also establish AI workflows to address common questions or concerns. At The Opus Group, teams have access to an internal email address where they can send questions; messages go to Cockrill, the head of legal, the head of IT and the head of information security. All parties see each question and can decide the best person or way to respond. Cockrill says it’s tempting to require everything to come through legal, IT and information security.
“The pro trick as an agency is, we have to create some guidelines and some guardrails for people, because legal and IT don’t have the bandwidth for every single AI question, right?” Cockrill says. “We have to decentralize some of that. So, department heads are authorized to make some decisions.”
The more everyone knows, the better.
As part of that decentralization, Cockrill says training is critical to empowering team members to navigate the nuances and the regulations, whether they’re in the back office, in legal or IT, in creative, or on the front line. That training should cover how AI is actually being applied within the company now, not just theoretical applications. “All employees need to be comfortable using it, and we think agencies should start training now, because our clients are using it, competitors are using it, and you want to be able to keep up with both sides,” Cockrill says.
A key component of this training, he says, should be teaching transparency: not only so that folks don’t feel the need to “hide” what they’re using and how, but so that the company as a whole can tailor its AI training and benefit from competitive pricing for these tools. There is certainly added value in that.