CCOs cautiously eye ChatGPT usage at their firms

Managers’ adoption challenges include the risk of leaking material non-public information and meeting archiving requirements.

Private equity compliance chiefs say their firms plan to be cautious about experimenting with ChatGPT.

The executives took circumspect views of the conversational AI program’s usage during panels held at PEI’s Private Fund Compliance Forum last week, which covered topics including cybersecurity, recordkeeping and risk sensitivity.

“I’ve been meeting with our CTO to discuss policies and practices and uses, and trying to understand the potential risks associated with ChatGPT, but we haven’t gotten to a point where we’ve adopted a policy yet,” said one compliance chief at the event. “I’m still trying to wrap my hands around it.”

And others said they are holding off entirely from using ChatGPT, since too many questions still hang over regulatory requirements and cybersecurity risks.

“I just can’t endorse it right now,” said a second CCO, speaking on a panel about regulation and technology. “I see what some of my peers are doing but there’s still a lot of questions around its use.”

That’s in large part because of the challenges posed by records retention rules.

“It obviously has immense potential, but my approach has been not to use it for any business case right now because we don’t have the base level of archiving in place for it,” he added.

A third CCO at the event noted the nascence of the technology, saying questions remain about cybersecurity and how ChatGPT’s biases – a problem all so-called “narrow” AI tools still pose – could affect areas of the business.

“We’re not encouraging use of it, we’re discouraging use of it,” she said.

One panelist’s firm has a policy permitting ChatGPT usage, provided there is a clear use case for it and that employees do not input material non-public or other sensitive information. That person also warned that the program’s responses to prompts should be taken with “a giant grain of salt” and that “you have to consider bias.”

Confidential information

Others expressed concerns about internal oversight of ChatGPT usage, and the risk of proprietary information leaking out through it.

“Our marketing and strategy officers are definitely leaning into AI, so we’re just making sure that they’re not putting any confidential information out there,” said a fourth CCO speaker at the regulatory demands panel.

That person said their approach is to make use of ChatGPT, but with caution.

“We want to make sure people know how to use it,” she said. “You don’t want to be the bad person saying ‘no’ all the time, so you have to keep up with educating people and making sure it’s used appropriately.”

Material non-public information is “the overarching” risk, said another attendee at the event. But he voiced comfort with sponsors using ChatGPT if they first decide on what it should be used for and whether the program would add efficiency.

He also noted that certain risk areas associated with ChatGPT might be addressed by existing internal measures.

That sentiment was seconded by another attendee, who said that companies can update their existing policies instead of making new ones.

“Take a look at what policies you already have in place around software management, transferring sensitive data and confidential data and work from there,” he said. “For example, if you already have a policy about not sharing corporate or sensitive information in a personal Google Drive, that could be amended to include ChatGPT or an AI.”

But in the main, ChatGPT may actually be a boon to compliance departments, the person said.

“I think AI and programs like ChatGPT can be helpful to CCOs in learning more about technology and cybersecurity, where they can ask compliance questions to help understand this expanding role.”

With additional reporting by Jennifer Banzaca