Haggling Over the Future of AI Regulation and Responsibility
Can more guardrails and oversight allay fears of damage AI might do in terrible hands -- or simply by accident?
At a Glance
- The AI x Future of Work Summit brought together regulators, investors, and innovators to discuss AI usage and potential AI policies.
- EEOC Commissioner Sonderling said precedent and policy may already exist for certain aspects of AI usage.
- Concerns linger over mixing AI-generated content with copyrighted material and over who actually owns the resulting work.
Now that the world has gone through its freshman year of generative AI mania, concerns have mounted about the possible long-term changes and troubles the technology may bring. Job disruption, the spread of misinformation, digital hallucinations, and new types of cyberattacks are just a few of the potential downsides that accompany the advantages GenAI can offer.
At last week’s AI x Future of Work Summit in New York City, the final panel, “Responsible AI Regulations: 2024 & Beyond,” discussed the growing scrutiny of AI and the policies taking shape around it.
The panel included Keith Sonderling, a commissioner with the Equal Employment Opportunity Commission; George Mathew, managing director at venture capital and private equity firm Insight Partners; Stephen Malone, vice president for legal, employment, and corporate affairs at Fox Corporation; and Var Shankar, executive director at the nonprofit Responsible AI Institute. Krystal Hu, venture capital and startups reporter with Reuters, moderated. A.Team, an engineering talent marketplace platform, hosted the summit.
Sonderling quipped that the rise of ChatGPT one year ago not only caught the public’s attention but also effectively woke up politicos to AI’s existence. “For members of Congress, members of the Senate, they can play with it themselves and, more importantly, their constituents could use and play around with it,” he said. “Suddenly it became the top news story, so then Washington begins to care about this.”
The influx of policymakers discussing AI can be a big distraction, Sonderling said, for tech founders who may hesitate “to engage in this area, invest in this area, buying in this area, or just be thought leaders in this area to be able to make new products.” The piecemeal, somewhat chaotic development of AI regulation in the United States and around the world, he said, obscures some significant aspects of the technology. “What AI is doing, it’s making a decision and, in my case, it’s making an employment decision, or in finance it’s making a credit decision, in housing it’s making a housing decision,” he said. “Whatever use you want to use AI, it’s making a decision.”
Employment decisions have been regulated in the US since 1964, Sonderling said, including longstanding prohibitions on bias and discrimination. Even with the addition of AI or other tools, those fundamental principles are still upheld by existing laws. “And all you’re doing is using technology to either help you, augment it, or completely make that decision,” he said. It is unclear whether new federal laws or amendments to existing ones might develop in the context of AI, Sonderling said, but he suggested tech founders should not over-anticipate legislation that has not been written. “Don’t be distracted,” he said. “Whatever industry you’re in, there’s principles regarding the use of whatever product you’re making, whatever decision you’re making and that has been long standing, and that’s [what] you should build your programs around, not potential future regulation.”
Tech founders are not alone in their concerns about regulation and responsibility in the use of AI. Malone said that for an organization of Fox’s scale, with thousands of employees, there can be concerns about a manager or staffer purchasing software applications that introduce unexpected risks to the company. “An organization adopting that kind of technology really needs to kick the tires on it to make sure that it’s compliant from a data security and data privacy standpoint,” he said.
There is also the user factor, which could unintentionally lead to AI being fed proprietary information that is set loose into the wild. “We’ve had to send out to our employees some cautions about using generative AI products and ChatGPT,” Malone said. “There’s some concern that if employees put confidential information into that system, the system will learn from it and adopt it and potentially provide that as a result to other users out there. And then our confidential information goes out into the marketplace.”
Organizations that use content produced by generative AI might also find themselves facing questions about ownership of their own products. “We’re concerned [about] users using generative AI, for example, receiving back results that might be copyrighted and then inadvertently using that, holding that out as their own content and violating copyright laws,” he said.
Using AI products in hiring and retention poses other potential risks for companies. “I think there are so many tremendous AI tools for the workplace in terms of helping you recruit, reach out, push for a diverse slate, have better recruiting -- I actually think it’s a terrific tool for fostering diversity,” Malone said. “There are some potential downsides, though, if that technology is not used properly, and so I think any organization, before they adopt that, really needs to kick the tires on that.”
Recent policy moves, such as the executive order on AI signed by President Biden, might be signs of things to come. Shankar noted that while the executive order may have the force of law, it applies only to federal agencies -- yet it has some potential to set precedent. “It’s not a broad law like the EU AI Act,” he said. “It kind of gives guidance to executive agencies on how to enforce existing laws, so it’s very powerful that way.”
With the federal government being a huge customer of technology, Shankar said its procurement procedures can set a standard of what “good” looks like, even more so than regulation or global developments. “That’s the main reason a lot of our members are looking to NIST [National Institute of Standards and Technology], which is supposed to be a non-regulatory, technocratic agency,” he said. NIST’s approach offers a common vocabulary and common terms, Shankar said, which communities could build around.
Waiting to discuss AI’s role within an organization, even with regulations still being drafted, does not seem to be an option as the world becomes enmeshed with such technology. Policymakers might take their time sorting out their roadmaps; companies might not have that luxury. “When organizations come to us and they’re like, ‘Hey, how do we start thinking about this?’ We’re kind of like, ‘You should have been thinking about this a few years ago,’” Shankar said. “We’re in a period of consequences and obviously it’s better to put guardrails in place now and to think through these issues now. Organizations that have been doing digital well for the past several years are already better positioned to deal with these issues.”