Meeting AI Regulations: A Guide for Security Leaders

With AI adoption surging, security leaders tackle growing data risks and regulations. CISOs must focus on data management, align with standards, and build security culture.

Dana Simberkoff, Chief Risk, Privacy, and Information Security Officer, AvePoint

November 26, 2024


Artificial intelligence is rapidly transforming the business landscape, reshaping the way we work, create, and gather data insights. This year, 72% of organizations have adopted generative AI in some form, and 50% have adopted AI in two or more business functions -- up from less than a third of respondents in 2023. Yet as AI adoption heats up, so do security concerns: 45% of organizations have experienced data exposures while implementing AI. CISOs and security leaders now face the critical challenge of balancing AI implementation against growing data security risks.

At the same time, government agencies are turning their attention to AI security concerns, and the regulatory landscape surrounding the technology is evolving quickly. Uncertainty persists at the federal level, as no comprehensive legislation is currently in place in the US to set guardrails for the use of AI tools. However, frameworks such as the AI Bill of Rights and the Executive Order on AI, along with state-level regulations like the Colorado AI Act (45 other states introduced AI bills in 2024), are gaining momentum as governments and organizations look to mitigate the security risks associated with AI.

To prepare for rapidly evolving regulations in today’s unpredictable threat landscape, while still advancing AI initiatives across the organization, security leaders must prioritize the following strategies in the year ahead: 


Building a robust data management infrastructure: Whether or not an organization is ready for widespread AI adoption, implementing an advanced data management, governance, and lifecycle infrastructure is critical to keeping information safe from threats. Yet 44% of organizations still lack basic information management measures, and only just over half have fundamentals like archiving and retention policies (56%) and lifecycle management solutions (56%) in place.

To keep sensitive data safe from potential threats, proper governance and access policies must be established before AI is widely implemented, so that employees do not inadvertently share sensitive information with AI tools. Beyond keeping data secure, employing proper governance policies -- and investing in the automated tools needed to enforce them -- can also streamline compliance with new regulations, giving security leaders a more flexible, agile data infrastructure that can keep pace with fast-moving developments. 

Leveraging existing standards for AI use: To prepare data and security practices for the regulations to come, CISOs can look to existing, widely recognized industry standards for AI use. International standards like ISO/IEC 42001 outline recommended practices for organizations looking to use AI tools, supporting responsible development and use and providing a structure for risk management and data governance. Aligning internal practices with frameworks like ISO/IEC 42001 early in the implementation process ensures that AI data practices meet widely accepted benchmarks for security and ethics -- streamlining regulatory compliance down the road. 


Fostering a security-focused culture and principles: Security leaders must emphasize that security is everyone’s job, and that every individual plays a part in keeping data safe from threats. Ongoing education around AI and new regulations -- through continually updated, highly customized training -- ensures that all members of the organization know how to use the technology safely and are prepared to meet new security standards and mandates in the years to come.

Adopting “do no harm” principles will also help future-proof the organization against new regulations. This means carefully assessing the potential consequences and effects of AI before implementation, and evaluating how these tools can affect all individuals and stakeholders. Establishing these principles early informs what limitations should be set to prevent misuse -- and prepares security teams for future regulations around ethical and fair use.


As new AI regulations take shape in the coming years, security and business leaders need to focus on preparing their entire organization to meet new compliance standards. Even amid uncertainty about how regulations will progress, CISOs should safeguard data and build individual preparedness now, so they are ready to meet new standards as they rapidly evolve. AI is now everywhere, and ethical, secure, and compliant use is an organization-wide effort in 2025 -- one that begins with building proper data management and fair-use principles and emphasizing security awareness for all individuals.

About the Author

Dana Simberkoff

Chief Risk, Privacy, and Information Security Officer, AvePoint

Dana Louise Simberkoff is the chief risk, privacy, and information security officer at AvePoint, Inc. She provides executive-level consulting, research, and analytical support on current and upcoming industry trends, technology, standards, best practices, concepts, and solutions for risk management and compliance. She also maintains relationships with executive management and compliance officers, both inside and outside the corporation, providing guidance on product direction, technology enhancements, customer challenges, and market opportunities. Dana holds a Bachelor of Arts degree from Dartmouth College and a Juris Doctorate from Suffolk University Law School. 
