3 Ways the CTO Can Fortify the Organization in the Age of GenAI

CTOs play a strategic role in helping organizations protect corporate data as they implement generative AI tools in the workplace.

Rob Juncker, CTO

March 1, 2024

5 Min Read

Few technologies have captured the public imagination quite like generative AI. Seemingly every day, new AI-based chatbots, extensions, and apps are released to eager users around the world.

According to a recent Gartner survey of IT leaders, 55% of organizations are either piloting or in production mode with generative AI. That’s an impressive metric by any measure, not least considering that the phrase ‘generative AI’ was barely part of our collective lexicon just 12 months ago.

However, despite this technology’s promise to accelerate workforce productivity and efficiency, it has also left a minefield of potential risks and liabilities in its wake. An August survey by BlackBerry found that 75% of organizations worldwide were considering or implementing bans on ChatGPT and other generative AI applications in the workplace, with most of those (67%) citing the risk to data security and privacy.

Such data security issues arise because user input and interactions are the fuel that public AI platforms rely on for continuous learning and improvement. Consequently, if a user shares confidential company data with a chatbot (think: product roadmaps or customer information), that information can become part of the platform’s training data, which the chatbot might then reveal to subsequent users. Nor is this challenge limited to public AI platforms: even a company's internal LLM trained on its own proprietary datasets might inadvertently make sensitive information accessible to employees who are not authorized to view it.

To better evaluate and mitigate these risks, most enterprises that have begun to test the generative AI waters have leaned primarily on two senior roles for implementation: the CISO, who is ultimately responsible for securing the company’s sensitive data, and the general counsel, who oversees the organization’s governance, risk, and compliance function. However, as organizations begin to train AI models on their own data, they’d be remiss not to include another essential role in their strategic deliberations: the CTO.

Data Security and the CTO  

While the role of the CTO varies widely from one organization to the next, almost every CTO is responsible for building the technology stack and defining the policies that dictate how that infrastructure is used. This gives the CTO a unique vantage point from which to assess how AI initiatives align with the organization’s strategic objectives.

Their strategic insight becomes all the more important as organizations that are hesitant to go all-in on public AI platforms instead invest in developing their own AI models trained on their own data. Indeed, one of the major announcements at OpenAI’s recent DevDay conference was Custom Models, a program for training tailored versions of its flagship models on a company’s proprietary data sets. Naturally, other LLM providers are likely to follow suit, given the pervasive uncertainty around data security.

However, just because you choose to develop internally does not mean you’ve thwarted all AI risks. For example, consider one of the crown jewels of today’s digital enterprise: source code. As organizations integrate generative AI into their operations, they face new and complex risks related to source code management. In training these models, organizations often use customer data as part of the training sets and store it in source code repositories.

This intermingling of sensitive customer data with source code presents a number of challenges. Customer data is typically managed within secured databases, but with generative AI it can become embedded in a model’s parameters and outputs. The AI model itself becomes a repository of sensitive data, blurring the traditional boundary between data storage and application logic. With less-defined boundaries, sensitive data can quickly sprawl across multiple devices and platforms within the organization, significantly increasing the risk that it is exposed inadvertently or compromised by external attackers or, in some cases, malicious insiders.

So, how do you take something that is as technical and as abstract as an AI model and tame it into something suitable for users -- all without putting your most sensitive data at risk?  

3 Ways the CTO Can Help Strike the Balance 

Every enterprise CTO understands the principle of trade-offs. If a business unit owner demands faster performance for a particular application, then resources or budget might need to be diverted from other initiatives. Given their top-down view of the IT environment and how it interacts with third-party cloud services, the CTO is in a unique position to define an AI strategy that keeps data security top of mind. Consider the following three ways the CTO can collaborate with other key stakeholders and strike the right balance: 

1. Educate before you eradicate: Given the many security and regulatory risks of exposing data via generative AI, it’s only natural that many organizations reflexively ban its use in the short term. However, such a myopic mindset can hinder innovation in the long run. The CTO can help ensure that the organization's acceptable use policy clearly outlines the appropriate and inappropriate uses of generative AI technologies, detailing the specific scenarios in which generative AI can be utilized while emphasizing data security and compliance standards.

2. Isolate and secure source code repositories: The moment intellectual property is introduced to an AI model, the task of filtering it out becomes exponentially more difficult. It’s the CTO’s responsibility to ensure that access to source code repositories is tightly controlled and monitored. This includes establishing roles and permissions that limit who can access, modify, or distribute the code. By enforcing strict access controls, the CTO can minimize the risk of unauthorized access or leaks of sensitive data, and can establish processes that require code to be reviewed and approved before it is merged into the main repository (a minimal sketch of such a pre-merge check follows this list).

3. Give users options for opting out: Allowing users to opt out is critical for building trust and transparency between the company and its users -- who are more likely to engage with AI technologies when they feel their privacy is both respected and protected. The CTO has the technical expertise to understand how data is collected, processed, and used within AI systems, which is essential in creating opt-out mechanisms that genuinely protect user data (a sketch of such a filter also follows this list). The CTO can also play a key role in defining how these AI solutions are responsibly deployed, ensuring that user privacy and data security are integrated into the company's technology strategy.
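
To make the second recommendation concrete, here is a minimal sketch, in Python, of the kind of pre-merge gate that keeps customer data out of source code repositories. The patterns and the exit-code convention are illustrative assumptions -- not any specific product's API -- and a real deployment would lean on a vetted secret-scanning or DLP tool:

    # Sketch of a pre-merge scanner that flags likely customer data before it
    # lands in a repository. Patterns are illustrative, not a full DLP rule set.
    import re
    import sys
    from pathlib import Path

    # Hypothetical patterns for data that should never live next to source code.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "hardcoded secret": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
    }

    def scan_file(path: Path) -> list[str]:
        """Return a finding for each line that matches a sensitive pattern."""
        findings = []
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            return findings  # unreadable file: skip rather than crash the hook
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: possible {label}")
        return findings

    def main(paths: list[str]) -> int:
        findings = [f for p in paths for f in scan_file(Path(p))]
        for finding in findings:
            print(finding)
        return 1 if findings else 0  # non-zero exit blocks the commit/merge

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))

Wired into a pre-commit hook or CI step, the non-zero exit code blocks the change until the flagged lines are removed or reviewed.

And for the third recommendation, a similarly hedged sketch of honoring opt-outs before user data ever reaches a training pipeline. The record layout and the consent set are hypothetical stand-ins; in practice, consent flags would come from a governed consent-management system:

    # Sketch of honoring user opt-outs before data reaches a training pipeline.
    # Record layout and consent store are hypothetical stand-ins.
    from dataclasses import dataclass

    @dataclass
    class Record:
        user_id: str
        text: str

    # Hypothetical consent store: IDs of users who opted out of AI training.
    OPTED_OUT: set[str] = {"u-1002", "u-1007"}

    def training_eligible(records: list[Record]) -> list[Record]:
        """Drop opted-out users' records before any model ever sees them."""
        return [r for r in records if r.user_id not in OPTED_OUT]

    if __name__ == "__main__":
        batch = [
            Record("u-1001", "support ticket text ..."),
            Record("u-1002", "chat transcript ..."),  # opted out: excluded
        ]
        print([r.user_id for r in training_eligible(batch)])  # prints ['u-1001']

The design point in both sketches is the same: enforce the policy mechanically at the boundary -- the merge, the training pipeline -- rather than relying on individual users to remember it.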

About the Author

Rob Juncker

CTO, Code42

As chief technology officer of Code42, Rob leads software development and delivery teams. He brings more than 20 years of security, cloud, virtualization, mobile, and IT management experience to Code42. Although Rob grew up as a hacker, he’s happy to be on the “good side,” working alongside many CIOs and CISOs of Fortune 500 companies to ensure their networks and users are secure.
