As Artificial Intelligence (AI) continues to revolutionize industries, Chief Information Security Officers (CISOs) face new challenges in safeguarding their organizations. In a recent discussion with our guest expert – Alex Sharpe, Principal at Sharpe Management Consulting LLC – who has advised hundreds of CISOs, product vendors, and policymakers, five key areas of focus emerged. While the order of importance may vary depending on an organization’s unique context, these five security concerns consistently surface when considering AI systems. Achieving the appropriate balance between safeguards and fostering innovation is not easy. 

1. It’s All About the Data

At the heart of AI security is data. Data is crucial at every stage—from training the AI models to the inputs and outputs they generate. AI systems don’t operate on static code; they rely on data to learn and adapt. This introduces significant concerns around data privacy, bias, and data sovereignty.

There are three key facets of data in AI systems: 

  • Training Data: AI models learn from data, which raises privacy and ethical concerns. The integrity and quality of this data influence how the model performs, and any biases present in the training data can lead to undesirable outcomes. 
  • Input (prompts): Users often input prompts or upload data that can directly influence the model’s output. It’s crucial to consider how these inputs might affect outcomes and whether the data involved aligns with regulatory requirements, privacy practices, and corporate policies. 
  • Output: AI outputs can be unpredictable due to the non-deterministic nature of many models and the way those models evolve. This is where issues like hallucinations—such as citations of non-existent legal cases or nonsensical images—come into play. Continuous fact-checking is essential (see the sketch after this list). 
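
To make the fact-checking point concrete, here is a minimal Python sketch that flags cited URLs that do not resolve. The helper and regex are our own illustration, not something from the episode, and a link that resolves still needs a human to confirm it actually supports the claim.

```python
import re
import urllib.request

def verify_cited_urls(model_output: str, timeout: float = 5.0) -> dict:
    """Check whether each URL cited in a model's output actually resolves.

    A False result suggests a fabricated source; a True result only means
    the page exists, not that it supports the model's claim.
    """
    urls = [u.rstrip(".,;)'\"") for u in re.findall(r"https?://\S+", model_output)]
    results = {}
    for url in urls:
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status < 400
        except Exception:
            results[url] = False  # unreachable or malformed: escalate to a human
    return results

# Any URL mapped to False goes to a human fact-checker before the output is used.
print(verify_cited_urls("See the ruling at https://example.com/case-123 for details."))
```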

2. When AI Goes Wrong, It Goes Wrong in Unimaginable Ways 

Paraphrasing the former head of DARPA: “When things go wrong with AI, they go wrong in ways that a human never would.” This unpredictability is rooted in how AI processes information based on its limited understanding and the evolving nature of its models. It is easy for users to forget that these models only mimic human behavior. Given the complex and sometimes unpredictable behaviors of AI, organizations should prove out AI on lower-risk use cases before deploying it in high-risk scenarios. 

The European Union (EU), for example, has introduced governance regulations that categorize use cases based on the potential damage when (not if) something goes wrong. AI’s non-deterministic nature and our inability to fully inspect these models warrant keeping a human in the loop. 
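
As a rough illustration of keeping a human in the loop, the sketch below gates AI output behind a named reviewer for higher-risk tiers. The tier names loosely echo the EU’s risk-based approach but are illustrative assumptions, not the regulation’s actual categories.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    MINIMAL = 1   # e.g., internal drafting aids
    LIMITED = 2   # e.g., customer-facing chat with disclosures
    HIGH = 3      # e.g., outputs affecting credit, hiring, or safety

def release_output(output: str, tier: RiskTier,
                   approved_by: Optional[str] = None) -> str:
    """Release AI output only if its risk tier permits it, or a named
    human reviewer has signed off."""
    if tier is RiskTier.MINIMAL:
        return output  # low stakes: release without review
    if approved_by is None:
        raise PermissionError(f"{tier.name} output requires a human reviewer")
    return output      # record approved_by in an audit trail upstream

# A HIGH-tier output with no named reviewer is blocked rather than released.
release_output("Loan approved.", RiskTier.HIGH, approved_by="j.doe")
```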

3. User Errors and Misuse

One of the biggest threats to AI systems is user error. It’s astonishing how many users upload sensitive or confidential information into these models without considering the consequences. Once data is fed into an AI model, it may be irretrievable or used in unintended ways. 
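
One practical mitigation is to screen prompts before they leave the organization. The sketch below is a minimal illustration; the patterns are placeholders, and a production control would lean on the organization’s DLP and data-classification tooling rather than ad hoc regexes.

```python
import re

# Illustrative patterns only; tune to the organization's actual data types.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Mask likely-sensitive values before a prompt leaves the organization."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact_prompt("Draft a letter to jane@corp.com about SSN 123-45-6789"))
# -> "Draft a letter to [REDACTED-EMAIL] about SSN [REDACTED-SSN]"
```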

Misuse extends beyond data exposure to the improper application of AI tools. Some users mistakenly treat AI models as decision-making engines when that is not what they were designed to do, leading to inappropriate outcomes. It is easy to forget that these models only mimic human behavior; they don’t think, at least not the way humans do. There have been instances where people relied on Generative AI for medical advice, or even to pick their next car, only to be disappointed with the results. In business settings, misuse can have serious consequences, making awareness and training a critical priority for CISOs. 

4. Lack of Basic Security Features

Many AI models lack the fundamental security features we have come to expect, such as access control, logging, and monitoring. Vendors are actively working on integrating these capabilities, but they are not yet mature. Until they are, organizations must take the initiative to implement these security measures themselves. 
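
Until vendors close the gap, a thin wrapper around the model client can supply some of the missing controls. The Python sketch below bolts role checks and audit logging onto an arbitrary model call; the role names and client interface are assumptions for illustration.

```python
import logging
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

# Assumption: roles come from the organization's identity provider.
ALLOWED_ROLES = {"analyst", "engineer"}

def guarded_call(user: str, role: str, prompt: str,
                 model_fn: Callable[[str], str]) -> str:
    """Add the access control and audit logging the model itself lacks."""
    if role not in ALLOWED_ROLES:
        audit_log.warning("DENY user=%s role=%s", user, role)
        raise PermissionError(f"role '{role}' may not query the model")
    audit_log.info("ALLOW user=%s role=%s at=%s prompt_chars=%d",
                   user, role, datetime.now(timezone.utc).isoformat(), len(prompt))
    return model_fn(prompt)

# model_fn stands in for the vendor's client; a lambda keeps the sketch self-contained.
guarded_call("avi", "analyst", "Summarize Q3 incidents", lambda p: "...")
```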

For CISOs, ensuring that basic cybersecurity practices are in place—especially in environments involving AI—remains non-negotiable. 

5. Sovereignty and Ownership Issues 

Data ownership and sovereignty are increasingly complex in AI environments. When using AI products, organizations must understand who ultimately owns their data and the outputs. Is the data being used to train other models? Does it adhere to jurisdictional and regulatory requirements for data privacy and intellectual property? To quote Alex Sharpe: “Bits don’t know borders.” 

Data transfers across borders, especially under stringent privacy laws like the GDPR, present challenges. Organizations must ensure that their data usage complies with these regulations and that any agreements they enter into with AI vendors do not violate data sovereignty requirements. 
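
One lightweight safeguard is to verify where data is headed before it crosses a border. The sketch below assumes a hypothetical vendor host and allow-list; real residency enforcement would also have to cover storage, backups, and any reuse of the data for model training.

```python
from urllib.parse import urlparse

# Assumption: the endpoint and allow-list are illustrative placeholders,
# not real vendor hosts.
APPROVED_EU_HOSTS = {"eu.api.example-vendor.com"}

def check_residency(endpoint: str, data_region: str) -> None:
    """Refuse to send EU-governed data to a host outside the approved EU set."""
    host = urlparse(endpoint).hostname or ""
    if data_region == "EU" and host not in APPROVED_EU_HOSTS:
        raise ValueError(f"EU-governed data may not be sent to '{host}'")

check_residency("https://eu.api.example-vendor.com/v1/chat", data_region="EU")  # passes
```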

Conclusion

As AI continues to evolve, these five concerns—data, model unpredictability, user error, basic security, and sovereignty—will remain central to a CISO’s role in managing AI systems. AI presents incredible opportunities but also demands a change in the way we view security. Ultimately, while AI offers innovative solutions, organizations must approach its implementation with caution, awareness, and robust security protocols. Striking the balance between safeguards and innovation remains a difficult act. 

Watch our Expert Series episode with Alex Sharpe, where he discusses the CISO’s role in ensuring the safety and security of AI in more depth. In the two-part video series, he shares advice on building guardrails without stifling innovation, suggests security-specific resources for managing AI risks, and recommends AI security efforts and initiatives that CISOs should keep an eye on.