Cybersecurity considerations for AI users

Artificial intelligence (AI) has been around, in various forms, for decades. The last three years, however, have seen significant advances, enabling organizations to rapidly consider and implement AI technologies in the hopes of gaining competitive advantages.

As with any new technology, organizations must consider all of the implications of deploying AI technologies in their businesses and operations. This includes recognizing the potential cybersecurity exposures of AI technologies, as well as the relevant risk mitigation strategies. Organizations should develop policies, procedures, and guidelines around the nature and scope of AI use and conduct due diligence on any specific AI tools they are considering. At a minimum, internal policies and procedures should align with the confidentiality, integrity, and availability requirements for the organization’s computer systems and the data they will process.

Data integrity and security

Organizations should adopt best practices to prevent data pollution in AI systems, such as the ingestion of false information sources or fabricated case law.

By nature, AI systems exist to process and learn from large sets of data. To protect sensitive data (e.g., corporate proprietary or personal information), organizations should ensure their workforces are trained in their data classification policies and that they understand what data can and cannot be used with AI systems.
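As a minimal illustration of such a policy in practice, a pre-submission screen can check text against data classification rules before it is sent to an external AI tool. The patterns and labels below are hypothetical placeholders; a real policy would cover far more data types and rely on more than simple pattern matching:

```python
import re

# Hypothetical blocked-data patterns for illustration only; an actual
# classification policy would be far broader (client identifiers,
# proprietary markers, personal names, and so on).
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_for_ai_use(text: str) -> list[str]:
    """Return the labels of any blocked data types found in `text`.

    An empty list means the text passed this (simplified) screen and
    may be submitted to an external AI tool under the sketched policy.
    """
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

print(classify_for_ai_use("Summarize this contract draft."))            # -> []
print(classify_for_ai_use("Client SSN is 123-45-6789, email a@b.com"))  # -> ['ssn', 'email']
```

A screen like this is a safety net, not a substitute for training: employees still need to understand why certain data cannot leave the organization.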

Securing AI systems

Threat actors are very aware of the value of AI systems, which both store large amounts of data and enable operational efficiencies. Deploying and properly configuring identity and access management controls, such as multifactor authentication (MFA), can help secure data assets and prevent unauthorized access to them. Regular code reviews, scans of AI systems for vulnerabilities or misconfigurations, and patching of host devices can also be part of an overarching enterprise data security program to help prevent system exploitation.
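By way of illustration, parts of an access review can be automated. The sketch below assumes hypothetical account records and role names; in practice, this data would come from the organization's identity provider:

```python
from dataclasses import dataclass

# Hypothetical account records; in practice these would be pulled from
# the identity provider's directory or an export of it.
@dataclass
class ServiceAccount:
    name: str
    mfa_enabled: bool
    roles: tuple[str, ...]

# Assumed role names for accounts that interact with AI systems.
ALLOWED_AI_ROLES = {"ai-reader", "ai-operator"}

def audit_accounts(accounts):
    """Flag accounts that fail the MFA or least-privilege checks."""
    findings = []
    for acct in accounts:
        if not acct.mfa_enabled:
            findings.append(f"{acct.name}: MFA not enforced")
        excess = set(acct.roles) - ALLOWED_AI_ROLES
        if excess:
            findings.append(f"{acct.name}: excess roles {sorted(excess)}")
    return findings

accounts = [
    ServiceAccount("svc-ai-prod", True, ("ai-reader",)),
    ServiceAccount("svc-legacy", False, ("ai-reader", "domain-admin")),
]
for finding in audit_accounts(accounts):
    print(finding)
```

Running such a check on a regular cadence turns a one-time configuration exercise into an ongoing control.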

Incident response planning

Incident response plans should be updated or developed to account for the AI technologies that organizations adopt. Organizations can monitor and log AI systems to detect anomalous behavior that may indicate an attack. They should also ensure that AI owners, users, and system managers are involved in incident response exercises related to those systems.
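As a simplified example of the monitoring described above, logged request volumes for an AI system can be compared against a known-good baseline, with sharp deviations flagged for investigation. The threshold and baseline below are illustrative assumptions; real monitoring would track many more signals than volume alone:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, recent, threshold=3.0):
    """Return indices of `recent` values that deviate from the baseline
    mean by more than `threshold` standard deviations.

    `baseline` is a known-good series of request counts per period;
    `recent` is the series under review.
    """
    mu = mean(baseline)
    sigma = max(stdev(baseline), 1e-9)  # guard against a flat baseline
    return [i for i, value in enumerate(recent)
            if abs(value - mu) / sigma > threshold]

baseline = [100, 102, 98, 101, 99, 100]   # hypothetical hourly request counts
print(flag_anomalies(baseline, [99, 5000]))  # -> [1]
```

Flagged periods would then feed the incident response process, which is why AI system owners and managers belong in response exercises.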

AI supply chains

Given that most organizations leverage AI technologies from external providers, organizations should closely investigate their AI supply chains. Specifically, organizations should:

  • Require third-party AI providers to thoroughly explain how their systems work, process data, protect data, and control system privileges, and to disclose whether and how the organization’s data will be used by the provider for non-contracted purposes.

  • Establish a risk-based cadence for reviewing the privacy and cybersecurity practices of AI service providers.

  • Require all third-party information technology providers to disclose how and where they use AI technologies and ensure that contracts with all providers clearly set out rights and obligations regarding technology, security, insurance, and liability.

Employee awareness

A significant number of cybersecurity incidents stem from human error. Taking the time to provide awareness training on AI dos and don’ts for your organization is critical.

Among other best practices, organizations should train employees on how to properly use AI systems, how to recognize signs that a system has been compromised, and whom to contact if help is needed. Additional periodic training should be provided as new features are rolled out or as security requirements are updated.

Compliance obligations

Ensuring compliance with relevant regulations and standards, including international and local regulations, is important for maintaining the security of AI systems. Several jurisdictions are exploring new AI-specific regulations, creating new obligations. In using AI technologies, companies may also be subject to a host of other rules and regulations that have nothing to do with AI.

Organizations should:

  • Keep up with industry and government standards, guidelines, and laws regarding AI systems and cybersecurity, and update internal policies and procedures as requirements change.

  • Keep an inventory of and regularly review all AI technologies and systems to ensure they are properly configured and adhere to compliance requirements.

  • Consider consulting outside counsel with expertise in compliance obligations for AI and other IT systems.
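The inventory review in the second point can be partially automated. The sketch below assumes a hypothetical record format with a few required compliance fields; the field names are illustrative, not a prescribed standard:

```python
# Assumed compliance fields every inventoried AI system record must carry.
REQUIRED_FIELDS = {"owner", "data_classification", "last_review", "provider"}

def inventory_gaps(inventory):
    """Return {system_name: sorted missing fields} for incomplete records.

    `inventory` maps each AI system's name to a dict of its recorded
    compliance attributes.
    """
    gaps = {}
    for name, record in inventory.items():
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            gaps[name] = sorted(missing)
    return gaps

inventory = {
    "contract-summarizer": {"owner": "legal-ops", "data_classification": "confidential",
                            "last_review": "2024-01-15", "provider": "VendorX"},
    "chat-assistant": {"owner": "it"},
}
print(inventory_gaps(inventory))
```

A report like this gives compliance reviews a concrete starting point: any system with gaps is overdue for attention.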

Operational reliance

In some cases, organizations may incorporate AI systems into their day-to-day operations. Organizations can reduce the risk of business interruption stemming from these AI systems by:

  • Conducting business impact analyses for any operations that materially rely on AI systems and, where appropriate, designing redundancy into these systems or defining manual backup processes.

  • Putting measures in place to question and validate the results of AI systems to ensure consistency and accuracy.
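One simple way to question and validate AI results is to run the same query several times and accept the answer only when the responses agree. A minimal sketch, assuming a hypothetical agreement threshold:

```python
from collections import Counter

def consensus_answer(answers, min_agreement=0.6):
    """Return the majority answer if it reaches `min_agreement` as a
    fraction of all responses; otherwise return None to signal that the
    result needs human review.

    The 0.6 threshold is an illustrative assumption, not a standard.
    """
    if not answers:
        return None
    top, count = Counter(answers).most_common(1)[0]
    return top if count / len(answers) >= min_agreement else None

print(consensus_answer(["approve", "approve", "deny"]))  # -> approve
print(consensus_answer(["a", "b", "c"]))                 # -> None
```

Inconsistent answers do not prove the system is wrong, but they are a useful trigger for routing the decision to a person.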

Acquiring AI expertise

AI is a quickly evolving technology that requires several different skillsets to implement, configure, and maintain systems in line with enterprise risk tolerance and in compliance with external requirements. Organizations looking to take advantage of AI should work with teams with AI experience and expertise on both the technology and operational sides.