To ensure safe usage of artificial intelligence (AI) technologies, companies should consider establishing a Strategic AI Council (SAIC) and AI Innovation Teams (AIITs), a set of first principles for the assurance of AI workloads, and short-cycle supplier diligence reviews and onboarding processes that include AI-specific due diligence.
Artificial intelligence (AI) is an extremely disruptive technological advancement, projected to affect up to 350 million white-collar jobs in the next five years.
Companies must establish responsible AI practices to ensure the safe usage of AI technologies, and should consider forming a Strategic AI Council (SAIC) and AI Innovation Teams (AIITs) to govern the scope of usage and measure the impact of AI.
A set of first principles for the assurance of AI workloads should be established, including statements such as: never enter confidential customer data into unvetted AI tools, run AI-generated code through standard code QA, and have a human validate all AI output prior to use.
Supplier diligence reviews and onboarding processes must include additional AI-specific due diligence around accuracy, alignment, ethics, and understandability.
AI toolstacks should be placed on six-month review cycles, rather than twelve-month cycles, to ensure capabilities and use cases do not exceed safety criteria.
AI Is One of the Most Disruptive Technological Advancements in Human History
Companies are increasingly turning to artificial intelligence (AI) tools to improve customer experience, boost employee efficiency, and accelerate innovation. AI is one of the most disruptive technological advancements in human history, on par with the development of agriculture or the advent of the industrial revolution. Goldman Sachs predicts up to 350 million white-collar jobs will be degraded or lost to AI technologies within the next five years. From self-refining AIs to AI-powered surgery robots to AI lawyers and accountants, nearly any job that can be represented as a data-decision flowchart will be impacted by AI in the next three to five years, and not necessarily to the benefit of the practitioner.
Into this context steps the data protection leader seeking to assure the organization's safe usage of AI technologies. "Safe" is the operative term, as it aligns several key data protection concepts (security, compliance, confidentiality, and other assurances) to a fundamentally human requirement: safety must underscore all material business motions. While AI may differ from previous automation technologies, reputable audit frameworks exist and should be consulted when shaping an AI governance strategy. According to a recent report, 69% of companies have started implementing responsible AI practices, but only 6% have operationalized their capabilities to be responsible by design. This sea change poses a confluence of risks that can be addressed through a set of first principles for the assurance of AI workloads.
Assurable AI First Principles for CISOs
A starting point for assurable AI first principles can be to restate existing trust and safety mandates in the appropriate frame to align enforcement to existing control capabilities. As an example, a CISO can present the following first principles as part of their larger assurance governance programs:
Do not enter any customer data, PII, trade secrets, or confidential information into any unvetted AI tool or service.
Do not use AI-generated code in production workloads without first running it through standard code QA.
All AI output must be validated by a human for quality, compliance, and correctness prior to use in confidential workflows.
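First principles like these can be partially enforced as pre-submission controls. The sketch below is purely illustrative: the tool names, the allow-list, and the `may_submit` helper are all hypothetical, and the toy regexes stand in for a real DLP engine backed by the organization's actual vetting records.

```python
import re

# Hypothetical allow-list of AI services that have passed supplier diligence.
VETTED_TOOLS = {"approved-copilot", "approved-summarizer"}

# Illustrative patterns for data that must never leave the company;
# email addresses and SSN-like numbers stand in for broader PII detection.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-like number
]

def may_submit(tool_name: str, prompt: str) -> bool:
    """Allow a prompt only if the tool is vetted and the prompt contains
    no obviously confidential data. A production control would use a
    dedicated DLP engine rather than a handful of regexes."""
    if tool_name not in VETTED_TOOLS:
        return False
    return not any(p.search(prompt) for p in PII_PATTERNS)
```

For example, `may_submit("approved-copilot", "Summarize our public roadmap")` passes, while the same prompt sent to an unvetted tool, or any prompt containing an email address, is blocked.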
Consider establishing a permanent Strategic AI Council (SAIC) with leaders from the legal, security, product, operations, revenue, marketing, and customer organizations, along with subject matter experts (SMEs). The SAIC sets AI strategy, governs the scope of usage, and measures the impact AI is having on the revenue, customer, and valuation journeys. The SAIC authorizes each functional department to establish an AI Innovation Team (AIIT) that identifies and vets potential department-level AI services that can accelerate the value mission. The AIIT also serves as the local point of governance for operational AI-related decisions against safety criteria issued jointly by assurance and legal leadership.
Once the AIIT discovers a possible AI solution, it is put through the standard supplier diligence review and onboarding, with additional AI-specific diligence around accuracy, alignment, ethics, and understandability. Each vetted AI service is assigned an accountable operations owner who is trained to oversee operational compliance criteria for confidential workloads, ensuring front-line compliance motions are ready for internal audit review. Since most AI toolstacks evolve quickly, they should be placed on six-month review cycles (rather than twelve) to ensure capabilities and use cases do not exceed safety criteria.
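The six-month review cycle can be tracked with a simple inventory check. As a minimal sketch, assuming a hypothetical inventory of vetted services and an `overdue_reviews` helper of my own naming, flagging services due for re-review might look like:

```python
from datetime import date, timedelta

# Roughly six months; tune to the organization's actual review policy.
REVIEW_CYCLE = timedelta(days=182)

# Hypothetical inventory: vetted AI service -> date of last diligence review.
services = {
    "approved-copilot": date(2024, 1, 15),
    "approved-summarizer": date(2023, 5, 1),
}

def overdue_reviews(inventory: dict[str, date], today: date) -> list[str]:
    """Return services whose last review is older than the six-month
    cycle, so the AIIT can re-check them against current safety criteria."""
    return sorted(
        name
        for name, last_review in inventory.items()
        if today - last_review > REVIEW_CYCLE
    )
```

Run against the sample inventory on 2024-03-01, only "approved-summarizer" is flagged, since its last review falls outside the six-month window.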
While AI technologies can help companies improve customer experience, boost employee efficiency, and accelerate innovation, they must be approached responsibly, with safety and compliance in mind. For accountable leaders, this means establishing first principles for the assurance of AI workloads and ensuring rigorous standards of diligence for individual AI services. With a clear strategy and the right governance in place, executives can leverage AI safely and ethically to benefit their organizations.