Artificial Intelligence (AI) is a powerful invention, a tool that enhances human productivity when combined properly with human intelligence. What we are seeing in the industry today is an acceleration in automation: corporations are leveraging AI to take care of mundane tasks and activities, channeling the freed-up human capacity toward ground-breaking initiatives.
3 Pillars for Ethical Practices around AI
From a cybersecurity perspective, leaders play a key role in enabling innovation in a safe and secure fashion. They are also uniquely positioned to aid cultural transformation, i.e., cultivating responsible behavior among the workforce. There are three pillars cybersecurity leaders must consider, and that the organization must abide by, to instill ethical practices around AI:
- Data Protection
- Privacy Compliance
- Governance
Leaders must start by creating awareness of AI tools, sharing information on how to leverage these tools effectively without compromising sensitive information, e.g., proprietary, privacy, and competitive data. Next, develop a framework that clearly articulates the requirements the organization must comply with, communicate the standards, and embed them in the organization's annual compliance training.
On the awareness front, highlight and emphasize the tools' shortcomings (such as bias), their risks, and the importance of ethics for both internal and external use. Treat output from GenAI models the way you would treat content on Wikipedia: it must be thoroughly vetted for authenticity, validity, and appropriateness based on how and where it will be used. Ensure that the output received is consistent when the same question is approached from various angles. Utilize research materials from trusted sources and leverage subject matter experts within the organization to make sure that the output is accurate and neither discriminatory nor offensive.
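The consistency check described above can be sketched as a small harness: pose the same question phrased several different ways and compare the answers. This is a minimal sketch, assuming a `query_model` function that stands in for whatever GenAI API the organization actually uses (here it returns canned answers purely for illustration); similarity is measured with Python's standard-library `difflib`.

```python
import difflib

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real GenAI API call.
    Replace with the organization's approved client."""
    canned = {
        "What year was NIST's AI RMF 1.0 released?":
            "NIST released AI RMF 1.0 in January 2023.",
        "When did NIST publish AI RMF 1.0?":
            "AI RMF 1.0 was published by NIST in January 2023.",
    }
    return canned.get(prompt, "")

def consistency_score(prompts: list) -> float:
    """Ask the same question phrased different ways and return the
    minimum pairwise similarity of the answers (0.0 to 1.0).
    A low score flags output that should not be trusted as-is."""
    answers = [query_model(p).lower() for p in prompts]
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            scores.append(
                difflib.SequenceMatcher(None, answers[i], answers[j]).ratio()
            )
    # With fewer than two answers there is nothing to compare.
    return min(scores) if scores else 1.0
```

String similarity is a crude proxy; in practice a reviewer (or a subject matter expert) still judges whether the divergent answers are merely rephrasings or genuine contradictions.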
Develop a process that the organization can utilize to make informed and consistent decisions. Document concrete examples along with the GenAI output, covering both where it succeeded and where it failed.
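One lightweight way to document such examples is a structured record per GenAI use case noting the prompt, the output, and the review outcome. The record fields below are an illustrative assumption, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class GenAIUsageRecord:
    """One documented example of GenAI output and its review outcome."""
    use_case: str        # e.g. "draft internal FAQ"
    prompt: str          # the question posed to the model
    output_summary: str  # short summary of what the model produced
    approved: bool       # did the review process accept the output?
    notes: str = ""      # why it succeeded or failed

def export_log(records: list) -> str:
    """Serialize the decision log so it can be shared and audited."""
    return json.dumps([asdict(r) for r in records], indent=2)
```

Keeping both successes and failures in one auditable log gives future reviewers precedents to draw on, which is what makes decisions consistent over time.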
Develop and implement a governance structure for checking and challenging all use of Generative AI, i.e., a process and committee for reviewing and approving GenAI output for internal and commercial use. Utilize industry frameworks such as NIST AI 100-1, the Artificial Intelligence Risk Management Framework (AI RMF 1.0). The NIST AI framework offers organizations and security professionals guidelines and tools to improve reliability and promote responsible design, development, implementation, and usage.
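The review-and-approve step can be made concrete as a simple gate: a GenAI use case is released only once every checklist item is signed off. The items below loosely echo the four AI RMF core functions (Govern, Map, Measure, Manage); the wording of each item is illustrative, not taken from the framework itself.

```python
# Illustrative checklist keyed by the four AI RMF core functions.
REVIEW_CHECKLIST = {
    "govern": "Responsible owner and approval committee identified",
    "map": "Intended use, audience, and risks documented",
    "measure": "Output vetted for accuracy, bias, and consistency",
    "manage": "Monitoring and rollback plan in place",
}

def approve_for_release(completed: set) -> bool:
    """Approve only when every checklist item has been signed off."""
    return set(REVIEW_CHECKLIST) <= completed
```

A hard gate like this keeps partially reviewed output from slipping into commercial use, while the committee retains discretion over what "signed off" means for each item.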
Corporations are only beginning to leverage Generative AI, yet the industry has already made tremendous progress toward autonomous AI, i.e., Agentic AI. Agentic AI has the capacity to learn on the fly: it can set objectives, reason, plan, execute, and revise based on feedback (Hogan, 2024), though human intervention is still required for validation and approval.
4 Recommendations for Organizations
My recommendations for what businesses should focus on:
- Develop a roadmap for GenAI adoption
- Establish a framework for AI: governance, guiding principles, risk management, etc.
- Start with implementing proven and validated outputs from GenAI for internal use, then slowly begin incorporating proven models into commercial solutions.
- Look ahead and plan ahead: develop plans for embracing Agentic AI, AGI, and other similar innovations happening in the industry.
Recommended courses:
- Udemy offers Generative AI - Risk and Cyber Security Masterclass 2024
- SANS Institute offers AI Security Essentials for Business Leaders
- University of Oxford offers Artificial Intelligence for Cybersecurity
Helpful Resources:
- https://airisk.mit.edu
- https://www.nist.gov/itl/ai-risk-management-framework
- A framework for assessing AI risk | MIT Sloan
- https://owasp.org/www-project-ai-security-and-privacy-guide/
- us-ai-institute-NIST-ai-risk-management
- https://www.apexhq.ai/blog/blog/the-apex-genai-attack-chain/