As AI hype props up the economy, trust in AI capabilities has become central for end users and customers of B2B SaaS companies alike. After all, AI brings many promises, but also many threats.
Without trust, there is no business. How do you build trust in systems whose marketing relies on mysticism? Do we need to think up something new and follow the “visionaries” of the tech industry while they compulsively reinvent the wheel?

Or can we draw on the decades of experience built while developing new technologies like planes, trains, and automobiles? 🦃 Indeed, we can. As with any other type of product, we can build trust by implementing standards of quality management and safety, demonstrating compliance to regulatory bodies, and building a culture of incident management. Hence the dawn of AI Governance.
The easiest part of AI Governance is understanding what it is. AI Governance is the process organizations implement to manage standards of quality and safety in their AI systems (internal tools and products alike). AI Governance frameworks abound, and each of them will tell you what you should implement. The difficult part of AI Governance is how to implement those processes.
From January to June 2025, I threw myself into the task of implementing an AI Governance program, which was externally audited in July 2025, leading to Zendesk becoming one of the first CX companies to be ISO 42001-certified. That means our AI Governance program complied with the standards of a good AI management system, as viewed by ISO. Thus, I have a thing or two to say about the effective implementation of AI Governance.
AI Governance, the What is Easier than the How
I will cut to the chase: most AI Governance frameworks overlap each other significantly, and my preference goes to the NIST AI Risk Management Framework (NIST AI RMF) for several reasons.
- It is a well-rounded AI Governance framework that doesn’t overly focus on cybersecurity or compliance;
- NIST AI RMF is open source: all of its playbooks and documentation are freely available on the NIST website, and the core document is very digestible (a mere 40 pages 😬);
- Like other NIST frameworks, it is well regarded and recognized in the US and by some global customers.
- If your AI Governance program aligns with NIST, you are 90% of the way there for ISO 42001.
- It also aligns very well with the EU AI Act.
An Overview of NIST AI RMF in 60 Seconds or Less
The NIST AI RMF proposes to build an AI Governance program around four connected functions (see Section 5):
- Govern: Create a governance structure across your organization that has the authority to control, execute, and oversee the other functions.
- Map: Map the context of AI in your organization: inventory your AI systems, list the AI risks you want to track, and map which system carries which risk (a minimal sketch of such an inventory follows this list).
- Measure: Measure the specific risks instantiated in your AI systems.
- Manage: Manage these risks, by mitigation, treatment, etc.
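To make the Map function concrete, here is a minimal sketch of what an AI system inventory mapped to risks can look like. The system names and risk labels are hypothetical; use whatever taxonomy you settle on later.

```python
# Illustrative only: a hypothetical inventory mapping AI systems to the
# risks you have decided to track. Names and labels are made up.
ai_inventory = {
    "ticket-summarization": ["hallucination", "data_leakage", "prompt_injection"],
    "reply-suggestions": ["hallucination", "harmful_content"],
    "internal-coding-assistant": ["ip_infringement", "data_leakage"],
}

# The Map function boils down to answering: which systems carry which risk?
systems_with_data_leakage = [
    name for name, risks in ai_inventory.items() if "data_leakage" in risks
]
print(systems_with_data_leakage)  # ['ticket-summarization', 'internal-coding-assistant']
```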
Again, NIST AI RMF tells you what you should do, but not exactly how. That is because the how depends on your organization and its culture, established norms, and existing governance frameworks.
An AI Governance Program in Many Easy Steps
Here is my strategy; your mileage will vary based on your organization.
Step 1: Seek Senior Executive Sponsorship
I once asked a senior executive what I would learn if I prepared for the CISSP certification. His answer was the following joke:
Here is an example question from CISSP: The building is on fire. What should you do first?
- Call 911
- Find the fire extinguisher
- Seek senior leadership sponsorship
The correct answer is 3.
Even the most efficient AI Governance program may be perceived as a barrier to product development. Therefore, you need a full spectrum of senior executive sponsorship to lend authority to this project. It is a good idea to design the implementation of the program as a company-wide initiative with multiple domains (Legal, Security, Engineering, Marketing, etc.) as stakeholders. That gives it a time-bound, objective-driven reality that will help guide your strategy. This group of senior executives also becomes the basis of the NIST AI RMF “Govern” function: they become the executive oversight once you slap a charter on it!
Step 2: Reduce. Reuse. Recycle
If your organization already has a Governance, Risk, and Compliance (GRC) structure, you should absolutely try to fit an AI Governance program within it. List the processes, meet with their stakeholders, and research how an AI Governance program could be integrated. It is also the time to assemble your day-to-day partners across the organization. At a minimum, you will assemble a team of stakeholders from security, legal, compliance, engineering, and product development.
Step 3: Scope
Do you want your AI Governance program to cover all of the AI features in your product? Or only some? None of them, because you only want to manage your internal AI tool usage? Is your scope models rather than AI features (they are not the same 😁)? You need to scope the reach of your AI Governance program and the lens through which you view things. You can consult (for free!) ISO 22989 (Section 5.19), which defines AI stakeholder roles for your organization, and hence the scope of your program.
AI Risks
As part of your scope, you also need to decide which risks you want to track. What are AI risks? AI risks are not just AI security risks like prompt injection; they also include fairness issues and risks related to compliance, disclosure, transparency, etc.
- NIST AI RMF proposes a list of risks, but I find them too general to be useful.
- MIT AI Risk Repository is an academic repository of more than 1,600 risks. I find their nomenclature too granular - and somewhat convoluted - for industrial purposes.
- MITRE Atlas focuses on cybersecurity risks for AI systems, but it lacks risks outside of security.
- OWASP Top 10 for LLMs focuses solely on cybersecurity risks for LLMs; again, this is useful but too narrow.
- The SAIL Framework is one example of the myriad other AI risk frameworks that exist.
I think the most useful AI risk repository is the IBM AI Risk Atlas:
- It is short (60+ risks, some of which you can merge together), comprehensive, and well rounded. It covers security, ethical, legal, and societal risks. Note that it doesn’t include operational risks (e.g., cost management or integration issues).
- Its nomenclature and lens make sense from an industry perspective, rather than a purely academic one.
- It is up-to-date, with a recent update on agentic-specific AI risks.
- It is free.
- It is backed by peer-reviewed research.
Step 4: Map
Once we have defined the scope (breadth) and the lens (depth) of what we need to map, let’s map it. Make a list of all the AI systems within your scope. NIST AI RMF tells you to “map” the AI systems to the AI risks in scope. But how do you actually do that?
The MAP function suggests gathering information and establishing context to identify risks. “The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the MAP” (NIST AI RMF documentation, page 28).
This is where I disagree slightly with NIST AI RMF: I recommend that you do a qualitative AI risk assessment (i.e., a questionnaire) during MAP. In my view, a qualitative AI risk assessment (ISO 42001 calls them “AI impact assessments”) is how you MAP. Collecting all the required context in a structured questionnaire is much easier to evaluate than an unstructured collection of documents or an enormous spreadsheet. Save yourself the trouble.
Qualitative AI Risk Assessments
This might be the second most difficult part of an AI Governance program: you must design an assessment (a questionnaire) that collects all the context necessary to evaluate whether a risk is present or absent within an AI system. Fortunately, once again, you do not need to reinvent the wheel.
For AI risk assessments of the AI features in your product, I strongly recommend perusing the work of IBM Research’s Human-Centric AI research group (which was entirely laid off in Nov. 2025. What a short-sighted decision!) and their FactSheet project: 1,2,3. It is not just about model cards; it is about effectively communicating technical context to non-technical people. FactSheet is what powers IBM Watson Governance. It’s an impressive product, but since they fired their entire research team, one must wonder how long it will stay relevant…
For AI risk assessments of third-party providers, you can check out the Cloud Security Alliance AI-CAIQ (pronounced “AI-cake”), which is a standardized questionnaire written from a cloud security perspective. It is not the panacea of AI Governance, but it is a start.
A qualitative AI risk assessment needs to collect information on the following (a minimal schema sketch follows this list):
- Context of usage of the AI system (who is the end user, what data goes in and out, etc.)
- Who is accountable for this system (the engineers, the product managers, the scientists, etc.)
- Documentation to support transparency and explainability of the system (architecture, monitoring, code repository, model R&D)
- How the legal and compliance requirements are met by this system
- The security and AI security features implemented
- Anything related to third-party management
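Here is a minimal sketch of how those sections can translate into a structured questionnaire. The field names are hypothetical and should be adapted to your program; the point is that every answer lands in a predictable slot you can later score.

```python
from dataclasses import dataclass, field

# Illustrative only: one possible schema for a qualitative AI risk assessment.
# Field names are hypothetical; adapt them to your own program.
@dataclass
class AIRiskAssessment:
    system_name: str
    # Context of usage
    end_users: str = ""                                    # who interacts with the system
    data_in: list[str] = field(default_factory=list)       # data categories ingested
    data_out: list[str] = field(default_factory=list)      # data categories produced
    # Accountability
    owners: dict[str, str] = field(default_factory=dict)   # role -> person or team
    # Transparency and explainability
    architecture_doc: str = ""                              # link to architecture docs
    code_repository: str = ""                               # link to the code repository
    monitoring_dashboard: str = ""                          # link to monitoring
    # Legal and compliance
    legal_review_completed: bool = False
    applicable_regulations: list[str] = field(default_factory=list)
    # Security and AI security
    security_controls: list[str] = field(default_factory=list)
    # Third-party management
    third_party_models: list[str] = field(default_factory=list)
```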
You can check out the new ecosystem of AI Governance tools that support the design and implementation of those assessments (e.g., OneTrust, Credo).
Once you have collected all that context, you can MAP it to AI risks: assign points or values to answers and design threshold scores for each risk, as sketched below. However, for some of those risks, in particular AI security risks, this will only tell you whether the risk is present or absent in your AI system. Hence, the MEASURE step.
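As a hedged illustration of that scoring idea (the answers, weights, and thresholds below are invented, not prescriptive), the mapping can be as simple as:

```python
# Illustrative only: turn questionnaire answers into a per-risk score and
# compare it against a threshold. Weights and thresholds are made up.
RISK_RULES = {
    "data_leakage": {
        "weights": {"handles_pii": 3, "third_party_model": 2, "no_dlp_control": 2},
        "threshold": 4,
    },
    "lack_of_transparency": {
        "weights": {"no_architecture_doc": 2, "no_model_card": 3},
        "threshold": 3,
    },
}

def map_risks(answers: dict[str, bool]) -> dict[str, bool]:
    """Return {risk: present?} for one AI system, given boolean questionnaire answers."""
    result = {}
    for risk, rule in RISK_RULES.items():
        score = sum(w for answer, w in rule["weights"].items() if answers.get(answer))
        result[risk] = score >= rule["threshold"]
    return result

answers = {"handles_pii": True, "third_party_model": True, "no_model_card": True}
print(map_risks(answers))  # {'data_leakage': True, 'lack_of_transparency': True}
```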
Step 5: Measure
A quantitative AI risk assessment is how you MEASURE the probability and impact of a risk. Essentially, if we know from MAP that an AI system carries a risk of bias, then we must measure that bias quantitatively. Quantitative risk assessments are difficult, and in my view, the most difficult part of a mature AI Governance program. I have many thoughts to share about them, so many, in fact, that they will be another blog altogether…
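Just to give a flavor of what such a measurement can look like, here is a minimal sketch computing a demographic parity gap from model outcomes. The records and the tolerance are invented for illustration; real fairness measurement deserves far more care (and that future blog).

```python
# Illustrative only: measure a bias risk with a demographic parity gap,
# i.e., the difference in positive-outcome rates between two groups.
# The records and the 0.10 tolerance are made up for the example.
records = [
    {"group": "A", "positive": True},
    {"group": "A", "positive": True},
    {"group": "A", "positive": False},
    {"group": "B", "positive": True},
    {"group": "B", "positive": False},
    {"group": "B", "positive": False},
]

def positive_rate(group: str) -> float:
    subset = [r for r in records if r["group"] == group]
    return sum(r["positive"] for r in subset) / len(subset)

gap = abs(positive_rate("A") - positive_rate("B"))
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here, above a 0.10 tolerance
```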
Step 6: Manage (and Maintain)
Let’s look back at steps 1 and 2. By now, you have senior executive sponsorship, and you have assembled a cross-organization committee that has driven the MAP and MEASURE steps. This committee must now MANAGE the AI risks found in your organization throughout the life cycle of your AI systems. This becomes a continuous and perpetual effort, which must be maintained and not left to fizzle out.
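To make that maintenance concrete, here is a minimal sketch of a risk register entry with an owner, a treatment, and a review date that the committee can track on a recurring cadence. Field names and values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: one possible shape for a risk register entry that the
# governance committee reviews regularly. Values are made up.
@dataclass
class RiskRegisterEntry:
    system_name: str
    risk: str
    owner: str
    treatment: str          # e.g., "mitigate", "accept", "transfer", "avoid"
    mitigation: str         # what is actually being done
    status: str             # e.g., "open", "mitigated", "accepted"
    next_review: date

register = [
    RiskRegisterEntry("ticket-summarization", "hallucination", "ml-platform-team",
                      "mitigate", "groundedness evaluation in CI", "open",
                      date(2025, 9, 1)),
]

# MANAGE is the recurring loop: surface what is due for review and act on it.
overdue = [entry for entry in register if entry.next_review <= date.today()]
for entry in overdue:
    print(f"Review due: {entry.system_name} / {entry.risk} (owner: {entry.owner})")
```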
Step 7: ISO 27001 is a Short-Cut
“ISO 42001: AI Management System” is a certification recognized internationally as “the AI Governance certification”. It overlaps significantly with NIST AI RMF. Since NIST AI RMF is not certifiable, ISO 42001 offers a way to externally certify the maturity of an AI Governance program. ISO 42001 is the little sibling of ISO 27001, and part of my rapid success came from the fact that my AI Governance program stood on the shoulders of our ISO 27001 certification. If your organization is already ISO 27001-certified, you must coordinate with your Compliance team when designing your AI Governance program to cover the parts of ISO 42001 that aren’t suggested by NIST AI RMF. For example, NIST AI RMF doesn’t talk about AI incident management or internal training.
AI Governance is the New Differentiator
Implementing an AI Governance program will require you to foster a culture of collaboration and cross-domain expertise. It is not easy, but it is a necessity. AI Governance is rapidly becoming the new market differentiator. By my count, Zendesk was the fifth customer service company to obtain the ISO 42001 certification. Salesforce obtained it a few days after Zendesk, but not for all its products 😛. So if you are a B2B company with an AI offering, demonstrating the existence and maturity of an AI Governance program will be the norm within the next couple of years. The best time to start an AI Governance program was a year ago. The next best time is now!