The Foundation of AI Governance

AI Use & Risk Assessment for
Regulated Organizations

Every AI governance journey starts with the same question: what AI do we have, and what risks does it carry? Without a complete, accurate answer, governance is guesswork and compliance is a liability.

Jared Clark, JD, MBA, PMP, CMQ-OE, RAC

4–6 Week Assessment Timeline
5 Comprehensive Deliverables
$15K Starting Investment

The Problem

Most Organizations Can't Answer Basic Questions About Their AI Systems

AI adoption is outpacing AI governance across every regulated industry. Organizations are deploying machine learning models, embedding AI-powered vendor tools, and automating decisions at an accelerating pace — but few have the foundational visibility needed to manage what they've built. Without knowing what you have, governance is performative and compliance is a guessing game.

How many AI/ML models are in production?

Most organizations can account for their flagship AI systems but significantly undercount the total. Shadow AI, embedded vendor AI, and experimental models typically outnumber known systems by a factor of three to five. You cannot govern what you cannot see.

Which decisions do AI systems influence?

AI systems now influence hiring decisions, credit approvals, clinical diagnoses, quality inspections, and procurement recommendations. The line between "decision support" and "automated decision" is blurry — and regulators are paying close attention to the distinction.

What data do they use and where does it come from?

Data provenance is a governance blind spot. AI systems may ingest personal data, protected health information, financial records, or proprietary datasets — often through pipelines that were never designed with governance in mind. Data lineage documentation is rarely adequate.

What happens when they fail?

Failure modes for AI systems are fundamentally different from traditional software. Models degrade silently. Distribution drift goes undetected. Bias emerges over time. Without monitoring and incident response protocols, failures compound before anyone notices — and the consequences in regulated industries can be severe.

Who is accountable for each system?

Accountability for AI systems is often diffused across data science, IT, product, and business teams — which in practice means no one is accountable. When regulators ask who is responsible for an AI system's outputs, the answer needs to be specific, documented, and defensible.

The bottom line

Without answers to these questions, every AI governance decision your organization makes is built on assumptions rather than evidence. An AI risk assessment replaces assumptions with data — giving you the foundation to govern AI with confidence and comply with regulations from a position of strength.

Scope of Assessment

What We Assess: Five Dimensions of AI Risk

Our AI risk assessment covers the complete landscape of organizational AI exposure. Each dimension is assessed systematically using frameworks drawn from ISO 42001, NIST AI RMF, and the EU AI Act — adapted to your industry and regulatory context.

AI Use Inventory

Complete organizational AI catalog

A comprehensive inventory of every AI system across the organization. This includes production machine learning models, AI features embedded in vendor products and SaaS platforms, internal AI-powered tools and automation, experimental and pilot AI projects, and generative AI usage across business units. The inventory captures system purpose, data inputs, decision influence, business criticality, and current governance controls for each identified system.

Our discovery methodology goes beyond self-reporting. We analyze procurement records, IT asset inventories, cloud service configurations, and API integrations to surface AI systems that stakeholders may not recognize as AI. The result is a register that serves as the authoritative source of truth for all subsequent governance activities.
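For illustration, the per-system fields the register captures could be modeled roughly as follows. The field names and the example entry are hypothetical sketches, not our production schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI system register (illustrative fields only)."""
    name: str
    purpose: str                      # what the system does
    data_inputs: list[str]            # e.g. ["PHI", "transaction history"]
    decision_influence: str           # "decision support" or "automated decision"
    business_criticality: str         # "low" | "medium" | "high"
    owner: str                        # an accountable individual, not a team
    governance_controls: list[str] = field(default_factory=list)

# A vendor AI feature counts as a system in its own right:
copilot = AISystemRecord(
    name="Microsoft Copilot",
    purpose="Drafting and summarization in Office documents",
    data_inputs=["internal documents", "email"],
    decision_influence="decision support",
    business_criticality="medium",
    owner="VP, IT Operations",
)
```

The key design point is the `owner` field: the register forces a named individual, which is what makes accountability specific, documented, and defensible.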

Risk Classification

EU AI Act risk tier alignment

Every inventoried AI system is classified according to the EU AI Act's four-tier risk framework: unacceptable risk (prohibited), high-risk (subject to strict requirements), limited risk (transparency obligations), and minimal risk (no specific obligations). This classification is not merely an academic exercise — it determines your compliance obligations, documentation requirements, and the level of human oversight each system demands.

Beyond the EU AI Act tiers, we layer in sector-specific risk factors relevant to your industry. A clinical decision support system in healthcare carries different risk dimensions than a credit scoring model in financial services, even if both classify as high-risk under the EU AI Act. Our risk classification captures these sector-specific nuances and maps them to applicable regulatory requirements, giving you a risk profile that is both regulatory-aligned and operationally relevant.
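To make the tiering concrete, here is a minimal sketch of the four tiers and a hypothetical first-pass screen. Real classification follows the Act's Annex III use-case definitions and prohibited-practice list, not keyword matching; the trigger set below is invented for illustration:

```python
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited"              # e.g. social scoring
    HIGH = "strict requirements"             # e.g. credit scoring, hiring
    LIMITED = "transparency obligations"     # e.g. customer-facing chatbots
    MINIMAL = "no specific obligations"      # e.g. spam filters

# Hypothetical keyword triggers for a first-pass screen only.
HIGH_RISK_DOMAINS = {"credit scoring", "hiring", "clinical decision support"}

def first_pass_tier(use_case: str) -> AIActRiskTier:
    """Rough initial screen; every result is reviewed by an analyst."""
    if use_case in HIGH_RISK_DOMAINS:
        return AIActRiskTier.HIGH
    return AIActRiskTier.MINIMAL
```

A screen like this is only a triage step: it surfaces candidates for the detailed Annex III analysis that determines actual obligations.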

Governance Gap Analysis

ISO 42001 & NIST AI RMF benchmarking

We assess your current AI governance practices against the control requirements of ISO 42001 (the international standard for AI management systems) and the NIST AI Risk Management Framework. The gap analysis covers organizational governance structures, risk management processes, data governance practices, model lifecycle management, monitoring and incident response, transparency and documentation, and human oversight mechanisms.

The output is a detailed gap matrix that shows, control by control, where your organization meets requirements, where partial controls exist, and where controls are absent entirely. This matrix becomes the foundation for your remediation roadmap and, if you pursue ISO 42001 certification, your implementation plan. For organizations with existing quality management systems (ISO 9001, ISO 13485, ISO 27001), we map integration points to avoid creating redundant governance structures.

Regulatory Exposure Mapping

Multi-regulation compliance matrix

AI systems in regulated industries are rarely subject to a single regulation. A healthcare AI system may simultaneously need to comply with the EU AI Act, FDA AI/ML guidance, HIPAA, and GDPR. A financial services AI model may face requirements from the EU AI Act, Federal Reserve and OCC model risk management guidance (SR 11-7 / OCC Bulletin 2011-12), the SEC, and state-level consumer protection laws. Our regulatory exposure mapping identifies every applicable regulation for each AI system and scores the compliance priority based on enforcement timelines, penalty severity, and organizational readiness.

The resulting compliance matrix shows your organization exactly which regulations apply to which systems, what the compliance deadlines are, and where the highest-priority gaps exist. This is the document that transforms regulatory complexity from an overwhelming landscape into a manageable, prioritized work plan.
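As a simplified illustration of how a priority score can combine the three factors above, consider the sketch below. The weights, scales, and the example inputs are hypothetical, not our published methodology:

```python
from datetime import date

def compliance_priority(deadline: date, penalty_severity: int,
                        readiness: int, today: date) -> float:
    """Score roughly 0-100; higher means act sooner.

    penalty_severity and readiness are 1-5 analyst ratings.
    Weights are illustrative only.
    """
    months_left = max((deadline - today).days / 30.4, 0.5)
    urgency = min(24 / months_left, 10)   # caps out under ~2.4 months
    gap = 6 - readiness                   # low readiness -> bigger gap
    return round(4 * urgency + 6 * penalty_severity + 6 * gap, 1)

# Example: a high-penalty regulation with an Aug 2, 2026 deadline,
# assessed in mid-2025 by an organization with low readiness.
score = compliance_priority(date(2026, 8, 2), penalty_severity=5,
                            readiness=2, today=date(2025, 7, 1))
```

Whatever the exact formula, the point is that the matrix turns three independent judgments into a single, comparable ranking across all systems and regulations.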

Third-Party AI Risk

Vendor AI, API services & embedded AI assessment

For many organizations, the largest source of AI risk is not the models they build internally — it is the AI systems they procure from vendors. Microsoft Copilot, Salesforce Einstein, Amazon Web Services AI services, ChatGPT integrations, and dozens of other vendor AI systems are being adopted across business units, often without governance oversight or formal risk assessment. These third-party AI systems create exposure that your organization is accountable for, regardless of who built the underlying model.

Under the EU AI Act, deployers of AI systems bear compliance obligations even when the AI is developed by a third party. This means your organization needs governance controls over vendor AI just as much as over internally developed AI — and in many cases, the governance challenge is harder because you have less visibility into how the system works.

Our third-party AI risk assessment evaluates vendor AI governance practices, data handling and privacy policies, contractual protections and liability allocation, transparency and explainability capabilities, your organization's ability to maintain meaningful human oversight, and supply chain concentration risk where multiple systems depend on the same underlying AI provider.

We also help establish a vendor AI risk assessment framework that your procurement and compliance teams can use going forward, so every future AI procurement decision includes governance considerations from the outset rather than as an afterthought.

The Process

A Structured, Four-Phase Assessment Methodology

Our assessment follows a disciplined methodology refined through years of quality systems and regulatory affairs work. Each phase builds on the previous one, ensuring thorough discovery, rigorous analysis, and actionable outputs. The typical engagement runs 4 to 6 weeks from kickoff to final deliverables.

PHASE 1 · WEEKS 1–2

Scoping & Stakeholder Interviews

We begin by defining assessment boundaries and conducting structured interviews with AI system owners, IT leadership, data science teams, compliance officers, and business unit leaders. The goal is to map organizational AI usage patterns, understand existing governance structures, and identify where AI decisions have the highest business and regulatory impact.

  • Define assessment scope and boundaries
  • Conduct stakeholder interviews
  • Map organizational AI usage patterns
  • Identify regulatory and business context

PHASE 2 · WEEKS 2–4

System Discovery & Documentation

With scoping complete, we conduct a thorough technical review of AI systems across the organization. This goes beyond stakeholder self-reporting to include procurement record analysis, cloud service auditing, API integration mapping, and vendor contract review. Every identified system is documented with its purpose, data inputs, decision scope, and current governance controls.

  • Technical review of all AI systems
  • Data flow and provenance mapping
  • Complete model inventory with metadata
  • Documentation gap identification

PHASE 3 · WEEKS 3–5

Risk Classification & Gap Analysis

With the inventory complete, we apply the EU AI Act risk tier framework to every system, conduct a controls assessment against ISO 42001 and NIST AI RMF, and map each system to its applicable regulatory requirements. This phase produces the core analytical deliverables: risk classifications, the gap analysis matrix, and the regulatory exposure map.

  • EU AI Act risk tier classification
  • ISO 42001 controls assessment
  • Governance gap identification
  • Regulatory exposure scoring

PHASE 4 · WEEKS 5–6

Remediation Roadmap & Executive Briefing

The final phase synthesizes all assessment findings into actionable deliverables. The remediation roadmap is a prioritized, sequenced action plan with effort estimates and timelines. The executive briefing translates technical findings into board-ready language. We present findings to both technical stakeholders and executive leadership, ensuring alignment on priorities.

  • Prioritized remediation roadmap
  • Board-ready executive summary
  • Technical stakeholder presentation
  • Implementation guidance and next steps

Deliverables

What You Receive: Five Actionable Deliverables

Every assessment produces five comprehensive deliverables. These are not generic templates — each deliverable is built from the specific findings of your assessment and tailored to your organization's regulatory context, industry requirements, and governance maturity.

AI System Inventory

Complete register of all identified AI systems with risk classifications, data inputs, decision scope, business criticality, accountability assignments, and current governance controls. This becomes your authoritative AI system register for ongoing governance.

Risk Assessment Report

Detailed analysis of AI risk across the organization, including EU AI Act risk tier classifications, sector-specific risk factors, regulatory exposure mapping with priority scoring, and third-party AI risk evaluation. The definitive document for understanding your AI risk posture.

Gap Analysis Matrix

Control-by-control comparison of your current governance state against ISO 42001, NIST AI RMF, and applicable sector-specific requirements. Shows where controls are met, partially met, or absent — with specific recommendations for closing each gap.

Prioritized Remediation Roadmap

A phased, sequenced action plan with effort estimates, resource requirements, and recommended timelines for closing governance gaps. Priorities are ordered by regulatory deadline urgency, risk severity, and implementation complexity to ensure the highest-impact actions come first.
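The ordering logic described above can be sketched in a few lines; the action names and scores are invented for illustration:

```python
# Orders remediation actions as the roadmap describes: nearest
# regulatory deadline first, then higher risk severity, then lower
# implementation complexity (quicker wins first).
actions = [
    {"name": "Document human oversight for credit model",
     "deadline_months": 10, "risk": 5, "complexity": 3},
    {"name": "Vendor AI contract addenda",
     "deadline_months": 10, "risk": 3, "complexity": 2},
    {"name": "Stand up AI governance committee",
     "deadline_months": 4, "risk": 4, "complexity": 2},
]

roadmap = sorted(actions, key=lambda a: (a["deadline_months"],
                                         -a["risk"],
                                         a["complexity"]))
```

Under this ordering the governance committee comes first (nearest deadline), and among actions sharing a deadline, the higher-risk item outranks the easier one.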

Executive Briefing Deck

A board-ready presentation summarizing key findings, risk exposure, compliance gaps, and recommended investments. Designed for non-technical leadership audiences — translates technical assessment findings into business language that supports governance budgeting and strategic decision-making.

Delivered in 4–6 Weeks

All five deliverables are produced within the assessment timeline. You walk away with a complete picture of your AI risk landscape and a clear path forward — not a vague recommendation to "do more governance."

Who This Is For

Is Your Organization Ready for an AI Risk Assessment?

An AI risk assessment is the right starting point for organizations that are deploying AI in environments where failures carry regulatory, financial, or reputational consequences — but lack the foundational visibility to govern those systems with confidence. If any of the scenarios below sound familiar, this assessment is built for your situation.

The assessment is also the recommended first step for organizations that know they need AI governance but are unsure where to start. Rather than investing in a governance framework without understanding your actual AI landscape, the assessment gives you the evidence base to make informed decisions about governance investments, organizational structure, and regulatory compliance strategy.

Regulated Industry AI Deployers

Organizations deploying AI in healthcare, pharma, financial services, manufacturing, or defense that need to understand their regulatory exposure before it becomes an enforcement action.

EU AI Act Preparation

Companies that sell into or operate within the EU and need to prepare for the August 2, 2026 enforcement deadline. The compliance window is narrowing — organizations need 12+ months to implement full compliance.

ISO 42001 Certification Candidates

Organizations considering ISO 42001 certification who need a gap analysis as the first step toward implementation. The assessment directly feeds the certification roadmap.

Boards & Executive Leadership

Boards and C-suite executives who need visibility into AI risk across the organization. The executive briefing deck provides the concise, strategic view that governance committees and audit committees need.

Quality & Compliance Teams

Quality and compliance professionals whose mandate is expanding to include AI governance. The assessment provides the structured foundation that QMS-trained professionals need to extend their expertise into AI oversight.

Investment

AI Risk Assessment: Starting at $15,000

Assessment scope and pricing depend on organization size, number of AI systems, geographic distribution, and regulatory complexity. Typical engagements range from $15,000 for focused assessments of organizations with fewer than 20 AI systems to $75,000 for enterprise-wide assessments spanning multiple divisions, geographies, and regulatory frameworks.

We scope every engagement with a free consultation to ensure the assessment is right-sized for your organization. No surprises, no scope creep — you know the investment before we start.

FAQ

AI Risk Assessment: Frequently Asked Questions

How long does an AI risk assessment take?

A typical AI risk assessment takes 4 to 6 weeks from kickoff to final deliverables. The timeline depends on organization size, the number of AI systems in scope, and how distributed your AI usage is across business units. Organizations with fewer than 20 AI systems and centralized AI governance can often complete the assessment in 4 weeks. Larger organizations with decentralized AI adoption across multiple divisions, geographies, or regulatory jurisdictions may require 6 to 8 weeks. The assessment is structured into four phases: scoping and stakeholder interviews (weeks 1–2), system discovery and documentation (weeks 2–4), risk classification and gap analysis (weeks 3–5), and remediation roadmap with executive briefing (weeks 5–6).
What if we don't know how many AI systems we have?

That is actually the most common starting point — and the primary reason organizations need an AI risk assessment. Most companies significantly undercount their AI systems because they only think about purpose-built machine learning models while overlooking AI embedded in vendor software, SaaS platforms, cloud services, and internal productivity tools. Our discovery process is specifically designed for this situation. We use a combination of stakeholder interviews, procurement record analysis, IT asset inventory review, and technical architecture mapping to surface AI systems that organizations did not know they had. It is not unusual for a mid-market organization to discover 3 to 5 times more AI systems than they initially estimated.
Do you assess third-party and vendor AI systems?

Yes. Third-party AI risk is a critical component of every assessment we conduct. Many organizations have more exposure to AI risk through vendor and third-party systems than through internally developed AI. We evaluate AI embedded in SaaS platforms, API-based AI services like OpenAI or Anthropic, AI features in enterprise software such as Salesforce Einstein or Microsoft Copilot, and AI-enabled hardware or devices. For each third-party AI system, we assess the vendor's AI governance practices, data handling policies, contractual protections, and your organization's ability to maintain oversight and accountability. We also help establish vendor AI risk assessment frameworks that your procurement team can use going forward.
How does the assessment relate to ISO 42001 certification?

An AI risk assessment is a foundational prerequisite for ISO 42001 certification — but the two are distinct. ISO 42001 requires organizations to establish an AI management system that includes risk assessment as a core process. Our AI risk assessment delivers the baseline data and gap analysis that feeds directly into an ISO 42001 implementation. Specifically, the AI system inventory maps to Clause 6.1.2 (AI risk assessment), the risk classification maps to Annex A controls, and the gap analysis matrix explicitly benchmarks your current state against ISO 42001 requirements. Many clients use our risk assessment as Phase 1 of a broader ISO 42001 certification journey, but the assessment is also valuable as a standalone engagement for organizations that need governance clarity without pursuing formal certification. Learn more about ISO 42001 implementation →
What happens after the assessment?

The assessment delivers a prioritized remediation roadmap that gives you a clear, sequenced action plan. Most organizations take one of three paths after the assessment. First, some organizations use the roadmap to implement remediation internally, using the assessment deliverables as their guide. We can provide periodic check-ins to answer questions and review progress. Second, many organizations engage us on a monthly advisory retainer to guide implementation of the remediation roadmap, develop governance policies and procedures, and prepare for regulatory compliance deadlines. Third, organizations that need dedicated AI governance leadership may transition to a fractional Chief AI Officer engagement, where we provide ongoing strategic direction, governance committee leadership, and regulatory relationship management. The right path depends on your internal capabilities, timeline pressure, and the complexity of your AI governance gaps.

Ready to Understand Your AI Risk Landscape?

Start with a free 30-minute consultation. We will discuss your organization's AI landscape, regulatory exposure, and what an assessment looks like for your specific situation. No sales pitch — just an honest conversation about where you stand and whether an assessment is the right next step.

Or email support@certify.consulting