Latest developments: ForHumanity’s audit push expands
Two years ago, independent audits of AI felt like a policy-panel talking point. This week, ForHumanity says it’s moving from paper to practice, beginning to assist early adopters that want to show their systems can pass an Independent Audit of AI Systems (IAAIS). Healthcare and education get the first sector-focused offerings in February. The move nudges the conversation from guidance to verifiable controls.
ForHumanity IAAIS expansion: From framework to fieldwork
ForHumanity outlined the shift in a LinkedIn update from founder Ryan Carrier, who framed it as a step toward routine, independent checks on high-risk AI. Carrier describes IAAIS as an “infrastructure of trust” and says the nonprofit is now working with early adopters to help them meet governance obligations using the framework’s audit criteria and trained assessors. The post, marked five days old at the time of capture, also sets a date: new Healthcare and Education Technology offerings land in February.
“ForHumanity’s mission is ‘to examine and analyze the downside risk associated with AI, Algorithmic, and Autonomous (AAA) Systems and to mitigate those risk…’” — Ryan Carrier, FHCA
“Our primary implementation of the analysis, examination, and mitigation lies within the infrastructure of trust called Independent Audit of AI Systems (IAAIS).” — Ryan Carrier, FHCA
Carrier’s post pitches IAAIS as available globally—“to every government, commercial endeavor, auditor, and advisor around the world”—and, notably, invites market leaders to engage before laws force the issue. It’s the clearest sign yet that IAAIS isn’t just a library of checklists; ForHumanity wants it used on live systems. The announcement is here: LinkedIn.
“Now, ForHumanity is beginning to assist early adopters of compliance aimed for market-leaders seeking to fulfill their Governance obligations of these tools.” — Ryan Carrier, FHCA
How IAAIS scaled, and what’s next in February
IAAIS didn’t appear overnight. ForHumanity spent years drafting detailed criteria, training assessors, and mapping the work to a growing list of laws and risk domains. The current tally is big: more than 7,000 auditable, implementable criteria across 50-plus certification schemes. The coverage spans privacy, bias, safety, cybersecurity, and fairness regulations from multiple regions, plus use-case-specific controls.
“We have more than 7000 individual, auditable, implementable audit criteria in more than 50 certifications schemes that are jurisdictionally-sensitive and globally harmonized.” — Ryan Carrier, FHCA
Examples cited by ForHumanity include:
- GDPR, the EU AI Act, and the Digital Services Act
- CCPA in California and the NYC AEDT bias audit requirement
- India’s DPDPA, Bermuda’s PIPA, and Nigeria’s NDPA
- Cybersecurity, accessibility, AI literacy, and model risk management
- Automated employment decision tools, consumer duty, and SM&CR
- LLM and AI agent audits with 175+ cataloged use cases
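The "jurisdictionally-sensitive and globally harmonized" design described above can be pictured as a criteria library where each control is tagged with the laws and places it maps to, and an auditor pulls only the applicable subset. A minimal sketch in Python; the criterion IDs, scheme names, and `applicable` helper are hypothetical illustrations, not ForHumanity's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditCriterion:
    """One auditable control, tagged with where it is legally required."""
    cid: str                   # hypothetical criterion ID
    scheme: str                # certification scheme it belongs to
    jurisdictions: frozenset   # jurisdictions where the control applies
    control: str               # what the auditor must verify

# Illustrative entries only; real schemes hold thousands of criteria.
CRITERIA = [
    AuditCriterion("BIAS-001", "NYC AEDT", frozenset({"US-NYC"}),
                   "Annual independent bias audit of employment tools"),
    AuditCriterion("PRIV-014", "GDPR", frozenset({"EU"}),
                   "Lawful basis documented for each processing purpose"),
    AuditCriterion("PRIV-101", "DPDPA", frozenset({"IN"}),
                   "Consent records retained for personal data processing"),
]

def applicable(criteria, jurisdiction):
    """Return the subset of controls an auditor would test in one place."""
    return [c for c in criteria if jurisdiction in c.jurisdictions]

eu_controls = applicable(CRITERIA, "EU")
```

The same library serves every engagement; only the jurisdiction filter changes, which is what lets one control set stay harmonized across borders.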
Healthcare and Education Technology are next in line, with sector-specific criteria rolling out in February. These domains carry obvious risk: clinical decision support can change patient outcomes, and classroom analytics can reshape student trajectories. A harmonized control set that can be independently tested is a practical way to probe those systems before they ship at scale.
Carrier’s LinkedIn post drew 11 comments shortly after going live—modest chatter, but enough to show the audit community is watching the pivot from framework-building to engagements.
What experts say about auditing AI at scale
Carrier and his training cohort have long argued that principles only matter when they’re testable. That’s the point of certified auditors who can convert abstract values into checks, evidence requirements, and repeatable procedures. The two core roles ForHumanity claims in IAAIS—writing criteria and training assessors—are designed to make that translation durable across jurisdictions.
“ForHumanity plays two key roles in IAAIS, we draft the rules (Audit Criteria) for government adoption or voluntary market-based adoption and we training peoples as ForHumanity Certified Auditors (FHCA).” — Ryan Carrier, FHCA
Independent audit gets support from practitioners who work in risk-heavy fields, especially with the new healthcare and edtech tracks on deck. Names attached to ForHumanity’s efforts include:
- Shea Brown, Jo Stansfield, and Dr. Sundaraparipurnan Narayanan
- Enrico Panai, Dr. Cari Miller, and Anne Armstrong (CPACC, FHCA)
- Paul Crafer (FHCA), Chris Leong (FHCA), and Vibhav Mithal (FHCA)
- Katie Grillaert, Pauline H., and Esther Y. Chung, Esq.
- Greg Elliott, Steve English, and Laura C. Morgan
- Willy Tadema, Damian Borstel, and Maud Stiernet
These are the folks likely to turn IAAIS language into evidence trails: model documentation that actually proves what a system does, bias testing that can be reproduced, cybersecurity controls tied to threat models, accessibility checks validated with users, and data governance that aligns to local law. That last part matters across borders. A hospital in Lagos and a clinic in Berlin won’t share the same legal standards, but an auditable control set can be harmonized to both.
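One concrete flavor of "bias testing that can be reproduced": the NYC AEDT bias audit centers on impact ratios, each group's selection rate divided by the highest group's rate, with the conventional four-fifths rule flagging ratios below 0.8. A minimal sketch, assuming simple counts of candidates selected per group; the numbers are illustrative, not real audit data:

```python
def impact_ratios(selected, totals):
    """Selection rate per group divided by the best-off group's rate.

    selected, totals: dicts mapping group name -> counts.
    A ratio below 0.8 is the conventional four-fifths-rule red flag.
    """
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative counts: group A selected 40/100, group B selected 24/100.
ratios = impact_ratios({"A": 40, "B": 24}, {"A": 100, "B": 100})
# B's ratio is 0.6, below the 0.8 threshold, so an auditor would flag it.
```

The point is not the arithmetic, which is trivial, but that fixed inputs and a published procedure let any second auditor rerun the test and get the same answer.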
“Regular independent audits of AAA Systems is the best mechanism for ensuring robust governance, oversight, and accountability of these system.” — Ryan Carrier, FHCA
IAAIS vs. patchwork compliance
Plenty of teams use self-assessments tied to a single law or a vendor’s checklist. Those can miss issues once a product crosses borders or touches new data. IAAIS positions itself as an independent, cross-jurisdictional layer: one place to benchmark against converging rules while still honoring local differences. The promise is less whiplash from juggling conflicting standards and more clarity on what “good enough to ship” looks like.
Healthcare and education buyers might find that appealing. A district deploying a reading app or a hospital piloting an AI triage assistant needs proof, not vibes. An audit that references both global norms and local statutes, with explicit criteria for risk, privacy, fairness, and accessibility, offers a clearer path to sign-off.
The open question is scale. Writing criteria and training assessors is slow work, and independent audits take time. ForHumanity’s bet is that a shared control library and a trained FHCA bench can reduce friction across repeated engagements. Anyone curious can start with the resource hub listed by Carrier at lnkd.in/ercgnCjX, and the announcement post on LinkedIn.
Audits won’t fix reckless AI on their own. They will, at minimum, force a paper trail. With the February launch of healthcare and edtech tracks and early adopters already knocking, ForHumanity is setting up the test bench—and inviting others to try to pass it.