COMPLIANCE
A grade. 100% pass rate. Eleven frameworks.
Vantage Workspace is graded continuously, not annually. The grade you see on this page is computed from runtime evidence — the same audit log that an auditor would review, summarised into a single posture.
Last assessed: May 5, 2026. The next assessment is automatic, every build.

CURRENT POSTURE
A grade · 100% pass rate · 11 frameworks · 168 automated tests
The compliance grade is computed by the platform's /assess mission, which runs against the production deployment on every build (currently every two-week sprint cycle, plus on-demand). The grade is the worst of: the lowest framework score, the test pass rate, and the policy coverage rate. It is not an average. A single failed test or a single uncovered control would lower the grade.
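The worst-of rule can be stated in a few lines. This is an illustrative sketch, not the /assess implementation: the function names and the shape of the inputs are our own, chosen for clarity; only the grade bands and the "worst component wins" rule come from this page.

```python
def letter(pct: float) -> str:
    """Map a percentage to the grade bands used on this page:
    A: 100%, B: 90-99%, C: 75-89%, D: <75%."""
    if pct >= 100:
        return "A"
    if pct >= 90:
        return "B"
    if pct >= 75:
        return "C"
    return "D"

def platform_grade(framework_scores, test_pass_rate, policy_coverage):
    # The grade is the WORST component, not an average: a single
    # failed test or a single uncovered control lowers the whole grade.
    worst = min(min(framework_scores), test_pass_rate, policy_coverage)
    return letter(worst), worst
```

With eleven framework scores at 100%, a 100% test pass rate, and full policy coverage, the result is the A grade shown above; drop any single component and the platform grade drops with it.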
The grade has moved over time. Three months ago the published grade was B at 84%. The methodology that closed the gap is documented in the insights archive — and the same methodology is what produces the next assessment cycle.
FRAMEWORKS COVERED
Eleven frameworks. One assessment cycle.
NIST AI RMF
A · 2026-05-05
ISO/IEC 42001
A · 2026-05-05
EU AI Act
A · 2026-05-05
SOC 2
A · 2026-05-05
PCI DSS v4.0
A · 2026-05-05
HIPAA
A · 2026-05-05
FINRA
A · 2026-05-05
FedRAMP
A · 2026-05-05
PIPEDA
A · 2026-05-05
Privacy Act (Canada)
A · 2026-05-05
AIDA (proposed)
A · 2026-05-05
OWASP Top 10 for Agentic Apps
Every category covered. Every control mapped. Every event signed.
The OWASP Top 10 for Agentic Applications enumerates the failure modes that turn a useful AI system into a dangerous one — prompt injection, tool misuse, memory poisoning, supply-chain compromise, and the rest.
Each item maps to one or more layers in the 7-Layer Defence Architecture. Each layer emits the signed events that constitute the evidence. The screen on the right is the live coverage report from /assess — not a sales chart.

METHODOLOGY
The grade is computed from runtime evidence, not from attestation.
There are two ways to claim a compliance posture. The first is to write a self-attestation document (sometimes a thousand pages long) describing the controls the platform implements, how those controls map to the framework, and why the auditor should trust the description. This is the dominant model in enterprise software, and it is the reason “compliance theatre” is a credible academic critique of the industry.
The second is to compute the posture from runtime evidence — to instrument the platform such that every event the platform produces is mapped to a control under each framework, and the grade is a function of how many controls have produced satisfying evidence within a defined window.
We use the second model. The platform's /assess mission, which runs on every build:
- Walks the audit log for the last assessment window (default: the last 30 days).
- For each event, looks up which controls under which frameworks it satisfies.
- For each framework, computes the percentage of controls that have produced at least one satisfying event in the window.
- Computes a per-framework grade (A: 100%, B: 90-99%, C: 75-89%, D: <75%).
- Computes the platform-wide grade as the lowest of the per-framework grades, the test-suite pass rate, and the policy coverage rate.
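The walk-and-map steps above can be sketched as a small pipeline. This is a simplified model under assumed event and control shapes — the real /assess schema is not published here; `assess`, the event fields, and the example control IDs are all illustrative.

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(days=30)  # default assessment window, per the methodology

def assess(events, control_map, controls_by_framework, now=None):
    """Walk the audit log for the window and compute per-framework coverage.
    `control_map` maps an event type to the (framework, control) pairs it
    satisfies; `controls_by_framework` lists every control in scope."""
    now = now or datetime.now(timezone.utc)
    satisfied = defaultdict(set)
    for ev in events:
        if now - ev["timestamp"] > WINDOW:
            continue  # event falls outside the assessment window
        for framework, control in control_map.get(ev["type"], []):
            satisfied[framework].add(control)
    # Percentage of controls with at least one satisfying event in-window.
    return {
        framework: 100.0 * len(satisfied[framework] & set(controls)) / len(controls)
        for framework, controls in controls_by_framework.items()
    }
```

A framework scores 100% only when every one of its controls produced at least one in-window event; a control with no events drags that framework's score, and with it the platform grade.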
The output of /assess is a structured JSON document committed to a public assessment registry (in the customer's deployment, scoped to their tenancy — and in our internal deployment, scoped to the published grade you see on this page). The registry is append-only, signed, and linkable; you can hand an auditor the URL of a specific assessment.
What “100%” means precisely
A 100% pass rate means: every control under every framework in scope has produced at least one satisfying event in the assessment window. It does not mean: the platform has zero open issues or zero technical debt or zero unresolved security findings. We list those separately, in the next section.

WHAT WE PUBLISH
Vendors typically publish certifications. We publish the gaps too.
Most vendor compliance pages list certifications, not failures. We list both. The reason is structural: an auditor's question is not “do you have a certificate?” but “show us the evidence that the controls were operating during the audit window, including the periods where the controls failed.” A vendor brochure that hides failures is a vendor brochure that won't survive a serious audit.
Below are the categories of disclosure we publish, alongside the assessment grade:
Open issues.
A list of unresolved technical or process issues that affect compliance posture. Each issue has: a description, an affected framework or control, an assigned owner, a target sprint for remediation, and a public update on progress. The list is currently empty (which is itself an auditable fact); when issues open, they are added within the same business day.
Sprint retrospectives.
Every sprint produces a retrospective, internally. The compliance-relevant excerpts are published in the insights archive, tagged “retrospective”. This includes the sprints where we moved from B to A, the sprint where we added PIPEDA and Privacy Act (Canada) coverage, and the sprint where we caught a regression in the Layer 4 memory-safety embedding-inversion check.
Framework version drift.
Frameworks update. NIST AI RMF 1.1 will land at some point. ISO 42001:2026 (a hypothetical major revision) will land at some point. Each framework has a “tracked version” line in its detailed entry below; when the framework updates, we publish the gap analysis between the old version and the new version, the sprint plan to close the gap, and the assessment grade against the new version once the gap is closed.
Audit history.
Every assessment cycle produces an entry in the audit-history table. The history is published; you can see the platform's grade over time. The current published history shows the move from B at 84% (three months ago) to A at 100% (now), with the intervening sprints documented.
Pen-test summaries.
External penetration tests are commissioned twice a year. The summary findings are published — categorised, severity-rated, with remediation status. Customer-specific findings from a customer's own pen test of their deployment are confidential to that customer; we publish only our own external program.
What we do NOT publish, deliberately:
- The full audit log of any customer's deployment. Each customer's audit log is theirs.
- The specific contents of policies, prompts, or tool calls. The audit log records the metadata of every event; the content is encrypted and accessible only to the customer.
- Customer names in compliance posture summaries. If a customer wants to be named as a reference, that's their decision and they can be quoted — but the compliance posture is the platform's, not a per-customer claim.
WHAT AN AUDITOR RECEIVES
A Trust Report is the artefact your auditor will read.
When an auditor (internal compliance, external SOC 2 firm, regulator-mandated assessor) asks “show us the evidence”, the deliverable is a Trust Report — a tamper-evident, framework-mapped, time-windowed document generated by the platform.
A Trust Report includes:
- The events in scope for the time window the auditor specified. Every event has its full metadata (identity, agent, layer, decision, timestamp, signed hash).
- The policy that was in effect at the time of each event. Auditors often want to know “what controls were operating on the day in question” — the policy version in scope is included in every event.
- The compliance grade as of the moment the report was generated.
- A control-mapping appendix showing every control under every applicable framework, the satisfying events for that control during the window, and any controls that did not produce satisfying events (with explanations).
- A signed cryptographic hash of the report itself, anchored against the platform's tamper-evident log. The auditor can verify the report's integrity independently.
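The independent integrity check in the last item reduces to recomputing the report's SHA-256 digest and comparing it with the published value. A minimal sketch, assuming the `sha256:<hex>` format shown in the sample cover page below; verifying the signature against the published verification key is a separate step, omitted here.

```python
import hashlib

def verify_report(report_bytes: bytes, published_hash: str) -> bool:
    """Recompute the report digest and compare with the published hash.
    `published_hash` is assumed to look like 'sha256:9f4a2c1b...'."""
    digest = hashlib.sha256(report_bytes).hexdigest()
    return published_hash == f"sha256:{digest}"
```

An auditor can run this check against the JSON format of the report without trusting anything but the hash function and the anchored log entry.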
Trust Reports can be generated in three formats: HTML (browsable, hyperlinked, suited for online review), PDF (printable, suited for board packs), and JSON (machine-readable, suited for the auditor's own analytics pipeline).
A sample Trust Report cover page:
TRUST REPORT
Vantage Workspace deployment: prod-tenant-acme
Window: 2026-04-01 → 2026-04-30
Frameworks in scope: SOC 2, ISO 42001, EU AI Act
Generated: 2026-05-05 16:42 UTC
Compliance grade: A (100%)
Total events: 487,213
Total controls evaluated: 84
Controls with satisfying events: 84
Controls without satisfying events: 0
Open issues: 0
Report integrity hash: sha256:9f4a2c1b...
Verification key: https://workspace.handvantage.com/.well-known/trust-report-key
BY FRAMEWORK
Eleven frameworks, one block each.
NIST AI RMF (1.0)
Tracked version: 1.0
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. All four functions — GOVERN, MAP, MEASURE, MANAGE — covered. Specific control coverage: GOVERN (1, 2, 3, 4, 5, 6), MAP (1, 2, 3, 4, 5), MEASURE (1, 2, 3, 4), MANAGE (1, 2, 3, 4).
Evidence sources. Audit events from Layers 1, 2, 5, 6, 7 contribute the bulk of MAP and MEASURE evidence. Layer 7 supply-chain evidence covers GOVERN-2 and -3. Trust Report generation cadence covers MANAGE-2.
In practice. When a NIST AI RMF assessment is requested for a deployment, the platform generates a Trust Report scoped to the four NIST functions. Each function's controls are listed, with the satisfying events and the policy versions in effect.
ISO/IEC 42001 (2023)
Tracked version: 2023
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. All ten clauses covered. Annex A control coverage: complete.
Evidence sources. Audit log structure (Clause 8 — Operation), policy management (Clause 7 — Support), continuous improvement (Clause 10), management review (Clause 9).
In practice. A Trust Report for ISO 42001 includes a clause-by-clause mapping. Annex A controls are listed alongside the satisfying events. ISO 42001 maps cleanly to the platform's /assess mission output.
EU AI Act
Tracked version: Regulation (EU) 2024/1689; high-risk obligations (Articles 6-29, Annex III) — effective 2 August 2026.
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. Articles 9 (Risk management), 10 (Data and data governance), 11 (Technical documentation), 12 (Record-keeping), 13 (Transparency), 14 (Human oversight), 15 (Accuracy, robustness, cybersecurity), 16-29 (Provider obligations), 99 (Penalties — for context).
Evidence sources. Article 12 (record-keeping) is the centrepiece — the audit log structure satisfies it directly. Article 11 (technical documentation) is satisfied by the Trust Report + the architecture documentation. Article 14 (human oversight) is satisfied by Layer 5 (Trust Boundaries) consent flows.
In practice. For a deployment in the EU or with EU-regulated data, the operator can request a Trust Report scoped to the AI Act's Annex IV technical-documentation requirements; the report maps each Annex IV item to its satisfying evidence in the platform.
SOC 2 Type 2
Tracked version: SOC 2 Type 2 (Trust Services Criteria, 2017)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. Security, Availability, Processing Integrity, Confidentiality, Privacy criteria — all five.
Evidence sources. Inter-Service Auth (Layer 6), Memory Safety (Layer 4), Supply Chain (Layer 7) cover the bulk of Security and Confidentiality. Continuous deployment metrics + sprint cadence cover Availability and Processing Integrity. Privacy controls map to PIPEDA / Privacy Act (Canada).
In practice. SOC 2 Type 2 is the most-requested framework for North American B2B customers. The deployment runs continuously with Type 2 evidence; the audit window is rolling, so an auditor can request a report for any 90-day or 180-day window.
PCI DSS v4.0
Tracked version: v4.0 (2022)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. All twelve requirements; specifically the encryption requirements (Req 3, 4), access control (Req 7, 8), monitoring (Req 10), and the new v4.0 requirements around customised approach and continuous validation.
Evidence sources. Layer 6 (Inter-Service Auth) covers Req 4 (encryption in transit). Layer 7 (Supply Chain) covers Req 6 (secure development). Audit log covers Req 10 (logging and monitoring).
In practice. Vantage Workspace itself is not a card-handling system. The PCI DSS posture is included because customers running PCI-relevant workloads on the platform need the platform itself to not be the weak link. A PCI assessor reviewing a customer's overall posture will have specific platform-level questions; the Trust Report answers them.
HIPAA
Tracked version: 45 CFR Parts 160, 162, 164 (current as of 2025 amendments)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. Security Rule (administrative, physical, technical safeguards), Privacy Rule (uses and disclosures), Breach Notification Rule.
Evidence sources. Technical Safeguards (164.312) — Layer 6, Layer 4. Audit Controls (164.312(b)) — the audit log structure. Access Control (164.312(a)) — Layer 1 + Layer 5.
In practice. US healthcare customers operate the platform under a Business Associate Agreement (BAA). Handvantage signs the BAA with the customer; the platform deployment runs under the customer's HIPAA covered-entity scope. Trust Reports include HIPAA-specific control mappings.
FINRA
Tracked version: Current FINRA Rules (2024 codified)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. Books and Records (Rule 4511), Communications (Rule 2210, 2212), Recordkeeping (Rule 4516), Supervision (Rule 3110).
Evidence sources. The audit log directly satisfies Rule 4511 (record-keeping). Layer 1 policy enforcement satisfies Rule 3110 (supervision controls). Communications captured by the Concierge agent's email/chat tools satisfy Rule 2210 review.
In practice. US financial-services customers under FINRA jurisdiction need supervisory and recordkeeping evidence for any AI system that touches client communications or trade decisions. The Trust Report includes a FINRA-specific record-export format.
FedRAMP
Tracked version: FedRAMP Moderate baseline (Rev 5)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. All Moderate-baseline controls. We do not yet pursue FedRAMP High; the deployment topology supports it but the formal authorization is a separate engagement.
Evidence sources. Continuous monitoring (CM family) covered by audit log structure. Identification and Authentication (IA family) covered by Layer 1 + SSO. System and Information Integrity (SI family) covered by Layers 2, 3, 7.
In practice. US public-sector customers can deploy Vantage Workspace inside a FedRAMP-authorised environment (e.g. AWS GovCloud, Azure Government). The platform's controls inherit the underlying environment's authorisation and add the platform-specific control evidence.
PIPEDA (Canada)
Tracked version: Personal Information Protection and Electronic Documents Act (current consolidation, 2024)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. All ten Fair Information Principles (Accountability, Identifying Purposes, Consent, Limiting Collection, Limiting Use & Retention, Accuracy, Safeguards, Openness, Individual Access, Challenging Compliance).
Evidence sources. Consent flow at Layer 5 (Trust Boundaries) covers Principle 3. Retention policies in the file pillar cover Principle 5. Safeguards (Principle 7) covered by Layers 4, 6, 7.
In practice. Canadian customers operating under PIPEDA have a documented compliance posture for the platform. The Trust Report includes a PIPEDA principle-by-principle mapping.
Privacy Act (Canada)
Tracked version: Privacy Act, R.S.C. 1985, c. P-21 (current amendments)
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. Section 4 (collection of personal information), Section 5 (purpose), Section 6 (retention and disposal), Section 7 (use), Section 8 (disclosure).
Evidence sources. The audit log identifies every collection / use / disclosure event with its policy basis. Retention is enforced at the file-pillar level.
In practice. Canadian public-sector customers (federal or provincial bodies subject to the Privacy Act) have the platform's posture pre-mapped to the Act's sections.
AIDA (proposed)
Tracked version: Artificial Intelligence and Data Act (Canada) — Bill C-27, as introduced. Status: proposed, not yet in force.
Grade: A (100%) · Last assessed: 2026-05-05
Coverage. Anticipated impact assessment, mitigation, monitoring, and record-keeping requirements based on the published draft.
Evidence sources. Same evidence pipeline as the EU AI Act and NIST AI RMF; AIDA's structure overlaps substantially with both.
In practice. Canadian customers planning for AIDA's eventual coming-into-force can have their compliance posture pre-mapped. The grade and the mapping will be re-assessed when AIDA is enacted (the final text may differ from the proposed text).
THE AUDIT WINDOW
August 2, 2026 is when the obligations begin. The audit window started this quarter.
The EU AI Act's high-risk obligations come into force on 2 August 2026. The penalty structure under Article 99 is the larger of EUR 35 million or 7% of global annual turnover for non-compliance.
The under-discussed feature of the regulation is that it does not ask “did your platform behave correctly on the day?” It asks for evidence that controls were operating during the period leading up to the day. That period — the audit window — is already open.
The platform was designed for this regulatory shape. Continuous evidence generation, framework-mapped audit log, machine-verifiable Trust Reports — none of this is a feature added in response to the AI Act. The architecture was built for it.
For customers reading this who have not yet started preparing, the most useful frame is: the deadline is when the obligations begin; the audit window is now. Every prompt processed without contemporaneous evidence is a prompt that cannot be defended in retrospect. The work to instrument the platform should be done in the months leading up to August 2026, not in the weeks after.
CONTINUE THE CONVERSATION
If your audit window is open, talk to us.
The most useful conversation about compliance is the one where we sit with your General Counsel, your Chief Compliance Officer, or your Risk Officer for thirty minutes and walk through your specific framework requirements. We bring the framework cross-walks, the Trust Report templates, and the gap-analysis methodology. We tell you where the platform's posture aligns with your obligations and where you'd need additional controls.
Talk to us about your audit window →
Or write to hello@handvantage.com directly.
