
K0NSULT — 12 Agent Role Cards

Audit Agent Capabilities, Boundaries & Controls

Document version 1.0 • 23 March 2026 • Confidential — Client Copy

Each card below defines one audit agent's role with full transparency: what it does, what it does not do, what data it touches, and when it hands control to a human. These cards serve as both operational documentation and governance artifacts.

1. Process Discovery Agent

AUDIT-PDA-01
Goal: Map existing business processes, identify automation candidates, and document current-state workflows with dependencies and bottlenecks.
Scope: Does NOT redesign processes, make implementation decisions, or access production systems directly. Discovery and documentation only.
Inputs: Process documentation, stakeholder interviews (transcripts), system logs, workflow tool exports (BPMN, Visio), SOP documents.
Outputs: Process map (current state), dependency graph, bottleneck report, automation opportunity score per process, recommended priority list.
Permissions: Read access to shared documentation repositories and workflow tools. No write access. No access to PII or financial systems.
Escalation: Escalates when: process has undocumented tribal knowledge, conflicting process versions exist, or sensitive data flows are discovered.
Risks: Incomplete mapping due to undocumented processes; over-reliance on formal documentation that doesn't reflect actual practice.
KPI: Process coverage rate (% of processes mapped); accuracy score from stakeholder validation; time-to-map per process.

2. Policy & Governance Agent

AUDIT-PGA-02
Goal: Review AI policies, governance frameworks, and compliance documentation against standards (ISO 42001, EU AI Act, NIST AI RMF).
Scope: Does NOT write policies, provide legal advice, or make compliance determinations. Analysis and gap identification only.
Inputs: Company AI policies, governance frameworks, risk registers, regulatory requirements, industry standards, previous audit reports.
Outputs: Gap analysis matrix, compliance score per standard, policy recommendation list, priority remediation roadmap.
Permissions: Read access to policy documents and governance repositories. No access to operational systems or personal data.
Escalation: Escalates when: critical compliance gap found, policy contradictions detected, or regulatory deadline is imminent.
Risks: Regulatory landscape changes faster than analysis; policies may exist but not be enforced; jurisdiction-specific nuances missed.
KPI: Gap identification accuracy (validated by legal review); coverage of applicable regulations; time-to-analysis.

3. Data Flow Agent

AUDIT-DFA-03
Goal: Trace data flows across systems, identify data sensitivity levels, and detect unauthorized or unprotected data transfers.
Scope: Does NOT modify data, change access controls, or access actual data contents. Metadata and flow analysis only.
Inputs: System architecture diagrams, API documentation, database schemas, data classification policies, network topology, DPA/DPIA records.
Outputs: Data flow diagram, sensitivity heat map, unprotected transfer report, cross-border data movement inventory, DPIA gap list.
Permissions: Read access to system metadata, API specs, and architecture docs. No access to actual data records or PII.
Escalation: Escalates when: PII flows to unprotected endpoints, cross-border transfers lack legal basis, or undocumented shadow IT data flows found.
Risks: Shadow IT systems not visible in documentation; API integrations change without updating docs; data classification may be outdated.
KPI: Data flow coverage (% of systems mapped); sensitivity classification accuracy; number of unprotected flows detected.
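
The two escalation triggers above are mechanical enough to sketch. A minimal check, assuming a hypothetical flow inventory with `sensitivity`, `encrypted_in_transit`, `cross_border`, and `legal_basis` fields (these names are illustrative, not taken from K0NSULT's tooling):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DataFlow:
    source: str
    destination: str
    sensitivity: str              # hypothetical labels: "public", "internal", "pii"
    encrypted_in_transit: bool
    cross_border: bool
    legal_basis: Optional[str]    # e.g. "SCC", "adequacy decision", or None

def flag_flows(flows: List[DataFlow]) -> List[Tuple[DataFlow, str]]:
    """Apply the card's two escalation rules to a flow inventory."""
    findings = []
    for f in flows:
        # Rule 1: PII crossing an unencrypted hop is an immediate escalation.
        if f.sensitivity == "pii" and not f.encrypted_in_transit:
            findings.append((f, "PII flows to an unprotected endpoint"))
        # Rule 2: cross-border movement needs a documented legal basis.
        if f.cross_border and f.legal_basis is None:
            findings.append((f, "cross-border transfer lacks a legal basis"))
    return findings
```

In practice the inventory would be assembled from API specs and architecture docs, so a flow missing from those sources (the shadow-IT risk noted above) is invisible to this check.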

4. Automation Fit Agent

AUDIT-AFA-04
Goal: Evaluate processes for automation potential, estimate ROI, and recommend agent team configurations for each candidate.
Scope: Does NOT implement automations, commit budgets, or guarantee ROI figures. Assessment and recommendation only.
Inputs: Process maps (from PDA-01), labor cost data, error rate reports, volume metrics, current tool stack, business priority rankings.
Outputs: Automation fitness score per process, ROI projection model, recommended agent team composition, implementation timeline estimate.
Permissions: Read access to process documentation and anonymized operational metrics. No access to financial systems or HR data.
Escalation: Escalates when: ROI is unclear or marginal, process requires regulatory approval for automation, or workforce impact exceeds threshold.
Risks: ROI projections based on incomplete data; hidden process complexity; change management costs underestimated.
KPI: Prediction accuracy (projected vs. actual ROI post-implementation); adoption rate of recommended automations.
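
The ROI projection model can be sketched in its simplest form. All inputs and the first-year framing are assumptions for illustration; the card explicitly does not guarantee ROI figures, and real projections would discount multi-year benefits:

```python
def roi_projection(annual_hours_saved: float, hourly_cost: float,
                   error_reduction_savings: float,
                   implementation_cost: float, annual_run_cost: float):
    """First-year ROI and payback period for one automation candidate."""
    annual_benefit = annual_hours_saved * hourly_cost + error_reduction_savings
    net_annual = annual_benefit - annual_run_cost
    # First-year ROI: net benefit after recovering the implementation cost.
    roi = (net_annual - implementation_cost) / implementation_cost
    # Months until cumulative net benefit covers the implementation cost.
    payback_months = 12 * implementation_cost / net_annual if net_annual > 0 else float("inf")
    return roi, payback_months
```

A candidate saving 2,000 hours/year at €50/hour with €10k in error-reduction savings, against €60k implementation and €20k/year run cost, yields a 0.5 first-year ROI and an 8-month payback. The "ROI is unclear or marginal" escalation would then be a threshold on these two numbers.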

5. Weakness Detection Agent

AUDIT-WDA-05
Goal: Identify vulnerabilities, single points of failure, and systemic weaknesses across AI systems, processes, and governance structures.
Scope: Does NOT perform penetration testing, exploit vulnerabilities, or access production systems. Analysis of documentation and configurations only.
Inputs: Architecture documentation, security policies, incident logs, dependency lists, access control matrices, previous audit findings.
Outputs: Vulnerability register, risk severity matrix, single-point-of-failure map, remediation priority list, mitigation recommendations.
Permissions: Read access to security documentation, architecture diagrams, and anonymized incident logs. No access to credentials or live systems.
Escalation: Escalates when: critical vulnerability with active exploit potential found, systemic governance failure detected, or data breach indicators present.
Risks: Zero-day vulnerabilities not detectable from documentation; insider threat patterns may be missed; fast-changing threat landscape.
KPI: Weakness detection rate (vs. external pen-test findings); false positive rate; mean time to remediation recommendation.
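
The single-point-of-failure map can be approximated from the dependency lists alone. This is a deliberately crude heuristic, assuming a hypothetical dependency inventory and a set of components known to have failover; a real analysis would also consider graph cut points:

```python
from collections import defaultdict

def single_points_of_failure(deps: dict, redundant: set) -> dict:
    """deps: {service: [components it depends on]}.
    redundant: components with a documented failover.
    Heuristic: a component with no redundancy that 2+ services depend on."""
    dependents = defaultdict(set)
    for service, components in deps.items():
        for c in components:
            dependents[c].add(service)
    return {c: sorted(s) for c, s in dependents.items()
            if len(s) >= 2 and c not in redundant}
```

Because the input is documentation, not live topology, the result inherits the card's stated risk: anything undocumented is invisible.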

6. Moderation Process Agent

AUDIT-MPA-06
Goal: Audit content moderation pipelines, evaluate bias in moderation decisions, and assess compliance with platform policies and regulations.
Scope: Does NOT moderate content, make content decisions, or access user accounts. Audits the moderation process, not the content itself.
Inputs: Moderation policy documents, decision logs (anonymized), appeal records, moderation tool configurations, accuracy metrics.
Outputs: Moderation accuracy report, bias analysis, false positive/negative rates, policy gap list, process improvement recommendations.
Permissions: Read access to anonymized moderation logs and policy documents. No access to user identities or original content.
Escalation: Escalates when: systematic bias detected, moderation accuracy falls below threshold, or regulatory non-compliance found.
Risks: Anonymized data may mask context-dependent decisions; cultural nuances in moderation may be lost; sample bias in audit data.
KPI: Audit coverage (% of moderation decisions reviewed); bias detection rate; correlation with user appeal outcomes.
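
The false positive/negative rates in the outputs reduce to standard counting over a reviewed audit sample. A minimal sketch, assuming each sampled decision has been re-labeled by a human reviewer (the pairing of flags to ground truth is the assumption here):

```python
from typing import List, Tuple

def error_rates(sample: List[Tuple[bool, bool]]) -> Tuple[float, float]:
    """sample: (flagged_by_moderation, actual_violation) pairs.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for flagged, violation in sample if flagged and not violation)
    fn = sum(1 for flagged, violation in sample if not flagged and violation)
    negatives = sum(1 for _, v in sample if not v)   # truly non-violating items
    positives = sum(1 for _, v in sample if v)       # truly violating items
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr
```

The sample-bias risk noted above applies directly: if the audit sample over-represents appealed decisions, both rates will be skewed.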

7. Customer Support Auditor Agent

AUDIT-CSA-07
Goal: Evaluate AI-assisted customer support quality, response accuracy, escalation effectiveness, and customer satisfaction impact.
Scope: Does NOT interact with customers, modify support responses, or access customer PII. Audits support process quality from anonymized data.
Inputs: Anonymized support transcripts, CSAT/NPS data, response time metrics, escalation logs, knowledge base usage stats, SLA reports.
Outputs: Support quality scorecard, response accuracy analysis, escalation effectiveness report, improvement recommendations, SLA compliance report.
Permissions: Read access to anonymized support metrics and transcripts. No access to customer identities, payment data, or account details.
Escalation: Escalates when: support accuracy drops below SLA, systematic mishandling pattern detected, or customer harm potential identified.
Risks: Anonymization may obscure context; CSAT scores may not correlate with actual resolution quality; seasonal patterns skew analysis.
KPI: Audit coverage rate; correlation between audit findings and CSAT changes; time from finding to implemented improvement.

8. Banking Operations Agent

AUDIT-BOA-08
Goal: Audit AI usage in banking operations including transaction monitoring, fraud detection, credit scoring, and regulatory reporting.
Scope: Does NOT process transactions, access account data, or make credit decisions. Audits process compliance and model governance only.
Inputs: Model documentation, validation reports, regulatory filings, transaction monitoring rules, audit trails, compliance reports.
Outputs: Model governance compliance report, regulatory gap analysis, transaction monitoring effectiveness assessment, remediation recommendations.
Permissions: Read access to model documentation, anonymized performance metrics, and regulatory filings. No access to customer data or transaction records.
Escalation: Escalates when: model bias detected in credit scoring, regulatory non-compliance found, or fraud detection gaps identified.
Risks: Highly regulated domain requires jurisdiction-specific expertise; model documentation may be incomplete; regulatory changes may outpace audit.
KPI: Regulatory finding prevention rate; model governance coverage; time-to-audit per banking function.

9. Knowledge Base Integrity Agent

AUDIT-KBI-09
Goal: Verify accuracy, completeness, and currency of knowledge bases used by AI systems. Detect outdated, contradictory, or missing information.
Scope: Does NOT update knowledge base content, create new articles, or modify existing entries. Integrity assessment and reporting only.
Inputs: Knowledge base content, source documents, version history, usage analytics, user feedback/reports, last-reviewed timestamps.
Outputs: Integrity scorecard, outdated content list, contradiction report, coverage gap analysis, update priority queue.
Permissions: Read access to knowledge base content and metadata. No write access. No access to user identity data.
Escalation: Escalates when: critical information is outdated (safety, legal, medical), contradictions affect customer-facing answers, or coverage gaps exceed 20%.
Risks: Domain expertise limitations for specialized content; rapidly changing information may outpace audit cycle; implicit knowledge not captured.
KPI: Content accuracy rate; outdated article detection rate; mean age of unreviewed content; user-reported error correlation.
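
The outdated-content list follows directly from the last-reviewed timestamps in the inputs. A minimal sketch; the 180-day review window is an illustrative default, not a K0NSULT policy value:

```python
from datetime import date, timedelta
from typing import Dict, List, Optional

def stale_articles(articles: Dict[str, date],
                   max_age_days: int = 180,
                   today: Optional[date] = None) -> List[str]:
    """articles: {article_id: last_reviewed_date}.
    Returns ids whose last review is older than the review window."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return sorted(a for a, reviewed in articles.items() if reviewed < cutoff)
```

In practice the window would vary by content class, with the card's critical categories (safety, legal, medical) given a much shorter one.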

10. ROI & Capacity Agent

AUDIT-RCA-10
Goal: Measure return on investment of AI deployments, assess capacity utilization, and identify optimization opportunities.
Scope: Does NOT make budget decisions, reallocate resources, or access financial accounts. Measurement, analysis, and recommendation only.
Inputs: Deployment cost data, usage metrics, performance baselines, business outcome metrics, capacity logs, license utilization data.
Outputs: ROI dashboard, capacity utilization report, cost-per-outcome analysis, optimization recommendations, projected savings model.
Permissions: Read access to anonymized cost and performance metrics. No access to detailed financial records, contracts, or pricing agreements.
Escalation: Escalates when: ROI is negative beyond threshold, sustained capacity utilization exceeds 85%, or cost anomalies detected.
Risks: Incomplete cost attribution; intangible benefits hard to quantify; baseline data may be unreliable; attribution challenges in multi-system environments.
KPI: ROI projection accuracy (predicted vs. actual); capacity forecast accuracy; optimization recommendation adoption rate.
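
The sustained-capacity trigger needs a definition of "sustained"; a minimal sketch using a consecutive-sample window (the window length of 3 is an illustrative assumption, as is treating utilization as a series of 0–1 samples from the capacity logs):

```python
from typing import Iterable

def sustained_over_threshold(utilization: Iterable[float],
                             threshold: float = 0.85,
                             window: int = 3) -> bool:
    """True if utilization exceeds the threshold for `window` consecutive samples."""
    run = 0
    for u in utilization:
        run = run + 1 if u > threshold else 0  # reset the streak on any dip
        if run >= window:
            return True
    return False
```

A single spike above 85% therefore does not escalate; only an unbroken run does, which matches the card's wording.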

11. Human Oversight Agent

AUDIT-HOA-11
Goal: Verify that human-in-the-loop controls are functioning, humans are actually reviewing AI outputs, and override mechanisms work as designed.
Scope: Does NOT monitor individual employees, track keystrokes, or assess human performance. Audits the oversight process, not the people.
Inputs: Review queue logs, override records, approval timestamps, escalation resolution data, training completion records, process documentation.
Outputs: Oversight effectiveness report, rubber-stamping detection analysis, override mechanism test results, training gap assessment, recommendations.
Permissions: Read access to anonymized review and approval logs. No access to individual reviewer identities or personal performance data.
Escalation: Escalates when: rubber-stamping patterns detected (approvals under 2 seconds), override mechanisms non-functional, or oversight gaps in high-risk areas.
Risks: Difficult to distinguish genuine fast reviews from rubber-stamping; oversight fatigue hard to detect from logs alone; process vs. practice gap.
KPI: Oversight coverage rate; mean review time per decision type; override mechanism test pass rate; escalation response time.
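
The rubber-stamping check is the most concrete rule in this card (approvals under 2 seconds). A minimal sketch over anonymized review logs; the `(decision, review_seconds)` record shape is an assumption:

```python
from typing import List, Tuple

def rubber_stamp_rate(reviews: List[Tuple[str, float]],
                      min_seconds: float = 2.0) -> float:
    """reviews: (decision, review_seconds) pairs from anonymized logs.
    Returns the share of approvals decided faster than min_seconds."""
    approvals = [t for d, t in reviews if d == "approve"]
    if not approvals:
        return 0.0
    fast = sum(1 for t in approvals if t < min_seconds)
    return fast / len(approvals)
```

As the risks line notes, a high rate is a signal to investigate, not proof of rubber-stamping: routine decision types may legitimately be approved quickly, so the rate should be reported per decision type.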

12. Incident & Failure Review Agent

AUDIT-IFR-12
Goal: Analyze AI-related incidents, near-misses, and system failures to identify root causes, patterns, and prevention opportunities.
Scope: Does NOT investigate security breaches in real-time, handle active incidents, or assign blame. Post-incident analysis and pattern detection only.
Inputs: Incident reports, post-mortem documents, system logs (anonymized), error rate trends, near-miss reports, CAPA records.
Outputs: Root cause analysis report, incident pattern map, failure mode catalog, prevention recommendations, updated risk register entries.
Permissions: Read access to incident reports and anonymized system logs. No access to active incident channels, security tools, or personal data.
Escalation: Escalates when: recurring failure pattern detected, root cause traces to systemic governance failure, or incident severity exceeds threshold.
Risks: Incomplete incident reporting (near-misses underreported); post-mortem bias; correlation mistaken for causation; hindsight bias in analysis.
KPI: Root cause identification accuracy; repeat incident reduction rate; mean time from incident to prevention recommendation; CAPA closure rate.
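
The "recurring failure pattern" trigger can be sketched as simple frequency counting over root-cause tags, assuming incident reports carry a hypothetical normalized cause tag (the threshold of 3 is illustrative):

```python
from collections import Counter
from typing import Dict, List

def recurring_patterns(cause_tags: List[str], min_count: int = 3) -> Dict[str, int]:
    """cause_tags: one normalized root-cause tag per closed incident.
    Flags any cause recurring at least min_count times."""
    counts = Counter(cause_tags)
    return {cause: n for cause, n in counts.items() if n >= min_count}
```

This only surfaces candidates for the escalation; per the risks line, shared counts may reflect correlation rather than a common cause, so each flagged pattern still needs human root-cause review.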