6 November 2025 (Sydney time)
AI Strategy Co (Australia & Singapore) operates the Managed AI Governance Office (mAIGO™), an advisory-only, controls-to-evidence service for AI pilot readiness, vendor due diligence, and run-state governance.
We use AI to accelerate analysis and documentation in client engagements, for example risk tiering, control mapping, explainability (XAI) briefs, evidence pack compilation, and vendor due-diligence questionnaires (DDQs), so that executive decisions can be made with audit-ready artefacts in ~30 days. We do not make automated decisions about individuals on your behalf; mAIGO outputs are advisory and subject to human review.
We may use foundation-model services (large language models, LLMs) and domain tools inside client-approved environments to draft or structure material that our consultants then check. When using third-party models, we prefer options that support enterprise controls (data residency, access, logging) and clear model disclosures. Our governance references the MAS FEAT principles and Veritas methodology, along with recent industry guidance on generative-AI risks (e.g., transparency, monitoring, third-party accountability).
All deliverables (risk assessments, XAI notes, acceptance tests, DDQs, decision memos) are produced with mandatory human oversight and sign-off. Human-in-the-loop (HITL) checkpoints apply at draft, pre-issue, and decision-gate stages; rollback criteria are defined for pilots.
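Purely as an illustration of the checkpoint structure described above (the class and field names below are hypothetical, not our internal tooling), the gates could be represented along these lines:

    from dataclasses import dataclass, field

    @dataclass
    class Checkpoint:
        """One human-in-the-loop (HITL) gate on a deliverable."""
        stage: str                 # "draft", "pre-issue" or "decision-gate"
        reviewer: str              # named consultant accountable for sign-off
        signed_off: bool = False
        notes: str = ""

    @dataclass
    class Deliverable:
        """An advisory artefact that is not issued until every gate is signed off."""
        name: str
        checkpoints: list = field(default_factory=lambda: [
            Checkpoint("draft", reviewer="unassigned"),
            Checkpoint("pre-issue", reviewer="unassigned"),
            Checkpoint("decision-gate", reviewer="unassigned"),
        ])
        rollback_criteria: list = field(default_factory=list)  # pilot stop / rollback triggers

        def ready_to_issue(self) -> bool:
            # Output is released only after all HITL checkpoints carry a human sign-off.
            return all(cp.signed_off for cp in self.checkpoints)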
• Typical inputs: client policies, process maps, model cards, vendor documents, security attestations, and de-identified samples for testing explainability.
• We avoid feeding special-category or production personal data into third-party models unless contractually required and approved by the client under a documented legal basis.
• Residency & transfers: We map processing locations and apply safeguards for any cross-border data movement.
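As a sketch of the residency and transfer mapping noted in the last item above (illustrative only; the field names and register format are assumptions, not our actual register):

    from dataclasses import dataclass

    @dataclass
    class DataFlow:
        """One row of a hypothetical cross-border data-flow register."""
        dataset: str                  # e.g. "de-identified explainability samples"
        origin: str                   # processing location where the data starts
        destination: str              # where a third-party tool processes it
        contains_personal_data: bool
        safeguard: str                # e.g. "client-approved environment", "contractual clauses"
        client_approved: bool         # documented legal basis / client approval on file

    flows = [
        DataFlow("de-identified test samples", "AU", "SG", False,
                 "client-approved environment", True),
    ]

    # Mirrors the stated rule: personal data does not cross borders without a
    # documented safeguard and client approval.
    assert all(f.safeguard and f.client_approved
               for f in flows if f.contains_personal_data)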
We align our work to MAS FEAT and Veritas-style lifecycle checks (materiality, design, data prep, build/validate, deploy/monitor), with added generative-AI guardrails (e.g., hallucination checks, provenance notes, recourse).
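Read purely as an illustration, with stage and check names paraphrased from this statement rather than taken from an official MAS FEAT or Veritas artefact, the lifecycle checks and guardrails can be summarised as:

    # Paraphrased lifecycle stages and example checks (illustrative, not exhaustive).
    LIFECYCLE_CHECKS = {
        "materiality":    ["use-case risk tier agreed", "proportionate controls selected"],
        "design":         ["FEAT principles mapped to planned controls"],
        "data_prep":      ["data sources documented", "de-identification confirmed"],
        "build_validate": ["acceptance tests defined", "explainability (XAI) notes drafted"],
        "deploy_monitor": ["drift/staleness review scheduled", "rollback criteria in place"],
    }

    # Added generative-AI guardrails referenced above.
    GENAI_GUARDRAILS = [
        "hallucination checks on AI-drafted text",
        "provenance notes for AI-assisted content",
        "a recourse path for affected stakeholders",
    ]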
Where AI tools are used, we apply vendor DDQs covering: secure SDLC, certifications (e.g., ISO 27001/SOC 2), model transparency, continuity/exit (RTO/RPO), sub-processors, and incident notification duties. Evidence and scores are kept in the engagement’s Evidence Pack.
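As a sketch of how DDQ coverage might be tracked in an Evidence Pack (the domain list mirrors the sentence above; the scoring scale and helper function are hypothetical, not our actual templates):

    # DDQ domains mirror the list above; the 0-3 scoring scale is hypothetical.
    DDQ_DOMAINS = [
        "secure SDLC",
        "certifications (ISO 27001 / SOC 2)",
        "model transparency",
        "continuity & exit (RTO/RPO)",
        "sub-processors",
        "incident notification duties",
    ]

    def ddq_summary(scores: dict) -> dict:
        """Return per-domain scores plus any domains still missing evidence."""
        missing = [d for d in DDQ_DOMAINS if d not in scores]
        return {"scores": scores, "missing_evidence": missing}

    # Example: a vendor with two domains evidenced so far.
    summary = ddq_summary({"secure SDLC": 3, "model transparency": 2})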
We operate to the spirit of APRA CPS 230/234 (operational risk, information security) and data-risk guidance, including incident run-books, supplier maps, and durable records suitable for audit. (For Singapore clients, we cross-reference MAS FEAT and related tech-risk/outsourcing expectations.)
When we help a client draft their own transparency notices (e.g., for pilots with customer touchpoints), we recommend plain-language disclosures about AI use, human oversight, data sources, and how to request a review, consistent with Australian public-sector transparency practice (DTA) and APRA’s own AI transparency example.
We keep engagement-level logs of AI-assisted activities (prompts/outputs where feasible), review for model staleness or drift where testing is in scope, and retain artefacts (registries, risk/exception/incident logs, DDQs, XAI notes, decision memos) per contract and law.
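A minimal sketch of what one such log entry could capture (field names and the reference scheme are hypothetical and will vary by engagement):

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AIActivityLogEntry:
        """One engagement-level record of an AI-assisted task (illustrative)."""
        timestamp: datetime
        tool: str              # client-permitted model or tool used
        purpose: str           # e.g. "draft DDQ section", "structure XAI note"
        prompt_ref: str        # pointer to stored prompt/output, where capture is feasible
        reviewer: str          # consultant who checked the output
        retention: str         # retention basis per contract and law

    entry = AIActivityLogEntry(
        timestamp=datetime.now(timezone.utc),
        tool="client-approved LLM",
        purpose="draft risk-assessment section",
        prompt_ref="evidence-pack/prompts/0001",   # hypothetical reference scheme
        reviewer="engagement lead",
        retention="per engagement contract",
    )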
• Clients: You control which AI tools are permitted in your engagement and may opt out of AI assistance entirely.
• Individuals (privacy): For matters under Singapore’s PDPA or the Australian Privacy Act (Australian Privacy Principles, APPs), you can request access or correction via the contacts below. We support alternative formats on request.
• Governance & engagements: governance@aistrategy.au
• Privacy requests (PDPA/APPs): privacy@aistrategy.au
• Transparency logs: transparency@aistrategy.au
• Security incidents (24×7 triage): security@aistrategy.au
• Accessibility/alternate formats: accessibility@aistrategy.au
Service levels: business-hours responses ≤ 1 business day; security incidents triaged immediately.
• APRA AI transparency pattern and updates to prudential expectations (esp. CPS 230/234).
• DTA AI Transparency Statements (structure and minimum content for public disclosures).
• MAS FEAT & Veritas, plus the MindForge consortium’s generative-AI risk dimensions and lifecycle controls (transparency, monitoring, third-party accountability).
We’ll revise this statement when our AI use materially changes or at least annually, mirroring the update cadence adopted by Australian agencies for transparency statements.