Help Centre

Frequently Asked Questions

Everything about AICVS — how it works, what the EU AI Act requires, why teams choose us, and what you actually get.

🔍
What AICVS does
What exactly does AICVS do?

AICVS scans source code for EU AI Act compliance signals — observable markers that indicate whether your code meets EU Regulation 2024/1689 requirements. It uses a 5-layer analysis engine (Regex, AST, Statistical Stylometry, Structural, and Explainability) to surface those signals, map them to specific regulatory articles, and generate a cryptographic compliance certificate you can hand to an auditor.

Every scan produces: a score (0–100), a status (PASS / CONDITIONAL / FAIL), a findings list with EU article mappings, a plain-English narrative for non-technical reviewers, and a 6-step SHA-256 Merkle evidence chain sealed with a tamper-evident cert hash.

ℹ️

Important framing: AICVS detects compliance signals, not authorship. A PASS result means no observable compliance markers were found — not that the code is provably human-written. Use results as part of a broader compliance programme.

What does a compliance score mean?
  • 75–100 → PASS (green): Few or no compliance signals. Suitable for regulated environments without immediate remediation.
  • 50–74 → CONDITIONAL (amber): Some signals detected. Human review required. Address findings before regulatory submission.
  • 0–49 → FAIL (red): Multiple or critical signals. Must be remediated before use in any regulated AI system.

Each finding has a severity (CRITICAL, HIGH, MEDIUM, LOW) and a score impact. CRITICAL findings (e.g. exec() calls, AI authorship comments) subtract the most points. Scores are deterministic — the same file always produces the same result.
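The deterministic scoring model above can be sketched in a few lines. The per-severity deductions below are illustrative assumptions, not AICVS's actual weights; only the 75/50 status thresholds come from this documentation:

```python
# Sketch of deterministic severity-based scoring and the documented
# PASS / CONDITIONAL / FAIL bands. Deduction weights are assumptions.
SEVERITY_DEDUCTION = {"CRITICAL": 25, "HIGH": 15, "MEDIUM": 8, "LOW": 3}

def score_file(finding_severities):
    """Subtract a fixed deduction per finding; clamp to the 0-100 range."""
    score = 100
    for severity in finding_severities:
        score -= SEVERITY_DEDUCTION[severity]
    return max(0, min(100, score))

def classify(score):
    """Map a 0-100 score to the documented status bands."""
    if score >= 75:
        return "PASS"
    if score >= 50:
        return "CONDITIONAL"
    return "FAIL"
```

Because there is no randomness anywhere in the pipeline, the same findings list always yields the same score, which is what makes certificates reproducible.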

⚠️

Scores are not a legal guarantee. Always involve your legal team for final compliance sign-off.

How does the 5-layer detection engine work?
  • Layer 1 — Regex (50+ rules): Pattern matching across all 15 languages. Catches AI API calls, LLM attribution comments, ML imports, auto-generated docstrings, TODO/FIXME placeholders.
  • Layer 2 — Python AST: Structural analysis for Python. Detects exec()/eval(), monolithic classes (>10 methods), dead imports, low cyclomatic complexity.
  • Layer 3 — Statistical Stylometry: Based on arXiv research (2411.04299, 2509.18880). Measures Shannon entropy of line lengths, identifier naming variance, blank-line burstiness, function length uniformity. AI-generated code produces statistically uniform patterns humans don’t.
  • Layer 4 — Structural: Cross-file patterns, evidence chain integrity, documentation coverage.
  • Layer 5 — Explainability: Every finding gets a plain-English explanation and reviewer note for non-technical auditors.
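One of the Layer 3 stylometric signals, Shannon entropy of line lengths, can be sketched as follows (this is a minimal illustration of the metric, not the production implementation):

```python
import math
from collections import Counter

def line_length_entropy(source: str) -> float:
    """Shannon entropy (bits) of the distribution of non-blank line lengths.
    Highly uniform line lengths -> low entropy, one of the statistical
    signatures AI-generated code tends to exhibit."""
    lengths = [len(line) for line in source.splitlines() if line.strip()]
    counts = Counter(lengths)
    total = len(lengths)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())
```

A file whose lines are all the same length scores 0 bits; human-written code, with its irregular line lengths, scores higher.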

Optional Layer 6: STAT-008 AI-enhanced perplexity scoring (Pro+ opt-in). Sends code to the Anthropic API for semantic analysis. A privacy banner warns you before enabling. Off by default.

What programming languages are supported?

15 languages: Python, JavaScript, TypeScript, JSX, TSX, Go, Java, Rust, C#, Ruby, PHP, Swift, Kotlin, C, and C++.

  • Free plan: Python, JavaScript, TypeScript only.
  • Pro and above: All 15 languages.

Python has the deepest analysis (AST + regex + statistical). All others use regex + statistical. Full AST for Go, Java, and TypeScript is on the roadmap for Q3 2026.

What is the 6-step evidence chain and why does it matter?

The Merkle chain creates a tamper-evident audit record. If anything changes — filename, score, timestamp, org ID — the cert hash changes. An auditor can verify any certificate independently at aicvs.io/verify/{scan_id}.

  • Step 1: SHA-256 hash of the file content
  • Step 2: Identity hash (filename + scan_id + timestamp + version)
  • Step 3: Provenance hash (step 1 + step 2 + score + classification)
  • Step 4: EU mapping hash (step 3 + articles triggered)
  • Step 5: Merkle root (steps 1–4 combined)
  • Step 6: Certificate seal (merkle root + scan_id + version)

This meets Art.12 record-keeping requirements — it proves a scan happened, when it happened, and what was found, with cryptographic proof of non-alteration.
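The six steps can be sketched with standard SHA-256 hashing. The field ordering follows the list above, but the exact serialisation AICVS uses is not published, so the `|`-joined concatenation here is an assumption:

```python
import hashlib

def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def evidence_chain(content, filename, scan_id, timestamp, version,
                   score, classification, articles):
    """Sketch of the 6-step chain; '|' concatenation is an assumed encoding."""
    s1 = sha256_hex(content)                                            # 1: file content
    s2 = sha256_hex("|".join([filename, scan_id, timestamp, version]))  # 2: identity
    s3 = sha256_hex("|".join([s1, s2, str(score), classification]))     # 3: provenance
    s4 = sha256_hex("|".join([s3] + list(articles)))                    # 4: EU mapping
    root = sha256_hex("|".join([s1, s2, s3, s4]))                       # 5: Merkle root
    return sha256_hex("|".join([root, scan_id, version]))               # 6: cert seal
```

Because every step folds in the previous hashes, changing any input, even a single character of the timestamp, produces a completely different certificate seal.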

🏆
Why teams choose us
How is AICVS different from Snyk, Semgrep, or GitHub Advanced Security?

Those tools find security vulnerabilities. AICVS answers a different question: does this code meet EU AI Act compliance requirements, and can I prove it to a regulator?

How the tools compare (Snyk / Semgrep / GHAS vs AICVS):

  • Security bug detection: Snyk / Semgrep / GHAS ✓ · AICVS: not the goal
  • EU AI Act article mapping (Art.9–17): AICVS only
  • Cryptographic compliance certificates (SHA-256 Merkle): AICVS only
  • AI authorship signal analysis (5-layer engine): AICVS only
  • SOC 2 + ISO 27001 + EU AI Act in one scan: AICVS only
  • Annex IV PDF certificate (Pro+): AICVS only
  • University academic integrity (dedicated plan): AICVS only

Run AICVS alongside Snyk. They protect your code from bugs. AICVS protects your organisation from regulatory risk.

Why not use an AI writing detector like GPTZero or Turnitin?

AI writing detectors analyse prose — sentence structure, vocabulary, tone. They cannot analyse source code imports, AST structure, or map findings to regulatory articles.

AICVS is built specifically for source code and is the only tool that:

  • Runs deterministic, reproducible scans (same file = same result, always)
  • Maps code signals directly to EU AI Act articles
  • Generates a cryptographic evidence chain (not a probability estimate)
  • Produces a verifiable, tamper-evident certificate accepted in audit processes
🎓

For universities: Turnitin handles essays. AICVS handles code. Use both.

Why choose AICVS over building our own compliance tooling?

Building your own means: writing and maintaining 50+ regex rules across 15 languages, implementing AST analysis per language, keeping pace with EU AI Act enforcement guidance, building a cryptographic certificate chain, and designing audit-ready PDF outputs. AICVS is maintained full-time and updates as new AI tools emerge.

More importantly: your internal tool won’t have independent cert verification. AICVS certificates are verifiable at aicvs.io/verify/{scan_id} — third-party proof an auditor can check without trusting your infrastructure.

What is AICVS’s honest limitation?

AICVS cannot detect clean AI-generated code with no observable markers. This is the state of the art across all tools — even the best academic detectors achieve ~82% F1 score (arXiv 2411.04299). If all markers are removed (comments stripped, variables renamed), AICVS will score it high — as will every other tool.

This is why we say “compliance signals”, not “AI detection”. A PASS result means no detectable signals were found — it’s evidence of a clean scan, not proof of human authorship.

⚖️

AICVS certificates are supporting technical evidence, not formal conformity assessments under Art.43. High-risk AI systems still require notified body assessment for Annex III categories.

📦
Concrete deliverables per scan
What do I actually receive from each scan?

Every scan (including Free plan) produces:

🎯
Compliance Score
0–100 numeric score with PASS / CONDITIONAL / FAIL status. Deterministic — same file always gives same result.
All plans
📋
Findings List
Each finding: rule ID, severity, title, line number, EU article triggered, and remediation steps.
All plans
📖
Reviewer Narrative
Plain-English summary for non-technical reviewers. Professors and compliance officers see what was found and what questions to ask.
All plans
🔐
6-Step Evidence Chain
SHA-256 Merkle chain. Tamper-evident. Publicly verifiable at aicvs.io/verify/{scan_id}.
All plans
📄
PDF Certificate
Signed PDF with evidence chain, EU article mapping, and Annex IV notes. Auditor-ready.
Pro+
🏅
GitHub SVG Badge
Embeddable badge showing PASS/CONDITIONAL/FAIL with score. Links to your public verification page.
All plans

For Compliance Bundles (Pro+): an aggregated report across EU AI Act + SOC 2 + ISO 27001, with gap analysis and a combined PDF certificate covering your entire organisation’s AI compliance posture.

What goes in a PDF compliance certificate?
  • Organisation name, user name, scan timestamp (UTC)
  • File analysed (name + SHA-256 hash — never the actual content)
  • Compliance score and status
  • Full findings table with severity, rule ID, line number, EU article, and remediation step
  • EU AI Act article mapping summary
  • Plain-English reviewer narrative
  • The full 6-step evidence chain (each step’s hash)
  • Verification URL for independent confirmation
  • Version string and AICVS disclaimer

The PDF is suitable for Art.11 technical documentation packages, enterprise procurement due diligence, regulatory investigations, and academic misconduct proceedings.

🇪🇺
What it is, who it affects, and when
What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive binding AI law. It entered into force on 1 August 2024 and applies to any company deploying AI systems that affect people in the EU — regardless of where the company is based.

The Act classifies AI into four risk tiers:

  • Unacceptable risk (banned): social scoring, real-time biometric surveillance in public spaces, manipulation of vulnerable groups.
  • High risk (Annex III): AI in critical infrastructure, education, employment, essential services, law enforcement. Heaviest obligations (Art.9–15 + conformity assessment).
  • Limited risk: Chatbots, deepfakes — transparency obligations only.
  • Minimal / no risk: Spam filters, AI in games — no specific obligations.

Deadline: High-risk AI system obligations (Art.9–15) are fully enforced from 2 August 2026. Fines for non-compliant high-risk AI reach €15M or 3% of global annual turnover; prohibited practices carry up to €35M or 7% (Art.99).

Does the EU AI Act apply to my company?

It applies to you if:

  • You place AI systems on the EU market — including digital services used by people in the EU, regardless of your incorporation country.
  • You deploy AI in Annex III categories: credit scoring, HR screening, fraud detection, medical device software, critical infrastructure, public services.
  • You use AI-generated code or ML models that make decisions affecting EU employees or customers.
💡

Not sure if you’re in scope? Start with the Free plan — the EU article mappings in each scan will show which obligations, if any, are triggered by your code.

What is an Annex IV technical documentation package?

For high-risk AI systems, providers must compile an Annex IV technical documentation package before market placement. It must include:

  • General description of the AI system and intended purpose
  • Description of the development process and elements
  • Information on training, validation, and testing data (Art.10)
  • Risk management documentation (Art.9)
  • System version history and change log
  • Art.14 human oversight measures assessment
  • Art.15 robustness, accuracy, and cybersecurity measures

AICVS PDF certificates are designed to be slotted directly into Annex IV packages as the automated technical review evidence layer. They do not replace the full Annex IV package, but automate the most time-consuming part — per-file scanning evidence.

What is the enforcement timeline?
1 Aug 2024
Act entered into force
24-month transition period begins.
2 Feb 2025
Prohibited AI practices banned
Social scoring and real-time biometric surveillance in public spaces become illegal.
2 Aug 2025
GPAI model obligations apply
General Purpose AI providers (GPT-class) face transparency and copyright obligations.
2 Aug 2026 ← KEY DEADLINE
Full enforcement for high-risk AI systems
Art.9–15 fully enforced. Annex III providers need conformity assessments, technical documentation, and CE marking.
2 Aug 2027
Annex I product obligations
AI embedded in regulated products (medical devices, machinery, vehicles) faces obligations.
📜
What each article requires — and what AICVS checks
Article-by-article: what each one requires and how AICVS covers it
Art. 9
Risk Management System
Establish a documented, iterative risk management process throughout the AI system’s lifecycle. Identify, analyse, estimate, evaluate, and mitigate risks.
✓ AICVS: Every scan creates a timestamped, cryptographically-chained audit record for your Art.9 risk register. AI-003 flags ML inference calls requiring documented risk controls.
⚠ Fines up to €15M or 3% of global turnover for high-risk AI without compliant risk management (Art.99(4)).
Art. 10
Data & Training Governance
Training, validation, and testing datasets must meet quality criteria. Free from harmful biases. Relevant, representative, and complete.
✓ AICVS: AI-004 flags ML library imports (numpy, sklearn, torch) indicating training data dependency. AI-002 flags pre-trained model usage requiring data lineage documentation.
⚠ Deploying high-risk AI trained on non-compliant datasets can trigger enforcement and reputational damage.
Art. 11
Technical Documentation
Compile and maintain comprehensive Annex IV technical documentation before placing a high-risk AI system on the market.
✓ AICVS: The 6-step Merkle evidence chain generates Art.11-supporting PDF certificates with tamper-evident SHA-256 proof. Each certificate is formatted for Annex IV inclusion.
⚠ Market surveillance can withdraw products from sale if technical documentation is incomplete or absent.
Art. 12
Record-Keeping & Logging
Enable automatic logging of events throughout the operational lifetime of the AI system. Logs must be tamper-resistant.
✓ AICVS: All scans logged with org_id, user_id, timestamp, cert_hash, and EU article mapping. Tamper-evident by cryptographic construction.
⚠ Without verifiable logs, demonstrating operational compliance to regulators is impossible.
Art. 13
Transparency & Information
AI systems must be sufficiently transparent to allow users to interpret outputs and use them appropriately.
✓ AICVS: Generates plain-English narratives per finding. AI-001 flags undisclosed AI authorship comments (GitHub Copilot, ChatGPT annotations).
⚠ Deploying AI without transparency provisions can result in system suspension pending compliance.
Art. 14
Human Oversight
High-risk AI must allow human overseers to monitor, understand, override, and intervene in the system’s operation.
✓ AICVS: AI-003 flags ML inference calls (model.predict()) indicating autonomous decision-making that requires documented human oversight gates.
⚠ Automated decisions without adequate human oversight are a primary enforcement target.
Art. 15
Accuracy, Robustness & Security
High-risk AI must achieve appropriate accuracy and remain resilient to errors, faults, and adversarial attacks throughout the lifecycle.
✓ AICVS: AST-001 flags exec()/eval() calls (CWE-95). AI-006 flags TODO/FIXME/placeholder code signalling incomplete implementation.
⚠ Security vulnerabilities in AI systems trigger both regulatory fines and civil liability for resulting harms.
Art. 17
Quality Management System
Providers of high-risk AI must have a QMS covering design, development, testing, monitoring, and post-market surveillance.
✓ AICVS: Scan history, compliance bundles, and certificate export integrate into ISO 9001 QMS and ISMS documentation. CI/CD integration enables continuous QMS evidence on every commit.
⚠ A missing QMS is a blocking issue for CE marking and Annex III market access.
Is an AICVS certificate a formal EU AI Act conformity assessment?

No. AICVS certificates are automated technical review evidence — supporting documentation, not a formal conformity assessment. Think of them like an automated penetration test report: it supports your security claim but doesn’t replace a manual pentest.

They are suitable for: Art.11 technical documentation packages, enterprise procurement due diligence, regulatory investigation responses, academic misconduct proceedings. They are not a substitute for a notified body assessment under Art.43 for Annex III high-risk AI systems.

🔒
How we protect your code and data
Do you store my source code?

Never. Your code is read into memory, analysed, and immediately discarded. We store only the scan result: score, findings, and SHA-256 hash. We cannot reconstruct your code from our records — this is an architectural decision, not just a policy.

🛡️

Exception: STAT-008 (Pro+, opt-in). When enabled, code is sent to the Anthropic API for semantic scoring. A warning banner appears before enabling. Off by default.

Is AICVS GDPR compliant?

Yes. We are incorporated in Ireland and process all data within the EU (Frankfurt). We have DPAs with all sub-processors. GDPR rights exercisable via Settings or privacy@aicvs.io.

  • Data residency: Frankfurt, EU only
  • Code never stored: Only hashes and results retained
  • No advertising: Data never shared with advertisers
  • Deletion: Full deletion via Settings → Danger Zone
How are passwords and credentials stored?
  • Passwords: PBKDF2-SHA256, 260,000 iterations, unique random salt per user. Exceeds NIST SP 800-63B guidance. Never stored in plain text.
  • API keys: SHA-256 hashed. Raw key shown only once at creation.
  • JWT tokens: Expire after 8 hours. Refresh tokens rotate on every use. Revoked on logout.
  • 2FA: RFC 6238 TOTP (authenticator app only — no SMS, immune to SIM-swapping).
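The password scheme above can be sketched with Python's standard library. The 260,000-iteration count comes from this page; the 16-byte salt and 32-byte digest sizes are assumptions, not documented AICVS parameters:

```python
import hashlib
import hmac
import os

ITERATIONS = 260_000  # iteration count stated in the documentation

def hash_password(password, salt=None):
    """PBKDF2-HMAC-SHA256 with a unique random salt per user."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

Note that hashing the same password twice yields different digests, because each call draws a fresh random salt; only the stored (salt, digest) pair can verify a login.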
What attack protections are built in?
  • Rate limiting: 6 independent buckets — max 10 login attempts/min per IP
  • Account lockout: 5 consecutive failures → 15-minute lockout
  • ZIP bomb protection: malicious archives rejected at upload
  • MIME magic-byte validation: files checked against actual content, not extension
  • Path traversal sanitisation: filenames cleaned before processing
  • ReDoS timeout: regex rules run with timeout to prevent denial-of-service
  • Security headers: HSTS (prod), X-Frame-Options: DENY, CSP, X-Content-Type-Options
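The account-lockout rule (5 consecutive failures, then a 15-minute lockout) can be sketched as below. This is a minimal in-memory illustration; the production state store is not described here:

```python
import time

MAX_FAILURES = 5
LOCKOUT_SECONDS = 15 * 60  # 15-minute lockout per the documentation

class LoginGuard:
    """Tracks consecutive login failures per user and enforces lockout."""
    def __init__(self):
        self.failures = {}      # username -> consecutive failure count
        self.locked_until = {}  # username -> unlock timestamp

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_FAILURES:
            self.locked_until[user] = now + LOCKOUT_SECONDS
            self.failures[user] = 0  # reset counter once lockout is applied

    def record_success(self, user):
        self.failures.pop(user, None)  # a success clears the streak
```

Four failures leave the account open; the fifth locks it until the 15-minute window elapses.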
🎓
Academic use, integrity, and institutional access
How is AICVS different from Turnitin for code?

Turnitin detects plagiarism in written prose. It cannot analyse source code imports, AST structure, or generate cryptographic evidence for disciplinary proceedings.

AICVS is purpose-built for source code submissions: detects AI API attribution comments, structural patterns characteristic of AI-generated code, dead imports, complexity uniformity, and generates tamper-evident evidence for proceedings.

Universities use both side-by-side: Turnitin for essays, AICVS for code. Neither should be the sole basis for misconduct decisions.

Can AICVS evidence be used in academic misconduct proceedings?

Yes. The cryptographic evidence chain is suitable as supporting technical evidence in disciplinary proceedings — in the same way Turnitin similarity reports are used.

It proves: which exact file was scanned, when (UTC timestamp), what findings were detected, and that the record has not been altered since the scan.

⚖️

Use alongside academic policy review and institutional investigation procedures. AICVS provides technical evidence; human reviewers make the final determination.

How does the Academic plan work for departments?
  • 500 scans/month — enough for entire module cohorts
  • Bulk ZIP upload: Drop an entire submission folder as one ZIP, get per-file results
  • PDF export for every scan — print-ready evidence for disciplinary records
  • Team features: Multiple lecturers with role-based access
  • All 15 languages — covers any language taught in your department

For institution-wide use, contact academic@aicvs.io for Enterprise pricing with LMS integration (on roadmap for Canvas, Moodle, Blackboard).

💳
Plans, limits, and billing
What’s the difference between all five plans?
Plans: Free €0 · Pro €49 · Academic €25 · Team €99 · Enterprise (contact us).

  • Scans/month: Free 5 · Pro 100 · Academic 500 · Team and Enterprise unlimited
  • Languages: Free 3 · all paid plans 15
  • PDF certificates: Pro, Academic, Team, Enterprise
  • Bulk ZIP upload: Pro, Academic, Team, Enterprise
  • Team features / RBAC: Academic, Team, Enterprise
  • API keys: Free 1 · Pro 5 · Academic 10 · Team 20 · Enterprise 100
  • STAT-008 enhanced detection: opt-in on Pro, Academic, Team, Enterprise

All plans include: EU AI Act findings, SHA-256 evidence chain, GitHub badge, scan history, public verification URL, and REST API access.

Can I change or cancel at any time?

Yes. Cancel anytime from Settings → Billing. Cancellation takes effect at the end of your current billing period. No cancellation fees, ever. Upgrading takes effect immediately with prorated billing.

Do you offer discounts for startups, non-profits, or research?

Yes. Email hello@aicvs.io. We offer 50% discounts for: early-stage startups (pre-seed/seed), registered non-profits, EU-funded research projects (Horizon, ERC), Enterprise Ireland portfolio companies, and open-source projects with public repositories.

⚙️
GitHub Actions, REST API, and integrations
How do I add AICVS to my GitHub Actions pipeline?

The aicvs/scan-action@v1 GitHub Action is available now. Add it to any workflow:

- uses: aicvs/scan-action@v1
  with:
    api-key: ${{ secrets.AICVS_API_KEY }}
    min-score: 50          # fail if any file scores below this
    fail-on-critical: true # fail immediately on CRITICAL finding
    post-comment: true     # post results as PR review comment
    paths: './src'         # glob to scan (default: changed files)

Get your API key from Settings → API Keys. Set it as a GitHub secret named AICVS_API_KEY.

Use the CI/CD Wizard in the app (sidebar → CI/CD) for a visual YAML generator.

Does AICVS need access to my entire repository?

No. AICVS never requires repository access. You push files to the API — it never pulls from your repo. The GitHub Action only accesses files you specify in paths (default: changed files in the current PR). Your full codebase is never transmitted.

Can I use the REST API directly?

Yes. Full API documentation at https://api.aicvs.io/docs. Key endpoints:

  • POST /api/v1/scans — single file scan
  • POST /api/v1/scans/bulk — ZIP upload (Pro+)
  • GET /api/v1/scans/{id}/certificate.pdf — PDF download (Pro+)
  • GET /api/v1/badge/{scan_id}.svg — SVG badge (public)
  • GET /api/v1/scans/{id}/verify — public verification (no auth)

Authenticate with Authorization: Bearer <jwt> or X-Api-Key: <api-key>.
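As a sketch, here is how a single-file scan request against the endpoint above could be assembled with the standard library. The JSON field names (`filename`, `content`) are assumptions; check https://api.aicvs.io/docs for the actual request schema:

```python
import json
import urllib.request

def build_scan_request(api_key, filename, source):
    """Build (but do not send) a POST to the single-file scan endpoint.
    Field names in the JSON body are assumed, not taken from the API docs."""
    body = json.dumps({"filename": filename, "content": source}).encode()
    return urllib.request.Request(
        "https://api.aicvs.io/api/v1/scans",
        data=body,
        headers={
            "X-Api-Key": api_key,  # or: "Authorization": "Bearer <jwt>"
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Passing the returned object to `urllib.request.urlopen()` would perform the scan; the response body contains the score, findings, and certificate hash described earlier.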

🗺️
What’s coming — and what’s a stub
What features are deferred or in progress?

🔴 P0 — Before first paying customer

P0
Redis persistence
All state (rate limits, JWT revocation, account lockouts) is in-memory and resets on server restart. A restart temporarily makes revoked tokens valid. ~1 day of work, ~€10/month (Upstash free tier available).

🟡 P1 — Before enterprise sales

P1
Rule suppression UI (.aicvsignore)
The backend already supports # aicvs:ignore-next-line, # aicvs:ignore-file, and .aicvsignore. The dashboard UI to upload and manage ignore files is not yet built.
P1
Stripe webhook connection
The webhook endpoint (/api/v1/webhooks/stripe) is built and deployed. Needs a Stripe account configured with the endpoint and price IDs to go live.

🔵 P2 — Product roadmap

P2
Commit pattern analysis (AI-007)
Backend analysis function exists but not exposed in UI. AI submissions typically arrive as one large commit — this detects that pattern.
P2
Database-driven rules
Rules currently hardcoded in backend. Moving to Supabase table allows updating rules without a code deploy.
P2
Full AST for Go, Java, TypeScript
Python has full AST. Go, Java, and TypeScript use regex + statistical only. Full AST for these targeted Q3 2026.
P2
LMS integration (Canvas, Moodle, Blackboard)
API endpoints designed. Native LMS plugins on the academic roadmap — contact academic@aicvs.io for early access.
P2
IDE plugin with opt-in keystroke timing
VS Code extension tracking typing rhythm (opt-in, clearly disclosed) to add a behavioural signal layer.

🟣 Currently implemented as stubs

Stub
AI Watermarking (Team+ plan)
Page and API endpoints exist. Currently returns a simulated signature and always verifies as AUTHENTIC. Real cryptographic model watermarking is the P2 replacement.
Stub
Federated Learning Audit (Enterprise)
Page and API exist. Currently returns simulated round data (fake Byzantine detection, fake privacy budget). Real federated audit integration planned for Enterprise.
How do I report a bug or request a feature?

Email hello@aicvs.io. For security vulnerabilities, email security@aicvs.io — we follow coordinated disclosure and respond within 48 hours. Security policy: /.well-known/security.txt.

Still have questions?

Our team responds within a few hours during business hours (Limerick, GMT/IST).