
Cursor, governed.

Your developers ship faster with Cursor. Raidu sits between the IDE and the model, masks the code Cursor sends outbound, enforces your policy on every completion, and signs the record. No plugin. No IDE change.

Book a meeting · See the runtime
The tool
Cursor
AI-first code editor, built on VS Code.

Cursor streams code context to frontier LLMs for tab completion, inline edits, and multi-file generation. Without governance, that context leaves your network unchecked.

The demo

Cursor. Five checkpoints. One signed record.

Captured · prod · ~90s

Without governance

What breaks when Cursor runs ungoverned.

Four failure modes security and compliance teams see every week. Each one is a signed record waiting to exist.

Risk 01

Proprietary code walks out

Cursor sends file context to cloud LLMs for completions. Trade secrets, API keys, and algorithms flow to third parties with no record of what left, who sent it, or what policy, if any, ran.

Risk 02

No audit trail for AI-written code

When Cursor writes code that ships, auditors ask who approved it and on what basis. Without Raidu, there is no record of the prompt, the model, the policy version, or the human review.

Risk 03

Shadow adoption across teams

Developers install Cursor independently. Security has no view into which models are in use, which repos are exposed, or which policies, if any, apply. The first signal is often an incident.

Risk 04

Compliance blind spots

SOC 2, HIPAA, and the EU AI Act require evidence of AI system usage and controls. Cursor activity without governance produces none of it. Policies without proof do not survive an audit.

With Raidu

How Raidu governs Cursor.

Every Cursor prompt passes through the five-checkpoint firewall. The policy is shared, the record is signed, the runtime is the same one your other agents already use.

01

Code context redaction

Checkpoint 02 · Before LLM

Raidu scans every prompt Cursor sends outbound. Secrets, credentials, customer PII, and flagged business logic are replaced with deterministic tokens the model can still reason about. Nothing leaves your network unmasked.
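Deterministic tokenization can be sketched in a few lines. This is an illustrative model, not Raidu's implementation: it derives a stable placeholder from each detected secret with HMAC, so the same secret always maps to the same token and the model can still reason about repeated references. The tenant key and the secret pattern are assumptions for the sketch.

```python
import hashlib
import hmac
import re

TENANT_KEY = b"per-tenant-secret"  # hypothetical: each tenant would hold its own key

# Simplified detector for the sketch; a real scanner covers many credential formats.
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def mask(prompt: str) -> str:
    """Replace each detected secret with a deterministic token."""
    def token(m: re.Match) -> str:
        digest = hmac.new(TENANT_KEY, m.group(0).encode(), hashlib.sha256)
        return f"<SECRET_{digest.hexdigest()[:8]}>"
    return SECRET_PATTERN.sub(token, prompt)

prompt = (
    "Use key sk-abc123def456ghi789jkl0 to call the API, "
    "then rotate sk-abc123def456ghi789jkl0."
)
masked = mask(prompt)
```

Because the token is a keyed hash rather than a random string, both mentions of the key become the same placeholder, and nothing in the masked prompt can be reversed without the tenant key.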

02

Model and repo policy

Checkpoint 03 · Before Tool

Allowlist and blocklist models per team, per repo, or per file pattern. Engineering can use Claude on the backend and GPT on the frontend; regulated repos can require a specific model or block AI entirely. Policy version is stamped on every call.
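A first-match policy table of the kind described above might look like the following sketch. The table shape, globs, and model names are illustrative assumptions, not Raidu's actual policy schema.

```python
from fnmatch import fnmatch

# Hypothetical policy table; first matching row wins, deny by default.
POLICY = [
    # (repo glob, file glob, allowed models)
    ("payments-*", "*",     []),                           # regulated repo: block AI entirely
    ("backend-*",  "*.py",  ["claude-sonnet"]),            # backend pinned to one model
    ("*",          "*",     ["claude-sonnet", "gpt-4o"]),  # default allowlist
]

def allowed(repo: str, path: str, model: str) -> bool:
    """Return True if the policy permits this model for this repo and file."""
    for repo_glob, file_glob, models in POLICY:
        if fnmatch(repo, repo_glob) and fnmatch(path, file_glob):
            return model in models
    return False  # no rule matched: deny
```

An empty model list on a matching row blocks AI for that repo outright, which is how a "no AI in regulated code" rule falls out of the same mechanism as routing.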

03

Response scanning

Checkpoint 05 · Agent Response

Completions are scanned for hallucinated packages, insecure code patterns, license risks, and data exfiltration attempts before the suggestion ever reaches the editor. Clean suggestions pass. Risky ones are flagged and logged.
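One of those checks, hallucinated-package detection, can be sketched by parsing the completion and comparing its imports against a known-package set. The allowlist here is a stand-in; in practice it would come from a registry feed, and this sketch covers only Python imports.

```python
import ast

# Hypothetical allowlist; a real check would consult a package registry.
KNOWN_PACKAGES = {"requests", "numpy", "flask", "os", "json", "re"}

def flag_unknown_imports(completion: str) -> list[str]:
    """Return import names in a Python completion that are not known packages."""
    flagged = []
    for node in ast.walk(ast.parse(completion)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        flagged += [name for name in names if name not in KNOWN_PACKAGES]
    return flagged
```

A completion that imports a package nobody has ever published is the classic setup for a supply-chain squat; flagging it before it reaches the editor is cheaper than catching it in CI.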

04

Cryptographic audit trail

Post-execution

Every interaction is written to WORM storage: the developer identity, the prompt, the redactions, the model, the policy version, the response, and the decision. RSA-4096 signed. SHA-256 chained. RFC 3161 timestamped. 10-year retention by default.
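The chaining step can be illustrated with a minimal sketch: each record's hash covers the previous record's hash, so altering any record breaks every link after it. Signing and RFC 3161 timestamping are deliberately out of scope here; this shows only the SHA-256 chain.

```python
import hashlib
import json

GENESIS = "0" * 64  # conventional starting link for the sketch

def chain(records: list[dict]) -> list[dict]:
    """Link each record to its predecessor by SHA-256."""
    prev, out = GENESIS, []
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        out.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return out

def verify(chained: list[dict]) -> bool:
    """Recompute every link; a tampered record invalidates the chain."""
    prev = GENESIS
    for entry in chained:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Verification needs only the records and the hashes, which is why an auditor can check a chain without access to the environment that produced it.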

Integration

Zero IDE change. One base URL.

Cursor speaks the OpenAI API. Raidu speaks OpenAI back. Swap the base URL and every prompt from every developer is governed from that moment on.

Cursor · Settings · Models
# In Cursor: Settings → Models → Override OpenAI Base URL
OPENAI_BASE_URL=https://proxy.raidu.com/acme-corp/openai
OPENAI_API_KEY=raidu_xxx   # scoped, rotatable, revocable

# Every request now carries:
#   x-raidu-policy:     cursor.eng.v4
#   x-raidu-record-id:  rec_01JBVX7P9A8Z8PTQJG4K9NDJ4W
#   x-raidu-decision:   allow | mask | deny
#   x-raidu-signature:  MIIFxjCCA66gAwIBAgI...
Questions

What engineering leaders ask before they roll this out.

Does Raidu require a Cursor plugin? +
No. Raidu is a transparent proxy. Cursor is pointed at a Raidu base URL via its existing OpenAI-compatible model configuration. Developers see no difference in the IDE.
What is the latency overhead on completions? +
Under 100 ms per checkpoint at p95, measured on n2-standard-4 in GCP us-east1. Tab completion and inline edits remain interactive. Long completions amortize the overhead across the stream.
Can I allowlist or block specific models inside Cursor? +
Yes. Raidu's policy engine routes per team, per repository, or per file pattern. You can require Claude for regulated repos, allow GPT for internal tooling, and block everything for anything matching a sensitive path.
How does Raidu handle Cursor's privacy mode? +
Privacy mode prevents Cursor from storing your data; Raidu governs what leaves Cursor in the first place. They are complementary: privacy mode plus Raidu gives you both no-storage and enforced masking with cryptographic proof.
Does this integration cover Cursor's agent and composer features? +
Yes. Any outbound model call (completion, inline edit, chat, Composer agent loop, multi-file edit) passes through the same five-checkpoint runtime and lands in the same signed audit trail.
Can my auditor verify the audit trail independently? +
Yes. The public verification endpoint accepts a record ID and returns the signature chain. Your auditor never needs access to your environment to confirm a record is untampered.