Proprietary code walks out
Your developers ship faster with Cursor. Raidu sits between the IDE and the model, masks the code Cursor sends outbound, enforces your policy on every completion, and signs the record. No plugin. No IDE change.
Cursor streams code context to frontier LLMs for tab completion, inline edits, and multi-file generation. Without governance, that context leaves your network unchecked.
Four failure modes security and compliance teams see every week. Each one is a signed record waiting to exist.
Cursor sends file context to cloud LLMs for completions. Trade secrets, API keys, and algorithms flow to third parties with no record of what left, who sent it, or what policy, if any, ran.
When Cursor writes code that ships, auditors ask who approved it and on what basis. Without Raidu, there is no record of the prompt, the model, the policy version, or the human review.
Developers install Cursor independently. Security has no view into which models are in use, which repos are exposed, or which policies, if any, apply. The first signal is often an incident.
SOC 2, HIPAA, and the EU AI Act require evidence of AI system usage and controls. Cursor activity without governance produces none of it. Policies without proof do not survive an audit.
Every Cursor prompt passes through the five-checkpoint firewall. The policy is shared, the record is signed, the runtime is the same one your other agents already use.
Raidu scans every prompt Cursor sends outbound. Secrets, credentials, customer PII, and flagged business logic are replaced with deterministic tokens the model can still reason about. Nothing leaves your network unmasked.
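Deterministic masking can be sketched in a few lines. This is an illustrative model, not Raidu's implementation: the detection patterns, token format, and local vault are all assumptions; the point is that the same secret always maps to the same placeholder, so the model can reason about it consistently while the real value never leaves.

```python
import hashlib
import re

# Illustrative patterns only -- a real scanner covers far more classes.
PATTERNS = {
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with deterministic tokens; return masked text and the local vault."""
    vault: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for match in set(pattern.findall(text)):
            # Stable token: same value -> same hash -> same placeholder, every time.
            digest = hashlib.sha256(match.encode()).hexdigest()[:8]
            token = f"<{label}_{digest}>"
            vault[token] = match  # kept on your network for unmasking
            text = text.replace(match, token)
    return text, vault

def unmask(text: str, vault: dict[str, str]) -> str:
    """Restore original values in the model's response before it reaches the editor."""
    for token, original in vault.items():
        text = text.replace(token, original)
    return text
```

Because the tokens are deterministic, a secret that appears in ten files masks to the same placeholder in all ten, and completions that reference it still make sense after unmasking.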
Allowlist and blocklist models per team, per repo, or per file pattern. Engineering can use Claude on the backend and GPT on the frontend; regulated repos can require a specific model or block AI entirely. Policy version is stamped on every call.
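Per-repo routing reduces to first-match rule evaluation. A minimal sketch, assuming glob-style repo patterns and hypothetical rule fields and model names (none of these are Raidu's actual schema):

```python
from fnmatch import fnmatch

# Most specific rules first; the catch-all "*" comes last.
RULES = [
    {"repo": "payments-*", "allow": []},  # regulated repos: AI blocked entirely
    {"repo": "backend-*",  "allow": ["claude-sonnet-4"]},
    {"repo": "frontend-*", "allow": ["gpt-4o"]},
    {"repo": "*",          "allow": ["claude-sonnet-4", "gpt-4o"]},
]

def resolve(repo: str, model: str) -> str:
    """First matching rule wins; anything unmatched is denied by default."""
    for rule in RULES:
        if fnmatch(repo, rule["repo"]):
            return "allow" if model in rule["allow"] else "deny"
    return "deny"
```

An empty allowlist on a pattern is how a regulated repo blocks AI outright, and the default-deny fallthrough means an unconfigured repo never silently passes.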
Completions are scanned for hallucinated packages, insecure code patterns, license risks, and data exfiltration attempts before the suggestion ever reaches the editor. Clean suggestions pass. Risky ones are flagged and logged.
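One of those checks, hallucinated-package detection, can be sketched as an allowlist scan over import statements. The allowlist and the single check are illustrative; a real scanner would also cover insecure patterns, license risk, and exfiltration attempts.

```python
import re

# Hypothetical known-good set -- in practice this would be your lockfiles
# plus a curated registry, not a hardcoded list.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask", "os", "json", "re"}

IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def scan_completion(code: str) -> list[str]:
    """Return imported top-level packages that are not on the allowlist."""
    return sorted({m for m in IMPORT_RE.findall(code) if m not in KNOWN_PACKAGES})
```

A non-empty result does not block the suggestion by itself; it is the signal that gets the completion flagged and logged rather than passed through clean.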
Every interaction is written to WORM storage: developer identity, prompt, redactions, model, policy version, response, decision. RSA-4096 signed. SHA-256 chained. RFC 3161 timestamped. 10-year retention by default.
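The SHA-256 chaining is what makes the log tamper-evident: each record embeds the digest of the one before it, so altering any entry breaks every digest after it. A minimal sketch of the chain alone (RSA-4096 signing and RFC 3161 timestamping are omitted here, and the record fields are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first record

def append_record(chain: list[dict], payload: dict) -> list[dict]:
    """Append a record whose hash covers both its payload and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain: list[dict]) -> bool:
    """Recompute every digest; any edit anywhere invalidates the chain."""
    prev = GENESIS
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        good = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != good:
            return False
        prev = rec["hash"]
    return True
```

Signing each record's hash and timestamping it with a third-party authority is what turns this from tamper-evident into auditor-grade evidence.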
Cursor speaks the OpenAI API. Raidu speaks OpenAI back. Swap the base URL and every prompt from every developer is governed from that moment on.
# In Cursor: Settings → Models → Override OpenAI Base URL
OPENAI_BASE_URL=https://proxy.raidu.com/acme-corp/openai
OPENAI_API_KEY=raidu_xxx # scoped, rotatable, revocable
# Every request now carries:
# x-raidu-policy: cursor.eng.v4
# x-raidu-record-id: rec_01JBVX7P9A8Z8PTQJG4K9NDJ4W
# x-raidu-decision: allow | mask | deny
# x-raidu-signature: MIIFxjCCA66gAwIBAgI...
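The headers above are also consumable on the client side, for example by CI tooling that wants to act on the decision. A small sketch, using the header names from the snippet; the handling logic is an assumption, not a Raidu SDK:

```python
VALID_DECISIONS = {"allow", "mask", "deny"}

def handle_governance(headers: dict[str, str]) -> str:
    """Extract the proxy's decision and record ID from response headers."""
    decision = headers.get("x-raidu-decision", "deny")  # fail closed
    if decision not in VALID_DECISIONS:
        raise ValueError(f"unknown decision: {decision}")
    record_id = headers.get("x-raidu-record-id", "")
    return f"{decision}:{record_id}"
```

Defaulting a missing header to deny keeps the client fail-closed if a response ever arrives that did not pass through the proxy.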