
Continue, centralized.

Continue is open source, pluggable, and loved by developers. It is also configured per developer, and that is the governance problem. Raidu gives you one runtime across every Continue install in your org.

Book a meeting · See the runtime
The tool
Continue
Open-source AI code assistant for VS Code and JetBrains.

Continue is configured through a local config.json. Every developer can pick models, endpoints, and prompts. Power for the developer. Visibility gap for security.

Without governance

Open source, decentralized configuration.

Continue's flexibility is its strength. Without a shared runtime, it is also four failure modes your security team will inherit.

Risk 01

Per-developer model sprawl

Continue reads a local config.json. One developer uses Claude, another uses a self-hosted model, a third uses a free API key from an unknown provider. Security has no central view and no central control.

Risk 02

Uncontrolled outbound context

Continue sends file context and chat history to whichever endpoint the developer configured. Without central redaction, secrets and proprietary logic flow to providers your procurement team never approved.

Risk 03

No cross-developer audit trail

If an incident happens, you cannot reconstruct which developer generated which code with which model under which policy. Continue writes no enterprise audit log.

Risk 04

Inconsistent safety posture

Some developers use models with built-in safety. Others do not. Without a shared checkpoint layer, the org's risk is the weakest per-developer config, not the best.

With Raidu

How Raidu governs Continue.

Push a single config to every Continue install. Every developer hits Raidu. One policy, one audit trail, every endpoint choice.

01

One apiBase for the whole org

Checkpoint 02 · Before LLM

Continue's config.json points at Raidu's OpenAI-compatible endpoint. You choose which models are available, enforce redaction before prompts leave, and flag any attempt to bypass the proxy.

02

Centralized model policy

Checkpoint 03 · Before Tool

Allowlist models per team, per repo, per file pattern. Claude for regulated code, GPT for internal tooling, self-hosted for air-gap. Developers pick from a curated menu. Security sets the menu.
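As a sketch, a per-team allowlist could look like the fragment below. The schema and field names are illustrative assumptions, not Raidu's published policy format; only the policy name and model identifiers come from the example elsewhere on this page.

```json
{
  "policy": "coding.eng.v7",
  "teams": {
    "payments":       { "allow": ["claude-sonnet-4.5"], "paths": ["services/payments/**"] },
    "internal-tools": { "allow": ["gpt-4o"] },
    "air-gapped":     { "allow": ["self-hosted-llama"] }
  }
}
```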

03

Response scanning

Checkpoint 05 · Agent Response

Every completion is scanned for insecure code, license risks, hallucinated imports, and exfiltration before the developer sees it. Safe output reaches the editor; blocked output is replaced by a logged reason.
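For illustration only (the real response shape is Raidu's, not shown here), a blocked completion might surface its logged reason like this:

```json
{
  "status": "blocked",
  "checkpoint": "05-agent-response",
  "reason": "hallucinated import: package not found in any known registry",
  "policy": "coding.eng.v7"
}
```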

04

Signed audit chain per developer

Post-execution

Every Continue interaction is tied to developer identity, repo, and policy version, then signed: RSA-4096 signatures, SHA-256 hash chaining, WORM retention. SOC 2 auditors can pull per-developer evidence on demand.
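A single record in that chain might look like the sketch below. The exact field names are assumptions, but each record carries the identity, repo, and policy version named above, plus the SHA-256 link to the previous record and an RSA-4096 signature.

```json
{
  "event": "completion",
  "developer": "dev@example.com",
  "repo": "acme-corp/payments-service",
  "model": "claude-sonnet-4.5",
  "policy_version": "coding.eng.v7",
  "timestamp": "2025-01-15T14:02:11Z",
  "prev_hash": "sha256:<digest-of-previous-record>",
  "signature": "rsa4096:<base64-signature>"
}
```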

Integration

Deploy one config.json.

Continue reads its config from a local file. Ship the Raidu config via MDM, dotfiles repo, or onboarding script, and every install is governed.

~/.continue/config.json
{
  "models": [
    {
      "title": "Claude Sonnet 4.5 (governed)",
      "provider": "openai",
      "model": "claude-sonnet-4.5",
      "apiBase": "https://proxy.raidu.com/acme-corp/anthropic",
      "apiKey": "raidu_xxx",
      "requestOptions": {
        "headers": { "x-raidu-policy": "coding.eng.v7" }
      }
    }
  ]
}

Ship via dotfiles, MDM, or onboarding script. One source of truth.
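One way to ship it: a minimal onboarding-script sketch that writes the governed config into place. The proxy URL, policy header, and `raidu_xxx` key are the placeholders from the example above; substitute your tenant's real values from MDM or a secrets manager.

```shell
#!/bin/sh
# Sketch of an onboarding script that installs the governed Continue config.
# The apiKey below is a placeholder, not a real credential.
CONFIG_DIR="${HOME}/.continue"
mkdir -p "$CONFIG_DIR"

# Quoted heredoc: nothing inside is expanded, so the JSON lands verbatim.
cat > "$CONFIG_DIR/config.json" <<'EOF'
{
  "models": [
    {
      "title": "Claude Sonnet 4.5 (governed)",
      "provider": "openai",
      "model": "claude-sonnet-4.5",
      "apiBase": "https://proxy.raidu.com/acme-corp/anthropic",
      "apiKey": "raidu_xxx",
      "requestOptions": {
        "headers": { "x-raidu-policy": "coding.eng.v7" }
      }
    }
  ]
}
EOF

echo "Governed Continue config installed at $CONFIG_DIR/config.json"
```

The same file can live in a dotfiles repo or be pushed by MDM; the mechanism matters less than having one source of truth.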
Questions

Questions from teams standardizing Continue.

Does Raidu fork Continue?
No. Continue already supports OpenAI-compatible apiBase. Pointing it at Raidu is a configuration change, not a code change.
Can I enforce that developers cannot bypass Raidu?
Yes. Combine managed config.json deployment, firewall egress rules that allow only the Raidu proxy domain for LLM endpoints, and Raidu's per-user key scoping. Bypass attempts are detectable and logged.
How does Raidu handle Continue's custom commands and slash commands?
Slash commands are just prompts. They pass through the same five-checkpoint runtime with the same redaction, policy, and signing.
What is the latency overhead?
Under 100 ms per checkpoint at p95. Developers notice no change in Continue's autocomplete speed.
Does this work for JetBrains as well as VS Code?
Yes. The config and runtime are identical across Continue's IDE plugins.
Can I restrict which models a team can use from Continue?
Yes. Either by issuing tenant-specific API keys with model allowlists, or by enforcing it in the Raidu policy. Unauthorized model requests are rejected with signed denials.