Dry-run mode for AI database agents: preview the blast radius before anything changes
The dangerous moment in an AI database workflow is not always execution.
Often, it is the moment before execution, when nobody knows the blast radius yet.
The agent says a change is simple. The SQL looks plausible. The request sounds routine.
Then the query touches more rows than expected.
That is why production AI database agents need dry-run mode.
Dry-run is not a prompt instruction
A prompt instruction like “check before you act” is not enough.
A real dry-run is enforced by the database or the server-side tool layer: the agent can prepare a proposed operation, but the side effect cannot happen until the system has produced a structured preview.
For writes, that preview should show the exact entities affected. For exports, it should show row count and sensitivity. For broad reads, it should show scope and estimated cost.
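The enforcement point above can be sketched as a server-side gate that previews an operation without executing it. This is a minimal illustration, not a Conexor API: the names `ProposedOperation`, `DryRunPreview`, and `dry_run` are assumptions, and the row count comes from a read-only query against the same predicate the write would use.

```python
from dataclasses import dataclass

@dataclass
class ProposedOperation:
    kind: str   # "update", "export", or "read"
    sql: str
    params: dict

@dataclass
class DryRunPreview:
    kind: str
    affected_rows: int
    sample_ids: list
    sensitive: bool

def dry_run(op: ProposedOperation, count_rows) -> DryRunPreview:
    """Produce a structured preview with no side effects.

    count_rows is a callable that runs a read-only SELECT COUNT(*)
    against the same predicate the proposed write would use.
    """
    affected = count_rows(op)
    return DryRunPreview(
        kind=op.kind,
        affected_rows=affected,
        sample_ids=[],                 # filled from a LIMITed read in a real system
        sensitive=(op.kind == "export"),
    )
```

The important property is that `dry_run` is the only path the agent has: it can propose, but only the server can turn a proposal into a side effect.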
Related: Approval gates for AI database writes.
What a dry-run result should include
A useful dry-run response is not “looks good.”
It should include:
- operation type,
- affected row count,
- affected entity IDs or sample IDs,
- before and after values for writes,
- tenant or workspace scope,
- policy checks passed or failed,
- query budget impact,
- approval requirement,
- rollback or compensation hint,
- audit event ID.
That turns the model’s proposal into something a human or policy engine can inspect.
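A concrete way to hold that line is to validate the preview shape before anyone reviews it. The field names below mirror the checklist above but are illustrative, not a fixed schema; the check itself is just set containment over the result's keys.

```python
# Fields a reviewer or policy engine needs before approving anything.
# Names are assumptions that mirror the checklist above.
REQUIRED_FIELDS = {
    "operation", "affected_row_count", "sample_ids",
    "before_after", "scope", "policy_checks",
    "budget_impact", "requires_approval",
    "compensation_hint", "audit_event_id",
}

def is_inspectable(result: dict) -> bool:
    """A dry-run result is only reviewable if every required field is present."""
    return REQUIRED_FIELDS <= result.keys()

example = {
    "operation": "update",
    "affected_row_count": 42,
    "sample_ids": ["cust_1041", "cust_2203"],
    "before_after": {"plan": ["pro", "free"]},
    "scope": {"workspace": "ws_acme"},
    "policy_checks": {"tenant_isolation": "passed"},
    "budget_impact": {"rows_scanned": 42},
    "requires_approval": True,
    "compensation_hint": "restore plan from before-values",
    "audit_event_id": "evt_7f3a",
}
```

A result missing any of these fields is rejected before it ever reaches a human, which keeps "looks good" out of the loop entirely.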
Related: Tool result contracts for AI database agents.
Dry-run helps reads too
Dry-run is not only for writes.
A natural-language analytics request can be too broad even when it is read-only. “Show all churned customers” may be reasonable for one workspace and dangerous across every tenant.
A dry-run can classify the query, estimate its scope, check allowed views, and refuse if the request is too wide.
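That classification step can be as simple as comparing the query's estimated scope against policy limits before execution. The thresholds and function names here are assumptions for illustration; a real system would derive the estimates from the query planner and allowed views.

```python
def classify_read(estimated_rows: int, tenants_touched: int,
                  max_rows: int = 100_000) -> str:
    """Decide whether a read-only request is safe to run as-is.

    Returns "allow", "needs_narrowing", or "refuse".
    Thresholds are illustrative defaults, not policy.
    """
    if tenants_touched > 1:
        return "refuse"            # cross-tenant reads need explicit policy
    if estimated_rows > max_rows:
        return "needs_narrowing"   # ask the user to scope the request down
    return "allow"
```

Under this sketch, "show churned customers for workspace X" passes, while the same question asked across every tenant is refused before a single row is read.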
Related: AI database query budgets.
Keep execution deterministic
The safest pattern separates proposal from execution:
- The agent creates a proposed operation.
- The system runs a dry-run and returns structured evidence.
- A human or policy gate approves, narrows, or rejects.
- A deterministic server-side operation executes the approved change.
The final step should not be “let the model generate fresh SQL again.”
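One way to guarantee that is to execute by operation ID: the SQL and parameters are frozen at dry-run time, and the final step can only replay what was approved. This is a sketch with assumed names (`approved_ops` as a server-side store, `execute_sql` as the database driver), not a specific implementation.

```python
def execute_approved(op_id: str, approved_ops: dict, execute_sql) -> str:
    """Run exactly the stored, approved operation; never fresh SQL.

    approved_ops maps operation IDs to records frozen at dry-run time.
    execute_sql is the server-side driver call.
    Returns the audit event ID for the executed change.
    """
    op = approved_ops.get(op_id)
    if op is None or not op.get("approved"):
        raise PermissionError(f"operation {op_id} is not approved")
    # The SQL and params were captured during the dry-run;
    # the model has no way to alter them at this point.
    execute_sql(op["sql"], op["params"])
    return op["audit_event_id"]
```

Because the model never re-enters the loop between approval and execution, what the reviewer saw in the dry-run is exactly what runs.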
Related: Audit-ready MCP database workflows.
Where Conexor fits
Conexor is MCP infrastructure for AI-ready engineering teams. It connects databases and APIs to AI clients like Claude, ChatGPT, Cursor, n8n, Continue, and any MCP-compatible client.
For production teams, the goal is not just to let an AI agent act on data. It is to make the system prove what would happen before anything changes.