How conexor.io enforces zero data exposure — even if our servers are compromised
The promise of AI-powered data access sounds exciting until you ask the obvious question: where do your database credentials actually live? When you connect an AI assistant to your production database, those credentials have to exist somewhere. In most SaaS solutions, that "somewhere" is a server-side secrets store — often encrypted at rest, but decryptable by the vendor. If that vendor gets breached, your credentials are exposed. We designed conexor.io so that outcome is structurally impossible.
The problem with how most platforms store credentials
When an AI system needs to query your database, it needs a connection string: a URL containing your host, port, database name, username, and password. Virtually every platform in this space stores that string on their servers. The best of them encrypt it at rest using a platform-managed key — which sounds safe until you realize the platform also holds the decryption key. A sufficiently motivated attacker who gains access to the database and the key management system gets everything.
This is not a hypothetical. High-profile SaaS breaches have repeatedly exposed customer credentials stored in exactly this pattern. The lesson is not to use stronger passwords — it is to eliminate the scenario where a breach of the vendor's infrastructure yields usable secrets.
Our architecture: encrypt before it leaves your environment
In conexor.io, credential encryption happens before the connection string ever reaches our servers. When you add a database in the dashboard, the browser (or CLI agent) encrypts the connection string using AES-256-GCM with a key derived via PBKDF2. Only the ciphertext is transmitted. Our servers store ciphertext — they never receive the plaintext.
The encryption key is derived from a secret that only you control: your API key. We store a salted hash of your API key for authentication purposes, but we never store the key itself in a form that would allow us to reconstruct it. The hash is one-way. This means that even if an attacker compromises our entire database — every row, every column — they cannot decrypt a single connection string, because they do not have your API key.
🔑 The key insight
The encryption key is derived from a secret only the customer controls: their API key, which we never store in recoverable form. Even with full read access to our database and infrastructure, an attacker cannot decrypt your connection strings. The guarantee is cryptographic, not procedural: without the original API key, PBKDF2 cannot reproduce the decryption key, and AES-GCM's authentication tag makes every attempt with a wrong key fail outright.
On-premise mode: the connection string never leaves your network
For teams with the strictest security requirements, we offer an on-premise agent mode. In this configuration, the connection string is stored and used exclusively within your own network. The agent process runs inside your infrastructure — behind your firewall, in your VPC, on your terms.
What does leave your network in on-premise mode? Two things, and only two things: schema metadata (table names, column names, types — no row data) used to generate MCP tools, and the results of queries — but only queries executed through our SELECT-only sandbox, and only the result rows that the AI actually requested. We never receive INSERT, UPDATE, or DELETE queries. We never receive the raw connection string. We cannot initiate a connection to your database ourselves.
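A production sandbox would validate queries with a full SQL parser, but the read-only gate can be sketched with a keyword guard. This is a simplified illustration of the idea, not conexor.io's actual implementation:

```javascript
// Illustrative SELECT-only guard: accept a statement only if it starts
// with SELECT and contains no write or DDL keywords anywhere.
// A real sandbox should parse the SQL rather than pattern-match it.
const FORBIDDEN = /\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT)\b/i;

function isReadOnly(sql) {
  const stripped = sql.trim();
  return /^SELECT\b/i.test(stripped) && !FORBIDDEN.test(stripped);
}
```

Anything that fails the check is rejected before it ever reaches the database driver, which is why write statements never cross the boundary in the first place.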
The practical implication: even if our entire cloud infrastructure were seized by a hostile actor, they would have no path to your database. There is no stored credential. There is no persistent tunnel. For the credential-theft threat model, the attack surface on our side is zero: there is nothing to steal.
The audit trail: full visibility without full exposure
One concern that comes up often with privacy-preserving architectures is auditability. If you never see the data, how do you know what happened? Our approach is to log everything except the data.
Every query executed through conexor.io generates an immutable audit record containing: the timestamp, the authenticated user identity, the AI model that invoked the tool, the tool name and parameters, the row count returned, the query hash, and the execution duration. What the audit log does not contain: the actual rows returned. We know that the tool "get_orders_by_country" ran at 09:11:32 UTC and returned 12 rows. We do not know what those rows contained.
This is a deliberate design choice. Compliance teams at SOC 2- and HIPAA-regulated companies need to prove that data access was authorized, logged, and attributable to a specific human. They do not need us to store a copy of the data we touched. The audit trail satisfies the former without creating the risk of the latter.
Audit records are tamper-evident and exportable to CSV or any SIEM. They are append-only in our storage layer — even we cannot modify or delete them after the fact.
Security as a design principle, not a feature
Every architecture decision described above was made before a single line of product code was written. This is not a compliance checkbox added after the fact — it is the reason the system is structured the way it is. We cannot sell a security-first product and simultaneously hold the keys to your kingdom.
The result is a system where the trust boundary is explicit and narrow: you trust us to execute queries accurately and to store ciphertext faithfully. You do not need to trust us with your credentials, your data, or your audit history. The cryptography enforces that boundary, not our policy.
If you have questions about our security architecture, want to review our threat model document, or are evaluating conexor.io for a security-sensitive deployment, we welcome the scrutiny. Security teams should ask hard questions. We have answers.