Our Products

Responsible AI — in three distinct forms

Each product tackles a different accountability problem. All three share the same founding principle: AI that can't explain itself, trace itself, or limit itself is a liability, not a feature.

Assistive navigation

Daedalus

Real-time assistive navigation and memory support for people who need a trustworthy guide — not a confident one that guesses.

Daedalus is designed for the people most let down by confident-but-wrong AI: blind and low-vision users navigating unfamiliar environments; people living with dementia or Alzheimer's who need a patient, honest companion; children learning their surroundings; and families who need visibility into what their loved ones were told and when.

The core design decision: when Daedalus isn't sure, it says so. Confidence-calibrated speech means a clearly identified hazard triggers an immediate interruption. An ambiguous object gets a hedged description. The system never narrates everything — it filters by relevance, limits volume per time window, and logs what was said for authorised review.

  • Hazard-first interruption: Stairs, drop-offs, and obstacles trigger immediate speech regardless of ongoing narration.
  • Confidence-calibrated output: Clear language for high-confidence detections; explicitly hedged phrasing when certainty is lower.
  • Narration rate limiting: Bounded speech per window prevents cognitive overload for users with memory or attention challenges.
  • Secure narration log: Timestamped, tamper-evident log accessible to authorised caregivers — not a surveillance tool, a safety record.
  • Memory support: Can surface contextual reminders relevant to the current environment — where objects were last seen, routines, upcoming items.
  • Caregiver visibility: Review interface shows what was narrated, when, and with what confidence — enabling informed conversations about care.

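The narration behaviour above can be sketched in a few lines. This is an illustrative model, not Daedalus's actual implementation: the `Narrator` class, the thresholds, and the phrasing are all assumptions chosen to show how hazard-first interruption, hedged output, and a bounded speech budget compose.

```python
import time
from collections import deque

HIGH_CONFIDENCE = 0.9   # assumed threshold: speak plainly
HEDGE_CONFIDENCE = 0.5  # assumed floor: below this, stay silent rather than guess

class Narrator:
    """Illustrative narration gate: hazards interrupt immediately,
    ambiguous detections are hedged, and total speech per window is bounded."""

    def __init__(self, max_utterances=5, window_seconds=30.0):
        self.max_utterances = max_utterances
        self.window = window_seconds
        self.spoken = deque()  # timestamps of recent utterances

    def _under_rate_limit(self, now):
        # drop timestamps outside the sliding window, then check the budget
        while self.spoken and now - self.spoken[0] > self.window:
            self.spoken.popleft()
        return len(self.spoken) < self.max_utterances

    def narrate(self, label, confidence, is_hazard, now=None):
        now = time.monotonic() if now is None else now
        if is_hazard and confidence >= HEDGE_CONFIDENCE:
            self.spoken.append(now)
            return f"Stop: {label} ahead."       # hazard-first interruption
        if not self._under_rate_limit(now):
            return None                          # bounded speech per window
        if confidence >= HIGH_CONFIDENCE:
            self.spoken.append(now)
            return f"{label} ahead."             # clear, high-confidence phrasing
        if confidence >= HEDGE_CONFIDENCE:
            self.spoken.append(now)
            return f"Possibly a {label} ahead."  # explicitly hedged phrasing
        return None                              # too uncertain to narrate at all
```

Note the ordering: the hazard branch sits above the rate-limit check, so safety-critical speech is never suppressed by the narration budget.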
Conversational data

MarcoPolo

Governed conversational access to distributed enterprise data — answer business questions in natural language without dismantling your governance model.

Enterprise data is fragmented by design: structured tables in PostgreSQL, documents in MongoDB, files in S3, spreadsheets in Excel. MarcoPolo bridges these with natural language queries while enforcing the access controls that hold those silos together.

The key constraint: MarcoPolo never writes. Every query plan is validated before execution; DuckDB stitches cross-source results inside strict memory and row limits; every answer is attributable to the sources it came from. Dashboards created from answers refresh through the same policies that governed the original query — no governance escape hatch at refresh time.

  • Multi-source natural language query: PostgreSQL, MongoDB, S3-compatible storage, JSON files, and Excel — queried in plain English.
  • Read-only execution enforcement: Validated query plans only. No writes, no schema modifications, no privilege escalation.
  • DuckDB-powered cross-source joins: Results stitched in memory within bounded limits — no permanent intermediate tables created.
  • Persistent governed dashboards: Pin query results as dashboards; refresh triggers re-run through the original RBAC and allowlist policies.
  • Workspace isolation & RBAC: Team-scoped workspaces with role-based access control and per-user datasource allowlists.
  • Full query audit: Every query and dashboard refresh is logged — who asked, what sources were touched, what was returned.

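The read-only guarantee above amounts to a validation gate in front of execution. The sketch below is a simplified illustration under stated assumptions: the function name, the naive keyword check, and the row limit are hypothetical, standing in for MarcoPolo's actual plan validator.

```python
# Hypothetical gate illustrating "validated, read-only query plans".
FORBIDDEN = {"INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "CREATE",
             "TRUNCATE", "GRANT", "COPY", "MERGE"}

def validate_plan(sql: str, allowed_sources: set, plan_sources: set,
                  max_rows: int = 10_000) -> dict:
    """Reject anything that isn't a bounded read against allowlisted sources."""
    first_word = sql.lstrip().split(None, 1)[0].upper() if sql.strip() else ""
    if first_word not in {"SELECT", "WITH"}:
        raise PermissionError(f"read-only violation: {first_word or 'empty query'}")
    # naive token scan; a real validator would walk the parsed plan, not the text
    tokens = {t.strip("();,").upper() for t in sql.split()}
    if tokens & FORBIDDEN:
        raise PermissionError(f"forbidden keywords: {sorted(tokens & FORBIDDEN)}")
    denied = plan_sources - allowed_sources
    if denied:
        raise PermissionError(f"datasource not on allowlist: {sorted(denied)}")
    return {"sql": sql, "row_limit": max_rows}  # bounded execution envelope
```

The design point the prose makes is visible here: validation happens before anything touches a datasource, and the per-user allowlist is an input to the check rather than an afterthought.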
AI gateway

Suez

A single control plane between your applications and every LLM, MCP tool, and autonomous agent behind them.

As organisations add AI features across products, the number of model calls, tool invocations, and agent behaviours multiplies — often faster than policy can keep up. Suez is the architectural answer: every interaction passes through a single gateway that enforces identity, policy, quotas, and rate limits before anything reaches a model or tool.

Policy enforcement is backed by Open Policy Agent, so rules are auditable, testable, and maintainable outside the gateway itself. Every policy decision — inputs, outputs, the rule that applied, the result — is logged to a tamper-evident audit store. Suez deliberately targets low added latency: control shouldn't come at the cost of responsiveness.

  • Universal AI traffic control: All LLM calls, MCP tool invocations, and agent interactions route through Suez — no side channels.
  • OPA-backed policy enforcement: Open Policy Agent rules govern every request. Policies are version-controlled, testable, and auditable independently of the gateway.
  • Identity, quotas & rate limits: Per-application, per-user, and per-model quotas with configurable rate limits enforced at the gateway.
  • Multi-provider model support: Route traffic to any supported LLM provider. Vendor lock-in is a governance risk — Suez treats it that way.
  • Agent and tool governance: Suez governs not just model calls but the tools agents are permitted to invoke and the policies that apply to agent-generated actions.
  • Tamper-evident audit log: Per-environment database isolation; every policy input and decision is recorded and protected from after-the-fact modification.

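The admission flow the list describes can be sketched as a single check that runs before any request reaches a model. Everything here is illustrative: the `GatewayPolicy` shape, the `admit` function, and the audit-record fields are assumptions, and the real enforcement is delegated to Open Policy Agent rather than hand-coded branches.

```python
import time
from dataclasses import dataclass, field

@dataclass
class GatewayPolicy:
    """Illustrative per-application controls checked before any model call."""
    allowed_models: set
    quota_tokens: int            # remaining token budget for this application
    max_requests_per_minute: int
    recent: list = field(default_factory=list)  # timestamps within the window

def admit(policy: GatewayPolicy, app_id: str, model: str,
          est_tokens: int, now: float) -> dict:
    """Check identity, quota, and rate limit; return a full audit record."""
    policy.recent = [t for t in policy.recent if now - t < 60.0]
    decision, reason = "allow", "ok"
    if model not in policy.allowed_models:
        decision, reason = "deny", "model not permitted for this application"
    elif est_tokens > policy.quota_tokens:
        decision, reason = "deny", "token quota exhausted"
    elif len(policy.recent) >= policy.max_requests_per_minute:
        decision, reason = "deny", "rate limit exceeded"
    else:
        policy.quota_tokens -= est_tokens
        policy.recent.append(now)
    # inputs and outcome are recorded whether the decision is allow or deny
    return {"app": app_id, "model": model, "tokens": est_tokens,
            "time": now, "decision": decision, "reason": reason}
```

Note that the audit record is produced on every path, allow or deny — which mirrors the gateway's promise that each policy decision, including its inputs and the rule that applied, lands in the audit store.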