The Future is Small

Since our 2019 pivot, we've pursued a contrarian thesis: the future of AI isn't in ever-larger monolithic models, but in orchestrated networks of specialised Small Language Models (SLMs).

The Problem with Monoliths

Large Language Models (LLMs) like GPT-4 are powerful but economically unsustainable for most applications. API costs scale linearly with usage, latency is unpredictable, and you're dependent on external providers. For production systems handling millions of requests, that economic model breaks down.

The SLM Alternative

Small Language Models (1-7B parameters) can run on commodity hardware, respond in milliseconds, and cost a fraction of what equivalent LLM API calls do. The trade-off is capability—but what if you could combine many specialists instead of relying on one generalist?

Technologies We're Building On

Cognitive Hive AI

CHAI Architecture

Our novel architecture for orchestrating swarms of specialised AI agents. Like a hive mind, each agent handles specific tasks while a coordinator manages collaboration, routing, and consensus.

Small Language Models

Efficient Intelligence

Research into fine-tuning, quantisation, and deployment of models under 7B parameters. Focus on task-specific specialists that outperform generalists in their domain.

Model Orchestration

Intelligent Routing

Techniques for dynamically routing requests to the optimal specialist, aggregating outputs from multiple models, and managing consensus when agents disagree.
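
To make the idea concrete, here is a toy sketch of capability-based routing with majority-vote consensus. The registry and the lambda "models" are illustrative stand-ins for real specialist SLMs; none of these names come from the orchestration layer itself.

```python
from collections import Counter

# Hypothetical specialist registry keyed by task capability.
SPECIALISTS = {
    "sentiment": [
        lambda text: "positive",   # specialist A
        lambda text: "positive",   # specialist B
        lambda text: "negative",   # specialist C disagrees
    ],
    "summarise": [lambda text: text[:40]],
}

def route(task: str, text: str) -> str:
    """Fan the request out to every specialist registered for the task,
    then resolve disagreement by simple majority vote."""
    models = SPECIALISTS.get(task)
    if not models:
        raise ValueError(f"no specialist registered for {task!r}")
    outputs = [model(text) for model in models]
    winner, _votes = Counter(outputs).most_common(1)[0]
    return winner

print(route("sentiment", "Great product, would buy again."))  # → positive
```

A production router would also weight votes by each specialist's measured accuracy and fall back to a generalist when no capability matches; this sketch shows only the fan-out/vote shape.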

Fault Tolerance

Resilient Systems

Actor-model supervision trees for AI systems. When a specialist fails, its supervisor restarts it and the system self-heals—no single point of failure, no cascading errors.
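
The restart pattern can be sketched in a few lines. This is a minimal illustration of one-for-one supervision in the spirit of Erlang/OTP, not the system's actual API; `FlakySpecialist`, `Supervisor`, and the simulated fault are all hypothetical.

```python
# Simulated transient fault: the first two calls crash, later calls succeed.
fault = {"remaining": 2}

class FlakySpecialist:
    def handle(self, request: str) -> str:
        if fault["remaining"] > 0:
            fault["remaining"] -= 1
            raise RuntimeError("transient failure")
        return f"handled: {request}"

class Supervisor:
    """One-for-one supervisor: replace a crashed child and retry,
    escalating only after max_restarts."""
    def __init__(self, make_child, max_restarts: int = 3):
        self.make_child = make_child
        self.max_restarts = max_restarts
        self.child = make_child()

    def call(self, request: str) -> str:
        for _ in range(self.max_restarts + 1):
            try:
                return self.child.handle(request)
            except RuntimeError:
                self.child = self.make_child()  # restart with a fresh child
        raise RuntimeError("restart limit exceeded; escalating")

sup = Supervisor(FlakySpecialist)
print(sup.call("classify this ticket"))  # succeeds on the third attempt
```

Real supervision trees add restart-intensity windows and sibling strategies (one-for-all, rest-for-one); the core idea—contain the failure, restart, escalate only as a last resort—is the same.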

RAG Pipelines

Retrieval-Augmented Generation

Efficient embedding generation, vector search optimisation, and context window management. Grounding SLM outputs in domain-specific knowledge bases.
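
As a toy illustration of the retrieve-then-ground loop: bag-of-words vectors and cosine similarity stand in here for a real embedding model and vector index, and the corpus is invented. Production pipelines would use dense embeddings and approximate nearest-neighbour search.

```python
import math
import re
from collections import Counter

DOCS = [
    "Simplex supports actor-based supervision trees.",
    "CHAI routes requests to specialist agents.",
    "Quantisation shrinks models to fit consumer GPUs.",
]

def embed(text: str) -> Counter:
    # Crude stand-in for an embedding model: token counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Ground the model's prompt in the most relevant document.
context = retrieve("which agents handle routing requests?")[0]
prompt = f"Context:\n{context}\n\nQuestion: which agents handle routing?"
```

Context window management then becomes a packing problem: rank retrieved chunks and include as many as fit the specialist's (smaller) context budget.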

Edge Computing

Local-First AI

Running AI workloads on local hardware for privacy, latency, and cost benefits. Techniques for model quantisation and efficient inference on consumer GPUs.
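
The core trick behind fitting models onto consumer hardware is quantisation. Below is an illustrative symmetric int8 scheme for a single weight vector; real toolchains use per-channel or group-wise scales and calibration, so treat this purely as a sketch of the idea.

```python
def quantise(weights: list[float]) -> tuple[list[int], float]:
    """Map floats into [-127, 127] integers plus one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantise(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.12, -0.50, 0.33, 0.02]
q, scale = quantise(w)           # four int8 values plus one float scale
restored = dequantise(q, scale)  # approximate original weights
```

Storing one byte per weight (plus a scale) instead of four is where the 4x memory saving comes from; the rounding error per weight is bounded by half the scale.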

From Pivot to Product

2019

Strategic Pivot

Scaled back consultancy operations to focus on R&D. Began systematic research into AI/ML technologies, evaluating frameworks, model architectures, and deployment strategies.

2020

Transformer Deep Dive

Intensive study of transformer architectures, attention mechanisms, and the emerging ecosystem of open-source models. Experimented with BERT, GPT-2, and early fine-tuning techniques.

2021

Distributed Systems

Research into actor models (Erlang/OTP, Akka) for building fault-tolerant distributed systems. Explored how these patterns could apply to AI orchestration.

2022

Multi-Agent Systems

Prototyped multi-agent architectures where specialised models collaborate. Developed early concepts for what would become CHAI (Cognitive Hive AI) architecture.

2023

CHAI Formalisation

Formalised the Cognitive Hive AI architecture. Defined patterns for specialist agents, coordinator nodes, consensus protocols, and fault recovery. Published internal specifications.

2024

Simplex Development

Began building Simplex—a programming language designed from the ground up for AI-native development. All research insights encoded into the language primitives.

2025

Production Systems

Simplex compiler and runtime in production. Building Futura (trading), Aether (CMS), and Gratia (support) as proof points for the CHAI architecture and SLM-first approach.

Why We Built Simplex

Six years of research led to an inescapable conclusion: existing languages weren't designed for the AI era.

Native AI Primitives

  • First-class agent definitions with supervision
  • Built-in embedding and vector operations
  • Native prompt templating and context management
  • Type-safe model interfaces

Fault-Tolerant Runtime

  • Actor model with supervision trees
  • Automatic agent restart on failure
  • Message-passing concurrency
  • No shared mutable state

CHAI Built-In

  • Coordinator and specialist patterns
  • Consensus and voting protocols
  • Dynamic routing based on capability
  • Hive orchestration primitives

Production-Ready

  • Content-addressed code distribution
  • Deterministic builds
  • 75+ standard library modules
  • Cross-platform runtime

See Research in Action

Explore the products we're building with these technologies.