Research

The Science

Ants coordinate through traces in the environment — pheromones deposited on the ground that other ants detect and respond to. No messages. No central controller. The environment carries the state. We asked: does this work for AI agents? We built a fleet. We measured it. Here's what we found.

The research program that produced these papers was itself conducted by the fleet — 21 agents, defined seats, a shared bus substrate. The Ψ metric was derived from measuring the fleet that generated the research. The proof is recursive. The fleet is not a demo. The fleet is the argument.

ANTS 2026 — International Conference on Swarm Intelligence
Darmstadt, June 2026

Fleet Divergence Parameter Ψ

A proposed order parameter for cognitive stigmergy in LLM agent systems.

Ψ = Syn / (Syn + Red)

Derived from Partial Information Decomposition. Unlike the Vicsek φ, which measures consensus, Ψ captures productive divergence in role-differentiated agent fleets.
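As a toy illustration (the paper derives Syn and Red via Partial Information Decomposition; the estimator itself is not reproduced here), the order parameter is simply the synergistic fraction of the information agents share:

```python
def psi(synergy: float, redundancy: float) -> float:
    """Fleet divergence parameter: the synergistic fraction of shared
    information. Syn and Red are assumed to be nonnegative quantities
    in bits, as produced by a PID estimator (not shown here)."""
    total = synergy + redundancy
    if total <= 0:
        raise ValueError("Syn + Red must be positive")
    return synergy / total

# Agents that mostly echo one another: redundancy dominates, low Psi.
print(psi(synergy=0.25, redundancy=0.75))  # 0.25

# Perfectly balanced fleet: Psi = 0.5, just below the reported
# optimum Psi* = 0.588.
print(psi(synergy=1.0, redundancy=1.0))    # 0.5
```

A fleet of identical agents saying the same thing drives Ψ toward 0; agents whose contributions only make sense jointly drive it toward 1. The inverted-U result says neither extreme is where quality peaks.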

Explore the Ψ Dashboard: an interactive explorer with sliders, charts, swarm simulation, and a bootstrap loop.

ρ = 0.775: correlation with output quality. 95% CI [0.648, 0.868], circularity-free. Partial ρ = 0.676 controlling for fleet size.

Ψ* = 0.588: optimal coordination peak. Inverted-U supported: F = 35.95, p < 0.0001, in 100% of 10K bootstraps. CI [0.535, 0.683].

60.6×: subsumption of K*. ΔR² = 0.551 when adding Ψ to K*, versus ΔR² = 0.005 when adding K* to Ψ.

N* ≈ 9.25: stigmergy crossover threshold. Above roughly 9.25 agents, stigmergic coordination is strictly more information-efficient than message passing: O(N) traces versus O(N²) messages.
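A sketch of the crossover arithmetic. The cost coefficients here are hypothetical, chosen only so the break-even point lands near the reported N* ≈ 9.25 (the paper's actual information-efficiency accounting is not reproduced): all-pairs messaging grows as N(N−1), while a shared substrate costs each agent a bounded number of deposits.

```python
def message_passing_cost(n: int, per_message: float = 1.0) -> float:
    """All-pairs direct messaging: every agent keeps every other agent
    current, so coordination cost scales as N(N-1), i.e. O(N^2)."""
    return per_message * n * (n - 1)

def stigmergic_cost(n: int, per_deposit: float = 8.25) -> float:
    """Shared-substrate coordination: each agent reads and writes the
    environment a bounded number of times, so cost scales as O(N).
    per_deposit = 8.25 is a hypothetical overhead picked so the
    break-even sits at N* = 1 + 8.25 = 9.25."""
    return per_deposit * n

for n in range(8, 12):
    mp, st = message_passing_cost(n), stigmergic_cost(n)
    cheaper = "stigmergy" if st < mp else "messaging"
    print(f"N={n:2d}  messaging={mp:5.1f}  stigmergy={st:6.2f}  -> {cheaper}")
```

With these coefficients, messaging wins at N = 9 (72 vs 74.25) and stigmergy wins at N = 10 (90 vs 82.5); the quadratic term dominates from there on.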

Validated on claude-bus: an operational system coordinating 21 persistent LLM agents via a SQLite-backed passive broker with typed pheromone deposits and temporal decay — exhibiting all five of Grassé's stigmergy properties. N=92 instances, 12 rounds, 56K+ heartbeats.
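A minimal sketch of what a SQLite-backed passive broker with typed deposits and temporal decay could look like. The schema, function names, and half-life are hypothetical, not claude-bus internals; the point is that deposit and retrieval never name the depositor, and trace strength attenuates with age.

```python
import math
import sqlite3
import time

# Hypothetical minimal schema; the real claude-bus schema is not public.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pheromones (
    topic TEXT, value TEXT, kind TEXT,
    strength REAL, deposited_at REAL)""")

def deposit(topic, value, kind, strength=1.0):
    """Any agent may deposit a typed trace; no recipient is named."""
    conn.execute("INSERT INTO pheromones VALUES (?, ?, ?, ?, ?)",
                 (topic, value, kind, strength, time.time()))

def sense(topic, half_life=3600.0):
    """Retrieve traces for a topic, attenuated by exponential temporal
    decay. The depositor's identity is never stored or returned
    (depositor-anonymous retrieval)."""
    now = time.time()
    rows = conn.execute(
        "SELECT value, kind, strength, deposited_at "
        "FROM pheromones WHERE topic = ?", (topic,)).fetchall()
    return [(value, kind,
             strength * math.exp(-math.log(2) * (now - t) / half_life))
            for value, kind, strength, t in rows]

deposit("build/frontend", "svelte scaffold ready", "trail")
for value, kind, strength in sense("build/frontend"):
    print(kind, value, round(strength, 3))
```

The broker is passive by construction: it never routes, schedules, or addresses. Agents coordinate only through what they sense in the table.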

"Our observations suggest that natural stigmergy drives convergence while manufactured stigmergy appears to drive productive divergence."

A five-agent proving ground (four builders, one tester), initialized with manufactured traces only (architecture decisions, a shared bus, role-specific deposits), produced a working desktop application (Rust, Svelte, dual SQLite) with a complete LLM-powered sedimentation pipeline. The human operator participated throughout, but exclusively via trace deposit, never direct instruction. Dependency resolution emerged from pheromone structure, not prescription.

NIST/CAISI RFI — NIST-2025-0035
Submitted March 9, 2026

Intrinsic Access Control (InAC)

A proposed sixth access control model. Present in every AI agent system we examined. Unstudied in the standards literature — until now.

Every AI agent system — from research deployments to enterprise platforms including AWS AgentCore, Microsoft Agent 365, Google A2A, and Anthropic MCP — appears to rely on a class of access control that has not been formally named, defined, or studied. Intrinsic Access Control (InAC) governs agent behavior between deterministic enforcement points. It is probabilistic, intrinsically enforced, and vulnerable to adversarial manipulation in ways no existing standard addresses.

Explore the Security Dashboard: an interactive InAC model, the ELP framework, threat analysis, and a compliance calculator.

DAC, MAC, RBAC, ABAC, PBAC: the five existing, formally specified models.

InAC (Intrinsic Access Control): unnamed until now, and independent of all five.

The agent is simultaneously the subject being controlled and the enforcement mechanism. InAC is not an implementation choice — it is a structural feature of all systems containing autonomous LLM agents. Naming it is prerequisite to governing it.

"You cannot write a permission policy for an agent that has no stable global name."

The submission also introduces the Enforcement Location Principle (ELP): a formal framework specifying where different enforcement mechanisms belong in multi-agent architectures. We assessed nine major agent platforms; all nine violate at least one ELP placement requirement. The industry governance ceiling sits at L2 of 6: no assessed system reaches L3.

Full Paper: Cognitive Stigmergy Across Federated Trust Boundaries

Cognitive Stigmergy

Coordination through environmental traces by fully cognitive agents. A formal model theorized in 2007 — we believe this is its first complete instantiation.

Stigmergy — coordination through environmental traces rather than direct communication — has been studied in biological and robotic swarm systems since Grassé (1959). Its application to fully cognitive AI agents coordinating across networked trust boundaries had no prior formal treatment. This paper proposes one.

1. A unified formal definition of the digital pheromone as a six-tuple P = (k, v, c, f, t, a) with semantic content, epistemic confidence, and freshness parameters.

2. A typed taxonomy of six pheromone categories (trail, alarm, recruitment, territory, consensus, and marker), each with distinct decay functions and biological analogs.

3. A convergence proof: the federated pheromone diffusion system reaches stable equilibrium under exponential decay when λ > α(n−1)·H, a design constraint for production deployments.

4. 24 multi-agent frameworks scored on an 8-criterion shared-environment rubric (ICC = 0.847). Binary discriminator: depositor-anonymous retrieval (C2, 24/24 correct). None fully implement temporal decay (C4 = 0/21 independent systems).

5. Empirical evidence for manufactured stigmergy: an A/B experiment (N=5, Hedges' g = 1.96, r = 1.0) demonstrating that intellectual-tradition installation produces measurable capability improvement across six cognitive dimensions.
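The six-tuple from contribution 1 and the design constraint from contribution 3 can be sketched together. The field interpretations beyond key and value, and all numeric parameters, are our assumptions for illustration, not the paper's formal definitions:

```python
from dataclasses import dataclass

@dataclass
class Pheromone:
    """Digital pheromone six-tuple P = (k, v, c, f, t, a). Field
    readings follow the summary above (semantic content, epistemic
    confidence, freshness); they are a gloss, not the paper's
    formal definitions."""
    k: str    # key: what the trace is about
    v: str    # value: semantic content
    c: float  # epistemic confidence in [0, 1]
    f: float  # freshness parameter governing decay
    t: float  # deposit timestamp
    a: str    # assumed: category (trail, alarm, recruitment,
              # territory, consensus, marker)

def diffusion_is_stable(lam: float, alpha: float, n: int,
                        entropy: float) -> bool:
    """Design constraint from the convergence proof: under exponential
    decay, federated diffusion reaches stable equilibrium when
    lambda > alpha * (n - 1) * H."""
    return lam > alpha * (n - 1) * entropy

# 21 agents with hypothetical coupling alpha and per-trace entropy H:
print(diffusion_is_stable(lam=0.5, alpha=0.01, n=21, entropy=2.0))
```

Read as a deployment rule: the more agents diffusing traces to one another (larger n) and the richer each trace (larger H), the faster the decay rate λ must be for the substrate not to saturate.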

Publications

Submitted — Late-Breaking, 2026

Fleet Divergence Parameter Ψ: An Order Parameter for Cognitive Stigmergy in LLM Agent Systems

ANTS 2026 — International Conference on Swarm Intelligence, Darmstadt, Germany, June 2026. Lecture Notes in Computer Science, Springer.

Barton E. Nicholls, Cube Commons, Inc.

Download paper (PDF) →

Submitted — March 9, 2026

Response to NIST/CAISI Request for Information: Security Considerations for AI Agent Systems

NIST-2025-0035. Submitted to National Institute of Standards and Technology, Cyber AI Security Initiative (CAISI).

Barton E. Nicholls, Cube Commons, Inc.

Download submission (PDF) →

In preparation 2026

Cognitive Stigmergy Across Federated Trust Boundaries: A Formal Model for AI Agent Coordination

Target: Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) or equivalent.

Research Team, Cube Commons, Inc.

Internet-Draft 2026

DNS-Based Service Discovery for Computational Services

IETF Internet-Draft: draft-nicholls-dnssd-compute-00. IETF dnssd Working Group, March 2026. Defines _cs._tcp service type and TXT record conventions for advertising containers, microservices, inference endpoints, and autonomous agents via standard DNS-SD (RFC 6763).

Barton E. Nicholls, Cohesive Networks.

Six Biological Faces of the Cube

These six systems are frozen cross-sections of a single coordination phenomenon.

"Ψ is the lens that makes the unity visible. bus.db is the substrate that makes the unity buildable."

The Seventh Face: The Human-Made One

A vinyl record is a pheromone trail pressed in wax. Every play reinforces the groove. The record doesn't know what music sounds like — it just carries the trace. The commons works the same way. Agents deposit traces. Other agents respond. The environment self-organizes through use, not administration.


Now see how the architecture implements this → Architecture

The proof is recursive.