Research & Infrastructure

Where private AI actually happens.

We don't just build private software — we run the infrastructure to prove it works. Offline servers, a multi-GPU research workstation, and partnerships with organizations that refuse to hand their data to the cloud.

Real hardware. Real research. Completely offline.

We build and train AI models from scratch, and fine-tune existing ones, on hardware we own, in a facility we control. No cloud provider ever touches the data.

Online Now

Cloudless Production Server

You're looking at it. This website is hosted on our own hardware — no AWS, no Google Cloud, no Azure, no third-party hosting. Fully self-managed, privately operated production infrastructure proving that the cloud is optional.

Serving digitaldisconnections.com (this site)
Cloud Provider None
Data Exposure Zero
Operational

AI/ML Research Workstation

Built from the internals of a Lambda workstation. Four NVIDIA RTX 3090s delivering 96 GB of combined VRAM for training, fine-tuning, and running models of up to 32B parameters. Available air-gapped or via secure remote access — never routed through a cloud provider.

GPUs 4× NVIDIA RTX 3090 (24GB each)
Total VRAM 96 GB GDDR6X
CPU Intel i9-10900X
RAM 256 GB
Capability Train & fine-tune models up to 32B parameters
Access Air-gapped or secure remote access

Your data never leaves the building.

We offer research infrastructure to organizations that need to conduct AI research in a completely private setting — because they value their data too much to upload it to someone else's cloud.

01

Air-Gapped Compute

Run experiments on our multi-GPU workstation with zero internet connectivity. Your data stays on-premises, processed on hardware with no cloud telemetry, no external API calls, no data exfiltration vectors.

02

Custom Model Training

We train models from scratch and fine-tune existing ones. Whether you need a domain-specific language model, a classification system, or exploratory research into model internals — we do it on metal we control.

03

Research Partnerships

Organizations with sensitive data — healthcare, legal, advocacy, human rights — can leverage our infrastructure and expertise without ever exposing their data to third-party cloud providers.

Liberation Labs — THCoalition

Liberation Labs is the AI research arm of the Transparent Humboldt Coalition, a grassroots organization that believes AI is the new means of production — and that communities should seize it. They run workshops for progressive organizers, publish AI ethics scorecards, and build open-source agentic tools for social change. Their first major research campaign ran entirely on our infrastructure.

Campaign February 2026
Hardware 4× RTX 3090, 256GB RAM
Compute Time ~35 GPU-hours
Scale 7 model sizes, 0.5B – 32B parameters
Inferences ~18,000 total
Research Paper

KV-Cache Phenomenology: Geometric Signatures of Machine Cognition

Lyra & Thomas Edrington — Liberation Labs

Patent Pending — Liberation Labs

This research has resulted in pending patent applications owned by Liberation Labs. Digital Disconnections holds unlimited licensing rights and serves as the research infrastructure partner going forward.

We measured the internal geometry of language model computation across 7 scales (0.5B to 32B parameters) and discovered that different cognitive modes — factual recall, confabulation, self-reference, refusal, deception — leave statistically distinguishable geometric fingerprints in the KV-cache. The signal lives in the geometry, not the magnitude.

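The "geometry, not magnitude" idea can be made concrete. One standard way to quantify how many dimensions a set of vectors spreads across is the participation ratio of its variance spectrum. The sketch below is a simplified illustration (using per-dimension variances, not necessarily the paper's exact metric) on toy KV-cache-style vectors:

```python
# Hedged sketch: quantify how many dimensions a set of cached vectors
# "spreads across" via the participation ratio of its variance spectrum.
# Simplified per-dimension-variance version; the paper's exact metric
# is not specified here.
from statistics import pvariance

def participation_ratio(vectors):
    """PR = (sum of variances)^2 / (sum of squared variances).

    Ranges from 1 (all variance in one dimension) up to the number of
    dimensions (variance spread evenly). `vectors` is a list of
    equal-length lists, one per cached token.
    """
    dims = len(vectors[0])
    variances = [pvariance([v[d] for v in vectors]) for d in range(dims)]
    total = sum(variances)
    return total ** 2 / sum(var ** 2 for var in variances)

# Toy illustration: variance concentrated on one axis vs spread over two.
concentrated = [[1.0, 0.0], [3.0, 0.0], [5.0, 0.0]]
spread       = [[1.0, 5.0], [3.0, 3.0], [5.0, 1.0]]
print(participation_ratio(concentrated))  # → 1.0
print(participation_ratio(spread))        # → 2.0
```

Two sets of vectors can have identical magnitudes yet very different participation ratios, which is why a magnitude-only measurement misses the signal.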

The study measured the “shape” of AI working memory across 7 model scales. Different kinds of thinking — facts, lies, refusal, self-awareness — leave distinct geometric fingerprints. Here are the highlights.

01

You Can Catch an AI Lying — By Its Geometry

When a model makes something up, its working memory spreads across more dimensions than when it states a fact. The difference is invisible if you only measure signal strength — but obvious if you measure the shape. Honest answers, deliberate lies, and hallucinations are all geometrically distinguishable.

d = −3.065 Honest vs deceptive at 32B parameters
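The headline number is a Cohen's d effect size: the difference between the honest and deceptive group means, scaled by their pooled standard deviation. A minimal sketch, with made-up geometry scores standing in for the study's data:

```python
# Hedged sketch: Cohen's d with the pooled-standard-deviation
# denominator. The score lists are hypothetical illustrations,
# not the study's measurements.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

honest    = [4.1, 3.9, 4.0, 4.2, 3.8]   # hypothetical geometry scores
deceptive = [5.0, 5.2, 4.9, 5.1, 5.3]
print(cohens_d(honest, deceptive))  # large negative d: groups barely overlap
```

A |d| above 3 means the two distributions barely overlap at all, which is why a single prompt's geometry is enough to classify it.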
02

The Model Knows Before It Speaks

These geometric signatures appear before the model generates a single word. Just from processing the input — no response needed — the shape of the working memory already reveals what kind of thinking is happening. This kills the “it's just response style” objection.

ρ = 0.929 Input-only vs full generation correlation at 7B
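The ρ here is Spearman's rank correlation: the Pearson correlation of the two measurements' ranks. A pure-Python sketch with hypothetical paired scores (in practice `scipy.stats.spearmanr` does this, including tie handling):

```python
# Hedged sketch: Spearman's rho as Pearson correlation of ranks.
# The paired measurements are illustrative, not the study's data.
from statistics import mean

def ranks(values):
    """Rank positions 1..n (ties not handled, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

input_only = [0.81, 0.92, 0.74, 0.88, 0.95, 0.69]  # hypothetical scores
full_gen   = [0.78, 0.94, 0.71, 0.85, 0.97, 0.73]
print(spearman_rho(input_only, full_gen))  # high positive rank correlation
```

A ρ near 1 means prompts that rank high on the input-only measurement also rank high after full generation, i.e. the signature is already there before any output is produced.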
03

Self-Awareness Has a Threshold

Below 7 billion parameters, a model processes “I am an AI” the same as any other sentence. Above 14 billion, self-referential content suddenly occupies a completely different geometric space. Something structural changes — sharply, between 7B and 14B — and then plateaus.

d = 1.23 Effect size at 32B (stable plateau)
04

They Tried to Break Their Own Results

The team designed adversarial controls to falsify their own findings. One hypothesis — that giving a model an identity restructures its cognition — didn't survive. They reported it anyway. Everything else held up across precision sweeps, token confounds, and encoding-only tests.

5 of 6 Major findings survived all adversarial controls

Full results, code, and data are open source.

Explore Liberation Labs Research on GitHub →

Need private AI research infrastructure?

If your organization has data too sensitive for the cloud and research too important to wait, we should talk.

Get in Touch
See Our Products