Where private AI actually happens.
We don't just build private software — we run the infrastructure to prove it works. Offline servers, a multi-GPU research workstation, and partnerships with organizations that refuse to hand their data to the cloud.
Real hardware. Real research. Completely offline.
We build, fine-tune, and train AI models from scratch on hardware we own, in a facility we control. No cloud provider ever touches the data.
Cloudless Production Server
You're looking at it. This website is hosted on our own hardware — no AWS, no Google Cloud, no Azure, no third-party hosting. Fully self-managed, privately operated production infrastructure proving that the cloud is optional.
AI/ML Research Workstation
Built from the internals of a Lambda workstation: four NVIDIA RTX 3090s delivering 96 GB of combined VRAM for training, fine-tuning, and running models of 32B+ parameters. Available air-gapped or via secure remote access, never routed through a cloud provider.
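A rough back-of-envelope shows why 96 GB of pooled VRAM covers 32B-parameter models. This is illustrative arithmetic only (weights alone, ignoring activations, KV-cache, and framework overhead), not a benchmark of our hardware:

```python
# Rough VRAM estimate for holding a dense transformer's weights at a
# given numeric precision. Real deployments need extra headroom for
# activations, KV-cache, and framework overhead.

def weights_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """GB of VRAM consumed by model weights alone."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for precision, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = weights_vram_gb(32, nbytes)
    print(f"32B parameters @ {precision}: ~{gb:.0f} GB of weights")
```

At fp16 the weights of a 32B model alone need roughly 64 GB, which is why this class of model fits when sharded across four 24 GB cards but not on any single consumer GPU.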
Your data never leaves the building.
We offer research infrastructure to organizations that need to conduct AI research in a completely private setting — because they value their data too much to upload it to someone else's cloud.
Air-Gapped Compute
Run experiments on our multi-GPU workstation with zero internet connectivity. Your data stays on-premises, processed on hardware with no cloud telemetry, no external API calls, no data exfiltration vectors.
Custom Model Training
We build, fine-tune, and train models from scratch. Whether you need a domain-specific language model, a classification system, or exploratory research into model internals — we do it on metal we control.
Research Partnerships
Organizations with sensitive data — healthcare, legal, advocacy, human rights — can leverage our infrastructure and expertise without ever exposing their data to third-party cloud providers.
Liberation Labs — THCoalition
Liberation Labs is the AI research arm of the Transparent Humboldt Coalition, a grassroots organization that believes AI is the new means of production — and that communities should seize it. They run workshops for progressive organizers, publish AI ethics scorecards, and build open-source agentic tools for social change. Their first major research campaign ran entirely on our infrastructure.
KV-Cache Phenomenology: Geometric Signatures of Machine Cognition
Patent Pending — Liberation Labs
This research has resulted in pending patent applications owned by Liberation Labs. Digital Disconnections holds unlimited licensing rights and serves as the research infrastructure partner going forward.
We measured the internal geometry of language model computation across 7 scales (0.5B to 32B parameters) and discovered that different cognitive modes — factual recall, confabulation, self-reference, refusal, deception — leave statistically distinguishable geometric fingerprints in the KV-cache. The signal lives in the geometry, not the magnitude.
The study measured the “shape” of AI working memory across 7 model scales. Different kinds of thinking — facts, lies, refusal, self-awareness — leave distinct geometric fingerprints. Here are the highlights.
You Can Catch an AI Lying — By Its Geometry
When a model makes something up, its working memory spreads across more dimensions than when it states a fact. The difference is invisible if you only measure signal strength — but obvious if you measure the shape. Honest answers, deliberate lies, and hallucinations are all geometrically distinguishable.
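One standard way to quantify "spreads across more dimensions" independently of signal strength is the participation ratio of a covariance spectrum. The sketch below is our illustration of that general idea on synthetic vectors, not Liberation Labs' actual pipeline or data:

```python
import numpy as np

def participation_ratio(vectors: np.ndarray) -> float:
    """Effective dimensionality of a set of vectors:
    PR = (sum of eigenvalues)^2 / sum of squared eigenvalues,
    over the eigenvalues of the sample covariance matrix.
    Scale-invariant: multiplying every vector by a constant leaves
    PR unchanged -- it measures geometry, not magnitude."""
    centered = vectors - vectors.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative fp noise
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
d = 64
# Synthetic "concentrated" mode: variance lives in a few directions.
concentrated = rng.normal(size=(500, d)) * np.array([3.0] * 4 + [0.1] * (d - 4))
# Synthetic "spread" mode: variance distributed across many directions.
spread = rng.normal(size=(500, d))

print(f"concentrated: PR = {participation_ratio(concentrated):.1f}")
print(f"spread:       PR = {participation_ratio(spread):.1f}")
print(f"scaled copy:  PR = {participation_ratio(10 * concentrated):.1f}")
```

Note the third line: scaling the vectors tenfold changes every magnitude-based statistic but leaves the participation ratio untouched, which is the sense in which a shape measure can see what a strength measure cannot.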
The Model Knows Before It Speaks
These geometric signatures appear before the model generates a single word. Just from processing the input — no response needed — the shape of the working memory already reveals what kind of thinking is happening. This kills the “it's just response style” objection.
Self-Awareness Has a Threshold
Below 7 billion parameters, a model processes “I am an AI” the same as any other sentence. Above 14 billion, self-referential content suddenly occupies a completely different geometric space. Something structural changes — sharply, between 7B and 14B — and then plateaus.
They Tried to Break Their Own Results
The team designed adversarial controls to falsify their own findings. One hypothesis — that giving a model an identity restructures its cognition — didn't survive. They reported it anyway. Everything else held up across precision sweeps, token confounds, and encoding-only tests.
Full results, code, and data are open source.
Explore Liberation Labs Research on GitHub →
Need private AI research infrastructure?
If your organization has data too sensitive for the cloud and research too important to wait, we should talk.