ODINN builds supercomputers the size of a carry-on, giving you full AI power on site instead of in a distant cloud.
The World's Fastest, and Only, Truly Portable AI Data Center
The Problem with Running AI in the Cloud
When you search the internet, filter spam in your email, or get recommendations online, your information travels over the internet to a distant computer farm, often in another city or country.
Those computers run AI systems and send the result back. For many everyday tasks, this is fast enough, and most people never think about where their data goes. But this cloud-based model doesn’t work for everyone.
Hospitals can’t send patient scans or medical records outside their buildings because of strict privacy and safety rules. Banks can’t send internal financial data into public systems. Governments and defense agencies can’t allow classified or mission-critical information to leave secure facilities. For these organizations, privacy and control aren’t optional. They’re requirements.
Speed is another problem. When AI runs far away, information has to travel back and forth before a decision can be made. In many real-world situations, even small delays matter. A military system responding to a threat, a live broadcast produced in real time, a trading model reacting to market movement, or a robot controlling a production line can’t wait for data to travel across networks and back. Latency slows everything down.
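To make the physics concrete, here is a rough back-of-envelope sketch. It is illustrative only: it assumes signals in optical fiber travel at roughly two-thirds the speed of light (about 200,000 km/s) and ignores routing, queuing, and server processing time, all of which add considerably more delay in practice.

```python
# Illustrative lower bound on network round-trip delay.
# Assumption: light in optical fiber propagates at ~200,000 km/s
# (about two-thirds of c). Real-world latency sits well above this
# physical floor once routing, queuing, and compute are added.

FIBER_SPEED_KM_PER_S = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

for distance in (100, 1_000, 5_000):
    print(f"{distance:>5} km -> {min_round_trip_ms(distance):.1f} ms minimum")
```

Even under these ideal assumptions, a 5,000 km round trip costs tens of milliseconds before any computation happens. Processing on site removes that floor entirely.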
To avoid these risks and delays, organizations often try to run AI themselves. That usually means building server rooms filled with powerful computers. These rooms require heavy electrical upgrades, large cooling systems, backup power, dedicated networking, and teams of engineers to operate them. Building and maintaining this infrastructure can cost millions of dollars and take months or even years before a single AI model is running.
This is where ODINN comes in.
ODINN offers a revolutionary way to run powerful AI on site. Instead of sending data to distant cloud servers or building large data centers, ODINN delivers carry-on sized supercomputers with the same class of performance found in traditional data centers. By running AI directly where data is created, ODINN enables faster results, greater control, and deployment in real-world environments without massive infrastructure.
The ODINN Solution
ODINN takes a completely different approach to running AI. Instead of pushing data across the internet to massive data centers or forcing organizations to build rooms full of servers, ODINN delivers supercomputer-level AI directly into the places where work happens. These are carry-on sized machines that bring the power of a data center into ordinary rooms, running quietly on site, ready to work the moment they are turned on.
1) Data center-class performance on site
ODINN systems use the same class of high-end processors and GPUs found in large cloud data centers, but run inside the organization’s own facility.
2) Carry-on sized supercomputers
ODINN compresses extreme computing power into compact systems small enough to be deployed in offices, hospitals, labs, studios, and secure environments, rather than dedicated server halls.
3) Faster results through local processing
Because AI runs where data is created, results arrive faster, without the delays caused by sending information back and forth to distant cloud servers.
4) Full control of data
AI workloads run on hardware the organization owns and controls. Sensitive data stays inside the building instead of being sent to external providers.
5) No data center buildout required
Cooling, power delivery, and performance systems are built into the machine, removing the need for raised floors, industrial cooling plants, or major construction.
6) Rapid deployment
Organizations can deploy powerful AI in days instead of spending months or years planning and building infrastructure.
7) Built for real environments
ODINN systems are designed to operate where people actually work, not hidden away in loud, restricted server rooms.
8) Scales as needs grow
Individual systems can be combined to form larger clusters, allowing AI capacity to grow over time without redesigning facilities.
Products at CES
1) OMNIA™ — A Supercomputer the Size of a Carry-On
OMNIA is a carry-on sized AI supercomputer that delivers the kind of performance normally found inside large data centers. It packs data center-class CPUs, GPUs, memory, and storage into a single, self-contained system that can be deployed in minutes instead of months.
Inside OMNIA are server-grade processors, data center-class GPUs, and large memory and storage capacity designed for demanding AI workloads. High-end configurations support multi-terabyte memory and petabyte-scale NVMe storage. A proprietary closed-loop liquid cooling system is built directly into the unit, allowing it to run quietly and reliably in normal working environments.
OMNIA is designed to be used immediately. It includes an integrated display, keyboard, and trackpad, and connects to standard power and networking. Organizations can place it in an office, hospital, lab, studio, or secure facility and begin running powerful AI without building a server room or installing industrial infrastructure.
2) Infinity Cube™ — A Glass-Enclosed “AI Data Center”
Infinity Cube turns multiple OMNIA systems into a scalable AI cluster. It is a glass-enclosed, modular “edge data center in a cube,” built entirely from OMNIA units mounted upright inside a single structure.
Each OMNIA brings its own cooling, power integration, and computing capability. This allows Infinity Cube to scale to large GPU clusters without external cooling plants, raised floors, or traditional data center construction. Even compact configurations can house dozens of OMNIA systems, delivering high-density AI compute inside offices, trading floors, or secure rooms.
Infinity Cube is deployed as a single, self-contained object. Organizations roll it into place, connect power and networking, and begin operating a sovereign AI cluster they fully own and control.
3) NeuroEdge™
NeuroEdge is ODINN’s software layer that manages and optimizes AI workloads across OMNIA and Infinity Cube systems. It coordinates how jobs are deployed, scheduled, and tuned to extract maximum performance from the hardware.
NeuroEdge integrates with NVIDIA’s AI software ecosystem and other common frameworks, ensuring workloads run efficiently across CPUs, GPUs, and memory. It monitors system performance and accounts for the specific thermal and power characteristics of ODINN hardware, including its cooling behavior.
The goal is simplicity. NeuroEdge allows organizations to run advanced AI workloads without manually tuning systems or managing complex infrastructure, so teams can focus on using AI rather than operating it.
Who ODINN Is For
• Biotechnology, genomics, and pharmaceutical companies
• Research institutions and advanced AI labs
• Aerospace, aviation, and space technology sectors
• Automotive and autonomous mobility innovators
• Smart logistics, ports, and transportation hubs
• Smart retail, malls, and experiential commerce operations
• Universities and advanced engineering campuses
• Elite technology consultancies and systems integrators
• Private sector enterprises requiring secure, on-prem AI
• Defense and national security customers
• Government and sovereign AI programs
• Intelligence agencies and homeland security departments
• Telecommunications and edge connectivity providers
• Industrial and manufacturing enterprises
• Critical infrastructure operators (energy, utilities, smart cities)
• Financial services and trading firms
• Media, entertainment, and creative production companies
• Live event, sports, and broadcast organizations
• Healthcare systems and large hospital networks
Company Overview
ODINN was founded in 2023 to address a growing gap in how artificial intelligence is deployed in the real world. As AI models became more powerful, the infrastructure needed to run them remained locked in distant data centers and hyperscale clouds. For many organizations, especially those handling sensitive, regulated, or time-critical data, that model no longer worked. The team behind ODINN believed AI needed a new physical form, one that could live where decisions are actually made.
From the beginning, ODINN set out to redesign high-performance computing from the ground up. The goal was not to build faster servers for existing data centers, but to remove the dependency on those facilities altogether. That meant rethinking how power, cooling, acoustics, density, and reliability could be engineered into a single, integrated system. The guiding idea was simple but ambitious: bring data center-level AI performance into everyday environments without forcing organizations to rebuild their buildings or send their data elsewhere.
This philosophy became known internally as Concentrated Compute™, a design approach focused on delivering maximum computing power in the smallest possible physical footprint while preserving full data sovereignty and low latency. Rather than spreading infrastructure across racks, rooms, or containers, ODINN concentrates everything into tightly engineered systems that operate as complete units. This approach allows AI to run locally, securely, and efficiently, even in places where traditional infrastructure cannot exist.
As the company evolved, ODINN invested heavily in engineering, manufacturing partnerships, and thermal innovation to prove that extreme compute density could function reliably outside traditional data halls. Early validation and customer interest across defense, government, industrial, financial, healthcare, and research environments reinforced the core thesis. Today, ODINN positions itself not as a cloud provider or server vendor, but as a new category of AI infrastructure company, focused on bringing sovereign, high-performance AI into the physical spaces where intelligence is needed most.
Carl founded ODINN after working across law, finance, and large-scale infrastructure projects. He began his career in high-stakes litigation, then moved into investment banking and capital markets, where he worked on regulated transactions and complex financial systems.
He later advised major financial institutions on recovery and resolution planning, focused on keeping critical systems running during extreme stress events. This work involved understanding how large systems fail, how they recover, and how infrastructure must operate independently when outside support is limited.
These experiences led Carl to start ODINN. Instead of building faster systems for traditional data centers or relying on third-party cloud platforms, he focused on creating AI infrastructure that organizations can own and operate inside their own environments. That focus shaped ODINN’s core principles: high-density, sovereign systems designed to deploy easily, with performance built directly into the machine rather than the surrounding facility. In Carl’s view, controlling the infrastructure that runs intelligence is essential, because the future of intelligence belongs to those who control their own compute.
Leadership
Carl Liebel
Founder/CEO
Multimedia