Agora is powering a new class of physical AI devices that feel present, emotional, and always connected.
This is the secret to building an AI companion
AI companions are moving from concept into real products.
What’s less visible is what makes some of them work reliably at scale.
Behind every physical AI device is a complex technical foundation, and rebuilding those systems from the ground up is extremely difficult. These systems must hear clearly, respond instantly, stay connected everywhere, and behave consistently in real environments, not controlled labs. This invisible infrastructure is where many AI products struggle. That’s where Agora comes in.
This Is How AI Companions Are Built
Agora delivers an out-of-the-box foundation that turns AI into something physical, responsive, and dependable. It provides the embedded hardware and core systems that give AI companions their voice, hearing, vision, memory, and real-time awareness, while handling the complexity required to ship and scale. Instead of stitching together fragile components, teams start with a complete, production-ready platform built for growth from day one.
This is why companies move faster. Agora’s infrastructure is trusted by more than 1,800 companies worldwide. Its SDK is installed on over 3 billion devices, and it powers more than 80 billion minutes of voice and video every month. The same foundation supports products from Duolingo, Unity, HTC Vive, and Vimeo, showing it can scale across millions of users and devices.
At CES, you’ll see what this foundation enables. Fully working AI companions designed for everyday use, built to listen naturally, respond immediately, and operate reliably over time. These aren’t experiments or early concepts. They’re production-ready devices made possible by an out-of-the-box layer many successful AI companions rely on to scale.
Amazing AI Devices You Can Experience at CES
1) ChooChoo
An interactive AI reading companion for children. It listens as kids read out loud and responds in real time with voice and emotion, turning reading into a shared experience.
2) Lookee
A physical, always-connected AI companion designed for daily use. It recognizes faces, responds with motion and expression, and works reliably across home, office, and mobile environments using 4G.
3) Luka
A globally deployed AI reading companion already used by millions of families. It recognizes physical books and reads them aloud with expressive, interactive narration.
4) Pophie
An emotion-aware AI companion built for natural interaction. It responds to visual and emotional cues and develops personality over time through memory and expression.
5) Fuzozo
An expressive AI companion focused on emotion and presence. It blends voice, motion, and behavior to create interactions that feel responsive and personal.
6) LuwuDynamics
Cross-dimensional embodied AI desktop companions that bring virtual characters into the physical world.
Powered by the BK7259 high-compute chip, LuwuDynamics’ flagship desktop robot pet combines large-model intelligence, long-term memory, and an emotion engine with a five-degree-of-freedom mechanical body. It delivers deeply personalized interaction through expressive gestures, real-time facial animations, and evolving personality—blending productivity, companionship, and IP-driven character experiences into a living, emotional AI presence on the desk.
7) Lgenie
An AI companion centered on personalization and emotional awareness. It adapts over time, building familiarity through ongoing interaction.
8) Maxevis
Expressive robotics platforms that synchronize perception, motion, and conversation. Designed to interact smoothly with people in physical environments.
The Technology That Makes These AI Devices Possible
1) Convo AI Device Kit R2
This is the foundation manufacturers use to build AI companions from the ground up.
Instead of stitching together hardware, software, connectivity, and AI on their own, companies start with a complete, ready-to-use system. The Device Kit combines voice, vision, sensors, displays, motion, and global connectivity into one platform built specifically for AI companions.
It can hear voices clearly, see faces through its camera, sense touch and movement, show expressions on screens, and respond with light and motion. All of this happens together, in real time, so the device feels present instead of delayed.
Because the hardware is modular, manufacturers can turn a prototype into a real product faster and with less risk. They spend less time solving hardware problems and more time designing personality, behavior, and user experience.
In short, this kit turns AI companions from ideas into products that can actually ship.
2) Conversational AI Engine
This is what makes the conversations feel natural.
The Conversational AI Engine helps AI devices understand what people say and respond right away, even in noisy places or on poor networks. It supports any AI model and any voice, so manufacturers aren’t locked into one system. The engine handles listening, speaking, interruptions, and timing so conversations don’t feel slow or awkward. It also supports expressive AI avatars, where speech, voice, and facial movement stay in sync. This helps AI companions feel more human and easier to connect with. For users, this means the AI feels attentive. For manufacturers, it means real conversation without having to build it from scratch.
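As a rough sketch of the client side of that flow, the example below uses Agora's Web SDK (agora-rtc-sdk-ng) to join a channel, publish the device's microphone, and play back whatever audio the remote AI agent publishes. It is a minimal illustration, not a full Conversational AI Engine integration: the app ID, channel name, and token are placeholders, and the agent itself runs server-side and is not shown.

```typescript
import AgoraRTC, { IAgoraRTCClient } from "agora-rtc-sdk-ng";

// Minimal sketch: join an Agora channel, publish the local microphone,
// and play audio published by the remote AI agent. App ID, channel, and
// token are placeholders supplied by your own project.
async function joinVoiceChannel(
  appId: string,
  channel: string,
  token: string | null
): Promise<IAgoraRTCClient> {
  const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

  // When the agent (a remote user) publishes audio, subscribe and play it.
  client.on("user-published", async (user, mediaType) => {
    await client.subscribe(user, mediaType);
    if (mediaType === "audio") {
      user.audioTrack?.play();
    }
  });

  await client.join(appId, channel, token, null);

  // Capture the local microphone so the agent can hear the user.
  const micTrack = await AgoraRTC.createMicrophoneAudioTrack();
  await client.publish([micTrack]);

  return client;
}
```

In this sketch, turn-taking, interruption handling, and avatar sync would be handled by the engine rather than by client code, which is why the client side can stay this small.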
3) AOSL (Advanced Operating System Layer)
AOSL is an open standard initiated by Agora and fully open-sourced on GitHub. By defining a universal interface between operating systems and chips, it abstracts away the differences among chips and operating systems, so developers can focus on application-layer innovation instead of adapting to fragmented hardware and system stacks. It provides unified support for embedded environments in particular, such as RTOS-based devices.
Through its open-source model, AOSL significantly lowers the barrier to deploying AI on hardware and accelerates the growth of a more vibrant hardware innovation ecosystem. It lets chip manufacturers and device developers integrate capabilities like Voice AI more easily and quickly, moving productization from one-off customization toward a reusable, scalable, collaborative ecosystem.
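To make the pattern concrete, here is a purely illustrative sketch of "one interface, many platform adapters." The names below are hypothetical and are not AOSL's actual API (AOSL targets embedded environments such as RTOS, typically at the C level); the point is only that application code depends on one narrow contract while each chip or OS supplies its own adapter.

```typescript
// Hypothetical sketch of an OS/chip abstraction: application code talks to
// one narrow interface, and a per-platform adapter hides the differences.
// These names are illustrative only; they are not AOSL's actual API.

interface AudioCapture {
  start(sampleRateHz: number, onFrame: (pcm: Int16Array) => void): void;
  stop(): void;
}

// One adapter per chip/OS pair implements the same contract. This one just
// emits 10 ms frames of silence so the example runs anywhere.
class SimulatedCapture implements AudioCapture {
  private timer?: ReturnType<typeof setInterval>;

  start(sampleRateHz: number, onFrame: (pcm: Int16Array) => void): void {
    const samplesPer10ms = Math.floor(sampleRateHz / 100);
    this.timer = setInterval(() => onFrame(new Int16Array(samplesPer10ms)), 10);
  }

  stop(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
  }
}

// Application-layer code depends only on the interface, never on the platform.
function runVoicePipeline(capture: AudioCapture): void {
  capture.start(16000, (pcm) => {
    // Hand the frame to the voice AI pipeline here.
    console.log(`captured ${pcm.length} samples`);
  });
  setTimeout(() => capture.stop(), 100); // stop after a few frames
}

runVoicePipeline(new SimulatedCapture());
```

Swapping SimulatedCapture for a real per-chip adapter would not change the application layer at all, which is the kind of portability AOSL aims to standardize.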
We look forward to working with developers, chip manufacturers, and ecosystem partners to advance the AOSL standard so that it can gradually become the infrastructure connecting hardware and intelligent applications in the AI era, and to shape an open, collaborative, and efficient future for the industry together.
Our Story
Agora was founded in 2013 by Tony Bin Zhao, a veteran engineer who previously worked at WebEx and served as CTO of Chinese social media platform YY.com. He saw the same problem everywhere. Developers wanted to add high-quality, real-time voice and video to their products, but the infrastructure required to do it well was complex, costly, and fragile when built from scratch.
As mobile and web apps evolved, real-time interaction became essential. Voice, video, and live engagement were no longer nice-to-haves. They were core features. But existing VoIP stacks and network architectures weren’t built for large-scale, many-to-many interaction, especially across unpredictable mobile networks. Performance suffered. Latency crept in. Reliability broke down.
Agora was built to solve that gap. The team created a Real-Time Engagement Platform-as-a-Service with simple SDKs and a globally distributed software-defined real-time network. This allowed developers to embed real-time voice, video, and interactive broadcasting directly into their products, without owning or managing the underlying infrastructure. What once took years of engineering effort could now be deployed quickly and run reliably at scale.
At its core, Agora was founded to remove friction. It made real-time communication accessible, scalable, and dependable for any developer or business, across industries like social platforms, gaming, education, telehealth, and beyond. Instead of rebuilding the same complex systems again and again, teams could focus on what mattered most: creating engaging, real-time experiences that work everywhere.
Tony Bin Zhao, Founder and CEO
Tony Zhao is a serial entrepreneur and the founder and CEO of Agora. He founded Agora with a vision to provide high-quality voice and video as a ubiquitous platform to developers and businesses around the world. Previously, Tony was CTO and board director at YY.com (NASDAQ: YY), one of the world's first video-based social networking and live-streaming apps, with over 300 million users.
From 1997 to 2004, he served as a founding engineer at WebEx Communications Inc., where he developed the audio and video technology for the industry’s first web collaboration solution. WebEx was acquired by Cisco in 2007 for $3.2 billion.
Our Leadership
Tony Wang, Co-founder & CRO
Tony Wang is Co-founder and Chief Revenue Officer at Agora, a Communications-Platform-as-a-Service (CPaaS) provider which delivers mobile-first, real-time communications for developers and businesses globally. Agora is on a mission to change the way the world communicates.
Tony leads global sales, go-to-market strategy, management, and team development for the company. Early in his 20+ year career, Tony was a Senior Developer with InfoSearch, the first publicly traded search engine marketing firm. A passion for startups and entrepreneurship inspired him to co-found the global marketing tech consulting group Infinite Nine, and later the award-winning pan-European eCommerce SEO agency BlueGrass Interactive. Tony earned a BS in Computer Science from Purdue University and an MS in Computer Science from the University of Southern California.