How It Works¶
How Moveris detects real humans vs. spoofs—in plain terms and technical detail.
In plain terms¶
Moveris checks whether the person in front of the camera is real by measuring involuntary biological signals (micro-movements, physiological reactions) that AI and deepfakes cannot reliably replicate. No user action is required: the person just looks at the camera for about one second.
While deepfakes and AI-generated content have gotten remarkably good at mimicking human appearance, they can't fake biology.
Our API analyzes involuntary physiological signals that occur naturally when real humans interact with cameras—subtle reactions that happen below conscious awareness and can't be replicated by even the most sophisticated generative models.
Unlike traditional liveness detection that relies on challenge-response actions or AI model training, we've taken a fundamentally different approach rooted in psychophysiology. We measure what the body does automatically, not what it's told to do.
The Science (Simplified)¶
When you're alive and looking at a camera, your body is constantly generating signals—micro-expressions, subtle movements, physiological reactions that cascade through your system. These aren't things you can control or fake; they're the signature of a living, breathing human nervous system.
But it goes deeper than isolated signals. Real humans exhibit cognitive coherence—the natural, split-second coordination between what you're thinking, what you're seeing, and how your body responds. When you react to a question, your pupil dilation, facial muscle timing, and micro-movements all sync in patterns that reflect actual neural processing. Deepfakes can replicate individual elements, but they struggle to maintain this multi-layered coherence across time. The signals don't just need to exist; they need to make sense together.
Our technology reads these signals through standard webcams, requiring no special hardware. Within seconds, we can determine whether we're looking at a real person or a sophisticated fake.
Why This Gets More Effective, Not Less¶
Here's the counterintuitive part: as deepfakes improve, our approach becomes more valuable.
Traditional detection methods look for flaws—artifacts, inconsistencies, or statistical fingerprints left by AI generators. As generative models evolve, they learn to eliminate these tells. It's an arms race that favors the attacker.
We're not in that race. We're not looking for what's wrong with the fake—we're confirming what's present in the real.
AI can learn to add realistic-looking blinks or micro-movements, but it can't generate authentic biological coherence. There's no actual nervous system processing stimuli, no real pupils responding to light changes, no genuine cognitive load creating coordinated physiological patterns. These require actual neural tissue, actual consciousness, actual life.
As deepfakes reach visual perfection, the only reliable differentiator becomes: Is there a real biological system behind this? When pixels become indistinguishable, measuring the presence of life becomes the only moat that matters.
That's what Moveris does. And that's why we get stronger as the synthetic world gets more convincing.
Getting the Best Results¶
Embed in Natural User Flows¶
Our API works best when users are naturally engaged with their screen. Rather than treating liveness verification as a separate step, integrate it into existing user activities:
- Sign-in Flows: during authentication, while users wait
- Content Viewing: while users watch content or read instructions
- Onboarding: when users are focused on completing setup
- Video Calls: in the background of video calls or identity verification
Natural engagement: when users are actively doing something, their natural biological signals are strongest and most consistent.
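As one illustration, a sign-in flow can run the check in parallel with authentication instead of as a separate step. This is a minimal sketch under stated assumptions: `authenticate` and `startLivenessCheck` are hypothetical placeholders injected by the caller, not real Moveris API calls.

```typescript
// Sketch: run liveness verification in the background of a sign-in flow.
// Both callbacks are hypothetical stand-ins supplied by the integrator;
// `startLivenessCheck` would wrap the actual Moveris capture-and-verify call.
async function signIn(
  authenticate: () => Promise<string>,        // returns a session token
  startLivenessCheck: () => Promise<boolean>, // resolves true if a live person
): Promise<{ token: string; live: boolean }> {
  // Kick both off together: the user is already looking at the screen
  // while credentials are checked, so the capture happens unobtrusively.
  const [token, live] = await Promise.all([
    authenticate(),
    startLivenessCheck(),
  ]);
  return { token, live };
}
```

Because the two promises resolve independently, the liveness result adds no perceived latency as long as it finishes within the authentication round-trip.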
Camera Positioning Matters¶
- Face the camera directly rather than at an angle
- Ensure adequate lighting for better visibility
- Stay relatively still during the brief capture window
- Position at a natural distance (head and shoulders framing is ideal)
Integration Tips¶
Capture Time¶
Capture frames at ~10 FPS; the total frame count should match the model (e.g. 10 frames for mixed-10-v2, 30 frames for mixed-30-v2)
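A minimal capture-loop sketch, assuming the frame count can be read from the model name as in the examples above (mixed-10-v2, mixed-30-v2). The function names and the `grabFrame` callback are illustrative, not part of the Moveris API.

```typescript
// Hypothetical helpers: derive the required frame count from the model
// name and capture at ~10 FPS. Only the model names come from the docs;
// everything else here is an illustrative sketch.
const TARGET_FPS = 10;

function framesForModel(model: string): number {
  // "mixed-10-v2" -> 10, "mixed-30-v2" -> 30
  const match = model.match(/-(\d+)-/);
  if (!match) throw new Error(`unrecognized model name: ${model}`);
  return parseInt(match[1], 10);
}

async function captureFrames<Frame>(
  model: string,
  grabFrame: () => Promise<Frame>, // e.g. wraps ImageCapture.grabFrame()
): Promise<Frame[]> {
  const count = framesForModel(model);
  const intervalMs = 1000 / TARGET_FPS; // 100 ms between frames
  const frames: Frame[] = [];
  for (let i = 0; i < count; i++) {
    frames.push(await grabFrame());
    if (i < count - 1) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
  return frames;
}
```

At 10 FPS this means roughly one second of capture for a 10-frame model and three seconds for a 30-frame model.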
Retry Logic¶
Users may need 3–4 attempts under poor conditions (e.g. low light or an off-angle camera), so build retries into the flow
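A simple retry wrapper along those lines; `verify` stands in for whatever call submits a capture to the API, and the attempt count and pause are illustrative defaults, not documented values.

```typescript
// Hypothetical retry wrapper: in poor conditions a capture may need
// 3-4 attempts, so retry with a short pause between tries.
async function verifyWithRetry(
  verify: () => Promise<boolean>, // stand-in for the capture + verify call
  maxAttempts = 4,
  pauseMs = 500,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await verify()) return true; // live person confirmed
    if (attempt < maxAttempts) {
      // Brief pause so the user can adjust lighting or framing.
      await new Promise((resolve) => setTimeout(resolve, pauseMs));
    }
  }
  return false; // all attempts failed; surface guidance to the user
}
```

Between attempts it can help to show the positioning tips above (face the camera, improve lighting) rather than silently retrying.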
Low-friction verification: the more natural and unobtrusive the verification feels, the better the biological signals we can measure.