React Examples¶
Everything you need to add human liveness verification to your React apps: reusable hooks and components that take you from basic setup to a production-ready integration with the Moveris SDK (v2).
In plain terms
These React hooks and components handle camera access, frame capture, and API calls. Drop them into your app to add a liveness verification flow with minimal setup. Route API calls through your backend to keep the API key secure.
Moveris API (v2)
These examples use Moveris API (v2) endpoints. The source field is required and should be set to "live" for real-time camera capture.
SDK default
The @moveris/react SDK defaults to fast-check-crops. Use LivenessView or useLiveness for the simplest integration; pass endpoint="fast-check-stream" to use streaming instead.
Security Best Practice
Never expose your API key in client-side code. Always proxy requests through your backend server, which forwards them to api.moveris.com with your API key in the X-API-Key header.
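The forwarding step can be sketched as a pure helper. This is illustrative, not SDK API: the helper name and route shape are assumptions; only the api.moveris.com host and the X-API-Key header come from the note above.

```typescript
interface ProxyRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

// Build the server-side request that relays a client payload to Moveris.
// The API key is attached here, on the server, and never reaches the browser.
function buildProxyRequest(path: string, clientBody: unknown, apiKey: string): ProxyRequest {
  return {
    url: `https://api.moveris.com${path}`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': apiKey, // server-side secret
      },
      body: JSON.stringify(clientBody),
    },
  };
}
```

In an Express or Next.js route handler you would pass the result to fetch(url, init) and relay the JSON envelope back to the client unchanged.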
Model selection
These examples use the v1 flow (model named in the body). For v2 resolution, add the X-Model-Version: latest header and frame_count in the body. See Model Versioning & Frames.
Custom Hook¶
A reusable hook that handles video capture, frame extraction, and API calls. Use this as the foundation for your liveness verification UI.
import { useState, useRef, useCallback } from 'react';

interface Frame {
  index: number;
  timestamp_ms: number;
  pixels: string;
}

interface LivenessResult {
  verdict: 'live' | 'fake';
  real_score: number;
  score: number;
  session_id: string;
}

interface UseLivenessCheckReturn {
  videoRef: React.RefObject<HTMLVideoElement>;
  checkLiveness: () => Promise<void>;
  isChecking: boolean;
  result: LivenessResult | null;
  error: string | null;
}

export function useLivenessCheck(): UseLivenessCheckReturn {
  const [isChecking, setIsChecking] = useState(false);
  const [result, setResult] = useState<LivenessResult | null>(null);
  const [error, setError] = useState<string | null>(null);
  const videoRef = useRef<HTMLVideoElement>(null);

  const captureFrames = useCallback(async (): Promise<Frame[]> => {
    const video = videoRef.current;
    if (!video) throw new Error('Video ref not set');

    const frames: Frame[] = [];
    const frameCount = 10; // mixed-10-v2; use useModels() for dynamic selection

    for (let i = 0; i < frameCount; i++) {
      const canvas = document.createElement('canvas');
      canvas.width = 640;
      canvas.height = 480;
      const ctx = canvas.getContext('2d')!;
      ctx.drawImage(video, 0, 0, 640, 480);

      frames.push({
        index: i,
        timestamp_ms: i * 100,
        pixels: canvas.toDataURL('image/png').split(',')[1]
      });

      await new Promise(resolve => setTimeout(resolve, 100));
    }

    return frames;
  }, []);

  const checkLiveness = useCallback(async () => {
    setIsChecking(true);
    setError(null);
    setResult(null);

    try {
      const frames = await captureFrames();
      const sessionId = crypto.randomUUID();

      // Proxy through your backend to api.moveris.com (one frame per request)
      let lastResponse: LivenessResult | null = null;

      for (let i = 0; i < frames.length; i++) {
        const response = await fetch('/api/v1/fast-check-stream', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            // v2 flow (optional): 'X-Model-Version': 'latest',
          },
          body: JSON.stringify({
            session_id: sessionId,
            // v1 flow:
            model: 'mixed-10-v2',
            // v2 flow (alternative):
            // frame_count: 10,
            source: 'live',
            // warnings: ['Low light detected'], // optional; API echoes in response
            frame: {
              index: frames[i].index,
              timestamp_ms: frames[i].timestamp_ms,
              pixels: frames[i].pixels
            }
          })
        });

        const envelope = await response.json();
        if (!envelope.success) {
          throw new Error(envelope.message || 'Liveness check failed');
        }

        const data = envelope.data;
        if (data?.verdict) {
          lastResponse = {
            verdict: data.verdict,
            real_score: data.real_score,
            score: data.score ?? data.real_score * 100,
            session_id: data.session_id
          };
          break;
        }
      }

      if (lastResponse) setResult(lastResponse);
      else setError('No verdict received');
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Unknown error');
    } finally {
      setIsChecking(false);
    }
  }, [captureFrames]);

  return { videoRef, checkLiveness, isChecking, result, error };
}
The SDK defaults to fast-check-crops. This manual example uses fast-check-stream (one frame per request). Alternatives:

- Fast Check Crops — default SDK endpoint, pre-cropped 224×224 faces
- Fast Check — send all frames in a single request
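The hook above normalizes the response score with a small fallback. Isolated, the rule is (assuming real_score is reported in [0, 1] and score, when present, is already a 0–100 percentage):

```typescript
// Prefer the API's score field; otherwise derive a percentage from real_score.
function toPercentScore(realScore: number, score?: number): number {
  return score ?? realScore * 100;
}
```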
Complete Component¶
A full React component with camera preview, verification button, and result display.
import { useEffect } from 'react';
import { useLivenessCheck } from './useLivenessCheck';

export function LivenessCamera() {
  const { videoRef, checkLiveness, isChecking, result, error } = useLivenessCheck();

  useEffect(() => {
    let stream: MediaStream | null = null;

    navigator.mediaDevices
      .getUserMedia({
        video: {
          facingMode: 'user',
          width: { ideal: 640 },
          height: { ideal: 480 }
        }
      })
      .then((mediaStream) => {
        stream = mediaStream;
        if (videoRef.current) {
          videoRef.current.srcObject = stream;
        }
      })
      .catch(console.error);

    return () => {
      stream?.getTracks().forEach(track => track.stop());
    };
  }, [videoRef]);

  return (
    <div className="liveness-container">
      <div className="video-wrapper">
        <video
          ref={videoRef}
          autoPlay
          playsInline
          muted
          className="liveness-video"
        />
        <div className="face-guide" />
      </div>

      <button
        onClick={checkLiveness}
        disabled={isChecking}
        className="verify-button"
      >
        {isChecking ? 'Verifying...' : 'Verify Liveness'}
      </button>

      {result && (
        <div className={`result ${result.verdict}`}>
          <span className="verdict">{result.verdict.toUpperCase()}</span>
          {/* score is already a 0-100 percentage (see useLivenessCheck) */}
          <span className="score">Score: {result.score.toFixed(1)}%</span>
          <span className="real-score">
            Real Score: {result.real_score.toFixed(1)}
          </span>
        </div>
      )}

      {error && <div className="error">{error}</div>}
    </div>
  );
}
With Face Crops (Faster)¶
Use MediaPipe to detect faces and send pre-cropped 224×224 images for faster processing via the /fast-check-crops endpoint.
import { useState, useRef, useCallback, useEffect } from 'react';
import { FaceDetector, FilesetResolver } from '@mediapipe/tasks-vision';

interface CropFrame {
  index: number;
  timestamp_ms: number;
  crop: string; // 224x224 base64 face crop
}

export function useLivenessWithCrops() {
  const [detector, setDetector] = useState<FaceDetector | null>(null);
  const [isChecking, setIsChecking] = useState(false);
  const [result, setResult] = useState<Record<string, unknown> | null>(null);
  const [error, setError] = useState<string | null>(null);
  const videoRef = useRef<HTMLVideoElement>(null);

  // Initialize MediaPipe Face Detector
  useEffect(() => {
    async function initDetector() {
      const vision = await FilesetResolver.forVisionTasks(
        'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm'
      );
      const faceDetector = await FaceDetector.createFromOptions(vision, {
        baseOptions: {
          modelAssetPath: 'https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite',
          delegate: 'GPU'
        },
        runningMode: 'VIDEO'
      });
      setDetector(faceDetector);
    }
    initDetector();
  }, []);

  const captureAndCropFrames = useCallback(async (): Promise<CropFrame[]> => {
    const video = videoRef.current;
    if (!video || !detector) throw new Error('Not initialized');

    const frames: CropFrame[] = [];
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d')!;

    for (let i = 0; i < 10; i++) {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);

      const timestamp = performance.now();
      const detections = detector.detectForVideo(video, timestamp);

      if (detections.detections.length > 0) {
        const face = detections.detections[0].boundingBox!;

        // Expand bounding box by 3x for context
        const expandFactor = 3;
        const centerX = face.originX + face.width / 2;
        const centerY = face.originY + face.height / 2;
        const size = Math.max(face.width, face.height) * expandFactor;
        const cropX = Math.max(0, centerX - size / 2);
        const cropY = Math.max(0, centerY - size / 2);
        const cropSize = Math.min(size, canvas.width - cropX, canvas.height - cropY);

        // Create 224x224 crop
        const cropCanvas = document.createElement('canvas');
        cropCanvas.width = 224;
        cropCanvas.height = 224;
        const cropCtx = cropCanvas.getContext('2d')!;
        cropCtx.drawImage(
          canvas,
          cropX, cropY, cropSize, cropSize,
          0, 0, 224, 224
        );

        frames.push({
          index: i,
          timestamp_ms: i * 100,
          crop: cropCanvas.toDataURL('image/png').split(',')[1]
        });
      }

      await new Promise(r => setTimeout(r, 100));
    }

    if (frames.length < 10) {
      throw new Error(`Only captured ${frames.length} faces. Need 10.`);
    }

    return frames;
  }, [detector]);

  const checkLiveness = useCallback(async () => {
    setIsChecking(true);
    setError(null);

    try {
      const frames = await captureAndCropFrames();
      const response = await fetch('/api/v1/fast-check-crops', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          // v2 flow (optional): 'X-Model-Version': 'latest',
        },
        body: JSON.stringify({
          session_id: crypto.randomUUID(),
          // v1 flow:
          model: 'mixed-10-v2',
          // v2 flow (alternative):
          // frame_count: 10,
          source: 'live',
          crops: frames.map(f => ({ index: f.index, pixels: f.crop }))
        })
      });

      const envelope = await response.json();
      if (!envelope.success) {
        throw new Error(envelope.message || 'Check failed');
      }
      setResult(envelope.data);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Unknown error');
    } finally {
      setIsChecking(false);
    }
  }, [captureAndCropFrames]);

  return { videoRef, checkLiveness, isChecking, result, error, isReady: !!detector };
}
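The expand-and-clamp arithmetic in captureAndCropFrames can be isolated as a pure helper. This is a sketch of the same math the hook inlines (the function name is illustrative): grow the detected box by the expand factor around its center, then clamp to the canvas.

```typescript
interface CropBox { x: number; y: number; size: number }

// Expand a face bounding box around its center and clamp it to the canvas,
// mirroring the crop logic used before drawing into the 224x224 canvas.
function expandFaceBox(
  originX: number, originY: number,
  width: number, height: number,
  canvasWidth: number, canvasHeight: number,
  expandFactor = 3
): CropBox {
  const centerX = originX + width / 2;
  const centerY = originY + height / 2;
  const size = Math.max(width, height) * expandFactor;
  const x = Math.max(0, centerX - size / 2);
  const y = Math.max(0, centerY - size / 2);
  const cropSize = Math.min(size, canvasWidth - x, canvasHeight - y);
  return { x, y, size: cropSize };
}
```

Note that faces near the frame edge produce a smaller (and non-square) source region; the drawImage call then stretches it to 224×224.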
SDK with Advanced Options¶
When using the @moveris/react SDK, LivenessView and useLiveness default to fast-check-crops. For custom UIs with eye-quality feedback and tuned capture rate, use useDetectionPipeline and useSmartFrameCapture. Wrap your app in MoverisProvider (see SDK Quick Start).
import { useRef, useState, useCallback } from 'react';
import {
  useCamera,
  useSmartFrameCapture,
  useDetectionPipeline,
} from '@moveris/react';

function CustomLivenessFlow() {
  const { videoRef } = useCamera();
  const [eyeFeedback, setEyeFeedback] = useState('');

  const restartRef = useRef<() => void>(() => {});
  const onRestartNeeded = useCallback(() => restartRef.current(), []);

  const { detectionGate, getWarnings } = useDetectionPipeline({
    videoRef,
    enabled: true,
    onRestartNeeded,
    onEyeWarning: (message) => setEyeFeedback(message),
  });

  const { state, progress, feedback, start, restart } = useSmartFrameCapture({
    videoRef,
    targetFrames: 10,
    captureMode: 'full',
    captureIntervalMs: 100,
    detectionGate,
    onComplete: (frames) => {
      const warnings = getWarnings();
      // Submit frames + warnings to your API...
    },
  });
  restartRef.current = restart;

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline muted />
      {eyeFeedback && <p>{eyeFeedback}</p>}
      {feedback && <p>{feedback}</p>}
      <p>Frames: {progress.current}/{progress.total}</p>
      {state === 'idle' && <button onClick={start}>Start</button>}
    </div>
  );
}
SDK options
- captureIntervalMs (default 100) — tune capture rate (e.g. 50 for ~20 FPS)
- onEyeWarning — called with messages like "Eyes are in shadow" or "Glare detected" before onRestartNeeded
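The FPS figures quoted above follow directly from the interval:

```typescript
// Frames per second implied by a captureIntervalMs setting.
function intervalToFps(intervalMs: number): number {
  return 1000 / intervalMs;
}
```

So the default of 100 ms captures at 10 FPS, matching the 10-frame, roughly one-second capture window used throughout these examples.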
Installation¶
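Assuming the SDK is published under the package name used in the imports above:

```shell
npm install @moveris/react

# For the face-crop example:
npm install @mediapipe/tasks-vision
```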
Usage Tips¶
- Cleanup streams: Always stop camera tracks in the useEffect cleanup to prevent memory leaks.
- Face guide overlay: Add a visual guide to help users position their face correctly within the frame.
- Loading states: Disable the button and show progress during verification to prevent double-submissions.
- Error boundaries: Wrap the camera component in an error boundary to gracefully handle permission denials.
- Mobile considerations: Use playsInline and muted attributes for iOS compatibility.