React Hooks¶
The @moveris/react package provides seven hooks for building custom liveness verification UIs. Use these when you need more control than the pre-built components offer.
In plain terms
Hooks let you build your own UI while the SDK handles the logic: useLiveness for the full flow, useCamera for camera access, useFrameCapture for capturing frames, useFaceDetection for face detection, useSmartFrameCapture for quality-gated capture, useDetectionPipeline for gaze + eye-region gating, and useModels for dynamic model selection.
Optional parameters
Parameters followed by ? (for example, options?) are optional. You can omit them if you don't need to customize the call. For instance, useLiveness() works without passing options.
Import¶
import {
  useLiveness,
  useCamera,
  useFrameCapture,
  useFaceDetection,
  useSmartFrameCapture,
  useDetectionPipeline,
  useModels,
} from '@moveris/react';
useLiveness¶
The main hook that manages the full liveness verification flow: camera access, frame capture, API submission, and result handling.
const {
  status,
  result,
  error,
  framesReceived,
  framesRequired,
  feedbackMessage,
  start,
  stop,
  reset,
} = useLiveness(options?);
Options¶
| Option | Type | Default | Description |
|---|---|---|---|
| model | FastCheckModel | '10' | Model to use |
| source | FrameSource | 'live' | Frame source |
| sessionId | string | auto-generated | Session ID for API calls. When provided, the same ID is used for every request in the session. |
| endpoint | 'fast-check-crops' \| 'fast-check-stream' \| 'fast-check' | 'fast-check-crops' | API endpoint to use. Use fast-check-stream for streaming; fast-check for batch with full frames. |
| mode | 'batch' \| 'stream' | 'batch' | Upload mode: batch or streaming (streaming requires endpoint: 'fast-check-stream') |
| autoStart | boolean | false | Start capturing automatically |
| onResult | (result: LivenessResult) => void | -- | Result callback |
| onError | (error: Error) => void | -- | Error callback |
| onProgress | (received: number, total: number) => void | -- | Progress callback |
Return Value¶
| Property | Type | Description |
|---|---|---|
| status | 'idle' \| 'capturing' \| 'processing' \| 'complete' \| 'error' | Current state of the verification flow |
| result | LivenessResult \| null | Verification result when complete |
| error | Error \| null | Error if the flow failed |
| framesReceived | number | Number of frames captured so far |
| framesRequired | number | Total frames needed |
| feedbackMessage | string \| null | User-facing feedback message |
| start | () => void | Start the verification flow |
| stop | () => void | Stop capturing |
| reset | () => void | Reset to idle state |
Example¶
function CustomLivenessUI() {
  const {
    status,
    result,
    error,
    framesReceived,
    framesRequired,
    feedbackMessage,
    start,
    reset,
  } = useLiveness({
    endpoint: 'fast-check-stream', // streaming mode requires the streaming endpoint
    mode: 'stream',
    onResult: (r) => console.log('Done:', r.verdict),
  });

  return (
    <div>
      {status === 'idle' && <button onClick={start}>Begin</button>}
      {status === 'capturing' && <p>{feedbackMessage}</p>}
      {status === 'capturing' && <p>{framesReceived}/{framesRequired}</p>}
      {status === 'processing' && <p>Analyzing...</p>}
      {result && <p>Verdict: {result.verdict}</p>}
      {error && <p>Error: {error.message}</p>}
      {(result || error) && <button onClick={reset}>Try Again</button>}
    </div>
  );
}
useCamera¶
Manages camera permissions and the video stream.
const {
  videoRef,
  stream,
  isReady,
  error,
  hasPermission,
  capabilities,
  validation,
  requestPermission,
  startStream,
  stopStream,
  applyConstraints,
} = useCamera(options?);
Options¶
| Option | Type | Default | Description |
|---|---|---|---|
| facingMode | 'user' \| 'environment' | 'user' | Camera to use |
| width | number | 640 | Preferred video width |
| height | number | 480 | Preferred video height |
| autoStart | boolean | true | Start stream on mount |
| requirements | CameraRequirements | SDK defaults | Minimum camera capability targets used for validation |
Return Value¶
| Property | Type | Description |
|---|---|---|
| videoRef | RefObject<HTMLVideoElement> | Ref to attach to a <video> element |
| stream | MediaStream \| null | Active media stream |
| isReady | boolean | Whether the camera is ready to capture |
| error | Error \| null | Camera initialization error |
| hasPermission | boolean | Whether camera permission is granted |
| capabilities | CameraCapabilities \| null | Camera track capabilities when available (browser-dependent) |
| validation | CameraValidationResult \| null | Validation result against requirements |
| requestPermission | () => Promise<boolean> | Request camera permission |
| startStream | () => Promise<void> | Start the camera stream |
| stopStream | () => void | Stop and release the camera |
| applyConstraints | (constraints: MediaTrackConstraints) => Promise<void> | Apply runtime track constraints when supported |
Example¶
function CameraPreview() {
  const { videoRef, error, hasPermission, requestPermission } = useCamera();

  if (!hasPermission) {
    return <button onClick={requestPermission}>Enable Camera</button>;
  }
  if (error) return <p>Camera error: {error.message}</p>;

  return (
    <video
      ref={videoRef}
      autoPlay
      playsInline
      muted
      style={{ width: '100%' }}
    />
  );
}
Capability support
capabilities depends on browser support for MediaStreamTrack.getCapabilities(). Some browsers (for example, older Firefox versions) may return limited or no capability data.
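Because of this browser variance, it is safest to treat capability data as optional. The sketch below shows a feature-detection guard; the `readCapabilities` helper is hypothetical and not part of @moveris/react:

```typescript
// Hypothetical helper (not part of @moveris/react): read track capabilities
// only when the browser implements MediaStreamTrack.getCapabilities().
type CapabilityMap = Record<string, unknown>;

function readCapabilities(
  track: { getCapabilities?: () => CapabilityMap },
): CapabilityMap | null {
  // Feature-detect: older browsers expose tracks without getCapabilities().
  return typeof track.getCapabilities === 'function'
    ? track.getCapabilities()
    : null;
}
```

Code that consumes the result should handle `null` the same way it handles the hook's `capabilities` being `null`.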
useFrameCapture¶
Captures individual frames from a video element.
const {
  captureFrame,
  capturedFrames,
  frameCount,
  clearFrames,
} = useFrameCapture(videoRef, options?);
Options¶
| Option | Type | Default | Description |
|---|---|---|---|
| width | number | 640 | Output frame width |
| height | number | 480 | Output frame height |
| format | 'image/png' \| 'image/jpeg' | 'image/png' | Image encoding format |
Return Value¶
| Property | Type | Description |
|---|---|---|
| captureFrame | () => CapturedFrame | Capture a frame from the video element |
| capturedFrames | CapturedFrame[] | All frames captured so far |
| frameCount | number | Number of captured frames |
| clearFrames | () => void | Clear all captured frames |
Example¶
function ManualCapture() {
  const { videoRef } = useCamera();
  const { captureFrame, frameCount } = useFrameCapture(videoRef);

  const handleCapture = () => {
    const frame = captureFrame();
    console.log('Captured frame:', frame.index);
  };

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline muted />
      <button onClick={handleCapture}>Capture ({frameCount}/10)</button>
    </div>
  );
}
useFaceDetection¶
Real-time face detection using an adapter pattern. Supports MediaPipe (web) and custom detectors.
Options¶
| Option | Type | Default | Description |
|---|---|---|---|
| adapter | FaceDetectorAdapter | -- | Required. Face detector adapter |
| enabled | boolean | true | Enable/disable detection |
| interval | number | 100 | Detection interval in milliseconds |
Adapters¶
MediaPipe (Web)¶
import { createMediaPipeAdapter } from '@moveris/react';
const adapter = createMediaPipeAdapter({
  modelPath: 'https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite',
  delegate: 'GPU',
});
Return Value¶
| Property | Type | Description |
|---|---|---|
| detectionResult | DetectionResult \| null | Latest detection result |
| isInitialized | boolean | Whether the detector is ready |
| error | Error \| null | Initialization error |
Example¶
function FaceDetectionPreview() {
  const { videoRef } = useCamera();
  const { detectionResult, isInitialized } = useFaceDetection(videoRef, {
    adapter: createMediaPipeAdapter(),
  });

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline muted />
      {isInitialized && detectionResult?.faceDetected && (
        <p>Face detected!</p>
      )}
    </div>
  );
}
useSmartFrameCapture¶
Intelligent frame capture that counts a frame only when quality and alignment checks pass. Faces smaller than 4% of the frame area are rejected. Capture can be gated by external detectors, and restart() resumes capture after a blocking condition occurs (e.g. hidden or shadowed eyes).
const {
  state,     // 'idle' | 'detecting' | 'capturing' | 'complete'
  progress,  // { current: number, total: number, quality: string }
  feedback,  // string -- user guidance message
  ovalState, // 'no_face' | 'poor' | 'good' | 'perfect'
  frames,    // CapturedFrame[]
  start,     // () => void
  stop,      // () => void
  reset,     // () => void
  restart,   // () => void -- reset all state and immediately resume capturing
} = useSmartFrameCapture({
  videoRef,
  targetFrames: 10,
  captureMode: 'full',
  onFrameCapture: (frame, index, total) => console.log('Captured:', index),
  onComplete: (frames) => console.log('All frames:', frames),
  onError: (error) => console.error(error),
});
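The 4% face-area rejection rule mentioned above can be pictured with a small pure function. This is an illustrative sketch with assumed names and shapes, not the SDK's internal check:

```typescript
// Illustrative sketch of the "face below 4% of frame area" rejection rule.
// Box and faceLargeEnough are hypothetical, not SDK exports.
interface Box {
  width: number;
  height: number;
}

function faceLargeEnough(face: Box, frame: Box, minRatio = 0.04): boolean {
  const faceArea = face.width * face.height;
  const frameArea = frame.width * frame.height;
  // Reject when the face bounding box covers less than minRatio of the frame.
  return frameArea > 0 && faceArea / frameArea >= minRatio;
}
```

For a 640x480 frame, a 100x100 face covers about 3.3% and would be rejected, while a 120x120 face covers about 4.7% and would pass.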
Options¶
| Option | Type | Default | Description |
|---|---|---|---|
| videoRef | RefObject<HTMLVideoElement \| null> | required | Reference to the live video element |
| targetFrames | number | 10 | Number of frames to capture |
| captureIntervalMs | number | 100 | Minimum ms between captures (e.g. 50 for ~20 FPS) |
| blurThreshold | number | auto | Blur rejection threshold (100 desktop, 150 mobile) |
| captureMode | 'crop' \| 'full' | 'crop' | Capture mode (crop for fast-check-crops, full for fast-check / fast-check-stream) |
| detectionGate | () => boolean | -- | External gate: only capture when it returns true |
| detectFace | (video) => Promise<FaceBoundingBox \| null> | built-in | Optional custom face detection function |
| onFrameCapture | (frame, index, total) => void | -- | Called for each captured frame |
| onComplete | (frames) => void | -- | Called when all frames are captured |
| onError | (error) => void | -- | Error callback |
| onQualityUpdate | (quality) => void | -- | Called on each quality update (for UI feedback) |
Return Value¶
| Property | Type | Description |
|---|---|---|
| state | 'idle' \| 'detecting' \| 'capturing' \| 'complete' | Current capture state |
| progress | { current: number; total: number; quality: string } | Capture progress and current quality label |
| feedback | string | User guidance message |
| ovalState | OvalGuideState | Guide visual state (no_face / poor / good / perfect) |
| frames | CapturedFrame[] | Captured frames |
| start | () => void | Start capture loop |
| stop | () => void | Stop capture loop |
| reset | () => void | Reset to initial state |
| restart | () => void | Reset state and immediately resume capturing |
Example¶
function SmartCapture() {
  const { videoRef } = useCamera();
  const {
    state,
    progress,
    feedback,
    ovalState,
    start,
    reset,
    restart,
  } = useSmartFrameCapture({
    videoRef,
    targetFrames: 10,
    captureMode: 'full',
    onComplete: (captured) => {
      console.log('All frames captured:', captured.length);
      // Send frames to API...
    },
  });

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline muted />
      {feedback && <p>{feedback}</p>}
      <p>
        Frames: {progress.current}/{progress.total}
      </p>
      <p>Oval: {ovalState}</p>
      {state === 'idle' && <button onClick={start}>Start</button>}
      {state === 'complete' && <button onClick={reset}>Try Again</button>}
      {state !== 'idle' && <button onClick={restart}>Restart</button>}
    </div>
  );
}
Capture Modes¶
| Mode | Output | Use With | Description |
|---|---|---|---|
| 'crop' | 224x224 PNG | fast-check-crops | Face-cropped frame (client-side detection) |
| 'full' | 640x480 JPEG | fast-check, fast-check-stream | Full video frame (server-side detection) |
Important:
fast-check and fast-check-stream expect full video frames because the server performs its own face detection. Only fast-check-crops expects pre-cropped 224x224 images.
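One way to keep the endpoint and capture mode consistent is to derive the mode from the endpoint, following the table above. The helper below is illustrative, not an SDK export:

```typescript
// Hypothetical helper: pick the captureMode that matches the API endpoint.
// Only fast-check-crops takes pre-cropped frames; the others take full frames.
type Endpoint = 'fast-check-crops' | 'fast-check-stream' | 'fast-check';

function captureModeFor(endpoint: Endpoint): 'crop' | 'full' {
  return endpoint === 'fast-check-crops' ? 'crop' : 'full';
}
```

Passing `captureModeFor(endpoint)` as `captureMode` removes one opportunity for a mismatched configuration.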
Detection Gate¶
Use detectionGate to integrate external detectors (e.g. gaze, eye-region gating):
const detectionPassedRef = useRef(true);

useSmartFrameCapture({
  videoRef,
  detectionGate: () => detectionPassedRef.current,
  onFrameCapture: (frame) => {
    console.log('Captured frame:', frame.index);
  },
});
When the gate returns false, the frame is silently skipped and the internal counter does NOT increment.
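If you have several independent checks, they can be folded into a single gate. This combinator is a sketch, not an SDK export, and the gate names in the usage note are hypothetical:

```typescript
// Hypothetical combinator (not an SDK export): a frame is captured only
// when every individual gate passes.
type Gate = () => boolean;

function combineGates(...gates: Gate[]): Gate {
  return () => gates.every((gate) => gate());
}
```

You could then pass something like `combineGates(gazeGate, occlusionGate)` as `detectionGate`.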
useDetectionPipeline¶
Encapsulates the gaze + eye-region detection pipeline and provides:
- detectionGate for useSmartFrameCapture
- getWarnings() to collect non-blocking warnings at submission time (e.g. glasses glare)
- onEyeWarning callback for real-time eye quality feedback (e.g. "Eyes are in shadow", "Glare detected") fired just before onRestartNeeded
const restartRef = useRef<() => void>(() => {});
const onRestartNeeded = useCallback(() => restartRef.current(), []);

const { detectionGate, getWarnings } = useDetectionPipeline({
  videoRef,
  enabled: isSessionOpen,
  onRestartNeeded,
  onGazeFeedback: (message) => setGazeFeedback(message), // optional
});

const { start, stop, restart } = useSmartFrameCapture({
  videoRef,
  targetFrames: 10,
  captureMode: 'full',
  detectionGate,
  onComplete: (frames) => {
    const warnings = getWarnings();
    // Submit frames + warnings to your API...
    submitFrames(frames, warnings);
  },
});

// Wire restart after the hook call.
restartRef.current = restart;
Options¶
| Option | Type | Default | Description |
|---|---|---|---|
| videoRef | RefObject<HTMLVideoElement \| null> | required | Video element ref used by the detectors |
| enabled | boolean | required | Activates/deactivates the detection loop |
| onRestartNeeded | () => void | required | Called when hidden/shadowed eyes are detected (blocking) |
| intervalMs | number | 200 | Detection interval in milliseconds |
| onGazeFeedback | (message: string) => void | -- | Called with each gaze feedback update (empty string = clear) |
| onEyeWarning | (message: string) => void | -- | Called with the eye failure reason ("Eyes are in shadow", "Glare detected", etc.) just before onRestartNeeded |
Return Value¶
| Property | Type | Description |
|---|---|---|
| detectionGate | () => boolean | Pass to useSmartFrameCapture.detectionGate |
| getWarnings | () => string[] | Returns accumulated session warnings; call at submit time |
Detection Behaviour¶
| Condition | Behaviour |
|---|---|
| Gaze off-camera | detectionGate returns false; onGazeFeedback provides a hint |
| Gaze restored | detectionGate returns true; onGazeFeedback('') clears the hint |
| Glasses glare | Non-blocking; adds "User was wearing glasses" to warnings |
| Hidden/shadowed eyes | Calls onRestartNeeded(); you should wire this to useSmartFrameCapture.restart() |
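The accumulate-then-read pattern behind getWarnings() can be sketched as a small deduplicating log. This is illustrative and not the SDK's implementation:

```typescript
// Illustrative sketch of a session warning log: warnings accumulate during
// capture, duplicates collapse, and everything is read once at submit time.
function createWarningLog() {
  const seen = new Set<string>();
  return {
    add(message: string): void {
      seen.add(message);
    },
    getWarnings(): string[] {
      return Array.from(seen);
    },
  };
}
```

Deduplication matters because a condition like glasses glare can trigger on many consecutive detection ticks but should appear once in the submitted warnings.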
useModels¶
Fetches the live model registry from the API for dynamic model selection. Returns active and deprecated models, with local fallback while loading or on error. Use this when you want users to choose a model from the current API configuration (e.g. a model selector dropdown).
Return Value¶
| Property | Type | Description |
|---|---|---|
| models | ModelEntry[] | Full model list (active + deprecated) |
| activeModels | ModelEntry[] | Non-deprecated models only |
| deprecatedModels | ModelEntry[] | Deprecated models only |
| loading | boolean | Whether the fetch is in progress |
| error | Error \| null | Error if the fetch failed |
| refetch | () => void | Manually re-fetch the model list |
Example¶
function ModelPicker() {
  const { activeModels, deprecatedModels, loading } = useModels();

  if (loading) return <span>Loading models…</span>;

  return (
    <select>
      {activeModels.map((m) => (
        <option key={m.id} value={m.id}>
          {m.label} ({m.min_frames} frames)
        </option>
      ))}
      {deprecatedModels.length > 0 && (
        <optgroup label="Deprecated">
          {deprecatedModels.map((m) => (
            <option key={m.id} value={m.id}>
              {m.label} (deprecated)
            </option>
          ))}
        </optgroup>
      )}
    </select>
  );
}
Recommended models
Use activeModels to show only non-deprecated models. The API prefers mixed-30-v2 (Balanced) for most integrations. See Models overview.
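The active/deprecated split the hook returns amounts to partitioning the registry on a deprecation flag. The sketch below assumes a minimal ModelEntry shape for illustration; the real type may carry more fields (e.g. min_frames):

```typescript
// Illustrative sketch of how activeModels / deprecatedModels relate to models.
// This ModelEntry shape is a minimal assumption for the example.
interface ModelEntry {
  id: string;
  label: string;
  deprecated?: boolean;
}

function partitionModels(models: ModelEntry[]) {
  return {
    activeModels: models.filter((m) => !m.deprecated),
    deprecatedModels: models.filter((m) => m.deprecated),
  };
}
```

A picker that only renders `activeModels` automatically drops models the API marks deprecated, without a client release.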
Utility Classes¶
The @moveris/react package also exports utility classes for advanced use cases:
DetectionManager¶
Orchestrates multiple detectors (face, gaze, and hand occlusion).
Frame Utilities¶
import {
  analyzeBlur,
  analyzeBlurFromVideo,
  analyzeLighting,
  captureVideoFrame,
  captureFaceCroppedFrame,
  checkFrameQuality,
  FrameCollector,
  VideoFrameSync,
  CameraStabilizer,
} from '@moveris/react';
- analyzeBlur(imageData) -- Returns a blur score for the image
- analyzeBlurFromVideo(video) -- Returns a blur score computed directly from a video element
- analyzeLighting(imageData) -- Evaluates lighting conditions
- captureVideoFrame(video) -- Capture a single frame from a video element
- captureFaceCroppedFrame(video, detection) -- Capture and crop a face from video
- checkFrameQuality(frame) -- Combined quality check (blur + lighting)
- FrameCollector -- Collects frames at a configured interval
- VideoFrameSync -- Synchronizes frame capture with video playback
- CameraStabilizer -- Detects when the camera feed has stabilized
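For intuition about what a blur score measures: a common metric is the variance of the Laplacian, which is near zero over flat (blurry) regions and large where sharp edges exist. The sketch below illustrates the idea on a grayscale array; it is not necessarily how analyzeBlur is implemented:

```typescript
// Illustrative blur metric (NOT necessarily analyzeBlur's implementation):
// variance of the 4-neighbour Laplacian over a grayscale image.
function laplacianVariance(gray: number[][]): number {
  const height = gray.length;
  const width = gray[0].length;
  const values: number[] = [];
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      // 4-neighbour Laplacian: strong response at edges, zero on flat areas.
      const lap =
        gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1] + gray[y][x + 1] -
        4 * gray[y][x];
      values.push(lap);
    }
  }
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
}
```

A threshold on such a score (compare blurThreshold in useSmartFrameCapture) separates acceptably sharp frames from blurry ones.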