# LivenessClient
The LivenessClient class is the core API client for communicating with the Moveris API (v2). It wraps all REST endpoints with full TypeScript types, automatic retry with exponential backoff, and configurable error handling.
**In plain terms**
LivenessClient is the object your app uses to talk to the Moveris API. You create it with your API key and base URL, then call methods like fastCheck() or fastCheckCrops() to send frames and get verdicts. It handles retries and errors automatically.
**Optional parameters**
Parameters followed by ? (for example, options?) are optional. You can omit them if you don't need to customize the call. For instance, client.fastCheck(frames) works without passing options.
## Import
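A minimal import, assuming LivenessClient is exported from the same @moveris/shared package as LivenessApiError (used in Error Handling below); adjust the path if your installation packages the client separately.

```typescript
import { LivenessClient } from '@moveris/shared';
```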
## Constructor
## Configuration
| Property | Type | Default | Description |
|---|---|---|---|
| apiKey | string | -- | Required. Your Moveris API key (starts with sk-) |
| baseUrl | string | https://api.moveris.com | API base URL |
| timeout | number | 30000 | Request timeout in milliseconds |
| enableRetry | boolean | true | Enable automatic retry with exponential backoff |
| customFetch | typeof fetch | undefined | Custom fetch implementation (useful for React Native or testing) |
### Example

```typescript
const client = new LivenessClient({
  apiKey: 'sk-your-api-key',
  baseUrl: 'https://api.moveris.com',
  timeout: 30000,
  enableRetry: true,
});
```
**Security**

Never expose your API key in client-side code. Run the SDK on your backend, or have a backend proxy forward requests to the Moveris API on the client's behalf.
## Methods
### health()
Check the API health status.
Returns: HealthResponse with service status and model availability.
### getModels()
Fetch the list of available models from the API (id, label, description, min_frames, deprecated). Use this for dynamic model selection in your UI. The React SDK provides a useModels hook that wraps this method. Requires a backend proxy that can access the models registry.
Returns: ModelEntry[] — Array of model entries with id, label, description, min_frames, and deprecated.
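As a sketch of dynamic model selection, the entries returned by getModels() can be filtered before populating a dropdown. The ModelEntry shape is restated locally here so the snippet stands alone; the sample data is invented for illustration:

```typescript
// Minimal restatement of the ModelEntry shape for this sketch.
interface ModelEntry {
  id: string;
  label: string;
  description: string;
  min_frames: number;
  deprecated: boolean;
}

// Keep only models that are still supported, e.g. for a UI dropdown.
function selectableModels(models: ModelEntry[]): ModelEntry[] {
  return models.filter((m) => !m.deprecated);
}

// Invented sample data; in practice this comes from client.getModels().
const demoModels: ModelEntry[] = [
  { id: 'mixed-10-v2', label: 'Mixed 10 v2', description: '10-frame model', min_frames: 10, deprecated: false },
  { id: 'mixed-10-v1', label: 'Mixed 10 v1', description: 'Legacy model', min_frames: 10, deprecated: true },
];

console.log(selectableModels(demoModels).map((m) => m.id)); // ['mixed-10-v2']
```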
### fastCheck(frames, options?)
Send frames for fast liveness detection with server-side face detection. Frame count must match the model's min_frames (e.g. 10 for mixed-10-v2, 30 for mixed-30-v2).
```typescript
const result = await client.fastCheck(frames, {
  sessionId: 'uuid-string',
  source: 'live',
  model: '10',
});
```
Parameters:
| Name | Type | Description |
|---|---|---|
| frames | FrameData[] | Array of frames; count must match model min_frames (e.g. 10, 30, 60) |
| options.sessionId | string | Unique session identifier (UUID recommended) |
| options.source | FrameSource | "live" for camera capture, "media" for recorded video |
| options.model | FastCheckModel | Model alias (e.g. "mixed-10-v2", "mixed-30-v2"), default "10" |
Returns: FastCheckResponse with verdict, score, real_score (use this for decision-making), processing_ms, and optionally warnings.
The confidence field is reserved for future use and is functionally identical to real_score at the moment.
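Since real_score is the decision-making field, a typical pattern is to compare it against an acceptance threshold of your own choosing. The 0.5 below is a hypothetical application-level value, not an API constant:

```typescript
// Decide acceptance from real_score. The 0.5 threshold is an application
// choice for this sketch; tune it to your false-accept/false-reject needs.
function isLive(realScore: number, threshold = 0.5): boolean {
  return realScore >= threshold;
}

console.log(isLive(0.87)); // true
console.log(isLive(0.31)); // false
```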
### fastCheckCrops(crops, options?)

Send pre-cropped 224×224 face images for faster processing; the crop count must match the model's min_frames (e.g. 10 for mixed-10-v2).
Parameters:
| Name | Type | Description |
|---|---|---|
| crops | CropData[] | Array of crops; count must match model min_frames (e.g. 10, 30); each a 224×224 base64-encoded PNG |
| options.sessionId | string | Unique session identifier |
| options.source | FrameSource | Frame source |
Returns: FastCheckResponse
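It can help to validate the batch locally before uploading. The sketch below assumes a pixels field holding the base64 PNG (the actual CropData field names may differ) and checks only the count:

```typescript
// Assumed shape for this sketch; check CropData for the real field names.
interface CropData {
  pixels: string; // base64-encoded 224x224 PNG
}

// Verify the batch has exactly the frame count the model requires.
function validateCropBatch(crops: CropData[], minFrames: number): void {
  if (crops.length !== minFrames) {
    throw new Error(`expected ${minFrames} crops, got ${crops.length}`);
  }
}

validateCropBatch(new Array(10).fill({ pixels: '' }), 10); // passes silently
```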
### streamFrame(frame, options?)

Send a single frame to the streaming endpoint. The server buffers frames and runs detection once all required frames for the session have arrived.
```typescript
const response = await client.streamFrame(frame, {
  sessionId: 'uuid-string',
  source: 'live',
  model: '10',
});

if (response.status === 'buffering') {
  console.log(`${response.frames_received}/${response.frames_required} frames`);
} else if (response.status === 'complete') {
  console.log('Verdict:', response.verdict);
}
```
Parameters:
| Name | Type | Description |
|---|---|---|
| frame | FrameData | Single frame with index, timestamp_ms, pixels |
| options.sessionId | string | Session identifier (must be the same for all frames in a session) |
| options.source | FrameSource | Frame source |
| options.model | FastCheckModel | Model alias |
Returns: FastCheckStreamResponse with status ("buffering" or "complete"), frames_received, frames_required, and result fields when complete.
### fastCheckStream(frames, options?, callbacks?)
Send all frames in parallel using the streaming endpoint. This is the lowest-latency option since frames upload simultaneously.
```typescript
const result = await client.fastCheckStream(
  frames,
  {
    sessionId: 'uuid-string',
    source: 'live',
    model: '10',
  },
  {
    onProgress: (received, total) => {
      console.log(`Progress: ${received}/${total}`);
    },
  }
);
```
Parameters:
| Name | Type | Description |
|---|---|---|
| frames | FrameData[] | Array of frames to send in parallel |
| options | object | Session ID, source, model |
| callbacks.onProgress | function | Called with (framesReceived, framesTotal) as frames are acknowledged |
Returns: FastCheckStreamResponse with the final result.
### fastCheckStreamSequential(frames, options?, callbacks?)

Same as fastCheckStream, but sends frames one at a time. Useful in bandwidth-constrained environments.
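The difference between the two streaming strategies comes down to how many uploads are in flight at once. A sketch with a hypothetical per-frame sender (standing in for the SDK's internal upload call):

```typescript
// Hypothetical per-frame sender standing in for the SDK's internal upload.
type SendFrame = (frame: number) => Promise<void>;

// Parallel: all frames in flight at once (the fastCheckStream strategy).
async function sendParallel(frames: number[], send: SendFrame): Promise<void> {
  await Promise.all(frames.map(send));
}

// Sequential: one frame at a time (the fastCheckStreamSequential strategy).
async function sendSequential(frames: number[], send: SendFrame): Promise<void> {
  for (const f of frames) {
    await send(f);
  }
}
```

Parallel minimizes wall-clock latency by keeping every frame in flight simultaneously; sequential caps bandwidth use at one frame's upload at a time.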
### verify(frames, options?)
Send 50+ frames for spatial-feature-based verification (standard KYC).
Returns: VerifyResponse
### hybridCheck(frames, options?)
Send frames for CNN + physiological hybrid analysis.
Returns: HybridCheckResponse
### hybrid50(frames, options?)
50-frame hybrid model with 93.8% accuracy.
Returns: HybridCheckResponse
### hybrid150(frames, options?)
150-frame hybrid model with 96.2% accuracy.
Returns: HybridCheckResponse
### getJobResult(jobId)
Poll for the result of an asynchronous job.
Returns: JobStatusResponse with job status and result when complete.
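A polling loop over getJobResult() might look like the sketch below, with the status-fetching function injected so the snippet stands alone (the JobStatus shape and status strings are assumptions; check JobStatusResponse for the real fields):

```typescript
// Assumed minimal shape of a job status for this sketch.
interface JobStatus {
  status: 'pending' | 'processing' | 'complete';
  result?: unknown;
}

// Poll an async job until it completes or we give up.
// `getStatus` stands in for () => client.getJobResult(jobId).
async function pollJob(
  getStatus: () => Promise<JobStatus>,
  intervalMs = 1000,
  maxAttempts = 30,
): Promise<JobStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (status.status === 'complete') return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('job did not complete in time');
}
```

In practice, prefer waitForJobResult() below when available, since it long-polls on the server side instead of issuing repeated requests.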
### waitForJobResult(jobId, timeout?)
Long-poll for a job result. Blocks until the result is ready or the timeout is reached.
Parameters:
| Name | Type | Default | Description |
|---|---|---|---|
| jobId | string | -- | Job identifier |
| timeout | number | 30000 | Maximum wait time in milliseconds |
Returns: JobStatusResponse
### queueStats()
Get queue statistics (pending jobs, processing capacity).
Returns: QueueStatsResponse
## Error Handling
All methods throw LivenessApiError on failure:
```typescript
import { LivenessApiError } from '@moveris/shared';

try {
  const result = await client.fastCheck(frames, options);
} catch (error) {
  if (error instanceof LivenessApiError) {
    console.error('API error:', error.message);
    console.error('Status code:', error.statusCode);
    console.error('Error code:', error.code);
  }
}
```
### Common Error Codes
| Code | Status | Description |
|---|---|---|
| invalid_key | 401 | API key is missing, invalid, or revoked |
| insufficient_credits | 402 | Not enough credits for the operation |
| insufficient_scope | 403 | API key lacks required scope. Create a key with the needed scope in the Developer Portal. |
| rate_limit_exceeded | 429 | Too many requests |
| insufficient_frames | 400 | Not enough frames provided |
| invalid_model | 400 | Invalid model alias |
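One way to consume these codes is a single dispatch point that maps each code to an app-level action. The action names below are hypothetical application choices, not part of the SDK:

```typescript
// Map an API error code to an app-level action. The codes mirror the table
// above; the returned action strings are application-specific examples.
function actionFor(code: string): string {
  switch (code) {
    case 'invalid_key':
    case 'insufficient_scope':
      return 'fix-credentials';
    case 'insufficient_credits':
      return 'top-up';
    case 'rate_limit_exceeded':
      return 'retry-later';
    case 'insufficient_frames':
    case 'invalid_model':
      return 'fix-request';
    default:
      return 'report';
  }
}

console.log(actionFor('rate_limit_exceeded')); // 'retry-later'
```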
For full error codes and handling, see Errors.

## Retry Behavior

When enableRetry is true (the default), the client automatically retries failed requests with exponential backoff:
- Max attempts: 3
- Initial delay: 1 second
- Max delay: 10 seconds
- Retried errors: Network errors, 5xx server errors, 429 rate limit errors
- Not retried: 4xx client errors (except 429)
## Helper Functions
The @moveris/shared package also exports utility functions: