# Moveris Liveness Detection API — Full Documentation

> This file contains the complete Moveris API documentation in a single,
> machine-readable format for AI agents and LLMs.
> Source: https://documentation.moveris.com
> Index: https://documentation.moveris.com/llms.txt

---

========================================================================
Source: index.md
URL: https://documentation.moveris.com/
========================================================================

# Moveris Liveness Detection API (v2)

# Human Liveness Verification

Liveness detection rooted in psychophysiology, not AI training. Stop spoofing attacks by detecting what deepfakes can't replicate: biology.

!!! info "For decision-makers"
    Moveris verifies that a **real person** is in front of the camera—not a photo, video, or deepfake. Integrate it into your app with a simple REST API. Typical flow: capture video, send frames (e.g. 10 for the fast model, 30 for the balanced model), receive a live/fake result in under a second.
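The typical flow described above (send Base64-encoded frames with your API key in the `X-API-Key` header, read back a `live`/`fake` verdict) can be sketched in Python. This is a minimal sketch, not a canonical client: the payload field names (`session_id`, `source`, `frames`) follow the glossary entries below, and sending is left to the HTTP client of your choice.

```python
import json

BASE_URL = "https://api.moveris.com"

def build_fast_check_request(frames, session_id, api_key):
    """Assemble URL, headers, and JSON body for POST /api/v1/fast-check.

    frames: list of frame objects ({"index", "timestamp_ms", "pixels"}).
    The API key goes in the X-API-Key header, server-side only --
    never ship it in frontend code.
    """
    return {
        "url": f"{BASE_URL}/api/v1/fast-check",
        "headers": {"Content-Type": "application/json", "X-API-Key": api_key},
        "body": json.dumps(
            {"session_id": session_id, "source": "live", "frames": frames}
        ),
    }
```

Send it with any HTTP client, e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`, then read the verdict from the `data` field of the JSON response.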
## Key Features

### :zap: Fast Processing

Sub-second liveness detection with 10-frame analysis

### :material-api: REST API

Simple REST API for easy integration with any platform

### :material-target: High Accuracy

Deep learning model with confidence scoring (0-100)

### :material-code-tags: Easy Integration

Simple API with JavaScript and Python examples

## At a Glance

- **< 1s** Processing time
- **10** Frames required
- **0-100** Confidence score

## Quick Links

- **[How It Works](getting-started/how-it-works.md)** - Science behind liveness detection (technical and non-technical overview)
- **[For Decision-Makers](getting-started/for-decision-makers.md)** - High-level overview: value, use cases, and integration paths
- **[Quick Start](getting-started/quick-start.md)** - Get up and running in minutes
- **[MCP](mcp/overview.md)** - Verify human presence before AI agent actions (wire transfer, signing)
- **[API Key](getting-started/api-key.md)** - Obtain your API credentials
- **[Code Examples](examples/javascript.md)** - Production-ready code in multiple languages
- **[llms.txt](llms.txt)** - Machine-readable docs for AI agents ([full content](https://documentation.moveris.com/llms-full.txt))
- **[Glossary](glossary.md)** - Technical terms explained in plain English

## Base URL

```
https://api.moveris.com
```

## Authentication

All API requests require an API key passed in the `X-API-Key` header.

```bash
curl -X POST "https://api.moveris.com/api/v1/fast-check" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{ ... }'
```

[Get your API key :material-arrow-right:](getting-started/api-key.md){ .md-button }

========================================================================
Source: glossary.md
URL: https://documentation.moveris.com/glossary/
========================================================================

# Glossary

Terms explained in plain English.
Included: product-specific terms, technical concepts, acronyms, and potentially ambiguous words. Excluded: generic business terms and roles. If a non-technical reader might ask "What does this mean?"—it's here.

---

## A

### AI Agent

Software that performs tasks autonomously on behalf of a user—e.g., ChatGPT, Claude, Cowork. Moveris can verify that a live human authorized an AI agent's high-stakes action (e.g., wire transfer, contract signing).

*In practice:* When an AI agent is about to execute a consequential action, it calls Moveris to verify the user is present and authorized.

### API

**Application Programming Interface.** A way for software to talk to another service. In our case, you send requests to Moveris servers, and they send back answers about whether a face is live or fake.

*In practice:* You call our API from your app or website to verify that a real person is in front of the camera.

### API Key

A secret code that identifies you when you use the Moveris API. Think of it like a password—you must include it in every request, and you must keep it private.

*In practice:* You get your API key from the Moveris Developer Portal and send it in the `X-API-Key` header.

### Attestation

A signed proof that a live human completed verification at a specific time. The Moveris MCP server returns attestations as JWT (ES256) so relying parties can verify them without calling the API.

*In practice:* After the user passes liveness, the agent receives an attestation containing verdict, confidence, timestamp, and action description. You can store it for audit or compliance (e.g., EU AI Act Article 14).

---

## B

### Babel

A tool that transforms modern JavaScript (or TypeScript) into code that older browsers and runtimes can run. It lets developers use the latest language features while supporting more environments.

*In practice:* When you build a React or React Native app, Babel often runs behind the scenes to compile your code.
### Boolean

A data type that has only two values: `true` or `false`. Used in programming and API responses to represent yes/no, on/off, or similar binary states.

### Bot

Software that runs automatically without human control. Bots can be benign (e.g., search engines) or malicious (e.g., automated fraud). Moveris helps detect when a bot is trying to impersonate a human (e.g., with a photo or video instead of a live face).

### Backend

The server-side part of your application—the code that runs on your server, not in the user's browser. Your backend should add the API key and forward requests to Moveris, so the key stays secret.

*In practice:* Never put your API key in frontend code. Create a backend endpoint that receives frames, adds the key, and calls Moveris.

### Base64

A way to convert binary data (like images) into text so it can be sent safely in JSON. Your frame images are sent as Base64-encoded strings.

*In practice:* Instead of sending the raw image file, you convert it to a long text string that the API can decode.

### Base URL

The main address where all API requests are sent. For Moveris: `https://api.moveris.com`

*In practice:* Every endpoint path is added to this base URL. Example: `https://api.moveris.com/api/v1/fast-check`

---

## C

### CI (Continuous Integration)

Automated pipelines that run tests, linting, and builds when you push code. CI catches errors before they reach production.

*In practice:* GitHub Actions runs unit and integration tests on every pull request.

### Claude Code

An AI coding assistant (by Anthropic) that runs in your terminal. It reads `CLAUDE.md` and `AGENTS.md` at session start to follow your project's rules and conventions.

*In practice:* Add a `CLAUDE.md` file to your project root; Claude Code loads it automatically and follows the rules you define.

### Claude Desktop

A desktop application for chatting with Claude. Can be configured as an MCP host to connect to tools like the Moveris MCP server for liveness verification.
*In practice:* Configure Claude Desktop to connect to the Moveris MCP server so you can verify human presence from within a conversation.

### CNN (Convolutional Neural Network)

A type of deep learning model that analyzes images by detecting patterns and features. Moveris uses CNN models for visual analysis of face frames—detecting texture, depth, and spatial patterns that help distinguish live faces from photos, screens, or masks.

*In practice:* CNN models in Moveris power the hybrid and anti-spoofing pipelines, analyzing frame pixels to produce liveness scores.

### Conventional Commits

A commit message format: `type(scope): description`—e.g., `feat(auth): add login`, `fix(api): handle 429 errors`. Enables automated changelogs and clear history.

*In practice:* Use prefixes like `feat:`, `fix:`, `docs:`, `chore:`, `ci:` so tools can categorize commits automatically.

### CommonJS

A module system for JavaScript used in Node.js. Uses `require()` and `module.exports` to share code between files. Contrast with ESM (ES Modules), which uses `import` and `export`.

*In practice:* Older Node.js projects often use CommonJS. The Moveris Node.js examples show both CommonJS and ESM + TypeScript.

### Components

Reusable building blocks of a user interface—buttons, forms, cards, etc. In React and React Native, you compose screens from components. The Moveris SDK provides components like `LivenessCheck` so you don't build the camera flow from scratch.

*In practice:* A component is a self-contained piece of UI that you can drop into your app and customize with props.

### Confidence

This field is currently reserved for future use and is functionally identical to `real_score`. Clients should use `real_score` for decision-making.

### Credits

Units used to pay for API usage. Each request uses a certain number of credits. You need enough credits in your account for requests to succeed.

*In practice:* If you run out of credits, the API returns an `insufficient_credits` error.
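The Base64 conversion described in the Base64 entry above needs nothing beyond Python's standard library. A minimal sketch:

```python
import base64

def encode_frame(image_bytes: bytes) -> str:
    """Turn raw image bytes (e.g. a PNG file's contents) into the
    Base64 text that goes in a frame's `pixels` field."""
    return base64.b64encode(image_bytes).decode("ascii")

def decode_frame(pixels: str) -> bytes:
    """The reverse step, as the server performs before analysis."""
    return base64.b64decode(pixels)
```

For example, `encode_frame(open("frame.png", "rb").read())` yields the string to place in `pixels`.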
### Crops / Pre-cropped

Face images that have already been cut (cropped) to show only the face. The `/fast-check-crops` endpoint accepts these instead of full frames, which can be faster.

---

## D

### Deepfake

Video or images created or altered by AI to look like a real person. Moveris helps detect when someone is trying to pass off a deepfake as a live person.

### Dependency Injection

A pattern where functions receive their dependencies (e.g., database, auth) as arguments instead of creating them internally. Makes code testable and modular.

*In practice:* FastAPI's `Depends()` injects auth or database connections into route handlers.

### Developer Portal

The Moveris web application where you manage your account: create API keys, view usage and credits, access documentation, and handle billing. Available at [developers.moveris.com](https://developers.moveris.com/).

*In practice:* Sign up, create an API key, and copy it to use in your integration.

---

## E

### .env

A file (often named `.env`) that stores environment variables—API keys, database URLs, and other secrets. Never commit it to version control.

*In practice:* Keep `.env` in `.gitignore` and load it with a tool like `python-dotenv` or your framework's config.

### Endpoint

A specific URL path that accepts requests and returns responses. Each endpoint does a specific job (e.g. health check, liveness check).

*In practice:* `/api/v1/fast-check` is the endpoint for standard liveness verification.

### Error Boundaries

In React, components that catch JavaScript errors in their child tree and display a fallback UI instead of crashing. Useful for isolating failures (e.g., in a liveness check component) from the rest of the app.

*In practice:* Wrap your verification flow in an error boundary so a single failed check doesn't break the entire page.

### Express.js

A popular Node.js framework for building web servers and APIs. Used to create backend endpoints that receive requests and respond with data.
*In practice:* The Node.js examples use Express.js to create a `/api/liveness` endpoint that accepts frames from your frontend and forwards them to Moveris with your API key.

---

## F

### FastAPI

A modern Python framework for building APIs. Uses type hints, Pydantic for validation, and async support.

*In practice:* The Moveris Liveness API v2 is built with FastAPI.

### Float

A number with a decimal part (e.g., 0.5, 3.14, -1.0). Used in API responses like `confidence` and `real_score` where fractional precision matters.

### Frame

A single image from a video. The API analyzes multiple frames (count varies by model: e.g. 10 for mixed-10-v2, 30 for mixed-30-v2) to determine if a face is live or fake.

### FPS (Frames Per Second)

How many images (frames) are captured from video every second. For Moveris, we recommend ~10 FPS—about 10 images per second.

*In practice:* If you capture 10 frames at 10 FPS, you have roughly 1 second of video. Frame count must meet your model's minimum `min_frames` (recommended: send exactly that many).

### Frame Object

The structure you send for each frame: `index`, `timestamp_ms`, and `pixels` (Base64-encoded image).

### Facemesh Comparison

Comparison of 3D face landmark data (facemesh) between the current verification and historical sessions. Used for returning users: Moveris can confirm the person authorizing the action matches the person who enrolled.

*In practice:* When `user_identifier` is provided in `verify_human_presence`, the attestation may include `facemesh_comparison` with `match_found` and `similarity_score`. A score above 0.80 indicates the same person.

### Frontend

The part of your app that runs in the user's browser or on their device—the UI, forms, and client-side logic. Your frontend captures frames and sends them to your backend, which adds the API key and calls Moveris.

*In practice:* Never put your API key in frontend code. The frontend sends frames to your backend; the backend calls Moveris.

---

## G

### .gitignore

A file that tells Git which files or folders to exclude from version control. Use it to keep secrets (e.g., `.env`) and build artifacts out of the repository.

*In practice:* Add `.env` and `node_modules/` to `.gitignore` so they are never committed.

---

## H

### HTTP

**Hypertext Transfer Protocol.** The protocol used to send and receive data on the web. APIs typically use HTTP methods like GET (retrieve) and POST (send). Moveris is accessed over HTTPS (secure HTTP).

*In practice:* When you call `fetch()` or send a request to the API, you are using HTTP under the hood.

### Header

Extra information sent with an HTTP request. Headers are separate from the main body. Common ones: `Content-Type`, `X-API-Key`.

*In practice:* You put your API key in the `X-API-Key` header so the API knows who is making the request.

### HttpStream

An MCP transport that uses HTTP for communication. The MCP server runs as a standalone service; the host (e.g., Cowork) connects to it over the network. Use httpStream for production or remote hosts.

*In practice:* Deploy the MCP server, expose it at a URL, and point the host to that URL. Contrast with stdio, where the host starts the server as a subprocess.

### Hooks

Functions that React (or similar frameworks) call at specific moments—for example, when a component loads or when data changes. The Moveris SDK provides hooks like `useLivenessCheck` so you can trigger verification from your components without writing low-level code.

*In practice:* Instead of managing camera and API calls manually, you use a hook and get back the result and loading state.

---

## I

### Integration Test

Tests that run your code against real or cloned services (e.g., database, API). Slower than unit tests but catch integration issues.

*In practice:* Run integration tests in CI with a real PostgreSQL clone to verify API endpoints work end-to-end.
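Putting the Frame Object and FPS entries above together: at the recommended ~10 FPS, consecutive frames are about 100 ms apart. A sketch of assembling the `frames` array (the evenly spaced `timestamp_ms` values are an illustration assumption; in real capture you would use the camera's actual timestamps):

```python
import base64

def build_frames(images, fps=10):
    """images: list of raw image bytes, in capture order.
    Returns frame objects with `index`, `timestamp_ms`, and
    Base64-encoded `pixels`, as described in the Frame Object entry."""
    interval_ms = 1000 // fps
    return [
        {
            "index": i,
            "timestamp_ms": i * interval_ms,
            "pixels": base64.b64encode(img).decode("ascii"),
        }
        for i, img in enumerate(images)
    ]
```

The resulting list is what goes in the request body's `frames` field; send exactly your model's `min_frames` count.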
### Integer

A whole number with no decimal part (e.g., 0, 1, -5, 42). Used in programming and API fields like `index` or `score` when fractional values are not needed.

### Integration

The process of connecting Moveris (or any service) into your app or workflow. An integration might involve adding the SDK, calling the API, and wiring up the results to your UI.

*In practice:* "Integrating Moveris" means adding liveness verification to your sign-up, KYC, or checkout flow.

---

## J

### JavaScript

A programming language that runs in web browsers and on servers (via Node.js). Used to build interactive websites and, with React and React Native, mobile and web apps.

*In practice:* Moveris provides JavaScript and TypeScript examples for integrating the API.

### JPEG

**Joint Photographic Experts Group.** A common image format that uses lossy compression. Smaller file size than PNG but may lose some detail. PNG is recommended for Moveris frames for better liveness accuracy.

### JSON

**JavaScript Object Notation.** A standard text format for exchanging data between systems. Uses key-value pairs and lists, easy for both humans and machines to read. The Moveris API sends and receives data in JSON.

*In practice:* Your request body is JSON (e.g. `frames`, `session_id`). Successful API responses wrap the payload in the standard envelope: `{"data": { "verdict": "live", ... }, "success": true, "message": "OK"}` — read `data` for the result. See [Errors](api-reference/errors.md) for error responses.

### JWT

**JSON Web Token.** A standard format for signed data that can be verified without calling the API. The Moveris MCP server returns attestations as JWT (ES256) so relying parties can validate them locally.

*In practice:* After the user passes liveness, the agent receives a JWT attestation. The JWT includes an `exp` claim (TTL) so you can check if it is still valid.

---

## K

### KYC

Know Your Customer. Regulatory process to verify a customer's identity. Moveris liveness detection is often used as part of KYC flows to prove the person is physically present.

---

## L

### Latency

The time between sending a request and receiving the response. In APIs, lower latency means a faster response. Shown in the `processing_ms` field (milliseconds).

*In practice:* If latency is too high, users may abandon the verification flow. The fast model typically returns results in under 1 second.

### Live (Verdict)

The API decided the face is from a real person in front of the camera.

### Liveness / Liveness Detection

Checking whether a face belongs to a real, living person in front of the camera—and not a photo, screen, mask, or deepfake.

*In practice:* Moveris uses biological signals (e.g. micro-movements) that are hard to fake.

---

## M

### MediaPipe

Google's open-source framework for ML models (e.g., face detection).

### Monorepo

A single repository that contains multiple related projects or packages. The Moveris SDK is organized as a monorepo with shared core logic (`@moveris/shared`), React package (`@moveris/react`), and React Native package (`@moveris/react-native`).

### ML Kit

Google's mobile SDK for on-device machine learning. Provides face detection, barcode scanning, and other capabilities. The Moveris SDK can use ML Kit for face detection on React Native before sending crops to the API.

*In practice:* ML Kit runs on the user's device, so faces can be detected and cropped without sending full frames to a server first.

### MCP (Model Context Protocol)

A standard for connecting AI agents to external tools and services. The Moveris MCP server allows AI agents (e.g., Claude, Cowork) to trigger liveness verification when performing high-stakes actions on behalf of users. The agent sends the user a verification link; after the user completes the check, the agent receives a signed attestation.

*In practice:* Use MCP when your AI agent needs to verify that a live human authorized an action (e.g., wire transfer, contract signing).
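The JSON entry above shows the standard response envelope. A small helper to unwrap it (a sketch; adapt the error handling to your application):

```python
def extract_result(response_json: dict) -> dict:
    """Unwrap the `{"data": ..., "success": ..., "message": ...}` envelope.
    Raises if the call was not successful; otherwise returns `data`."""
    if not response_json.get("success"):
        raise RuntimeError(response_json.get("message", "request failed"))
    return response_json["data"]
```

With a successful response, `extract_result(resp)["verdict"]` gives `"live"` or `"fake"`.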
### MCP Host

The application that connects to and runs the MCP server—e.g., Cursor, Claude Desktop, Cowork. The host spawns the server (stdio) or connects to it over HTTP (httpStream), then uses the exposed tools.

*In practice:* When you "configure the host," you tell Cursor or Cowork where to find the Moveris MCP server so it can call liveness tools.

---

## N

### Node.js

A JavaScript runtime that runs outside the browser, on servers or your computer. Lets you use JavaScript for backend code, APIs, and scripts. The Moveris examples use Node.js to process frames and call the API.

*In practice:* Use Node.js when you need server-side liveness checks (e.g., processing uploaded videos) or a backend proxy that adds the API key.

### npm

The default package manager for Node.js. Used to install libraries and tools (e.g. `npm install express sharp`). Most Node.js projects use npm to manage dependencies.

*In practice:* Run `npm install` in your project to install the packages listed in `package.json`.

---

## O

### OAuth

**Open Authorization.** A standard protocol for authorization that lets applications obtain limited access to user accounts (e.g., "Log in with Google") without sharing passwords. The user authorizes via a redirect flow.

*In practice:* The Moveris MCP verification flow is similar to OAuth for agents: the agent gets a session URL, the user completes verification in their browser, and the agent receives a signed attestation without handling credentials.

---

## P

### PNG

**Portable Network Graphics.** A common image format that supports lossless compression. Recommended for frame images sent to the API because it preserves quality better than JPEG.

*In practice:* Use PNG encoding when capturing frames—better accuracy for liveness detection.

### PPG (Photoplethysmography)

A technique that measures blood volume changes—typically heart rate—from skin color variations in video. Real faces show subtle pulse; photos and screens do not.
Moveris uses PPG signals as a biological indicator for liveness detection.

*In practice:* PPG helps detect deepfakes and screen replays because AI-generated faces lack the natural pulse visible in real skin.

### Postman

A popular tool for testing and exploring APIs. Lets you send HTTP requests (GET, POST, etc.) manually without writing code. Useful for trying out Moveris endpoints before integrating them into your app.

### Pydantic

A Python library for data validation using type hints. Validates request bodies, config, and responses before they reach your code.

*In practice:* Define `CreateItemRequest(BaseModel)` and FastAPI automatically validates incoming JSON against it.

### pnpm

A fast, disk-efficient package manager for Node.js. Alternative to npm and yarn. Use `pnpm add` instead of `npm install` to add packages.

### Pixels

In the API request, this is the Base64-encoded image data for a frame. The field is named `pixels` in each frame object.

### Polling

Repeatedly asking the server for status (e.g., "is the verification done?") at fixed intervals until you get a final result. Contrast with webhooks, where the server pushes results to your URL.

*In practice:* If you don't use a webhook, poll `check_verification_status` every 1–2 seconds until the session is completed, failed, expired, or cancelled.

### Proxy

A server that forwards requests on your behalf. A backend proxy receives requests from your frontend, adds the API key, and forwards them to Moveris—keeping the key secure.

*In practice:* Your frontend calls `https://yourserver.com/api/liveness`; your backend adds the API key and forwards to `https://api.moveris.com/api/v1/fast-check`.

### Processing Time

How long the server takes to analyze your frames and return a result. Shown in the `processing_ms` field (milliseconds).

### Project Root

The top-level folder of your project—where `package.json`, `pyproject.toml`, or the main config file lives. Files like `CLAUDE.md` belong here.
*In practice:* Place `CLAUDE.md` next to your `README.md` or `package.json`.

### Production

The live environment where real users access your app—as opposed to development or staging. Production deployments should use secure configuration, rate limiting, and proper error handling.

*In practice:* Before going to production, ensure your API key is stored securely (e.g., in environment variables) and that requests are proxied through your backend.

### Prompt

The text or instruction you send to an AI model. In AI APIs, the prompt defines what the model should do or answer. Moveris MCP uses prompts to instruct AI agents when to trigger liveness verification.

*In practice:* When building an AI agent, you design prompts that tell it to verify human presence before high-stakes actions.

### Props

Short for "properties." In React and similar frameworks, props are values you pass into a component (like settings or data) so it can display or behave correctly.

### Python

A popular programming language for servers, scripts, and data processing. The Moveris Python examples show how to send frames from Python to the API (e.g., from uploaded videos or OpenCV).

*In practice:* Use Python when you need to process video files server-side or run liveness checks from scripts.

---

## R

### React

A JavaScript library for building user interfaces. Uses components and hooks to create interactive UIs. Moveris provides React components and hooks for liveness verification.

*In practice:* Import Moveris React components into your app to add a liveness check flow with minimal code.

### React Component

A reusable piece of UI built with React. Components receive props and can contain state. The Moveris SDK provides React components like `LivenessCheck` for the verification flow.

*In practice:* Drop a Moveris React component into your app and pass it props (e.g., `onComplete`) to customize behavior.
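The Backend and Proxy entries above describe the same pattern: the frontend never sees the key. A framework-agnostic Python sketch (the `MOVERIS_API_KEY` environment variable name and the injectable `post` parameter are illustration choices, not part of the Moveris API):

```python
import os

MOVERIS_URL = "https://api.moveris.com/api/v1/fast-check"

def proxy_liveness(body: dict, post=None) -> dict:
    """Receive a frames payload from the frontend, attach the secret key
    server-side, forward it to Moveris, and return the JSON response.
    Wire this into your web framework's route handler (Flask, FastAPI, ...).
    `post` is injectable (dependency injection) so tests can stub the network call.
    """
    if post is None:
        import requests  # third-party HTTP client
        post = requests.post
    resp = post(
        MOVERIS_URL,
        json=body,
        headers={
            "Content-Type": "application/json",
            "X-API-Key": os.environ.get("MOVERIS_API_KEY", ""),
        },
        timeout=15,
    )
    return resp.json()
```

Because only this server-side function reads `MOVERIS_API_KEY`, the key never ships in client code, matching the advice in the Backend and Frontend entries.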
### React Native

A framework for building mobile apps (iOS and Android) using JavaScript or TypeScript and React. You write one codebase and deploy to both platforms. Moveris offers a React Native SDK.

*In practice:* Use the Moveris React Native SDK to add liveness verification to your iOS or Android app.

### Rate Limit

A limit on how many requests you can make in a given period. Exceeding it returns a `rate_limit_exceeded` error.

### Risk Level

In MCP verification, the level of assurance required: `standard` (mixed-10 model, 10 frames) or `high` (mixed-30 model, 30 frames). Higher risk levels use more frames for stronger verification.

*In practice:* Use `standard` for routine approvals; use `high` for wire transfers, contract signing, and regulatory actions.

### Real Score

A value from 0.0 to 1.0 that represents how likely the face is to be real. Higher values mean more likely live; lower values mean more likely fake. **Use this field for decision-making** (the `confidence` field is reserved for future use and is functionally identical to `real_score`).

*In practice:* The API converts this to a 0–100 `score` for display. A score of 65 or above (0.65 for `real_score`) is considered live.

### Request

A message you send to the API (e.g. a POST request with frames and your API key).

### Response

The message the API sends back (e.g. verdict, score, session_id).

### Rendering

The process of turning your code (e.g., React components) into what the user sees on screen. When data changes, the framework re-renders to update the UI.

*In practice:* After a liveness check completes, your component re-renders to show the result.

### REST

**Representational State Transfer.** A style of API design that uses standard HTTP methods (GET, POST, PUT, DELETE) and URLs. Moveris is a REST API. When you see "REST" in docs, it refers to this architectural style.

### REST API

An API designed in the REST style (see **REST** above). Moveris is a REST API.
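The Real Score and Score entries above describe a fixed mapping, and mirroring it in code makes the threshold explicit. The API computes this server-side; this helper is only for reasoning locally about results:

```python
def to_display(real_score: float):
    """Map the 0.0-1.0 `real_score` to the 0-100 display `score`
    and the corresponding verdict (65 and above is considered live)."""
    score = round(real_score * 100)
    verdict = "live" if score >= 65 else "fake"
    return score, verdict
```

For example, a `real_score` of 0.65 sits exactly on the documented threshold and maps to a display score of 65 with a `live` verdict.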
---

## S

### SDK

Software Development Kit. A package of pre-built code (components, functions, types) that makes it easier to integrate a service into your app. Moveris offers SDKs for React and React Native with ready-to-use camera and verification UI.

*In practice:* Use the Moveris SDK instead of calling the API directly—it handles camera access, frame capture, and API calls for you.

### Score

A number from 0 to 100 representing how likely the face is to be real. Used for display. 65–100 = live, 0–64 = fake.

### Scope (API Key Scope)

A permission that restricts what an API key can do. Moveris supports scopes such as `detection:write`, `detection:read`, `session:write`, `session:read`, `session:audit`, `keys:read`, `keys:write`, `usage:read`, `credits:read`, `webhooks:read`, `webhooks:write`, `hosts:read`, and `hosts:write`. Keys with no scopes have full access (backward compatible). Keys with scopes are limited; calls requiring a missing scope return 403 with `insufficient_scope`.

*In practice:* Create scoped keys in the Developer Portal for least-privilege access. Use `detection:write` only for liveness checks, or `detection:read` only for result retrieval.

### Server

A computer that runs your backend code and responds to requests from clients. Your server receives frames from the frontend, adds the API key, and forwards requests to Moveris.

*In practice:* Never put your API key in client-side code. Your server should add it before calling Moveris.

### Session Hijacking

When an attacker steals or reuses a valid session token to impersonate a user. If someone gets your session token, they can act as you until it expires. Moveris MCP mitigates this by re-verifying human presence before high-stakes actions.

*In practice:* Instead of trusting a stale session, the agent calls Moveris to confirm a live human is authorizing the action (e.g., wire transfer).

### Session ID

A unique identifier you create for each verification attempt.
It helps you track a specific check and appears in the API response.

*In practice:* Use a UUID like `550e8400-e29b-41d4-a716-446655440000` for each new session.

### Spoofing / Spoofed

Attempting to trick the system—for example, showing a photo or video of a face instead of being in front of the camera. The API tries to detect and reject these.

### Step-up Authentication

Additional verification required for high-stakes actions. Instead of trusting a session token alone, you re-verify the user (e.g., with liveness) before allowing the action.

*In practice:* Moveris MCP implements step-up auth: when an AI agent is about to run a consequential action, it calls Moveris to verify a live human authorized it.

### Source

In the request, indicates where the frames came from: `"live"` (real-time camera) or `"media"` (file/recording). Default is `"media"`.

### stdio (Standard Input/Output)

An MCP transport where the host (e.g., Cursor, Claude Desktop) starts the MCP server as a subprocess and communicates via stdin/stdout. Ideal for local development and desktop AI tools.

*In practice:* Set `MCP_TRANSPORT=stdio` and configure Cursor to launch the Moveris MCP server. Contrast with httpStream for remote/production use.

### String

A sequence of text characters (letters, numbers, symbols). In programming and the API, strings are enclosed in quotes—e.g., `"live"`, `"fake"`, or Base64-encoded image data.

---

## T

### TTL

**Time To Live.** How long something stays valid before it expires. For sessions, TTL is how long the verification URL works (e.g., 5 minutes). For attestations, TTL is how long the signed proof can be trusted.

*In practice:* Moveris session TTL defaults to 300 seconds. Completed attestations include an `expires_at` claim in the JWT so you can enforce validity independently.

### TypeScript

A superset of JavaScript that adds static types. Helps catch errors earlier and improves editor support. Many React and React Native projects use TypeScript.
*In practice:* The Moveris SDK ships with TypeScript definitions so your editor can suggest types and detect mistakes.

---

## U

### Unit Test

Tests that run a single function or module in isolation—no database, no network. Fast and focused on logic.

*In practice:* Unit tests for a `calculate_score()` function mock inputs and assert the output; they don't call the API.

### UI (User Interface)

The part of an app that users see and interact with—screens, buttons, forms, etc. The Moveris SDK provides UI components so you can show the verification flow without building it from scratch.

*In practice:* A "verification UI" is the screen where the user sees themselves and follows instructions to complete liveness detection.

### UX (User Experience)

The overall experience a user has when interacting with your app—how easy, clear, and pleasant it is. Good UX in verification means minimal friction, clear instructions, and fast feedback.

*In practice:* Time-windowed attestation balances security and UX by allowing multiple actions within a short window without re-verifying each time.

### UUID

Universally Unique Identifier. A standard format for IDs that look like: `550e8400-e29b-41d4-a716-446655440000`. Uniquely identifies a session or object.

---

## V

### Verdict

The API's final decision: `"live"` (real person) or `"fake"` (spoofed/not real).

### Verification Session

In MCP, a session created by `verify_human_presence`. It has a URL the user opens to complete liveness, and moves through states: pending → in progress → completed (or failed/expired/cancelled).

*In practice:* The agent creates a session, surfaces the URL to the user, and polls `check_verification_status` until completed. The session ID links all related API calls.

---

## W

### Webhook

A URL your server provides. When an event occurs (e.g., verification complete), the API sends an HTTP POST to that URL instead of you having to poll for status.
*In practice:* Register a webhook URL to receive verification results as they happen. Contrast with polling, where you repeatedly ask for status. --- ## 2 ### 2FA (Two-Factor Authentication) A security method that requires two forms of verification—for example, a password plus a code from your phone. Moveris liveness can serve as a second factor by proving you are physically present. --- ## X ### X-API-Key The HTTP header where you send your API key. The name is literal—it’s sent as `X-API-Key: your-api-key` in each request. --- ## Y ### Yarn A fast Node.js package manager. Alternative to npm and pnpm. Use `yarn add` to add packages to your project. ======================================================================== Source: getting-started/for-decision-makers.md URL: https://documentation.moveris.com/getting-started/for-decision-makers/ ======================================================================== # For Decision-Makers A high-level overview for product managers, QA leads, and stakeholders. Understand what Moveris does, why it matters, and how it fits into your product. ## What Moveris Does (Plain Terms) Moveris answers one question: **Is there a real, living person in front of the camera right now?** Unlike passwords or 2FA, which can be stolen or bypassed, Moveris uses biological signals that are hard to fake—subtle movements, micro-expressions, and physiological reactions that occur involuntarily. Deepfakes and AI-generated faces can look convincing, but they cannot replicate these live biological patterns. !!! info "In practice" Your app captures video from the user's camera, sends frames (e.g. 10 for the fast model, 30 for the balanced model) to Moveris, and receives a verdict: **live** (real person) or **fake** (spoofing attempt), with a confidence score. 
## When to Use Liveness Detection | Use Case | Why Moveris Helps | |----------|-------------------| | **Account opening / KYC** | Prove the applicant is physically present—not a stolen identity or deepfake | | **High-value transactions** | Step-up authentication before wire transfers, contract signing, or admin actions | | **Account recovery** | Verify the person requesting access is the real account owner | | **Bot prevention** | Replace CAPTCHAs with frictionless liveness checks | | **AI agent authorization** | When AI agents perform actions on behalf of users, verify a human authorized it (see [MCP Integration](#ai-agents-and-mcp)) | ## Integration Paths (Non-Technical) | Path | Best For | Effort | Time to Integrate | |------|----------|--------|-------------------| | **SDK (React / React Native)** | Web and mobile apps | Low | 5–30 minutes | | **Direct API** | Custom stacks, backend-only flows | Medium | 1–2 hours | | **AI Agents (MCP)** | Cowork, Claude, and MCP-compatible hosts | Low (if already using MCP) | Configure MCP server | ## Developer Portal API keys, usage tracking, billing, and analytics are managed in the **Moveris Developer Portal**: [Open Developer Portal :material-open-in-new:](https://developers.moveris.com/){ .md-button .md-button--primary .developer-portal-btn-dm target="_blank" } The portal lets you: - Create and manage API keys - View usage and credits - Access interactive API docs - Monitor billing ## AI Agents and MCP Moveris offers an **MCP (Model Context Protocol) server** for AI agents. When an AI agent is about to perform a high-stakes action—e.g., signing a contract or initiating a transfer—it can call Moveris to verify that a live human authorized that action. This is step-up authentication for the agent era: the agent surfaces a verification link to the user, the user completes a short liveness check in the browser, and the agent receives a signed attestation before proceeding. !!! 
tip "For technical details" See the [API Reference](../api-reference/fast-check.md) and [SDK Overview](../sdk/overview.md) for implementation details. The MCP server documentation is available separately for AI agent integrations. ## Key Metrics at a Glance | Metric | Typical Value | |--------|---------------| | Verification time | < 1 second (model-dependent: 10–30 frames typical) | | Frames required | 10 | | Confidence score | 0–100 (65+ = live) | | Processing time | ~245 ms server-side | ## Next Steps - **[Quick Start](quick-start.md)** – For developers ready to integrate - **[How It Works](how-it-works.md)** – Science and technology overview - **[API Key](api-key.md)** – Obtain credentials from the Developer Portal ======================================================================== Source: getting-started/how-it-works.md URL: https://documentation.moveris.com/getting-started/how-it-works/ ======================================================================== # How It Works How Moveris detects real humans vs. spoofs—in plain terms and technical detail. !!! info "In plain terms" Moveris checks whether the person in front of the camera is real by measuring involuntary biological signals (micro-movements, physiological reactions) that AI and deepfakes cannot reliably replicate. No user actions required—just look at the camera for about 1 second. While deepfakes and AI-generated content have gotten remarkably good at mimicking human appearance, they can't fake biology. Our API analyzes involuntary physiological signals that occur naturally when real humans interact with cameras—subtle reactions that happen below conscious awareness and can't be replicated by even the most sophisticated generative models. Unlike traditional liveness detection that relies on challenge-response actions or AI model training, we've taken a fundamentally different approach rooted in psychophysiology. We measure what the body does automatically, not what it's told to do. 
## The Science (Simplified) When you're alive and looking at a camera, your body is constantly generating signals—micro-expressions, subtle movements, physiological reactions that cascade through your system. These aren't things you can control or fake; they're the signature of a living, breathing human nervous system. But it goes deeper than isolated signals. Real humans exhibit cognitive coherence—the natural, split-second coordination between what you're thinking, what you're seeing, and how your body responds. When you react to a question, your pupil dilation, facial muscle timing, and micro-movements all sync in patterns that reflect actual neural processing. Deepfakes can replicate individual elements, but they struggle to maintain this multi-layered coherence across time. The signals don't just need to exist; they need to make sense together. Our technology reads these signals through standard webcams, requiring no special hardware. Within seconds, we can determine whether we're looking at a real person or a sophisticated fake. ## Why This Gets More Effective, Not Less Here's the counterintuitive part: as deepfakes improve, our approach becomes more valuable. Traditional detection methods look for flaws—artifacts, inconsistencies, or statistical fingerprints left by AI generators. As generative models evolve, they learn to eliminate these tells. It's an arms race that favors the attacker. We're not in that race. We're not looking for what's wrong with the fake—we're confirming what's present in the real. AI can learn to add realistic-looking blinks or micro-movements, but it can't generate authentic biological coherence. There's no actual nervous system processing stimuli, no real pupils responding to light changes, no genuine cognitive load creating coordinated physiological patterns. These require actual neural tissue, actual consciousness, actual life. 
As deepfakes reach visual perfection, the only reliable differentiator becomes: Is there a real biological system behind this? When pixels become indistinguishable, measuring the presence of life becomes the only moat that matters. That's what Moveris does. And that's why we get stronger as the synthetic world gets more convincing. ## Getting the Best Results ### Embed in Natural User Flows Our API works best when users are naturally engaged with their screen. Rather than treating liveness verification as a separate step, integrate it into existing user activities: #### Sign-in Flows During authentication while users wait #### Content Viewing While users watch content or read instructions #### Onboarding When users are focused on completing setup #### Video Calls In the background of video calls or identity verification !!! tip "Natural engagement" When users are actively doing something, their natural biological signals are strongest and most consistent. ## Camera Positioning Matters - :material-camera: Face the camera directly rather than at an angle - :material-lightbulb: Ensure adequate lighting for better visibility - :material-meditation: Stay relatively still during the brief capture window - :material-ruler: Position at a natural distance (head and shoulders framing is ideal) ## Integration Tips #### Capture Time Capture frames at ~10 FPS (count matches model: e.g. 10 for mixed-10-v2, 30 for mixed-30-v2) #### User Feedback Provide clear UI feedback but keep it low friction #### Retry Logic Users might need 3–4 tries in poor conditions !!! info "Low friction verification" The more natural and unobtrusive the verification feels, the better the biological signals we can measure. 
======================================================================== Source: getting-started/api-key.md URL: https://documentation.moveris.com/getting-started/api-key/ ======================================================================== # Obtaining Your API Key {: .api-key-page } Get your API credentials to start integrating Moveris Liveness Detection. !!! info "In plain terms" An API key is like a password that identifies your app when it talks to Moveris. You get it from the Developer Portal, and you must include it in every request. Keep it secret—never expose it in client-side code or version control. ## Getting Started 1. **Create an account** — Sign up at the Moveris Developer Portal 2. **Navigate to API Keys** — Go to your dashboard and find the API Keys section 3. **Generate a new key** — Click "Create API Key" and give it a descriptive name 4. **Copy and secure your key** — Store your API key securely (you won't be able to see it again) [Go to Developer Portal :material-open-in-new:](https://developers.moveris.com/){ .md-button .md-button--primary .developer-portal-btn target="_blank" } !!! warning "Keep your API key secure" - Never commit API keys to version control - Use environment variables on your server - Never expose keys in client-side JavaScript - Rotate keys periodically for security ======================================================================== Source: getting-started/endpoints.md URL: https://documentation.moveris.com/getting-started/endpoints/ ======================================================================== # API Endpoints Overview of all available Moveris API (v2) endpoints. !!! info "In plain terms" Moveris offers real-time and async paths. For frame-based integrations, use **fast-check**, **fast-check-stream**, or **fast-check-crops**. For pre-recorded full videos, use **video/detect** (submit once, then poll). Frame count depends on the model (for example, 10 for `mixed-10-v2`, 30 for `mixed-30-v2`). 
## Base URL ``` https://api.moveris.com ``` ## Health Check | Method | Endpoint | Description | |--------|----------|-------------| | `GET` | `/health` | Check service status and loaded models | [View Health Check documentation :material-arrow-right:](../api-reference/health-check.md) ## Fast Check | Method | Endpoint | Description | |--------|----------|-------------| | `POST` | `/api/v1/fast-check` | Server-side face detection (frame count matches model) | | `POST` | `/api/v1/fast-check-stream` | Send frames one-by-one; verdict when all frames received | | `POST` | `/api/v1/fast-check-crops` | Pre-cropped face verification (faster) :material-lightning-bolt:{ .badge-faster } | ## Video Detect (Async) | Method | Endpoint | Description | |--------|----------|-------------| | `POST` | `/api/v1/{tenant_slug}/video/detect` | Submit full video (file or URL) for async processing | | `GET` | `/api/v1/{tenant_slug}/video/detect/{submission_id}` | Poll status and fetch final verification result | !!! tip "When to use video/detect" Use this flow when you already have a full recorded video and want async processing. You submit once and poll by `submission_id` until completion. [View Video Detect documentation :material-arrow-right:](../api-reference/video-detect.md) ### Endpoint Comparison | Endpoint | Use Case | Face Detection | Latency | |----------|----------|----------------|---------| | `/fast-check` | Standard integration | Server-side | ~245ms | | `/fast-check-stream` | Real-time streaming | Server-side | ~245ms | | `/fast-check-crops` | Maximum performance | Client-side | Fastest | !!! tip "Choose the right endpoint" - Use **fast-check** for simple integrations where you send all frames at once (count depends on model) - Use **fast-check-stream** when capturing frames in real-time from a camera - Use **fast-check-crops** for the fastest processing when you can do face detection client-side !!! 
info "Recommended models"
    Use **`mixed-30-v2`** (Balanced) for most integrations. See [Models overview](../models/overview.md) for frame counts and performance characteristics.

========================================================================
Source: getting-started/authentication.md
URL: https://documentation.moveris.com/getting-started/authentication/
========================================================================

# Authentication

All API requests require authentication using your API key.

!!! info "In plain terms"
    Every request to the API must include your API key in the `X-API-Key` header. Without it, the API returns 401 Unauthorized. Never put the API key in client-side JavaScript—use a backend proxy that adds the key before calling Moveris.

## API Key Authentication

Include your API key in the request header using the `X-API-Key` header.

```bash
curl -X POST "https://api.moveris.com/api/v1/fast-check" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{ ... }'
```

## Example with JavaScript

```javascript
const response = await fetch('https://api.moveris.com/api/v1/fast-check', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'your-api-key'
  },
  body: JSON.stringify({
    session_id: crypto.randomUUID(),
    source: 'live',
    frames: frames
  })
});
```

Successful responses use the standard [HTTP envelope](../api-reference/errors.md): read `data` for `verdict`, scores, and metadata. Errors use `success: false` and an `errors` array.

## Example with Python

```python
import uuid

import requests

response = requests.post(
    'https://api.moveris.com/api/v1/fast-check',
    headers={
        'Content-Type': 'application/json',
        'X-API-Key': 'your-api-key'
    },
    json={
        'session_id': str(uuid.uuid4()),
        'source': 'live',
        'frames': frames
    }
)
```

!!! warning "Keep your API key secure"
    Never expose your API key in client-side code. Always make API calls from your server.
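Since the key must stay server-side, here is a minimal sketch of the outbound request a backend proxy would build, using only the Python standard library (the `build_moveris_request` function name and the `MOVERIS_API_KEY` variable are illustrative, not part of the API):

```python
import json
import os
import urllib.request


def build_moveris_request(payload: dict) -> urllib.request.Request:
    """Build the server-side request to Moveris, attaching the secret key.

    The browser posts frames to *your* backend; only the backend adds
    X-API-Key, so the key never ships in client-side JavaScript.
    """
    api_key = os.environ["MOVERIS_API_KEY"]  # set on the server, never in the client
    return urllib.request.Request(
        "https://api.moveris.com/api/v1/fast-check",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )
```

Your client then calls the proxy route instead of `api.moveris.com` directly.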
## API Key Scopes

API keys can be restricted by scope for least-privilege access. When creating a key in the Developer Portal, you can assign scopes such as `detection:write`. A request made with a key that lacks the required scope fails with a `403 insufficient_scope` error.
## Model Comparison

| | Fast | Balanced (Recommended) | Thorough | Extended | Maximum |
|---|---|---|---|---|---|
| Model ID | `mixed-10-v2` | `mixed-30-v2` | `mixed-60-v2` | `mixed-90-v2` | `mixed-120-v2` |
| Frames | 10 | 30 | 60 | 90 | 120 |
| Capture time | ~0.3 s | ~1 s | ~2 s | ~3 s | ~4 s |
| Response time | ~1 s | ~3 s | ~5 s | ~7 s | ~10 s |
| EER | 4.4% | 4.0% | 4.7% | 5.7% | 4.6% |
| AUC | 0.988 | 0.991 | 0.991 | 0.989 | 0.991 |
| Balanced accuracy | 95.2% | 95.7% | 95.2% | 94.3% | 94.7% |
| Best for | Low-friction, high-volume flows | Standard KYC & identity verification | High-security onboarding | Escalation, compliance-heavy flows | Highest scrutiny, regulatory edge cases |
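As a sketch, the trade-offs in the comparison table can drive model selection programmatically (frame counts and capture times are copied from the table; the `pick_model` helper and its fallback policy are illustrative, not part of the API):

```python
# Frame counts and approximate capture times from the model comparison table.
MODELS = {
    "mixed-10-v2":  {"frames": 10,  "capture_s": 0.3},
    "mixed-30-v2":  {"frames": 30,  "capture_s": 1.0},
    "mixed-60-v2":  {"frames": 60,  "capture_s": 2.0},
    "mixed-90-v2":  {"frames": 90,  "capture_s": 3.0},
    "mixed-120-v2": {"frames": 120, "capture_s": 4.0},
}


def pick_model(max_capture_s: float) -> str:
    """Pick the most thorough model whose capture time fits the UX budget."""
    fitting = [(m, v) for m, v in MODELS.items() if v["capture_s"] <= max_capture_s]
    if not fitting:
        return "mixed-10-v2"  # fall back to the fastest model
    return max(fitting, key=lambda mv: mv[1]["frames"])[0]
```

For example, a one-second capture budget resolves to the Balanced model.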
### Fast (`mixed-10-v2`)

| Property | Value |
|---|---|
| Model ID | `mixed-10-v2` |
| Frames required | 10 |
| Capture time | ~0.3 seconds at 30 FPS |
| Avg response time | ~1s |
| Security level | Strong |
| EER | 4.4% |
| AUC | 0.988 |
| Balanced accuracy | 95.2% |
| Avg response (fast-check) | ~1s |
| Avg response (fast-check-crops) | ~1s |

### Balanced (`mixed-30-v2`)

| Property | Value |
|---|---|
| Model ID | `mixed-30-v2` |
| Frames required | 30 |
| Capture time | ~1 second at 30 FPS |
| Avg response time | ~3s |
| Security level | Strongest |
| EER | 4.0% |
| AUC | 0.991 |
| Balanced accuracy | 95.7% |
| Avg response (fast-check) | ~3s |

### Thorough (`mixed-60-v2`)

| Property | Value |
|---|---|
| Model ID | `mixed-60-v2` |
| Frames required | 60 |
| Capture time | ~2 seconds at 30 FPS |
| Avg response time | ~5s |
| Security level | Strong |
| EER | 4.7% |
| AUC | 0.991 |
| Balanced accuracy | 95.2% |
| Avg response (fast-check) | ~5s |

### Extended (`mixed-90-v2`)

| Property | Value |
|---|---|
| Model ID | `mixed-90-v2` |
| Frames required | 90 |
| Capture time | ~3 seconds at 30 FPS |
| Avg response time | ~7s |
| Security level | Strong |
| EER | 5.7% |
| AUC | 0.989 |
| Balanced accuracy | 94.3% |

### Maximum (`mixed-120-v2`)

| Property | Value |
|---|---|
| Model ID | `mixed-120-v2` |
| Frames required | 120 |
| Capture time | ~4 seconds at 30 FPS |
| Avg response time | ~10s |
| Security level | Maximum |
| EER | 4.6% |
| AUC | 0.991 |
| Balanced accuracy | 94.7% |
## Fast Check Stream (`POST /api/v1/fast-check-stream`)
## Basic Flow
```
1. Generate a session_id (UUID) for the verification session
2. Capture N video frames (N = model's frames_required, e.g. 10 for mixed-10-v2, 30 for mixed-30-v2)
3. Encode each frame as base64 PNG
4. Send each frame in a separate POST to /api/v1/fast-check-stream (same session_id, same model)
5. On the final frame, the API returns the verdict ("live" or "fake") with confidence score
```
!!! info "Stream vs single request"
Unlike `/api/v1/fast-check` (which sends all frames in one JSON body), this endpoint receives **one frame per POST**. Use the same `session_id` and `model` for all requests. The verdict is returned only in the response to the last frame.
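The per-frame sequence above can be sketched in Python (the exact per-frame body shape, here a `frame` object with `index` and `pixels`, is an assumption for illustration; consult the endpoint reference for the real schema):

```python
import uuid


def build_stream_payloads(frames: list, model: str = "mixed-10-v2") -> list:
    """Build one payload per POST to /api/v1/fast-check-stream.

    All payloads share the same session_id and model; the verdict arrives
    only in the response to the final frame.
    """
    session_id = str(uuid.uuid4())
    return [
        {
            "session_id": session_id,
            "source": "live",
            "model": model,
            # Assumed per-frame shape: index + base64 PNG pixels.
            "frame": {"index": i, "pixels": pixels},
        }
        for i, pixels in enumerate(frames)
    ]
```

Each payload is then sent as its own POST, in order, reusing one HTTP session where possible.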
## Fast Check (`POST /api/v1/fast-check`)
!!! info "Frame requirement"
The API requires at least the number of frames required by the selected model. Use 10 frames for `mixed-10-v2`, 30 for `mixed-30-v2`, 60 for `mixed-60-v2`, 90 for `mixed-90-v2`, or 120 for `mixed-120-v2`. If you send fewer, you will get `insufficient_frames`. For predictable latency, send exactly `frames_required`.
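A client-side guard can enforce this before sending, as in this sketch (the `trim_frames` helper is illustrative; frame counts come from the Models overview):

```python
FRAMES_REQUIRED = {
    "mixed-10-v2": 10,
    "mixed-30-v2": 30,
    "mixed-60-v2": 60,
    "mixed-90-v2": 90,
    "mixed-120-v2": 120,
}


def trim_frames(frames: list, model: str) -> list:
    """Fail early below the model minimum; trim extras for predictable latency."""
    required = FRAMES_REQUIRED[model]
    if len(frames) < required:
        raise ValueError(
            f"insufficient_frames: {model} needs {required}, got {len(frames)}"
        )
    return frames[:required]
```

Failing locally avoids spending a round-trip (and a request) on a guaranteed `insufficient_frames` error.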
## Fast Check Crops (`POST /api/v1/fast-check-crops`)
!!! info "Frame requirement"
The API requires at least the number of crops required by the selected model (e.g. 10 for `mixed-10-v2`, 30 for `mixed-30-v2`). Each crop must be 224×224 PNG. If you send fewer, you will get `insufficient_frames`. For predictable latency, send exactly `frames_required`. See [Models](../models/overview.md).
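Before resizing to 224×224, you need a square region around the detected face. A sketch of the crop-box arithmetic (the 20% margin is an illustrative default, and clamping to image bounds is omitted for brevity):

```python
def square_crop_box(x: int, y: int, w: int, h: int, margin: float = 0.2) -> tuple:
    """Compute a square crop box around a face bounding box.

    Returns (left, top, side) for a square centered on the face with some
    margin; resize the resulting crop to 224x224 before sending.
    """
    cx, cy = x + w / 2, y + h / 2          # face center
    side = int(max(w, h) * (1 + margin))   # square side with margin
    return int(cx - side / 2), int(cy - side / 2), side
```

A real implementation would also clamp the box to the image and pad when the face sits near an edge.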
========================================================================
Source: api-reference/errors.md
URL: https://documentation.moveris.com/api-reference/errors/
========================================================================

# Errors

## Error Codes

| HTTP | Error code | Message | Explanation |
|---|---|---|---|
| 400 | insufficient_frames | Not enough frames provided | You sent fewer frames than the model requires. Frame count must meet the model minimum (e.g. 10 for mixed-10-v2, 30 for mixed-30-v2). |
| 400 | missing_field | Required field missing | A required field (e.g. `session_id`, `frames`) is missing from the request. |
| 400 | validation_error | Request validation failed | The request format is invalid (e.g. wrong field types, invalid UUID). Returned as `400` with field-level detail in `errors` (not `422`). |
| 401 | invalid_key | Invalid or missing API key | Your API key is wrong, expired, or not sent. Check the `X-API-Key` header. |
| 402 | insufficient_credits | Not enough credits | Your account has run out of credits. Top up in the Developer Portal. |
| 402 | account_suspended | Account suspended | Your account is suspended due to a payment issue. Update your payment method in the Developer Portal. |
| 403 | insufficient_scope | API key lacks required scope | Your API key does not have permission for this operation. Create a key with the required scope in the Developer Portal. |
| 429 | rate_limit_exceeded | Too many requests | You've hit the request limit. Wait for `retry_after` seconds and try again. |
| 500 | internal_error | Server error | Something went wrong on our side. Retry later or check the Status Page. |
## Common Errors
### Insufficient Frames (400)
Returned when the number of submitted frames is less than the required minimum for the endpoint.
```json
{
"data": null,
"success": false,
"message": "Insufficient frames. Model \"10\" requires 10 frames, received 5.",
"errors": [
{ "insufficient_frames": ["Insufficient frames. Model \"10\" requires 10 frames, received 5."] }
]
}
```
### Invalid API Key (401)
Returned when the API key is invalid or missing.
```json
{
"data": null,
"success": false,
"message": "Not authenticated",
"errors": [
{ "invalid_key": ["Not authenticated"] }
]
}
```
### Insufficient Credits (402)
Returned when your account doesn't have enough credits to process the request.
```json
{
"data": null,
"success": false,
"message": "Insufficient credits. Required: 1, available: 0",
"errors": [
{ "insufficient_credits": ["Insufficient credits. Required: 1, available: 0"] }
]
}
```
### Account Suspended (402)
Returned when your account is suspended due to a payment issue (e.g. failed payment, overdue invoice).
```json
{
"data": null,
"success": false,
"message": "Your account is suspended due to a payment issue. Please update your payment method.",
"errors": [
{ "account_suspended": ["Your account is suspended due to a payment issue. Please update your payment method."] }
]
}
```
### Validation Error (400)
Returned when the request body fails validation. The API converts 422 validation errors to 400 with field-level detail.
```json
{
"data": null,
"success": false,
"message": "session_id is required",
"errors": [
{ "session_id": ["session_id is required"] }
]
}
```
### Insufficient Scope (403)
Returned when your API key does not have the required scope for the requested endpoint.
```json
{
"data": null,
"success": false,
"message": "API key lacks required scope: detection:write",
"errors": [
{ "insufficient_scope": ["API key lacks required scope: detection:write"] }
]
}
```
### Rate Limit Exceeded (429)
Returned when you've exceeded the rate limit for your account.
```json
{
"data": null,
"success": false,
"message": "Rate limit exceeded. Please try again later.",
"errors": [
{ "rate_limit_exceeded": ["Rate limit exceeded. Please try again later."] }
]
}
```
## Handling Errors
1. Check `success` first — if `false`, the request failed
2. Read `message` for a human-readable summary
3. Iterate `errors` to find specific error codes (e.g. `insufficient_frames`, `invalid_key`)
4. Use the error code to decide your recovery action
5. For `insufficient_scope`, create a new API key with the required scope in the Developer Portal
6. For rate limits, implement exponential backoff
7. Implement retry logic for transient errors (500)
8. For persistent 500 errors, check our [Status Page](https://status.moveris.com/){ target="_blank" } for ongoing incidents
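Steps 6 and 7 can be sketched as a retry wrapper with exponential backoff (the `call_with_backoff` helper and `RETRYABLE` set are illustrative; a production version would also honor the `retry_after` value returned with 429 responses):

```python
import random
import time

# Error codes worth retrying; everything else fails fast.
RETRYABLE = {"rate_limit_exceeded", "internal_error"}


def call_with_backoff(send, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry transient failures with exponential backoff and jitter.

    `send` performs one request and returns the parsed JSON envelope
    (`success`, `data`, `message`, `errors`).
    """
    for attempt in range(max_attempts):
        body = send()
        if body.get("success"):
            return body["data"]
        # Each entry in `errors` is a {code: [messages]} object.
        codes = {code for err in body.get("errors", []) for code in err}
        if not (codes & RETRYABLE) or attempt == max_attempts - 1:
            raise RuntimeError(body.get("message", "request failed"))
        time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
    raise RuntimeError("unreachable")
```

Non-retryable codes such as `invalid_key` or `insufficient_frames` surface immediately, since retrying cannot fix them.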
## Related
- [Health Check](health-check.md)
- [Video Detect (Async)](video-detect.md)
- [Fast Check Stream (Default)](fast-check-stream.md)
- [Fast Check (Legacy)](fast-check.md)
- [Fast Check Crops](fast-check-crops.md)
- [Rate Limits](rate-limits.md)
========================================================================
Source: examples/javascript.md
URL: https://documentation.moveris.com/examples/javascript/
========================================================================
# JavaScript Examples {: .javascript-examples-page }
Complete JavaScript examples for integrating Moveris API (v2).
!!! info "In plain terms"
Copy these code snippets to add liveness detection to your web app. They show how to capture frames from a video element, send them to the API, and display the result. Frame count depends on the model (e.g. 10 for `mixed-10-v2`, 30 for `mixed-30-v2`). Replace the placeholder API key with yours (preferably via a backend proxy).
!!! info "Moveris API (v2)"
These examples use Moveris API (v2) at `https://api.moveris.com`
!!! tip "Model selection"
Examples use the v1 flow (`model` in body). For v2 resolution, send `X-Model-Version: latest` header with `frame_count: 10|30|60|90|120` in the body. See [Model Versioning & Frames](../models/versioning-and-frames.md).
## REST API Examples
=== "JavaScript"
```javascript
async function checkLiveness(videoElement) {
const frames = [];
// Capture frames at ~10 FPS (count = model min_frames, e.g. 10 for mixed-10-v2)
const frameCount = 10; // Use getModels() for dynamic model selection
for (let i = 0; i < frameCount; i++) {
const canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
const ctx = canvas.getContext('2d');
ctx.drawImage(videoElement, 0, 0, 640, 480);
frames.push({
index: i,
timestamp_ms: performance.now(),
pixels: canvas.toDataURL('image/png').split(',')[1]
});
// Wait ~100ms between frames (10 FPS)
await new Promise(r => setTimeout(r, 100));
}
const response = await fetch('https://api.moveris.com/api/v1/fast-check', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': 'sk-your-api-key'
},
body: JSON.stringify({
session_id: crypto.randomUUID(),
source: 'live',
model: 'mixed-10-v2',
frames: frames
})
});
const body = await response.json();
if (!body.success) {
throw new Error(body.message ?? 'Request failed');
}
return body.data;
}
```
=== "TypeScript"
```typescript
interface Frame {
index: number;
timestamp_ms: number;
pixels: string; // Base64
}
interface LivenessResult {
verdict: "live" | "fake";
real_score: number;
score: number;
session_id: string;
processing_ms: number;
}
async function checkLiveness(
frames: Frame[],
sessionId: string
): Promise<LivenessResult> {
  const response = await fetch('https://api.moveris.com/api/v1/fast-check', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': 'sk-your-api-key'
    },
    body: JSON.stringify({
      session_id: sessionId,
      source: 'live',
      model: 'mixed-10-v2',
      frames: frames
    })
  });
  const body = await response.json();
  if (!body.success) {
    throw new Error(body.message ?? 'Request failed');
  }
  return body.data as LivenessResult;
}
```

!!! tip "SDK options"
    - `captureIntervalMs` (default 100) — tune capture rate (e.g. 50 for ~20 FPS)
    - `onEyeWarning` — called with messages like "Eyes are in shadow" or "Glare detected" before `onRestartNeeded`

## Installation

```bash
# Install MediaPipe for face detection (optional, for crops)
npm install @mediapipe/tasks-vision
```

## Usage Tips

- **Cleanup streams:** Always stop camera tracks in the useEffect cleanup to prevent memory leaks.
- **Face guide overlay:** Add a visual guide to help users position their face correctly within the frame.
- **Loading states:** Disable the button and show progress during verification to prevent double-submissions.
- **Error boundaries:** Wrap the camera component in an error boundary to gracefully handle permission denials.
- **Mobile considerations:** Use `playsInline` and `muted` attributes for iOS compatibility.

========================================================================
Source: examples/react-native.md
URL: https://documentation.moveris.com/examples/react-native/
========================================================================

# React Native Examples

Mobile integration examples for React Native apps using Expo Camera and React Native Vision Camera.

!!! info "In plain terms"
    These examples show how to add liveness detection to mobile apps (iOS and Android) using the device camera. Use Expo Camera for simpler setups or React Native Vision Camera for more control. Always proxy API calls through your backend to protect the API key.

!!! info "Moveris API (v2)"
    These examples use Moveris API (v2) endpoints. The `source` field is required and should be set to `"live"` for real-time camera capture.

!!! warning "Backend Proxy Required"
    Never include your API key in mobile app code. Always route requests through your backend server to keep credentials secure.

!!! tip "Model selection (v1 and v2)"
    You can keep both flows depending on your integration.
Use `model` in body for v1 compatibility, or use `X-Model-Version: latest` with `frame_count` for v2 alias-based resolution.

## Expo Camera

The easiest way to add liveness detection to Expo projects. Expo Camera provides built-in base64 encoding, making frame capture straightforward.

```typescript
import { CameraView, useCameraPermissions } from 'expo-camera';
import { useRef, useState } from 'react';
import { Button, View, Text, StyleSheet, ActivityIndicator } from 'react-native';

interface Frame {
  index: number;
  timestamp_ms: number;
  pixels: string;
}

interface LivenessResult {
  verdict: 'live' | 'fake';
  real_score: number;
  score: number;
  session_id: string;
}

export default function LivenessScreen() {
  const [permission, requestPermission] = useCameraPermissions();
  const [isChecking, setIsChecking] = useState(false);
  const [result, setResult] = useState<LivenessResult | null>(null);
  const cameraRef = useRef<CameraView>(null);

  // Sketch of the capture/submit flow: frames go to a placeholder backend
  // proxy endpoint, which adds X-API-Key server-side and forwards to Moveris.
  const runCheck = async () => {
    if (!cameraRef.current) return;
    setIsChecking(true);
    try {
      const frames: Frame[] = [];
      for (let i = 0; i < 10; i++) { // frame count must match the model (10 for mixed-10-v2)
        const photo = await cameraRef.current.takePictureAsync({ base64: true });
        frames.push({ index: i, timestamp_ms: Date.now(), pixels: photo?.base64 ?? '' });
        await new Promise((r) => setTimeout(r, 100)); // ~10 FPS
      }
      // Placeholder URL: your backend proxy; it can also generate session_id.
      const res = await fetch('https://your-backend.example.com/liveness-check', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ source: 'live', model: 'mixed-10-v2', frames }),
      });
      const body = await res.json();
      if (body.success) setResult(body.data);
    } finally {
      setIsChecking(false);
    }
  };

  if (!permission?.granted) {
    return <Button title="Grant camera permission" onPress={requestPermission} />;
  }

  return (
    <View style={{ flex: 1 }}>
      <CameraView ref={cameraRef} facing="front" style={{ flex: 1 }} />
      {isChecking ? <ActivityIndicator /> : <Button title="Check liveness" onPress={runCheck} />}
      {result && <Text>Verdict: {result.verdict}</Text>}
    </View>
  );
}
```

========================================================================
Source: sdk/overview.md
URL: https://documentation.moveris.com/sdk/overview/
========================================================================

# SDK Overview

## Client Methods

| Method | Endpoint | Frames | Description |
|---|---|---|---|
| getModels() | `/api/v1/models` | -- | Fetch available models (id, label, min_frames, deprecated) |
| fastCheck() | `/api/v1/fast-check` | by model | Fast liveness check with server-side face detection |
| fastCheckCrops() | `/api/v1/fast-check-crops` | by model | Fast check with pre-cropped 224x224 face images |
| fastCheckStream() | `/api/v1/fast-check-stream` | by model | Parallel frame streaming for lowest latency |
| verify() | `/api/v1/verify` | 50+ | Spatial-feature analysis for standard KYC |
| hybridCheck() | `/api/v1/hybrid-check` | 50+ | CNN + physiological hybrid model |
| hybrid50() | `/api/v1/hybrid-50` | 50+ | 50-frame hybrid (93.8% accuracy) |
| hybrid150() | `/api/v1/hybrid-150` | 150+ | 150-frame hybrid (96.2% accuracy) |
| health() | `/health` | -- | Health check |
## Protocol

The SDK communicates with the Moveris API over **HTTPS REST** with JSON payloads. Authentication is via the `X-API-Key` header. All requests include automatic retry with exponential backoff (3 attempts, 1–10 s delays).

## Next Steps

- [Installation](installation.md) – Install the SDK packages
- [React Quick Start](react/quick-start.md) – Get started with React in 5 minutes
- [React Native Quick Start](react-native/quick-start.md) – Get started with React Native
- [LivenessClient Reference](shared/client.md) – Full API client documentation
========================================================================
Source: sdk/installation.md
URL: https://documentation.moveris.com/sdk/installation/
========================================================================
# Installation {: .installation-page }
Install the SDK package for your platform. All packages are published to npm under the `@moveris` scope.
!!! info "In plain terms"
Run the install command for your platform (React web, [React Native](../glossary.md#react-native)). For React, you need `@moveris/react` and `@moveris/shared`. For React Native, add `@moveris/react-native` instead. Optional: install `@mediapipe/tasks-vision` for client-side face detection.
## React (Web)
```bash
npm install @moveris/react @moveris/shared
```
Or with other package managers:
=== "pnpm"
```bash
pnpm add @moveris/react @moveris/shared
```
=== "yarn"
```bash
yarn add @moveris/react @moveris/shared
```
### Peer Dependencies
| Package | Required Version |
|---|---|
| `react` | >= 18.0.0 |
| `react-dom` | >= 18.0.0 |
### Optional Dependencies
For client-side face detection (used by `useFaceDetection` and `useSmartFrameCapture`):
```bash
npm install @mediapipe/tasks-vision
```
## React Native
```bash
npm install @moveris/react-native @moveris/shared
```
### Peer Dependencies
| Package | Required Version |
|---|---|
| `react` | >= 18.0.0 |
| `react-native` | >= 0.83.1 |
| `react-native-vision-camera` | ^4.0.0 |
| `react-native-reanimated` | ^3.15.0 |
Install the peer dependencies:
```bash
npm install react-native-vision-camera react-native-reanimated
```
For iOS, install native pods:
```bash
cd ios && pod install
```
### Platform Configuration
#### iOS (`Info.plist`)
```xml
<key>NSCameraUsageDescription</key>
<string>Camera access is required for liveness verification.</string>
```

## With Face Detection

Enable smart frame capture that only captures frames when a face is properly positioned:

```tsx
import {
  MoverisProvider,
  LivenessView,
  createMediaPipeAdapter,
} from '@moveris/react';

function App() {
  return (
    <MoverisProvider>
      {/* ... render <LivenessView> with the MediaPipe adapter here */}
    </MoverisProvider>
  );
}
```

| Prop | Type | Default | Description |
|---|---|---|---|
| `state` | `"searching" \| "aligning" \| "ready" \| "capturing"` | `"searching"` | Oval guide visual state |
| `feedbackMessage` | `string` | -- | Message displayed below the oval |
| `style` | `CSSProperties` | -- | Overlay style overrides |
---
## LivenessButton
Styled action button for start, stop, and retry actions.
| Prop | Type | Default | Description |
|---|---|---|---|
| `variant` | `"primary" \| "secondary" \| "danger"` | `"primary"` | Visual style |
| `disabled` | `boolean` | `false` | Disable the button |
| `loading` | `boolean` | `false` | Show loading spinner |
---
## LivenessResult
Displays the verification result with verdict, score, and `realScore` (use for decision-making).
---

## useCamera

Manages camera permissions and the video stream.

```typescript
const {
  videoRef,
  stream,
  isReady,
  error,
  hasPermission,
  requestPermission,
  startStream,
  stopStream,
} = useCamera(options?);
```

### Options