# React Native Hooks

The `@moveris/react-native` package provides 5 hooks for building custom native liveness verification UIs.

## In plain terms

Hooks for building your own mobile UI while the SDK handles the logic: `useLiveness` for the full flow, `useCamera` for camera access, `useFrameProcessor` for frame processing, `useFaceDetection` for face detection, and `useSmartFrameCapture` for quality-gated capture.

## Optional parameters

Parameters followed by `?` (for example, `options?`) are optional. You can omit them if you don't need to customize the call. For instance, `useLiveness()` works without passing options.

## Import

```ts
import {
  useLiveness,
  useCamera,
  useFrameProcessor,
  useFaceDetection,
  useSmartFrameCapture,
} from '@moveris/react-native';
```

## useLiveness

The main hook that manages the full liveness verification flow: camera access, frame capture, API submission, and result handling.

```ts
const {
  status,
  result,
  error,
  framesReceived,
  framesRequired,
  feedbackMessage,
  start,
  stop,
  reset,
} = useLiveness(options?);
```

### Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `FastCheckModel` | `'10'` | Model to use |
| `source` | `FrameSource` | `'live'` | Frame source |
| `sessionId` | `string` | auto-generated | Optional. Session ID for API calls. When provided, the same ID is used for every request in the session. |
| `mode` | `'batch' \| 'stream'` | `'batch'` | Upload mode |
| `autoStart` | `boolean` | `false` | Start capturing automatically |
| `onResult` | `(result: LivenessResult) => void` | -- | Result callback |
| `onError` | `(error: Error) => void` | -- | Error callback |
| `onProgress` | `(received: number, total: number) => void` | -- | Progress callback |
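
As a concrete sketch of wiring these options together, the object below pins a session ID and attaches callbacks. The session ID, the verdict string, and the `LivenessResultLike` shape are illustrative stand-ins, not SDK values:

```typescript
// Illustrative options object for useLiveness({ ... }). It is plain
// data plus callbacks, so it can be built and exercised on its own.
type LivenessResultLike = { verdict: string }; // stand-in for LivenessResult

const progressLog: string[] = [];

const livenessOptions = {
  mode: 'stream' as const,   // upload frames as they are captured
  sessionId: 'demo-session', // reuse one ID for every request in the session
  autoStart: false,          // wait for an explicit start()
  onProgress: (received: number, total: number) => {
    progressLog.push(`${received}/${total}`);
  },
  onResult: (result: LivenessResultLike) => {
    progressLog.push(`verdict:${result.verdict}`);
  },
};

// Simulate the calls the hook would make as frames arrive.
livenessOptions.onProgress(1, 10);
livenessOptions.onResult({ verdict: 'live' });
```

In a component you would pass this object straight to `useLiveness(livenessOptions)`.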

### Return Value

| Property | Type | Description |
| --- | --- | --- |
| `status` | `'idle' \| 'capturing' \| 'processing' \| 'complete' \| 'error'` | Current state |
| `result` | `LivenessResult \| null` | Verification result |
| `error` | `Error \| null` | Error if failed |
| `framesReceived` | `number` | Frames captured so far |
| `framesRequired` | `number` | Total frames needed |
| `feedbackMessage` | `string \| null` | User-facing feedback |
| `start` | `() => void` | Start the flow |
| `stop` | `() => void` | Stop capturing |
| `reset` | `() => void` | Reset to idle |

### Example

```tsx
import { Alert, Button, Text, View } from 'react-native';

function CustomLivenessScreen() {
  const {
    status,
    result,
    framesReceived,
    framesRequired,
    feedbackMessage,
    start,
    reset,
  } = useLiveness({
    mode: 'stream',
    onResult: (r) => Alert.alert('Result', r.verdict),
  });

  return (
    <View style={{ flex: 1, justifyContent: 'center', padding: 20 }}>
      <Text>Status: {status}</Text>
      <Text>Frames: {framesReceived}/{framesRequired}</Text>
      {feedbackMessage && <Text>{feedbackMessage}</Text>}

      {status === 'idle' && (
        <Button title="Start" onPress={start} />
      )}
      {result && <Text>Verdict: {result.verdict}</Text>}
      {(result || status === 'error') && (
        <Button title="Retry" onPress={reset} />
      )}
    </View>
  );
}
```

## useCamera

Manages camera permissions and the Vision Camera device.

```ts
const {
  device,
  isReady,
  error,
  hasPermission,
  requestPermission,
} = useCamera(options?);
```

### Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `facing` | `'front' \| 'back'` | `'front'` | Camera to use |

### Return Value

| Property | Type | Description |
| --- | --- | --- |
| `device` | `CameraDevice \| undefined` | Vision Camera device |
| `isReady` | `boolean` | Whether the camera is ready |
| `error` | `Error \| null` | Camera error |
| `hasPermission` | `boolean` | Camera permission granted |
| `requestPermission` | `() => Promise<boolean>` | Request camera permission |

### Example

```tsx
import { ActivityIndicator, Button } from 'react-native';
import { Camera } from 'react-native-vision-camera';

function CameraPreview() {
  const { device, isReady, hasPermission, requestPermission } = useCamera();

  if (!hasPermission) {
    return <Button title="Enable Camera" onPress={requestPermission} />;
  }

  if (!device || !isReady) {
    return <ActivityIndicator />;
  }

  return (
    <Camera
      style={{ flex: 1 }}
      device={device}
      isActive={true}
      photo={true}
    />
  );
}
```

## useFrameProcessor

Integrates with Vision Camera's frame processor for real-time frame handling.

```ts
const {
  frameProcessor,
  capturedFrames,
  framesReceived,
  clearFrames,
} = useFrameProcessor(options?);
```

### Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `framesRequired` | `number` | `10` | Number of frames to capture |
| `captureInterval` | `number` | `100` | Minimum ms between captures |
| `onFrameCaptured` | `(frame: CapturedFrame) => void` | -- | Called after each capture |
| `onComplete` | `(frames: CapturedFrame[]) => void` | -- | Called when all frames captured |

### Return Value

| Property | Type | Description |
| --- | --- | --- |
| `frameProcessor` | `FrameProcessor` | Pass this to Vision Camera's `frameProcessor` prop |
| `capturedFrames` | `CapturedFrame[]` | Captured frames |
| `framesReceived` | `number` | Count of captured frames |
| `clearFrames` | `() => void` | Clear captured frames |

### Example

```tsx
import { Text, View } from 'react-native';
import { Camera } from 'react-native-vision-camera';

function ProcessorCapture() {
  const { device } = useCamera();
  const { frameProcessor, framesReceived } = useFrameProcessor({
    framesRequired: 10,
    onComplete: (frames) => {
      console.log('Captured all frames:', frames.length);
    },
  });

  if (!device) return null;

  return (
    <View style={{ flex: 1 }}>
      <Camera
        style={{ flex: 1 }}
        device={device}
        isActive={true}
        frameProcessor={frameProcessor}
      />
      <Text>Frames: {framesReceived}/10</Text>
    </View>
  );
}
```

## useFaceDetection

Real-time face detection using native adapters. Supports Google ML Kit and Expo Face Detector.

```ts
const {
  detectionResult,
  isInitialized,
  error,
} = useFaceDetection(options);
```

### Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `adapter` | `FaceDetectorAdapter` | -- | Required. Face detector adapter |
| `enabled` | `boolean` | `true` | Enable/disable detection |
| `interval` | `number` | `100` | Detection interval in milliseconds |

### Adapters

#### Google ML Kit

```ts
import { createMLKitAdapter } from '@moveris/react-native';

const adapter = createMLKitAdapter({
  performanceMode: 'fast',
  landmarkMode: 'all',
  classificationMode: 'all',
});
```

#### Expo Face Detector

```ts
import { createExpoFaceDetectorAdapter } from '@moveris/react-native';

const adapter = createExpoFaceDetectorAdapter({
  mode: 'fast',
  detectLandmarks: true,
  runClassifications: true,
});
```

### Return Value

| Property | Type | Description |
| --- | --- | --- |
| `detectionResult` | `DetectionResult \| null` | Latest detection result |
| `isInitialized` | `boolean` | Whether the detector is ready |
| `error` | `Error \| null` | Initialization error |

### Example

```tsx
import { Text } from 'react-native';
import { createMLKitAdapter } from '@moveris/react-native';

function FaceDetectionStatus() {
  const { detectionResult, isInitialized } = useFaceDetection({
    adapter: createMLKitAdapter(),
  });

  if (!isInitialized) return <Text>Loading face detector...</Text>;

  return (
    <Text>
      {detectionResult?.faceDetected
        ? 'Face detected'
        : 'No face detected'}
    </Text>
  );
}
```

## useSmartFrameCapture

Intelligent frame capture that automatically captures frames only when quality thresholds are met. Uses face detection, blur analysis, and lighting evaluation to ensure high-quality captures.

```ts
const {
  status,
  capturedFrames,
  framesReceived,
  framesRequired,
  feedbackMessage,
  qualityScore,
  start,
  stop,
  reset,
} = useSmartFrameCapture(options);
```

### Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `adapter` | `FaceDetectorAdapter` | -- | Required. Face detector adapter |
| `framesRequired` | `number` | `10` | Number of frames to capture |
| `captureInterval` | `number` | `100` | Minimum ms between captures |
| `qualityThreshold` | `number` | `0.7` | Minimum quality score (0–1) |
| `blurThreshold` | `number` | `50` | Maximum blur score |
| `onFrameCaptured` | `(frame: CapturedFrame) => void` | -- | Called after each capture |
| `onComplete` | `(frames: CapturedFrame[]) => void` | -- | Called when all frames captured |
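
To make the interplay of these thresholds concrete, here is a rough sketch of a per-frame quality gate. It illustrates the idea only; the field names and logic are assumptions, not the SDK's actual implementation:

```typescript
// Sketch of the gating logic the thresholds above control: a frame is
// captured only if enough time has passed, quality is high enough, and
// the frame is sharp enough.
interface FrameQuality {
  qualityScore: number; // 0..1, higher is better
  blurScore: number;    // higher means blurrier
  timestampMs: number;  // when this frame arrived
}

function shouldCapture(
  frame: FrameQuality,
  lastCaptureMs: number,
  opts = { qualityThreshold: 0.7, blurThreshold: 50, captureInterval: 100 },
): boolean {
  const intervalOk = frame.timestampMs - lastCaptureMs >= opts.captureInterval;
  const qualityOk = frame.qualityScore >= opts.qualityThreshold;
  const sharpOk = frame.blurScore <= opts.blurThreshold;
  return intervalOk && qualityOk && sharpOk;
}
```

Raising `qualityThreshold` or lowering `blurThreshold` trades capture speed for frame quality.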

### Return Value

| Property | Type | Description |
| --- | --- | --- |
| `status` | `'idle' \| 'capturing' \| 'complete'` | Capture status |
| `capturedFrames` | `CapturedFrame[]` | Captured frames |
| `framesReceived` | `number` | Frames captured so far |
| `framesRequired` | `number` | Total frames needed |
| `feedbackMessage` | `string \| null` | User-facing guidance |
| `qualityScore` | `number` | Current frame quality (0–1) |
| `start` | `() => void` | Start smart capture |
| `stop` | `() => void` | Pause capture |
| `reset` | `() => void` | Reset and clear frames |

### Example

```tsx
import { Button, Text, View } from 'react-native';
import { createMLKitAdapter } from '@moveris/react-native';

function SmartCapture() {
  const {
    status,
    framesReceived,
    framesRequired,
    feedbackMessage,
    qualityScore,
    start,
    reset,
  } = useSmartFrameCapture({
    adapter: createMLKitAdapter(),
    framesRequired: 10,
    qualityThreshold: 0.7,
    onComplete: (frames) => {
      console.log('All frames captured:', frames.length);
      // Send frames to API...
    },
  });

  return (
    <View style={{ flex: 1, padding: 20 }}>
      {feedbackMessage && (
        <Text style={{ fontSize: 16, color: '#3b82f6', textAlign: 'center' }}>
          {feedbackMessage}
        </Text>
      )}

      <Text>Frames: {framesReceived}/{framesRequired}</Text>
      <Text>Quality: {(qualityScore * 100).toFixed(0)}%</Text>

      {status === 'idle' && (
        <Button title="Start Smart Capture" onPress={start} />
      )}
      {status === 'complete' && (
        <Button title="Retry" onPress={reset} />
      )}
    </View>
  );
}
```

## Differences from React (Web) Hooks

| Feature | React (Web) | React Native |
| --- | --- | --- |
| Camera access | MediaStream API | `react-native-vision-camera` |
| Frame capture | `useFrameCapture` (canvas-based) | `useFrameProcessor` (Vision Camera) |
| Face detection | MediaPipe adapter | ML Kit / Expo Face Detector adapters |
| Styles | CSS / `CSSProperties` | `StyleSheet` / `ViewStyle` |
| Button events | `onClick` | `onPress` |

The hook APIs (`useLiveness`, `useSmartFrameCapture`, `useFaceDetection`) are intentionally identical between platforms to make code portable.
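
Because the hook APIs match, view-model logic can live in shared modules. As a small hypothetical example (not part of the SDK), a progress formatter that a web `<span>` and a native `<Text>` could both render from `useLiveness` state:

```typescript
// Hypothetical shared helper: formats framesReceived/framesRequired
// into a display string, independent of the rendering platform.
function formatProgress(received: number, required: number): string {
  if (required <= 0) return '0%';
  const pct = Math.min(100, Math.round((received / required) * 100));
  return `${received}/${required} (${pct}%)`;
}
```

Only the rendering layer (CSS vs. `StyleSheet`, `onClick` vs. `onPress`) needs platform-specific code.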