Ever wanted to build a face detection app using React Native + Expo? Here's how you can go from zero to working face detection in just 10 minutes — including real-time face bounding boxes and face status like yaw, pitch, and eye openness!

Let’s dive in. 💪

🧱 Step 1: Set up your environment

First, create a new Expo project with the TypeScript template. I'll use "face-detection" as the project name.

npx create-expo-app@latest face-detection --template blank-typescript

Then install the required packages:

cd face-detection
npx expo install \
  react-native-vision-camera \
  react-native-vision-camera-face-detector \
  @shopify/react-native-skia \
  react-native-worklets-core \
  react-native-reanimated
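
Since react-native-vision-camera ships an Expo config plugin, register it in app.json before prebuilding so the iOS camera usage description is added for you. A minimal sketch (the permission text is just an example; adjust it for your app):

{
  "expo": {
    "plugins": [
      [
        "react-native-vision-camera",
        {
          "cameraPermissionText": "$(PRODUCT_NAME) needs camera access for face detection."
        }
      ]
    ]
  }
}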

🧠 Step 2: Paste the full sample code

Open the project in VS Code:

code .

Paste the following sample code into App.tsx:

// App.tsx
import React, {useEffect, useState, useRef} from "react"
import {StyleSheet, View, Text, useWindowDimensions } from "react-native"
import {Camera as VisionCamera, useCameraDevice, useCameraPermission } from "react-native-vision-camera"
import {Camera, Face, FaceDetectionOptions} from 'react-native-vision-camera-face-detector';
import {useSharedValue, useAnimatedStyle, withTiming} from 'react-native-reanimated';
import Animated from 'react-native-reanimated';

export default function App() {
  const {hasPermission} = useCameraPermission()
  const {width, height} = useWindowDimensions();
  const [faceStatus, setFaceStatus] = useState<{ yaw: string; pitch: string; eye: string } | null>(null);
  const device = useCameraDevice('front')

  useEffect(() => {
    (async () => {
      const status = await VisionCamera.requestCameraPermission();
      console.log(`Camera permission: ${status}`);
    })();
  }, [device]);

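  // Shared values holding the detected face bounds; Reanimated animates the overlay box toward them.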
  const aFaceW = useSharedValue(0);
  const aFaceH = useSharedValue(0);
  const aFaceX = useSharedValue(0);
  const aFaceY = useSharedValue(0);

  const drawFaceBounds = (face?: Face) => {
    if (face) {
      const {width, height, x, y} = face.bounds;
      aFaceW.value = width;
      aFaceH.value = height;
      aFaceX.value = x;
      aFaceY.value = y;
    } else {
      aFaceW.value = aFaceH.value = aFaceX.value = aFaceY.value = 0;
    }
  };

  const faceBoxStyle = useAnimatedStyle(() => ({
    position: 'absolute',
    borderWidth: 4,
    borderLeftColor: 'rgb(0,255,0)',
    borderRightColor: 'rgb(0,255,0)',
    borderBottomColor: 'rgb(0,255,0)',
    borderTopColor: 'rgb(0,255,0)',
    width: withTiming(aFaceW.value, {duration: 100}),
    height: withTiming(aFaceH.value, {duration: 100}),
    left: withTiming(aFaceX.value, {duration: 100}),
    top: withTiming(aFaceY.value, {duration: 100})
  }));

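  // ML Kit detector options. windowWidth/windowHeight plus autoScale let the detector map face bounds to screen coordinates.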
  const faceDetectionOptions = useRef<FaceDetectionOptions>({
    performanceMode: 'accurate',
    landmarkMode: 'all',
    contourMode: 'none',
    classificationMode: 'all',
    trackingEnabled: false,
    windowWidth: width,
    windowHeight: height,
    autoScale: true,
  }).current;

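  // Called by the face-detector Camera with the faces found in each processed frame.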
  const handleFacesDetection = (faces: Face[]) => {
    try {
      if (faces?.length > 0) {
        const face = faces[0];

        // You can add your own logic here!!
        drawFaceBounds(face);
        setFaceStatus({ 
          yaw: face.yawAngle > 15 ? "Right" : face.yawAngle < -15 ? "Left" : "Center",
          pitch: face.pitchAngle > 15 ? "Up" : face.pitchAngle < -10 ? "Down" : "Center", 
          eye: face.leftEyeOpenProbability > 0.7 && face.rightEyeOpenProbability > 0.7 ? "Open" : "Close" 
        });
      } else {
        drawFaceBounds();
      }
    } catch (error) {
      console.error("Error in face detection:", error);
    }
  }

  if (!hasPermission) return <Text>Camera permission is required to use this feature.</Text>
  if (device == null) return <Text>Camera device not found.</Text>

  return (
    <View style={StyleSheet.absoluteFill}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        faceDetectionCallback={handleFacesDetection}
        faceDetectionOptions={faceDetectionOptions}
      />
      <Animated.View style={[faceBoxStyle, styles.animatedView]}>
        <Text style={styles.statusText}>Yaw: {faceStatus?.yaw}</Text>
        <Text style={styles.statusText}>Pitch: {faceStatus?.pitch}</Text>
        <Text style={styles.statusText}>Eye: {faceStatus?.eye}</Text>
      </Animated.View>
    </View>
  )
}

const styles = StyleSheet.create({
  animatedView: {
    justifyContent: 'flex-end',
    alignItems: 'flex-start',
    borderRadius: 20,
    padding: 10,
  },
  statusText: {
    color: 'lightgreen',
    fontSize: 14,
    fontWeight: 'bold',
  },
});

This will draw a green animated rectangle around your face and display the current yaw, pitch, and eye status (open/close).
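
The handleFacesDetection callback is also where you'd plug in your own logic. For example, because classificationMode is set to 'all', each Face also carries a smilingProbability you could turn into a label. A minimal sketch (the 0.7 threshold is an arbitrary value I picked, not something the library prescribes):

import { Face } from 'react-native-vision-camera-face-detector';

// Derive a smile label from ML Kit's classification output.
// 0.7 is an arbitrary example threshold.
function getSmileStatus(face: Face): 'Smiling' | 'Neutral' {
  return face.smilingProbability > 0.7 ? 'Smiling' : 'Neutral';
}

You could call getSmileStatus(face) inside handleFacesDetection and surface it in faceStatus just like the yaw and pitch labels.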

📱 Step 3: Build and run on iOS

Generate the native iOS/Android project files needed by the native modules, then install the iOS dependencies (CocoaPods):

npx expo prebuild
npx pod-install

Run npm start first to launch the Metro bundler so your JavaScript code is loaded into the app. Then open the project in Xcode and build it with the Run button. You don't need to edit any code in Xcode; we open it only to install the app onto your physical device.

npm start
open ios/facedetection.xcworkspace

🔧 In Xcode:
Select your physical device from the device dropdown at the top.

Click the Run ▶️ button in the top-left corner. This will build the app and install it directly onto your iPhone. Once installed, you'll see the camera view launch on your device with face detection and bounding box animations.
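
If you'd rather skip the Xcode UI, the same build-and-install step can usually be done from the terminal (assuming your device is connected and trusted):

npx expo run:ios --device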

🛡️ Note:
If this is your first time installing the app on your device, you'll need to manually trust your developer certificate.

On your iPhone, go to:
Settings > General > VPN & Device Management
Tap your Apple ID under "Developer App" and select "Trust".

🙌 That's it!

You now have a fully functional real-time face detection app built with React Native + Expo — in just 10 minutes.