[HELP REQUIRED] React native and blazepose detection #8543

Open
@KieranL2075

Description

Hi,

I'm not sure whether this is the right place to ask, but I came across issue #7205 and still had problems after following it.

The link to the demos provided in that issue was helpful, but they didn't work for me: after running yarn and yarn start inside the pose-detection folder, I just got a blue screen saying "something went wrong", and I couldn't find a solution.

My issue at the moment is:

"(NOBRIDGE) ERROR Error creating detector: [TypeError: Cannot read property 'fetch' of undefined]"

This is taken from the Expo terminal. My project is a React Native one, using a built APK for Android. This is my first React Native project, so I might be getting some details wrong. I have been trying on and off for about three weeks to get this working, and I am the bottleneck in a larger project, so absolutely any advice would be greatly appreciated. I will provide the code for the screen and the package.json below:

start-recording.tsx

import * as React from "react";
import { useEffect, useState } from "react";
import { Redirect, useRouter } from "expo-router";
import {
	Camera,
	useCameraDevice,
	useCameraPermission,
	useFrameProcessor,
	Frame,
} from "react-native-vision-camera";
import { Platform, StatusBar, StyleSheet, View } from "react-native";
import { SafeAreaView } from "react-native-safe-area-context";
import Svg, { Circle, Rect, Line, Path } from "react-native-svg";
import { ControlButton } from "@/components/screen-elems/ControlButton";
import { PlayButton } from "@/components/screen-elems/Record";
import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-backend-webgl";
import * as poseDetection from "@tensorflow-models/pose-detection";
import "whatwg-fetch";

export default function RecordScreen() {
	// decides which camera of phone to use
	const [hasPermission, setHasPermission] = useState(false);
	const [cameraPosition, setCameraPosition] = React.useState<"front" | "back">(
		"back",
	);
	const device = useCameraDevice(cameraPosition);
	const [zoom, setZoom] = React.useState(device?.neutralZoom);
	const [exposure, setExposure] = React.useState(0);
	const [flash, setFlash] = React.useState<"off" | "on">("off");
	const [torch, setTorch] = React.useState<"off" | "on">("off");
	const router = useRouter();
	const [detector, setDetector] = useState<poseDetection.PoseDetector | null>(
		null,
	);

	const model = poseDetection.SupportedModels.BlazePose;
	const detectorConfig = {
		runtime: "tfjs",
		enableSmoothing: true,
		modelType: "full",
	};

	useEffect(() => {
		const createDetector = async () => {
			try {
				console.log("Start Creating Detector");
				console.log("Waiting for Tensorflow");
				await tf.ready();
				console.log("tensorflow ready");

				console.log("Waiting for Detector");

				const d = await poseDetection.createDetector(
					model,
					detectorConfig
				);

				console.log("Detector created");
				setDetector(d);
				console.log("detector set");

			} catch (error) {
				console.error("Error creating detector:", error);
			}
		};

		console.log("Running Create Detector");
		createDetector();
	}, []);
	return (...); // UI omitted
}

I have removed a lot of the file above for simplicity; if the full package.json is needed, let me know.
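For reference, here is how I understand the TensorFlow.js setup is supposed to look in React Native, based on the `@tensorflow/tfjs-react-native` README. I am not certain this is what my project is missing, but the `fetch`-of-undefined error makes me suspect the React Native platform adapter is never registered (my file imports `whatwg-fetch` instead), so this is only a sketch of what I think is expected:

```typescript
// Sketch of the init I believe tfjs expects in React Native.
// Assumes @tensorflow/tfjs-react-native and its peer dependencies are
// installed; importing it registers the React Native platform for tfjs
// (which is what provides `fetch`), replacing the `whatwg-fetch` import.
import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-react-native";
import * as poseDetection from "@tensorflow-models/pose-detection";

export async function initDetector(): Promise<poseDetection.PoseDetector> {
	// Wait for the platform and backend to be ready before creating
	// the detector, as in my useEffect above.
	await tf.ready();
	return poseDetection.createDetector(poseDetection.SupportedModels.BlazePose, {
		runtime: "tfjs",
		enableSmoothing: true,
		modelType: "full",
	});
}
```

If someone can confirm whether this platform registration is actually the missing piece, that would already help a lot.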

Thank you in advance.
