Is it possible to run object detection models retrained with MediaPipe directly in TensorFlow instead of using MediaPipe?

I am using this MediaPipe guide to retrain an object detection model and export it as a TFLite model. I want to use that model in React Native. Unfortunately, there is no direct React Native implementation of MediaPipe, but I have a library that can run any .tflite model in RN.

At first I assumed that I only had to use MediaPipe to retrain the model, but I have now realised from the examples that I also need MediaPipe for the detection (inference) part. So I am wondering: can I run the model created with MediaPipe directly in TensorFlow?

After a first test, I get outputs with the following shapes for "location" and "scores":

((1, 19125, 4), (1, 19125, 4))
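
For reference, this is roughly how I ran the test with the plain TFLite interpreter (a minimal sketch; `model.tflite` is a placeholder for my exported file):

```python
import numpy as np
import tensorflow as tf

# Load the exported model with the plain TFLite interpreter,
# without any MediaPipe wrapper.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
input_shape = input_details[0]["shape"]
dummy = np.zeros(input_shape, dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# Print the name and shape of every output tensor.
for out in output_details:
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```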

The shape of "location" makes sense to me, but how should I interpret the "scores" data? Or does it not make sense at all to run the model without MediaPipe?
