
[firebase_ml_vision] FirebaseVisionImage.fromBytes image format #1259

@andythehood

Description

In the description of the FirebaseVisionImage.fromBytes method in the firebase_ml_vision package, there is a comment that NV21 format is expected on Android, and that this can be obtained by concatenating the planes of a YUV_420_888 format image, which is exactly what the example code does:

https://github.com/FirebaseExtended/flutterfire/blob/master/packages/firebase_ml_vision/example/lib/scanner_utils.dart
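For reference, the concatenation helper in that example is essentially this (paraphrased, not a verbatim copy):

```dart
import 'dart:typed_data';

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';

// Paraphrased from the example's scanner_utils.dart: simply append the
// bytes of every plane of the CameraImage into one buffer.
Uint8List concatenatePlanes(List<Plane> planes) {
  final WriteBuffer allBytes = WriteBuffer();
  for (final Plane plane in planes) {
    allBytes.putUint8List(plane.bytes);
  }
  return allBytes.done().buffer.asUint8List();
}
```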

However, my understanding is that YUV_420_888 is a planar format (U and V as separate, non-interleaved planes), whereas NV21 is semi-planar, with the U and V samples interleaved into a single chroma plane and the V sample coming first in each pair.

So, if this is right, simply concatenating the YUV_420_888 planes won't produce a true NV21 image: the Y plane will be correct, so the image will look roughly right, but the colours will be skewed because the chroma samples are in the wrong order.
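If a faithful conversion is actually needed, I'd expect it to look more like this rough, untested sketch (it assumes planes[1] is U and planes[2] is V, as on Android, and that both chroma planes share the same row and pixel strides):

```dart
import 'dart:typed_data';

import 'package:camera/camera.dart';

/// Rough sketch of a faithful YUV_420_888 -> NV21 conversion.
/// Assumes planes[1] is U and planes[2] is V, and that both chroma
/// planes share the same row and pixel strides.
Uint8List yuv420ToNv21(CameraImage image) {
  final int width = image.width;
  final int height = image.height;
  final Uint8List nv21 = Uint8List(width * height * 3 ~/ 2);

  // Copy the luma plane row by row; its row stride may be wider than the image.
  final Plane yPlane = image.planes[0];
  int out = 0;
  for (int row = 0; row < height; row++) {
    nv21.setRange(out, out + width, yPlane.bytes, row * yPlane.bytesPerRow);
    out += width;
  }

  // Interleave chroma as V,U pairs (NV21 order), one pair per 2x2 block.
  final Plane uPlane = image.planes[1];
  final Plane vPlane = image.planes[2];
  final int uvPixelStride = uPlane.bytesPerPixel ?? 1;
  for (int row = 0; row < height ~/ 2; row++) {
    for (int col = 0; col < width ~/ 2; col++) {
      final int uvIndex = row * uPlane.bytesPerRow + col * uvPixelStride;
      nv21[out++] = vPlane.bytes[uvIndex]; // V comes first in NV21
      nv21[out++] = uPlane.bytes[uvIndex];
    }
  }
  return nv21;
}
```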

Do the ML Kit models really require NV21, or do they not care that much about the UV planes?

FirebaseVisionImage FirebaseVisionImage.fromBytes(Uint8List bytes, FirebaseVisionImageMetadata metadata)
package:firebase_ml_vision/firebase_ml_vision.dart

Construct a [FirebaseVisionImage] from a list of bytes.

On Android, expects android.graphics.ImageFormat.NV21 format. Note: Concatenating the planes of android.graphics.ImageFormat.YUV_420_888 into a single plane, converts it to android.graphics.ImageFormat.NV21.
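For completeness, the example then hands the concatenated bytes to fromBytes roughly like this (again paraphrased from scanner_utils.dart; exact parameter types may differ slightly between package versions):

```dart
import 'dart:typed_data';
import 'dart:ui' show Size;

import 'package:camera/camera.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Paraphrased from scanner_utils.dart: wrap the (supposedly NV21) bytes
// plus the per-plane metadata of the CameraImage into a FirebaseVisionImage.
FirebaseVisionImage buildVisionImage(
    CameraImage image, Uint8List bytes, ImageRotation rotation) {
  final FirebaseVisionImageMetadata metadata = FirebaseVisionImageMetadata(
    size: Size(image.width.toDouble(), image.height.toDouble()),
    rawFormat: image.format.raw,
    rotation: rotation,
    planeData: image.planes
        .map((Plane plane) => FirebaseVisionImagePlaneMetadata(
              bytesPerRow: plane.bytesPerRow,
              height: plane.height,
              width: plane.width,
            ))
        .toList(),
  );
  return FirebaseVisionImage.fromBytes(bytes, metadata);
}
```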
