Camera document how to use ImageStream #26348
Yes agree, great feature but it needs documentation.

@override
void initState() {
  super.initState();
  controller = CameraController(cameras[0], ResolutionPreset.medium);
  controller.initialize().then((_) {
    if (!mounted) {
      return;
    }
    controller.startImageStream((CameraImage availableImage) {
      controller.stopImageStream();
      _scanText(availableImage);
    });
    setState(() {});
  });
}
void _scanText(CameraImage availableImage) async {
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromBytes(availableImage.planes[0].bytes, null);
  final TextRecognizer textRecognizer = FirebaseVision.instance.textRecognizer();
  final VisionText visionText = await textRecognizer.processImage(visionImage);
  print(visionText.text);
  for (TextBlock block in visionText.blocks) {
    // final Rectangle<int> boundingBox = block.boundingBox;
    // final List<Point<int>> cornerPoints = block.cornerPoints;
    print(block.text);
    final List<RecognizedLanguage> languages = block.recognizedLanguages;
    for (TextLine line in block.lines) {
      // Same getters as TextBlock
      print(line.text);
      for (TextElement element in line.elements) {
        // Same getters as TextBlock
        print(element.text);
      }
    }
  }
}

Now I'm learning how to handle the FirebaseVisionImage.fromBytes method to set the metadata. |
This is just a code sample to update when the method is called. Waiting for the right direction, thank you! :) |
Uuuuh, my updated code works on Android! :)

void _scanText(CameraImage availableImage) async {
  _isScanBusy = true;
  print("scanning!...");
  /*
   * https://firebase.google.com/docs/ml-kit/android/recognize-text
   * .setWidth(480)  // 480x360 is typically sufficient for
   * .setHeight(360) // image recognition
   */
  final FirebaseVisionImageMetadata metadata = FirebaseVisionImageMetadata(
    rawFormat: availableImage.format.raw,
    size: Size(availableImage.width.toDouble(), availableImage.height.toDouble()),
    planeData: availableImage.planes.map((currentPlane) => FirebaseVisionImagePlaneMetadata(
      bytesPerRow: currentPlane.bytesPerRow,
      height: currentPlane.height,
      width: currentPlane.width
    )).toList(),
    rotation: ImageRotation.rotation90
  );
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromBytes(availableImage.planes[0].bytes, metadata);
  final TextRecognizer textRecognizer = FirebaseVision.instance.textRecognizer();
  final VisionText visionText = await textRecognizer.processImage(visionImage);
  print("--------------------visionText:${visionText.text}");
  for (TextBlock block in visionText.blocks) {
    // final Rectangle<int> boundingBox = block.boundingBox;
    // final List<Point<int>> cornerPoints = block.cornerPoints;
    print(block.text);
    final List<RecognizedLanguage> languages = block.recognizedLanguages;
    for (TextLine line in block.lines) {
      // Same getters as TextBlock
      print(line.text);
      for (TextElement element in line.elements) {
        // Same getters as TextBlock
        print(element.text);
      }
    }
  }
  _isScanBusy = false;
}

Full demo project here |
I am trying to display an image from an

onPressed: () async {
  await controller.startImageStream((CameraImage availableImage) {
    print(availableImage.planes[0].bytes.toString().substring(0, 200));
    setState(() {
      theImage = availableImage.planes[0].bytes;
    });
  });

where theImage is declared as Uint8List. A list of numbers between 0 and 255 is displayed. But if I try to show it as an image, it fails. What am I doing wrong? (I also have great problems stopping the ImageStream with controller.stopImageStream().) |
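For what it's worth, the raw plane bytes can't go straight into Image.memory, because that widget expects *encoded* image data (PNG, JPEG, ...), not a raw luminance plane. A sketch of the missing step, assuming the `image` package (imported as `imglib`, pre-4.0 API); `widgetFromYPlane` is a hypothetical helper name:

```dart
import 'dart:typed_data';
import 'package:flutter/widgets.dart';
import 'package:image/image.dart' as imglib;

// Wrap the raw Y (luminance) plane in an image buffer, encode it as PNG,
// and hand the *encoded* bytes to Image.memory.
Widget widgetFromYPlane(Uint8List yPlane, int width, int height) {
  final img = imglib.Image(width, height); // pre-4.0 image package API
  for (int i = 0; i < width * height; i++) {
    final luma = yPlane[i];
    // Pack the luma value into an opaque ARGB grayscale pixel.
    img.data[i] = (0xFF << 24) | (luma << 16) | (luma << 8) | luma;
  }
  final png = Uint8List.fromList(imglib.encodePng(img));
  return Image.memory(png); // valid encoded image data
}
```

This ignores any row padding (bytesPerRow), so it is only a starting point for devices where bytesPerRow == width.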
@lewixlabs @eduardkieser Using "add reaction" on the initial comment would increase priority while |
This is a sample of what the
How can I convert this to a |
here is a great demo of how to use image stream and face detection: https://github.com/bparrishMines/mlkit_demo/blob/master/lib/face_expression_reader.dart |
Uh, interesting, I think that's one of the authors of the official plugin. |
How to stream images with the correct number of pixels?

controller.startImageStream((CameraImage availableImage) {
  var framesY = availableImage.planes[0].bytes;
  var framesU = availableImage.planes[1].bytes;
  var framesV = availableImage.planes[2].bytes;
  var lenU = framesU.length;
  var lenV = framesV.length;
  var lenY = framesY.length;
  print('$lenU $lenV $lenY');
});

where, |
I can't convert YUV_420 to BGR. I hope |
The link today is dead. Demo code is here |
why do I get U & V like |
#26348 (comment) any suggestion on this? |
@sathiez How do you convert yuv420 to BGR (if you have valid input)? And can the result be used in Image.memory()? (I think it is a very reasonable scenario to scan images from an ImageStream, and, when an image meets certain criteria, display it with Image.memory, or some other widget. But I haven't been able to do that.) |
I tried to convert it in Python using OpenCV |
Same problem here, I need to display in a widget the CameraImage YUV420_888.

// _image -> YUV420_888 CameraImage from Camera Plugin startImageStream
// Create Image buffer
// imgLib -> Image package from https://pub.dartlang.org/packages/image
var img = imglib.Image(width, height);
// Fill image buffer with plane[0] from YUV420_888
for (int x = 0; x < width; x++) {
  for (int y = 0; y < height; y++) {
    final pixelColor = _image.planes[0].bytes[y * width + x];
    // color: 0x FF FF FF FF
    //           A  B  G  R
    // Calculate grayscale pixel
    img.data[y * width + x] = (0xFF << 24) | (pixelColor << 16) | (pixelColor << 8) | pixelColor;
  }
}
List<int> png = imglib.encodePng(img);
return Image.memory(png);

At least I can see the black and white image rotated 90 degrees on screen. |
Disabling compression of the PNG encoder, and setting the filter to none, improves performance a lot.

// CameraImage YUV420_888 -> PNG -> Image (compression: 0, filter: none)
// Black and white
Future<Image> convertYUV420toImage(CameraImage image) async {
  try {
    final int width = image.width;
    final int height = image.height;
    // imgLib -> Image package from https://pub.dartlang.org/packages/image
    var img = imglib.Image(width, height); // Create Image buffer
    // Fill image buffer with plane[0] from YUV420_888
    for (int x = 0; x < width; x++) {
      for (int y = 0; y < height; y++) {
        final pixelColor = image.planes[0].bytes[y * width + x];
        // color: 0x FF FF FF FF
        //           A  B  G  R
        // Calculate pixel color
        img.data[y * width + x] = (0xFF << 24) | (pixelColor << 16) | (pixelColor << 8) | pixelColor;
      }
    }
    imglib.PngEncoder pngEncoder = new imglib.PngEncoder(level: 0, filter: 0);
    List<int> png = pngEncoder.encodeImage(img);
    muteYUVProcessing = false;
    return Image.memory(png);
  } catch (e) {
    print(">>>>>>>>>>>> ERROR:" + e.toString());
  }
  return null;
} |
How to compress image stream efficiently so that I can send it through websocket? |
The level parameter is the compression level; increase that value (the default is 6). And don't change the default filter.

imglib.PngEncoder pngEncoder = new imglib.PngEncoder(level: 9); |
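As an alternative for streaming: JPEG is usually a much better fit than PNG for sending camera frames over a websocket, since it is lossy and produces far smaller payloads. A sketch, assuming the same `image` package (`imglib`, pre-4.0 API) plus the `web_socket_channel` package; `sendFrame` is a hypothetical helper name:

```dart
import 'dart:typed_data';
import 'package:image/image.dart' as imglib;
import 'package:web_socket_channel/web_socket_channel.dart';

// Encode an already-converted frame as JPEG and push it over an open
// websocket as a binary message. Lower quality -> smaller payload.
void sendFrame(WebSocketChannel channel, imglib.Image frame) {
  final Uint8List jpg = Uint8List.fromList(imglib.encodeJpg(frame, quality: 60));
  channel.sink.add(jpg);
}
```

Tuning `quality` down to 40-60 is usually where the size/readability trade-off lands for preview streams.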
@alejamp Thank you so much for this code, it works great in my app, with OK performance. |
@sikandernoori For ImageFormatGroup.bgra8888 on iOS, it now uses ResolutionPreset.veryHigh. But it still crashes on iPad mini 2 with iOS 12 when converting into RGB (native code) and running encodeJpg later. |
@CellCS I found the solution for it. @alexcohn mentioned it somewhere. The solution was as below:
|
@sikandernoori |
@SebghatYusuf Can You Please share Meta Information of CameraImage and what resolution preset you are using. |
@sikandernoori |
@SebghatYusuf can you please elaborate on how you solved the issue? I am still getting a grayscale image from @sikandernoori's method |
@physxP
|
To anyone looking into using compute to perform the conversion on an isolate: |
This code works very well on iOS, but I cannot read the data on Android; I don't know which conversion I need to use to take the data from the CameraImage.
|
Any example or demo of how to convert to an image and then display it? I tried the convert function. Display the image from
|
Don't know if this will help, but I've made a small Dart plugin to help manipulate these camera images. Then they can be safely passed to detectors. |
CameraImage to imageLib.Image performance
Here is the C code (by @Hugand)
|
This is how CameraImage is handled in the iOS environment, under the startImageStream function
Called in the callback of iOS's MethodChannel
Specific handling methods
Android processing can use the YUVTransform plugin |
I am using

Codec codec = await instantiateImageCodec(ulist);
image = (await codec.getNextFrame()).image;

and you can show the image like this. But like others here, I am getting errors in the conversion when using the images from the stream:

[ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: Exception: Invalid image data
#0 _futurize (dart:ui/painting.dart:6950:5)
#1 ImageDescriptor.encoded (dart:ui/painting.dart:6764:12)
#2 instantiateImageCodecWithSize (dart:ui/painting.dart:2307:60)
#3 instantiateImageCodecFromBuffer (dart:ui/painting.dart:2251:10)
#4 instantiateImageCodec (dart:ui/painting.dart:2198:10)
<asynchronous suspension>
#5 _ImageStreamPageState.videoStream.<anonymous closure> (package:cbj_smart_device_flutter/presentation/client/image_stream_page.dart:39:11)
<asynchronous suspension> |
@faslurrajah this solution no longer works on the latest package version due to API changes. Here's an adjusted set of methods from my project, ready for general use; they should work on both Android and iOS:

// CameraImage BGRA8888 -> imglib.Image
// Color
imglib.Image imageFromBGRA8888(CameraImage image) {
  return imglib.Image.fromBytes(
    width: image.width,
    height: image.height,
    bytes: image.planes[0].bytes.buffer,
    order: imglib.ChannelOrder.bgra,
  );
}

// CameraImage YUV420_888 -> imglib.Image
// Color
imglib.Image imageFromYUV420(CameraImage image) {
  final uvRowStride = image.planes[1].bytesPerRow;
  final uvPixelStride = image.planes[1].bytesPerPixel ?? 0;
  final img = imglib.Image(width: image.width, height: image.height);
  for (final p in img) {
    final x = p.x;
    final y = p.y;
    final uvIndex = uvPixelStride * (x / 2).floor() + uvRowStride * (y / 2).floor();
    // Use the Y plane's row stride instead of the image width: some devices
    // pad the image data, so width != bytesPerRow, and using the width would
    // give a distorted image.
    final index = y * image.planes[0].bytesPerRow + x;
    final yp = image.planes[0].bytes[index];
    final up = image.planes[1].bytes[uvIndex];
    final vp = image.planes[2].bytes[uvIndex];
    p.r = (yp + vp * 1436 / 1024 - 179).round().clamp(0, 255).toInt();
    p.g = (yp - up * 46549 / 131072 + 44 - vp * 93604 / 131072 + 91)
        .round()
        .clamp(0, 255)
        .toInt();
    p.b = (yp + up * 1814 / 1024 - 227).round().clamp(0, 255).toInt();
  }
  return img;
}

imglib.Image? imageFromCameraImage(CameraImage image) {
  try {
    imglib.Image img;
    switch (image.format.group) {
      case ImageFormatGroup.yuv420:
        img = imageFromYUV420(image);
        break;
      case ImageFormatGroup.bgra8888:
        img = imageFromBGRA8888(image);
        break;
      default:
        return null;
    }
    return img;
  } catch (e) {
    //print(">>>>>>>>>>>> ERROR:" + e.toString());
  }
  return null;
} |
FYI: I think I have an explanation. While some phones produce yuv420p, others are actually quietly producing nv21/nv12, but then exposing interleaved U and V planes.
This means the U and V bytes are interleaved in memory as follows, and the U and V plane buffers are just offset by one byte:
Which is actually nv21 (or nv12, because U is first) "faking it" as yuv420p! So everything makes sense, and you can actually get nv21 just by using the whole (interleaved) U buffer and the last byte of the V buffer. Of course, it would be best if we could actually get "explicit" nv21. |
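A hedged sketch of that idea in Dart: the helper below rebuilds a single NV21-style byte buffer from the three plane buffers, following the recipe described above (whole interleaved U buffer plus the last byte of the V buffer). The function name is hypothetical, and it is only valid on devices whose chroma planes really are interleaved views offset by one byte (check bytesPerPixel == 2 first).

```dart
import 'dart:typed_data';

// Rebuild an NV21-style buffer (Y plane, then interleaved chroma) from the
// three "fake planar" buffers. Hypothetical helper: only correct when the
// U and V buffers are interleaved views offset by one byte.
Uint8List nv21FromInterleavedPlanes(
    Uint8List yPlane, Uint8List uPlane, Uint8List vPlane) {
  final out = Uint8List(yPlane.length + uPlane.length + 1);
  out.setRange(0, yPlane.length, yPlane); // luma, copied as-is
  // The whole interleaved U buffer already carries the chroma bytes.
  out.setRange(yPlane.length, yPlane.length + uPlane.length, uPlane);
  // The single missing chroma byte is the last byte of the V buffer.
  out[out.length - 1] = vPlane[vPlane.length - 1];
  return out;
}
```

The resulting buffer can then be handed to native code that expects a contiguous NV21 frame.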
@thandal I am not sure what you want to achieve this way. Is there some API that expects an |
First of all, thank you so much for adding access to the image stream in 0.2.8; I'm sure many people will be very happy about this.
I know the latest commit is fresh off the press, but I would love to see a little more documentation on how to use the camera image stream (other than: "use
cameraController.startImageStream(listener)
to process the images").