CameraView frame processing support #1687
Replies: 5 comments 12 replies
-
Here's the Windows version:

```csharp
using Windows.Devices.PointOfService;
// ...

private async Task StartScanAsync()
{
    BarcodeScanner scanner = await BarcodeScanner.GetDefaultAsync();
    ClaimedBarcodeScanner claimedScanner = await scanner.ClaimScannerAsync();

    claimedScanner.DataReceived += claimedScanner_DataReceived;
    claimedScanner.IsDecodeDataEnabled = true;
    await claimedScanner.EnableAsync();

    if (!string.IsNullOrEmpty(scanner.VideoDeviceId)) // Camera based
    {
        await claimedScanner.StartSoftwareTriggerAsync();
    }
}

private void claimedScanner_DataReceived(ClaimedBarcodeScanner sender, BarcodeScannerDataReceivedEventArgs args)
{
    var data = Windows.Security.Cryptography.CryptographicBuffer.ConvertBinaryToString(
        Windows.Security.Cryptography.BinaryStringEncoding.Utf8,
        args.Report.ScanDataLabel);
}
```

Concerns: WinUI doesn't have …
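For completeness, a matching teardown might look like the following. This is only a sketch: it assumes `claimedScanner` is promoted to a field rather than the local shown above, and uses the documented counterparts of the start-up calls (`StopSoftwareTriggerAsync`, `DisableAsync`):

```csharp
private async Task StopScanAsync()
{
    if (claimedScanner is not null)
    {
        // Stop the camera-based software trigger if one was started.
        await claimedScanner.StopSoftwareTriggerAsync();
        await claimedScanner.DisableAsync();

        claimedScanner.DataReceived -= claimedScanner_DataReceived;
        claimedScanner.Dispose(); // ClaimedBarcodeScanner projects IClosable as IDisposable
        claimedScanner = null;
    }
}
```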
-
I guess you meant https://github.com/Redth/ZXing.Net.MAUI ?
-
There is a version from @VladislavAntonyuk that could work here. It works on all the platforms; what I am not yet sure of, as I have not tested it, is whether it only works with QR codes or not. Now that we have CameraView, this could be a fairly simple and great addition to MCT.
-
Firstly, we will expose a new concept called `CameraScenario`. We could then define something like a

```csharp
public class ZXingBarcodeDetectionScenario : CameraScenario
{
    // Add in the magic here.
}
```

That all relies on a shared code layer which may not perform as well as the next part. We implement a

```csharp
public abstract partial class PlatformCameraScenario : CameraScenario
{
}
```

Where the MacCatalyst/iOS base implementation currently looks like:

```csharp
partial class PlatformCameraScenario : CameraScenario
{
    readonly Action<AVCaptureOutput> onInitialize;

    public PlatformCameraScenario(AVCaptureOutput output, Action<AVCaptureOutput> onInitialize)
    {
        Output = output;
        this.onInitialize = onInitialize;
    }

    public AVCaptureOutput Output { get; }

    public override void Initialize()
    {
        onInitialize.Invoke(Output);
        IsInitialized = true;
    }
}
```

Then we could implement a platform-specific implementation such as:

```csharp
/// <summary>
/// Apple based implementation
/// </summary>
public class PlatformBarcodeScanningScenario : PlatformCameraScenario
{
#if IOS || MACCATALYST
    public PlatformBarcodeScanningScenario() : base(
        new AVCaptureMetadataOutput(),
        o =>
        {
            ((AVCaptureMetadataOutput)o).SetDelegate(new BarcodeDetectionDelegate(), DispatchQueue.MainQueue);
            ((AVCaptureMetadataOutput)o).MetadataObjectTypes =
                AVMetadataObjectType.QRCode | AVMetadataObjectType.EAN13Code;
        })
    {
    }
#endif
}

sealed class BarcodeDetectionDelegate : AVCaptureMetadataOutputObjectsDelegate
{
    public override void DidOutputMetadataObjects(
        AVCaptureMetadataOutput captureOutput,
        AVMetadataObject[] metadataObjects,
        AVCaptureConnection connection)
    {
        foreach (var metadataObject in metadataObjects)
        {
            if (metadataObject is AVMetadataMachineReadableCodeObject readableObject)
            {
                var code = readableObject.StringValue;
                Console.WriteLine($"Metadata object {code} at {string.Join(",", readableObject.Corners ?? [])}");
            }
        }
    }
}
```

And now this is where I think it gets really elegant: developers can simply add scenarios to a

```xml
<toolkit:CameraView>
    <toolkit:CameraView.Scenarios>
        <toolkit:PlatformBarcodeScanningScenario />
        <toolkit:ZXingBarcodeDetectionScenario />
    </toolkit:CameraView.Scenarios>
</toolkit:CameraView>
```

What are people's thoughts?
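As an addendum, an Android counterpart could follow the same shape as the Apple base above. This is purely a sketch: the `PlatformCameraScenario` partial here is a hypothetical per-platform variant (compiled only for Android), and the `UseCase` type comes from the AndroidX CameraX bindings:

```csharp
#if ANDROID
using AndroidX.Camera.Core;

// Hypothetical Android base, mirroring the Apple version:
// the scenario owns a CameraX UseCase (e.g. ImageAnalysis)
// instead of an AVCaptureOutput.
partial class PlatformCameraScenario : CameraScenario
{
    readonly Action<UseCase> onInitialize;

    public PlatformCameraScenario(UseCase useCase, Action<UseCase> onInitialize)
    {
        UseCase = useCase;
        this.onInitialize = onInitialize;
    }

    public UseCase UseCase { get; }

    public override void Initialize()
    {
        onInitialize.Invoke(UseCase);
        IsInitialized = true;
    }
}
#endif
```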
-
Would the scenario you are suggesting allow a developer to access the preview frames from the various cameras (if I am reading it correctly, I think the answer is yes)? Additionally, could the "scenarios" have the ability to feed back some kind of overlay to the image on screen? I am looking at the feasibility of implementing document edge detection as such a scenario. This may also include some processing of the captured (non-preview) image so that it can be de-skewed and resized appropriately based upon the edges detected. It also looks to me that this suggestion would address discussions #1953, #1998, #2116 & #2160, so clearly such an enhancement would be popular.
-
Following on from a request from a community member (dotMorten), I have started a discussion on the idea of introducing barcode reading functionality to the toolkit.
I have performed some research into the possibility of using platform-specific APIs to read the contents of an image. The ideal scenario is that we could use this feature in combination with the CameraView, but neither should depend upon the other. This discussion/proposal will take on multiple sections/increments. I have included them all in a single discussion for now, but we will want to separate them out into multiple PRs at the very least.
1. Expose platform specific access to captured frames
The aim will be to enhance the `CameraManager` implementations to expose the ability to make use of platform-specific interaction with the platform-supplied image data.

Android

The CameraX API provides the `UseCase` abstract class which developers can make use of in order to perform a camera preview, image capture or even image analysis. I would like to propose that we introduce the following to the `CameraManager.android` implementation:
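The proposed Android code block appears to have been lost in this export. As a purely hypothetical sketch of what such an addition could look like (the member name `AddUseCase` and the `useCases` field are invented here, not toolkit API):

```csharp
// CameraManager.android.cs (sketch, assuming the AndroidX CameraX bindings)
public partial class CameraManager
{
    // Hypothetical: let callers register extra CameraX use cases
    // (e.g. an ImageAnalysis instance) alongside the preview/capture
    // use cases the manager already binds to the lifecycle.
    public void AddUseCase(AndroidX.Camera.Core.UseCase useCase)
    {
        useCases.Add(useCase); // rebound on the next camera start
    }
}
```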
iOS/macOS

Apple provides us with the `AVCaptureOutput` abstract base class that developers can use to direct inputs from audio/video sources to. I would like to propose that we introduce the following to the `CameraManager.macios` implementation:
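The iOS/macOS code block was also lost in this export. A hypothetical sketch (the member name `AddOutput` and the `captureSession` field are invented; `AVCaptureSession.CanAddOutput`/`AddOutput` are real AVFoundation APIs):

```csharp
// CameraManager.macios.cs (sketch, assuming the AVFoundation bindings)
public partial class CameraManager
{
    // Hypothetical: attach an additional AVCaptureOutput
    // (e.g. AVCaptureMetadataOutput or AVCaptureVideoDataOutput)
    // to the existing AVCaptureSession.
    public void AddOutput(AVFoundation.AVCaptureOutput output)
    {
        if (captureSession.CanAddOutput(output))
        {
            captureSession.AddOutput(output);
        }
    }
}
```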
2. Consume platform specific frames

In truth, once part 1 is complete developers could implement this section themselves in their own applications, and I fully expect them to for more advanced scenarios. I would like to look at providing some level of barcode scanning support, which may need to be a separate package due to further dependencies. I am also happy to try and build a PoC outside of the toolkit to prove this works.
The aim will be that the following details will be encapsulated through a `UseBarcodeScanning` extension method or something similar. I'm happy for suggestions here as this is still a little unclear. Perhaps we have a `BarcodeCameraView` in the new package?

Android
https://developers.google.com/ml-kit/vision/barcode-scanning/android
Making use of the above details we could look to implement something like:
Note: this is currently in Kotlin and lifted from the Google docs.
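The Kotlin snippet referenced here did not survive the export. A rough C# equivalent via the ML Kit barcode-scanning bindings might look like the following; treat the exact binding namespaces and type names as assumptions, and note `OnSuccessListener`/`OnFailureListener` are small `IOnSuccessListener`/`IOnFailureListener` wrapper classes not shown here:

```csharp
// Sketch only: ML Kit binding names may differ from this.
var options = new BarcodeScannerOptions.Builder()
    .SetBarcodeFormats(Barcode.FormatQrCode, Barcode.FormatEan13)
    .Build();

var scanner = BarcodeScanning.GetClient(options);

// image is an InputImage built from a camera frame, e.g.
// InputImage.FromMediaImage(mediaImage, rotationDegrees).
scanner.Process(image)
    .AddOnSuccessListener(new OnSuccessListener(barcodes =>
    {
        // Each result exposes the raw/display value and bounding box.
    }))
    .AddOnFailureListener(new OnFailureListener(ex =>
    {
        // Handle/report the failure.
    }));
```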
Then we could make use of the `CameraManager` changes from section 1 and write something like:
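The consuming snippet is missing here as well. Hypothetically, with an `AddUseCase`-style member on the Android `CameraManager` (an invented name) it could wire a CameraX `ImageAnalysis` use case to an ML Kit-backed analyzer; `BarcodeAnalyzer` below is a hypothetical `ImageAnalysis.IAnalyzer` implementation:

```csharp
// Sketch: AddUseCase and BarcodeAnalyzer are hypothetical names.
var imageAnalysis = new ImageAnalysis.Builder()
    .SetBackpressureStrategy(ImageAnalysis.StrategyKeepOnlyLatest)
    .Build();

imageAnalysis.SetAnalyzer(
    Executors.NewSingleThreadExecutor(),
    new BarcodeAnalyzer(result => OnBarcodeDetected(result)));

cameraManager.AddUseCase(imageAnalysis);
```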
iOS/macOS

Apple provides the ability to analyze an image:

- Analyze: https://developer.apple.com/documentation/visionkit/imageanalyzer/analyze(_:configuration:)
- machineReadableCode: https://developer.apple.com/documentation/visionkit/imageanalyzer/analysistypes/machinereadablecode
This would then potentially allow developers to write their own consumer of an output as follows:
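That consumer snippet is also missing from the export. The links above point at VisionKit's `ImageAnalyzer`; as a sketch I will use the longer-standing Vision framework instead (`VNDetectBarcodesRequest`), whose .NET bindings are well established — a substitute covering the same machine-readable-code case, not the same API:

```csharp
using Vision;
using Foundation;

void DetectBarcodes(CoreGraphics.CGImage image)
{
    var request = new VNDetectBarcodesRequest((req, error) =>
    {
        if (req.GetResults<VNBarcodeObservation>() is { } observations)
        {
            foreach (var observation in observations)
            {
                Console.WriteLine($"{observation.Symbology}: {observation.PayloadStringValue}");
            }
        }
    });

    using var handler = new VNImageRequestHandler(image, new NSDictionary());
    handler.Perform(new VNRequest[] { request }, out var performError);
}
```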
Then we could make use of the `CameraManager` changes from section 1 and write something like:

Tizen

TBC

Windows

TBC
3. Platform agnostic frame access
This is for the scenarios when performance is less critical. We could look to expose a variety of options: a `FrameReceived` event/callback on the `CameraManager` that will return a `byte[]` or `ArrayPool<byte>` that the developer can work with.
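For the platform-agnostic option, a minimal event shape might look like this (all names hypothetical, using the plain `byte[]` variant):

```csharp
public class FrameReceivedEventArgs : EventArgs
{
    public FrameReceivedEventArgs(byte[] data, int width, int height)
    {
        Data = data;
        Width = width;
        Height = height;
    }

    // Raw frame bytes; pixel format and stride would need to be
    // part of the contract for this to be usable cross-platform.
    public byte[] Data { get; }
    public int Width { get; }
    public int Height { get; }
}

public partial class CameraManager
{
    public event EventHandler<FrameReceivedEventArgs>? FrameReceived;

    // Called from each platform implementation when a frame arrives.
    internal void OnFrameReceived(byte[] data, int width, int height) =>
        FrameReceived?.Invoke(this, new FrameReceivedEventArgs(data, width, height));
}
```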
Concerns

The Android implementation appears to rely on ML Kit and introduces potential complexity around shipping with that dependency. The Android docs do provide the following details: https://developers.google.com/ml-kit/tips/installation-paths
Questions
Do we need to introduce this given that ZXing.Net already exists?