This repository was archived by the owner on Jun 16, 2023. It is now read-only.

Conversation

Boris-c

@Boris-c Boris-c commented Sep 25, 2019

Summary

Android's native camera takePicture() is very slow. I wasn't happy with it, and many people have complained too (see #1836 for instance).

Camera1 allows capturing frame previews. By processing frame previews we can avoid using takePicture() altogether.

The previews are in YUV and need to be transformed to RGB. The proposed implementation uses RenderScript for fast processing.
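The per-pixel math behind that YUV-to-RGB step can be sketched in plain Java. This is a hedged, standalone illustration using one common BT.601 integer approximation; `convertPixel` and `YuvToRgb` are illustrative names, not part of react-native-camera, and the real PR delegates this work to a RenderScript intrinsic rather than doing it pixel by pixel in Java:

```java
// Standalone sketch of NV21 (YUV) -> ARGB conversion for a single pixel,
// using the common BT.601 integer approximation. The RenderScript intrinsic
// performs the same math across the whole frame, much faster.
public class YuvToRgb {
    static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    // y, u, v are unsigned byte values (0..255) from the NV21 frame
    static int convertPixel(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return 0xFF000000 | (r << 16) | (g << 8) | b; // packed ARGB
    }

    public static void main(String[] args) {
        // Studio-range white (Y=235, U=V=128) and black (Y=16)
        System.out.println(Integer.toHexString(convertPixel(235, 128, 128)));
        System.out.println(Integer.toHexString(convertPixel(16, 128, 128)));
    }
}
```

Doing this per pixel on the CPU for a full preview frame would be slow, which is why the PR offloads it to RenderScript.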

Four modes are proposed (as an RN prop, "camera1ScanMode"): "none", "eco", "fast" and "boost".

  • none: use takePicture(), as currently done by react-native-camera
  • eco: capture a frame preview when a picture is requested, then process it
  • fast: capture all frame previews and process one when a picture is requested
  • boost: capture and process as many frame previews as possible. When a picture is requested, send the last one
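The trade-offs between the four modes can be modeled with a small, pure-Java sketch. Everything here is illustrative (the class, the counters, the callback names are hypothetical stand-ins for the real Camera1 preview pipeline); it only shows when each mode pays its capture and processing cost:

```java
// Illustrative model: how much frame capturing/processing each mode does
// before and during a picture request. Not the PR's actual classes.
public class ScanModeModel {
    enum Mode { NONE, ECO, FAST, BOOST }

    final Mode mode;
    int framesCaptured = 0, framesProcessed = 0;

    ScanModeModel(Mode mode) { this.mode = mode; }

    // Stands in for Camera1's per-frame preview callback
    void onPreviewFrame() {
        if (mode == Mode.FAST) framesCaptured++;                          // keep raw YUV
        if (mode == Mode.BOOST) { framesCaptured++; framesProcessed++; }  // convert eagerly
    }

    // Stands in for the picture request
    String takePicture() {
        switch (mode) {
            case NONE:  return "takePicture()";                        // slow native path
            case ECO:   framesCaptured++; framesProcessed++; return "frame"; // on demand
            case FAST:  framesProcessed++; return "frame";             // frame already buffered
            case BOOST: return "frame";                                // already processed
        }
        throw new IllegalStateException();
    }

    public static void main(String[] args) {
        for (Mode m : Mode.values()) {
            ScanModeModel model = new ScanModeModel(m);
            for (int i = 0; i < 10; i++) model.onPreviewFrame();
            model.takePicture();
            System.out.println(m + " captured=" + model.framesCaptured
                + " processed=" + model.framesProcessed);
        }
    }
}
```

The counters make the benchmark shape plausible: boost answers instantly because all the work happened ahead of time, at the cost of continuous processing overhead.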

In my app, on my device, the benchmark is the following (average of 3 pictures):

  • none: 1.7 s
  • eco: 415 ms
  • fast: 361 ms
  • boost: 171 ms

These measurements were done in React Native, from the call to capture() to the result being received after conversion to base64 and bridge transfer. The speed increase of the proposed change, if seen at a lower level (Android takePicture() vs frame previews processing), is even more dramatic.

I use the library to get pictures in base64. My tests did not cover all possible uses of the library.

Note that the minimum SDK upgrade to version 17 is needed for RenderScript.

Test Plan

I'll leave it to more experienced users of the library to test every possible use case.

What's required for testing (prerequisites)?

What are the steps to reproduce (after prerequisites)?

Compatibility

OS      | Implemented
--------|------------
iOS     |
Android |

Checklist

  • I have tested this on a device and a simulator
  • I added the documentation in README.md
  • I mentioned this change in CHANGELOG.md
  • I updated the typed files (TS and Flow)
  • I added a sample use of the API in the example project (example/App.js)

@bkDJ

bkDJ commented Sep 25, 2019

There is a world of difference between <200ms and ~2 seconds in how a user experiences photo capture in an app. Thanks so much for the contribution! Could you please mention which device your benchmark was performed on?

@Boris-c
Author

Boris-c commented Sep 25, 2019

@bkDJ Values were obtained on a Nokia 7.1 (Android One).

@cristianoccazinsp
Contributor

This is very interesting. Is there a lot of overhead for having the preview running?

Also, was the measured time done with Camera1 implementation? Camera1 is quite fast right now, almost instant with the latest changes that removed unnecessary focus attempts.

@Boris-c
Author

Boris-c commented Sep 27, 2019

@cristianoccazinsp The overhead here comes from processing the frame preview (YUV to RGB and resizing). The overhead is minimal in eco and fast modes, bigger in boost.

The proposed implementation works with Camera1 only.

@cristianoccazinsp
Contributor

@Boris-c thanks for the update! My question about the overhead was mostly about the phone being usable (at least on average phones) while the preview is running. I have an application that overlays a bunch of stuff on top of the camera and it is critical for the UI/JS code to remain efficient even with the preview running.

About Camera1, I do understand it is Camera1 only. My question was: did you run the tests with master? Because I'm far from seeing 1.7 s captures even on cheap devices; it's more like 0.5-1 s at most, since it now captures right away what's on the preview.

I will give this branch a test if I have time later today.

@Boris-c
Author

Boris-c commented Sep 27, 2019

@cristianoccazinsp I did all my tests using release 3.4.0. At what resolution do you get 0.5-1 second per capture?

@cristianoccazinsp
Contributor

@Boris-c default resolution, and I'm even doing post-processing with image resizing afterwards with another library.
Tests come from a Pixel 2 and a Motorola Moto G5. Perhaps the camera on the Nokia is heavier by itself, hard to tell, but I'm definitely not getting those high numbers.

@Boris-c
Author

Boris-c commented Sep 27, 2019

@cristianoccazinsp I think no overhead can be perceived by the user in eco and fast modes.

@cristianoccazinsp
Contributor

cristianoccazinsp commented Sep 27, 2019

I've been testing on a Moto G5 (a very, very slow device; I expect this to be much faster with a prod build) with debug/dev mode on:

  • 'none': 1.1 seconds in average
  • 'eco': crash
  • 'fast': crash
  • 'boost': crash

Not sure where the crashes are coming from, did I do something wrong? All crashes have the same cause.

Crash log

2019-09-27 13:01:34.052 31138-31166/com.zinspector3.dev E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #2
    Process: com.zinspector3.dev, PID: 31138
    java.lang.RuntimeException: An error occurred while executing doInBackground()
        at android.os.AsyncTask$3.done(AsyncTask.java:353)
        at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
        at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
        at java.util.concurrent.FutureTask.run(FutureTask.java:271)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
        at java.lang.Thread.run(Thread.java:764)
     Caused by: java.lang.NullPointerException: Attempt to get length of null array
        at java.io.FileOutputStream.write(FileOutputStream.java:309)
        at org.reactnative.camera.tasks.ResolveTakenPictureAsyncTask.doInBackground(ResolveTakenPictureAsyncTask.java:75)
        at org.reactnative.camera.tasks.ResolveTakenPictureAsyncTask.doInBackground(ResolveTakenPictureAsyncTask.java:27)
        at android.os.AsyncTask$2.call(AsyncTask.java:333)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162) 
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636) 
        at java.lang.Thread.run(Thread.java:764) 

Code snippet of how I'm using the Camera:

<RNCamera
  ref={ref => {
    this.camera = ref;
  }}
  style={cameraStyle}
  //useCamera2Api={true}
  camera1ScanMode='boost'
  ratio={this.state.aspectRatioStr}
  flashMode={flashMode}
  zoom={zoom}
  maxZoom={MAX_ZOOM}
  whiteBalance={WB_OPTIONS[wb]}
  autoFocusPointOfInterest={this.state.focusCoords}
>
  ...
</RNCamera>

let options = {
  quality: 1,
  skipProcessing: true, 
  fixOrientation: false
};

await this.camera.takePictureAsync(options);

@Boris-c
Author

Boris-c commented Sep 27, 2019

It seems your crash happens in ResolveTakenPictureAsyncTask.java, in the section of the code that handles the option skipProcessing. I'm not sure because I don't know what version of that file you have.

I developed that improvement last year, before that option was available. skipProcessing relies on receiving a byte array in mImageData, whereas my method sends a bitmap in mBitmap.

Try to remove skipProcessing from your options.

@cristianoccazinsp
Contributor

Still getting the same crash with skipProcessing={false}.

I'm using your master branch directly from github, so I should be using exactly your code.

@Boris-c
Author

Boris-c commented Sep 27, 2019

[screenshot of the code around line 75 of ResolveTakenPictureAsyncTask.java]

Line 75 (from your crash report above) is in the section that handles skipProcessing. Are you sure you've deactivated skipProcessing? Are you sure that it's the same crash?

@Boris-c
Author

Boris-c commented Sep 27, 2019

Oh, but I see, they did something wrong, they just check mOptions.hasKey("skipProcessing") instead of checking its value.

Try to remove the option instead of setting it to false!...

@Boris-c
Author

Boris-c commented Sep 27, 2019

It should be mOptions.hasKey("skipProcessing") && mOptions.getBoolean("skipProcessing") or something of the kind.
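The difference between the buggy check and the suggested fix can be shown with a minimal, standalone Java sketch; a plain `Map` stands in for React Native's `ReadableMap` here, and `OptionCheck` is an illustrative class, not code from the library:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the bug: hasKey()/containsKey() alone treats
// `skipProcessing: false` the same as `skipProcessing: true`.
public class OptionCheck {
    static boolean skipProcessingBuggy(Map<String, Boolean> options) {
        // Wrong: only checks presence, ignores the value
        return options.containsKey("skipProcessing");
    }

    static boolean skipProcessingFixed(Map<String, Boolean> options) {
        // Right: check presence AND value
        return options.containsKey("skipProcessing") && options.get("skipProcessing");
    }

    public static void main(String[] args) {
        Map<String, Boolean> opts = new HashMap<>();
        opts.put("skipProcessing", false); // user explicitly disables it
        System.out.println(skipProcessingBuggy(opts));
        System.out.println(skipProcessingFixed(opts));
    }
}
```

This is why setting `skipProcessing={false}` still crashed: presence of the key was enough to take the skip-processing code path.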

@cristianoccazinsp
Contributor

You are absolutely right, that's actually a good catch/bug.

I now see where you were getting such high times: if skipProcessing is not used, the camera is definitely very slow, but it is quite fast with skipProcessing. It is really not the Android camera API's fault but rather all the processing done by the library itself.

New times, without skipProcessing:

  • none: 2.4 seconds (against 1.1; holy cow, skipProcessing makes a huge difference)
  • eco: 0.9
  • fast: 0.7
  • boost: 0.5

I've noticed, however, that picture quality is considerably lower with anything that's not set to "none". It is darker, blurrier, and lower resolution (a few hundred KB smaller).

Original:
[photo]

Fast/Boost:
[photo]
[photo]

Are you willing to do fixes so it also works with skipProcessing? Have you also tried with back/front/wide-lens cameras? Your code is not up to date with master, so I can't test with the camera selection options.

@cristianoccazinsp
Contributor

There's also another subtlety. When using any mode != 'none' on Android 10, the camera no longer makes the capture sound. I believe Android 10 is now copying iOS, and any app that makes use of the camera API is forced to make a sound.

Testing on a Pixel 2 I've also got:

none / no skipProcessing: 1.1 seconds
none / skipProcessing: 1 second
boost / no skipProcessing: 0.25 seconds

Again, the picture quality is severely reduced with eco/fast/boost (mostly light and size).

I've gotta say it's a good trade-off if you need very quick photos. The API might get a bit complicated with so many options (skip processing, fix orientation, scan mode).

@Boris-c
Author

Boris-c commented Sep 27, 2019

You can try, around line 75 of ResolveTakenPictureAsyncTask.java:

                if (mImageData != null) {
                    // Save byte array (it is already a JPEG)
                    fOut.write(mImageData);
                } else if (mBitmap != null) {
                    // Convert to JPEG and save
                    mBitmap.compress(Bitmap.CompressFormat.JPEG, getQuality(), fOut);
                }

... as a replacement of ...

                    // Save byte array (it is already a JPEG)
                    fOut.write(mImageData);

(that's to test skipProcessing with eco, fast or boost)

@Boris-c
Author

Boris-c commented Sep 27, 2019

As for the quality of the picture, no idea. I get very good pictures on my device.

@cristianoccazinsp
Contributor

Most likely related to the fact that one mode takes the picture from the camera, and the other just captures from the preview. The preview is always lower quality than the final capture.

The change above fixes the crash with skipProcessing, which is good.

Do you think you can add that fix to your branch, and rebase from master to include all the latest updates?

I would also definitely mention the differences when using any of those scan modes: picture quality might suffer, there will be no shutter sound performed automatically by Android, and there might be a slight overhead.

@cristianoccazinsp
Contributor

Here's another "danger" of using the fast capture options:

[photo]

@cristianoccazinsp
Contributor

Another comparison using none and boost. You can see the boost mode pretty much skips any software processing from the camera. This is very noticeable on a Pixel 2, which relies heavily on post-processing.
Again, it's a very good trade-off, but it should definitely be mentioned!

Original:
[photo]

Boost:
[photo]

…act-native-camera

* 'master' of https://github.com/react-native-community/react-native-camera:
  chore(release): 3.6.0 [skip ci]
  feat(android): Support to enumerate and select Camera devices (react-native-camera#2492)
  chore(release): 3.5.0 [skip ci]
  fix(android): Update Camera1 to not crash on invalid ratio (react-native-camera#2501)
  feat(ios): videoBitrate option for iOS (react-native-camera#2504)
  Fix jitpack.io maven link (react-native-camera#2497)
  Use a more appropriate orientation change event
@Boris-c
Author

Boris-c commented Sep 27, 2019

I didn't rebase, since I didn't put my changes in a branch. But I pulled from master.

Then I implemented the changes needed to get the image as base64 even in "skipProcessing" mode, and allowed deactivating file saving in that mode.

I also fixed the check of skipProcessing.

Regarding the missing shutter sound you mentioned, there is a prop, playSoundOnCapture (not added by me), that lets you require it.

@Boris-c
Author

Boris-c commented Sep 27, 2019

Last thing: we have an Android app used by hundreds of people (for research), who have uploaded thousands of pictures of their food in various light conditions with a whole range of devices, and we didn't notice any problems with picture quality in real life using the implementation proposed in this PR.

@cristianoccazinsp
Contributor

Thanks for the updates @Boris-c!

About picture quality, the difference is really not that significant. It becomes more obvious when using the focus feature, since that makes a whole lot of changes to the lighting capture.

Again, I think the difference is not too significant, but if you test it is noticeable and will vary from device to device based on the camera features. I'm still not sure where the difference comes from, but it is definitely there!

@Boris-c
Author

Boris-c commented Oct 10, 2019

I don't have experience with telephoto and ultra wide cameras, nor did I test the front/back camera switch.

But the code mainly hooks into - (void)captureOutput:didOutputSampleBuffer:fromConnection:, which was already there, taking care of text recognition, face recognition and so forth. So my guess is that everything that worked with those will work with my code.

I didn't do a side by side comparison like you did on Android. But I'll do one now and post it.

@Boris-c
Author

Boris-c commented Oct 10, 2019

Apparently, the same thing happens as on Android, i.e. the lack of post-processing is visible. It's currently very dark in my home, and the difference is noticeable when the picture is zoomed.

The good news is that the similarity on both platforms will make it easier for the documentation ;)

@cristianoccazinsp
Contributor

That's good to know. Do you have any timing samples? Remember to use skipProcessing as well since that flag makes a huge difference.

@cristianoccazinsp
Contributor

cristianoccazinsp commented Oct 10, 2019

Hey @Boris-c , I've found a few issues while testing on an iPhone 7.

  • "failing real check" is constantly written to the console output. NSLog statements are written even when built for production, which can cause some performance problems. Note that this seems to happen when changing the cameraScanMode prop.

2019-10-10 17:27:53.403369-0300 tmi3[1200:461242] failing real check
2019-10-10 17:27:53.438288-0300 tmi3[1200:461242] failing real check
.... x 100

  • Flash does not work. I'm guessing this is an obvious caveat when using such a mode. This applies to Android as well.

  • There's a black screen for half a second the first time a picture is taken, have you noticed that? If not, I can try making a video of it. I'm not sure what's causing it exactly, but it causes the first picture after the camera is mounted to be nearly full black. This can also happen easily when switching from back to front camera and vice versa; the first picture always comes up fully black. When using the front camera, the first picture is pretty much 100% black, but there's some image when using the back camera. See sample pictures below.

  • An "Already capturing" error can leave the camera stuck (a remount is needed) after switching from back to front cameras multiple times, recording video, or changing the cameraScanMode property.

  • Lastly, I'm guessing this is due to the poor iPhone 7 camera, but I really didn't notice a difference in photo quality. The only noticeable difference is that the shutter sound is not forced as when using the actual capture API.

Black photos on the first picture that is taken. Back and front cameras:
[photo]
[photo]

@Boris-c
Author

Boris-c commented Oct 11, 2019

Hi @cristianoccazinsp, here are my comments:

  • "failing real check": I did not implement switching off the session's sample-buffer output to the delegate, because, as a whole, this is weirdly implemented for the other functionality (text, face and barcode detectors), and I didn't need it (I know it should be there though!). Take the following code:
- (void)setupOrDisableTextDetector
{
    if ([self canReadText] && [self.textDetector isRealDetector]){
        if (!self.videoDataOutput) {
            self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
            if (![self.session canAddOutput:_videoDataOutput]) {
                NSLog(@"Failed to setup video data output");
                [self stopTextRecognition];
                return;
            }
            NSDictionary *rgbOutputSettings = [NSDictionary
                dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];
            [self.videoDataOutput setVideoSettings:rgbOutputSettings];
            [self.videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
            [self.videoDataOutput setSampleBufferDelegate:self queue:self.sessionQueue];
            [self.session addOutput:_videoDataOutput];
        }
    } else {
        [self stopTextRecognition];
    }
}

- (void)stopTextRecognition
{
    if (self.videoDataOutput && !self.canDetectFaces) {
        [self.session removeOutput:self.videoDataOutput];
    }
    self.videoDataOutput = nil;
}

In setupOrDisableTextDetector, stopTextRecognition is called if we don't need to detect text or fail at adding the output, which in turn removes the video data output, if it exists, depending on whether we need to detect faces too. This is wrong, and the same logic appears in different places, with a lot of duplicated code. For instance, disabling text detection could make barcode detection (or my picture capture method) fail.

Should I rewrite the logic here, I'd rather have a single function that checks the need for the video data output, does the job of setting up a new one or removing an existing one, and is called from different places. The code that logs "failing real check" probably shouldn't be there in the first place, since didOutputSampleBuffer should never be called unless we want it to be called.

  • Flash not working: that's no surprise I guess. No idea if it can be programmatically turned on.

  • Black screen: never had that, neither on Android nor on iOS

  • "Already capturing": might be fixed when the first issue is fixed.

  • Picture quality and shutter sound: There is definitely a small difference in quality on iPhone X. Shutter sound fixed in latest push.
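The consolidation described above (one decision point for the shared video data output, instead of each detector removing it independently) can be sketched abstractly. This is a hedged Java model, not the Objective-C fix itself; `VideoOutputManager`, `setConsumer` and the consumer names are all illustrative:

```java
import java.util.HashSet;
import java.util.Set;

// Model: the shared video data output stays attached as long as ANY
// consumer (text detector, face detector, barcode reader, fast capture)
// still needs frames. One function makes the attach/detach decision.
public class VideoOutputManager {
    private final Set<String> consumers = new HashSet<>();
    private boolean outputAttached = false;

    void setConsumer(String name, boolean enabled) {
        if (enabled) consumers.add(name); else consumers.remove(name);
        updateVideoDataOutput(); // single decision point
    }

    private void updateVideoDataOutput() {
        boolean needed = !consumers.isEmpty();
        if (needed && !outputAttached) outputAttached = true;       // session.addOutput(...)
        else if (!needed && outputAttached) outputAttached = false; // session.removeOutput(...)
    }

    boolean isAttached() { return outputAttached; }

    public static void main(String[] args) {
        VideoOutputManager m = new VideoOutputManager();
        m.setConsumer("textDetector", true);
        m.setConsumer("fastCapture", true);
        m.setConsumer("textDetector", false); // fastCapture still needs frames
        System.out.println(m.isAttached());
        m.setConsumer("fastCapture", false);  // nobody needs frames anymore
        System.out.println(m.isAttached());
    }
}
```

Under this scheme, disabling text detection can no longer break barcode detection or the fast-capture path, because removal only happens when the last consumer goes away.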

@cristianoccazinsp
Contributor

Hi @Boris-c ,

I'm not sure what's the best way to fix the "failing real check" issue. I personally don't quite understand the logic behind text/barcode recognition either, nor why those checks are needed. It would be great if you could fix it.

The black screen really happens all the time. Can you try to reproduce it? Make sure you're using all the camera features (camera switching, zoom, white balance, touch to focus, video recording, etc.). I'm also using skipProcessing: true and fixOrientation: false, and I'm not setting defaultVideoQuality nor pictureSize. The black screen looks very similar to the flicker that happens when recording video due to preset changes.

@cristianoccazinsp
Contributor

cristianoccazinsp commented Oct 11, 2019

@Boris-c sorry for the spam!

I've taken a video that shows the black screen issue. Now that you've added sound, it is even stranger! It looks like the picture is taken twice, but it is not the code or me taking the picture twice, since you can see only one picture is added, and the same code captures only once in "none" mode. Even stranger, the issue doesn't happen in landscape mode!
You can see a video of it in action below.

"fast mode" on:
https://zinspectordev2.s3.amazonaws.com/usi/2/1570805547eec281ab6bfc4643bd1f40cd68f30714.mp4

"none" mode:
https://zinspectordev2.s3.amazonaws.com/usi/2/1570805627b9212c7925cc4ebf8e3256729b14ea19.mp4

Second issue: the pictures taken with this mode do not seem to be using the capture preset configuration very well. In the videos above, I'm also testing capturing a photo with the default settings, but also with the "High" pictureSize preset, which is the one used to capture video.
As you can see from the photos below, fast and none modes yield totally different picture sizes. It looks like "fast" does not capture exactly what's on the preview.

UPDATE: Looks like the iOS code crops the image to match the preview size here (https://github.com/react-native-community/react-native-camera/blob/master/ios/RN/RNCamera.m#L670). This would explain the difference in image sizes when using one method or the other.
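The crop-to-preview step referenced in the UPDATE above boils down to aspect-ratio rect math. Here is a hedged, standalone sketch of that computation (the `CropMath` class and `cropToAspect` helper are illustrative, not the actual RNCamera.m code):

```java
// Sketch: given a captured image and the preview's aspect ratio, compute
// the centered crop rectangle {x, y, width, height} that matches the preview.
public class CropMath {
    static int[] cropToAspect(int imgW, int imgH, int targetW, int targetH) {
        // Compare aspect ratios via cross-multiplication to avoid float error
        if (imgW * targetH > imgH * targetW) {
            // Image is wider than the target aspect: trim the sides
            int w = imgH * targetW / targetH;
            return new int[]{(imgW - w) / 2, 0, w, imgH};
        } else {
            // Image is taller than the target aspect: trim top and bottom
            int h = imgW * targetH / targetW;
            return new int[]{0, (imgH - h) / 2, imgW, h};
        }
    }

    public static void main(String[] args) {
        // A 4:3 sensor image cropped to a 16:9 preview loses rows top and bottom
        int[] r = cropToAspect(4032, 3024, 16, 9);
        System.out.println(java.util.Arrays.toString(r));
    }
}
```

If takePicture() applies this crop and the fast path does not, that alone would account for the different output sizes between the two modes.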

"fast mode" / default and "High" pictureSize (look at the lamp):
[photo]
[photo]

"none mode" / default and high pictureSize:
[photo]
[photo]

RN Camera code that was used:

<RNCamera
  ref={ref => {
    this.camera = ref;
  }}
  style={cameraStyle}
  cameraId={cameraId} // set to back camera after onStatusChange
  camera1ScanMode={this.props.settings.photoFastMode ? 'eco' : 'none'}
  cameraScanMode={this.props.settings.photoFastMode ? 'fast' : 'none'}
  pictureSize={this.state.pictureSize} // testing using "Photo" (default) and "High" (video)
  ratio={this.state.aspectRatioStr} // used 4:3 while testing
  flashMode={flashMode} // none
  zoom={zoom} // 0
  maxZoom={MAX_ZOOM} // 8
  whiteBalance={WB_OPTIONS[wb]}
  autoFocusPointOfInterest={this.state.focusCoords} // undefined / none set
  onStatusChange={this.onCameraStatusChange}
  pendingAuthorizationView={
    <SafeAreaView style={styles.cameraLoading}>
      <Spinner color={style.brandLight}/>
    </SafeAreaView>
  }
  notAuthorizedView={
    <View>
      {cameraNotAuthorized}
    </View>
  }
>
  ...
</RNCamera>

* commit '0158ebfd39e053cdaf2385e7088074b375fa268d':
  chore(release): 3.8.0 [skip ci]
  doc(tidelift): add tidelift for enterprise
  feat(android): support null object of androidPermissionOptions to avoid request window (react-native-camera#2551)
  feat(ci): CircleCI Fix & Optimizations (react-native-camera#2550)
  feat(readme): improve tidelift
  feat(torch): Torch fixes for iOS and a few nil checks. (react-native-camera#2543)
  fix(ios): Honor captureAudio flag by not requesting audio input if set to false. (react-native-camera#2542)
  chore(release): 3.7.2 [skip ci]
  fix(ios): Remove flickering due to audio input (react-native-camera#2539)
@Boris-c
Author

Boris-c commented Oct 22, 2019

@cristianoccazinsp I fixed the dark image issue, which was caused by the call to set the video orientation (that apparently triggers a light, white-balance and focus check by the camera).

I also merged 3.8.0

@cristianoccazinsp
Contributor

@Boris-c were you also able to review the other issue related to image cropping? Do you think the time difference might really just be that the original takePicture does that image cropping first? If you were to add that to the fast mode, would it still be faster? I'm not sure what's best to do there, but I'm guessing that both fast and none modes should take identical pictures.

@cristianoccazinsp
Contributor

@Boris-c Also, do you have an idea whether these changes for Android might cause issues if the run/stop steps are moved to non-UI threads? I'm testing some changes to improve the camera startup time and deal with ANRs when coming from the background here: https://github.com/react-native-community/react-native-camera/compare/master...cristianoccazinsp:android-ui-thread?expand=1

Lastly, I'm also trying to test issues on iOS related to audio interruptions, do you think these changes might break when the session is updated due to audio interruptions? Changes here: https://github.com/react-native-community/react-native-camera/compare/master...cristianoccazinsp:ios-interruptions?expand=1

@Boris-c
Author

Boris-c commented Oct 22, 2019

I don't think the time difference comes from images not being cropped. My guess is that something bad is happening in Android's public camera API (since the Camera app itself performs much faster than takePicture()).

Avoiding the cropping of the preview frames to what the preview itself is showing is, in my opinion, impossible.

Regarding audio interruptions, no idea!...

@cristianoccazinsp
Contributor

cristianoccazinsp commented Oct 23, 2019 via email

@Boris-c
Author

Boris-c commented Oct 23, 2019

About Android, I was referring to the speed: the huge time it takes to get the result from takePicture() is probably caused by some implementation peculiarity on Android's side, not by the picture cropping.

And when I say the Camera app, I mean the app provided with the system, which takes pictures much faster than what we can do (in native apps as well as RN apps, since the bottleneck is the native call to takePicture()).

A while ago I benchmarked the whole process in detail, from RN's capture call to the callback in RN. takePicture() is the bad guy in the chain. That's how I came to the idea of capturing the frame preview.

@nguyenhose

Is anyone in charge of this PR? This is also happening in my app now.

@sibelius
Collaborator

Does this solve issue #2577?

@cristianoccazinsp
Contributor

Does this solve issue #2577?

I believe that PR will make things a bit easier (since there's no skipProcessing to worry about; it was in fact @Boris-c who suggested removing it), but this PR still makes capture much faster on Android, since it skips the takePicture calls altogether and uses a much different approach.

Still think this would be a very good addition, as long as it is not the default behaviour due to all the extra caveats it might add.

@kperreau

It would be nice if it could be merged into the master.

@ShivanshJ

Why is this not being merged? It's been over a year.

@sibelius
Collaborator

sibelius commented Sep 1, 2020

we need to fix conflicts

and have more people testing this

@rafaelcavalcante

@sibelius are there any discussions outside this PR about what should be tested before it goes to master? I'm willing to help.

@louisobasohan

looks like there are conflicts

@sibelius
Collaborator

sibelius commented Oct 6, 2020

the first thing would be to create a new PR without conflicts

@pranatha

Hi @Boris-c, why are the width and height returned by takePictureAsync in your version different from the current version? I use @pontusab/react-native-image-manipulator for cropping pictures, and on certain devices the cropping process fails when using your version because of the image size inconsistency.
