Allow image scaling on the mjpeg stream #138

Merged
merged 24 commits into appium:master on Feb 5, 2019

Conversation

@dmissmann (Author)

Currently we get full-scale screenshots on the mjpeg stream. To reduce bandwidth, or to make processing the screenshots on the host more efficient, we could scale the images down on the device already.
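
For context, this kind of downscaling can be done on-device with ImageIO and CoreGraphics alone. The sketch below only illustrates the general approach and is not the code from this PR; the ScaledJpegWithData helper and its parameter handling are assumptions. It decodes a JPEG frame, redraws it at the requested scaling factor, and re-encodes the result with the requested compression quality.

#import <CoreGraphics/CoreGraphics.h>
#import <Foundation/Foundation.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Hypothetical helper: decode `jpeg`, redraw it at `scalingFactor` (expected in (0..1])
// and re-encode the result as JPEG with `compressionQuality` (0..1).
static NSData *ScaledJpegWithData(NSData *jpeg, CGFloat scalingFactor, CGFloat compressionQuality)
{
  CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpeg, NULL);
  if (NULL == source) {
    return nil;
  }
  CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
  CFRelease(source);
  if (NULL == image) {
    return nil;
  }
  size_t width = (size_t)(CGImageGetWidth(image) * scalingFactor);
  size_t height = (size_t)(CGImageGetHeight(image) * scalingFactor);
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                               (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
  CGColorSpaceRelease(colorSpace);
  if (NULL == context) {
    CGImageRelease(image);
    return nil;
  }
  // Redraw the decoded frame into the smaller bitmap, then snapshot it.
  CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
  CGImageRef scaledImage = CGBitmapContextCreateImage(context);
  CGContextRelease(context);
  CGImageRelease(image);

  // Re-encode the downscaled frame as JPEG with the requested quality.
  NSMutableData *result = [NSMutableData data];
  CGImageDestinationRef destination =
    CGImageDestinationCreateWithData((__bridge CFMutableDataRef)result, kUTTypeJPEG, 1, NULL);
  NSDictionary *properties = @{
    (__bridge NSString *)kCGImageDestinationLossyCompressionQuality: @(compressionQuality),
  };
  CGImageDestinationAddImage(destination, scaledImage, (__bridge CFDictionaryRef)properties);
  CGImageDestinationFinalize(destination);
  CFRelease(destination);
  CGImageRelease(scaledImage);
  return result.copy;
}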

@mykola-mokhnach

It would be nice to have an integration test for the image scaler class
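
A test along these lines could be a starting point. It is only a sketch: it assumes the scaledImageWithImage:scalingFactor:compressionQuality: selector seen in this PR lives on FBImageIOScaler and synchronously returns JPEG-encoded NSData, which may not match the final interface.

#import <XCTest/XCTest.h>
#import <UIKit/UIKit.h>

#import "FBImageIOScaler.h"

@interface FBImageIOScalerTests : XCTestCase
@end

@implementation FBImageIOScalerTests

// Build a solid-color JPEG fixture of the given pixel size.
- (NSData *)fixtureJpegWithSize:(CGSize)size
{
  UIGraphicsBeginImageContextWithOptions(size, YES, 1);
  [UIColor.redColor setFill];
  UIRectFill(CGRectMake(0, 0, size.width, size.height));
  UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return UIImageJPEGRepresentation(image, 1.0);
}

- (void)testScalingHalvesTheFrameDimensions
{
  NSData *original = [self fixtureJpegWithSize:CGSizeMake(200, 100)];
  FBImageIOScaler *scaler = [[FBImageIOScaler alloc] init];
  // Assumed interface, see the note above.
  NSData *scaled = [scaler scaledImageWithImage:original
                                  scalingFactor:0.5
                             compressionQuality:0.75];
  UIImage *result = [UIImage imageWithData:scaled];
  XCTAssertNotNil(result);
  XCTAssertEqualWithAccuracy(result.size.width, 100, 1);
  XCTAssertEqualWithAccuracy(result.size.height, 50, 1);
}

@end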

@@ -0,0 +1,80 @@
/**

👍

@mykola-mokhnach

It looks like CI is not very happy

@dmissmann (Author)

It's a bit happier now. I used Xcode 10 and it was fine with it, but Xcode 9 was complaining...

}
NSData *scaled = [self scaledImageWithImage:next
                              scalingFactor:scalingFactor
                         compressionQuality:compressionQuality];


Do we need to recompress the original image after it has already been compressed by XCTest? Wouldn't this be a waste of resources?

@dmissmann (Author)

Yes, otherwise the size stays roughly the same.

@dmissmann (Author)

I removed the additional setting for compression quality; now we use the same value both for taking the screenshot and for creating the JPEG after scaling.

@mykola-mokhnach

Everything looks fine to me. I only have one question: does it make sense to apply the compression twice, once while taking the screenshot and a second time while scaling? I would assume we could simply set the compression factor to 100% for the first pass and apply the requested quality on the second one, or vice versa.

@dmissmann (Author)

Setting the compressionQuality when calling _XCT_requestScreenshotOfScreenWithID:withRect:uti:compressionQuality:withReply: increases the CPU usage. I also tried switching from kUTTypeJPEG to kUTTypeRawImage and kUTTypeBMP, but that increased it even further.
Maybe this is caused by the much bigger payloads that are passed to FBImageIOScaler, which makes decoding them into CGImageRef objects more expensive.

@@ -23,6 +23,8 @@
static NSString* const ELEMENT_RESPONSE_ATTRIBUTES = @"elementResponseAttributes";
static NSString* const MJPEG_SERVER_SCREENSHOT_QUALITY = @"mjpegServerScreenshotQuality";
static NSString* const MJPEG_SERVER_FRAMERATE = @"mjpegServerFramerate";
static NSString* const MJPEG_SCALING_FACTOR = @"mjpegScalingFactor";
static NSString* const MJPEG_COMPRESSION_FACTOR = @"mjpegCompressionFactor";
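
Once this lands, the new options should be reachable the same way as the existing mjpegServerScreenshotQuality and mjpegServerFramerate settings, via the Appium settings endpoint WebDriverAgent serves; roughly like this, where the concrete values are only placeholders and their valid ranges are an assumption, not taken from this PR:

POST /session/:sessionId/appium/settings
{"settings": {"mjpegScalingFactor": 50, "mjpegServerScreenshotQuality": 25}}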
@mykola-mokhnach merged commit c83f796 into appium:master on Feb 5, 2019
3 participants