@hsiaoer released this Sep 11, 2019

[4.0.0]

This release includes several improvements, listed below.

Moving forward, this GitHub repo will be deprecated in favor of hosting our SDK in a new Maven repository. Update the repository URL in your Gradle configuration:

// Change:
maven {
    url 'https://raw.github.com/fritzlabs/fritz-repository/master'
}

// To:
maven {
    url "https://fritz.mycloudrepo.io/public/repositories/android"
}

Changes

  • Adding support for model variants (fast, accurate, small).
    • Fast models are optimized for runtime performance at the cost of some accuracy. Use them when model predictions need to happen quickly (e.g. video processing, live camera preview).
    • Accurate models are optimized for the best model predictions at the cost of speed. Use them when working with still images (e.g. photo editing).
    • Small models are optimized for model size at the cost of accuracy. Use them when you want to avoid bloating your app with large models.
  • Models now have their own versioning system separate from the SDK and follow semantic versioning.
  • Removing deprecated methods from the result classes.
  • 2x speed improvement for image processing with RenderScript.
  • Adding TensorFlow Lite support for CPU threads, the GPU delegate, and NNAPI (see the sketch after this list).
  • Improving rendering on SurfaceViews.
  • Improving the segmentation blend mode (hair coloring).
  • Now uses GPUImage v2.0.4 (https://github.com/cats-oss/android-gpuimage).
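
As a rough illustration of what these TensorFlow Lite settings control, the sketch below uses the plain TensorFlow Lite Java API directly. It is not the Fritz predictor API (how the SDK surfaces these options may differ), and modelBuffer is a placeholder for a loaded .tflite model.

import java.nio.MappedByteBuffer;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Illustrative sketch of the underlying TensorFlow Lite interpreter options.
// `modelBuffer` is a placeholder for a MappedByteBuffer holding a .tflite model.
Interpreter.Options tfliteOptions = new Interpreter.Options();

// CPU: run inference across multiple threads.
tfliteOptions.setNumThreads(4);

// NNAPI: execute on the Android Neural Networks API where supported.
tfliteOptions.setUseNNAPI(true);

// GPU: attach the GPU delegate (requires the tensorflow-lite-gpu dependency).
// In practice you would pick either the GPU delegate or NNAPI, not both.
GpuDelegate gpuDelegate = new GpuDelegate();
tfliteOptions.addDelegate(gpuDelegate);

Interpreter interpreter = new Interpreter(modelBuffer, tfliteOptions);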

To Migrate from 3.x.x to 4.0.0

Core

  • Image rotation is now an enum.
// Old version
int imgRotation = FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);
FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap, imgRotation);

// Change
ImageRotation imageRotation = FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);
FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap, imageRotation);

FritzVision

  • Add RenderScript support to your app
// In your app/build.gradle

android {
    defaultConfig {
        renderscriptTargetApi 21
        renderscriptSupportModeEnabled true
    }
}
  • For any of the predictor options, you can now declare options in the following way:
// Old: builder pattern
FritzVisionSegmentPredictorOptions options = new FritzVisionSegmentPredictorOptions.Builder()
    .targetConfidenceScore(.3f)
    .build();

// New: set public fields directly on the options object
FritzVisionSegmentationPredictorOptions options = new FritzVisionSegmentationPredictorOptions();
options.confidenceThreshold = .3f;

Image Segmentation

  • Renaming Classes ("Segment" -> "Segmentation"):

    • FritzVisionSegmentPredictor -> FritzVisionSegmentationPredictor
    • FritzVisionSegmentResult -> FritzVisionSegmentationResult
    • FritzVisionSegmentPredictorOptions -> FritzVisionSegmentationPredictorOptions
    • MaskType -> MaskClass
  • Model dependencies:

    • Model libraries are now versioned separately from the SDK, allowing individual updates and releases when improvements are made. As of this release, all models are on version 1.0.0. (See the sketch at the end of this section for loading a model and running a prediction.)
    • Sky Segmentation:
      • Fast Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-sky-segmentation-model-fast:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new SkySegmentationManagedModelFast();
          
    • Pet Segmentation
      • Fast Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-pet-segmentation-model-fast:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new PetSegmentationManagedModelFast();
          
    • Hair Segmentation
      • Fast Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-hair-segmentation-model-fast:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new HairSegmentationManagedModelFast();
          
    • Living Room Segmentation
      • Fast Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-living-room-segmentation-model-fast:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new LivingRoomSegmentationManagedModelFast();
          
    • Outdoor Segmentation
      • Fast Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-outdoor-segmentation-model-fast:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new OutdoorSegmentationManagedModelFast();
          
    • People Segmentation
      • Fast Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-people-segmentation-model-fast:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new PeopleSegmentationManagedModelFast();
          
      • Accurate Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-people-segmentation-model-accurate:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new PeopleSegmentationManagedModelAccurate();
          
      • Small Variant
        • Including it on device (in app/build.gradle):
            implementation "ai.fritz:vision-people-segmentation-model-small:1.0.0"
          
        • Downloading it OTA:
            FritzManagedModel managedModel = new PeopleSegmentationManagedModelSmall();
          
  • Blend Mode:

The alpha value is now specified when building the mask, and the BlendModeType class has been removed.

// Old
BlendMode blendMode = BlendModeType.SOFT_LIGHT.create();
Bitmap maskBitmap = hairResult.buildSingleClassMask(MaskType.HAIR, blendMode.getAlpha(), 1, options.getTargetConfidenceThreshold(), maskColor);
Bitmap blendedBitmap = visionImage.blend(maskBitmap, blendMode);

// New
BlendMode blendMode = BlendMode.SOFT_LIGHT;
Bitmap maskBitmap = hairResult.buildSingleClassMask(MaskClass.HAIR, 180, 1, options.confidenceThreshold, maskColor);
Bitmap blendedBitmap = visionImage.blend(maskBitmap, blendMode);
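
Putting the pieces in this section together, here is a minimal end-to-end sketch of the 4.0.0 flow: download a model over the air, run a prediction, build a class mask, and blend it onto the original image. The model, options, result, mask, and blend classes are taken from these notes; loadPredictor, PredictorStatusListener, and predict() are assumptions about the predictor API and may be named differently in the actual SDK.

// Sketch only (assumes this code runs inside an Activity, as in the Core example above).
// loadPredictor, PredictorStatusListener, and predict() are assumptions and may differ;
// bitmap and cameraId are placeholders. Fritz Vision and android.graphics imports omitted.
FritzVisionSegmentationPredictorOptions options = new FritzVisionSegmentationPredictorOptions();
options.confidenceThreshold = .3f;

// Wrap the frame using the ImageRotation enum from the Core section.
ImageRotation imageRotation = FritzVisionOrientation.getImageRotationFromCamera(this, cameraId);
FritzVisionImage visionImage = FritzVisionImage.fromBitmap(bitmap, imageRotation);

// Download the hair segmentation model OTA and run a prediction once it is ready.
FritzManagedModel managedModel = new HairSegmentationManagedModelFast();
FritzVision.ImageSegmentation.loadPredictor(managedModel,
        new PredictorStatusListener<FritzVisionSegmentationPredictor>() {
            @Override
            public void onPredictorReady(FritzVisionSegmentationPredictor predictor) {
                FritzVisionSegmentationResult hairResult = predictor.predict(visionImage);

                // Build a hair mask (alpha 180) and blend it onto the original image.
                Bitmap maskBitmap = hairResult.buildSingleClassMask(
                        MaskClass.HAIR, 180, 1, options.confidenceThreshold, Color.RED);
                Bitmap blendedBitmap = visionImage.blend(maskBitmap, BlendMode.SOFT_LIGHT);
            }
        });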