This repository has been archived by the owner on Jan 21, 2022. It is now read-only.

Add Image-Recognition and Object-Detection TensorFlow processors #9

Conversation

Contributor

@tzolov tzolov commented Feb 20, 2018

Resolves #2

  • Align with Spring Boot 2.0, Spring Cloud 2.0 and SCSt 2.0 stack
  • Upgrade to Tensorflow 1.6.0-rc1
  • Add Image-Recognition processor
  • Add Object Detection API processor + Protobuf pre-generated labels reader
  • Extend the common Tensorflow Processor to support multiple evaluation outputs (the FetchModel parameter allows comma-separated model output names; see the sketch below this list)
    • Fix the processor output name (in header or tuple) to "result", used to match the fetch name
    • Remove the redundant fetch-index property
    • Replace @ServiceActivator with @StreamListener
  • Refactor the contentType handling to align it with new SCSt 2.0 requirements
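For illustration only, here is a minimal sketch (not the actual processor code) of how a comma-separated list of fetch names can be evaluated in a single run with the TensorFlow 1.x Java API and returned keyed by name; the class and method names are made up for the example:

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class MultiFetchSketch {

    // Feeds a single input tensor and fetches every model output named in the
    // comma-separated list, returning the results keyed by fetch name.
    public static Map<String, Tensor<?>> evaluate(Graph graph, String feedName, Tensor<?> input, String fetchNamesCsv) {
        List<String> fetchNames = Arrays.asList(fetchNamesCsv.split(","));
        try (Session session = new Session(graph)) {
            Session.Runner runner = session.runner().feed(feedName, input);
            for (String fetchName : fetchNames) {
                runner = runner.fetch(fetchName.trim());
            }
            List<Tensor<?>> outputs = runner.run();
            Map<String, Tensor<?>> result = new LinkedHashMap<>();
            for (int i = 0; i < fetchNames.size(); i++) {
                result.put(fetchNames.get(i).trim(), outputs.get(i));
            }
            return result;
        }
    }
}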

@tzolov tzolov added the in pr label Feb 20, 2018

@artembilan artembilan left a comment


Here is some review.

Not sure that I understand everything you have done here (too low-level programming for me 😉), but this is indeed outstanding work. And we should definitely consider merging it as soon as possible and start promoting it 😄

}

// default
return MimeTypeUtils.APPLICATION_JSON_VALUE;


Hm. This default is the same as the variant for String.
Maybe something is missing in the logic?

Contributor Author

+1

FontMetrics fm = g.getFontMetrics();

Tuple resultTuple = new JsonStringToTupleConverter().convert(result.toString());
ArrayList<Tuple> labels = (ArrayList) resultTuple.getValues().get(0);


It would be better to have generic args and use @SuppressWarnings on the method.

Contributor Author

+1. I've replaced it with ArrayList<Tuple> labels = resultTuple.getValue("labels", ArrayList.class);
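For reference, a hedged sketch of how the suggestion could look (the helper method is hypothetical, not the actual processor code, and assumes java.util.List and the Spring Tuple type are imported): a generic target type plus a narrowly scoped @SuppressWarnings instead of a raw-typed cast:

@SuppressWarnings("unchecked")
private List<Tuple> extractLabels(Tuple resultTuple) {
    // The unchecked conversion from the untyped tuple value to List<Tuple> is
    // acknowledged explicitly and confined to this one small helper method.
    return resultTuple.getValue("labels", List.class);
}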

logger.error(e);
}

// Null mend that QR image is found and not output message will be send.


Null means ?
and no output ?

Contributor Author

+1

import java.awt.image.BufferedImage;

/**
* @author Christian Tzolov


Could we have some JavaDocs on this class to explain the reason for such a big class?
Thanks

package org.springframework.cloud.stream.app.object.detection.processor;


import java.awt.*;


No asterisk imports, please.

/**
* @author Christian Tzolov
*/
public class InvalidTupleTensorflowEncoding extends RuntimeException {


Yeah... An exception must have an Exception suffix to be clear...
But that's just IMHO.

Contributor Author

+1

* By default the inference result is returned in the outbound Message payload. If the saveResultInHeader property is
* set to true then the inference result is stored in the outbound Message header, under the name set by
* the getResultHeader property. In this case the message payload is the same as the inbound message payload.
* The {@link OutputMessageBuilder} defines how the computed inference score is arranged within the output message.
*
* @author Christian Tzolov
* @author Artem Bilan
*/
@EnableConfigurationProperties(TensorflowCommonProcessorProperties.class)
public class TensorflowCommonProcessorConfiguration implements AutoCloseable {


We don't need AutoCloseable here.
Since you declare TensorFlowService as a bean, Spring will already take care of its AutoCloseable contract.

Contributor Author

+1
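As an aside on the Javadoc above: a minimal, hypothetical sketch of the described behavior (not the actual OutputMessageBuilder implementation; the field names only mirror the saveResultInHeader/result-header properties mentioned in the Javadoc):

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class ResultPlacementSketch {

    private final boolean saveResultInHeader; // assumed to mirror the saveResultInHeader property
    private final String resultHeaderName;    // assumed to mirror the result-header property

    public ResultPlacementSketch(boolean saveResultInHeader, String resultHeaderName) {
        this.saveResultInHeader = saveResultInHeader;
        this.resultHeaderName = resultHeaderName;
    }

    public Message<?> build(Message<?> inputMessage, Object inferenceResult) {
        if (this.saveResultInHeader) {
            // Keep the inbound payload and attach the inference score as a header.
            return MessageBuilder.withPayload(inputMessage.getPayload())
                    .copyHeaders(inputMessage.getHeaders())
                    .setHeader(this.resultHeaderName, inferenceResult)
                    .build();
        }
        // Default: the inference result becomes the outbound payload.
        return MessageBuilder.withPayload(inferenceResult)
                .copyHeaders(inputMessage.getHeaders())
                .build();
    }
}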

@Bean
@StreamMessageConverter
public MessageConverter toTensorMessageConverter() {
return new AbstractMessageConverter(new MimeType("application", "x-tensor")) {


Where is this converter used?

I thought we need these when we convert input/output from/to the external world, so it is a bit confusing to me why one would need this custom mime-type.

Thanks

Contributor Author

Good spot. I believe this is a leftover from my experiments with the new content-type support. I will remove it.

/**
* @author Christian Tzolov
*/
public class ImageRecognitionProcessorPropertiesTest {


Must be plural - Tests

Contributor Author

+1

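// Each output tensor carries a leading batch dimension of 1; copyTo(...)[0] strips it,
// leaving one entry per detected object (up to maxObjects).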
float[] classes = classesTensor.copyTo(new float[1][maxObjects])[0];
float[][] boxes = boxesTensor.copyTo(new float[1][maxObjects][4])[0];

// Work in progress


Should we expect something else to be done here?

Thanks

Contributor Author

This is to enable support for the image segmentation feature: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md
While the TF models already support this and we can read the metadata, it turned out to be quite involved to convert the mask metadata into mask images.
So this fragment can be regarded as an enabler for supporting image segmentation in one of the follow-up releases.
I will improve the descriptions.

@tzolov
Contributor Author

tzolov commented Mar 7, 2018

Thanks @artembilan! I've pushed an update to address the review comments.
Also note I've not disabled the integration tests for the Twitter Sentiment, Image Recognition and Object Detection processors. The latter pull large (> 50 MB), pre-built TF models from Bintray.

image::{image-root}/TensorFlowProcessorArcutectureOverview.png[]

The `--tensorflow.model` property configures the Processor with the location of the serialized Tensorflow model.

The `TensorflowInputConverter` implementations converts the input data into the format, specific for the given model.


I think that, to align with the next sentence, the word implementations is redundant.

Contributor Author

+1
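For context on the `TensorflowInputConverter` mentioned above, here is a minimal, hypothetical sketch of such a conversion for image payloads, using only the TensorFlow 1.x Java API; it is not the converter shipped in this PR, and the class and method names are illustrative:

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;

import javax.imageio.ImageIO;

import org.tensorflow.Tensor;

public final class ImageToTensorSketch {

    // Decodes an image payload and lays it out as the [1, height, width, 3] float
    // tensor shape that image-recognition graphs typically expect.
    public static Tensor<?> toTensor(byte[] imageBytes) throws IOException {
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(imageBytes));
        int height = image.getHeight();
        int width = image.getWidth();
        float[][][][] pixels = new float[1][height][width][3];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int rgb = image.getRGB(x, y);
                pixels[0][y][x][0] = (rgb >> 16) & 0xFF; // red
                pixels[0][y][x][1] = (rgb >> 8) & 0xFF;  // green
                pixels[0][y][x][2] = rgb & 0xFF;         // blue
            }
        }
        return Tensor.create(pixels);
    }
}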

</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow_jni_gpu</artifactId>


Would you mind elaborating a bit more here, please?

Looks like by using this here we leave no choice in the target application: https://www.tensorflow.org/versions/master/install/install_linux#determine_which_tensorflow_to_install

I wonder what is going to happen if I don't have the proper GPU or I am just on Windows (although my Nvidia GPU is pretty good 😄 )

I mean, shouldn't we leave such a choice to the end user?
Or does it simply have no effect if we don't meet the GPU requirements, so it just falls back to CPU mode?

Thanks

Contributor Author

@tzolov tzolov Mar 15, 2018

Good point!
According to this doc: https://www.tensorflow.org/install/install_java

GPU acceleration is available via Maven only for Linux and only if your system meets the requirements for GPU.

So my assumption is that those dependencies will only be activated if the above conditions hold. For example, it works fine on my MacBook laptop using only the CPU there (i.e. the GPU dependencies are ignored).
Also, with those additional Maven dependencies I hoped we could address issues like this: https://stackoverflow.com/questions/46430512/tensorflow-train-a-model-with-gpus-and-inference-with-java-api

But having said this, I acknowledge that I have not tested this functionality, so I will comment out those dependencies and open a separate issue for tackling this.

@artembilan

Merged as cd1ea72

@artembilan artembilan closed this Mar 15, 2018
@artembilan artembilan removed the in pr label Mar 15, 2018