
.tiff image format is not compatible with the interface #89

Closed
hanzi4389604 opened this issue Jan 19, 2024 · 13 comments · Fixed by #124 or #114
Assignees
Labels
user-visible User visible enhancement or features - must QA

Comments

@hanzi4389604

When uploading a pre-generated .tiff image into the interface, the .tiff image does not display. Interestingly, the user can still proceed with the "classify" function, and it will generate segmentation and classification results.

If the .tiff is converted to a .PNG file, everything works well.

A .tiff image is not allowed to be attached here. Let me know if you need a sample .tiff image. Send me the request on Teams: liang.zhao@inspection.gc.ca

@MaxenceGui MaxenceGui added this to the M5 (2024 June) milestone Feb 7, 2024
@MaxenceGui MaxenceGui added the user-visible User visible enhancement or features - must QA label Feb 7, 2024
@rngadam rngadam changed the title .tiff image format does not compatible with the interface .tiff image format is not compatible with the interface Mar 26, 2024
@MaxenceGui
Collaborator

Issue reported again here :
#109 (comment)

Closing the related issue as a duplicate.

@ChromaticPanic
Contributor

ChromaticPanic commented Mar 26, 2024

@CFIALeronB @rngadam
Looks like Safari is the only browser that natively supports Tiff. So I'm curious if this functionality works if the application is opened in Safari.

I found a few Tiff javascript handling libraries. I can compile some options. Are there any guidelines on choosing? Does this become an ADR?

- Tiff.js — JS Tiff decoder based on C LibTIFF. This repo has a demo, so it would be quick to check whether it can decode the Nachet files and display them in the browser.

- Tiff — pure JavaScript image decoder

- GeoTIFF — pure JS image decoder focusing on Tiff maps

- UTIF — Tiff decoder (and other advanced formats) from Photopea

Should probably check whether any of them work with Nachet Tiff files first.
Is there a sample file somewhere? (I'll send Liang an email / Teams message.)

@hanzi4389604
Author

hanzi4389604 commented Mar 26, 2024

087.txt
Hello Joseffus, please check the shared file. You need to change .txt to .tiff. Thank you Joseffus :)

@ChromaticPanic
Contributor

I can confirm that the Tiff decoder based on C LibTIFF is able to display Tiff files.
[screenshot]

@rngadam

rngadam commented Mar 27, 2024

@ChromaticPanic I'm surprised that, since the investigation determined Option A is less work and cleaner, we would proceed with Option B?

Option A sounds good to me. It will probably mean keeping a mapping in the frontend between input tiff and their png for display.

Could you provide sequence diagrams to explain the two options?

https://github.blog/2022-02-14-include-diagrams-markdown-files-mermaid/

@ChromaticPanic
Contributor

In terms of implementation, there appear to be two options:

Option A (likely less work and cleaner):
Since PNG is also a lossless format, convert to PNG just for rendering. All inference would still run on the Tiff; the original Tiff file remains intact and unaltered during rendering, and all saved files would still be Tiff.

Option B (more work):
The loadToCanvas method needs to be rewritten so that, if the image is Tiff, we draw the canvas manually. All of the libraries use the same method to render Tiff directly in the browser.

The reason Tiff does not currently display is that Image() does not handle it, so image.onload is never triggered for Tiff images. This function would therefore have to be reorganized to process Tiff a different way.

const loadToCanvas = useCallback((): void => {
    // loads the current image to the canvas and draws the bounding boxes and labels,
    // should update whenever a change is made to the image cache or the score threshold and the selected label is changed
    const image = new Image();
    image.src = imageSrc;
    const canvas: HTMLCanvasElement | null = canvasRef.current;
    if (canvas === null) {
      return;
    }
    const ctx: CanvasRenderingContext2D | null = canvas.getContext("2d");
    if (ctx === null) {
      return;
    }
    image.onload = () => {
      canvas.width = image.width;
      canvas.height = image.height;
      ctx.drawImage(image, 0, 0);
      imageCache.forEach((storedImage) => {
        // find the current image in the image cache based on current index
        if (storedImage.index === imageIndex && storedImage.annotated) {
          storedImage.classifications.forEach((prediction, index) => {
            // !storedImage.overlapping[index]     REMOVE THIS TO SHOW ONLY 1 BB
            if (
              storedImage.scores[index] >= scoreThreshold / 100 &&
              (prediction === selectedLabel || selectedLabel === "all")
            ) {
              ctx.beginPath();
              // draw label index
              ctx.font = "bold 0.9vw Arial";
              ctx.fillStyle = "black";
              ctx.textAlign = "center";
              Object.keys(labelOccurrences).forEach((key, labelIndex) => {
                const scorePercentage = (
                  storedImage.scores[index] * 100
                ).toFixed(0);
                // check to see if label is cut off by the canvas edge, if so, move it to the bottom of the bounding box
                if (storedImage.boxes[index].topY <= 40) {
                  if (prediction === key) {
                    if (switchTable) {
                      ctx.fillText(
                        `[${labelIndex + 1}] - ${scorePercentage}%`,
                        ((storedImage.boxes[index].bottomX as number) -
                          (storedImage.boxes[index].topX as number)) /
                          2 +
                          (storedImage.boxes[index].topX as number),
                        (storedImage.boxes[index].bottomY as number) + 23,
                      );
                    } else {
                      ctx.fillText(
                        `[${index + 1}]`,
                        ((storedImage.boxes[index].bottomX as number) -
                          (storedImage.boxes[index].topX as number)) /
                          2 +
                          (storedImage.boxes[index].topX as number),
                        (storedImage.boxes[index].bottomY as number) + 23,
                      );
                    }
                  }
                } else {
                  // draw label index and score percentage
                  if (prediction === key) {
                    if (switchTable) {
                      ctx.fillText(
                        `[${labelIndex + 1}] - ${scorePercentage}%`,
                        ((storedImage.boxes[index].bottomX as number) -
                          (storedImage.boxes[index].topX as number)) /
                          2 +
                          (storedImage.boxes[index].topX as number),
                        storedImage.boxes[index].topY - 8,
                      );
                    } else {
                      // only draw table if switchTable is false (result component switch button)
                      ctx.fillText(
                        `[${index + 1}]`,
                        ((storedImage.boxes[index].bottomX as number) -
                          (storedImage.boxes[index].topX as number)) /
                          2 +
                          (storedImage.boxes[index].topX as number),
                        storedImage.boxes[index].topY - 8,
                      );
                    }
                  }
                }
              });
              // draw bounding box
              ctx.lineWidth = 2;
              ctx.strokeStyle = "red";
              ctx.rect(
                storedImage.boxes[index].topX,
                storedImage.boxes[index].topY,
                storedImage.boxes[index].bottomX -
                  storedImage.boxes[index].topX,
                storedImage.boxes[index].bottomY -
                  storedImage.boxes[index].topY,
              );
              ctx.stroke();
              ctx.closePath();
            }
          });
        }
        // capture label in bottom left
        if (storedImage.index === imageIndex) {
          storedImage.imageDims = [image.width, image.height];
          ctx.beginPath();
          ctx.font = "bold 0.9vw Arial";
          ctx.textAlign = "left";
          ctx.fillStyle = "#4ee44e";
          ctx.fillText(`Capture ${storedImage.index}`, 10, canvas.height - 15);
          ctx.stroke();
          ctx.closePath();
        }
      });
    };
  }, [
    imageCache,
    imageIndex,
    imageSrc,
    labelOccurrences,
    scoreThreshold,
    selectedLabel,
    switchTable,
  ]);
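Before such a fork can be written, the code needs a cheap way to tell whether the uploaded bytes are a Tiff at all, since Image() fails silently. A minimal sketch (hypothetical helper, not the project's actual code) checks the TIFF magic bytes:

```javascript
// Hypothetical helper (not project code): detect TIFF by its magic bytes.
// TIFF files begin with "II*\0" (little-endian) or "MM\0*" (big-endian).
function isTiff(bytes) {
  if (bytes.length < 4) return false;
  const littleEndian =
    bytes[0] === 0x49 && bytes[1] === 0x49 && bytes[2] === 0x2a && bytes[3] === 0x00;
  const bigEndian =
    bytes[0] === 0x4d && bytes[1] === 0x4d && bytes[2] === 0x00 && bytes[3] === 0x2a;
  return littleEndian || bigEndian;
}
```

In loadToCanvas, a check like this on the fetched bytes could decide between the existing `new Image()` path and a dedicated Tiff decoding path.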

From our meeting, I am currently working on Option B.

@ChromaticPanic
Contributor

> @ChromaticPanic I'm surprised that, since the investigation determined Option A is less work and cleaner, we would proceed with Option B?
>
> Option A sounds good to me. It will probably mean keeping a mapping in the frontend between input tiffs and their pngs for display.
>
> Could you provide sequence diagrams to explain the two options?
>
> https://github.blog/2022-02-14-include-diagrams-markdown-files-mermaid/

You actually bring up an important downside.
Option A would require less refactoring to render PNG, but the cost would be either reconverting at each re-render (useMemo can do some caching) or storing the PNGs in imageCache, which would double the runtime storage requirements.

Option B would require refactoring that code section but has no additional rendering cost.
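The per-render conversion cost in Option A.1 could be bounded with memoization. A framework-agnostic sketch of such a cache (hypothetical; the actual component would use React's useMemo for this):

```javascript
// Hypothetical sketch of Option A.1's caching idea: wrap an expensive
// conversion (e.g. Tiff -> PNG data URL) so it runs at most once per key.
function makeConversionCache(convert) {
  const cache = new Map(); // key -> converted result
  let calls = 0;           // counts how many times convert actually ran
  return {
    get(key, input) {
      if (!cache.has(key)) {
        cache.set(key, convert(input));
        calls += 1;
      }
      return cache.get(key);
    },
    conversions: () => calls,
  };
}
```

The tradeoff discussed above is visible here: repeated re-renders become cheap lookups, but every converted result stays resident in the cache.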

@ChromaticPanic
Contributor

ChromaticPanic commented Mar 27, 2024

Option A.1 with single image useMemo Caching

graph TD;
    loadToCanvas((loadToCanvas))
    Tiff[Tiff]
    useMemo[useMemo single PNG cache \n new method]
    NonTiff[Non-Tiff]
    drawToCanvas[Draw image to canvas \nno refactor]
    drawAnnotations[Draw boxes annotations \nno refactor]
    None[_]

    loadToCanvas -->Tiff
    loadToCanvas -->NonTiff
    Tiff --> useMemo
    useMemo --> drawToCanvas
    NonTiff --> None
    None --> drawToCanvas
    drawToCanvas --> drawAnnotations

Option A.2 using imageCache to cache all images

graph TD;
    uploadImage((uploadImage))
    convertToPNG[convertToPNG\n new method]
    imageCache((imageCache\n update add tiff field))
    loadToCanvas((loadToCanvas))
    useMemo[grab PNG from cache]
    Tiff[Tiff]
    NonTiff[Non-Tiff]
    TiffA[Tiff]
    NonTiffA[Non-Tiff]
    drawToCanvas[Draw image to canvas \nno refactor]
    drawAnnotations[Draw boxes annotations \nno refactor]
    None[_]
    NoneA[_]

    TiffA --> convertToPNG
    uploadImage -->TiffA
    uploadImage -->NonTiffA
    NonTiffA --> NoneA
    NoneA --> imageCache
    convertToPNG --> imageCache
    imageCache --> useMemo
    loadToCanvas -->Tiff
    loadToCanvas -->NonTiff
    Tiff --> useMemo
    useMemo --> drawToCanvas
    NonTiff --> None
    None --> drawToCanvas
    drawToCanvas --> drawAnnotations

Option B.1 Refactoring

graph TD;
    loadToCanvas((loadToCanvas))
    Tiff[Tiff]
    useMemo[Use Tiff lib]
    NonTiff[Non-Tiff]
    drawToCanvasA[Draw image to canvasA \n new method]
    drawToCanvasB[Draw image to canvasB \n refactor current method]
    drawAnnotations[Draw boxes annotations \n refactor current method]
    None[Use Image lib]

    loadToCanvas -->Tiff
    loadToCanvas -->NonTiff
    Tiff --> useMemo
    useMemo --> drawToCanvasA
    NonTiff --> None
    None --> drawToCanvasB
    drawToCanvasA --> drawAnnotations
    drawToCanvasB --> drawAnnotations

Option B.2 No Refactor, Code duplication

graph TD;
    loadToCanvas((loadToCanvas))
    Tiff[Tiff]
    useMemo[Use Tiff lib]
    NonTiff[Non-Tiff]
    drawToCanvasA[Draw image to canvasA \n new method]
    drawToCanvasB[Draw image to canvasB \n no change]
    drawAnnotations[Draw boxes annotations \n no change]
    drawAnnotationsA[Draw boxes annotations \n copy paste current method]
    None[Use Image lib]

    loadToCanvas -->Tiff
    loadToCanvas -->NonTiff
    Tiff --> useMemo
    useMemo --> drawToCanvasA
    NonTiff --> None
    None --> drawToCanvasB
    drawToCanvasA --> drawAnnotationsA
    drawToCanvasB --> drawAnnotations

@ChromaticPanic
Contributor

Looking at the code more, my impression is that Option A.1 and Option B.1 will be similar in the number of line changes and effort required.

Changes will be made in this section:
[screenshot]

Option A.1 would run useMemo before line 462; if the file is Tiff, it would inject the PNG.
Option B.1 would need a check for Tiff and a refactored onload, and instead of ctx.drawImage at line 474 we would use a new Tiff canvas drawing method.

@ChromaticPanic
Contributor

[screenshot]

Some success. I actually tried Option A.1 first, but from the current implementation Option B.1 looks like the cleaner way: in all the browser-side Tiff-to-PNG conversion examples, the image is rendered to a canvas first and then converted to PNG, so converting to PNG first would mean drawing the image to the canvas twice.

@ChromaticPanic ChromaticPanic linked a pull request Mar 28, 2024 that will close this issue
ChromaticPanic added a commit that referenced this issue Apr 2, 2024
ChromaticPanic added a commit that referenced this issue Apr 4, 2024
ChromaticPanic added a commit that referenced this issue Apr 9, 2024
@ChromaticPanic
Contributor

[screenshot]

@TaranMeyer

Not sure where you guys are at with this issue but I am still finding some strange behaviour on the front end:
[screenshot]

The image displayed is actually the previous image (Capture 6, a .jpg), with the bounding boxes from capture 7 (the .tif I uploaded) overlaid.

Also, the "correct" bounding boxes didn't show up until I went to view capture 6 and then went back to capture 7. After I hit "classify" what I initially saw was this:

[screenshot]

which are the results for capture 7, but with the image of capture 6 and no visible bounding boxes at all.

The classified image should look like this (but of course .tif format which I can't upload here):
img_20240322_214832

@ChromaticPanic
Contributor

> Not sure where you guys are at with this issue but I am still finding some strange behaviour on the front end:

Both your issues should be fixed when I merge #114 and #124

ChromaticPanic added a commit that referenced this issue Apr 12, 2024
This was linked to pull requests Apr 12, 2024
ChromaticPanic added a commit that referenced this issue Apr 16, 2024
* Issue #89 tiff file display
The solution uses UTIF.js from Photopea. Its advantage over the other Tiff libraries is that it is actively maintained; the others have had no new commits in 5+ years. Photopea is a browser-based photo-editing project, so using this library may bring other benefits in the future if we need to support more image formats.

Modifications to the loadToCanvas function:

    Updated the interface to avoid "as number" type casting
    Cleaned up redundant code in the bounding-box drawing section
    Created an if/else fork to handle Tiff image canvas drawing
    Extracted the common bounding-box drawing code outside of image.onload

New decodeTiff function:

    Uses UTIF.js to decode Tiff files

Fixed infinite render loop issues
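For reference, any Tiff decoder (UTIF.js included) starts by reading the 8-byte TIFF header to learn the byte order, verify the magic number 42, and locate the first image file directory (IFD). A stdlib-only sketch of that first step (illustrative only, not UTIF's actual code):

```javascript
// Illustrative sketch (not UTIF.js source): parse the 8-byte TIFF header.
// Bytes 0-1: "II" (little-endian) or "MM" (big-endian) byte-order mark.
// Bytes 2-3: the magic number 42.  Bytes 4-7: offset of the first IFD.
function readTiffHeader(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const littleEndian = bytes[0] === 0x49 && bytes[1] === 0x49; // "II"
  if (!littleEndian && !(bytes[0] === 0x4d && bytes[1] === 0x4d)) {
    throw new Error("not a TIFF: bad byte-order mark");
  }
  if (view.getUint16(2, littleEndian) !== 42) {
    throw new Error("not a TIFF: bad magic number");
  }
  return { littleEndian, firstIFDOffset: view.getUint32(4, littleEndian) };
}
```

In the real decodeTiff function this work is delegated to UTIF.js, which continues from the first IFD to decode the pixel data.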