
Some images will be lost due to detection #9

Closed
x12901 opened this issue Aug 23, 2021 · 2 comments

Comments

x12901 commented Aug 23, 2021

The size of my picture is 1280×1024. I run the app with `streamlit run streamlit_app.py` and the result is very good, but part of my picture is missing: the displayed result is not the complete picture. Can the cropping of the picture be changed? I tried to modify the code, but the result was not good. Also, can the detection speed be improved? And can I just load the model without training every time?

class SPADE(KNNExtractor):
    def __init__(
            self,
            k: int = 5,
            backbone_name: str = "resnet50",
    ):
        super().__init__(
            backbone_name=backbone_name,
            out_indices=(1, 2, 3),
            pool=True,
        )
        self.k = k
        self.image_size_x = 1280
        self.image_size_y = 1024
        self.z_lib = []
        self.feature_maps = []
        self.threshold_z = None
        self.threshold_fmaps = None
        self.blur = GaussianBlur(4)

    def predict(self, sample):
        feature_maps, z = self(sample)

        distances = torch.linalg.norm(self.z_lib - z, dim=1)
        values, indices = torch.topk(distances.squeeze(), self.k, largest=False)

        z_score = values.mean()

        # Build the feature gallery out of the k nearest neighbours.
        # The authors might have concatenated all feature maps first, then taken the minimum norm per pixel.
        # Here, we take the minimum norm first, then concatenate (sum) in the final layer.
        scaled_s_map = torch.zeros(1, 1, self.image_size_y, self.image_size_x)
        for idx, fmap in enumerate(feature_maps):
            nearest_fmaps = torch.index_select(self.feature_maps[idx], 0, indices)
            # min() because kappa=1 in the paper; torch.min takes keepdim, not keepdims
            s_map, _ = torch.min(torch.linalg.norm(nearest_fmaps - fmap, dim=1), 0, keepdim=True)
            scaled_s_map += torch.nn.functional.interpolate(
                s_map.unsqueeze(0), size=(self.image_size_y, self.image_size_x), mode='bilinear'
            )

        scaled_s_map = self.blur(scaled_s_map)

        return z_score, scaled_s_map
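On the "load the model without training every time" question: the gallery that fitting builds (`z_lib` and `feature_maps`) is just a collection of tensors, so it can be serialized once and restored at startup instead of being recomputed. A minimal sketch, assuming hypothetical `save_gallery`/`load_gallery` helpers that are not part of this repo, just wrappers around `torch.save`/`torch.load`:

```python
import os
import tempfile

import torch


def save_gallery(path, z_lib, feature_maps):
    # Persist the fitted feature gallery so fitting can be skipped on the next run.
    torch.save({"z_lib": z_lib, "feature_maps": feature_maps}, path)


def load_gallery(path):
    # Restore the gallery; assign the results back to model.z_lib / model.feature_maps.
    state = torch.load(path)
    return state["z_lib"], state["feature_maps"]


# Round-trip with small dummy tensors standing in for the real gallery.
path = os.path.join(tempfile.gettempdir(), "spade_gallery.pt")
z_lib = torch.randn(10, 32)
feature_maps = [torch.randn(10, 8, 4, 4)]
save_gallery(path, z_lib, feature_maps)
z_restored, fmaps_restored = load_gallery(path)
```

After loading, `predict` can be called directly on the restored gallery without re-running the fitting loop over the training images.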
rvorias (Owner) commented Aug 23, 2021

Hi, you should take a look at the transformations in data.py.
The code below will resize the image without respecting the aspect ratio.
I would suggest just upscaling the output feature map to 1280×1024.

class StreamingDataset:
    """This dataset is made specifically for the streamlit app."""
    def __init__(self, size: int = 224):
        self.size = size
        self.transform = transforms.Compose([
-           transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),
+           transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
-           transforms.CenterCrop(size),
            transforms.ToTensor(),
            transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
        ])
        self.samples = []
    
    def add_pil_image(self, image: Image):
        image = image.convert('RGB')
        self.samples.append(image)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        sample = self.samples[index]
        return (self.transform(sample), None)
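The "upscale the output feature map to 1280×1024" suggestion above comes down to a single `interpolate` call on the score map. A sketch with a toy map; the shapes here (a 56×56 feature-resolution map) are illustrative assumptions, the real `s_map` comes out of `SPADE.predict`:

```python
import torch
import torch.nn.functional as F

# Toy anomaly score map at feature resolution, shape (N, C, H, W).
s_map = torch.rand(1, 1, 56, 56)

# Upscale to the original 1280x1024 input; interpolate's size argument is (H, W).
full_map = F.interpolate(
    s_map, size=(1024, 1280), mode="bilinear", align_corners=False
)
# full_map now has shape (1, 1, 1024, 1280) and can be overlaid on the input image.
```

This way the network still sees a fixed-size input, and only the cheap final upsampling step depends on the original image resolution.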

x12901 (Author) commented Aug 24, 2021

This solved my problem. Thanks!

@x12901 x12901 closed this as completed Aug 24, 2021