
Implementation of SuperPoint and AutoModelForKeypointDetection #28966

Merged: 39 commits merged into huggingface:main from sbucaille's add_superpoint_fixed branch on Mar 19, 2024

Conversation

@sbucaille (Contributor) commented on Feb 11, 2024

What does this PR do?

This PR implements SuperPoint, one of the few models that generate keypoints and descriptors for a given image, as discussed in this previous pull request.
The goal is to implement this model and a new type of AutoModel: AutoModelForKeypointDetection (name to be discussed).
This PR also replaces a previous PR whose branch ended up being unusable.

Who can review?

@amyeroberts @ArthurZucker @rafaelpadilla

TODOs

  • Implement SuperPointConfig and SuperPointModel as PretrainedConfig and PreTrainedModel subclasses
  • Generate a conversion script for the original weights
  • Implement the new AutoModelForKeypointDetection mapping
  • Test the model
  • Write documentation

@sbucaille (Author) commented on Feb 11, 2024

Hi @amyeroberts @rafaelpadilla @ArthurZucker,

I couldn't see any solution other than creating a branch from scratch and re-implementing everything that was discussed since the beginning of the original PR. Here, most of my problems (the RegNet-related error when using make fix-copies, which was caused by my environment being outdated after the merge; the huge index.md diff; and others...) are fixed.
As I said, I re-implemented everything that was in the other PR and addressed all the other points mentioned in reviews (such as the docs), which of course may need more discussion. I tried to make sure every point discussed was included. Also, I renamed InterestPointDescription to KeypointDetection, as this term is more widely used in the literature.
Anyway, I wanted to implement this model in transformers and contribute; you told me it would take more time than the other method, so I'll take all this mess I made as part of the learning path 😆
Let me know what you think and what the next steps should be; I believe we're almost there! 🤗

Steven

@amyeroberts (Collaborator) commented

Hi @sbucaille, thanks for opening this PR and working from the previous reviews!

At the moment, I don't see any tests implemented for the model or image processor. The next step would be adding those 🤗

@sbucaille (Author) commented

Looks like I forgot to git add the tests folder; fixed!
I also addressed the issue where the model was too big for the tests: I added a configuration for a smaller one and adjusted the other tests.

@sbucaille (Author) commented

Hi @amyeroberts @ydshieh,
I just looked at the PR again and noticed there were conflicts due to recent changes on the main branch. So instead of merging directly through the GitHub interface, I did what you advised me to do earlier: I updated my main branch, rebased this branch onto main, and managed to resolve the conflicts (the implementation of StableLM ended up at the same lines where SuperPoint is implemented in all the READMEs and inits), then I pushed the changes.
First question: is it normal that this PR lists all the other commits?
Second question: if I merged this way, why is GitHub still complaining and asking me to merge manually through the interface, like this:
[screenshot]
In the end, I feel like merging manually through the interface would have been faster. Am I missing something?

Also, it appears all the tests pass now 🤗

@amyeroberts (Collaborator) commented

Hi @sbucaille, regarding your questions about rebasing:

> First question: is it normal that this PR lists all the other commits?

No, it shouldn't. Did you force push after rebasing? It's necessary to do this, as rebasing effectively rewrites the history of the branch.

> Second question: if I merged this way, why is GitHub still complaining and asking me to merge manually through the interface, like this:

What do you mean by "the interface"?

When performing a rebase, it might be necessary to resolve conflicts. What rebasing is trying to do is move the first commit of your branch to the head of main. However, as the branch was originally branched off an older commit, there might be conflicts if this branch changes files which have also been modified on main. It might also happen if you've merged main into this branch in the interim, as the branch now contains information about newer, upstream commits on main. Therefore, how to rewrite the history so that this branch and all its commits sit on the tip of main isn't obvious.

The easiest way to manage this is to rebase onto main often, keep branches short-lived, and avoid merge commits in the branch.
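In practice, the flow looks something like this (a sketch; it assumes your fork is `origin` and huggingface/transformers is set up as the `upstream` remote):

```bash
git fetch upstream               # grab the latest commits from huggingface/transformers
git rebase upstream/main         # replay this branch's commits on top of main
# if a conflict appears: edit the conflicted file, then
git add <conflicted_file>
git rebase --continue
git push --force-with-lease      # history was rewritten, so a plain push would be rejected
```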

Here are some nice articles about rebasing which I found useful when first learning about it:

@sbucaille (Author) commented

@amyeroberts Alright, I think I'm starting to grasp the rebase and merge concepts. This PR is my first in any open source project and I knew in advance I would run into issues like these, so thank you for taking the time 🤗
As of now, is my PR still OK? Should we continue reviewing the changes? What are the next steps?

@amyeroberts (Collaborator) commented

@sbucaille I'd suggest first resolving the git history, so this branch only contains commits that are part of this PR. Once our house is tidy we can review and iterate on the PR. From first glance, things are looking good! Doing this now will be a lot easier than trying to do it in the middle of a review.

@sbucaille force-pushed the add_superpoint_fixed branch 2 times, most recently from d7f5e36 to c1d25d0, on February 19, 2024 at 21:49
@sbucaille (Author) commented

@amyeroberts Holy, that was not easy, but I think I understood the mechanism. I rebased the branch once more to include the latest commits. It seems I am now 4 commits ahead of main, which should be fine (until tomorrow morning, when new commits get pushed to main, I guess).
So I guess it is better now to see a bunch of "force push" entries in the pull request than the list of all the commits from main, right?
How often do you suggest doing this manipulation? You talked about short-lived branches, but I guess that doesn't apply to mine, since I've been working on it for a couple of months, right?
Anyway, I'm ready to move on to the last review process.

@amyeroberts (Collaborator) commented

@sbucaille Great! So main will continue to get lots of new commits. The history you're seeing isn't so much the number of commits "ahead" of main; rather, it's the number of commits on top of a certain main commit, let's say ABC. When you rebase, you're rewriting the history so that those commits now sit on top of the latest main commit, say DEF. It's kind of like shifting things along in history.

You don't need to rebase every time there are new commits on main. The purpose of rebasing is to make sure your branch and main don't diverge, so that changes made to the same file have no clashes and there is a clear order in which to apply commits, and also to pull in any important upstream changes, e.g. bug fixes. It's basically the same as having a merge commit, but a lot cleaner, as only the branch's commits are shown.

> So I guess it is better now to see a bunch of "force push" entries in the pull request than the list of all the commits from main, right?

Yep! This just lets me know there's been a rewrite of the history. This is important as rebasing can also be used for editing the commits in the branch itself.

> How often do you suggest doing this manipulation? You talked about short-lived branches, but I guess that doesn't apply to mine, since I've been working on it for a couple of months, right?

It depends. If you touch a lot of common files, you'll want to do it regularly, particularly in the final steps when getting ready to merge the branch in: at least once a day. This is a model PR, so it probably doesn't need it quite so often. Now that you've done the difficult rebase, all the rest should be pretty easy. In fact, I'd imagine there would be little to no conflicts.

We'd like all branches to be as short-lived as possible. This is less true for model PRs, but we would still like them to be resolved on the order of days/weeks wherever possible.

> Anyway, I'm ready to move on to the last review process.

Great! Next step is to get all the tests passing on the CI.

@sbucaille (Author) commented

@amyeroberts Seems right now. Turns out I had un-committed certain files during my rebase; I'll need to practice a bit more in the future 😅 Tests are passing!

@amyeroberts (Collaborator) left a comment:

Thanks for all the work adding this model.

Beautiful PR - really nice work. Just a few small comments to address but I think we're close to merge 🤗

On README_fr.md:
1. **[StableLm](https://huggingface.co/docs/transformers/model_doc/stablelm)** released with the paper [StableLM 3B 4E1T (Technical Report)](https://stability.wandb.io/stability-llm/stable-lm/reports/StableLM-3B-4E1T--VmlldzoyMjU4?accessToken=u3zujipenkx5g7rtcj9qojjgxpconyjktjkli2po09nffrffdhhchq045vp0wyfo) by Jonathan Tow, Marco Bellagente, Dakota Mahan, Carlos Riquelme Ruiz, Duy Phung, Maksym Zhuravinskyi, Nathan Cooper, Nikhil Pinnaparaju, Reshinth Adithyan, and James Baicoianu.
1. **[StableLm](https://huggingface.co/docs/transformers/model_doc/stablelm)** (from Stability AI) released with the paper [StableLM 3B 4E1T (Technical Report) by Jonathan Tow, Marco Bellagente, Dakota Mahan, Carlos Riquelme Ruiz, Duy Phung, Maksym Zhuravinskyi, Nathan Cooper, Nikhil Pinnaparaju, Reshinth Adithyan, and James Baicoianu.
@amyeroberts (Collaborator):
This PR shouldn't change the StableLM entry

@sbucaille (Author):
This should also be fixed with the latest rebase

On docs/source/en/model_doc/superpoint.md (outdated):
@@ -1751,3 +1751,38 @@ def logits(self):
FutureWarning,
)
return self.reconstruction


@dataclass
@amyeroberts (Collaborator):

Let's move this to the modeling file. At the moment we don't have other models using it, and it's highly specific to this one model.

@sbucaille (Author):
Fixed in this commit

Comment on lines 140 to 143
Dictionary of the form `{"shortest_edge": int}`, specifying the size of the output image. If
`size["shortest_edge"]` >= 384 image is resized to `(size["shortest_edge"], size["shortest_edge"])`.
Otherwise, the smaller edge of the image will be matched to `int(size["shortest_edge"] / crop_pct)`,
after which the image is cropped to `(size["shortest_edge"], size["shortest_edge"])`.
@amyeroberts (Collaborator):
This doesn't match the default size format with "height" and "width"
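For reference, the default looks like this (a minimal sketch; the dimensions are made up):

```python
# Default size-dict format expected by image processors in the library:
size = {"height": 480, "width": 640}
# rather than the crop-style format documented above:
# size = {"shortest_edge": 384}
```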

@sbucaille (Author):
Fixed in this commit

Comment on lines 1211 to 1270
MODEL_FOR_KEYPOINT_DETECTION_MAPPING_NAMES = OrderedDict(
[
("superpoint", "SuperPointModel"),
]
)
@amyeroberts (Collaborator):
I realise this was originally my suggestion, but reflecting and looking at this again I think we don't need to add this special class. We can remove MODEL_FOR_KEYPOINT_DETECTION_MAPPING_NAMES and just use AutoModel where needed
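Plain AutoModel usage would then look something like this (a sketch; the checkpoint name is the one used elsewhere in this PR, and the image URL is only an example):

```python
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

# Any RGB image works; the image processor converts it to grayscale for SuperPoint
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("stevenbucaille/superpoint")
model = AutoModel.from_pretrained("stevenbucaille/superpoint")

inputs = processor(image, return_tensors="pt")
outputs = model(**inputs)  # keypoint scores, descriptors, mask, last_hidden_state
```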

@sbucaille (Author):
I believe this is fixed with this commit

On src/transformers/models/superpoint/modeling_superpoint.py (outdated):


class SuperPointConvBlock(nn.Module):
def __init__(self, in_channels: int, out_channels: int, add_pooling: bool = False) -> None:
@amyeroberts (Collaborator):
All custom layers should accept the config as their first argument

@sbucaille (Author):

I am not sure what to do here: if I add config as an argument, it won't be used by the module, since it relies only on in_channels, out_channels and add_pooling. But it can't rely exclusively on config either, since SuperPointConvBlock can't know which values to pick from encoder_hidden_sizes 🤔

@amyeroberts (Collaborator):
That's OK, it should still have it as its first argument though
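i.e., something like this (a sketch; the layer internals are illustrative, not the PR's actual ones):

```python
from torch import nn

class SuperPointConvBlock(nn.Module):
    # config comes first to follow the library convention, even though this
    # block only reads its explicit channel arguments.
    def __init__(self, config, in_channels: int, out_channels: int, add_pooling: bool = False) -> None:
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2) if add_pooling else None
```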

@sbucaille (Author):
Fixed in this commit

@slow
def test_inference(self):
model = SuperPointModel.from_pretrained("stevenbucaille/superpoint").to(torch_device)
self.infer_on_model(model)
@amyeroberts (Collaborator):
No need to add a separate method here - just put all the logic from infer_on_model here

@sbucaille (Author):
Changed it in this commit

@sbucaille (Author) commented

Hey @amyeroberts,
I addressed most of the suggestions, although I was not sure about one of them; let me know what you think. I'm looking forward to this merge! 😄 🤗

@amyeroberts (Collaborator) left a comment:

Looks great - thanks for iterating on this!

Just a few small comments. The final thing to do before merging is to transfer the checkpoints under the Magic Leap org. I can't remember the outcome of this: did you reach out to any of the authors to ask if they'd like to control the org on the Hub? (I don't believe it exists yet.)

On README_fr.md:
On src/transformers/models/auto/modeling_auto.py (outdated):
with torch.no_grad():
outputs = model(**self._prepare_for_class(inputs_dict, model_class))

hidden_states = outputs.encoder_hidden_states if config.is_encoder_decoder else outputs.hidden_states
@amyeroberts (Collaborator):
You don't need this conditional logic here - you already know the architecture of the model
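i.e., since SuperPoint is never an encoder-decoder model, the test can read simply:

```python
hidden_states = outputs.hidden_states
```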

@sbucaille (Author):
Fixed in this commit

self.model_tester.image_width // (2 ** (i + 1)),
self.model_tester.image_height // (2 ** (i + 1)),
@amyeroberts (Collaborator):
That's funny - are you sure the pixel values throughout the model are in (b, c, w, h) order and not (b, c, h, w)?

@sbucaille (Author):
You are right, I always get confused about dimension order; the input of the model is indeed (b, c, h, w). The tests were confusing; I fixed it all in this commit.
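For the record, the convention is (a sketch; the spatial sizes are made up):

```python
import torch

# pixel_values follow the (batch, channels, height, width) convention;
# SuperPoint operates on grayscale images, hence a single channel.
pixel_values = torch.randn(1, 1, 480, 640)  # b=1, c=1, h=480, w=640
```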

scores=scores,
descriptors=descriptors,
mask=mask,
last_hidden_state=last_hidden_state,
@amyeroberts (Collaborator):
last_hidden_state should be the first returned value

@sbucaille (Author):
Fixed in this commit

scores: Optional[torch.FloatTensor] = None
descriptors: Optional[torch.FloatTensor] = None
mask: Optional[torch.BoolTensor] = None
last_hidden_state: torch.FloatTensor = None
@amyeroberts (Collaborator):
last_hidden_state should be the first returned value here
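The reordered output would look something like this (a sketch; the field names are the ones quoted above, and any other fields are omitted):

```python
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class ImagePointDescriptionOutput:
    # last_hidden_state first, matching the convention of other model outputs
    last_hidden_state: torch.FloatTensor = None
    scores: Optional[torch.FloatTensor] = None
    descriptors: Optional[torch.FloatTensor] = None
    mask: Optional[torch.BoolTensor] = None
```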

@sbucaille (Author):
Fixed in this commit




@sbucaille (Author) commented

Hi @amyeroberts,
I addressed the last problems, apart from the README one. I sent an email to the authors and am waiting for their answer. But none of them still work at Magic Leap; in case I don't get a positive answer from them and we can't have a repository owned by the company, what should we do?

@amyeroberts (Collaborator) commented

@sbucaille OK, great. I'll give a final review. I've created https://huggingface.co/magic-leap-community where we can store the checkpoints. I'll reach out to Magic Leap and ask if they want to take control of it, but we can use that in the meantime.

If you share a link to all the checkpoints for the model, I can copy them across to this org, and then you can update the checkpoints in the PR.

@sbucaille (Author) commented

Hi @amyeroberts,
I managed to get the contact of someone who could potentially be in charge of this; I can add you to the mail thread if you give me an email address.
Regarding the weights, you can either copy them from the repo I initially created or run the convert_superpoint_to_pytorch.py script provided in the superpoint module.

@amyeroberts (Collaborator) commented

@sbucaille Great! You can get me on my HF email: amy@huggingface.co

@amyeroberts (Collaborator) commented

@sbucaille I've copied across the checkpoints, so they can be updated in the PR now.

Could you open a PR on https://huggingface.co/magic-leap-community/superpoint to fill in the model card?

@sbucaille (Author) commented

Hi @amyeroberts, as you can probably see, I've added you to the mail thread and opened a pull request on the Hub superpoint repo.

@sbucaille force-pushed the add_superpoint_fixed branch 2 times, most recently from 70ab66b to 40a8880, on March 13, 2024 at 22:14
@sbucaille (Author) commented

Also, I just rebased the branch and figured out that main was not passing tests because of repo-consistency checks. I've added SuperPoint to the appropriate READMEs and will watch closely when main is updated, to rebase once again with what's missing there.

@amyeroberts (Collaborator) left a comment:

Looks great - thanks for all your work on adding this model!

The last things to do are rebasing on main and resolving the weird diff we're still seeing on README_fr.md.

On README_fr.md:
@sbucaille (Author) commented on Mar 15, 2024

Hi @amyeroberts,
I found the problem: a ' was not properly escaped at line 146, which caused the syntax highlighting to be completely messed up for the remainder of the file. I fixed it in this commit, so now if you look at my README_fr.md file there are no more red bits.
Should I create a new issue with a new PR just for this line, or should we push it along with SuperPoint?

Apart from that, my branch is rebased on the recent commits. Let me know what we should do!

@amyeroberts (Collaborator) left a comment:

Thanks for all the work adding this and digging into the funny README issue.

Only thing left to do is update the checkpoints!

class SuperPointModelIntegrationTest(unittest.TestCase):
@cached_property
def default_image_processor(self):
return AutoImageProcessor.from_pretrained("stevenbucaille/superpoint") if is_vision_available() else None
@amyeroberts (Collaborator):
Checkpoint should be updated here

Comment on lines 54 to 55
processor = AutoImageProcessor.from_pretrained("stevenbucaille/superpoint")
model = AutoModel.from_pretrained("stevenbucaille/superpoint")
@amyeroberts (Collaborator):
Checkpoints should be updated here

Comment on lines 79 to 80
processor = AutoImageProcessor.from_pretrained("stevenbucaille/superpoint")
model = AutoModel.from_pretrained("stevenbucaille/superpoint")
@amyeroberts (Collaborator):
And here

logger = logging.get_logger(__name__)

SUPERPOINT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"stevenbucaille/superpoint": "https://huggingface.co/stevenbucaille/superpoint/blob/main/config.json"
@amyeroberts (Collaborator):
Checkpoint to be updated

This is the configuration class to store the configuration of a [`SuperPointModel`]. It is used to instantiate a
SuperPoint model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the SuperPoint
[stevenbucaille/superpoint](https://huggingface.co/stevenbucaille/superpoint) architecture.
@amyeroberts (Collaborator):
Same here


@slow
def test_inference(self):
model = SuperPointModel.from_pretrained("stevenbucaille/superpoint").to(torch_device)
@amyeroberts (Collaborator):
And here for the checkpoint

@amyeroberts (Collaborator) commented

@sbucaille Awesome work adding this. If you can get the checkpoints updated today, we might be able to have this model make the next release 💪

@sbucaille (Author) commented

Hi @amyeroberts,
Checkpoints are updated. I also deleted the leftovers of SuperPointForInterestPointDetection, and the test_pr_documentation test now passes. I guess we're good!

@amyeroberts (Collaborator) commented

@sbucaille Great! Last thing (seriously this time): can you run the slow tests for the model and share the output? Once you can confirm they all pass, I'll merge.

@sbucaille (Author) commented on Mar 18, 2024

[screenshot]

@amyeroberts Is a screenshot sufficient? When moving from the previous pull request to this one, I forgot to add the additional COCO image.
Also, I'm not very familiar with the autodoc things, but what causes this issue?
[screenshot]

@amyeroberts (Collaborator) commented

> @amyeroberts Is a screenshot sufficient?

The main thing I'd need to see is the end of the test run, which shows how many of the tests passed / were skipped etc. (I can't see the command being run, but make sure to set RUN_SLOW=1.)

I've commented on the part causing the error for the doctests.

@sbucaille (Author) commented

@amyeroberts

RUN_SLOW=1 pytest tests/models/superpoint/
================================================================ test session starts =================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.4.0
rootdir: /home/steven.bucaille/transformers
configfile: pyproject.toml
plugins: dash-2.16.1, xdist-3.5.0, hypothesis-6.99.8, timeout-2.3.1
collected 93 items

tests/models/superpoint/test_image_processing_superpoint.py .s...........                                                                      [ 13%]
tests/models/superpoint/test_modeling_superpoint.py s........ssss....sssssssss.............s......s....sss.s.....s.......ss....ssss.           [100%]

================================================================== warnings summary ==================================================================
venv/lib/python3.10/site-packages/_pytest/config/__init__.py:1373
  /home/steven.bucaille/transformers/venv/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob

    self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_fast_init_context_manager
  /home/steven.bucaille/transformers/tests/test_modeling_common.py:465: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.
    torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_fast_init_context_manager
  /home/steven.bucaille/transformers/tests/test_modeling_common.py:468: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.
    torch.testing.assert_allclose(

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/modeling_utils.py:4225: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead
    warnings.warn(

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:475: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    list_keypoints_scores = [

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:249: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    scores = scores[0][tuple(keypoints.t())]

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:249: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
    scores = scores[0][tuple(keypoints.t())]

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:484: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    for last_hidden_state, keypoints in zip(last_hidden_state, list_keypoints)

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:312: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    divisor = torch.tensor([[(width * scale - scale / 2 - 0.5), (height * scale - scale / 2 - 0.5)]])

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:312: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    divisor = torch.tensor([[(width * scale - scale / 2 - 0.5), (height * scale - scale / 2 - 0.5)]])

tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_attentions
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_output_hidden_state
tests/models/superpoint/test_modeling_superpoint.py::SuperPointModelTest::test_torchscript_simple
  /home/steven.bucaille/transformers/src/transformers/models/superpoint/modeling_superpoint.py:487: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    maximum_num_keypoints = max(keypoints.shape[0] for keypoints in list_keypoints)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
==================================================== 65 passed, 28 skipped, 27 warnings in 4.19s =====================================================

I applied your changes to the doc.

@ydshieh (Collaborator) commented on Mar 19, 2024

(Thank you for requesting the slow-test run @amyeroberts, and thank you for running the slow tests @sbucaille ❤️!)

@sbucaille (Author) commented

@amyeroberts I'm afraid the error is back 🤔

@amyeroberts (Collaborator) commented

@sbucaille I was wrong about the fix then! I'll check out your branch and see if I can figure out what's happening

@ydshieh (Collaborator) commented on Mar 19, 2024

> @sbucaille I was wrong about the fix then! I'll check out your branch and see if I can figure out what's happening

Don't hesitate to ping me if you have limited bandwidth, @amyeroberts.

@sbucaille (Author) commented

@amyeroberts Should I revert the previous change?

@amyeroberts (Collaborator) commented

@sbucaille For the code example? No, that also wasn't correct (I just missed it in my reviews 🙈 )

@amyeroberts (Collaborator) commented

OK, new issue. This one, I think, is because there isn't a model entry added in docs/source/en/_toctree.yml.

@sbucaille (Author) commented

@amyeroberts Added it to _toctree; I believe it belongs in the vision section?
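For reference, the entry presumably looks like this (a sketch of the usual _toctree.yml convention; the exact placement under the vision section is an assumption):

```yaml
- local: model_doc/superpoint
  title: SuperPoint
```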

@HuggingFaceDocBuilderDev commented

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@amyeroberts (Collaborator) commented

@sbucaille Yep! And all passing now! Thanks for all your work on this - great to see this added to the library 🤗

@amyeroberts merged commit 56baa03 into huggingface:main on Mar 19, 2024
22 checks passed
@sbucaille (Author) commented

Yaaay, finally! 😂 Thanks for the opportunity and all the feedback provided during this long-lasting PR; I learned a lot of things!
Time to implement SuperGlue now 😎 🤗

@sbucaille deleted the add_superpoint_fixed branch on March 19, 2024 at 21:23
ArthurZucker pushed a commit that referenced this pull request Mar 20, 2024
* Added SuperPoint docs

* Added tests

* Removed commented part

* Commit to create and fix add_superpoint branch with a new branch

* Fixed dummy_pt_objects

* Committed missing files

* Fixed README.md

* Apply suggestions from code review

Fixed small changes

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Moved ImagePointDescriptionOutput from modeling_outputs.py to modeling_superpoint.py

* Removed AutoModelForKeypointDetection and related stuff

* Fixed inconsistencies in image_processing_superpoint.py

* Moved infer_on_model logic simply in test_inference

* Fixed bugs, added labels to forward method with checks whether it is properly a None value, also added tests about this logic in test_modeling_superpoint.py

* Added tests to SuperPointImageProcessor to ensure that images are properly converted to grayscale

* Removed remaining mentions of MODEL_FOR_KEYPOINT_DETECTION_MAPPING

* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Fixed from (w, h) to (h, w) as input for tests

* Removed unnecessary condition

* Moved last_hidden_state to be the first returned

* Moved last_hidden_state to be the first returned (bis)

* Moved last_hidden_state to be the first returned (ter)

* Switched image_width and image_height in tests to match recent changes

* Added config as first SuperPointConvBlock init argument

* Reordered README's after merge

* Added missing first config argument to SuperPointConvBlock instantiations

* Removed formatting error

* Added SuperPoint to README's de, pt-br, ru, te and vi

* Checked out README_fr.md

* Fixed README_fr.md

* Test fix README_fr.md

* Test fix README_fr.md

* Last make fix-copies !

* Updated checkpoint path

* Removed unused SuperPoint doc

* Added missing image

* Update src/transformers/models/superpoint/modeling_superpoint.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Removed unnecessary import

* Update src/transformers/models/superpoint/modeling_superpoint.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Added SuperPoint to _toctree.yml

---------

Co-authored-by: steven <steven.bucaillle@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Steven Bucaille <steven.bucaille@buawei.com>
itazap pushed a commit that referenced this pull request May 14, 2024 (same squashed commit message as above)