Sensors for free space and semantic mapping. #273
Conversation
LGTM. Just thought we might want to cover the case where the observations are squeezed horizontally. I didn't check for the correctness of the rotation matrices, but I know you have already tested them, so I trust them (it might still be nice to have a test in the future - you can add an issue and assign it to me).
@jordis-ai2 I've now added the active neural slam module along with some tests (some of these tests depend on setting up …)

```diff
@@ -0,0 +1,3 @@
+[submodule "projects/ithor_rearrangement"]
```
I didn't know about this, but it looks awesome!
Yah, having submodules is pretty neat. In theory this should let people have their own separate repositories while still allowing us to include them under the projects directory.
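For context, a submodule entry in `.gitmodules` typically has this shape (the URL below is a placeholder, not the actual repository address):

```
[submodule "projects/ithor_rearrangement"]
	path = projects/ithor_rearrangement
	url = https://github.com/<org>/ithor_rearrangement.git
```

Cloning with `git clone --recurse-submodules` then pulls the project repository into the `projects` directory automatically.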
```python
nbatch, c, ego_h, ego_w = map_probs_egocentric.shape
allo_h, allo_w = allocentric_map_height_width

max_view_range = math.sqrt((ego_w / 2.0) ** 2 + ego_h ** 2)
```
I don't really understand why one is divided by 2 and the other is not.
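One plausible reading (an assumption on my part, not confirmed in the thread): the agent sits at the midpoint of the *bottom edge* of the egocentric map, so the farthest visible cell is a top corner, at horizontal distance `ego_w / 2` but full vertical distance `ego_h`:

```python
import math

def max_view_range(ego_h: float, ego_w: float) -> float:
    # Hypothetical geometry: agent at the midpoint of the bottom edge of
    # the egocentric map; the farthest cell it can see is a top corner.
    return math.sqrt((ego_w / 2.0) ** 2 + ego_h ** 2)

# With ego_h = 3 and ego_w = 8 this is a 3-4-5 right triangle:
print(max_view_range(3, 8))  # → 5.0
```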
allenact/utils/system.py
```diff
@@ -29,7 +29,7 @@ def get_logger() -> logging.Logger:
         logger: the `logging.Logger` object
     """
     if _new_logger():
-        _set_log_formatter()
+        init_logging("debug")
```
I'm not sure if, by doing this, we make all child processes ignore the log level of the parent process. If you've tested it with e.g. `log_level=none` and it worked, I guess it's safe to keep the change.
Good catch! I think I found a slightly better solution. The problem I was seeing was that if we didn't explicitly call `init_logging` before calling `get_logger` (e.g. this often happens if you have a script that uses an `allenact_plugin` without explicitly going through the `allenact.main` entry point), then any `print` statement would be ignored, as the default log level was `warning`. I've now changed it so that if `init_logging` has not been called from the main process then it defaults to a log level of `"default"`. This seems to work for me now (the log level is propagated to the subprocesses), but let me know if you foresee any other possible problems.
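A minimal sketch of the fallback behavior described above (the logger name and function bodies here are assumptions for illustration, not allenact's actual implementation):

```python
import logging

_LOGGER_NAME = "demo_logger"  # hypothetical name; allenact uses its own

def init_logging(level: str = "info") -> None:
    logger = logging.getLogger(_LOGGER_NAME)
    logger.setLevel(getattr(logging, level.upper(), logging.INFO))
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())

def get_logger() -> logging.Logger:
    logger = logging.getLogger(_LOGGER_NAME)
    if not logger.handlers:
        # init_logging was never called (e.g. a script importing a plugin
        # directly), so fall back to a default level instead of inheriting
        # logging's WARNING default, which would swallow info-level output.
        init_logging("info")
    return logger

log = get_logger()
log.info("visible even without an explicit init_logging call")
```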
Hi @Lucaweihs, I've just played with the example script and am having difficulty reading the semantic map results. How do these two correspond to each other: the semantic map and the top-view map?
Hi @ugurbolat, the example script is set up so it will only include those items that are relevant to rearrangement. Since the agent is never expected to move a couch, for example, the couch does not appear in the semantic map. Here's how your images overlap (roughly): [overlay image] If you want to include other object types you'll have to add them to the `ORDERED_OBJECT_TYPES = list(sorted(PICKUPABLE_OBJECTS + OPENABLE_OBJECTS))` constant.
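For instance, extending the constant might look like the following (the object lists below are placeholders for illustration, not the real `PICKUPABLE_OBJECTS` / `OPENABLE_OBJECTS` values):

```python
# Placeholder values for illustration only:
PICKUPABLE_OBJECTS = ["Apple", "Mug", "Book"]
OPENABLE_OBJECTS = ["Fridge", "Drawer"]
EXTRA_OBJECT_TYPES = ["Sofa", "Television"]  # types you also want mapped

ORDERED_OBJECT_TYPES = list(
    sorted(PICKUPABLE_OBJECTS + OPENABLE_OBJECTS + EXTRA_OBJECT_TYPES)
)
print(ORDERED_OBJECT_TYPES)
# → ['Apple', 'Book', 'Drawer', 'Fridge', 'Mug', 'Sofa', 'Television']
```

Keeping the list sorted matters because the ordering determines which channel of the semantic map each object type occupies.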
@Lucaweihs thanks for the clarification, makes sense! Missed this part. I see that `OBJECT_TYPES_WITH_PROPERTIES` contains the objects relevant to the rearrangement task. Is there a variable somewhere that already lists all the object types? Or would I need to create it from the documentation?
I generally just write a quick script for this:

```python
from ai2thor.controller import Controller
import itertools

c = Controller()
object_types_set = set()
for i in itertools.chain(range(1, 31), range(201, 231), range(301, 331), range(401, 431)):
    c.reset(f"FloorPlan{i}_physics")
    object_types_set.update(o["objectType"] for o in c.last_event.metadata["objects"])
c.stop()
print(sorted(list(object_types_set)))
```

Doing this myself just gave me the list: `['AlarmClock', 'AluminumFoil', 'Apple', 'ArmChair', 'BaseballBat', 'BasketBall', 'Bathtub', 'BathtubBasin', 'Bed', 'Blinds', 'Book', 'Boots', 'Bottle', 'Bowl', 'Box', 'Bread', 'ButterKnife', 'CD', 'Cabinet', 'Candle', 'CellPhone', 'Chair', 'Cloth', 'CoffeeMachine', 'CoffeeTable', 'CounterTop', 'CreditCard', 'Cup', 'Curtains', 'Desk', 'DeskLamp', 'Desktop', 'DiningTable', 'DishSponge', 'DogBed', 'Drawer', 'Dresser', 'Dumbbell', 'Egg', 'Faucet', 'Floor', 'FloorLamp', 'Footstool', 'Fork', 'Fridge', 'GarbageBag', 'GarbageCan', 'HandTowel', 'HandTowelHolder', 'HousePlant', 'Kettle', 'KeyChain', 'Knife', 'Ladle', 'Laptop', 'LaundryHamper', 'Lettuce', 'LightSwitch', 'Microwave', 'Mirror', 'Mug', 'Newspaper', 'Ottoman', 'Painting', 'Pan', 'PaperTowelRoll', 'Pen', 'Pencil', 'PepperShaker', 'Pillow', 'Plate', 'Plunger', 'Poster', 'Pot', 'Potato', 'RemoteControl', 'RoomDecor', 'Safe', 'SaltShaker', 'ScrubBrush', 'Shelf', 'ShelvingUnit', 'ShowerCurtain', 'ShowerDoor', 'ShowerGlass', 'ShowerHead', 'SideTable', 'Sink', 'SinkBasin', 'SoapBar', 'SoapBottle', 'Sofa', 'Spatula', 'Spoon', 'SprayBottle', 'Statue', 'Stool', 'StoveBurner', 'StoveKnob', 'TVStand', 'TableTopDecor', 'TeddyBear', 'Television', 'TennisRacket', 'TissueBox', 'Toaster', 'Toilet', 'ToiletPaper', 'ToiletPaperHanger', 'Tomato', 'Towel', 'TowelHolder', 'VacuumCleaner', 'Vase', 'Watch', 'WateringCan', 'Window', 'WineBottle']`
thanks a lot!
I've tried passing all objects to the `ORDERED_OBJECT_TYPES` variable and I am still not getting a semantic map that includes all the objects. What do you think might be causing that?
I've left this as a draft PR as we'd likely want to add some tests and better documentation, but let me know if things seem sensible.
I have also created an example script showing how these sensors can be used, but it is meant for the `ai2thor-rearrangement` repository (as some of the challenge participants have requested a semantic mapping example), so a few things need to be run before the script will work: