Getting sensor output from 'observations' #22
@kvas7andy Thank you for trying out MINOS! Here are some answers to your questions (sorry for the late response).
@kvas7andy Are you able to use the sensor data as you would expect? If yes, shall we close this issue? I will summarize some of the details of this discussion in a FAQ document so that others can also refer to it.
@msavva it would be great to have Friday to run tests!
@msavva @angelxuanchang |
@kvas7andy We are currently working on an update that adds Matterport3D semantic segmentation data. It should be coming within a few days. |
@msavva
Hi, I have to continue this thread because some misunderstanding occurred while I was generating [1]. I used the new code for saving objectType data.
At the same time I save the labels, but I can't compare the obtained labels with the ones here: "Sink" is missing, and "kitchen_appliance" and "kitchen_cabinet" are switched to "ottomon" and "stand". [2]. When training on labels other than ["arch", "door"], how can I specify randomized training episodes with a concrete set of objectTypes?
[1]. Sorry for such a long text; I came up with a solution. I forgot to specify the concrete parameter. [2]. This question is still not clear to me.
Hi @kvas7andy , To confirm: problem [1] was due to the missing parameter specification. For problem [2], you will currently need to filter the scene train/val/test sets by checking for the object label you want to use as targets. To do this, you can refer to this CSV file, which contains a column named category.
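A minimal sketch of that filtering step, assuming the metadata maps model ids to categories (the CSV sample, scene lists, and model ids below are invented for illustration; the real metadata file and its columns may differ):

```python
import csv
import io

# Invented stand-in for the SUNCG models metadata CSV; the real file and
# its exact column names may differ.
SAMPLE_CSV = """id,category
model_1,door
model_2,sink
model_3,arch
"""

# Build a map from category -> set of model ids belonging to it.
cat2models = {}
for row in csv.DictReader(io.StringIO(SAMPLE_CSV)):
    cat2models.setdefault(row['category'], set()).add(row['id'])

def scene_has_any(scene_model_ids, categories):
    """True if the scene contains at least one model of any given category."""
    wanted = set().union(*(cat2models.get(c, set()) for c in categories))
    return bool(wanted & set(scene_model_ids))

# Keep only scenes containing at least one 'arch' or 'door' model
# (scene contents are invented for this example).
scenes = {'scene_a': ['model_1', 'model_9'], 'scene_b': ['model_2']}
filtered = [s for s, ms in scenes.items() if scene_has_any(ms, ['arch', 'door'])]
# filtered == ['scene_a']
```

The same predicate can be tightened to "scene has all of the specified categories" by checking each category's model set separately instead of taking their union.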
Hi @msavva, Yes, the first problem occurred only because of the missing parameter. Quite a good workaround for now, thank you. Of course, it would be much better to implement such filtering inside the framework, but I think that is the least important issue.
I have to reopen the issue and ask: how can I get information about the concrete object chosen by the simulator for the current episode? The necessary information is its type. For example, apart from the "arch" and "door" categories, can I state another one and be sure that, with 'select': 'random', the simulator will uniformly choose one of the categories and find it in the scene? So do I need to filter for all scenes which have all of the specified modelCats, or at least one of them? An example of the output I am talking about:
If stating the usual ["arch", "door"], an example of the episode_info from the simulator is:
but when I state other labels, ["window", "toilet"]:
Lots of parameters are missing, but the crucial one is objectType (which can't be inferred from objectId). I got this after looking inside
Hi, @msavva! I am sorry to rush you, but this question is vital for me. Could you please make any suggestions ASAP? Thank you!
Hi @kvas7andy ,

import csv
import json
import os

SUNCG_PATH = os.path.expanduser('~/work/suncg/')
MODEL_METADATA_FILE = os.path.expanduser('~/code/minos/minos/server/node_modules/sstk/metadata/data/suncg/suncg.planner5d.models.full.csv')

# Map each SUNCG model id to its category label.
model_id2cat = {}
for r in csv.DictReader(open(MODEL_METADATA_FILE)):
    model_id2cat[r['id']] = r['category']

def object_id_to_type(scene_id, object_id):
    # Load the house description for the scene, find the node with the
    # given object id, then look up the category of its model.
    # Note: only the first level of the house is searched here.
    house = json.load(open(os.path.join(SUNCG_PATH, 'house', scene_id, 'house.json')))
    object_nodes = house['levels'][0]['nodes']
    model_id = [x for x in object_nodes if x['id'] == object_id][0]['modelId']
    object_type = model_id2cat[model_id]
    return object_type

This is a clunky, temporary workaround. We will incorporate passing back objectType in addition to objectId in a near-future update.
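The lookup logic in that snippet can be exercised without the SUNCG files by swapping in in-memory sample data (the model ids, object ids, and categories below are invented; only the id -> modelId -> category chain mirrors the workaround):

```python
# Invented stand-in for the model-id-to-category map built from the CSV.
model_id2cat = {'s__123': 'door', 's__456': 'window'}

# Invented stand-in for a loaded house.json structure.
house = {
    'levels': [
        {'nodes': [
            {'id': '0_5', 'modelId': 's__123'},
            {'id': '0_7', 'modelId': 's__456'},
        ]}
    ]
}

def object_id_to_type_from_house(house, object_id):
    # Same lookup as the workaround, but on an already-loaded house dict:
    # find the node with the matching id, then map its modelId to a category.
    object_nodes = house['levels'][0]['nodes']
    model_id = [x for x in object_nodes if x['id'] == object_id][0]['modelId']
    return model_id2cat[model_id]

print(object_id_to_type_from_house(house, '0_5'))  # prints 'door'
```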
@msavva, |
@kvas7andy |
Thank you for your help, @msavva! |
Hi @msavva ! I wonder if the Matterport3D semantic segmentation data is ready to use? |
Hi!
There are some questions I would like to ask about sensor data and its processing. I can divide this question into several ones if that is more convenient. I would be very grateful for your detailed answers, as my own attempts were unsuccessful.
All questions are related to the API from RoomSimulator.py (which is additionally used in the Unreal implementation I use for research).
1. The step() function returns the whole state of the simulator. How and what parameters influence the sensor output from the simulator? The observations parameter from sim_config.py obviously does this, but there is one more, called outputs, which has 'color' included in its list of values. Does it influence our sensor outputs? Does it influence the reward or goal?
2. After plenty of time searching in pygame_client.py, I couldn't figure out how to get the code for recoloring rooms & objects while receiving them from observations as one of the included 'sensors'. As I understood, only the red channel actually indicates the label of an object (or room?). I would like to color these labels, but the only script implementing this is pygame_client.py.
3. How should roomtypes_file or objecttypes_file look (these are "silent" parameters in the parser of sim_args.py)? I mean, what is the whole list of available labels for objects and rooms? I found a metadata file for the original Matterport3D coloring, but can't find the same for SUNCG (in their repository).
4. What does data_viz stand for, and can this data be generated automatically for depth and semantic segmentation? Even for color data, with the standard configuration files, I get None when I check the value from the sensors dictionary.
5. I have downloaded suncg.v2.zip and mp3d.zip from the download links in the corresponding emails. As I see, the mp3d.zip file (the archive for the minos task) doesn't include region_segmentation, i.e. the object semantic segmentation files. It would be great if you could include the object segmentation data in the mp3d archive for the minos task! Also, is objectType from mp3d already implemented along with the suncg segmentation (since I can get this information from SUNCG in observations)?
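For reference, the kind of configuration being asked about might look roughly like this. This is a hypothetical sketch assembled only from the parameter names mentioned in this thread (observations, outputs, 'color', objectId, objectType); the real sim_config.py schema in MINOS may differ:

```python
# Hypothetical simulator configuration; keys are taken from the names
# discussed in this thread, not from the actual MINOS schema.
sim_config = {
    # Which sensor streams to request from the simulator.
    'observations': {
        'color': True,
        'depth': True,
        'objectId': True,
        'objectType': True,  # the field requested in this thread
    },
    # A separate 'outputs' list also names 'color'; whether it affects
    # rewards or goals is exactly the question being asked above.
    'outputs': ['color'],
}

# Collect the sensor streams that are switched on.
enabled = [name for name, on in sim_config['observations'].items() if on]
```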