ExecutableWorld, designed to also work with BatchWorld #170

Merged

jaseweston merged 10 commits into master from moar, Jun 27, 2017

Conversation

@jaseweston (Contributor) commented Jun 26, 2017

A world where messages from agents can be interpreted as actions that are executed in the world, resulting in changes to the environment. Hence a grounded simulation can be implemented, rather than just dialogue between agents.
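The pattern described above can be sketched as a minimal, self-contained toy: messages are executed as actions that mutate world state, and observations are routed through the world rather than passed agent-to-agent. All class and method names here are simplified stand-ins for illustration, not ParlAI's actual API:

```python
class EchoAgent:
    """Toy agent that always issues the same action (illustrative only)."""

    def __init__(self, agent_id):
        self.id = agent_id
        self.last_obs = None

    def act(self):
        return {'id': self.id, 'text': 'go north'}

    def observe(self, obs):
        self.last_obs = obs


class ExecutableWorldSketch:
    """Messages are executed as actions that mutate world state; agents
    receive observations via the world, never directly from each other."""

    def __init__(self, agents):
        self.agents = agents
        self.state = {'position': 0}

    def execute(self, agent, act):
        # Interpret the message as an action and update the environment.
        if act.get('text') == 'go north':
            self.state['position'] += 1

    def observe(self, agent, act):
        # An agent does not observe its own action.
        if agent.id == act['id']:
            return None
        return {'id': 'world', 'text': 'position is %d' % self.state['position']}

    def parley(self):
        for agent in self.agents:
            act = agent.act()
            self.execute(agent, act)
            # All agents (might) observe the result.
            for other in self.agents:
                obs = self.observe(other, act)
                if obs is not None:
                    other.observe(obs)


agents = [EchoAgent('a'), EchoAgent('b')]
world = ExecutableWorldSketch(agents)
world.parley()
```

After one `parley`, each agent has observed the other's action through the world's `observe` filter, and the world state reflects both executed actions.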

and not conditioned on the world at all (and it is thus the same as
MultiAgentDialogWorld). """
if agent.id == act['id']:
    return None

@alexholdenmiller (Member) commented Jun 26, 2017

None or {}?

# All agents (might) observe the results.
for other_agent in self.agents:
    obs = self.observe(other_agent, acts[index])
    if obs is not None:

@alexholdenmiller (Member) commented Jun 26, 2017

`if obs` instead, or you change None to {}; either way is fine with me
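The suggestion above hinges on dict truthiness: an empty dict is falsy in Python, so a plain `if obs:` check treats {} and None the same, while an `is not None` check distinguishes them. A tiny illustrative sketch (not ParlAI code):

```python
def delivered_truthy(obs):
    """Would this observation be delivered under a plain `if obs:` check?"""
    return bool(obs)


def delivered_not_none(obs):
    """Would it be delivered under `if obs is not None:`?"""
    return obs is not None


# A real message is delivered either way.
assert delivered_truthy({'text': 'hello'}) and delivered_not_none({'text': 'hello'})

# None is skipped either way.
assert not delivered_truthy(None) and not delivered_not_none(None)

# The conventions differ only for an explicit empty observation:
assert not delivered_truthy({})   # {} is falsy, so `if obs:` skips it
assert delivered_not_none({})     # but `is not None` would still deliver it
```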

# The world has its own observe function, which the action
# first goes through (agents receive messages via the world,
# not from each other).
observation = w.observe(agents[index], validate(batch_actions[i]))

@alexholdenmiller (Member) commented Jun 26, 2017

does this have a chance of returning None?

self.batch_observe(other_index, batch_act))
obs = self.batch_observe(other_index, batch_act, index)
if obs is not None:
    batch_observations[other_index] = obs

@alexholdenmiller (Member) commented Jun 26, 2017

I wonder if you need to override this always to make sure that you don't accidentally view a stale message? maybe better to fill it with {}
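The stale-message concern above can be illustrated with a small sketch: if an agent's slot is left untouched when the observe call returns None, the agent's previous observation survives into the next step, whereas pre-filling every slot with {} clears it. The function and names here are hypothetical, not ParlAI's implementation:

```python
def build_batch_observations(num_agents, batch_observe):
    """Collect one observation per agent, pre-filling with {} so that a
    None result clears the slot instead of leaving a stale entry."""
    batch_observations = [{} for _ in range(num_agents)]
    for other_index in range(num_agents):
        obs = batch_observe(other_index)
        if obs is not None:
            batch_observations[other_index] = obs
    return batch_observations


# Agent 1 gets None this step; its slot ends up {} rather than
# whatever message it held last step.
obs_batch = build_batch_observations(
    3, lambda i: None if i == 1 else {'text': 'msg for %d' % i})
```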

@jaseweston (Contributor) commented Jun 26, 2017

I just didn't want to break existing code that doesn't set anything. I think all models should be able to cope with an empty observation though, ideally. So, in a batch there could be empty ones. But right now we don't allow for that... should I change it?

@alexholdenmiller

hoping for the best

@alexholdenmiller (Member) commented Jun 27, 2017

we can follow up later with the empty table changes

@jaseweston jaseweston merged commit 5f1d3ac into master Jun 27, 2017

@alexholdenmiller alexholdenmiller deleted the moar branch Jun 28, 2017

huzefasiyamwala added a commit to huzefasiyamwala/ParlAI that referenced this pull request Aug 17, 2017

Merging master branch (#1)
* added MTurk update to NEWS.md

* added more MTurk news

* added more explanation for multi-assignment design

* fix virtualenv path error

* Fixed bug with vqa_v2 teacher's len (#164)

* Change MTurk local db to be in-memory

* remote agent fixes, switch model param to default None (#166)

* remote fixes (#167)

* Added Personalized Dialog dataset (#163)

* Fixed bug on building fb data (#171)

* ExecutableWorld, designed to also work with BatchWorld (#170)

* small

* exec world

* small

* blah

* mm

* index

* index

* index

* small batch fixes

* small batch fixes

* Update NEWS.md

* updates to the training loop / logging (#172)

* add sigfig rounding and unit tests for utils (#173)

* train fixes (#174)

* fixes to train and dict (#177)

* bug fixes in drqa (#176)

* Add image feature extraction modules and fix minor bugs.  (#169)

* fix vqa_v1 and vqa_v2 testset image source.
* add image_featurizer
* add image_featurizers and examples.
* update image_featurizers.py and dialog_teacher.py based on the discussion.

* Fixes to ParsedRemoteAgent (#178)

* Capitalize 'all' to 'All' to follow naming convention

* Fixed missing img path and step size (#180)

* Fixing bibtex citation (#186)

Missing braces

* Added from scratch section to task tutorial (#184)

* vqa fixes (#183)

* Lazy Torch requirement (#182)

* fix a typo (#187)

* Update README.md

* Refactor MTurk html to make it more modular

* Add shutdowns (#191)

* Update run_tests_short.sh

* add insuranceqa as a task (#193)

* Save parameters of all agents through calling world.save()

* Move print statement

* Fix docstring

* mturk improvement and cleanup

* fixed path bug

* Adding cmdline args to task agents as well

* Make add_task_args a separate function

* updating save functions (#195)

* Added download resuming (#194)

* Support multiple tasks

* implemented send messages in bulk

* added HIT auto-test script

* better error message

* fixed message duplication error and other issues

* [MTurk] able to approve/reject work individually, block workers, and pay bonus; improved database status checking

* [MTurk] fixed auto test bug

* [MTurk] ignore abandoned HIT

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* world.save_agents fix (#198)

* save agents fix

* moar

* Update eval_model.py

* [MTurk] changed hit approval flow

* [MTurk] added unique_request_token for send_bonus

* fixed comment

* Clean up dir only on version update (#197)

* Update NEWS.md

* Update README.md

* group args when printing (#201)

* Rehosted COCO-IMG to allow for resuming download (#202)

* [MTurk] removed unnecessary wait

* [WIP] trying to increase max concurrent polling operations

* change poll test

* [MTurk] moved to us-east-1 (N. Virginia) for higher Lambda max connections

* turned off debug

* [MTurk] fixed bugs

* [MTurk] block_worker works

* elapsed time

* avg elapsed

* [MTurk] clean up

* [MTurk] added retry for ajax request

* Modified task tutorial (#206)

* Delete memnn_luatorch_cpu.rst

* Update README.md

* Update agents.py

* fix test for init files to exclude mturk html dir (#203)

* add task directory to mturk manager

* fix wikimovies kb teacher (#207)

* Default evaluation uses 'valid' datatype (#210)

* Add MS MARCO dataset, and modify insuranceQA to accept version (V1 or V2) (#200)

* Added retrying with exp backoff to downloads (#209)

* added reason for reject_work(); added MTurkWorld as parent class

* Update worlds.py

* added opt to MTurkWorld.__init__

* better mturk cost checking

* Fixed typo

* Update agents.py

* moved sync hit assignment info to a better place

* Fixed dialog data shared cands bug (#212)

* fix to acts indexing in batchworld (#215)

* fix insuranceqa bug (#211)

* Update NEWS.md

* Update NEWS.md

* Update NEWS.md

* Update NEWS.md

* minor comment and print statement fixes (#217)

* update basic tutorial (#219)

* Add TriviaQA task (#204)

* Update NEWS.md

* Tutorial for seq2seq agent, updated agent (#222)

* Add start of sentence token to dict.py (#221)

* minor comment and print statement fixes

* add start of sentence token to dict.py

* return original order of special tokens; change names of start and end tokens

* quick eos fix

* Update NEWS.md

* Update NEWS.md

* update seq2seq to use "END" like dictionary (#227)

* add hred to parlai (#228)

* Add placeholder agent for HRED model (#229)

* added email_worker API

* refactored init_aws

* setting assignment duration

* getting worker id earlier in time

* hide implementation detail of conversation_id to discourage change

* removed unnecessary create_hit_type_lock

* Added CLEVR task (#233)

* Exception to alert user of possible mistake when providing label and label candidate (#232)

* better handling for email_worker error case

* turn off keypress trigger if send button is disabled

* added abandoned HIT handling

* clean up

* disabled approve/reject for abandoned HITs

* added gating for pay_bonus

* fixed pay bonus bug

* fixed mturk agent act() and observe() bug

* fixed bug

* fixed submit HIT handling

* remove quotes around model path (#239)

The command line option doesn't require quotes, even if /tmp/model is a placeholder it's slightly misleading

* readme, example, and test fixes (#234)

* Update README.md

* bug where wrong word embedding is zeroed (#242)

word_dict['<NULL>'] returns the index for UNK rather than NULL and thus the wrong embedding is being zeroed

* Update README.md

* Added candidates to clevr (#243)

* Allow newlines in FbDialog format by using '\n' in the text file messages (#246)

* Simple task that just loads the specified FbDialogData file (#245)

* fromfile task

* again

* comment

* init py

* fromfile task: add docstring (#247)

* Fixed dialog_babi task 6 candidates (#248)

* Added image args to parlai args (#249)

* weak ranking system for seq2seq (#235)

* Added MemNN agent (#251)

* add references to msmarco and insuranceqa

* update msmarco description

* task list

* Added clevr to task list and item on news (#259)

* Update README.md

* fixed error in json format (#262)

* print() is a function in Python 3 (#256)

* print() is a function in Python 3 (#255)