
Correctly resurrect dead agents; include other agents properties in state #66

Merged
TimKam merged 5 commits into TimKam:normative-equi-rl on May 11, 2019

Conversation

@HelgeS commented May 10, 2019

No description provided.

@HelgeS (Author) commented May 11, 2019

@TimKam Preferences are generated as part of the desires.
The format is a dictionary, where each key is an action combination (the first agent's action + the second agent's action) and the value is the numerical index of the preference for that combination (lower means higher preference). The preferences form a total order, although some action combinations can share the same rank.

I had to make some changes to the generatePreferences method you proposed, but it should be possible to use the desires directly to calculate the actual reward.
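
For clarity, here is a minimal sketch of what such a preference dictionary could look like; the action names, `generate_preferences`, and `reward_from_preferences` below are illustrative assumptions, not the actual code in this PR:

```python
# Hypothetical sketch of the preference format described above.
from itertools import product

ACTIONS = ["cooperate", "defect"]  # assumed action set for two agents


def generate_preferences(actions):
    """Map every (own_action, other_action) pair to a rank index.

    Lower index means higher preference; ties share the same rank,
    so the ordering is total but some combinations are equally preferred.
    """
    # Assumed ranking: mutual cooperation is best, mutual defection worst.
    ranking = {
        ("cooperate", "cooperate"): 0,
        ("defect", "cooperate"): 1,
        ("cooperate", "defect"): 1,  # tie with the previous combination
        ("defect", "defect"): 2,
    }
    return {combo: ranking[combo] for combo in product(actions, repeat=2)}


def reward_from_preferences(preferences, own_action, other_action):
    """Derive a reward from the preference rank (higher rank -> lower reward)."""
    worst_rank = max(preferences.values())
    return worst_rank - preferences[(own_action, other_action)]


preferences = generate_preferences(ACTIONS)
print(preferences)
print(reward_from_preferences(preferences, "cooperate", "cooperate"))  # -> 2
```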

@TimKam TimKam merged commit f82ade0 into TimKam:normative-equi-rl May 11, 2019