how to handle ai? multiprocessing to get around the GIL in CPython.
why not threads? because we are CPU bound, not IO bound.
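a minimal sketch of that idea, using the standard library's
multiprocessing.Pool; the workload below is just a CPU-bound stand-in for a
real planning step:

from multiprocessing import Pool

def think_for_agent(agent_state):
    # stand-in for a CPU-bound planning step; running in a separate
    # process means the GIL does not serialize the work
    return sum(i * i for i in range(agent_state))

if __name__ == '__main__':
    agent_states = [100000, 200000, 300000, 400000]
    with Pool() as pool:
        # one task per agent, spread across the available cores
        results = pool.map(think_for_agent, agent_states)
    print(results)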
memory managers and blackboards should be related.
this will allow for expectations, since future states of memory can be
simulated.
memory:
should be a heap.
memories added will be wrapped with a counter.
every time a memory is fetched, its counter will be incremented.
eventually, memories that are not being used will be removed
and the counters will be reset.
memories should be a tree.
if a memory being added is similar to an existing memory, the existing memory
will be updated rather than replaced.
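a rough sketch of the counter-wrapped store with merge-on-similar; the class
and method names here are placeholders, not the library's actual api:

import heapq

class MemoryManager:
    def __init__(self, limit=100):
        self.limit = limit
        self.memories = []                 # [count, memory] pairs

    def add(self, memory, similar=None):
        similar = similar or (lambda a, b: a == b)
        # merge into an existing similar memory instead of duplicating
        for entry in self.memories:
            if similar(entry[1], memory):
                entry[1] = memory
                return
        self.memories.append([0, memory])

    def fetch(self, match):
        for entry in self.memories:
            if match(entry[1]):
                entry[0] += 1              # every fetch bumps the counter
                return entry[1]
        return None

    def prune(self):
        # keep the most-fetched memories, drop the rest, reset counters
        self.memories = heapq.nlargest(self.limit, self.memories,
                                       key=lambda e: e[0])
        for entry in self.memories:
            entry[0] = 0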
since goals and prereqs share a common function, "valid", it makes sense to
make them subclasses of a common class.
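a sketch of that shared base; the subclass and the dict blackboard are
invented for illustration:

from abc import ABC, abstractmethod

class GoalBase(ABC):
    # common base: goals and prereqs both only need to answer
    # "is this condition satisfied on the given blackboard?"
    @abstractmethod
    def valid(self, blackboard):
        ...

class HasItemGoal(GoalBase):
    def __init__(self, item):
        self.item = item

    def valid(self, blackboard):
        return self.item in blackboard.get('inventory', [])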
looking at actions.csv, it is easy to see that the behaviour of the agent will
be largely dependent on how well the action map is defined. with the little
pirate demo, it is not difficult to model his behaviour, but with larger, more
complex agents, it could quickly become a huge task to write and verify the
action map.
with some extra steps, i would like to make it possible for the agent to infer
the prereqs of a goal through clues provided within the objects that the agent
interacts with, rather than defining them within the class. i can foresee a
performance penalty for this, but that could be offset by constructing
training environments for the agent and then storing the action map that the
agent creates during training.
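one way that inference could look: objects advertise their own actions and
prereqs, and the agent assembles its action map by inspection. everything
named here is hypothetical:

class SmartObject:
    def __init__(self, name, affordances):
        self.name = name
        self.affordances = affordances     # {action: [prereq names]}

def build_action_map(objects):
    # one entry per (object, action) pair, with the prereqs taken
    # straight from the clues each object advertises
    return {(obj.name, action): prereqs
            for obj in objects
            for action, prereqs in obj.affordances.items()}

door = SmartObject('door', {'open': ['has_key']})
chest = SmartObject('chest', {'loot': ['chest_open']})
print(build_action_map([door, chest]))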
the planner:
GOAP calls for a heuristic to be used to find the optimal solution and to
reduce the number of checks made. in a physical environment where A* is used,
it makes sense to just measure the distance from the currently searched node
to the goal, but in action planning, there is no spatial dimension where a
simple solution like that can be used.
without a heuristic (h = 0), A* degenerates into uniform-cost (dijkstra)
search. the heuristic will increase the efficiency of the planner and possibly
give more consistent results. it can also be used to guide an agent's behavior
by manipulating some values.
for now, while testing and building the library, the h value will not be used.
when the library is more complete, it would make sense to build a complete
agent, then construct a set of artificial scenarios to train the agent.
based on data from the scenarios, it could be possible to hardcode the h
values. the planner could then be optimized for certain scenarios.
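a minimal sketch of such a planner, with a pluggable h that defaults to 0;
the state representation and the neighbors callback are assumptions:

import heapq
from itertools import count

def plan(start, goal_test, neighbors, h=lambda state: 0):
    # A* over abstract states; with the default h = 0 this is plain
    # uniform-cost search, matching the note above
    tie = count()                          # breaks ties in the heap
    frontier = [(h(start), next(tie), 0, start, [])]
    seen = set()
    while frontier:
        _, _, g, state, actions = heapq.heappop(frontier)
        if goal_test(state):
            return actions
        if state in seen:
            continue
        seen.add(state)
        # neighbors yields (action, cost, next_state) triples
        for action, cost, nxt in neighbors(state):
            if nxt not in seen:
                heapq.heappush(frontier, (g + cost + h(nxt), next(tie),
                                          g + cost, nxt, actions + [action]))
    return None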
This module contains the most commonly used parts of the system. Classes that
have many related sibling classes are in other modules.
since planning is done on the blackboard and i would like agents to be able to
make guesses, or plans, about other agents, agents will have to somehow be
able to be stored on and manipulated on a blackboard. this may mean that
agents and precepts will be the same thing.
1/15/12:
overhauled the concepts of goals, prereqs, and effects and rolled them into
one class. with the new system of instanced actions, it makes design sense to
consolidate them, since they all have complementary functionality. from a
performance standpoint, it may make sense to keep them separate, but this way
is much easier to conceptualize, and i am not making a system that is
performance sensitive....this is python.
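a sketch of what the consolidated class could look like, with a plain dict
standing in for the blackboard; names are illustrative:

class Condition:
    # the same key/value pair serves as a goal or prereq (valid)
    # and as an effect (touch)
    def __init__(self, key, value):
        self.key = key
        self.value = value

    def valid(self, blackboard):
        # goal / prereq role: is the condition already satisfied?
        return blackboard.get(self.key) == self.value

    def touch(self, blackboard):
        # effect role: make the condition true
        blackboard[self.key] = self.value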
simplify holding/inventory:
an object's location should always be a tuple of:
( holding object, position )
this will make position very simple to sort. the position in the tuple should
be a value that the holding object can make sense of; for example, an
environment might expect a zone and (x, y), while an agent would want an index
number in their inventory.
a side effect will be that location goals and holding goals can be consolidated
into one function.
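a sketch of that consolidated check, with a dict standing in for the
blackboard; the names are invented:

def has_location(blackboard, obj, holder, position=None):
    # one check covers both "who is holding it?" and "where exactly?"
    holding, pos = blackboard['locations'][obj]
    if holding != holder:
        return False
    return position is None or pos == position

blackboard = {'locations': {
    'sword': ('pirate', 0),                # slot 0 of the pirate's inventory
    'pirate': ('island', (1, (4, 7))),     # zone 1, tile (4, 7)
}}
print(has_location(blackboard, 'sword', 'pirate'))    # True
print(has_location(blackboard, 'pirate', 'island'))   # True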
new precepts may render a current plan invalid. to account for this, an agent
will re-plan every time it receives a precept. a better way would be to tag
each type of precept and re-plan only when a new precept that directly relates
to the current plan arrives.
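a sketch of the tagged version; the tags and the agent class are placeholders:

class Agent:
    def __init__(self):
        self.plan = ['walk', 'open', 'loot']
        self.plan_tags = {'door', 'chest'}    # what the plan depends on

    def on_precept(self, tag, data):
        # only precepts whose tag touches the current plan force a
        # re-plan; everything else is just absorbed
        if self.plan is None or tag in self.plan_tags:
            self.plan = self.replan()

    def replan(self):
        return []                             # stand-in for the planner

a = Agent()
a.on_precept('weather', None)    # ignored: plan does not depend on it
a.on_precept('door', None)       # triggers a re-plan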