Available on PyPI. License: MIT.

(Y)et (a)nother (s)pecification (i)nference (t)ool

Yasit is a tool for learning "Boolean specifications" from demonstrations. In particular, given a set of examples (also called demonstrations) from an agent (also called the teacher) performing some task in an environment, yasit asks: what is the most likely task given those examples?


In this context, a "Boolean specification" (also called a bounded trace property) is a set of demonstrations; a demonstration satisfies the specification if it lies in the set. Currently, we assume all demonstrations have the same length. For example, consider an agent operating in a gridworld with lava, water, drying tiles, and recharge tiles. One such specification is: "eventually recharge, avoid lava, and if you get wet, visit a drying tile before recharging". Demonstrations of this specification can be visualized using github.com/mvcisback/gridworld-visualizer.
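To make the gridworld specification above concrete, here is a hypothetical encoding (not part of yasit) where each demonstration is a fixed-length tuple of tile labels and the specification is an ordinary Python predicate:

```python
def satisfies(demo) -> bool:
    """Eventually recharge, avoid lava, and if you get wet,
    visit a drying tile before recharging.

    `demo` is a fixed-length tuple of tile labels, e.g.
    ('start', 'water', 'dry', 'recharge'). This encoding is
    illustrative; yasit does not prescribe a trace format.
    """
    if 'lava' in demo:           # avoid lava
        return False
    if 'recharge' not in demo:   # eventually recharge
        return False
    first_recharge = demo.index('recharge')
    for i, tile in enumerate(demo):
        # every wet step before recharging must be followed
        # by a drying tile before the first recharge
        if tile == 'water' and i < first_recharge:
            if 'dry' not in demo[i + 1:first_recharge]:
                return False
    return True
```

The "Boolean specification" is then just the set of all same-length tuples on which `satisfies` returns True.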

The main purpose of yasit is to take a collection of demonstrations and a collection of candidate specifications and find the most probable specification. In cartoon math, yasit optimizes:

    φ* = argmax { Pr(φ | demonstrations) : φ ∈ concept_class }

For details on the posteriori probability model and algorithm, see: Vazquez-Chanlatte, Marcell, et al. "Learning Task Specifications from Demonstrations." Advances in Neural Information Processing Systems, NIPS, 2018.


yasit's API centers around the infer function. For example, let concept_class denote an iterable container (e.g., a set or list) of Python objects supporting the __call__ and __le__ dunder methods, as well as a rand_sat method, which computes the probability of satisfying the property given a common starting point for all the demonstrations.

  • Note: yasit currently only supports each demonstration starting at the same location. Future versions will relax this condition, but will require #unique_starting_points queries to rand_sat to evaluate the probability of a given property.

For example,

```python
class TraceProperty:
    def __hash__(self):
        '''Must be hashable.'''

    def __call__(self, demonstration) -> bool:
        '''Evaluate if the provided demonstration satisfies this property.'''

    def __le__(self, other) -> bool:
        '''Evaluate if (self) is known to imply (other).

        - As sets, this corresponds to subset inclusion.
        - If __le__ is constantly False, then the algorithm
          will compute the probability of *every* property and
          return the maximum.
        - Only required if giving `concept_class` as a
          flat list of specifications.
        '''

    def rand_sat(self) -> float:
        '''Return the probability (in the interval [0, 1]) of randomly
        satisfying the property if one applies actions uniformly at
        random from the starting point.
        '''
```

Then if concept_class is an iterable of objects conforming to TraceProperty's API, and demonstrations is an iterable of demonstrations (inputs to __call__), finding the most probable specification is done by:

```python
from yasit import infer

spec, score = infer(concept_class, demonstrations)
```

infer also supports taking a networkx.DiGraph. In fact, if concept_class is not a DiGraph, the first thing infer does is make it one. The current procedure for doing this is fairly slow and makes numerous <= queries. If these are slow, you may wish to implement your own concept_class-to-DiGraph converter.

  • Note that the resulting graph should be transitively reduced. This can be done using networkx.transitive_reduction.
  • If concept_class is given as a networkx.DiGraph, then the TraceProperty dunder method __le__ does not need to be implemented.
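If the default conversion is too slow, the concept class can be converted to a transitively reduced DiGraph up front. The sketch below is one possible converter. It assumes no two distinct properties in the class are equivalent (equivalent properties would create a cycle and break the reduction), and it orients an edge φ → ψ to mean φ implies ψ; the edge orientation yasit expects internally is an assumption here, so check against your version of yasit:

```python
import networkx as nx

def to_dag(concept_class):
    """Build the implication DAG of a concept class and
    transitively reduce it. Nodes must be hashable and
    support `<=` (implication / subset inclusion)."""
    concepts = list(concept_class)
    g = nx.DiGraph()
    g.add_nodes_from(concepts)
    for phi in concepts:
        for psi in concepts:
            # edge phi -> psi whenever phi implies psi
            if phi is not psi and phi <= psi:
                g.add_edge(phi, psi)
    # drop edges implied by transitivity; requires a DAG,
    # i.e., no two properties may be equivalent
    return nx.transitive_reduction(g)
```

This still makes O(n²) `<=` queries; the savings come from your `__le__` being cheaper than the default conversion's queries, or from exploiting known structure in your concept class to skip pairs entirely.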