Description
The full pipeline & API for ranking should be created. It should be handled by a separate library, depending on the `draco` and `draco-web` modules.
Requirements
* Create models to represent VIS tasks and their ranking logic depending on a given encoding
* Create default VIS tasks and their ranking logic (at least a stub)
* Expose a single ranking function, accepting the following input (see the sketch after this list):
  * data as `any[]`
  * encoding preferences of the user (which columns should be included)
  * maximum number of models
  → should output an augmented `SolutionSet` with $n$ costs per recommendation, where $n$ is the number of VIS tasks
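A minimal sketch of how this single entry point could look in TypeScript. All type and member names below (`RankOptions`, `RankedSolution`, `rank`, and their fields) are illustrative assumptions, not the final `@visrecly/ranking` API:

```ts
// Hypothetical API sketch -- names and shapes are assumptions, not the final design.

export interface EncodingPreference {
  // Names of the data columns the user wants to see encoded.
  fieldNames: string[];
}

export interface RankOptions {
  data: any[]; // raw dataset rows
  encodingPreference: EncodingPreference;
  numMaxModels: number; // maximum number of models draco should return
}

// One entry of the augmented `SolutionSet`: the data-based cost from draco
// plus one composite cost per VIS task.
export interface RankedSolution {
  vlSpec: Record<string, unknown>; // the recommended vega-lite spec
  dataCost: number; // cost of the draco model for the generated ASP
  costPerTask: Record<string, number>; // VIS task name -> composite cost
}

export async function rank(options: RankOptions): Promise<RankedSolution[]> {
  // 1. Run draco on `options.data`, constrained by the encoding preference
  //    and capped at `options.numMaxModels` models.
  // 2. Augment each solution with one composite cost per VIS task.
  throw new Error('Not implemented: illustrative signature only');
}
```

With a shape like this, clients receive all $n$ per-task costs for each recommendation and can post-process them for representation, as described further below.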
Additional Context
The VIS tasks and their associated ranking logic must be declarative and easily modifiable. This is absolutely necessary so that the tool can be enhanced from a domain point of view without significant changes to the codebase.
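One way such a declarative task model could look, as a sketch: the `VisTask` shape, the mark subset, and the example weights are assumptions for illustration, not the actual `@visrecly/vis-tasks` definitions:

```ts
// Hypothetical declarative VIS task model -- a plain data structure that
// domain experts can extend without touching the ranking pipeline itself.

// A small subset of vega-lite mark types, for illustration only.
type Mark = 'bar' | 'line' | 'point' | 'area';

export interface VisTask {
  name: string;
  description: string;
  // Cost contribution of each mark for this task; lower means better suited.
  // Marks missing from the map fall back to `defaultCost`.
  costPerMark: Partial<Record<Mark, number>>;
  defaultCost: number;
}

// Example stub, with weights invented purely for illustration.
export const valueComparison: VisTask = {
  name: 'Value Comparison',
  description: 'Compare magnitudes of values across categories.',
  costPerMark: { bar: 0, point: 1, line: 2 },
  defaultCost: 3,
};
```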
Ranking Approach
The proposed ranking algorithm has two main pillars:
* Data-oriented: obtained from `draco` by reading the cost of a model for an ASP
* Task-oriented: "hard-coded" based on relevant literature, by declaring how useful a given encoding is for a specified task

The recommendations are generated as `vega-lite` specs; hence, the used marks need to be matched using its schema, which specifies a fixed set of marks (including composite marks).

In a very high-level overview, the steps to take are:
1. Let `draco` process the raw data with its wisdom collected as `.lp` files (as a part of "Use different levels of constraints" uwdata/draco#51 and "Hard constraints" uwdata/draco#52), and save the `SolutionSet` output.
2. Iterate through the `SolutionSet`, and assign a VIS-task-based weight to each solution per VIS task. After this, given that we have $n$ VIS tasks at hand, we can generate $n$ composite costs per recommendation by summing the data-based cost with the VIS-task-based cost (see the sketch after this list).
3. Clients of the library can further process these details and use them for representation.
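A minimal sketch of this composite-cost step, reusing the illustrative shapes from the earlier snippets; the lookup-by-mark weighting is a placeholder assumption, not the actual implementation:

```ts
// Hypothetical composite-cost computation -- names and logic are assumptions.

interface Solution {
  dataCost: number; // cost read from the draco model
  mark: string; // mark of the recommended vega-lite spec
}

interface VisTaskWeights {
  name: string;
  costPerMark: Partial<Record<string, number>>;
  defaultCost: number;
}

// For each of the n VIS tasks, sum the data-based cost with the
// VIS-task-based cost, yielding n composite costs per recommendation.
function compositeCosts(
  solution: Solution,
  tasks: VisTaskWeights[],
): Record<string, number> {
  const costs: Record<string, number> = {};
  for (const task of tasks) {
    const taskCost = task.costPerMark[solution.mark] ?? task.defaultCost;
    costs[task.name] = solution.dataCost + taskCost;
  }
  return costs;
}
```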
Linked commits:
* feat: generate `@visrecly/ranking`
* feat: generate `@visrecly/vis-tasks`
* fix: fix `tsconfig`s
  add proper path mapping and `include`, `exclude` lists
* feat: declare vis-tasks
* refactor: create `@visrecly/data` to manage datasets
* refactor: handle ranking on the server side
  solution picked to skip custom webpack config for `clingo-wasm`
* feat: use `react-query` for API calls
* feat: support specifying data url for the vl specs
* refactor: specify `numMaxModels` for ranking
* refactor: sanitize dependency graph
  minimize the number of outgoing edges per module node
* feat: generate ASP dynamically for `EncodingPreference`
* refactor: consider only data column names as prefs
* feat: add getter for `schema` in `@visrecly/draco-web`
* feat: add utils and enhance dep graph
* feat: support `VegaLiteCompositeMark`
* refactor: use `VegaLiteSpec` type alias
* feat: initial ranking implementation