Vowpal Wabbit


This is the Vowpal Wabbit fast online learning code.

Why Vowpal Wabbit?

Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online learning, hashing, allreduce, reductions, learning2search, active learning, and interactive learning. There is a specific focus on reinforcement learning, with several contextual bandit algorithms implemented; the system's online nature lends itself well to these problems. Vowpal Wabbit is a destination for implementing and maturing state-of-the-art algorithms with performance in mind.
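
As a small, hedged illustration of the contextual bandit support: with the --cb reduction, each label is an action:cost:probability triple recording which action was taken, the cost observed for it, and the probability with which it was chosen. The action count of 4, the features, and the file name below are invented for this sketch:

    1:2:0.4 | a c
    3:0.5:0.2 | b d

    vw --cb 4 -d train_cb.txt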

  • Input Format. The input format for the learning algorithm is substantially more flexible than might be expected: examples can have features consisting of free-form text, which is interpreted in a bag-of-words way, and there can even be multiple sets of free-form text in different namespaces (see the sketch after this list).
  • Speed. The learning algorithm is fast, comparable to the handful of other online algorithm implementations available. Several optimization algorithms are provided, with sparse gradient descent (GD) on a loss function as the baseline.
  • Scalability. This is not the same as speed. The important characteristic here is that the memory footprint of the program is bounded independently of the amount of data, so the training set is never loaded into main memory before learning starts. In addition, the hashing trick bounds the size of the feature set independently of the amount of training data.
  • Feature Interaction. Subsets of features can be internally paired so that the algorithm is linear in the cross-product of the subsets, which is useful for ranking problems. The alternative, explicitly expanding the features before feeding them to the learning algorithm, can be both computation- and space-intensive, depending on how it is handled (see the sketch after this list).
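
A minimal sketch of the input format (the labels, features, and file name are invented for illustration). Each line is one example: a label, then one or more |namespace sections of free-form features:

    1 |title nice red apple |body the apple was tasty
    -1 |title broken blue toaster |body it never worked

To pair the title and body namespaces internally, the -q flag crosses namespaces by their first letters; -b widens the hashed feature table (2^24 buckets here instead of the default 2^18):

    vw -d train.txt -q tb -b 24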

Visit the wiki to learn more.

Getting Started

For the most up-to-date instructions for getting started on Windows, macOS, or Linux, please see the wiki.
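
A hedged quick start, assuming a built vw binary on your PATH (the file names are placeholders):

    # Train on labeled examples and save the model.
    vw -d train.txt -f model.vw

    # Predict on new data in test-only mode using the saved model.
    vw -d test.txt -i model.vw -t -p predictions.txt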
