From Wikipedia: "There are situations in which unlabeled data is abundant but manually labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning."
We found that most deep active learning research is carried out only experimentally, and that there is no open source framework for groups looking to use and deploy current state-of-the-art (SoTA) active learning methods on deep models. OpAL is our attempt at remedying this problem.
This repository is forked from Discriminative Active Learning, which implements all of the methods used here.
In order to run our code, you'll need these main packages:
- Python>=3.5
- NumPy>=1.14.3
- SciPy>=1.0.0
- TensorFlow>=1.5
- Keras>=2.2
- Gurobi>=8.0 (for the core set MIP query strategy)
- CleverHans>=2.1 (for the adversarial query strategy)
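
For a quick sanity check that your environment is set up, a sketch like the following can be used. The import names (for example `gurobipy` for Gurobi) are assumptions based on each package's standard Python bindings, not something defined by this repository:

```python
# Illustrative environment check -- not part of the OpAL codebase.
import importlib

REQUIRED = ["numpy", "scipy", "tensorflow", "keras"]
OPTIONAL = ["gurobipy", "cleverhans"]  # only needed for the CoreSetMIP / Adversarial strategies

for name in REQUIRED + OPTIONAL:
    try:
        module = importlib.import_module(name)
        version = getattr(module, "__version__", "unknown")
        print("{}: {}".format(name, version))
    except ImportError:
        status = "missing (required)" if name in REQUIRED else "missing (optional)"
        print("{}: {}".format(name, status))
```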
These are the method names that can be used in the experiments (the uncertainty-based strategies are sketched in code after the list):
- "Random": random sampling
- "CoreSet": the greedy core set approach
- "CoreSetMIP": the core set with the MIP formulation
- "Uncertainty": uncertainty sampling with minimal top confidence
- "UncertaintyEntropy": uncertainty sampling with maximal entropy
- "Bayesian": Bayesian uncertainty sampling with minimal top confidence
- "BayesianEntropy": Bayesian uncertainty sampling with maximal entropy
- "EGL": estimated gradient length
- "Adversarial": adversarial active learning using DeepFool
In addition, the framework supports running multiple active learning experiments simultaneously.
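
To make the uncertainty-based strategies concrete, here is a minimal sketch of the "Uncertainty" (minimal top confidence) and "UncertaintyEntropy" (maximal entropy) scoring rules applied to a Keras model's class probabilities. The function names and signatures are illustrative only and do not reflect the repository's actual API:

```python
import numpy as np

def uncertainty_query(model, x_unlabeled, batch_size):
    # "Uncertainty": select the examples whose top predicted probability is lowest.
    probs = model.predict(x_unlabeled)              # shape: (n_samples, n_classes)
    top_confidence = probs.max(axis=1)
    return np.argsort(top_confidence)[:batch_size]

def uncertainty_entropy_query(model, x_unlabeled, batch_size):
    # "UncertaintyEntropy": select the examples with the highest predictive entropy.
    probs = model.predict(x_unlabeled)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]
```

The returned indices point into the unlabeled pool; the selected examples would then be labeled and moved into the training set before the next round.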