modAL: A modular active learning framework for Python3

Welcome to the documentation for modAL!

modAL is an active learning framework for Python3, designed with modularity, flexibility and extensibility in mind. Built on top of scikit-learn, it allows you to rapidly create active learning workflows with nearly complete freedom. What is more, you can easily replace parts of it with your own custom-built solutions, allowing you to design novel algorithms with ease.
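
For instance, a pool-based workflow needs little more than wrapping a scikit-learn estimator in an ActiveLearner. The snippet below is a minimal sketch; the synthetic data and variable names such as X_pool are illustrative placeholders rather than part of the library:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from modAL.models import ActiveLearner

    # synthetic pool plus a small initial labelled set (illustrative only)
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_initial, y_initial = X[:10], y[:10]
    X_pool, y_pool = X[10:], y[10:]

    # wrap any scikit-learn estimator in an ActiveLearner
    learner = ActiveLearner(
        estimator=RandomForestClassifier(),
        X_training=X_initial, y_training=y_initial,
    )

    # query the most informative instance, label it and teach the learner
    for _ in range(20):
        query_idx, query_inst = learner.query(X_pool)
        learner.teach(X_pool[query_idx], y_pool[query_idx])
        X_pool = np.delete(X_pool, query_idx, axis=0)
        y_pool = np.delete(y_pool, query_idx, axis=0)

    print(learner.score(X, y))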

The currently supported active learning strategies are listed below; a short sketch of plugging one of them in as a query strategy follows the list.

  • uncertainty-based sampling: least confident (Lewis and Catlett), max margin and max entropy
  • committee-based algorithms: vote entropy, consensus entropy and max disagreement (Cohn et al.)
  • multilabel strategies: SVM binary minimum (Brinker), max loss, mean max loss (Li et al.), MinConfidence, MeanConfidence, MinScore, MeanScore (Esuli and Sebastiani)
  • expected error reduction: binary and log loss (Roy and McCallum)
  • Bayesian optimization: probability of improvement, expected improvement and upper confidence bound (Snoek et al.)
  • batch active learning: ranked batch-mode sampling (Cardoso et al.)
  • information density framework (McCallum and Nigam)
  • stream-based sampling (Atlas et al.)
  • active regression with maximum standard deviation sampling for Gaussian processes or ensemble regressors
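
Switching between these strategies is a matter of passing a different callable as the query_strategy argument. The following is a minimal sketch using the built-in entropy sampling; the iris data and the LogisticRegression estimator are arbitrary placeholders, not requirements:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from modAL.models import ActiveLearner
    from modAL.uncertainty import entropy_sampling  # or margin_sampling, uncertainty_sampling

    X, y = load_iris(return_X_y=True)
    X_initial, y_initial = X[::15], y[::15]  # a small seed set covering all classes

    learner = ActiveLearner(
        estimator=LogisticRegression(max_iter=1000),
        query_strategy=entropy_sampling,  # any callable with the same interface works
        X_training=X_initial, y_training=y_initial,
    )
    query_idx, query_inst = learner.query(X)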

Overview

  • modAL in a nutshell
  • Installation
  • Extending modAL
  • Contributing

Models

  • ActiveLearner
  • BayesianOptimizer
  • Committee
  • CommitteeRegressor

Query strategies

  • Acquisition functions
  • Uncertainty sampling
  • Disagreement sampling
  • Ranked batch-mode sampling
  • Information density

Examples

  • Interactive labeling with Jupyter
  • Pool-based sampling
  • Ranked batch-mode sampling
  • Stream-based sampling
  • Active regression
  • Ensemble regression
  • Bayesian optimization
  • Query by committee
  • Bootstrapping and bagging
  • Keras models in modAL workflows
  • Pytorch models in modAL workflows

API reference

  • modAL.models
  • modAL.uncertainty
  • modAL.disagreement
  • modAL.multilabel
  • modAL.expected_error
  • modAL.acquisition
  • modAL.batch
  • modAL.density
  • modAL.utils