
Where does the 'weight' input of the Focal Loss come from? #39

Closed
leonnardoleo opened this issue Oct 17, 2018 · 1 comment
Labels
question Further information is requested


@leonnardoleo

import torch.nn.functional as F

def sigmoid_focal_loss(pred, target, weight, gamma=2.0, alpha=0.25,
                       reduction='elementwise_mean'):
    pred_sigmoid = pred.sigmoid()
    # pt here equals (1 - p_t) in the focal loss paper's notation
    pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
    # fold the alpha class-balancing factor and the caller-supplied weight together
    weight = (alpha * target + (1 - alpha) * (1 - target)) * weight
    # apply the focal modulating factor (1 - p_t)^gamma
    weight = weight * pt.pow(gamma)
    return F.binary_cross_entropy_with_logits(
        pred, target, weight, reduction=reduction)
There is an input named weight in the focal loss. Could you explain what this weight is and how I can get it? Thank you very much.
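
For context, the quoted snippet implements the focal loss of Lin et al., "Focal Loss for Dense Object Detection" (2017). In the paper's notation,

FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t),

where p_t is the predicted probability of the true class and \alpha_t equals \alpha for positive targets and 1 - \alpha otherwise. The code folds \alpha_t (1 - p_t)^{\gamma} (together with the weight argument asked about here) into the weight of F.binary_cross_entropy_with_logits, which contributes the -\log(p_t) term.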

@hellock
Member

hellock commented Oct 17, 2018

weight is used for masking the elements in our code. It is the same size as pred, and each element is either 0 or 1.
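
For illustration, here is a minimal sketch of how such a 0/1 mask could be constructed and fed to the function quoted above. The tensor shapes and the choice of which anchors count are invented for this example; in actual training code the mask would come from whichever step selects the elements to include (e.g., anchor sampling):

import torch

# hypothetical setup: 6 anchors, 1 class; values are illustrative only
pred = torch.randn(6, 1)                                # raw classification logits
target = torch.tensor([[1.], [0.], [1.], [0.], [0.], [1.]])

# suppose only the first four anchors were selected for training;
# the zeros mask the last two elements out of the loss
weight = torch.tensor([[1.], [1.], [1.], [1.], [0.], [0.]])

# 'elementwise_mean' is the pre-1.0 PyTorch spelling of the default
# reduction; on recent PyTorch pass reduction='mean' explicitly
loss = sigmoid_focal_loss(pred, target, weight, reduction='mean')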

@hellock added the question label Oct 17, 2018
druzhkov-paul pushed a commit to druzhkov-paul/mmdetection that referenced this issue Jun 17, 2020
FANGAreNotGnu pushed a commit to FANGAreNotGnu/mmdetection that referenced this issue Oct 23, 2023
FANGAreNotGnu pushed a commit to FANGAreNotGnu/mmdetection that referenced this issue Oct 23, 2023
FANGAreNotGnu pushed a commit to FANGAreNotGnu/mmdetection that referenced this issue Oct 23, 2023