Installing autogluon in conda #612

Closed
priyanga24 opened this issue Aug 12, 2020 · 134 comments

Labels: env: new (Support AutoGluon on new service), install (Related to installation), priority: 0 (Maximum priority), urgent

@priyanga24 commented Aug 12, 2020

Hi, I am getting the following error while installing autogluon in a conda environment. Can you please help?

> conda install -c anaconda autogluon
> Solving environment: failed
> 
> PackagesNotFoundError: The following packages are not available from current channels:
> 
>   - autogluon
> 
> Current channels:
> 
>   - https://conda.anaconda.org/anaconda/linux-64
>   - https://conda.anaconda.org/anaconda/noarch
>   - https://repo.anaconda.com/pkgs/main/linux-64
>   - https://repo.anaconda.com/pkgs/main/noarch
>   - https://repo.anaconda.com/pkgs/free/linux-64
>   - https://repo.anaconda.com/pkgs/free/noarch
>   - https://repo.anaconda.com/pkgs/r/linux-64
>   - https://repo.anaconda.com/pkgs/r/noarch
>   - https://repo.anaconda.com/pkgs/pro/linux-64
>   - https://repo.anaconda.com/pkgs/pro/noarch
> 
> To search for alternate channels that may provide the conda package you're
> looking for, navigate to
> 
>     https://anaconda.org
> 
> and use the search bar at the top of the page.
@Innixma (Contributor) commented Aug 12, 2020

AutoGluon is not currently available via conda, but we plan to add it in the future.

For now, please follow these instructions to install AutoGluon in a conda environment: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#installing-non-conda-packages
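
For example, a minimal sketch of that approach (the environment name and Python version are illustrative; see the linked docs and AutoGluon's install instructions for the exact commands):

conda create -n ag python=3.8
conda activate ag
pip install autogluon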

@priyanga24 (Author)

Is GPU supported for tabular data in autogluon?

@Innixma (Contributor) commented Aug 14, 2020

#263 tracks GPU support for Tabular; I believe the Tabular Neural Network supports GPU at present.

@Innixma added the env: new (Support AutoGluon on new service) label and removed the feature request label on Feb 13, 2021
@Innixma added this to the 0.2 Release milestone on Feb 25, 2021
@Innixma modified the milestones: 0.3 Release, Feature Backlog on Aug 14, 2021
@fleuryc commented Sep 9, 2021

I upvote this feature request.

@arturdaraujo commented May 31, 2022

I upvote this feature request. Actually, it could be conda-forge instead of the anaconda channel. conda-forge is much easier to upload to and already has plenty of tooling for this (which means it's even easier when a pip package already exists). It really would be a major "minor" improvement to see autogluon on conda-forge.

https://conda-forge.org/#add_recipe
https://github.com/conda-forge/staged-recipes
https://github.com/conda-incubator/grayskull
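
For example, grayskull can generate a conda recipe straight from a published PyPI package (a rough sketch; the generated recipe still needs manual review before submitting to staged-recipes):

pip install grayskull
grayskull pypi autogluon.common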

@Innixma modified the milestones: Feature Backlog, 0.5 Release on May 31, 2022
@Innixma modified the milestones: 0.5 Release, 0.6 Release on Jun 13, 2022
@adampilz

upvote

@kelszo commented Aug 10, 2022

+1 for this. Especially since autogluon doesn't support the latest dask version and thus collides with other packages that require it. Sadly, due to autogluon's structure it isn't possible to just run grayskull on the main package and publish that; all subpackages need to be packaged on conda-forge.

@Innixma added the install (Related to installation) label on Nov 2, 2022
@Innixma modified the milestones: 0.6 Release, 0.7 Release on Nov 8, 2022
@arturdaraujo commented Jan 12, 2023

It is time to add autogluon to conda. This package has more than 5k stars. I'm going to reach out to the conda Gitter for advice on this. As @kelszo said, all the sub-packages must be on conda. They are probably already there, as conda-forge hosts more than 20,000 packages.

If some are not, like the dask version required by autogluon, we can make a list of dependencies that need to be updated in autogluon before it can be published on conda.

I will do the following:

  • Find out which packages autogluon uses that are not on conda, and whether each is a version problem or simply absent (e.g. with conda search, as sketched below).
  • If packages are unavailable due to their pinned versions, make a list of dependencies to update in autogluon (if possible) so we can focus on that.
  • If a package required by autogluon is missing from conda entirely, publish it first so we can publish autogluon afterwards.
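
A quick way to check availability on conda-forge (a sketch; the dask spec mirrors the pin mentioned above):

conda search -c conda-forge autogluon
conda search -c conda-forge "dask>=2021.09.1,<=2021.11.2"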

@PertuyF commented Jan 12, 2023

Funny, I was just looking for autogluon from conda-forge today.
I suggest you have a look at the conda-forge docs if you're not familiar with contributing packages there. I had a quick look, and it will be a journey! You may be able to create the necessary packages iteratively using grayskull, in this order:

  1. autogluon.common (all deps available)
  2. in parallel:
    • autogluon.features
    • autogluon.core
  3. autogluon.tabular
  4. in parallel:
    • autogluon.multimodal will require:
      • torchtext, for which no source distribution is available on PyPI; investigation required
      • pytorch-metric-learning (all deps available)
      • openmim will require:
        • model-index, for which the modelindex dependency should be ignored (weird alias implementation on PyPI)
    • autogluon.vision will require:
      • gluoncv will require:
        • autocfg (all deps available)
    • autogluon.timeseries will require gluonts (all deps available)
  5. autogluon.text
  6. autogluon

@giswqs (Contributor) commented Jan 12, 2023

I have published some packages on conda-forge before. I can help here. Who would like to be listed as the autogluon conda-forge recipe maintainers?

@arturdaraujo

You can edit that later, but absolutely add them, especially Innixma.

@giswqs (Contributor) commented Jan 12, 2023

I have submitted a PR to add autogluon.common to conda-forge. We'll see how it goes.
conda-forge/staged-recipes#21714

@Innixma (Contributor) commented Jan 12, 2023

This is awesome, and long overdue!! Thank you so much @giswqs for initiating this and @arturdaraujo and @PertuyF for providing additional information.

@gradientsky & @tonyhoo: Please monitor this thread going forward; we should prioritize making AutoGluon available in conda-forge by v0.7 at the latest. v0.6.2 will be a good one to try to get us familiar with the process. We should also investigate automating GitHub Actions to do the conda-forge release at the same time as the PyPI release.

@giswqs (Contributor) commented Jan 12, 2023

The autogluon.common conda-forge recipe fails to build because of some missing files referenced in the setup.py. All source files need to be included in the package tar file.


@PertuyF commented Jan 12, 2023

> We should also investigate automating GitHub Actions to do the conda-forge release at the same time as the PyPI release.

Conda-forge has pretty neat CI/CD that may take care of this, at least in part, if combining packages from PyPI isn't too complex.

@giswqs (Contributor) commented Jan 12, 2023

I use GitHub Actions to automatically release the package to PyPI. It is easy to set up. Here is a sample yml file.

@PertuyF commented Jan 12, 2023

> The autogluon.common conda-forge recipe fails to build because of some missing files referenced in the setup.py. All source files need to be included in the package tar file.

I see that all packages live in the same repo and are cross-referenced at build time; however, the cross references are unreachable from the sdist published on PyPI.
In this case it may be simpler to use sources from the GitHub repo directly (requires a release tag). And it may even make sense to have one single multi-output recipe producing artifacts for all packages at once.

This would also ensure versions are in sync for each release.
Let me know if you need help on that, I should be able to point you to examples using similar strategy.

@Innixma added the priority: 0 (Maximum priority) label on Feb 1, 2023
@Innixma (Contributor) commented Feb 2, 2023

Regarding those old dask and distributed dependencies we have in 0.6.2, I am looking into removing them entirely: #2691. I think those dependencies are a remnant of old, deleted code and are no longer necessary. Hopefully they will be gone in v0.7.

@h-vetinari

While debugging the multimodal stuff, I noticed that autogluon.core depends on

    - dask-core <=2021.11.2,>=2021.09.1
    - distributed <=2021.11.2,>=2021.09.1

Pinning to a package that's over a year old is... bad. If you want tight & well-tested ranges, the flipside is that you need to keep them up to date more or less all the time. Obviously some slack will happen, but basically for every release, each upper bound should be double-checked and raised to the most recent version unless there are really substantial problems.

@h-vetinari

Numpy in particular should get a different treatment: the promise there (& very strictly held) is that on a warning-free build of 1.N, you can always upgrade to 1.{N+2}, and only 1.{N+3} might break something (due to deprecations being executed at the earliest 2 releases after they get introduced). As such, if you're on a warning-free build of numpy 1.24, you could/should use an upper bound of <1.27.

@h-vetinari commented Feb 2, 2023

OK, I managed to figure out some things in conda-forge/autogluon.multimodal-feedstock#15.

I'm categorizing the various dependencies into a couple of different classes, based on how I suggest you handle them for 0.7.0. I've noted the most current version (in conda-forge) in a comment on the right (pkg @ version).

Note: these are just the dependencies of autogluon.multimodal, not all the other autogluon-flavours.

Hard incompatibilities

With the existing pins, it was not possible to solve the environments, while it does work without an upper bound. I have not investigated where exactly the break between passing and failing is located. These are crucial to fix.

    - openmim >0.1.5,<=0.2.1                  # openmim @ 0.3.5
    - pytorch-metric-learning >=1.3.0,<1.4.0  # pytorch-metric-learning @ 2.0.0

Pytorch: ABI-dependent

These are some of the hardest dependencies (in general, but especially for making sure the CUDA stuff works), because they need to be built for one globally uniform pytorch version (otherwise they become incompatible with each other), which in conda-forge is now 1.13. This will be less of an issue in the future (when existing autogluon packages will still work fine with older pytorch after a newer version becomes available), but for now it's equally crucial to lift the pins for these.

-    - pytorch >=1.9,<1.13
-    - torchvision <0.14.0
-    - torchtext <0.14.0          # [not arm64]
-    - fairscale >=0.4.5,<=0.4.6  # [not win]
+    - pytorch >=1.9,<1.14                       # pytorch @ 1.13.1
+    - torchvision <0.15                         # torchvision @ 0.14.1
+    - torchtext <0.15            # [not arm64]  # torchtext @ 0.14.1
+    - fairscale >=0.4.5,<0.4.14  # [not win]    # fairscale @ 0.4.13

Breaking changes

These should IMO be made compatible if possible. I found these because removing the upper bounds makes the import of autogluon.multimodal fail:

    - nptyping >=1.4.4,<1.5.0         # nptyping @ 2.4.1
    - pytorch-lightning >=1.7.4,<1.8  # pytorch-lightning @ 1.9.0

Pytorch: secondary packages

These only run-depend on pytorch, so it's less of an issue, but should still be lifted if possible. As discussed above, nlpaug 1.1.11 is necessary to support pytorch CUDA in conda-forge (but mostly by a quirk of packaging).

    - nlpaug ==1.1.10                 # nlpaug @ 1.1.11
    - torchmetrics >=0.8.0,<0.9.0     # torchmetrics @ 0.11.1
    - transformers >=4.23.0,<4.24.0   # transformers @ 4.26.0

Other key packages in the ecosystem

Aside from the specific comments on numpy, it's quite user-hostile to limit these key packages to anything less than their most current version.

    - numpy >=1.21,<1.24         # numpy @ 1.24.1
    - scipy >=1.5.4,<1.10        # scipy @ 1.10.0
    - scikit-learn >=1.0.0,<1.2  # scikit-learn @ 1.2.1

Far behind

Just based on the version numbers, autogluon.multimodal is quite far behind with its upper bounds relative to what's available already. This is not ideal for users or for the solver, which has to go far back in time to pick these versions up (incurring potential other conflicts).

    - jsonschema <=4.8           # jsonschema @ 4.17.3
    - smart_open >=5.2.1,<5.3.0  # smart_open @ 6.3.0

A bit behind

    - accelerate >=0.9,<0.14          # accelerate @ 0.16
    - albumentations >=1.1.0,<=1.2.0  # albumentations @ 1.3.0
    - omegaconf >=2.1.1,<2.2.0        # omegaconf @ 2.3.0

Up to date

Side note: hard pins (==) should ideally be avoided even for up-to-date packages (note also that autogluon.multimodal effectively has a hard pin evaluate <0.3 due to the way the respective dependencies shake out).

    - defusedxml ==0.7.1             # defusedxml @ 0.7.1
    - evaluate <=0.3.0               # evaluate @ 0.3.0
    - nltk >=3.4.5,<4.0.0            # nltk @ 3.8.1
    - pandas >=1.2.5,<1.6            # pandas @ 1.5.4
    - scikit-image >=0.19.1,<0.20    # scikit-image @ 0.19.3
    - sentencepiece >=0.1.95,<0.2.0  # sentencepiece @ 0.1.97
    - seqeval <=1.2.2                # seqeval @ 1.2.2
    - text-unidecode <=1.3           # text-unidecode @ 1.3.0
    - timm <0.7                      # timm @ 0.6.12
    - tqdm >=4.38.0                  # tqdm @ 4.64.1

@giswqs (Contributor) commented Feb 2, 2023

@h-vetinari Thank you for the amazing work!! By building the artifacts locally using your recipe, I can confirm that installing autogluon.multimodal now installs the pytorch GPU version properly. This is awesome!!

python build-locally.py
conda create -n agu -c "file://${PWD}/build_artifacts" -c conda-forge  autogluon.multimodal


@Innixma (Contributor) commented Feb 2, 2023

@h-vetinari Absolutely fantastic deep dive! Very useful.

Regarding dask/distributed: These were old dependencies that are no longer necessary; in fact, we didn't even use dask/distributed at all in v0.6, but we did not realize we could remove them. They have been fully removed in #2691 and won't be present in v0.7. I agree that our team should be careful not to let these old dependencies linger without updates. I think being in conda-forge will help force us to adopt best practices here.

> Aside from the specific comments on #612 (comment), it's quite user-hostile to limit these key packages to anything less than their most current version.

I think your reasoning for numpy makes a lot of sense, and I'll consider increasing the upper limit beyond what is released. Are you suggesting that I should do the same treatment for scipy and scikit-learn? (Note: scikit-learn broke us with a minor release in the past, without prior warning).

For all core/tabular dependencies, I am tracking version updates for v0.7 here: #2813

For timeseries/multimodal dependencies, these are not as closely tracked by me due to having slightly less context. I think timeseries dependencies are largely ok.

@sxjscience please refer to this comment and see if we can address concerns regarding multimodal dependencies for v0.7 release.

@sxjscience (Collaborator)

@h-vetinari Thanks for the comments!

@Innixma Do we need to loosen the bounds on numpy, scipy, scikit-learn, and Pillow? These are widely used packages and are currently bounded as follows:

'numpy': '>=1.21,<1.24',
'pandas': '>1.4.0,<1.6',
'scikit-learn': '>=1.0.0,<1.2',
'scipy': '>=1.5.4,<1.10.0',
'psutil': '>=5.7.3,<6',
'networkx': '>=2.3,<3.0',
'tqdm': '>=4.38.0',
'Pillow': '>=9.3.0,<=9.4.0',

@Innixma (Contributor) commented Feb 2, 2023

@sxjscience I am planning to loosen the bound on numpy. For scipy and scikit-learn, I will await a response from @h-vetinari on his thoughts. (I intend to upgrade to the latest versions; the question is whether to have version ranges go beyond what has been released.)

For Pillow, this is entirely up to those working on multimodal, since that is the only module that depends on Pillow. I would recommend avoiding these micro version caps though, such as <=9.4.0. Instead, it should probably be <9.5, otherwise we would miss a theoretical security patch release like 9.4.1.

@h-vetinari commented Feb 2, 2023

> @h-vetinari Absolutely fantastic deep dive! Very useful.

Happy to hear it 🙃

> I think your reasoning for numpy makes a lot of sense, and I'll consider increasing the upper limit beyond what is released. Are you suggesting that I should do the same treatment for scipy and scikit-learn? (Note: scikit-learn broke us with a minor release in the past, without prior warning).

I'm not suggesting that you need to compromise your testing coverage or strategy; the main point is that people really eagerly want to use the latest numpy/scipy/pandas/scikit-learn1 version (features, fixes, etc.), and barring them from doing so should be avoided where possible (e.g. by ensuring you've tested against the newest versions of those packages available at the time of an autogluon release).

In my mind, I think there's a sort of sliding scale of how conservative projects are with their APIs, where numpy is most conservative, then scipy2, then scikit-learn. I think for numpy it's fine to proceed as I described above (<1.{N+3}), for scipy it's a judgement call but should be fine to do <1.{N+2}, and for scikit-learn you might want to cap at the last tested minor version, i.e. <1.{N+1} (in all cases, N is intended to mean the most recent available minor version at time of an autogluon release).

In contrast to numpy/scipy, pandas promises to use semver, so theoretically <2 should be enough, but I think it would be fine to just ensure that the last version works.

As I mentioned above, all this is relevant mostly for the PyPI side of things, where the metadata in a released version is immutable (unless you yank the whole release). In conda-forge, we have the ability to introduce version caps for a given release after the fact, so that takes quite a bit of the pressure off (though it's still not fun to have to respond to, so by all means, cap with what you're comfortable with, but try to aim for as expansive as you can make it).
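
Concretely, the sliding-scale caps described above might look like the following in a setup.py (a hypothetical excerpt, not autogluon's actual pins; it assumes numpy 1.24, scipy 1.10, and scikit-learn 1.2 are the newest minors at release time):

# Hypothetical install_requires illustrating the capping strategy above.
install_requires = [
    "numpy>=1.21,<1.27",       # <1.{N+3}: numpy's deprecation policy makes this safe
    "scipy>=1.5.4,<1.12",      # <1.{N+2}: scipy is slightly less conservative
    "scikit-learn>=1.0,<1.3",  # <1.{N+1}: cap at the last tested minor
    "pandas>=1.2.5,<2",        # pandas promises semver, so a major-version cap suffices
]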

Footnotes

  1. depending on the respective field, the same goes for other libraries, like networkx (3.0 released), pillow, pytorch, transformers, etc.

  2. Follows the same "warn for 2 releases before changing" strategy, but corner cases might break under rare circumstances; I happen to be on the scipy maintainer team, and we try to avoid this, but it's not always 100% feasible.

@h-vetinari

> I would recommend avoiding these micro version caps though, such as <=9.4.0. Instead, it should probably be <9.5, otherwise we would miss a theoretical security patch release like 9.4.1.

Yes, please! In general, try not to prohibit patch version updates (unless it's a zero-ver library where you have the expectation that things will break), but add <{major}.{next_minor_version_you_don't_trust_yet}.

@Innixma (Contributor) commented Feb 6, 2023

@h-vetinari We have adopted this strategy for our dependency management (new ranges). Thanks again for the suggestion!

We have also updated all version ranges to include the latest releases for all packages across all submodules (with the exception of networkx 3.0, which we will do in v0.8).

The only version ranges which have not yet been upgraded are in AutoGluon TimeSeries. The upgrades for those dependencies are tracked in #2831 and are planned for v0.7 release.

@h-vetinari

Great news, looking forward to the release! :)

Thanks a lot for the work on this!

@giswqs (Contributor) commented Feb 7, 2023

Finally, the last two dependencies (fastai and imodels) for autogluon.tabular are now available on conda-forge. The autogluon.tabular conda-forge package now supports both CPU and GPU on Windows, macOS, and Linux.

autogluon.core

  • add ray to required dependencies for win and linux (PR)
  • add ray support for osx and arm64 (PR)

autogluon.tabular

  • add fastai and imodels to conda-forge (PR)
  • add fastai to required dependencies (PR)
  • add catboost support for osx-arm64 (PR)
  • add optional dependencies (lightgbm, catboost, xgboost) (PR)
  • add support for pytorch-gpu

autogluon.multimodal

  • add nlpaug support for pytorch-gpu (PR)
  • add support for pytorch-gpu (PR)
  • do not require torchtext and fairscale for windows (PR)
  • add torchtext support for arm64 (Issue)
  • add support for windows

autogluon.timeseries

  • add optional dependencies (sktime, pmdarima, tbats) (PR)

autogluon

  • add support for pytorch-gpu

@giswqs (Contributor) commented Feb 7, 2023

autogluon.tabular v0.6.2 build 7 is now available on conda-forge. I tested it on my Linux machine; both the installation and testing went smoothly.

mamba install -c conda-forge autogluon.tabular


from autogluon.tabular import TabularPredictor, TabularDataset

if __name__ == '__main__':
    train_path = 'https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv'
    test_path = 'https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv'
    label = 'class'
    train_data = TabularDataset(train_path)
    test_data = TabularDataset(test_path)
    subsample_size = 10000  # subsample subset of data for faster demo, try setting this to much larger values
    if subsample_size is not None and subsample_size < len(train_data):
        train_data = train_data.sample(n=subsample_size, random_state=0)
    predictor = TabularPredictor(label=label).fit(train_data)
    predictor.persist_models('all')
    leaderboard = predictor.leaderboard(test_data)
Output:

AutoGluon training complete, total runtime = 41.91s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20230207_185114/")
Persisting 14 models in memory. Models will require 1.7% of memory.
                  model  score_test  score_val  pred_time_test  pred_time_val   fit_time  pred_time_test_marginal  pred_time_val_marginal  fit_time_marginal  stack_level  can_infer  fit_order
0   WeightedEnsemble_L2    0.871942      0.876        0.053593       0.024476   4.291905                 0.002357                0.002044           0.695114            2       True         14
1               XGBoost    0.871430      0.875        0.037228       0.011892   0.922987                 0.037228                0.011892           0.922987            1       True         11
2              CatBoost    0.869895      0.862        0.014007       0.010540   2.673804                 0.014007                0.010540           2.673804            1       True          7
3              LightGBM    0.868666      0.866        0.031962       0.010774   0.358547                 0.031962                0.010774           0.358547            1       True          4
4            LightGBMXT    0.862831      0.863        0.088603       0.015626   1.679357                 0.088603                0.015626           1.679357            1       True          3
5         LightGBMLarge    0.862627      0.868        0.024133       0.010504   0.800561                 0.024133                0.010504           0.800561            1       True         13
6      RandomForestEntr    0.857304      0.859        0.106724       0.105371   0.818089                 0.106724                0.105371           0.818089            1       True          6
7      RandomForestGini    0.855973      0.860        0.241889       0.101089   0.861663                 0.241889                0.101089           0.861663            1       True          5
8       NeuralNetFastAI    0.851469      0.851        0.223587       0.074786  15.760570                 0.223587                0.074786          15.760570            1       True         10
9        NeuralNetTorch    0.847579      0.840        0.066115       0.057496  14.421620                 0.066115                0.057496          14.421620            1       True         12
10       ExtraTreesEntr    0.846760      0.844        0.114438       0.122360   0.802011                 0.114438                0.122360           0.802011            1       True          9
11       ExtraTreesGini    0.846658      0.843        0.114969       0.109683   0.780853                 0.114969                0.109683           0.780853            1       True          8
12       KNeighborsUnif    0.770703      0.771        0.065273       0.023134   0.031498                 0.065273                0.023134           0.031498            1       True          1
13       KNeighborsDist    0.749616      0.754        0.133108       0.018203   0.011131                 0.133108                0.018203           0.011131            1       True          2

@h-vetinari

> • add support for windows

Just to set expectations, this is blocked indefinitely until we find a way to build pytorch on windows: conda-forge/pytorch-cpu-feedstock#32

@giswqs (Contributor) commented Feb 8, 2023

@h-vetinari Thanks for the heads up. I really appreciate your work on building these challenging packages on conda-forge! Without your help, we could not have made it this far. I don't expect the pytorch Windows build to be available any time soon. Of the three remaining items on the list, I think adding torchtext support for arm64 would be the top priority for the autogluon team. There are many autogluon users on Apple M1, and we would love to add autogluon.multimodal support for arm64.

autogluon.core

  • add ray support for osx and arm64 (PR)

autogluon.multimodal

  • add torchtext support for arm64 (Issue)
  • add support for windows

@arturdaraujo commented Feb 8, 2023

Thanks, h-vetinari!

@Innixma modified the milestones: 0.7 Release, 0.7 Fast-Follow Items on Feb 10, 2023
@giswqs (Contributor) commented Feb 17, 2023

autogluon v0.7.0 is now available on conda-forge. To install it on Linux and macOS:

conda install -n base mamba -c conda-forge
mamba create -n ag autogluon python -c conda-forge

The autogluon.multimodal conda-forge package does not yet support Windows. To install autogluon.tabular and autogluon.timeseries on Windows:

conda install -n base mamba -c conda-forge
mamba create -n ag autogluon.tabular autogluon.timeseries python -c conda-forge

This issue can be closed now.

@PertuyF commented Feb 17, 2023

FYI, the instructions could be simplified to:

mamba create -n ag autogluon python=3.9 -c conda-forge

The python version is optional; the most recent version allowed by the solver will be installed. Hence, if autogluon is built for 3.9 at most, that is already taken care of during environment creation.

In case you want to specifically provide guidance for mamba, it's better to install it in the base environment, alongside conda itself:

conda install -n base mamba -c conda-forge

This way mamba is "centralized" and you don't have to install it in each environment.

@Innixma (Contributor) commented Feb 17, 2023

An astounding amount of work has been put into adding AutoGluon to conda-forge.

With 133 comments, this has over double the comments of our 2nd most commented GitHub issue (51), so finally marking this as resolved is quite an exciting feeling!

Kudos to everyone!

@Innixma closed this as completed on Feb 17, 2023
@Innixma modified the milestones: 0.7 Fast-Follow Items, 0.7 Release on Feb 17, 2023
@giswqs (Contributor) commented Feb 17, 2023

@PertuyF Thank you for the suggestion. I have simplified the installation instructions as follows.

Thank you everyone for your support during this long journey! Special thanks to @PertuyF for the many suggestions and @h-vetinari for helping build some of the most challenging conda-forge recipes! This would not have been possible without your help. Thank you.

For Linux and macOS:

conda install -n base mamba -c conda-forge
mamba create -n ag autogluon python -c conda-forge

For Windows:

conda install -n base mamba -c conda-forge
mamba create -n ag autogluon.tabular autogluon.timeseries python -c conda-forge
