
Feature Idea: pixi env "conda workflow" #1610

Open · ruben-arts opened this issue Jul 13, 2024 · 32 comments

Labels: conda (Issue related to Conda dependencies), UX (Related to the User Experience of pixi)

Comments

@ruben-arts (Contributor) commented Jul 13, 2024

EDIT: This initial proposal is outdated because it's already clear this is not what the community wants. Please follow the thread, as there is no clear conclusion yet!


One of the features I hear asked for most often is a "conda drop-in replacement". Note that this would not be a move against conda, but a request I would like to fulfill for the users.

This issue describes a feature that gives a more conda-like feel to the pixi environments.

Workflow example

Create an environment

pixi env create -n test-env -c conda-forge python==3.10 matplotlib "numpy>1.23"
# or 
pixi env create -f environment.yml
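
For reference, the environment.yml consumed here would presumably be conda's standard format; a minimal file matching the command above might look like this (contents purely illustrative):

cat <<'EOF' > environment.yml
name: test-env
channels:
  - conda-forge
dependencies:
  - python==3.10
  - matplotlib
  - numpy>1.23
EOF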

Activation

# like conda activate; not a fan, but possibly needed.
pixi env activate test-env
# or like pixi shell, to make it easier for pixi-first users.
pixi env -n test-env shell

Add packages

# in an activated env
pixi env install ruff
# in a non-activated env
pixi env install ruff -n test-env
# or an alias to make it easier for pixi-first users
pixi env add ruff -n test-env

Remove packages

pixi env remove ruff matplotlib -n test-env

In the background

  • Put the environment in .pixi/envs
  • Add a pixi.toml file to the .pixi/envs/test-env/conda-meta/
  • Add a pixi.lock file after the solve to the .pixi/envs/test-env/conda-meta/ (see the sketch below)
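
For illustration, the resulting on-disk layout might then look roughly like this (a sketch; the exact layout is still to be decided):

.pixi/envs/test-env/
├── bin/, lib/, ...                  # the usual conda environment contents
└── conda-meta/
    ├── pixi.toml                    # generated manifest for this environment
    └── pixi.lock                    # lock file written after the solve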

What it should be

  • A way for users who like the conda workflow to use pixi and enjoy it in their own way, hopefully motivating them to also consider the practices of pixi projects.
  • A minimal implementation of the conda CLI where conda is replaced with pixi env
  • A helper to ease the move from conda to pixi projects.
  • Using uv, like pixi projects do, instead of calling pip for the PyPI dependencies.

What it is not going to be

  • An exact reimplementation of conda's CLI
    • We don't promise to build everything that conda does into pixi.
    • We implement what the community wants, not what conda has already built, using the same upvote system we use for the other parts. So please open issues requesting features, not bug reports about us not supporting a conda feature.
  • Our main focus. We believe in the project format, vs the environment format, and that will stay our main focus.
  • We'll use pixi's config.toml and not conda's .condarc. Mixing those two configuration files is going to be a mess. We're much more likely to add a converter between these files than to support both in one tool.
  • It's not going to be a plugin or separate binary. Almost all the environment logic we need is already present in pixi. We just need to build the front-end for it and make the logic execute accordingly.
  • A moved pixi project.
    • We're not adding tasks, features, system-requirements, etc.

P.S. This is the start of the design, a way to track interest, and a place to discuss the design. After the design is finalized we can start building, and other contributors are free to help.

@Hofer-Julian (Contributor)

One thought: pixi env doesn't make clear that we're talking about global environments here. But adding a global prefix would clash with pixi global install. Naming this feature will not be obvious.

@maresb (Contributor) commented Jul 20, 2024

My initial thought is to not reinvent the wheel.

From my perspective the useful part would be

pixi create-external-env -p destination-prefix -e pixi-environment-name

and then let the user use micromamba from that point.

@ruben-arts (Contributor, Author)

@maresb from that single command I'm not getting enough information to get the gist of your proposal.

We're not looking to reinvent the wheel here; it's more about adding another wheel to the well-oiled machine that is pixi 😉
Having the conda-like workflow within the pixi tool is something a lot of people have been asking for, especially without having to install any other tools. So we're openly brainstorming what that would look like, without doing a 1-on-1 reimplementation of conda's CLI.

It sounds like what you are looking for might be pixi project export #1427, so you can switch between the tools you like. That is a different story from this issue.

@maresb (Contributor) commented Jul 21, 2024

EDIT2: I don't like what I wrote, and I think @synapticarbors is saying something very similar and articulates it much better, completely and simply.

Previous comment

Sorry that I didn't communicate clearly, and thanks @ruben-arts for pushing me to articulate this better. Here is my opinionated thought process for whatever it's worth. As TLDR, I think I'm proposing something similar but distinct from #1427. Feel free to mark my comments as off-topic if appropriate to keep this discussion focused, and if there's something to discuss I could open a new issue.

An exact reimplementation of conda's CLI

I know this proposal is preliminary, but the implication here seems to me that you're thinking about some sort of partial reimplementation, which seems perilous, and raises alarms from my admittedly-often-very-wrong software design senses.

One of the features I hear asked for most often is a "conda drop-in replacement"

I'm wondering why users might want this. Maybe users don't have the time to learn a new interface and transition from conda/mamba/micromamba to pixi. (Indeed, that was me for many months.)

I finally found the time, and now I'm sold. But it still feels like something's missing, and so perhaps it's sufficient to just address that?...

We believe in the project format, vs the environment format, and that will stay our main focus.

I totally agree, and this vision was a major selling point for me. But environments I create in pixi are bound to a particular project, and often it's nice to have some general-purpose user-scoped environments.

I'm suggesting: instead of adding some huge amount of functionality to the pixi CLI, what could we do so that pixi and micromamba can interact seamlessly, leveraging their existing strengths in their own domains?

Thus I'm pondering a situation where we could have pixi-managed environments that I could micromamba activate. One way to achieve this would be to allow pixi to export/install/mirror pixi environments to my MAMBA_ROOT_PREFIX.

(Honestly I arrived at this use case from the motivation of using pixi to replace conda-lock install, so I'm questioning how contrived this actually is 😂.)

I just realized that an alternative hacky way of mostly accomplishing my stated goal is to point micromamba to some .pixi/ containing my desired user environments, specifically:

mkdir ~/repos/user-environments
cd ~/repos/user-environments
pixi init
cat <<EOF >> pixi.toml

[feature.forge.dependencies]
conda-smithy = "*"
conda-build = "*"
boa = "*"  # still behind the times here

[environments]
forge = ["forge"]
EOF
pixi install -e forge
export MAMBA_ROOT_PREFIX=~/repos/user-environments/.pixi
micromamba activate forge

In summary, I'm thinking it would be nice to have a non-hacky way to install pixi environments (not just create an environment spec as per #1427) to a particular prefix (perhaps CONDA_PREFIX or MAMBA_ROOT_PREFIX by default). And I'm wondering if this sort of feature might obviate the desire to have conda-like functionality in pixi.
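
A purely hypothetical sketch of what that could look like (no such --prefix flag exists today; the idea is just to install a named pixi environment into an external prefix):

pixi install -e forge --prefix "$MAMBA_ROOT_PREFIX/envs/forge"   # hypothetical syntax
micromamba activate forge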

EDIT: I coincidentally ran across #188 which seems to overlap a lot with what I'm saying here.

@ethanc8 commented Jul 21, 2024

As a quick minimal solution, could we expose the configuration option detached-environments as an environment variable or CLI flag? Then you could create environments using PIXI_DETACHED_ENVIRONMENTS=1 pixi install --manifest-path /path/to/pixi.toml and enter the environment using PIXI_DETACHED_ENVIRONMENTS=1 pixi shell --manifest-path /path/to/pixi.toml.
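
For context, detached-environments already exists as an option in pixi's config.toml, so the existing route and the proposed per-invocation route would look roughly like this (the config path may differ per installation; the environment variable is the hypothetical part):

# existing route: set it once in pixi's global config
cat <<'EOF' >> ~/.pixi/config.toml
detached-environments = true    # or a path such as "/opt/pixi/envs"
EOF

# proposed route from the comment above (hypothetical variable, not implemented today)
PIXI_DETACHED_ENVIRONMENTS=1 pixi install --manifest-path /path/to/pixi.toml
PIXI_DETACHED_ENVIRONMENTS=1 pixi shell --manifest-path /path/to/pixi.toml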

@synapticarbors (Contributor) commented Jul 21, 2024

@ruben-arts -- Thank you for introducing this. I'm definitely one of those users who would benefit from an alternative workflow. In thinking about what you've proposed, you said this is not going to be "a moved pixi project". I think this may be a missed opportunity and result in something more limited than "pixi with a conda workflow". What I would want as a user of both pixi and conda is something that is just pixi with a global project/env registry, so the workflow would look something like:

# Create and register a new project in ~/.pixi/envs or some other configurable location 
# (i.e `conda create -n test-proj` but no initial dependencies)
pixi global-env init -n test-proj

# List registered global projects (i.e `conda info --envs`). Perhaps also list the environments of each project
pixi global-env list

# "activate" the project. Optionally specify environments through `-e` flag
pixi global-env shell -n test-project 

# delete global project
pixi global-env remove -n test-project

If pixi global-env shell -n test-project binds pixi to that project/env by making the manifest-path persistent within the shell, then within that shell, I should be able to do all of the things I would normally do with pixi, but without having to be in the project directory or specify --manifest-path.

My ideal here would be to have a very lightweight global-env (or global-proj) subcommand, and then have the user drop into a pixi shell with all of the features of pixi. I'm not sure if there are major technical issues to making it work like this, but my hope is that this way of doing things wouldn't undermine a bunch of the features that make pixi great, allowing users to have the best of both worlds (if they choose).

@ruben-arts (Contributor, Author)

@synapticarbors Thanks for your comment!

It sounds like you would just like to add a manifest-path to a map of environment names to manifest paths.
This might not be a completely different subcommand but more of an extra feature, e.g. by default we register the name in a file that keeps track of all the projects, and on a duplicate we either spawn an interactive prompt or simply overwrite the name with the latest modified project of that name.

Example of that map:

pixi = "~/development/pixi"
rattler-build = "~/development/rattler-build"

Then adding a --global or --detach or similar flag to the init command, to move it under the ~/.pixi/envs directory, e.g.:

> pixi init my_env -g
Created project in ~/.pixi/global_projects/my_env

Resulting in the mapping file including:

pixi = "~/development/pixi"
rattler-build = "~/development/rattler-build"
my_env = "~/.pixi/global_projects/my_env"

allowing for

> pixi shell -n my_env
(my_env) > pixi add python
Added python >=3.12.3,<4 to my_env
(my_env) > 

What I'm afraid of is that pixi projects are designed with many more features than a conda environment. Think tasks, multi-environment setups, system-requirements. All these features don't fit into the conda workflow and might spook users or influence the expected behavior.

That all said, this could be a first step and possibly enough for most users.

@ethanc8 commented Jul 22, 2024

It sounds like you would just like to add a manifest-path to a map of environment names to manifest paths.

I would also like this.

All these features don't fit into the conda workflow and might spook or influence the expected behavior.

I don't really think that's an issue -- the point of this is to support workflows that conda supported and pixi hasn't really supported while bringing pixi's unique features to those kinds of workflows. Anyone who wants extremely conda-like behavior can just use conda, mamba, or micromamba, or a new minimalistic rattler client could be built for them.

@synapticarbors (Contributor)

@ruben-arts I'm sure you've talked to more people about this, but just based on various discussions here and on Discord, I think what most people want when they say a "conda-like workflow" is just pixi with an optional global registry of environments, like conda has. What you were suggesting with:

# create project in global registry
$ pixi init my_env -g

# activate env in global registry (with optional -e flag)
$ pixi shell -n my_env 

would satisfy this, as long as things worked as expected if you were to use further pixi commands (e.g. pixi run, pixi add, pixi list, etc.) in that shell and have pixi behave as if it were in that global project's directory. I think if you keep it simple like this, you minimize the risk of bifurcating the pixi user base into people getting the full pixi experience based on a project-based workflow vs. a greatly limited conda-based one. I don't think there is a reason to try to directly map commands onto a conda-like interface. I think users will adapt if they have this alternative mechanism.

@ethanc8 commented Jul 23, 2024

I think another way would be to allow simply naming the manifest files. For example, when I run pixi shell --manifest-path=my-project/pixi.toml -n my-project, then in the future I'd be able to run pixi shell -n my-project, with pixi updating the environment whenever I change the manifest file at my-project/pixi.toml. This way, it'd be easy to edit the manifest for a project while still using that environment in related projects.
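
As a concrete sketch of that idea (the -n registration is hypothetical; --manifest-path is pixi's existing flag):

pixi shell --manifest-path=my-project/pixi.toml -n my-project   # first use: registers the name
pixi shell -n my-project                                        # later: reuse the environment by name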

@matrss commented Aug 12, 2024

While I understand the desire to have a familiar workflow, I think it is an inherently broken one that shouldn't be supported. Having environments that can be enabled anywhere encourages re-using these environments between unrelated projects, which will inevitably lead to issues where changes done for one project will break another. This in turn means that there will be no useful documented computational environment for either project, breaking one of the main advantages of pixi IMO.

To avoid this issue you need one environment per project, at which point you can just use normal pixi projects.

What I do see some value in is having a singular global environment that is always available, to install packages in that are always needed and not tied to a single project (e.g. git, git-annex and DataLad in my case). This already exists with pixi global install, although it doesn't work with python packages since each package installed with it is technically in a separate environment so python doesn't find the modules. But I also cannot think of a python module that I would want to install in this kind of way, anyway.
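
A small illustration of that limitation with the current pixi global, as described above (each install lands in its own environment):

pixi global install python
pixi global install numpy            # goes into a separate environment
python -c "import numpy"             # fails: the exposed python doesn't see numpy's environment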

@ethanc8 commented Aug 12, 2024

Having environments that can be enabled anywhere encourages re-using these environments between unrelated projects, which will inevitably lead to issues where changes done for one project will break another.

It's quite often that you would have multiple environments that are usable for a project, or for multiple related projects in different directories. Also, you might be working on a project and one of its dependencies at the same time, in which case it'd be useful to share an environment between them. There are many more examples of use cases where you would want multiple projects to share the same environment. Lastly, small "playgrounds" where you just need to write some code and test it against some dependencies, to understand how the dependencies work, are also useful, and it's not worth creating a new environment just to test a single dependency, especially if you're planning to use the information about your dependencies' behavior or API in an existing project.

@apcamargo

I'll share my thoughts as someone who has been using Conda for ~6 years in computational science (specifically, bioinformatics). It seems to me that the project/directory-centric workflow works great for software development, where I agree that reusing the same environment across multiple projects is beneficial. However, in bioinformatics, we typically don't work that way (though I'm using "we" loosely here, as this is based on personal experience rather than broad consensus). We often create environments to install the software required to run specific pipelines and to avoid conflicts between different pipelines or even different steps within the same pipeline. Forcing users to always run pipeline X in the same directory might be too restrictive since pipeline X could be executed on very different datasets across distinct scientific projects.

I also wonder how something like Snakemake would work within Pixi's current paradigm, considering it creates Conda environments on demand for different steps of a workflow, all within a single directory as the workflow runs (or at least that's my understanding of it).

It would be great to hear from others in the BioConda community, as they might have different perspectives (and might even disagree with me).

@matrss commented Aug 13, 2024

It's quite often that you would have multiple environments that are usable for a project

Pixi supports multiple environments per project already.

It's quite often that you would have multiple environments that are usable for [...] multiple related projects in different directories.

Again, I think that this is never a good idea because you will never be able to name the exact environment you have used in your project without making it exclusive to the project.

All cases that I can think of in which someone might want this fall into one of two categories:

  1. You have two distinct projects that require the same packages to be installed. They are just superficially similar and should be two distinct pixi projects (albeit potentially with the same pixi.toml and lock file). This is to ensure that if the dependencies of one project change you don't accidentally affect the other, e.g. through an update or adding a dependency.
  2. You have two directories which really need the exact same environment at all times. This simply means that they are part of the same project and might as well be sub-directories of the same pixi project, to group them together.

Also, you might be working on a project and one of its dependencies at the same time, in which case it'd be useful to share an environment between them.

I don't think so. If one project depends on the other in the sense that it needs the other installed as software, then it should just be declared as a dependency (granted, I don't know if it is possible with pixi to install a dependency from another local project; probably only as PyPI dependencies, currently). If the other project is otherwise free-standing, then I would want it to have its own environment. If both depend on each other then they might as well be the same project anyway.

Lastly, small "playgrounds" where you just need to write some code and test it against some dependencies

That's what I have a ~/Playground directory for, in which I can just create random projects as needed. Again, grouped together for easy cleanup.

[...] and it's not useful to create a new environment just to test a single dependency, especially if you're planning to use the information about your dependencies' behavior or API in an existing project.

If you plan to use this testing in an existing project, then just do it in that project. This is what branches are for, no need to set up a completely separate test project/environment.


We often create environments to install the software required to run specific pipelines and to avoid conflicts between different pipelines or even different steps within the same pipeline. Forcing users to always run pipeline X in the same directory might be too restrictive since pipeline X could be executed on very different datasets across distinct scientific projects.

How do you tie it all together when your pipeline has produced a result and you want to reference it, e.g. in a publication? You would have to reference the exact version of the pipeline, the pipeline's environment(s), and the data. In my opinion this is only really feasible when bringing all three together into one project, e.g. with DataLad to store pipeline and data, and pixi (or nix, or guix, or another similar tool) to describe the environment.

Users shouldn't be forced to always run pipeline X in the same directory. They need to create a new project for their intended data processing, install the pipeline and the necessary dependencies into it, and then run the pipeline on their data.

I would see some value in templating a pixi project with a common set of dependencies for this purpose, but in the end they must be distinct projects so that updating pipeline X and its environment does not break the provenance of the results processed from some data using a previous version of that pipeline.


This is just my opinion though, from the somewhat idealistic viewpoint of making (scientific) results traceable to their origin and giving them enough accompanying metadata to make it even remotely feasible to understand how they were conceived and what issues there might have been (or not) in the process.

@synapticarbors (Contributor)

@matrss -- While I appreciate that you have a clear idea of what a scientific workflow should look like, it's mildly off-putting to say that other workflows are broken or wrong. I think the goal here is to make the transition to pixi easier for the large community of conda users who have a particular workflow that differs from what pixi currently offers. I'm guessing a lot of people try pixi, find it doesn't work with their workflow, and then return to just using conda/mamba. It's better to give them an option, and then when they stick around, maybe they will adopt the project-based workflow.

@matrss commented Aug 13, 2024

I didn't say that other workflows are wrong, I said that I think the workflow induced by conda/mamba environments is fundamentally flawed and shouldn't be encouraged by supporting it, and I also explained why I think that way.

Yes, sure, it might make the transition to pixi a bit simpler for some, but is that worth diluting the major benefit of using pixi over conda/mamba, instead of communicating that benefit and encouraging taking advantage of it?

Yes, giving an option might get people to use pixi and adopt a project-based workflow later on, but it also makes it simpler to not re-consider a flawed workflow and stick with the familiar, instead of adopting something much better.

@ethanc8 commented Aug 13, 2024

There are still many reasons why one would not want to use the workflow that you claim is "much better". Meanwhile, pixi has many advantages over conda/mamba other than its opinionated workflow, and for many users including myself, those other advantages are much more important. For example, once pixi build is available, that would become one of pixi's largest advantages. In addition, pixi supports PyPI/wheel packages much better than conda/mamba, has a pixi.toml which is better than environment.yml, shares some of its codebase with rattler-build (which is useful for rattler-build contributors and users), and has a nicer CLI. I don't see why we should tie all of these advantages to the opinionated workflow when there is no technical reason we couldn't support them alongside the traditional conda/mamba global-environments workflow.

@apcamargo

How do you tie it all together when your pipeline has produced a result and you want to reference it e.g. in a publication? You would have to reference the exact version of the pipeline, the pipelines environment(s), and the data. In my opinion this is only really feasible when bringing all three together into one project, e.g. with DataLad to store pipeline and data, and pixi (or nix, or guix, or another similar tool) to describe the environment.

If you use Snakemake, for instance, the environment files for each step in the pipeline will be there. Even if you don't, it's not that hard to generate files describing your exact environment.

I don't disagree with you that pixi's approach has benefits. My point is that these are almost always discussed in the context of development environments. Maybe allowing Conda-like environment management can have benefits in other contexts, such as scientific research. I don't have strong opinions, just want to bring a different perspective (that might not even align with peers').

@ruben-arts (Contributor, Author)

Hi all, thanks for all your feedback.

This is a great discussion, but I would like to make it a little more concrete. Could you share your use case in a user-story format?

e.g. “As a [persona], I [want to], [so that].”

This would help us greatly, thanks in advance.

@matrss commented Aug 14, 2024

There are still many reasons why one would not want to use the workflow that you claim is "much better".

Maybe, but I also didn't say that pixi's project-based workflow is always better than everything else. I said that in every instance where someone might think they want to use conda-style environments, they would be better served by using pixi projects. So there might be reasons why one would not want to use what I called "much better", but I don't see any situation in which conda-style environments would then be a better pick than pixi's project-based workflow. If you can name a situation that doesn't fit into one of the two categories I mentioned then please do so; I genuinely can't think of any.

For example, once pixi build is available, that would become one of pixi's largest advantages.

I am not sure how you would want to take advantage of pixi build without associating the build environment with the thing to be built.

In addition, pixi supports PyPI/wheel packages much better than conda/mamba

True enough, yes.

has a pixi.toml which is better than environment.yml

Yes, but IMO pixi.toml is much better due to the underlying difference in workflow. When you try to use it to simply define a set of installed packages, it becomes an environment.yml in TOML format, without much gain.

shares some of its codebase with rattler-build (which is useful for rattler-build contributors and users)

Fair.

and has a nicer CLI

Implementing conda-style environments in pixi will necessarily weaken this point, since the CLI to interact with them must be similar to conda's CLI, otherwise you wouldn't get the benefit of familiarity.


If you use Snakemake, for instance, the environment files for each step in the pipeline will be there.

OK so I don't have much experience with snakemake, but my understanding of snakemake is that the environment files are part of a snakemake pipeline definition, which resides in a directory. So they are part of the project that defines the pipeline, which in turn will be referenced in a specific version to apply the pipeline to something (either by directly running it in that project, or including it in a different project's snakemake workflow).

I don't see why the snakemake pipeline couldn't use a pixi project to define these environments (utilizing pixi's multi environment feature), other than that snakemake does not yet support pixi in that way. But it would need to be extended to support pixi's hypothetical conda-style environments as well, so that is not a downside, just some necessary integration work.

What it does with environment files is already just a more brittle version of using a pixi project to define these environments. IMO it is not necessary to weaken pixi's benefits to support this use case, it can be implemented with what pixi has already.

Maybe allowing Conda-like environment management can have benefits in other contexts, such as scientific research.

That's the thing, I don't think it can have benefits in any context and instead think that conda-like environments are an anti-pattern that encourage bad practices, especially in scientific research. I would be happy to be proven wrong though.


e.g. “As a [persona], I [want to], [so that].”

As someone involved in research data management and wanting to make research more traceable and reproducible (improving mainly the last two aspects of the "FAIR" principles), I want to encourage people to publish artifacts with their research that fully encapsulate the process that they followed from raw data or even data acquisition to the results they wrote about, so that anyone is able to go from e.g. a plot in a paper back through the exact processes used to generate it (this includes the entire computational environment, all pipelines, etc.) and arrive at the data from which one started. I see pixi's currently implemented project-based workflow as one potential piece of this puzzle, to clearly define the computational environment(s) used. Conda-style environments are fundamentally not compatible with this vision, unless they are used in a way that simply emulates a project-based workflow. I have been recommending pixi over conda/mamba to colleagues for this reason and if pixi implemented support for conda-style environments this would make it harder for me, since I would always have to add the disclaimer "but don't use these conda-style environments, they will inevitably break reproducibility if you don't limit yourself to a single project per environment, at which point you can just use the easier to work with project-based workflow".

@dhirschfeld (Contributor)

I think this exposes a fundamental schism between software engineering and scientific development, which IME is exploratory, REPL-driven development, NOT project-based development.

It is simultaneously true that:

  1. Notebooks are awful tools for software engineering which encourage worst practices
  2. Notebooks are incredibly powerful tools for exploring and understanding your data / algorithms
  3. Kitchen-sink environments are awful for reproducibility
  4. Kitchen-sink environments are incredibly useful for exploratory development

As a quant analyst who has moved more into development, I tend to use more (pixi) project-based development in an IDE such as PyCharm or VSCode, but when I want to test things out I use Jupyter notebooks in my kitchen-sink environment.

There are different stages of development, and when you just want to test some Python code against your latest models you don't want the overhead associated with creating an arbitrary "project" just to run some code. If I want to run arbitrary code against arbitrary dependencies I have a dev environment I can do that in with zero overhead. I have dev-310 and dev-312 environments for testing arbitrary snippets against anything I might want. Each of those environments has all the tools installed to pull data from any system I might need and has all of our latest code installed (in editable mode).

I only create "projects" when I want to actually develop/deploy something and make it usable by others. This is a different phase of development from the exploratory analysis phase. It's not a phase most scientists are familiar with, so it's important to make it easy to transition to a more robust/reproducible project-based workflow, and I think pixi could be an incredibly powerful tool for that... but only if there is some way to support both workflows (whilst making the project workflow the default).

I'm (one of) the biggest pixi fans around but working in data science I didn't make the switch until I could at least simulate a conda env with pixi shell --manifest-path. Forcing users to create a "project" and install the myriad dependencies which may be required to test the latest and greatest code adds so much overhead to the exploratory development workflow that it makes it ~infeasible as compared to activating an environment with all the necessary dependencies already installed.
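
For what it's worth, a kitchen-sink dev setup like the one described above can already be approximated with a single multi-environment pixi project plus the --manifest-path trick; a sketch with placeholder names and versions:

mkdir -p ~/envs/dev && cd ~/envs/dev
pixi init
cat <<'EOF' >> pixi.toml

[feature.py310.dependencies]
python = "3.10.*"

[feature.py312.dependencies]
python = "3.12.*"

[environments]
dev-310 = ["py310"]
dev-312 = ["py312"]
EOF

# from anywhere, drop into the kitchen-sink environment of choice:
pixi shell --manifest-path ~/envs/dev/pixi.toml -e dev-312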

@cdeil (Contributor) commented Aug 21, 2024

I've been using conda for 10 years and like it. Thank you all for making the conda package ecosystem better and better and now creating pixi!

I would love to adopt pixi as a drop-in replacement and just use pixi in the coming years instead of a pixi / conda / pip combination. Similar to what some others said here, I have the use case of personal "dev" and "play" conda envs, and I also sometimes use a conda "base" env just for generic Python scripts or one-offs where I don't create a project and manifest files.

So I'm +1 to extend pixi to allow creating / managing / deleting "global envs" just like conda does.

Why not keep using conda? Well, I will for now. But pixi would give me (a) better speed, (b) easier bootstrapping, and (c) consistency with the pixi workflow and specs that I'm happy to adopt for projects where I can convince collaborators - usually long-time conda users like me.

Made one concrete related suggestion here: #1877

@maresb (Contributor) commented Aug 25, 2024

I'm one of the developers of PyMC, and we encourage users to set up Conda environments to ensure that beginners can easily get the required C compilers and BLAS. We end up dealing with a large number of beginners asking for help with broken Conda environments. I would like to add pixi to our installation instructions since I believe it's a major step up from other tools and would be much simpler for everyone in the long term. (I believe that pixi's isolated project-based config would fix most causes of broken environments.) However, many of these beginners are copy-pasting instructions without understanding what a Conda environment actually is. Some of these people will encounter problems and then run any command they can find in an attempt to "just fix it" without understanding what's broken. While pixi env in capable hands would undoubtedly be very useful, I'm concerned that it could become a source of confusion for such beginners.

In order to add pixi to the PyMC installation instructions I must convince skeptical comaintainers that the support and maintenance efforts are worthwhile. If pixi adds a pixi env subcommand then this may increase our support burden. Maintainers are comfortable with conda and to a lesser extent mamba and micromamba. Already the subtle differences between conda and micromamba lead to confusion, so a pixi env subcommand that only partially attempts to reimplement the conda interface may lead to difficulties and make it harder for me to argue to them that pixi is complete and stable enough for inclusion. From this perspective I'd be a lot more comfortable with this proposal if we picked out a few essential environment-related features and gave them a distinctive interface that can't be confused with the conda interface.

Thanks everyone for the great discussion. I appreciate the focus on the user story approach.

@ruben-arts (Contributor, Author)

After going over this issue with the core team we've got a specific proposal based on the comment by @synapticarbors

Create new project and register it immediately

pixi init project_a --register
# or using the current folder:
pixi init --register

Register an existing project with an optional name; if no name is given, it defaults to the name in the manifest.

cd project_b
pixi project register
pixi project register optional_name

This would add the following field to your configuration.

registered-projects = { project_a = "/path/to/project_a", project_b = "/path/to/project_b", optional_name = "/path/to/project_b" }

Then you can run the following commands

pixi run --name/-n project_a start
pixi shell --name/-n project_b
pixi shell-hook --name/-n optional_name

Basically, adding --name to all the commands that also accept --manifest-path.

I hope this fits most use cases while staying true to the pixi workflow.

@dhirschfeld (Contributor)

It sounds great to me & I'm keen to test it out!

@akhmerov

I am not sure whether this question belongs in this repo, but I would like to ask how this feature would interact with a pixi Jupyter kernel?

@synapticarbors (Contributor)

Awesome! Thank you to the pixi team for taking the time to consider this.

@dhirschfeld (Contributor) commented Aug 30, 2024

I am not sure whether this question belongs in this repo, but I would like to ask how this feature would interact with a pixi Jupyter kernel?

I just write my own kernelspecs. For pixi I imagine it would look like:

{
 "argv": [
  "pixi", "run", "-n", "my-env",
  "python", "-m", "ipykernel_launcher", "-f", "{connection_file}"
 ],
 "display_name": "Python (my-env)",
 "language": "python",
 "env" : {
   "A_VARIABLE": "a_value"
 },
 "metadata": {
  "debugger": true
 }
}
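
To register such a kernelspec with Jupyter, one option is the standard kernelspec install mechanism; a sketch assuming the JSON above is saved as kernel.json in a directory named pixi-my-env (and that the -n my-env registry proposed above exists):

jupyter kernelspec install --user --name pixi-my-env ./pixi-my-env
jupyter kernelspec list    # "pixi-my-env" should now be listed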

@ruben-arts added the conda (Issue related to Conda dependencies) and UX (Related to the User Experience of pixi) labels on Sep 26, 2024
@JaRoSchm

Couldn't this issue be resolved by adding a pixi global shell --environment ... to the recently introduced pixi global? As far as I understand, pixi shell comes closest to conda activate. Inside this shell, every binary of the environment should be exposed, not only the ones explicitly defined using pixi global expose. These binaries should be first in PATH so that binaries of the environment are preferred over potentially conflicting ones, just like in a conda environment.
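
In other words, something like this hypothetical session (pixi global shell does not exist today):

pixi global shell --environment my-env   # hypothetical: every binary of my-env first on PATH
# ... work inside the environment, comparable to `conda activate my-env` ...
exit                                     # leave the environment shell again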

@ethanc8 commented Oct 31, 2024

Couldn't this issue be resolved by adding a pixi global shell --environment ... to the recently introduced pixi global? As far as I understand, pixi shell comes closest to conda activate. Inside this shell, every binary of the environment should be exposed, not only the ones explicitly defined using pixi global expose. These binaries should be first in PATH so that binaries of the environment are preferred over potentially conflicting ones, just like in a conda environment.

@ruben-arts's proposal (#1610 (comment)) seems a lot cleaner -- you just make environments like in a normal pixi project, and then you give them names which you can use from elsewhere, rather than using --manifest-path.

@dhirschfeld (Contributor)

Here's my take on it:

# drop into the pixi shell of a named dev environment (defaults to dev310)
activate() {
    env="${1:-dev310}"
    history -a    # flush shell history before starting the sub-shell
    pixi shell --manifest-path "/opt/python/envs/${env}/pixi.toml"
}
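
Usage is then e.g. activate dev312, assuming a pixi project exists at /opt/python/envs/dev312/pixi.toml.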

@JaRoSchm commented Nov 1, 2024

@ruben-arts's proposal (#1610 (comment)) seems a lot cleaner -- you just make environments like in a normal pixi project, and then you give them names which you can use from elsewhere, rather than using --manifest-path.

But the same is true for pixi global, right? There the environments also get a name and you don't have to use --manifest-path. In their blog post they even state that pixi global can also be used as a replacement for conda environments. The only difference is that you have to decide which binaries to expose and how to refer to them, e.g. python_env1, python_env2. This could be avoided with my proposal above.
