Activating features for tests/benchmarks #2911
Unfortunately, no, it's not possible for now.
I just ran into this issue because I'm currently using a feature to decide whether or not to do some rather verbose tracing. I'd like to be able to set this during testing so that when tests fail I can look at the trace. To be clear, an actual consuming crate has no problem setting the feature and getting corresponding results. It is just for testing the library crate itself that it doesn't seem possible at the moment. Update: There is a workaround I just found:
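The workaround is presumably the self-dev-dependency trick discussed later in this thread: the crate depends on itself as a dev-dependency with the feature enabled, so the feature is active whenever `cargo test` builds the tests. A hedged sketch, with placeholder crate and feature names:

```toml
[package]
name = "mylib"
version = "0.1.0"

[features]
# Feature gating the verbose tracing (name is illustrative).
verbose-tracing = []

# The crate depends on itself, with the feature turned on, so that
# test builds get the feature without passing --features by hand.
[dev-dependencies]
mylib = { path = ".", features = ["verbose-tracing"] }
```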
The only thing is that this has to be activated manually, and being able to specify it in the library's `Cargo.toml` would be preferable.
As of now, the Cargo panic is gone. Excerpt from `Cargo.toml`:

The output of `cargo test --color=always --package my_app --bin my_app test_1 -- --nocapture` is now as follows:
You wrote this isn't possible. It appears there are no immediate objections to adding such a feature. Could you explain how fundamental the limitation is? And, if possible, sketch a solution? Just so a PR becomes more likely to happen.
@alexcrichton: the crashes have been resolved, yes, but the original feature request still stands AFAIK. Please reopen.
Oh sorry, missed that aspect.
Is there any way to add features per profile now? That is, if I want to enable certain features during tests, can I do that yet?
@sunjay unfortunately no, you'll still need to pass `--features` on the command line for now.
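Until something like this lands, the feature has to be spelled out on every invocation; for example (feature name is a placeholder):

```sh
cargo test --features verbose-tracing
cargo bench --features verbose-tracing
```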
Is this change something that would require an RFC, or could it just be implemented? It sounds like being able to specify features in the profile configuration is something that could benefit a lot of people.
@sunjay I think due to the subtle possible ramifications of auto-enabling features I'd personally want to see an RFC, but it may be worth canvassing others from @rust-lang/cargo first as well.
I would love to work on an RFC for this. Very interested to hear what the cargo team thinks.

Right now, I'm having to constantly remember to pass features in when I run cargo test. I would really prefer to not have to do that. It's an extra point of friction for anyone who uses my code.

Though it would be great to even just be able to configure features in tests, I would really like to see cargo support a features/default-features option in each of the `[profile.*]` sections. This would make it so you could configure default features for your release, debug and benchmarking profiles too. My RFC would be proposing something like the sketch below.
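A purely hypothetical illustration of the kind of configuration being proposed here; this syntax is not accepted by Cargo today, and the key names are illustrative:

```toml
# Hypothetical: automatically enable a feature whenever the
# test or bench profile is in use.
[profile.test]
features = ["verbose-tracing"]

[profile.bench]
features = ["verbose-tracing"]
```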
I think we have an RFC by @withoutboats for this already: rust-lang/rfcs#1956, which is postponed because of profiles.
@matklad Ah yup, that's exactly what I would have proposed. Thanks for looking that up! I read the discussion and it seems like that's pending on some work with how multiple profiles are applied together (e.g. how `cargo test` combines the dev and test profiles).

Is there any way to do this in a backwards-compatible way? It would be great if there was some way to just add another configuration option that is in effect the same as the command line argument, while keeping everything the same as it is now. Would that make it too hard to change to something better (like profiles) later?

The main issue I'd like to solve is that right now, introducing features into your codebase makes it so that you have to pass features in as command line arguments every time you run certain commands. If these aren't default features that you need for all builds, it's not possible to simply configure the ones you want in Cargo.toml. This makes it harder for people who are new to your codebase, or to cargo, to set things up. It is very repetitive and easy to forget. Is there an easier way to just address that issue?
I've also run into the need to have features automatically enabled in test builds. In my case I want the Python bindings' "extension-module" feature on for production builds, but off for test builds, so that I can link directly to Python during testing.
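A hedged sketch of that arrangement: `extension-module` is a real PyO3 feature, but the version number and layout here are illustrative:

```toml
[dependencies]
pyo3 = "0.20"

[features]
# On by default for production builds; tests run with
# `cargo test --no-default-features` so the test binary can
# link against libpython directly.
default = ["pyo3/extension-module"]
```

The friction being described is exactly that `--no-default-features` must be remembered on every `cargo test` invocation.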
Also it's very useful for …
Note that you can do this with `#[cfg(feature = "...")]` on the tests themselves:
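A sketch of that approach, with an illustrative feature name; the test exists only when the feature is enabled:

```rust
// Compiled and run only via `cargo test --features verbose-tracing`;
// without the feature, this test silently does not exist.
#[cfg(feature = "verbose-tracing")]
#[test]
fn emits_trace_output() {
    // ... assertions against the feature-gated behavior ...
}
```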
@ebfull That only skips building/running the tests if the feature is not enabled, instead of force-enabling the feature during the build.
An ugly workaround, at least for some cases, is to extend all your `#[cfg(feature = "...")]` attributes to `#[cfg(any(test, feature = "..."))]`. It works nicely for me, but for sure there are cases where it is not enough.
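In code, the widened cfg looks roughly like this (the feature name is a placeholder):

```rust
// cfg(test) is set when compiling this crate's own unit tests, so the
// helper exists under `cargo test` even with the feature off. Caveat:
// cfg(test) is NOT set for the library when integration tests build it,
// which is one of the cases where this workaround falls short.
#[cfg(any(test, feature = "verbose-tracing"))]
#[allow(dead_code)]
fn trace(msg: &str) {
    eprintln!("trace: {msg}");
}
```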
Yea, I feel like an RFC here would make sense if one doesn't already exist. I'm using Rust again, and finding myself really wanting something like …
In martinvonz/jj#2277, I used the dev-dependencies hack to achieve this. Unfortunately, that doesn't work well with rust-analyzer, see e.g. rust-lang/rust-analyzer#14167. Other than that, it works.

(Update: TBH, we already had issues with rust-lang/rust-analyzer#14167 before my PR; I'm not 100% sure if having to use dev-dependencies for that PR actually made the problem any worse. But now we have that problem for two reasons rather than one.)

For my purpose, I don't need any conditional compilation within a crate, only a way to prevent generating some binaries outside of tests.
Thanks @ilyagr for the clear use case! Access to test binaries is a sore point right now; I can see how this helps in that case. Another feature that could help is artifact dependencies.
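Artifact dependencies (RFC 3028) are nightly-only at the time of writing and require `-Z bindeps`; a hedged sketch with illustrative names:

```toml
# The binaries of the `test-binaries` package are built for us and
# exposed to this crate's tests through CARGO_BIN_FILE_* environment
# variables, instead of being linked in as a library.
[dev-dependencies]
test-binaries = { path = "test-binaries", artifact = "bin" }
```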
Here's another use case from our GUI application Packetry.

Although our codebase does include some unit tests written in the usual way with `#[test]`, our main UI test cannot use the standard test harness, because it has to run on the main thread of the process. That test has to be declared in `Cargo.toml` with `harness = false`.

To achieve the testing we want, we have to add a bunch of logging output to our normal UI code. We don't ever want that logging code at normal runtime. It's only there for testing. But we can't gate it behind `#[cfg(test)]`, because `cfg(test)` is not set for the library when it is built for an integration test, so it has to be a Cargo feature.

Most of the above details aren't relevant: what matters is just that we have a custom test, not using the standard harness, which requires some feature, and which we want to always be run. In short: what we want is to be able to just run `cargo test` and have that test run, without having to remember to enable the feature.

I've just tried, and the dev-dependency hack works for us. But that's not documented anywhere other than this thread, and the discussion in #9518 says that this isn't really something that should be allowed, and the only reason it's not being made an error yet is that it allows working around this issue. That doesn't sound like something we should be relying on.

I am opposed to closing this issue without either a proper solution, or another issue left open tracking what's needed. The proposal to close suggests:
Actually it is relevant, particularly the combination of (1) tests that need access to private APIs or behaviors and (2) not being able to write these as unit tests (with enough context to understand the "why"). This kind of information is the core of what's needed to decide how important a use case is and how to handle it. Is the logged information accessed through a feature-gated API, or is it exposed through stdio / GUI?
@ilyagr another potential workaround is to have the test bins auto-discovered by Cargo.
Could you elaborate? I'm far from an expert on Cargo.

Naively, I was wondering whether there could be a dev-dependency on a separate crate for the fake binaries, but I'm not sure whether the tests will be able to find the binaries from another crate.
I take your point; I was just trying to pre-emptively avoid the discussion getting dragged into the weeds of why GTK/macOS has that particular annoying limitation, when there are any number of other reasons why someone might want to have tests that don't use the standard test harness.
The test creates a UI in the same way the normal program would, but the feature causes the UI to be created off-screen, with logging code. The test then uses feature-gated APIs on the UI to set where to direct the logging, and also to inject a sequence of pre-recorded user actions into the UI. The log output is then compared against a reference file, and if the two match, the test passes. As I understand it, the reason that …
For our particular use case, if there were a way to tag a unit test as "must be run in the main thread of a process", and have Cargo honour that somehow (either by executing it from its main thread or spawning a new process), then that would be a better solution for us. Originally …
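For reference, the kind of test target being discussed above, a custom-harness test gated behind a feature, is declared like this today. A hedged sketch: the target and feature names are illustrative, but `harness = false` and `required-features` are real Cargo keys:

```toml
[[test]]
name = "ui"
path = "tests/ui.rs"
# Opt out of libtest so the test controls its own main thread.
harness = false
# Without the feature the target is skipped entirely, which is why a
# plain `cargo test` does not run it: the friction described above.
required-features = ["record-ui"]
```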
By default (controlled by the `autobins` key in `Cargo.toml`), Cargo auto-discovers binaries under `src/bin/`.
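Auto-discovery can also be turned off and binaries declared explicitly, which allows gating a binary on a feature; a hedged sketch with illustrative names:

```toml
[package]
autobins = false  # opt out of src/bin auto-discovery

# A helper binary that exists only for tests: without the feature,
# Cargo does not build this target at all.
[[bin]]
name = "fake-editor"
path = "src/bin/fake_editor.rs"
required-features = ["test-binaries"]
```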
That is what the artifact dependencies feature I linked to is about. It's not just about depending on the binaries, but about telling Cargo to build them for you.
@martinling The reason for the question I asked was to understand whether there would be problems caused by the fact that the logging is accessed through feature-gated APIs.

Btw, some other potential workarounds for your use case:
I have just verified that yes, it is possible to make that approach work.

Yes, it is possible to work around the lack of the feature that people are asking for in this issue. That doesn't make the feature request invalid. The complexity of the workarounds you're suggesting illustrates exactly why it would be helpful.
Even with a lot of work, I do not believe that it would be either practical or performant for us to implement the same test through the GTK accessibility APIs. We're not just clicking on some buttons and looking for dialog messages. We're dealing with a list model with what may be millions of rows that the UI loads from on demand. And the test also needs to control the inflow of packets into the decoder engine, so that it can verify how descriptions of items are updated by new packets. There's no way to do all that through the normal application UI; we have to hook into it specially, and that requires gating some code.

But even if we could, the fact that this may be a suitable solution for some GUI applications doesn't invalidate the use cases where it isn't.

Why is there so much desire to close this? I don't really see anyone asking for anything complicated here. A solution could be as simple as a …
I would assume you could get away with just two packages and two …

As for the point, it was to offer another alternative solution for people running into this. It might work for some people; it might not work for others. It does offer the benefit of having the minimal effect on feature activations, which is likely needed in some other cases.

I do feel it important to call out that what is important in an issue is the use cases. We shouldn't get attached to any given solution.
I think there might be a misunderstanding. None of this is "arguing back to force through a closure". I am trying to understand use cases and explore the problem space. There are valid workflow problems people are having, and they weren't all communicated earlier.

The question is whether this is still the right solution. That, I'm still not sure on. And yes, we can add a lot of little features like this, but …
That intent would have been clearer had you not first entered the discussion with a proposal to close, and then proceeded to respond to use cases by listing workarounds. But anyway. In our specific case, as I already said, I don't think this feature is actually the best solution to our problem. What we have is really a unit test, we're just lacking a way to specify that this unit test must be executed on the main thread of a program. I guess I should go raise that as a feature request for libtest. |
Are you proposing something like #4942? It doesn't seem too hard to support, but I've outlined some unresolved questions around it that would need to be addressed. If we look into RFC 3374 a bit deeper, there is a paragraph calling out some difficulties and open questions on the implementation side. I know it looks not too hard, but there are things under the surface 😞. Anyway, thank you for bringing more info to the table.
*Issue #, if available:* Resolves #458

*Description of changes:* Supports compiling the common Dafny tests in a test model for Rust. This involved having to change some properties of how we lay out the Rust crates:

* ~Removed the `dafny_impl` sub-crate and moved `implementation_from_dafny.rs` into the main `src` directory - hence replacing `::simple_boolean_dafny` with `crate::implementation_from_dafny` everywhere. I originally had this separated to better divide Dafny-generated code from Smithy-generated code, but it made implementing externs hard/impossible, and Dafny tests make use of "wrapped services" which are essentially testing-only extern shims.~
  * This turned out to be unnecessary, because it is reasonable to patch in additional `use ::simple_boolean_dafny::*;` and `use simple_boolean::*;` imports into the compiled tests. This is equivalent to the `pub use dafny_standard_library::implementation_from_dafny::*;` import the Makefile is adding, and will be replaced by a Dafny feature named something like `--rust-module-name`, as some other supported languages already have.
* Added ~`tests/tests_from_dafny/_wrapped.rs`~ `wrapped::client::Client` with the implementation of the "wrapped service", which implements the Dafny-client interface using the idiomatic Rust client.
  * Since this is only used for testing, but implements a Dafny extern that is only defined by Dafny test code, I guarded this with a `wrapped-client` feature which the Dafny-compiled integration test enables, as per rust-lang/cargo#2911 (comment)
* ~Removed `async` from the client interfaces - we'd originally kept these for better forwards-compatibility, but AFAICT it's impossible to implement the synchronous Dafny-generated trait methods with async methods. This just means that in the future we'll eventually have to provide separate async clients, but that's happened with the AWS SDKs frequently as well.~
  * Worked around this by instantiating a `current_thread` Tokio `Runtime` as per https://tokio.rs/tokio/topics/bridging instead.
Given a library crate, I want to have optional features which are available to integration-level tests and examples, so that `cargo test` tests them.
Here's what I have:
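Concretely, the setup is roughly of this shape (a hedged reconstruction; all names are placeholders):

```toml
[package]
name = "mylib"
version = "0.1.0"

[features]
# Optional feature that integration tests and examples should be
# able to turn on.
fancy = []
```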
Here's what I've tried so far. Attempt one: add the crate into dev-dependencies with the feature, i.e.:
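Presumably something along these lines (a hedged reconstruction, using the placeholder names from above):

```toml
# The crate depends on itself as a dev-dependency, with the
# optional feature enabled for test builds.
[dev-dependencies]
mylib = { path = ".", features = ["fancy"] }
```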
I kind of expected that to work, but it panics:
(version: "cargo 0.11.0-nightly (259324c 2016-05-20)")
Attempt two: see if `features.*` works like `profile.*`. Alas:
Is this possible?
See also #4663