
Refactor compat/getnameinfo.cc #619

Closed
wants to merge 3 commits

Conversation

@kinkie (Contributor) commented Apr 28, 2020

This file contains a lot of cruft and unnecessary complexity.
Simplify, remove and reformat

getnameinfo.cc contains cruft and unnecessary complexity.
Remove the needless parts, and reformat:
  • HAVE_SA_LEN is never set anywhere
  • double C-style cast -> reinterpret_cast
@yadij (Contributor) commented Apr 28, 2020

As indicated in the file prefix, this is code duplicated from fetchmail. It is also compat code. I try to keep third-party code in this area as close to the origin's as possible until such time as we can drop it entirely.

@kinkie (Contributor, Author) commented Apr 29, 2020

As indicated in the file prefix this is code duplicated from fetchmail. It is also compat code. I try to keep third-party code in this area as close to the origin's as possible until such time as we can drop it entirely.

That file was last changed in 2015; after that there were only source-format-related changes, until a bugfix from you in January. It is stable, and it has already diverged due to the fix and formatting, so we might as well make it more readable.
Or, if not, let's agree that it (and maybe all other files in compat/?) should not be subject to source formatting rules and/or is not relevant to the discussion about adopting clang-format (PR #603).

@rousskov (Contributor) left a comment

Simplify, remove and reformat

Simplify: If we really need to keep this code very close to its version maintained by the original authors, then perhaps Amos is right that such simplification is not a good idea. However, I have not verified how close our code is to its current outside version and whether that outside version is indeed actively maintained. Moreover, I have a strong suspicion that we should stop treating copied code specially. License/copyright statement preservation aside, once the code is copied, it should be treated as any other Squid code. If you do not want that, do not copy -- there are ways to package Squid sources together with externally maintained sources.

Remove: I would probably be OK with the "removing cruft" part, assuming that means removing code that is never compiled.

Reformat: I am worried about creating a precedent of manually reformatting fairly complex and otherwise-unchanged low-level code. I would leave unchanged-code formatting to automation. Ideally, compat/ code should be auto-formatted. It should also be moved to src/. These two changes would probably require an agreement on the "stop treating copied code specially" part discussed in the first bullet.

@kinkie (Contributor, Author) commented Apr 29, 2020

Simplify, remove and reformat

Simplify: If we really need to keep this code very close to its version maintained by the original authors, then perhaps Amos is right that such simplification is not a good idea. However, I have not verified how close our code is to its current outside version and whether that outside version is indeed actively maintained. Moreover, I have a strong suspicion that we should stop treating copied code specially. License/copyright statement preservation aside, once the code is copied, it should be treated as any other Squid code. If you do not want that, do not copy -- there are ways to package Squid sources together with externally maintained sources.

in fetchmail that code is apparently also retained in a compat directory named "KAME".
We have elided bits (via #if 0), reformatted (the original has tabs and not spaces), added our own include guards. As a result, diff can't match the two files.
I agree with you on the non-special treatment of imported code.

Remove: I would probably be OK with the "removing cruft" part, assuming that means removing code that is never compiled.
That's what I've done, yes.
For instance, USE_GETIPNODEBY is never defined anywhere.

Reformat: I am worried about creating a precedent of manually reformatting fairly complex and otherwise-unchanged low-level code. I would leave unchanged-code formatting to automation. Ideally, compat/ code should be auto-formatted. It should also be moved to src/. These two changes would probably require an agreement on the "stop treating copied code specially" part discussed in the first bullet.

Reformatting has been done via automation, yes. I'm not sure about moving compat/ code to src/, but I don't have a strong enough reason to block anyone else from doing it.

@rousskov (Contributor) commented

As a result, diff can't match the two files

diff --ignore-all-space reports some differences related to preprocessor guards and removed/reformatted code. They all look unimportant to me, but I did not study them in detail.

in fetchmail that code is apparently also retained in a compat directory named "KAME".

FTR, the above refers to the legacy_64 branch of fetchmail. That KAME file has not been modified by the fetchmail team in ~12 years. Interestingly enough, they dropped the getnameinfo() compatibility wrapper completely in their master branch.

I'm not sure on moving compat/ code to src/

To clarify: I meant moving the whole compat directory, not polluting src/ with individual compat/ entries. This move may not be trivial due to special rules for using compatibility files, but I think it should be done to minimize cases of compat/ neglect due to its current "out of src, out of mind" location.

@kinkie (Contributor, Author) commented Apr 30, 2020

As a result, diff can't match the two files

diff --ignore-all-space reports some differences related to preprocessor guards and removed/reformatted code. They all look unimportant to me, but I did not study them in detail.

I hadn't considered --ignore-all-space. But yes, this matches my eyeballing.

in fetchmail that code is apparently also retained in a compat directory named "KAME".

FTR, the above refers to the legacy_64 branch of fetchmail. That KAME file has not been modified by the fetchmail team in ~12 years. Interestingly enough, they dropped the getnameinfo() compatibility wrapper completely in their master branch.

Thanks for checking, I hadn't noticed

I'm not sure on moving compat/ code to src/

To clarify: I meant moving the whole compat directory, not polluting src/ with individual compat/ entries. This move may not be trivial due to special rules for using compatibility files, but I think it should be done to minimize cases of compat/ neglect due to its current "out of src, out of mind" location.

Oh, got it; I hadn't understood. That sounds reasonable, and I agree with it.

@rousskov (Contributor) left a comment

My original review, besides complaining about manual formatting, had an inlined change request for a bug introduced by this PR. It looks like GitHub dropped that change request. I am restoring it in this followup.

Comment on lines 268 to 270:

 #if INET6
-    case AF_INET6: {
+    case AF_INET6:
         int error;
@rousskov (Contributor) commented Apr 30, 2020

Removing these curly braces will lead to compile errors if this code is ever compiled: https://stackoverflow.com/questions/92396/why-cant-variables-be-declared-in-a-switch-statement

Please note that the #if INET6 guard is probably a bug. Fetchmail defines that macro and Squid does not:

configure.ac:    AC_DEFINE(INET6,1,Define to 1 if your system defines AF_INET6 and PF_INET6.)

This problem highlights why it is usually a bad idea to polish code without proper testing.

@kinkie (Contributor, Author) commented Apr 30, 2020 via email

@yadij (Contributor) commented Apr 30, 2020

compat/ is at the top level because code in most of the non-src/ directories needs to link against it before src/ has been built.

@rousskov (Contributor) commented

compat/ is at the top level because code in most of the non-src/ directories needs to link against it before src/ has been built

That fact is not a good reason to keep compat outside of src/ AFAICT. The guiding principle here is that there should be no Squid code outside of src/. Moving compat may also require moving lib and include (at least), but they all should be moved for the same reason, following the same principle.

libltdl probably can (and perhaps should) stay outside of src because libltdl is not Squid code.

@yadij (Contributor) commented Apr 30, 2020

Oh damn, I had done the testing but it didn't highlight the issue. OK, I'll drop this PR.

I think it is worth the time investigating whether the reason for this hack still exists. IIRC it was some old OS lacking the modern networking API. Either better documenting why it exists or removing it entirely would be good progress.

@kinkie (Contributor, Author) commented Apr 30, 2020

Oh damn, I had done the testing but it didn't highlight the issue. OK, I'll drop this PR.

I think it is worth the time investigating whether the reason for this hack still exists. IIRC it was some old OS lacking the modern networking API. Either better documenting why it exists or removing it entirely would be good progress.

A good question is what OSen we want to support; I'll throw in my 2c: those we can test against, which at this time means mostly Linux and FreeBSD. macOS, maybe.

We may add OpenBSD, NetBSD, and Windows, but I'd say that any commercial Unix vendor who wishes us to support their variant can get in touch with us to discuss sponsoring the project with some kind of hosted hardware. This would kill most of the compat/ directory.

@yadij (Contributor) commented Apr 30, 2020

All the BSD flavours, Solaris, and Windows have community use (and support). We do not proactively fix issues there ourselves, but we are expected to retain any relevant contributions.

That situation is behind the officially documented policy for compat/ as "Don't touch unless you know what you are doing", and my insistence on keeping the code as close to the original as possible. We need to take near-paranoid levels of care when touching it and coordinate with sometimes hard-to-contact people.

@rousskov (Contributor) commented Apr 30, 2020

A good question is what OSen we want to support;

Thank you for trying to establish a guiding principle to judge low-level changes! I agree that this is a good question to ask.

I'll throw in my 2c: those we can test against, which at this time means Linux and FreeBSD, mostly. MacOS, maybe.

Let's try to be more precise here: "can test" or "do test"? Technically, with enough motivation, we can test on nearly any modern OS and a few antiquated ones. We can add build nodes and even purchase OS licenses.

I think we should define three tiers:

  1. Fully supported. We promise that Squid works well in these environments. Reported environment-specific bugs block new major releases. At least one active contributor runs Squid in these environments. Popular Linux flavors are the prime examples here. Not sure about FreeBSD these days.

  2. Tolerated: Neither fully supported nor unsupported. Consequently: We make no promises about Squid working in these environments, but we still accept test nodes and quality changes that improve environment-specific portability. Working environment-specific code is not purposefully removed or disabled. Windows and Solaris are probably the primary examples here.

  3. Unsupported. Any code specific to these environments may be removed at any time without warning and for any reason, including general cleanup reasons like "reducing code complexity". New code specific to these environments is rejected. For example, LibreSSL environments are currently unsupported.

Naturally, assigning or changing an environment classification requires judgment calls. However, we can have useful rules of thumb like:

  • If Squid master could not have been built in an environment for a year, then that environment is not fully supported.

  • If the latest two stable branches (currently v3.5 and v4) could not have been built in an environment, then that environment is unsupported.

(Please note that the above rules do not work in reverse -- an environment may be unsupported for reasons other than build failures).

We may add OpenBSD, NetBSD, and Windows, but I'd say that any commercial Unix vendor who wishes us to support their variant can get in touch with us to discuss sponsoring the project with some kind of hosted hardware. This would kill most of the compat/ directory

Yes, we should accept test nodes or equivalent resources for supported and tolerated environments.
However, we would need proof of additional development resources to move an environment into the supported category.

@rousskov (Contributor) commented

We need to take near-paranoid levels of care when touching it and coordinate with sometimes difficult to contact people.

Or we do not actually need that. Obviously, just because something happens to be in compat/ does not necessarily mean that we must keep babysitting it forever. Francesco has asked the right first question IMO. That first question is not "Can I touch a getnameinfo() replacement if I know what I am doing" but "Does any of the tolerated environments need a getnameinfo() replacement?" If the answer is "no", then we may not need to ask more complicated questions.

@kinkie (Contributor, Author) commented Apr 30, 2020

A good question is what OSen we want to support;

Thank you for trying to establish a guiding principle to judge low-level changes! I agree that this is a good question to ask.

I'll throw in my 2c: those we can test against, which at this time means Linux and FreeBSD, mostly. MacOS, maybe.

Let's try to be more precise here: "can test" or "do test"? Technically, with enough motivation, we can test on nearly any modern OS and a few antiquated ones. We can add build nodes and even purchase OS licenses.

I agree.
At the same time, our human and technical resources are limited and we should invest them where it matters.

Unfortunately we do not have numbers - I would LOVE to have some idea of where Squid is installed. I'll go out on a limb, however, and guesstimate that:
  • 90% of our user base is Linux
  • 9% of our user base is FreeBSD
  • 0.9% of our user base is OpenBSD
  • 0.09% of our user base is Windows
  • and the rest is 0.01% of our total installs

Now, these numbers are taken out of my backside, because of course we can't know.
But we still have compat stuff for AIX, HP-UX, NeXT, OS/2, QNX, IRIX, and SunOS. I'd just rip it out.
We can of course run a poll on squid-users. We could run it on Google Forms or similar and let it run for months - it is, after all, an important decision to make. But IMO it would be better for us as a project to simplify and strip down now. For Squid 6, deprecate all platforms except the 3-4 most popular ones and, if somebody comes up with a convincing enough need (or, even better, code based on more modern abstractions), add them back in time.

I think we should define three tiers:

  1. Fully supported. We promise that Squid works well in these environments. Reported environment-specific bugs block new major releases. At least one active contributor runs Squid in these environments. Popular Linux flavors are the prime examples here. Not sure about FreeBSD these days.

I disagree that we make any such promise. We can promise that we test them, and we can promise that we will try to address bugs against these platforms, but our license says loud and clear that we make no such promise.
I would like to keep FreeBSD in because it would help us avoid falling into Linux-exclusive-isms, which I find dangerous long term.

  2. Tolerated: Neither fully supported nor unsupported. Consequently: We make no promises about Squid working in these environments, but we still accept test nodes and quality changes that improve environment-specific portability. Working environment-specific code is not purposefully removed or disabled. Windows and Solaris are probably the primary examples here.
  3. Unsupported. Any code specific to these environments may be removed at any time without warning and for any reason, including general cleanup reasons like "reducing code complexity". New code specific to these environments is rejected. For example, LibreSSL environments are currently unsupported.

I don't think we need to make such a distinction. What is not supported is unsupported. People have the source; they're free to hack things back in if they want, and they're welcome to submit patches which, if they pass the quality bar, we will gladly accept.

Yes, we should accept test nodes or equivalent resources for supported and tolerated environments.
However, we would need a proof of additional development resources to move an environment into the supported category.

I agree, and I consider code that is good enough to be merged sufficient proof.

@rousskov (Contributor) commented

I disagree that we make any such promise. We can promise that we test them, and we can promise that we will try to address bugs against these platforms, but our license says loud and clear that we make no such promise.

I hear you. We can probably find wording that would satisfy us both. What you are talking about is (lack of) a legally-binding warranty that Squid does "work on X". What I am talking about is a (non-legally binding) expectation/intent/effort that Squid should "work on X" (with all the natural caveats related to an open source project run by volunteers). There is a significant overlap between the two areas. We just need to find a better word for it than "promise".

BTW, my suggested tier definitions are just drafts, of course. I am sure they can be improved if we agree that we do need three tiers.

IMO it would be better for us now as a project to simplify and strip down.

I agree, as long as we are guided by good principles. Without them, we are risking falling into "let's tear everything old down even if it is not in the way!" and/or "do not touch that old code, ever" pitfalls as well as endless "you must (not) strip this" classification battles during reviews. I suggested three tiers, in part, to avoid the possible (wrong IMO) conclusion that we should either strip everything related to, say, Windows and Solaris OR keep everything related to HPUX. There may be an important difference between those two sets/tiers.

they're welcome to submit patches which, if they pass the quality bar, we will gladly accept

That approach does not work, unfortunately. For example, we should reject a high-quality patch adding LibreSSL support because we do not have the resources to support more than one TLS library (and we already support two). And the more LibreSSL bits we land, the more forceful arguments for other (not necessarily high-quality) bits landing, more bugs fixed (often by developers not using LibreSSL), etc. become. There is significant cost in supporting something, even conditionally. And since we probably do not want to discontinue existing (limited!) Windows and Solaris support, we need three tiers rather than two.

@kinkie (Contributor, Author) commented May 2, 2020

I disagree that we make any such promise. We can promise that we test them, and we can promise that we will try to address bugs against these platforms, but our license says loud and clear that we make no such promise.

I hear you. We can probably find wording that would satisfy us both. What you are talking about is (lack of) a legally-binding warranty that Squid does "work on X". What I am talking about is a (non-legally binding) expectation/intent/effort that Squid should "work on X" (with all the natural caveats related to an open source project run by volunteers). There is a significant overlap between the two areas. We just need to find a better word for it than "promise".

Kind of. My point is that we should focus on where our users are, which overwhelmingly means Linux and the BSD family. It would be nice to run a poll on squid-users, although I have a feeling that the subscribers to that list are not representative of our users at large.
Could it be time to enable a default-on anonymous registration/telemetry feature? That is by now quite common practice in the industry.

BTW, my suggested tier definitions are just drafts, of course. I am sure they can be improved if we agree that we do need three tiers.

IMO it would be better for us now as a project to simplify and strip down.

I agree, as long as we are guided by good principles. Without them, we are risking falling into "let's tear everything old down even if it is not in the way!" and/or "do not touch that old code, ever" pitfalls as well as endless "you must (not) strip this" classification battles during reviews. I suggested three tiers, in part, to avoid the possible (wrong IMO) conclusion that we should either strip everything related to, say, Windows and Solaris OR keep everything related to HPUX. There may be an important difference between those two sets/tiers.

TBH I don't like the "tolerated" concept. If we can test it, we can do it. If we can't test it, then we can't support it, and keeping the baggage is not going to help the parts that we can test.
I'm happy to have a very generous definition of testing, as in "someone in the development community contributes code to keep platform XXX unrotten". I don't think we need to SEE the testing, much less run it ourselves.

they're welcome to submit patches which, if they pass the quality bar, we will gladly accept

That approach does not work, unfortunately. For example, we should reject a high-quality patch adding LibreSSL support because we do not have the resources to support more than one TLS library (and we already support two). And the more LibreSSL bits we land, the more forceful arguments for other (not necessarily high-quality) bits landing, more bugs fixed (often by developers not using LibreSSL), etc. become. There is significant cost in supporting something, even conditionally. And since we probably do not want to discontinue existing (limited!) Windows and Solaris support, we need three tiers rather than two.

My point exactly :)

I wouldn't mind discontinuing Solaris. Even Oracle pushes Linux over it for most workloads, including network functions, and OpenIndiana has little momentum. @yadij, why do you think it is relevant as a platform for us?

Thanks,
Francesco

@rousskov (Contributor) commented May 2, 2020

@kinkie, if I am interpreting your suggestion correctly, you propose to define "supported" as "somebody can test it". Everything else is unsupported. This can be a good start, but it does not provide any means to exclude environments that we cannot afford to support for reasons other than lack of build tests (e.g., LibreSSL).

I also fear that "somebody can test it" is too broad because any person trying to merge some "cruft" specific to their environment will say, "sure, I can test it for you!". IMO, the bar has to be a bit higher so that we can disqualify some environment-specific cruft, because adding something has its costs (even if somebody else is doing the testing or pretends to do the testing), especially adding support for a new environment (as opposed to fixing a bug in an already supported or tolerated environment).

The idea of a middle tier allows us to, on one hand, accept some environment-specific cruft while, on the other hand, keeping other environment-specific cruft away. Unlike the "somebody can test it" condition, the three-tier proposal does not attempt to define the exact boundary of unsupported -- that is left to our judgement calls.

There could be specific preconditions (e.g., being tested by Squid Project CI) for being in the "fully supported" tier, but they would not fully define that category. Same for "tolerated".

It is possible to merge "fully supported" and "tolerated" tiers together, but I think an important distinction would be lost in such a merger -- only problems in the "fully supported" environments block releases. Another loss would be an arguably significant loss of clarity -- I know that Windows and BSD flavors are neglected (e.g., in performance work) but our users cannot easily discern that.

@kinkie (Contributor, Author) commented May 13, 2020

@kinkie, if I am interpreting your suggestion correctly, you propose to define "supported" as "somebody can test it". Everything else is unsupported. This can be a good start, but it does not provide any means to exclude environments that we cannot afford to support for reasons other than lack of build tests (e.g., LibreSSL).

Fair point. At the same time, I'm not really concerned about these because, since we cannot afford to support them, we also cannot afford to develop them, and I expect anybody who can develop them to be in a position to credibly and visibly test them.

I also fear that "somebody can test it" is too broad because any person trying to merge some "cruft" specific to their environment will say, "sure, I can test it for you!". IMO, the bar has to be a bit higher so that we can disqualify some environment-specific cruft, because adding something has its costs (even if somebody else is doing the testing or pretends to do the testing), especially adding support for a new environment (as opposed to fixing a bug in an already supported or tolerated environment).

Do you have in mind any occurrence of this happening to us in a reasonably recent past? In other words, are we looking at a real or theoretical problem? If it's theoretical, I'd propose to defer coming up with a solution until it becomes real, and keep things simple until then (if ever)

The idea of a middle tier allows us to, on one hand, accept some environment-specific cruft while, on the other hand, keeping other environment-specific cruft away. Unlike the "somebody can test it" condition, the three-tier proposal does not attempt to define the exact boundary of unsupported -- that is left to our judgement calls.

There could be specific preconditions (e.g., being tested by Squid Project CI) for being in the "fully supported" tier, but they would not fully define that category. Same for "tolerated".

It is possible to merge "fully supported" and "tolerated" tiers together, but I think an important distinction would be lost in such a merger -- only problems in the "fully supported" environments block releases. Another loss would be an arguably significant loss of clarity -- I know that Windows and BSD flavors are neglected (e.g., in performance work) but our users cannot easily discern that.

What I am afraid of is that we have a tendency to overregulate and overprocess where simple common sense, trust, and reputation may be sufficient. You may notice I've recently started a pattern of asking more often whether we are dealing with a real problem or a potential one.

Which also goes back to my original question: what platforms are relevant for us today, exactly for the reason of being unable to test them? We have deprecated a few, but there are some more that I believe could be.

[edit: removed unintended foul word that crept in. I blame voice recognition. Apologies]

@rousskov (Contributor) commented

I'm not really concerned about these because, since we cannot afford to support them, we also cannot afford to develop them, and I expect anybody who can develop them to be in a position to credibly and visibly test them

I am concerned, perhaps because I often end up doing (or paying for) the dirty work of those who "should be in the position to test" but aren't testing. More importantly, the question is not just about testing immediate changes but making sure the code continues to work long-term. Naturally, we cannot get long-term commitments from many people, despite their best intentions.

Do you have in mind any occurrence of this happening to us in a reasonably recent past?

Yes, of course. We get many requests to add or fix something that should not be in Squid or that the Project does not have enough resources to support (IMO). LibreSSL and ESI are just two small recent public examples. We are also constantly bumping into the essentially abandoned code that (allegedly) has to be kept up to date during refactoring.

What I am afraid of is that we have a tendency to overregulate and overprocess, where simple common sense, trust and reputation may be sufficient.

Believe it or not, I fully agree! I really wish more folks would just trust me, but since they do not, I see no viable alternative to the painful "process" of convincing them to do the right thing.

This particular discussion is not so much about the "process" but (What should be supported?) "principle". Good principles are even more difficult to define than good regulations.

You may notice I've recently started a pattern of asking more often if we are dealing with a real problem or a potential problem.

Yes, your distrust increases the pain, but the existing process protects your right to be skeptical.

Which also goes back to my original question, what platforms are relevant for us today [expletive deleted] exactly for the reason of being unable to test them.

If I understand your question correctly, a meaningful answer requires an agreement on the meaning of "able to test". One could propose a simple principle -- "regularly tested by Project CI" -- but that would immediately disqualify Windows, Solaris, and FreeBSD. That is why I used a 3-tier approach -- to create a twilight zone where things are not tested by the Project but are still not rejected outright.

If you think that we are ready to drop official support for Windows, Solaris, and/or FreeBSD (because neither is regularly tested by the Project), you will probably need to convince Amos of that. In my approach, all three can be moved into the second tier (after finalizing a principle that would exclude them from the third/unsupported tier).

We have deprecated a few, but there is some more that I believe could be.

I am sure you are right!

@kinkie (Contributor, Author) commented May 13, 2020

I'm not really concerned about these because, since we cannot afford to support them, we also cannot afford to develop them, and I expect anybody who can develop them to be in a position to credibly and visibly test them

I am concerned, perhaps because I often end up doing (or paying for) the dirty work of those who "should be in the position to test" but aren't testing. More importantly, the question is not just about testing immediate changes but making sure the code continues to work long-term. Naturally, we cannot get long-term commitments from many people, despite their best intentions.

Do you have in mind any occurrence of this happening to us in a reasonably recent past?

Yes, of course. We get many requests to add or fix something that should not be in Squid or that the Project does not have enough resources to support (IMO). LibreSSL and ESI are just two small recent public examples. We are also constantly bumping into the essentially abandoned code that (allegedly) has to be kept up to date during refactoring.

Fair example.
I just wish (again) we had the telemetry to figure out which features are used in the wild, to decide what to keep and what to remove. Is that something we're willing to entertain?

What I am afraid of is that we have a tendency to overregulate and overprocess, where simple common sense, trust and reputation may be sufficient.

Believe it or not, I fully agree! I really wish more folks would just trust me, but since they do not, I see no viable alternative to the painful "process" of convincing them to do the right thing.

Everybody is well-meaning here. Unfortunately, there are different definitions of "the right thing". In many cases that's due to a lack of data (see my point about telemetry).

This particular discussion is not so much about the "process" but (What should be supported?) "principle". Good principles are even more difficult to define than good regulations.

Fair, and agreed

You may notice I've recently started a pattern of asking more often if we are dealing with a real problem or a potential problem.

Yes, your distrust increases the pain, but the existing process protects your right to be skeptical.

I'm sorry if you feel distrusted; that's not the case. I simply think (and I'd like to be wrong) that we as a group sometimes tend to go down the rabbit hole without a good way to reconcile views. This may boil down to insufficient data.

Which also goes back to my original question: what platforms are relevant for us today [expletive deleted], exactly because we are unable to test them.

If I understand your question correctly, a meaningful answer requires an agreement on the meaning of "able to test". One could propose a simple principle -- "regularly tested by Project CI" -- but that would immediately disqualify Windows, Solaris, and FreeBSD. That is why I used a 3-tier approach -- to create a twilight zone where things are not tested by the Project but are still not rejected outright.

Apologies for the expletive, it was not intended and I have edited the original text. I blame voice recognition.

That's one way to see it. Another is to ask ourselves how many people are using these platforms, and we fall back to the lack-of-data point.

If you think that we are ready to drop official support for Windows, Solaris, and/or FreeBSD (because neither is regularly tested by the Project), you will probably need to convince Amos of that. In my approach, all three can be moved into the second tier (after finalizing a principle that would exclude them from the third/unsupported tier).

We are regularly testing FreeBSD. We are not testing Solaris or Windows. I would like to set up Windows testing because there are people who may be interested in running it. I can't imagine anyone willing to pay Oracle good money for Solaris when Linux can do the job for a fraction of the cost, so unless data appears showing the opposite I'd be quite happy to drop Solaris, along with AIX and essentially all commercial Unices. I'd gladly onboard OpenBSD instead.

We have deprecated a few, but there are some more that I believe could be.

I am sure you are right!

What to do about it?
My proposal would be to support as first-class citizens:

  • Main Linux distributions
  • FreeBSD
  • OpenBSD

As best-effort:

  • macOS

Aspirationally:

  • Windows

@yadij
Contributor

yadij commented May 14, 2020

While both of your viewpoints have merit, I think this discussion is going too far into policy engineering based on the wrong data points.

TL;DR: too much focus on "users" and the popularity of Squid, and not enough on who is actually building and packaging Squid. Also, separate the policy on supported features (dev work) from the policy on CI node maintenance (kinkie's work).

As you both know, we are not the actual vendor here. As maintainer I have to keep track of those downstream distribution channels. That provides some degree of "data" to work from.

The first data point is that our downstream distros do not all pull directly from us. Most of them actually pull from another distro vendor, so they only receive what has already been tested outside our CI and works on their feeder distro, e.g. Ubuntu receives whatever works on Debian Sid.
That raises the question of whether it is worth our time CI-testing e.g. Ubuntu for PR merge consideration. If it works on Debian Sid, it will most likely work on the latest Ubuntu and can be double-checked later (e.g. at snapshot publication).

  • The keystone distros that pull directly from us are: Debian, Fedora, Gentoo, OpenBSD, FreeBSD, and NetBSD. IMO these are the OSes we should treat as first-class whenever we can have a farm node for their latest releases.

  • The derivative distros that only receive pre-tested packages, but are variously active in giving us feedback directly, are: DragonFly BSD, Red Hat Linux, and the LTS variants of all the keystone distros.

The above is all gathered from actual interactions I/we have had with downstream distros. We do not need to know how many end-users or installations a particular one has relative to some other unpopular OS. They are the distributors who pull from us directly and provide feedback about issues. Others may arrive in the future, in which case they will be treated as appropriate for their desired channel for receiving the sources.

IMO;

  • the pr-test and pr-auto CI jobs should cover just the keystone OSes (or an agreed subset), as many of them as we are able to build on reliably. Problems on them are critical and fail the PR, but should not block other PRs from being tested. And the master branch should never fail to build, never.

  • the N-matrix jobs for snapshot publication are where the wider set of derivatives and any extra OSes we want to support are tested.

@kinkie, IMO those are the primary criteria the build farm should use, both to inform your focus on node stability and updates, and to decide which nodes are suitable for which CI jobs. If you want to go further, then base your decision on the DistroWatch popularity metrics for those OSes, to see which ones will have the highest end-user impact.
It is no use optimizing our CI builds for Ubuntu (high popularity) if a newer GCC version on Debian (lower popularity) blocks them from getting our tested code. Likewise, there is no point optimizing testing for RedHat (high popularity) if the clang version on Fedora (low popularity) can block them from getting bug fixes.

And yes, none of this has anything to do with code features. I am perfectly fine with situations like the recent Kerberos build errors on Heimdal occurring, where new code assumed that all environments provided some sane/modern API, was tested on the keystone OSes, and workarounds were added only where feedback provided evidence of breakage.

Since pushback against removing old OSes has been brought up: FYI, my pushback has been based on DistroWatch data about whether those particular OSes are still alive (even as LTS) and still publishing Squid. The research there has turned up some surprising info about OSes that were thought dead but turned out to have modern forks under other names, or to have been absorbed into more current OSes.

@rousskov
Contributor

I will comment on a few points raised in recent comments, but we have stopped making progress in establishing guiding principles AFAICT.

IMO, if we have to select between high-quality testing of one old environment and low-quality testing of many new environments, then we should start with the former and make sure that any further improvements do not sacrifice quality. Jenkins CI lost reliability (a key quality!) in the past two months or so. I fear that the core reason is actually not the attempted breadth of coverage or even migration; I fear that there is just no focus on providing a reliable service. The focus is on other things (like breadth of coverage, keeping the environment in sync with some external targets, and probably other honorable goals). It is, of course, debatable what the focus should be, but there is currently no avenue for that debate. The process is very efficient but currently produces poor results (AFAICT).

I would not object to adding (optional) usage telemetry but will not volunteer to do it. FWIW, I expect the incoming data to be a distraction as far as important decisions are concerned because I believe that those decisions should be driven by primary factors other than reported usage popularity.

I do not think we should focus on distributions because they are not, IMO, what keeps Squid alive these days. We all should be thankful for those who package Squid, of course, but Squid will not prosper or die based on what distros carry it five years from now. Supporting (or rejecting) specific distribution needs should be a side effect of following more important principles that are established to facilitate Squid survival and evolution. If Squid is a high-quality, relevant product, the distros will continue to package it.

I find Amos' bullets about pr-test, pr-auto, and matrix jobs compatible with the proposed three support tiers. However, I think all (required) tests, including the matrix, should be reliable. IMO, the difference between first and second tier is not in test reliability but in what happens when a test breaks. Roughly speaking: If a first-tier test breaks, then all master commits stop (until the test is fixed). If the second-tier test breaks, then the test is removed from CI or marked as optional (until the test is fixed), and master commits resume without it.

I cannot evaluate Francesco's specific proposal of using "first-class", "best-effort", and "aspirational" categories because I do not know what those categories imply for PRs. In other words, it is not clear to me what a membership in any of the three categories would mean for a given PR dealing with changes/additions/failures specific to the categorized OS. Do we ignore test failures on best-effort OSes? Accept untested changes for aspirational OSes? Reject changes adding support for uncategorized environments? Etc.

@kinkie
Contributor Author

kinkie commented May 15, 2020

  • The keystone distros that pull directly from us are: Debian, Fedora, Gentoo, OpenBSD, FreeBSD, and NetBSD. IMO these are the OSes we should treat as first-class whenever we can have a farm node for their latest releases.

Out of these, we are missing Gentoo, OpenBSD, and NetBSD.
Of these, Gentoo concerns me a little bit, as the maintenance overhead would be quite high, but all are feasible.

  • The derivative distros that only receive pre-tested packages, but are variously active in giving us feedback directly, are: DragonFly BSD, Red Hat Linux, and the LTS variants of all the keystone distros.

The above is all gathered from actual interactions I/we have had with downstream distros. We do not need to know how many end-users or installations a particular one has relative to some other unpopular OS. They are the distributors who pull from us directly and provide feedback about issues. Others may arrive in the future, in which case they will be treated as appropriate for their desired channel for receiving the sources.

IMO;

  • the pr-test and pr-auto CI jobs should cover just the keystone OSes (or an agreed subset), as many of them as we are able to build on reliably. Problems on them are critical and fail the PR, but should not block other PRs from being tested. And the master branch should never fail to build, never.
  • the N-matrix jobs for snapshot publication are where the wider set of derivatives and any extra OSes we want to support are tested.

We are doing that: pr-test uses centos-7 and debian-stable ("stable keystone") plus the latest fedora and ubuntu releases ("edge keystone").
pr-auto covers more ground, as it's invoked less frequently, with a mix of newer and older releases.

Until we move to pipelines, it'll be hard to add non-linux distros to the mix.
On FreeBSD we are testing 4 different clang versions (6 to 9); this could be done on some Linuxes as well if useful.

@kinkie, IMO those are the primary criteria the build farm should use, both to inform your focus on node stability and updates, and to decide which nodes are suitable for which CI jobs. If you want to go further, then base your decision on the DistroWatch popularity metrics for those OSes, to see which ones will have the highest end-user impact.
It is no use optimizing our CI builds for Ubuntu (high popularity) if a newer GCC version on Debian (lower popularity) blocks them from getting our tested code. Likewise, there is no point optimizing testing for RedHat (high popularity) if the clang version on Fedora (low popularity) can block them from getting bug fixes.

Sure, we're testing both. The only known incompatibility right now is fedora-31 with clang, which is hardly a showstopper.

And yes, none of this has anything to do with code features. I am perfectly fine with situations like the recent Kerberos build errors on Heimdal occurring, where new code assumed that all environments provided some sane/modern API, was tested on the keystone OSes, and workarounds were added only where feedback provided evidence of breakage.

Since pushback against removing old OSes has been brought up: FYI, my pushback has been based on DistroWatch data about whether those particular OSes are still alive (even as LTS) and still publishing Squid. The research there has turned up some surprising info about OSes that were thought dead but turned out to have modern forks under other names, or to have been absorbed into more current OSes.

If such a thing exists, sure, let's do it. Without sounding disparaging, however, I believe that OpenIndiana is not a particularly interesting environment; there seems to be little actual innovation happening there. If you are aware of other interesting environments, let's talk about them and add them.

@yadij
Contributor

yadij commented May 28, 2020

Redirecting this discussion back to the PR scope of changes to compat/getnameinfo.cc ...

@kinkie, currently waiting on your decision whether to:
a) close the PR, or
b) change the scope to "remove the file(s)", or
c) do some code archaeology and document why it needs to stay.

@yadij yadij added the S-waiting-for-author author action is expected (and usually required) label May 28, 2020
@kinkie
Contributor Author

kinkie commented May 20, 2021

Abandoning

@kinkie kinkie closed this May 20, 2021