
proposal: crypto engines #33281

Open
derekparker opened this issue Jul 25, 2019 · 25 comments

Comments

@derekparker
Contributor

commented Jul 25, 2019

Crypto Engines

The Go standard library cryptography implementation is very comprehensive and
widely used. The problem, however, comes when a user wants to swap out the
implementation for another, while using the same interface. This has already
happened upstream with the dev.boringcrypto [0] branch. Use cases for using an
alternate implementation include FIPS compliance (where a user may need to call
into a FIPS validated module), and interfacing with external crypto devices
(e.g. accelerator cards, specialty hardware).

It would be easy enough to maintain an external package that does this already.
The problem, however, comes when trying to swap the crypto implementation in a
project you do not own. A standard mechanism would also ensure consistency:
there would be a formal way to declare that all or part of the crypto
operations provided by the standard library should be implemented by some
outside package.

This proposal outlines a way for an alternate implementation of the crypto
functionality to be loaded and unloaded via function calls, with the alternate
implementation living in a separate package maintained outside of the Go source
tree. All of this works without the user having to modify import paths: an
engine is loaded once and that change propagates everywhere automatically.

Engine implementation

Users should be able to load a single global engine which, if present, will be
used instead of the standard library implementation. The design allows an
engine to replace all or part of the standard library implementations; there is
no concept of having multiple engines. Loading an engine makes it the default
for any crypto implemented by that engine, so a single engine could replace all
or part of the crypto functionality present in the crypto package in the
standard library.

Engine interface

Any engine to be registered must conform to a specific interface. The
interfaces are focused on specific areas of cryptography, following the
package boundaries in the standard library.

type Engine interface {
	AES()    AESEngine
	Cipher() CipherEngine
	DES()    DESEngine
	...
	TLS()    TLSEngine

	Setup()   error // Engines would provide their own setup method.
	Cleanup() error // Engines would provide their own cleanup method.
}

type AESEngine interface {
	NewCipher(key []byte) (cipher.Block, error)
}
...

The goal would be that all crypto operations could be handled by an engine
implementation, including TLS specific functionality (PRF, et al).
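
As a purely illustrative sketch (the names below, and the convention that a nil
sub-engine means "not provided by this engine", are assumptions for
illustration, assuming the interfaces above live in package crypto), an engine
that only overrides AES might look like:

type opensslAES struct{}

// NewCipher would call into the external implementation (e.g. via cgo);
// the body here is only a placeholder.
func (opensslAES) NewCipher(key []byte) (cipher.Block, error) {
	return nil, errors.New("not implemented in this sketch")
}

type opensslEngine struct{}

func (opensslEngine) AES() crypto.AESEngine { return opensslAES{} }

// Returning nil signals "not provided by this engine"; the standard
// library would fall back to its own implementation (see Usage below).
func (opensslEngine) Cipher() crypto.CipherEngine { return nil }
func (opensslEngine) DES() crypto.DESEngine       { return nil }
func (opensslEngine) TLS() crypto.TLSEngine       { return nil }
// ... remaining per-package methods elided, as in the interface above ...

func (opensslEngine) Setup() error   { return nil }
func (opensslEngine) Cleanup() error { return nil }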

Loading

The loading of an engine should be simple and straightforward. Only a single
engine can be loaded at any given time. This means one must unload an engine
before attempting to load another one.

The user is presented with 3 options:

  1. Load an engine directly by having the engine source code and directly
    calling LoadEngine with the struct.
  2. Load an engine via Go plugin functionality [1].
  3. Load an engine based on the environment variable
    GO_CRYPTO_ENGINE=<path>. This would cause the program to load the
    engine from <path> at runtime. <path> should point to a valid Go
    plugin.

The basic outline for these functions would look like:

package crypto

// LoadEngine loads the engine specified by `e`.
// This engine will be used for any crypto functionality
// it provides, falling back to the standard library for
// any functionality not provided by the engine.
func LoadEngine(e Engine) error {
	...
}

// LoadEnginePlugin will load the plugin located
// at `path` and subsequently create and register 
// an engine from it.
func LoadEnginePlugin(path string) error {
	...
}

// engineFromPlugin wraps the plugin in an Engine
// implementation suitable for usage.
func engineFromPlugin(p *plugin.Plugin) Engine {
	...
}
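
For option 3 above, a minimal sketch of the environment-based loading
(assuming it happens during package initialization; the exact hook and error
handling are open details) might be:

func init() {
	if path := os.Getenv("GO_CRYPTO_ENGINE"); path != "" {
		if err := LoadEnginePlugin(path); err != nil {
			panic("crypto: loading engine from " + path + ": " + err.Error())
		}
	}
}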

It would also be very easy to unload an Engine, if need be, via:

func UnloadEngine() error {
	...
}

It is the responsibility of the engine to maintain correctness. In other words,
the engine must be responsible for maintaining locks, reference counting, or any
other mechanism needed to ensure that when the engine's Cleanup method is called
the resources used by that engine are not left in an invalid state.

Usage

The standard library code will branch based on the engine and the specific
implementation being non-nil.

For example:

// NewCipher creates and returns a new cipher.Block.
// The key argument should be the AES key,
// either 16, 24, or 32 bytes to select
// AES-128, AES-192, or AES-256.
func NewCipher(key []byte) (cipher.Block, error) {
	k := len(key)
	switch k {
	default:
		return nil, KeySizeError(k)
	case 16, 24, 32:
		break
	}
	if e := getEngine(); e != nil && e.AES() != nil {
		return e.AES().NewCipher(key)
	}
	return newCipher(key)
}

Real world use cases

The upstream Go repository currently has a branch [0], maintained separately
from the main master branch, which adds support for calling into
BoringCrypto [2]. Instead of maintaining an entirely separate branch, the
functionality could be rewritten as an engine and included in any project that
may need it.

Additionally, if a Go user must meet FIPS requirements and would not like to
use (or cannot use) the dev.boringcrypto branch, this new functionality would
allow them to pick their crypto library of choice (OpenSSL, LibreSSL, et al).

Finally, if there are users who wish to take advantage of certain hardware for
crypto, that too could be accomplished via engines.

Implementation details

Originally when drafting this proposal I wanted to get deep into specifics of
how this functionality would be implemented as far as how the engine is
represented internally (after loading / assignment). However after a few drafts
and further thinking I figured I'd leave it a bit more abstract at first lest
this devolve into a code review of sorts. I'd rather the discussion be around
the idea of engines as a feature. If necessary I can draft a further design doc
on some of the proposed specifics. Otherwise, we can leave that to a later date
in a proper code review setting.

Summary

Engines would provide an easy way to swap all or part of the standard library
crypto functionality with an implementation provided by the user. There are
already real-world use cases and examples of folks using other crypto libraries,
either by creating their own package [3][4][5], or by upstream maintaining a
separate branch to support BoringSSL.

This proposal suggests a way to remove the maintenance burden and make the Go
standard library crypto implementation more extensible.


@gopherbot gopherbot added this to the Proposal milestone Jul 25, 2019

@gopherbot gopherbot added the Proposal label Jul 25, 2019

@elagergren-spideroak

commented Jul 25, 2019

A use case: business reasons (FIPS, etc.) cause us to have to use one of a couple crypto libraries, depending on the situation. We wrote a library that copies the stdlib's crypto API and uses different "backends" based on build tags.

One of the major pain points is instead of import "crypto/aes" we have to write import "some/project.tld/crypto/aes". In order to be sure that we don't accidentally import the wrong crypto/aes, we have to have tests that scan our code base and check for bad imports. This is a fraught endeavor.
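
(For illustration only, a test along these lines might look roughly like the
sketch below; the package name and the single flagged import path are
assumptions, not our actual test code.)

package imports_test

import (
	"go/parser"
	"go/token"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

// TestNoStdlibCryptoImports fails if any .go file in the tree imports the
// standard library crypto/aes directly instead of the wrapper package.
func TestNoStdlibCryptoImports(t *testing.T) {
	err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		f, err := parser.ParseFile(token.NewFileSet(), path, nil, parser.ImportsOnly)
		if err != nil {
			return err
		}
		for _, imp := range f.Imports {
			if imp.Path.Value == `"crypto/aes"` {
				t.Errorf("%s imports crypto/aes directly", path)
			}
		}
		return nil
	})
	if err != nil {
		t.Fatal(err)
	}
}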

Another pain point is when libraries don't implement entirely what we need. In this case, we have to fall back to the Go stdlib which results in sloppy and error-prone backend code.

Additionally, we have no real way to switch backends at runtime depending on, e.g., whether FIPS mode is enabled on the host machine. This also means for each backend we want to test we have to re-compile the binary instead of switching engines.

Similarly, the crypto/tls package uses crypto, meaning we've had to rewrite crypto/tls to use our own backends. It would be lovely to just be able to use net/http without worrying about whether our http.Client uses an http.RoundTripper that uses the proper backend.

@Freeaqingme

commented Jul 26, 2019

Originally when drafting this proposal I wanted to get deep into specifics of
how this functionality would be implemented as far as how the engine is
represented internally (after loading / assignment). However after a few drafts
and further thinking I figured I'd leave it a bit more abstract at first lest
this devolve into a code review of sorts. I'd rather the discussion be around
the idea of engines as a feature.

I understand this desire and it does make sense; there's no need to get stuck on implementation details while still pondering whether we even need such a thing. Having said that, I would prefer to see a proposal in an early stage for the interfaces where there's an implementation of at least two separate engines.

Once too often I've seen someone cook up "a single generic interface to rule them all", which turns out to be basically a copy of their own implementation and not compatible with any alternatives. If we set out to create a single interface to rule them all (as I see this proposal), we should at least know that that is realistic and that it fulfils all requirements of other (potential) engines as well.

If that turns out to be hard to do (for example because no other implementations exist yet), perhaps a small case study of other similar initiatives (e.g. in the C ecosystem) could be done to show how these evolved and where they fall short, if at all.

Finally, it's not mentioned explicitly in the proposal why this should be part of the standard library, rather than be a third party package that simply encapsulates the existing crypto stuff. Am I right in assuming it's necessary because you also want other things in the standard library to use this engine?

All in all, +1 from me :)

@ericlagergren

Contributor

commented Jul 26, 2019

@Freeaqingme I don't want to speak for @derekparker, but I know that my +1 would change to a -1 if this didn't affect the stdlib's crypto. My previous comment sort of implies why, but since the default crypto for Go is the stdlib's crypto, there'd still be lots of pain points if this was implemented as a 3rd party library.

@derekparker

Contributor Author

commented Jul 26, 2019

I understand this desire and it does make sense; there's no need to get stuck on implementation
details while still pondering whether we even need such a thing. Having said that, I would prefer
to see a proposal in an early stage for the interfaces where there's an implementation of at least
two separate engines.

I understand that and can work to provide a working example if necessary once discussion on this proposal has progressed. I just wanted to get the conversation started here.

Finally, it's not mentioned explicitly in the proposal why this should be part of the standard library, rather than be a third party package that simply encapsulates the existing crypto stuff. Am I right in assuming it's necessary because you also want other things in the standard library to use this engine?

Sorry, I thought I had clarified that. There are a few reasons for this being part of the standard library as opposed to simply an external package. One, there are already many external crypto packages for Go and yet this proposal still exists. The issue isn't with the ability to use other crypto with the Go programming language itself; the issue is being able to swap crypto backends without changing import paths. This is an important detail for a few reasons:

  1. crypto/tls: the TLS stack uses crypto/* (of course), and if you want the Go stdlib TLS to use FIPS-validated crypto based on a system being booted in FIPS mode, there's really no way to accomplish that as of now, even with changes in the program code using TLS.
  2. crypto/tls: there is crypto functionality implemented in this package that would also need to be swapped out (PRF, etc).
  3. Large projects: the goal would be for projects like Kubernetes to be able to simply load a new crypto engine without changing tons of import paths and worrying about diverging from the standard library crypto interfaces.
  4. Also look at the example above by @ericlagergren - program code might need to use alternate crypto based on business needs, and swapping out import paths isn't the best solution.
  5. There are probably more examples, but hopefully this is a good enough start.

The dev.boringcrypto branch (and other forks based on it) was created so that Go programs built with that version of the toolchain simply call into other crypto libs without user code having to do much, if anything. That is the real goal here: to distill and simplify this need of using crypto other than the pure-Go standard library crypto, and to be able to do it without using an "unofficial" branch or fork of the language.

Hope that all makes sense! I'm excited by the support and discussion around this.

@jech

commented Aug 1, 2019

type Engine interface {
    ...
}

What happens when a new algorithm gets added to the stdlib? Do all existing engines become obsolete?

@derekparker

Contributor Author

commented Aug 1, 2019

What happens when a new algorithm gets added to the stdlib? Do all existing engines become obsolete?

Yes, that potential is there. However, looking back at the past few releases:

Go 1.12 - No new crypto functions added
Go 1.11 - Added 1 new function, 1 new method
Go 1.10 - Added 3 new functions
Go 1.9 - No new crypto functions

These changes only come in minor version updates, which happen on a six-month cadence. I feel that if a person needs to go through the trouble of using and maintaining a cryptographic library, the maintenance burden of adding ~4 new functions to said library over the span of ~2 years (from 1.9 to 1.12) doesn't seem insurmountable.

@Freeaqingme

commented Aug 2, 2019

Alternatively, would it be possible to have a single interface that returns which algo's are supported, and then have a single interface per algorithm?

@derekparker

Contributor Author

commented Aug 2, 2019

Alternatively, would it be possible to have a single interface that returns which algo's are supported, and then have a single interface per algorithm?

Could you sketch out what you mean by that?

@ericlagergren

Contributor

commented Aug 2, 2019

Sorta related to having a “single interface per algorithm,” check out how bazil.org/fuse/fs handles multiple optional methods.

@derekparker

Contributor Author

commented Aug 2, 2019

Sorta related to having a “single interface per algorithm,” check out how bazil.org/fuse/fs handles multiple optional methods.

ACK. That seems like an interesting addition to the interface approach. That could solve the problem of making older engines obsolete.

EDIT: For those who are following along, this is the series of interfaces I found: https://github.com/bazil/fuse/blob/master/fs/serve.go
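
To make the idea concrete, here is a rough sketch of how optional per-algorithm
interfaces could work; the names and the type assertion at the call site are my
assumptions, mirroring the bazil.org/fuse/fs pattern rather than anything in the
proposal above:

type Engine interface {
	Setup() error
	Cleanup() error
}

// Each algorithm gets its own optional interface; an engine implements only
// the ones it supports, so new algorithms don't break old engines.
type AESEngine interface {
	NewAESCipher(key []byte) (cipher.Block, error)
}

// In crypto/aes:
func NewCipher(key []byte) (cipher.Block, error) {
	if e := getEngine(); e != nil {
		if a, ok := e.(AESEngine); ok {
			return a.NewAESCipher(key)
		}
	}
	return newCipher(key)
}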

@mundaym

Member

commented Aug 3, 2019

It would definitely be nice to have some way of transparently adding FIPS or distro managed crypto support to Go. I have some concerns with doing this at runtime though:

  1. If the Go crypto library is always used as a fall back then I don't think that there is a way to verify that you are actually using your crypto engine and not ending up in the std implementation. Typos might result in the engine code not being called and yet everything might well still work since the Go crypto library is used instead. This seems like it would be an issue for people working in regulated environments.

  2. It is likely that this proposal would make most of the crypto API opaque to the compiler because of the additional level of indirection. For example, escape analysis will fail for more parameters passed into crypto calls. If you want to use a crypto engine you are probably going to need to pay this price anyway but without significant cross-package optimizations (e.g. to figure out that LoadEngine() isn't called anywhere and therefore the engine won't be present) all crypto library users will also end up paying for it, even if they aren't using crypto engines.

Personally I think it would be nicer to have a clean way to swap out standard library packages at compile time using modules or a compiler option. That way you could statically check that the binary doesn't contain any Go crypto code (unless the replacement you are using links to it as a fallback - not sure what sort of mechanism that would use). Also this approach would have no impact on people who just want to use the standard Go crypto library as-is.

@Freeaqingme

commented Aug 3, 2019

@mundaym I'm not sure I agree that it should be required at compile time. But I think the underlying concern that it should be verifiable which engine is being used is a valid one. I suppose that's something that can also be done at runtime, e.g. a call to crypto.ActiveEngine() that returns a struct with a field to identify the engine in use. That could be printed while starting the application, shown through an HTTP handler, etc.
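
Something along these lines, purely as a sketch (the EngineInfo fields and the
Name/Version methods on the engine are made up for illustration):

package crypto

// EngineInfo identifies the crypto engine currently in use.
type EngineInfo struct {
	Name    string // e.g. "openssl", "boringcrypto"
	Version string
}

// ActiveEngine reports which engine is loaded; a zero EngineInfo
// (Name == "") means the standard library implementation is in use.
func ActiveEngine() EngineInfo {
	if e := getEngine(); e != nil {
		return EngineInfo{Name: e.Name(), Version: e.Version()}
	}
	return EngineInfo{}
}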

@mundaym

Member

commented Aug 3, 2019

@Freeaqingme I guess it depends on your level of paranoia (or the size of the fines...) :). A call to crypto.ActiveEngine() would certainly suffice for some use cases but I don't think it can guarantee that std crypto isn't using its own implementation due to a bug.

Does anyone know what sort of guarantees are required for FIPS compliant programs? Do they just need to avoid calling non-FIPS code or should they also avoid linking against it?

@Freeaqingme

commented Aug 3, 2019

Well, if you don't use the stdlib crypto anywhere, then those symbols would be unused? They could then be stripped from the binary if so desired, right? However, if the application were compiled to remove all debug symbols, it may be hard to analyze whether std crypto is still used, so I think it's at least required to be able to determine it at runtime. Removing them at compile time would be a nice bonus, but I think they could be stripped after compiling as well.

@magical

Contributor

commented Aug 4, 2019

There has been talk of potentially splitting parts of the standard library into modules. If the crypto/ packages were part of a separate module, then you could use the replace directive to replace the crypto implementation. That seems like a simpler mechanism than the one proposed here.
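
Purely as a hypothetical sketch: if a separately versioned standard library
crypto module existed (it does not today; "std/crypto" below is an invented
path), a consumer's go.mod could swap the whole implementation with a replace
directive:

module example.com/myserver

go 1.13

// Hypothetical: "std/crypto" stands in for a separately versioned
// standard library crypto module, which does not exist today.
require std/crypto v1.0.0

replace std/crypto => example.com/fips-crypto v1.0.0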

Additionally, if your primary goal is to use a different TLS stack, it might be easier to make just crypto/tls pluggable, as @FiloSottile has suggested in the past: #21753, #32466 (comment), #30241 (comment)

@derekparker

Contributor Author

commented Aug 5, 2019

If the Go crypto library is always used as a fall back then I don't think that there is a way to verify that you are actually using your crypto engine and not ending up in the std implementation.

This is a great point, and one that is addressed in forks such as the boringcrypto branch by panicking if a non-boring code path is executed while boring mode is active. I agree this proposal is a bit lax on this, but to be honest I mostly wanted to get the conversation started. I think we should have a way of explicitly failing instead of falling back to the stdlib.
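
For example (a sketch only; the Strict method is a hypothetical addition to the
Engine interface, not something in the proposal above):

func NewCipher(key []byte) (cipher.Block, error) {
	if e := getEngine(); e != nil {
		if e.AES() != nil {
			return e.AES().NewCipher(key)
		}
		if e.Strict() {
			// Refuse to silently fall back to the pure-Go implementation.
			panic("crypto/aes: loaded engine does not implement AES")
		}
	}
	return newCipher(key)
}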

Does anyone know what sort of guarantees are required for FIPS compliant programs? Do they just need to avoid calling non-FIPS code or should they also avoid linking against it?

They must fail when calling non-FIPS code.

There has been talk of potentially splitting parts of the standard library into modules. If the crypto/ packages were part of a separate module, then you could use the replace directive to replace the crypto implementation. That seems like a simpler mechanism than the one proposed here.

That would be interesting, but would have to wait until Go 2.

@rsc

Contributor

commented Aug 6, 2019

Maybe this is a distraction, but as a thought experiment, what if this proposal were instead "garbage collection engines" or "channel engines" or "reflect engines" or "encoding/json engines?" Would any of those make sense to try to support? Why or why not?

Obviously some other implementations of Go do substitute different garbage collection and channel and reflect implementations, though not as far as I know encoding/json. The existence of those alternate Go implementations that carry pieces more appropriate to their use case has not caused us to try to make the garbage collector or runtime or reflection systems "pluggable". I wonder if crypto is like those or more like encoding/json or somewhere in the middle.

For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why?

Is crypto different solely because of FIPS? Or something more general?

@derekparker

Contributor Author

commented Aug 6, 2019

For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why?

I appreciate the thought experiment and the discussion, however I would argue yes. My main reasoning is that nobody will be prevented from running Go binaries, or charged large fines for doing so, because of the garbage collector, runtime, channel, reflect, or encoding implementations. However, certain companies simply cannot run Go binaries in situations requiring certain standards of cryptography due to, as you mentioned, government mandates requiring FIPS compliance. That's the fundamental issue here: by not allowing a standard upstream way for users to switch crypto implementations based on actual business needs and requirements, we limit what can be used in the Go ecosystem, or at least make it much more difficult.

The argument against that would be "just use an external package". However, I feel that if it really were that easy, the dev.boringcrypto branch would never have been created, and internally within Google there would be a bunch of rewrite rules or similar to swap projects using crypto/* with some other package. Also, as mentioned elsewhere in this thread, simply swapping out the crypto packages with a user-defined one does nothing for other internal Go packages such as net, which will always use the stdlib crypto anyway. So now not only does a user need to use an external crypto library, they also need to use their own net/http stack with their own TLS implementation. One of the beautiful aspects of the Go language is the fact that you need not look much further than the standard library in order to build pretty much anything you'd like. Having to replace large chunks of that standard library to meet these business needs takes away from that, I think.

Now, along with that I think there are things that should be updated in this proposal to reflect the discussion. Namely, I think there should be some way to prevent falling back to stdlib crypto when an engine is loaded, so that non-FIPS code is never called where it would be inappropriate. However, I think those details can be hashed out later in this thread, certainly before any kind of implementation. I just want to address the basic need for something like this and stress that cryptography is fundamentally different from a runtime or garbage collector.

@Freeaqingme

commented Aug 6, 2019

For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why?

That's a legit question I suppose. I'm not 100% informed about the alternate GC or runtime implementations, so I might make some (incorrect) assumptions here...

I assume the alternate GC or runtime stuff is implemented because of technical considerations. It's done by people who have a different vision on certain technical aspects of these components, and therefore want to do it themselves. Simply because they can do it "better" (mind the quotes! :)).

However, this proposal is not about companies that want to swap the crypto implementation because of technical reasons. If they feel they can do better, they may temporarily fork it, but only to contribute those changes back to upstream later (like CloudFlare did with the TLS asm implementation).

The companies that are looking to use a FIPS implementation (and rule out usage of stdlib crypto), do so because of business considerations, rather than technical considerations. They're not looking to maintain a fork because they can do it better, they simply have a business requirement to substitute a single component (crypto) with another. That's why I think that allowing for alternate crypto implementations is different from allowing alternate GC or channel implementations.

Of course we can say that those concerns are not the concerns of the Golang community and therefore a fork is the way to go. Had this been proposed 10 years ago, that might well have been the right answer, because the language was less mature and resources were better spent on other things. I think that now that Go is gaining more and more traction, it could be a good moment to consider it.

In the early days of Android, vendors also wanted functionality that was not present in the stock Linux kernel. This in turn led to many forks, often barely updated and hardly ever kept in sync with upstream. This reflected badly on the Linux community and led to a worse experience for the end user. In the end, much effort (by Greg Kroah-Hartman among others) was invested in getting it all back into sync. This in turn has led to an increased amount of available developer hours on a single 'implementation' of Linux, rather than having them split between the two.

I realize that Go != Linux and that a programming language != kernel, but it could be an argument of why having forks could be a bad thing in the long term.

@FiloSottile

Member

commented Aug 6, 2019

Let me start by saying that as the current maintainer of dev.boringcrypto, I am sympathetic to the pain of maintaining a fork, and I understand where this is coming from.

I think you make a clear argument that there is a business case for swapping crypto backends. I'm not convinced it's qualitatively different from the business case for swapping other parts of the language though: for example, the GC and runtime make Go incompatible with embedded devices, which also "limit[s] what can be used in the Go ecosystem".

There are however ways in which swapping cryptography backends is peculiar technically, and they all suggest leaning against this proposal.

  • It was tried before, not with great success. I regularly hear complaints about the disparate implementations of the Java Cryptography Extension, and the OpenSSL engines add a lot of complexity to their codebase.
  • The cryptography standard library is not a monolithic and stable feature set, like garbage collection or channels might be. New algorithms are added regularly, and having to account for the possibility that they might not be implemented by the engine would add a layer of complexity in some of the most gnarly places, like TLS certificate selection and algorithm negotiation.
  • Cryptography is complex by nature, so a lot of its complexity budget is spent on the algorithms themselves, leaving less space for this kind of flexibility. This is acknowledged by golang.org/design/cryptography-principles in two places: the first instrument of Secure being "reduced complexity", and Practical specifying that we "focus[] on common use cases to stay minimal".

Overall, it's a tradeoff. Not making cryptography engines pluggable puts the cost of maintaining a FIPS fork on the organizations that need it (Google included), instead of adding complexity to the standard library used by most. Also considering that it's mostly well-staffed organizations that need FIPS compliance, I think that's the right call, and I don't think we should implement this.

Maintaining one of the simplest cryptography libraries in the whole ecosystem is an exercise in resisting complexity, and it sometimes comes at the cost of this kind of flexibility.

(This focuses on the FIPS case of switching the whole engine, because for providing hardware implementations of specific values we have interfaces like crypto.Signer, and if we need to add more like it you should feel free to open specific issues, as I'd be happy to consider them. They already proved valuable and powerful.)
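
To illustrate that existing extension point: a hardware- or HSM-backed key can
already implement crypto.Signer and be handed to crypto/tls without any changes
to the standard library. The hsm package and its calls below are made up; only
the crypto.Signer interface and tls.Certificate usage are real.

type hsmKey struct {
	pub    crypto.PublicKey
	handle uint32 // opaque handle into the device
}

func (k *hsmKey) Public() crypto.PublicKey { return k.pub }

func (k *hsmKey) Sign(rand io.Reader, digest []byte, opts crypto.SignerOpts) ([]byte, error) {
	// The private-key operation happens on the device; only the digest leaves Go.
	return hsm.Sign(k.handle, digest, opts.HashFunc())
}

// Usage with crypto/tls (PrivateKey accepts any crypto.Signer):
//
//	cert := tls.Certificate{
//		Certificate: [][]byte{leafDER},
//		PrivateKey:  &hsmKey{pub: pub, handle: h},
//	}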

@derekparker

Contributor Author

commented Aug 7, 2019

Thanks for your thoughtful reply @FiloSottile.

Let me start by saying that as the current maintainer of dev.boringcrypto, I am sympathetic to the pain of maintaining a fork, and I understand where this is coming from.

I was definitely awaiting your reply and thoughts on this as we have talked before about the dev.boringcrypto branch in private mail and I knew we were likely experiencing the same pain points.

I think you make a clear argument that there is a business case for swapping crypto backends. I'm not convinced it's qualitatively different from the business case for swapping other parts of the language though: for example, the GC and runtime make Go incompatible with embedded devices, which also "limit[s] what can be used in the Go ecosystem".

Thank you. One counter I would provide here is that Go was never intended as a language suitable for embedded devices. Go is geared towards writing performant server software, and I think that differentiates the crypto packages: they are a necessity these days, as running a server without TLS and proper security in any kind of production environment is irresponsible at this point.

There are however ways in which swapping cryptography backends is peculiar technically, and they all suggest leaning against this proposal.

  • It was tried before, not with great success. I regularly hear complaints about the disparate implementations of the Java Cryptography Extension, and the OpenSSL engines add a lot of complexity to their codebase.
  • The cryptography standard library is not a monolithic and stable feature set, like garbage collection or channels might be. New algorithms are added regularly, and having to account for the possibility that they might not be implemented by the engine would add a layer of complexity in some of the most gnarly places, like TLS certificate selection and algorithm negotiation.
  • Cryptography is complex by nature, so a lot of its complexity budget is spent on the algorithms themselves, leaving less space for this kind of flexibility. This is acknowledged by golang.org/design/cryptography-principles in two places: the first instrument of Secure being "reduced complexity", and Practical specifying that we "focus[] on common use cases to stay minimal".

I appreciate this feedback and the time you took to provide specific examples of where this might be a burden in the future if implemented as described.

Overall, it's a tradeoff. Not making cryptography engines pluggable puts the cost of maintaining a FIPS fork on the organizations that need it (Google included), instead of adding complexity to the standard library used by most. Also considering that it's mostly well-staffed organizations that need FIPS compliance, I think that's the right call, and I don't think we should implement this.

Maintaining one of the simplest cryptography libraries in the whole ecosystem is an exercise in resisting complexity, and it sometimes comes at the cost of this kind of flexibility.

I completely understand and agree that I wouldn't want to do anything to unnecessarily complicate the language, as the simplicity of it is one of the main factors that drove me towards being a Go programmer many years back now.

I guess my last question would be: are we doomed to maintain forks for the rest of time, or is there still room for brainstorming a solution which has an acceptably small complexity vs. flexibility tradeoff? As I mentioned before, if this proposal is not "the one" I'm not offended; my goal was to provide a jumping-off point and open the conversation.

@elagergren-spideroak

commented Aug 7, 2019

With respect to comparing alternate garbage collectors and alternate cryptography implementations, there's one pertinent difference: the business reasons for cryptography backends usually necessitate multiple backends. I'm unaware of any business reasons that would require, say, three different garbage collectors or runtimes.

On Windows, the correct option for a FIPS-compliant Go package is to use the CNG API (in particular, BCrypt.dll or NCrypt.dll). On Linux, one might choose OpenSSL. In addition to OS-specific packages, however, those who need FIPS compliance often have to deal with client-specific libraries or hardware.

This greatly complicates forks, because now instead of just one BoringSSL fork you're required to maintain N forks. Or, try to combine the forks into one. But that's not entirely possible because you can't always include client A's cryptography library (or your own code that accesses client A's hardware) in your fork if that fork is provided to client B.

And often you have to provide your fork to client B because the type of clients that require FIPS compliance quite often need to build the software themselves, or at least have a trusted intermediary build it for them.

For something like FIPS-compliant cloud storage this probably isn't a big deal. You can pick and choose your desired VM and only maintain a fork that runs on that VM. But the lack of pluggability makes writing software that the clients run on their own machines incredibly difficult.

Also considering that it's mostly well-staffed organizations that need FIPS compliance...

I very strongly disagree. Large companies like Google, Amazon, Cloudflare, etc. are not the only companies that require FIPS compliance.

@derekparker

Contributor Author

commented Aug 7, 2019

With respect to comparing alternate garbage collectors and alternate cryptography implementations, there's one pertinent difference: the business reasons for cryptography backends usually necessitate multiple backends. I'm unaware of any business reasons that would require, say, three different garbage collectors or runtimes.

On Windows, the correct option for a FIPS-compliant Go package is to use the CNG API (in particular, BCrypt.dll or NCrypt.dll). On Linux, one might choose OpenSSL. In addition to OS-specific packages, however, those who need FIPS compliance often have to deal with client-specific libraries or hardware.

This is a great point as well. I'm coming from the point of view of maintaining a fork calling into OpenSSL (as opposed to BoringSSL) for RHEL; however, the idea of potentially needing multiple forks within the same organization is a major pain point for consumers.
