feat(fs/s3): Initial support for s3 filesystem backend #1900
Conversation
This is amazing @ahobson !! Thank you so much for the contribution!! Very much appreciated, I think an s3 backend makes total sense. Also the code looks 👨🏻🍳 💋
One question about the UX from a config standpoint
Again thank you very much this is awesome!
Codecov Report
@@ Coverage Diff @@
## main #1900 +/- ##
==========================================
- Coverage 71.64% 71.26% -0.38%
==========================================
Files 58 60 +2
Lines 5501 5743 +242
==========================================
+ Hits 3941 4093 +152
- Misses 1335 1419 +84
- Partials 225 231 +6
Looks great @ahobson, just a couple of minor suggestions and one typo. Exactly what I had in mind!! Thank you!
This is so great. Thanks for taking the time to build this @ahobson 🙏 and for the attention to the tests. It was a nice Monday surprise.
I couldn't come up with any blocking feedback either, let's get this in 🙌 💪
I tried to ponder whether there was something we could do with s3 and prefix listing for directory hierarchy, but the more I sat on it, the more I realized it was probably bringing zero value for lots of pointless complexity. What you have here makes sense. It's enough to support multiple namespaces via multiple files in a single bucket 👌
We're looking to make these FS backends non-experimental in a coming release. We can definitely get some docs written up around this. I wonder, going forward, if it would be valuable to have a new sub-command (e.g. flipt upload) for e.g. taking the flipt index file, finding the flag files in your project, and putting just the minimum set in a target bucket. Food for thought / a potential nice-to-have.
internal/s3fs/s3fs.go
// try to return fs compatible error if possible
var nsbe *types.NoSuchBucket
if errors.As(err, &nsbe) {
	return nil, pathError
}
var nske *types.NoSuchKey
if errors.As(err, &nske) {
	return nil, pathError
}
var nfe *types.NotFound
if errors.As(err, &nfe) {
	return nil, pathError
}
Take it or leave it: We have a little utility to make this terser if you want
Lines 8 to 19 in 2416a01
// As is a utility for one-lining errors.As statements.
// e.g. cerr, match := errors.As[MyCustomError](err).
func As[E error](err error) (e E, _ bool) {
	return e, errors.As(err, &e)
}

// AsMatch is the same as As but it returns just a boolean to represent
// whether or not the wrapped type matches the type parameter.
func AsMatch[E error](err error) (match bool) {
	_, match = As[E](err)
	return
}
Suggested change:

// try to return fs compatible error if possible
if flipterrors.AsMatch[*types.NoSuchBucket](err) ||
	flipterrors.AsMatch[*types.NoSuchKey](err) ||
	flipterrors.AsMatch[*types.NotFound](err) {
	return nil, pathError
}
// Stat implements fs.StatFS. For the s3 filesystem, this gets the
// objects in the s3 bucket and stores them for later use. Stat can
// only be called on the current directory as the s3 filesystem only
// supports walking a single bucket configured at creation time.
func (f *FS) Stat(name string) (fs.FileInfo, error) {
Just for my clarity then: the FS implementation needs to be primed with a call to Stat before it has any entries in it? (Unless you happen to know the available keys beforehand, of course.)
I see that fs.WalkDir starts with a call to Stat, so that is why this works for the snapshot implementation we use.
I think this is good; I was just contemplating why we want to perform the operation on Stat and not, e.g., on ReadDir instead. Based on how we interact with fs.FS in the snapshot, what you have here works great.
I was trying to do a minimum viable s3fs, and part of that was looking into how fs.WalkDir works and seeing that the call to Stat happens before the call to ReadDir.
Re-reading the s3fs code, I suppose I could have ReadDir populate the filesystem entries if necessary, but because this internal flipt s3fs is only used by fs.WalkDir, I didn't think it was necessary or maybe even desirable, as I haven't tested ensuring all the operations work independently of each other.
Said another way, this isn't a general purpose s3 filesystem wrapper, and I was trying to be explicit about that.
I'd vote for leaving it as it is for now; if we come up with another use case where this restriction is a problem, we can revisit it then.
Thoughts?
Yep agreed 👍 and thanks for the clarification there.
I am going to sync up with the rest of the team later and figure out if we can fit this nicely into next week's (1.24) release. I suspect we can 👍
Hmm. The aws cli provides …
@ahobson I agree that the aws cli for syncing would be the best course of action at first. Additionally, we already have the concept described in flipt/internal/storage/fs/snapshot.go (lines 127 to 208 in 2416a01).
But as I said, not for this pass. Just a potential idea to build upon to create something that gives a similar experience to the git and local backends.
so cool. thanks again!!
@all-contributors please add @ahobson for code
I've put up a pull request to add @ahobson! 🎉
This follows the same pattern as fs/git to add support for fetching feature flag data from S3. In an AWS environment, this would allow deploying a read-only container and not require communication with a git repository outside of the deployment environment.
Let me know if you have suggestions or thoughts on how to improve the implementation.
Thank you for flipt.
Testing
The unit tests take advantage of an existing s3 bucket, using minio to simulate it locally.
Just like for the git backend, mage dagger:run test:unit does the work to start, provision, and configure the test suite appropriately. However, if you do want to run this test locally to experiment or investigate, you can always do the following: