
Add caching for fluxv2 plugin (first step toward #3032) #3044

Merged

Conversation

@gfichtenholt (Contributor) commented Jun 25, 2021

This is just a first stab at #3032, to get the electricity flowing from A to B, as it were. Introduces the redis dependency and does some basic writes and reads to/from the redis cache.
Plus a tiny cleanup: I saw some code in the asset syncer that would be better moved to the http-client util.

@gfichtenholt gfichtenholt self-assigned this Jun 25, 2021
@gfichtenholt gfichtenholt linked an issue Jun 25, 2021 that may be closed by this pull request
@gfichtenholt gfichtenholt marked this pull request as draft June 27, 2021 08:25
@gfichtenholt (Contributor, Author) commented Jun 27, 2021

Making some significant changes related to unit testing the code. It gets a bit messy for several reasons:

  1. the current clientGetter impl assumes a Bearer token in the incoming request context. If I am starting a background goroutine, I don't have an incoming request, so I need to either
    a) not use the clientGetter func at all, in which case I need to define something similar to solve the issue with server-side/client-side unit test context, OR
    b) hack something to get around the current limitations, which is what I did to get past this
  2. need to fake (mock) the server-side redis client, just like the fake k8s client. Gets pretty detailed and verbose
  3. the current fake k8s client doesn't handle watches "as is", which is the functionality I rely on. In other words, the fake k8s client does not act the same way w.r.t. watch events as the real server does. Need to play with it a bit more to see if I can make it work (a sketch of the usual workaround follows this list).
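
For reference, a minimal sketch of the usual workaround for kubernetes/kubernetes#54075, assuming client-go's fake dynamic client (all names here are illustrative, not this PR's actual test code): wire a manually-driven watcher into the fake client so the test can fire watch events by hand.

package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/watch"
	dynfake "k8s.io/client-go/dynamic/fake"
	k8stesting "k8s.io/client-go/testing"
)

// newFakeClientWithWatch returns a fake dynamic client whose Watch() calls
// are all served by a single manually-driven watcher.
func newFakeClientWithWatch(scheme *runtime.Scheme) (*dynfake.FakeDynamicClient, *watch.FakeWatcher) {
	client := dynfake.NewSimpleDynamicClient(scheme)
	watcher := watch.NewFake()
	// Route every Watch() call on any resource to our watcher.
	client.PrependWatchReactor("*", k8stesting.DefaultWatchReactor(watcher, nil))
	return client, watcher
}

// In a test, events are then fired by hand:
//   watcher.Add(unstructuredRepo)    // emits a watch.Added event
//   watcher.Modify(unstructuredRepo) // emits a watch.Modified event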

So, long story short: please wait to review until I mark it as ready for review again. Thanks

@absoludity (Contributor) commented

OK, I'll not look at the code for now, but just wanted to be sure you saw my comment on the issue (#3032) last week (I didn't see a response, so I wasn't sure). Not that it needs to change anything, but I'd just tried to think through and discuss some points there.

@gfichtenholt (Contributor, Author) commented

yes, I saw your comment. Read it multiple times, as a matter of fact. Everything makes sense and I will eventually address all of it. I am not blocked by anything at the moment. Just mocking everything in unit tests takes a lot of effort. I have to manually mock the k8s server firing events (see kubernetes/kubernetes#54075 for details) in my unit test code, and every single call made to redis. It's gross beyond belief, but I will be done with it eventually.

@gfichtenholt gfichtenholt added this to In progress in Kubeapps via automation Jul 1, 2021
@gfichtenholt gfichtenholt marked this pull request as ready for review July 1, 2021 06:11
@gfichtenholt (Contributor, Author) commented

ok, this can be reviewed at this point. All tests are passing, at least on my end. Keep in mind, this is only the first incremental step toward a big feature. MANY MANY changes are to come incrementally. Among them:

  1. get rid of the nasty client getter hack in cache.go
  2. investigate how to handle updates to index.yaml that flux does in the background

  These need to be addressed before I can move on to the longer term:

  3. eventually the cache engine will be generalized and most code moved out of the flux plug-in to the core server, except maybe for one or two functions relating to what specifically is cached and how updates are handled, so move towards that.

@absoludity (Contributor) left a review comment

Great stuff Greg! I'm not yet half-way through, but just posting thoughts so far. On Monday I'll pull the branch and try running it. Very excited to see this getting closer :)

@@ -65,11 +67,20 @@ To access Kubeapps from outside your K8s cluster, follow the steps below:
##########################################################################################################
{{- end }}

{{ if and (not .Values.redis.existingSecret) (empty .Values.redis.redisPassword) -}}
@absoludity (Contributor):

For now we probably want to condition this note so it only appears when the kubeapps apis service is enabled?

@gfichtenholt (Contributor, Author):

yep
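
A sketch of how that conditioning might look in the NOTES.txt template, assuming the featureFlags.kubeappsAPIsServer value mentioned in a later comment is what gates the service (the exact flag name in the chart may differ):

{{- if .Values.featureFlags.kubeappsAPIsServer }}
{{- if and (not .Values.redis.existingSecret) (empty .Values.redis.redisPassword) }}
... note about retrieving the generated Redis password ...
{{- end }}
{{- end }}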

@@ -85,6 +85,16 @@ spec:
env:
- name: PORT
value: {{ .Values.kubeappsapis.containerPort | quote }}
# These env vars are required by the plugins for caching functionality
@absoludity (Contributor):

IMO we should document this as required by the kubeapps apis service (which will longer-term pass something to the plugins so that the plugins won't need to know these details) though I understand that currently they'll be used directly by the flux plugin. If you prefer to remember to change it later, no problem.

@gfichtenholt (Contributor, Author):

I will leave a TODO comment to state this

redis:
## @param redis.enabled Deploy a Redis server to satisfy the applications caching requirements
##
enabled: true
@absoludity (Contributor):

I'm not sure why we need a redis.enabled in the chart values given that it'll be conditional on featureFlags.kubeappsAPIsServer I think? (that is, if the apis server is running then redis should be running?)

Also, rather than adding this new section to our chart's values.yaml right now (which could be confusing for users), can you pls add it to the docs/developer/manifests/values.kubeappsapis.yaml where we've also got the values for the service itself, which we'll move back here into the chart's values once we're ready?

@gfichtenholt (Contributor, Author):

will do

req.Header.Set("Authorization", r.AuthorizationHeader)
reqHeaders := make(map[string]string)
reqHeaders["User-Agent"] = userAgent()
if len(r.AuthorizationHeader) > 0 {
@absoludity (Contributor):

So it looks like the change you're making here is related to using httpclient.GetStream below, but I'm guessing you had a conflict when merging master due to the bug fix @antgamdia landed with #3057. But if you look at the change here, I think you missed including the new conditions that were a part of that change when resolving the conflict?

What's not clear to me is what functionality you've actually changed in this file (other than the accidental reverts). It seems to be just switching to use httpclient.GetStream() without any benefit? (Ah, because what I'm not seeing in this context is that GetStream is probably doing something extra that wasn't done before?).

@gfichtenholt (Contributor, Author):

oops, indeed, I messed up the one line in the 'if' statement when merging. I will set it back to what Antonio intended. And yes, the goal of this little change, which BTW has nothing to do with caching, was just to move more http-client-specific functionality into pkg. The benefit is potential code reuse by other pieces of code. Streaming a response should be a fairly common use case. It's just a "little" thing I noticed while waiting on other things.
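
For context, a minimal sketch of what a streaming GET helper along these lines might look like (the actual httpclient.GetStream signature in pkg may differ; this is illustrative only):

package httpclient

import (
	"fmt"
	"io"
	"net/http"
)

// GetStream issues a GET with the given headers and returns the response
// body as a stream; the caller is responsible for closing it.
func GetStream(url string, client *http.Client, reqHeaders map[string]string) (io.ReadCloser, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	for k, v := range reqHeaders {
		req.Header.Set(k, v)
	}
	res, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	if res.StatusCode != http.StatusOK {
		res.Body.Close()
		return nil, fmt.Errorf("GET request to [%s] failed due to status [%d]", url, res.StatusCode)
	}
	return res.Body, nil
}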

return err
}
b = buf.Bytes()
imaging.Encode(&buf, resizedImg, imaging.PNG)
@absoludity (Contributor):

Similar here - #3057 improved this code by handling the error, but that improvement has been reverted here.

@gfichtenholt (Contributor, Author):

again, my bad. consider it fixed
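
The shape of the restored error handling, roughly (a sketch wrapping the snippet above in a self-contained function; names are illustrative):

package utils

import (
	"bytes"
	"image"

	"github.com/disintegration/imaging"
)

// encodePNG propagates the error from imaging.Encode instead of discarding
// it, matching the error handling that #3057 introduced.
func encodePNG(resizedImg image.Image) ([]byte, error) {
	var buf bytes.Buffer
	if err := imaging.Encode(&buf, resizedImg, imaging.PNG); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}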


func newCache(clientGetter server.KubernetesClientGetter) (*fluxPlugInCache, error) {
log.Infof("+newCache")
REDIS_ADDR, ok := os.LookupEnv("REDIS_ADDR")
@absoludity (Contributor):

Fine for now, but when this is moved out of the plugin, a small preference for reading all config in main.go (whether from env vars or cmd-line options) in one spot only, and passing it explicitly to functions (so functions are less dependent on env state).

@gfichtenholt (Contributor, Author):

I will add a comment so I don't forget

@gfichtenholt (Contributor, Author):

I did add a comment in the code, as promised. Just wanted to also mention that, on a second look, I don't necessarily agree. The way it is right now, there is one place that reads these env vars. What you're suggesting is to move this code out to server.go (kind of like the helm plug-in's server.go reads ASSET_SYNCER_* env vars today) or main.go? So then we'd have each plug-in that wants a cache read these env vars? That doesn't seem right. I think I am missing something. Anyway, we'll discuss this on a call, I am sure.

@absoludity (Contributor):

cmd/root.go - as in, the root CLI command would read all config on startup. If you look there, you'll see we already do this for the clusters config path and pinniped proxy URL, populating a server options which is passed on to server.go where the server is started. Note that the library we're using there (cobra/viper) is already reading in env vars that match the configured options, so you can supply a --pinniped-proxy-url or you can specify it in the env, either way, it'll end up in the value for the option.

> So then we'd have each plug-in that wants a cache read these env vars? That doesn't seem right.

No, each plugin would just be passed a function or interface that is already configured with the env vars. Not suggesting this can be done now, just when it's generalised for all plugins to use. This is much the same way that we currently create a function closure to be passed to plugins so they can create a k8s client without needing to know anything about the pinniped proxy configuration.

> I think I am missing something. Anyway, we'll discuss this on a call, I am sure.

Yep, and it won't be relevant until we're generalising the caching work for all plugins anyway.

@gfichtenholt (Contributor, Author):

ok, sounds good
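
A sketch of the cmd/root.go pattern being described, using cobra/viper (the flag and key names here are illustrative, not the project's actual options):

package cmd

import (
	"strings"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// registerCacheFlags binds a CLI flag to viper so the same option can come
// from --redis-addr or from a matching env var (e.g. REDIS_ADDR).
func registerCacheFlags(rootCmd *cobra.Command) {
	rootCmd.Flags().String("redis-addr", "localhost:6379", "address of the Redis cache")
	viper.BindPFlag("redis-addr", rootCmd.Flags().Lookup("redis-addr"))
	viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
	viper.AutomaticEnv()
	// On startup, populate server options once and pass them down:
	//   opts.RedisAddr = viper.GetString("redis-addr")
}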

redis.NewClient(&redis.Options{
Addr: REDIS_ADDR,
Password: REDIS_PASSWORD,
DB: REDIS_DB_NUM,
@absoludity (Contributor):

Ooh, interesting - wondering if different plugins could/would have different dbs? (Sorry, that's unrelated to this PR, just thinking out aloud while reading).

@gfichtenholt (Contributor, Author):

yes, I was wondering the same exact thing. Instead of a 'namespace', which redis doesn't really have a concept of, use a different DB. Will need to see what the cost is, if any, of having multiple concurrent DBs.
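
In code, the idea would be something like this (a sketch; the DB assignments are purely illustrative):

package main

import "github.com/go-redis/redis/v8"

// newPluginClient gives each plugin its own logical Redis DB as a
// lightweight stand-in for namespaces, which Redis doesn't have.
func newPluginClient(addr, password string, pluginDB int) *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr:     addr,
		Password: password,
		DB:       pluginDB, // e.g. 0 for fluxv2, 1 for helm
	})
}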

}
go c.startHelmRepositoryWatcher()
return &c, nil
}
@absoludity (Contributor):

Nice and clear!

c.watcherMutex.Unlock()
log.Infof("watcher already started. exiting...")
}
// we should never reach here
@absoludity (Contributor):

Won't this point be reached if startHelmRepositoryWatcher() is called a second time? Ah, so I assume the assumption is that it is only ever called once, but in that case, I'm confused about the locks.... maybe it'll make more sense as I continue on :)

@gfichtenholt (Contributor, Author):

yeah, I mean we should never reach here under normal conditions. If we reached it, something bad happened. I will update the comment.
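
For what it's worth, sync.Once is one way to make the start-at-most-once intent explicit without mutex bookkeeping (a sketch, not this PR's actual implementation):

package main

import "sync"

type fluxPlugInCache struct {
	watcherOnce sync.Once
}

// ensureWatcherStarted launches the watcher goroutine at most once, no
// matter how many callers race to start it.
func (c *fluxPlugInCache) ensureWatcherStarted(startWatcher func()) {
	c.watcherOnce.Do(func() {
		go startWatcher()
	})
}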


func (c *fluxPlugInCache) newHelmRepositoryWatcherChan() (<-chan watch.Event, error) {
// TODO this is a temp hack to get around the fact that the only clientGetter we've got today
// always expects authorization Bearer token in the context
@absoludity (Contributor):

It always expects the bearer token if it's not set up to use the service account. Right, so we need to ensure that the client can also be used with an explicit request to use the service's service account, right?

@gfichtenholt (Contributor, Author):

Not quite. I am going to say that my testing shows the bearer token is ALWAYS expected with the current impl of the client getter:
https://github.com/kubeapps/kubeapps/blob/master/cmd/kubeapps-apis/server/plugins.go#L307
I could be wrong, but the call to extractToken() is the very first thing that happens, and that's what fails in my case. If I am right, as I said before, this is going to take some doing. We either need to enhance clientGetter somehow so I don't have to hack it like this, or come up with some new kind of 'client getter' that can be used for my purposes, one that supports both server-side and client-side (fake, with unit tests).

@absoludity (Contributor) commented Jul 6, 2021

That's odd, in that if you look at the implementation of extractToken() just below your link, you'll see it at least intends to return an empty token with a nil error if there was no authorization in the context:

https://github.com/kubeapps/kubeapps/blob/master/cmd/kubeapps-apis/server/plugins.go#L353

I haven't checked, but perhaps context.TODO() doesn't include any metadata at all, and it's just that metadata is required? (in which case we should change the code so it is not, which is easy enough).

@absoludity (Contributor):

In your case, you do actually want to use a context (as you're passing it to the Watch call below). So I'd not use context.TODO() but context.Background() below.

@gfichtenholt (Contributor, Author):

this is raising an error in my usage:

	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return "", fmt.Errorf("error reading request metadata/headers")
	}

I will change my hack to something a little cleaner, like:

     ctx := metadata.NewIncomingContext(context.Background(), metadata.MD{})

@absoludity (Contributor):

Yep, or we can easily update the extractToken() to return an empty token with a nil error if there is no metadata with the context. That'd be cleaner IMO, and consistent with its current behaviour.

@gfichtenholt (Contributor, Author):

ok, will make this change as part of this PR
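
A sketch of the agreed change to extractToken() (illustrative; the real function in cmd/kubeapps-apis/server/plugins.go may differ in details): missing metadata on the context is treated the same as metadata with no authorization header, returning an empty token and a nil error.

package server

import (
	"context"
	"strings"

	"google.golang.org/grpc/metadata"
)

func extractToken(ctx context.Context) (string, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		// No metadata at all: same outcome as metadata without an
		// authorization header, i.e. no token, but not an error.
		return "", nil
	}
	authz := md.Get("authorization")
	if len(authz) == 0 {
		return "", nil
	}
	// The real code may validate the scheme more strictly; this just
	// strips the expected "Bearer " prefix.
	return strings.TrimPrefix(authz[0], "Bearer "), nil
}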

@absoludity (Contributor) left a review comment

Sorry for taking so long to get through this. I'm happy for this to land but have left a bunch of thoughts.

The biggest one though, which may be a bit of a blocker - but better to deal with it now than later: I know we'd all talked about redis as an example for the caching layer, and you'd even asked in the past why we use postgresql for caching with the existing helm, but looking at this PR made me realise that cached data in redis may not be easily queried with filters or sorted?

For example, if this is used to cache the bitnami repo, will it still be a problem to request all pkgs available that begin with "apach" ?


// clear that key so cache doesn't contain any stale info for this object if not ready or
// indexing or marshalling fails for whatever reason
c.redisCli.Del(c.redisCli.Context(), *key)
@absoludity (Contributor):

Wouldn't the cache want to have the most recent value if the next update/index/whatever fails? Seems odd to create a time period where there isn't a value for it like this until it does possibly succeed later?

@gfichtenholt (Contributor, Author):

> Sorry for taking so long to get through this. I'm happy for this to land but have left a bunch of thoughts.
>
> The biggest one though, which may be a bit of a blocker - but better to deal with it now than later: I know we'd all talked about redis as an example for the caching layer, and you'd even asked in the past why we use postgresql for caching with the existing helm, but looking at this PR made me realise that cached data in redis may not be easily queried with filters or sorted?
>
> For example, if this is used to cache the bitnami repo, will it still be a problem to request all pkgs available that begin with "apach"?

Shouldn't be a problem with the right abstractions; Redis does have rich query capabilities:
https://redis.io/commands/KEYS
The example you gave should be fairly trivial to implement. It remains to be seen whether the query syntax as a whole is as rich as or richer than SQL, but I'm sure there's enough there to get us started.

@gfichtenholt (Contributor, Author):

> Wouldn't the cache want to have the most recent value if the next update/index/whatever fails? Seems odd to create a time period where there isn't a value for it like this until it does possibly succeed later?

my rationale at the time was that it's better to have a cache miss than return an out-of-date value? I am willing to be convinced otherwise :-)

@absoludity (Contributor):

> Shouldn't be a problem with the right abstractions; Redis does have rich query capabilities:
> https://redis.io/commands/KEYS
> The example you gave should be fairly trivial to implement. It remains to be seen whether the query syntax as a whole is as rich as or richer than SQL, but I'm sure there's enough there to get us started.

I guess you mean SCAN rather than KEYS (KEYS is for debugging, non-prod use only, afaict)? Checking the guarantees of SCAN, one point in particular (about elements being returned multiple times) could be an issue? See what you think.

> my rationale at the time was that it's better to have a cache miss than return an out-of-date value? I am willing to be convinced otherwise :-)

Up to you, I was just surprised because elsewhere I saw you explicitly weren't associating expiration with data, so seemed odd to do so here.

@gfichtenholt (Contributor, Author):

> Shouldn't be a problem with the right abstractions; Redis does have rich query capabilities:
> https://redis.io/commands/KEYS
> The example you gave should be fairly trivial to implement. It remains to be seen whether the query syntax as a whole is as rich as or richer than SQL, but I'm sure there's enough there to get us started.
>
> I guess you mean SCAN rather than KEYS (KEYS is for debugging, non-prod use only, afaict)? Checking the guarantees of SCAN, one point in particular (about elements being returned multiple times) could be an issue? See what you think.
>
> my rationale at the time was that it's better to have a cache miss than return an out-of-date value? I am willing to be convinced otherwise :-)
>
> Up to you, I was just surprised because elsewhere I saw you explicitly weren't associating expiration with data, so seemed odd to do so here.

just FYI, I am not sure SCAN is the right answer here. I see the redis go client does have
https://pkg.go.dev/github.com/go-redis/redis/v8@v8.10.0#Client.Keys
which looks like it invokes the KEYS command. It does take a string pattern as an input argument. Nowhere in the go-client documentation (or code) does it say it's not meant for production use. I do see a warning about it on https://redis.io/commands/KEYS. I am not sure how relevant that particular warning is for us and our usage; I think it applies to VERY LARGE databases. It does say right above it: "While the time complexity for this operation is O(N), the constant times are fairly low. For example, Redis running on an entry level laptop can scan a 1 million key database in 40 milliseconds." Have we ever encountered a use case with a million keys (i.e. repos, in my use case)?

@absoludity (Contributor):

Sorry, I'm not trying to make redis look bad or say postgres is better (no interest in that :) ). I'm just keen that we ensure that we can support the current functionality. I'm only pointing out postgresql's json support because that's what we currently use, not because I'm trying to find something better than Redis. For example, see the GenerateWhereClause at https://github.com/kubeapps/kubeapps/blob/master/cmd/assetsvc/pkg/utils/postgresql_utils.go#L211 which uses postgresql's json support for querying the json data which the assetsvc caches.

@gfichtenholt (Contributor, Author):

ah, so another way of saying it is that our existing code base isn't portable today. We're locked into a particular SQL vendor. Bah

@absoludity (Contributor) commented Jul 8, 2021

Well when Kubeapps began from the Monocular project (before my time), it was originally a MongoDB document DB. Then when we had to switch away from MongoDB due to licensing issues, we looked for a reliable and trusted OSS solution that would give us the same ability to query JSON blobs. We weren't looking for an SQL solution, but rather something that made an easy transition from MongoDB. Postgresql's JSON support fit that pretty well - that's all.

@gfichtenholt (Contributor, Author):

fair enough. And now we need an in-memory DB with similar JSON support. Redis may or may not fit the bill; I am working on finding out. Just saw this: https://oss.redislabs.com/redisjson/#client-libraries - there is an add-on to the go client I use. I will find out if it does what we need.

@gfichtenholt (Contributor, Author):

> Shouldn't be a problem with the right abstractions; Redis does have rich query capabilities:
> https://redis.io/commands/KEYS
> The example you gave should be fairly trivial to implement. It remains to be seen whether the query syntax as a whole is as rich as or richer than SQL, but I'm sure there's enough there to get us started.
>
> I guess you mean SCAN rather than KEYS (KEYS is for debugging, non-prod use only, afaict)? Checking the guarantees of SCAN, one point in particular (about elements being returned multiple times) could be an issue? See what you think.
>
> my rationale at the time was that it's better to have a cache miss than return an out-of-date value? I am willing to be convinced otherwise :-)
>
> Up to you, I was just surprised because elsewhere I saw you explicitly weren't associating expiration with data, so seemed odd to do so here.
>
> just FYI, I am not sure SCAN is the right answer here. I see the redis go client does have
> https://pkg.go.dev/github.com/go-redis/redis/v8@v8.10.0#Client.Keys
> which looks like it invokes the KEYS command. It does take a string pattern as an input argument. Nowhere in the go-client documentation (or code) does it say it's not meant for production use. I do see a warning about it on https://redis.io/commands/KEYS. I am not sure how relevant that particular warning is for us and our usage; I think it applies to VERY LARGE databases. Have we ever encountered a use case with a million keys (i.e. repos, in my use case)?

Yikes. I just read redis/redis#3627
I am going to stay away from KEYS :-)
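
For reference, the SCAN-based lookup discussed above might look like this with the go-redis v8 client (a sketch; the key pattern is illustrative). Note the de-duplication to handle SCAN's documented caveat that a key may be returned more than once across a full iteration:

package main

import (
	"context"

	"github.com/go-redis/redis/v8"
)

// matchingKeys returns all keys matching pattern, using the SCAN iterator
// rather than KEYS, and de-duplicates results.
func matchingKeys(ctx context.Context, rdb *redis.Client, pattern string) ([]string, error) {
	seen := map[string]bool{}
	keys := []string{}
	iter := rdb.Scan(ctx, 0, pattern, 0).Iterator()
	for iter.Next(ctx) {
		if k := iter.Val(); !seen[k] {
			seen[k] = true
			keys = append(keys, k)
		}
	}
	return keys, iter.Err()
}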

Comment on lines 229 to 237
var value interface{}
var setVal bool
if add {
funcName = "OnAdd"
value, setVal, err = c.config.onAdd(*key, unstructuredObj)
} else {
funcName = "OnModify"
value, setVal, err = c.config.onModify(*key, unstructuredObj)
}
@absoludity (Contributor):

Fine like this, but remember that functions can be assigned to vars just like anything else, so you could also do the following if you prefer:

// Define an actual type so you can use it in your interface earlier also, as well as below:
type CacheSetter func(string, map[string]interface{}) (interface{}, bool, error)

var addOrModify CacheSetter
if add {
    funcName = "OnAdd"
    addOrModify = c.config.onAdd
} else {
    funcName = "OnModify"
    addOrModify = c.config.onModify
}
value, setVal, err := addOrModify(*key, unstructuredObj)

@gfichtenholt (Contributor, Author):

ok, will do

for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func() {
for job := range requestChan {
@absoludity (Contributor):

Suggested change
for job := range requestChan {
// The following loop will only terminate when the request channel is closed (and there are no more items)
for job := range requestChan {

Just to help people parsing the code.

@gfichtenholt (Contributor, Author):

+1
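
In miniature, the pattern this comment documents (a self-contained sketch, not the PR's actual code): each worker's range loop ends only when the channel is closed and drained, and wg.Wait() then releases once every worker has exited.

package main

import "sync"

// fanOut distributes jobs across numWorkers goroutines and blocks until
// all jobs have been handled.
func fanOut(jobs []string, numWorkers int, handle func(string)) {
	requestChan := make(chan string, numWorkers)
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// This loop only terminates when requestChan is closed
			// (and there are no more items).
			for job := range requestChan {
				handle(job)
			}
		}()
	}
	for _, j := range jobs {
		requestChan <- j
	}
	close(requestChan)
	wg.Wait()
}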

}
}
return responseItems, nil
}
@absoludity (Contributor):

Neat. As above, I'd sprinkle a small comment just for things that are less obvious if you haven't recently been using channels.

@gfichtenholt (Contributor, Author):

+1

svr := NewServer(clientGetter)
func RegisterWithGRPCServer(s grpc.ServiceRegistrar, clientGetter server.KubernetesClientGetter) (interface{}, error) {
log.Infof("+fluxv2 RegisterWithGRPCServer")
// TODO (gfichtenholt) return an error when func signature is changed to allow for it
@absoludity (Contributor):

Looks like you've now done that TODO?

@gfichtenholt (Contributor, Author):

oops. will remove. thanks

}

if request.Context.Cluster != "" {
if request != nil && request.Context != nil && request.Context.Cluster != "" {
@absoludity (Contributor):

grpc compiles in getters for you which automatically return a default (empty) struct if the pointer was nil. That is, you should be able to replace the above with:

Suggested change
if request != nil && request.Context != nil && request.Context.Cluster != "" {
if request != nil && request.GetContext().GetCluster() != "" {

(not even sure if it works with a nil object to begin with, I forget).

@gfichtenholt (Contributor, Author):

+1
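
For context, generated proto getters are nil-safe, which is what makes the suggested chained call work. Roughly the shape protoc-gen-go emits (a sketch, not the actual generated file):

package main

// Context stands in for the generated message type.
type Context struct {
	Cluster string
}

// GetCluster tolerates a nil receiver, so a chain like
// request.GetContext().GetCluster() never panics even when Context is unset.
func (x *Context) GetCluster() string {
	if x != nil {
		return x.Cluster
	}
	return ""
}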

// TODO (gfichtenholt)
// 1. HelmChart object needs to be co-located in the same namespace as the HelmRepository it is referencing.
// 2. flux impersonates a "super" user when doing this (see fluxv2 plug-in specific notes at the end of
// design doc). We should probably be doing something similar to avoid RBAC-related problems
@absoludity (Contributor):

I'll have to go back and re-read that to get more context; I'm generally not keen on avoiding RBAC-related problems with "super" users, but it probably means something different. Generally the user's token should have permission to create the chart. I'll read the doc more thoroughly later.

}
})
}

@absoludity (Contributor):

These last two are very long tests... but I guess that's due to the related state that you're needing to setup (with the cache) to actually run the test? Looks painful (for you).

@gfichtenholt (Contributor, Author):

it WAS painful. Took several days to make it work. Bleh. Will see if there is some opportunity for clean-up/tidy-up later.

if err != nil {
return nil, err
}
return protoMsg.AvailablePackagesSummaries, nil
@absoludity (Contributor):

Hmm, so we're just caching the blob of available package summaries. From a bit of reading, it sounds like we'd need to add our own indices manually to have some sort of selection/filtering of results with redis, which might be one big reason to use a database for the cache in the first place :/ (just thinking out aloud based on what we're currently doing with the helm support).

@gfichtenholt (Contributor, Author):

yeah, I know. I will change what is being cached for flux. Wasn't meant to be final. The goal was to get the plumbing right.

@gfichtenholt gfichtenholt merged commit c079ec1 into vmware-tanzu:master Jul 8, 2021
Kubeapps automation moved this from In progress to Done Jul 8, 2021
@gfichtenholt gfichtenholt deleted the add-caching-for-fluxv2-plugin branch July 8, 2021 00:12
Labels: none yet
Projects: Kubeapps (Done)
Development: successfully merging this pull request may close the issue "Investigate caching layer for kubeapps plug-ins"
3 participants