
Reduce resource consumption when using file discovery with same file in multiple probes #634

benner opened this issue Jul 21, 2021 · 23 comments · Fixed by #640, #642, #643, #645 or #646

@benner
Contributor

benner commented Jul 21, 2021


Setup:
Multiple HTTP probes use the same file_targets {file_path} (generated from consul-template), each with a different filter{} to select its related targets (a similar approach to run_on {}). The .textpb file is about 220 MiB with more than 1M resource{}s, and cloudprober.cfg has more than 500 probes.

Problem:
During startup all probes launch file.refresh(), and it kills almost any machine. Looking at the code, all probes simultaneously open the same file and run .Unmarshal() at the same time, which is a memory- and CPU-heavy operation. Later on everything normalizes thanks to random delays and GC.


I tried a few different approaches, like putting in a global file mutex that serializes all processing, but then it takes ages to gather targets for all probes. Randomizing delays, as in the refresh loop, would help too, but like serial processing it would take ages. I also tried an approach similar to the one described at https://firas.dev.sy/post/go/2020/making-timed-maps-golang/, but simpler, with the API extended with .LoadOrStore() (as in sync.Map) to reserve the computation (the file read) and avoid the thundering-herd problem: two maps, one for reservations and another for caching the Unmarshal()'ed resources (before filtering). I got some promising results but don't have working/publishable code yet.
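
(A minimal sketch of the reservation idea, assuming a simplified single-map variant with sync.Once instead of the two-map design and with no expiry; all names here are illustrative, not Cloudprober code.)

package targetcache

import "sync"

// entry reserves the computation for one file: the first goroutine to
// store it wins the LoadOrStore race, and sync.Once makes every other
// caller block on that one result instead of re-parsing the file.
type entry struct {
	once      sync.Once
	resources []string // stands in for the Unmarshal()'ed resource{}s
}

var cache sync.Map // file path -> *entry

// loadFile returns the parsed resources for path, parsing at most once
// no matter how many probes ask concurrently.
func loadFile(path string) []string {
	v, _ := cache.LoadOrStore(path, &entry{})
	e := v.(*entry)
	e.once.Do(func() {
		e.resources = parse(path) // the expensive read + Unmarshal()
	})
	return e.resources
}

// parse is a hypothetical placeholder for reading the .textpb file and
// proto-unmarshaling it.
func parse(path string) []string {
	return []string{"target-1", "target-2"}
}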

Some numbers I've measured (no concurrency):

  • loading file - almost instant
  • .Unmarshal() - ~50s
  • filter targets for probe - ~1s

BTW, everything works with all probes and targets when I put everything statically in the main configuration file.

@manugarg
Contributor

manugarg commented Jul 21, 2021

@benner If all probes are using the same targets, you can consider using shared_targets.

Sorry, it's not documented, but here is what you can do:

Add the following config stanza at the config file level:

shared_targets {
  name: "my_file_targets"

  targets {
    file_targets {
      ..
    }
  }
}

and, then at the probe level:

probe {
  ...
  targets {
    shared_targets: "my_file_targets"
  }
}

This way we create only one targets structure in memory, and hence only one file refresh loop.

@manugarg
Contributor

Ah, but you're using different filters. Can these filters be a regex on target names?

targets {
  shared_targets: "my_file_targets"
  regex: ".."
}

@benner
Contributor Author

benner commented Jul 21, 2021

I saw shared_targets and regex filtering, but in my case all probes have different targets. Now I'm using label filters.

@benner
Contributor Author

benner commented Jul 21, 2021

Example of cfg:

resource {
  name: "<target>"
  labels {
    key: "probe"
    value: "<probe name>-<HTTPS|HTTP>"
  }
}

file_targets {
  file_path: "/tmp/targets.textpb"
  filter {
    key: "labels.probe",
    value: "<probe name>-HTTP"
  }
}

@manugarg
Contributor

manugarg commented Jul 21, 2021

I see. I guess rds_targets semantics would have been better for file_targets as well. The RDS server could run in the same process (all transparent to the probes and configuration), and all probes could just use rds_targets.

From a configuration standpoint, it would look like this:

probe {
  ...
  targets {
    rds_targets {
      resource_path: "file://<file_name>"
      filter {
        key: "labels.probe",
        value: "<probe name>-HTTP"
      }
    }
  }
}

...

rds_server {
  provider {
    id: "file"
    file_config {
      file_path: ....
    }
  }
}

Does that make sense at all? I can try implementing it. It may take a few days to get it reviewed and submitted though.

@benner
Contributor Author

benner commented Jul 21, 2021

I will look into that.

@manugarg
Contributor

@benner, FYI: I am also looking at adding the "file" provider to RDS. I'll try to create a PR soonish, so that you can test with it if you want.

@manugarg
Contributor

I still need to get the change reviewed internally, but here is the proposed change: #635 (branch rds_file).

With this change you can do the following:

  1. Add the following stanza to your cloudprober.cfg:

rds_server {
  provider {
    file_config {
      file_path: "/tmp/targets.json"
      re_eval_sec: 60
    }
  }
}

  2. Then specify targets in probes like this:

probe {
  ...
  targets {
    rds_targets {
      resource_path: "file://"
      filter {
        key: "labels.probe",
        value: "<probe name>-HTTP"
      }
    }
  }
  ...
}

If you'd like to test, you can download pre-built binaries from here: linux, all.

@benner
Contributor Author

benner commented Jul 23, 2021

I tried rds_targets and it works as expected, achieving what I wanted. What I found missing, or could not figure out how to change, is rds.ClientConf.re_eval_sec, which has a default of 30s (with this freshness Cloudprober grows to ~40 GiB instead of the 14 GiB of the static version).

From what I see, rds.ClientConf.re_eval_sec is not reachable from the configuration file:

% rg ClientConf --glob '*.proto'
rds/client/proto/config.proto
18:// ClientConf represents resource discovery service (RDS) based targets.
20:message ClientConf {

targets/proto/targets.proto
22:  optional rds.ClientConf.ServerOptions rds_server_options = 1;
129:  optional rds.ClientConf.ServerOptions rds_server_options = 4;

targets/lameduck/proto/config.proto
43:  optional rds.ClientConf.ServerOptions rds_server_options = 6;

Only the inner ServerOptions can be configured.

@manugarg
Contributor

Good point regarding not being able to control the client-side re_eval_sec through rds_targets. We can fix it, but I am not sure it will help a lot. Client-side refreshing causes a data copy between the server and the client, but it is filtered data.

To change refresh interval on the server side (where the file is actually reloaded), you can modify your rds_server config:

rds_server {
  provider {
    file_config {
      file_path: "/tmp/targets.json"
      re_eval_sec: 60  # <-- here
    }
  }
}

I am wondering if the 14 GB vs 40 GB difference is only due to temporary objects in memory or if something else is going on. At the time of refresh, new objects are created, and the old objects should automatically be garbage collected.

I've thought about implementing some improvements to the RDS protocol to track whether resources have changed and signal to the client when a refresh is not needed, but I've largely avoided that, as it may make things more complex and may introduce new failure modes.

Was it 14 GB with file_targets, or with targets directly embedded in the config file?

@benner
Contributor Author

benner commented Jul 23, 2021

I am wondering if the 14 GB vs 40 GB difference is only due to temporary objects in memory or if something else is going on.

I think it is directly related. I actually took some heap dumps (pprof) during RSS growth and looked at the .svg.

Attaching one from when RSS was 5 GB (fewer probes). I don't have the SVG from when it was 40 GB, but the situation was the same, only the numbers were bigger.

[heap profile SVG]

Next week I'll redo the test with rds.ClientConf.re_eval_sec changed by hand and recompiled.

Either way, the situation with the RDS file solution is some 5-6 times better than it was before :-)

@benner
Contributor Author

benner commented Jul 23, 2021

Was it 14 GB with file_targets, or with targets directly embedded in the config file?

Yes

@manugarg
Contributor

manugarg commented Jul 23, 2021

Thanks for sharing the heap profile. A few observations:

  • A large amount of memory is being used while parsing the textproto file: 60% of the total (earlier you were using JSON, right?).
  • 13% is going to runtime.malg -- that's basically for goroutines. I think these must be HTTP probe goroutines.
  • 10% + 3.54% is being used for running probes. Interestingly ListEndpoints is using only 1.64% of the heap.
  • fileLister.ListResources is using 8.4% of the heap space.

Thoughts:

  1. Parsing textproto is obviously the most expensive operation -- it seems to use a lot of intermediate storage for the TextProto->Proto conversion. One thing that could improve this is parsing the text proto one resource at a time and re-using the intermediate data structures. I'm not sure it's possible with textproto (I can think of how to do something like this in JSON, using json.RawMessage for lazy decoding; see the sketch below). If we can do it, it will improve memory usage but may increase compute time a little, though hopefully not by much, because behind the scenes the TextProto->Proto decoder is also parsing one message at a time.

  2. Goroutine heap usage and probes' heap usage is largely unavoidable.

  3. We can likely improve fileLister.ListResources's performance. Currently we allocate an array as big as the total number of resources (to avoid doing too many allocs), but that may be the wrong strategy. We could start with a minimum size, say 10 or even 100, and let append handle the growth of the slice.

I'll look at implementing 1 (at least for JSON) & 3.
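
(For the JSON side of thought 1, a minimal sketch of one-resource-at-a-time decoding using encoding/json's streaming Decoder, a close relative of the json.RawMessage approach, with the small-initial-slice idea from thought 3 folded in. The resource shape, field names, and file path are assumptions, not Cloudprober's actual types.)

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// resource is an illustrative stand-in for the RDS resource message.
type resource struct {
	Name   string            `json:"name"`
	Labels map[string]string `json:"labels"`
}

func main() {
	f, err := os.Open("/tmp/targets.json") // hypothetical targets file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Assumes the file is a top-level JSON array of resources.
	dec := json.NewDecoder(f)
	if _, err := dec.Token(); err != nil { // consume the opening '['
		panic(err)
	}

	// Thought 3: start small and let append grow the slice, instead of
	// preallocating for the total resource count.
	names := make([]string, 0, 100)

	var r resource
	for dec.More() {
		r = resource{} // reset and reuse the intermediate struct
		if err := dec.Decode(&r); err != nil {
			panic(err)
		}
		names = append(names, r.Name)
	}
	fmt.Printf("decoded %d resources\n", len(names))
}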

@benner
Contributor Author

benner commented Jul 24, 2021

I was always using TextProto.

@benner
Contributor Author

benner commented Jul 27, 2021

Tested the full configuration with the default 30s and with 60 min.

rds/client/client.go (hand-edited for the test):

	// reEvalInterval := time.Duration(client.c.GetReEvalSec()) * time.Second
	reEvalInterval := 30 * 60 * time.Second

With a 60 min re_eval_sec it works as expected: RSS grows almost the same as with the static configuration.

With the default 30s interval, the main reason for memory usage is the big number of probes. In my setup it means the RDS server receives around 3 requests per second. Result: all CPUs are busy (20 logical CPUs) and memory grows to ~40 GB.

30s, ~900 probes:
[memory usage graph]

60 min, ~900 probes:
[memory usage graph]

One drawback of a longer re_eval_sec is that target changes can take up to that same interval to reach actual probing, but I can live with that, because the whole point of this is to keep Cloudprober running without restarts :-)

manugarg added a commit that referenced this issue Jul 27, 2021
To avoid breaking existing users, continue providing "file_targets" (directly), but implement them using the RDS file targets provider.

The main reasoning behind this RDS provider is to allow multiple probes to use the same targets file without each probe having to load the same file again and again.

Ref: #634
PiperOrigin-RevId: 387196102
@manugarg
Contributor

Interesting. Thanks again for the heap profiles @benner!

So ListResources should get better with the change that I just committed to the main branch: #640. There was an inefficient resource allocation in the previous change.

Another improvement I am planning to make is to reload the file "only if" it has been modified since the last load. With that, you won't have to worry about re_eval_sec so much.
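
(A minimal sketch of that "reload only if modified" idea, comparing the file's mtime against the last successful load; the names are illustrative, not the actual provider code.)

package filelister

import (
	"os"
	"time"
)

// lister is an illustrative stand-in for the RDS file provider's lister.
type lister struct {
	path    string
	lastMod time.Time
	// the parsed resources would be cached here
}

// refresh re-parses the file only when its mtime has advanced since the
// last load; otherwise the cached resources are kept as-is.
func (l *lister) refresh() error {
	fi, err := os.Stat(l.path)
	if err != nil {
		return err
	}
	if !fi.ModTime().After(l.lastMod) {
		return nil // unchanged: skip the expensive read + unmarshal
	}
	// ... re-read and re-unmarshal the file here ...
	l.lastMod = fi.ModTime()
	return nil
}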

@manugarg
Contributor

manugarg commented Jul 28, 2021

Even though the resource allocation efficiency fix mentioned in the previous update will help quite a bit, I think #641 will have the most impact for a scenario like yours.

@manugarg
Contributor

With all the changes so far, I think you should see a substantial improvement in performance (much better than the earlier default case). You can even keep re_eval_sec very low in the RDS file provider config -- it won't reload the file unless it has changed.

Next up is #641. I'm working on it.

@manugarg manugarg added this to the v0.11.3 milestone Jul 30, 2021
manugarg added a commit that referenced this issue Aug 3, 2021
If the request contains the if_modified_since field, use it to decide whether to send all the resources in the response or not.

We always return last_modified in the response. This timestamp corresponds to when the lister's cache was refreshed from the actual file.

Ref: #641 & #634
PiperOrigin-RevId: 388502695
@manugarg
Contributor

manugarg commented Aug 3, 2021

@benner, I think we should be good here now. You should be able to use a lower re_eval_sec (30s) without any performance impact. Until you update the targets file, no new large data allocations should happen.

Let me know how it goes.
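
(A minimal sketch of the if_modified_since exchange described in the commit above, server side; the shapes below are illustrative, not the actual RDS protos.)

package rdsserver

// listRequest and listResponse mirror the idea, not the real messages.
type listRequest struct {
	IfModifiedSince int64 // unix seconds of the client's last snapshot
}

type listResponse struct {
	LastModified int64    // when the lister's cache was last refreshed
	Resources    []string // left empty when the client is already current
}

type lister struct {
	lastRefreshed int64 // unix seconds of the last cache refresh
	resources     []string
}

// list returns the full resource set only when the cache has been
// refreshed after the client's snapshot; otherwise just the timestamp,
// so the client keeps its cached copy and no large allocation happens.
func (l *lister) list(req listRequest) listResponse {
	resp := listResponse{LastModified: l.lastRefreshed}
	if req.IfModifiedSince >= l.lastRefreshed {
		return resp // not modified
	}
	resp.Resources = l.resources
	return resp
}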

@manugarg manugarg closed this as completed Aug 3, 2021
@manugarg manugarg reopened this Aug 4, 2021
@manugarg manugarg self-assigned this Aug 5, 2021
@benner
Contributor Author

benner commented Aug 6, 2021

Thank you. Tested for a few hours. Everything seems even better.

[memory usage graph]

P.S. I tested on my laptop with the full config but without actual HTTP requests. After vacation I will test on the real system.

@benner benner closed this as completed Aug 6, 2021
@manugarg
Contributor

manugarg commented Aug 6, 2021

Awesome. Great to hear. Thanks for checking!

@benner
Contributor Author

benner commented Sep 22, 2021

Tested in semi-prod. Now it has more than 2M targets. Some stats (from Cloudprober-exported metrics):
[Cloudprober metrics graphs]

@manugarg
Contributor

@benner Awesome, great to see.
