
Memory rising to 29 GB in a one-hour run #477

Open
niranjanhm-bh-git opened this issue Mar 3, 2023 · 3 comments

Comments

niranjanhm-bh-git commented Mar 3, 2023

Steps to reproduce:

  1. A singleton InfluxDBClient instance is used for all writes, and a single WriteApi instance is shared by every call:

client = new InfluxDBClient(url, Token);
var writeOptions = new WriteOptions
{
    BatchSize = 5000,
    FlushInterval = 500,
    JitterInterval = 0,
};
converter = new DomainEntityConverter();
writeApiConv = client.GetWriteApi(writeOptions, converter);

// The SetData method below is called by another test client under load:
// 200 parallel calls, each carrying 500 samples, i.e. 100,000 (1 lakh) tags (or points) per burst.
// This load is issued at a 1 second interval.
// Observation: the good news (which I am happy with) is that writes are fast, ~100 ms; 200 parallel calls finish easily.
// My main concern is the memory rise.
[Screenshot: memory usage graph]
Below details are for a 10-minute run, with memory reaching ~15 GB:
[Screenshots: memory profiler details]

public async Task SetData(SetDataRequest dataRequest, string bucket = bucketTagAsMeasurementName_Conv)
{
    var data = new List<StaticTagAsMeasurement2>();
    foreach (var sampleSet in dataRequest.SampleSets)
    {
        for (int sampleIndex = 0; sampleIndex < sampleSet.Samples.Count; sampleIndex++)
        {
            var sample = sampleSet.Samples[sampleIndex];
            data.Add(new StaticTagAsMeasurement2
            {
                Id = Guid.Parse(sampleSet.TagId),
                HashId = sampleSet.TagId.GetHashCode(),
                Value = double.Parse(sample.Value.ToString()),
                DataStatus = sample.DataStatus,
                NodeStatus = sample.NodeStatus,
                IsValid = sample.DataStatus == 0,
                Time = sample.Timestamp
            });
        }
    }
    writeApiConv.WriteMeasurements(data, WritePrecision.Ns, bucket, orgName);
}

// I first tried the PointData approach (commented out below) and then the PointData.Builder approach; memory rises the same with both.

class DomainEntityConverter
{
    public PointData ConvertToPointData<T>(T entity, WritePrecision precision)
    {
        if (entity is StaticTagAsMeasurement2 tag2)
        {
            var pointBuilder = PointData.Builder.Measurement(tag2.Id.ToString());
            pointBuilder = pointBuilder.Field("hashId", tag2.HashId);
            pointBuilder = pointBuilder.Field("value", tag2.Value);
            pointBuilder = pointBuilder.Field("isValid", tag2.IsValid);
            pointBuilder = pointBuilder.Field("dataStatus", tag2.DataStatus);
            pointBuilder = pointBuilder.Field("nodeStatus", tag2.NodeStatus);
            pointBuilder = pointBuilder.Timestamp(new DateTime(tag2.Time, DateTimeKind.Utc), precision);
            return pointBuilder.ToPointData();

            // var pointData = PointData
            //     .Measurement(tag2.Id.ToString())
            //     .Field("hashId", tag2.HashId)
            //     .Field("value", tag2.Value)
            //     .Field("isValid", tag2.IsValid)
            //     .Field("dataStatus", tag2.DataStatus)
            //     .Field("nodeStatus", tag2.NodeStatus)
            //     .Timestamp(new DateTime(tag2.Time, DateTimeKind.Utc), precision);
            // return pointData;
        }

        // All code paths must return a value; unsupported entity types are rejected here.
        throw new NotSupportedException($"Unsupported entity type: {typeof(T)}");
    }
}
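A side note on the client lifecycle that is relevant to memory behavior: in influxdb-client-csharp, WriteApi buffers points internally, and both WriteApi and InfluxDBClient implement IDisposable; disposing the write API flushes any pending batches and shuts down the batching pipeline. A minimal sketch of that disposal pattern (the url, token, bucket, and org values below are placeholders I made up, not values from this report):

```csharp
using System;
using InfluxDB.Client;
using InfluxDB.Client.Api.Domain;
using InfluxDB.Client.Writes;

class WriteLifecycleSketch
{
    static void Main()
    {
        // Placeholders only; requires a reachable InfluxDB 2.x instance to actually run.
        var url = "http://localhost:8086";
        var token = "my-token";

        // Both objects are IDisposable; Dispose() on the write API flushes
        // any points still sitting in the internal batch buffer.
        using (var client = new InfluxDBClient(url, token))
        using (var writeApi = client.GetWriteApi())
        {
            var point = PointData
                .Measurement("demo")
                .Field("value", 1.0)
                .Timestamp(DateTime.UtcNow, WritePrecision.Ns);

            writeApi.WritePoint(point, "my-bucket", "my-org");
        } // leaving the using block flushes buffered points and releases the pipeline
    }
}
```

In a long-running service with a singleton WriteApi (as in this report), the same flush-and-release only happens on process shutdown, so any memory held by the batching pipeline persists for the life of the process.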

Expected behavior:
Memory usage should stay stable (around 1 GB, say) so that long runs are possible.

Actual behavior:
Memory usage rises very fast, reaching ~29 GB after a one-hour run.

Specifications:

  • Client Version:
  • InfluxDB Version: v2.6
  • Platform: Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
    64.0 GB RAM (63.7 GB usable)
    64-bit operating system, x64-based processor
@niranjanhm-bh-git (Author) commented:
Hi, is there any update on this issue? We are doing a POC with InfluxDB, and without this resolved we won't be able to proceed.

@niranjanhm-bh-git (Author) commented:
Kindly have a look at the findings; a quick update on whether you plan to treat this as a bug would help.

@bednar (Contributor) commented Apr 24, 2023:

Hi @niranjanhm-bh-git,

This is most likely caused by the Windows Defender Network Filter. For more info, see #164 (comment).

Regards
