32 changes: 20 additions & 12 deletions examples/get_data_advanced.py
@@ -1,14 +1,15 @@
#!/usr/bin/env python
#
# This script shows an advanced Sysdig Cloud data request that leverages
# This script shows an advanced Sysdig Monitor data request that leverages
# filtering and segmentation.
#
# The request returns the last 10 minutes of CPU utilization for all of the
# containers inside the given host, with 1 minute data granularity
# The request returns the last 10 minutes of CPU utilization for the 5
# busiest containers inside the given host, with 1 minute data granularity
#

import os
import sys
import json
sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(sys.argv[0])), '..'))
from sdcclient import SdcClient

@@ -34,10 +35,10 @@
metrics = [
# The first metric we request is the container name. This is a segmentation
# metric, and you can tell by the fact that we don't specify any aggregation
# criteria. This entry tells Sysdig Cloud that we want to see the CPU
# criteria. This entry tells Sysdig Monitor that we want to see the CPU
# utilization for each container separately.
{"id": "container.name"},
# The second metric we reuest is the CPU. We aggregate it as an average.
# The second metric we request is the CPU. We aggregate it as an average.
{"id": "cpu.used.percent",
"aggregations": {
"time": "avg",
@@ -51,19 +52,26 @@
#
filter = "host.hostName = '%s'" % hostname

#
# Paging (from and to included; by default you get from=0 to=9)
# Here we'll get the top 5.
#
paging = { "from": 0, "to": 4 }

#
# Fire the query.
#
res = sdclient.get_data(metrics, # metrics list
-600, # start_ts = 600 seconds ago
0, # end_ts = now
60, # 1 data point per minute
filter, # The filter
'container') # The source for our metrics is the container
res = sdclient.get_data(metrics=metrics, # List of metrics to query
start_ts=-600, # Start of query span is 600 seconds ago
end_ts=0, # End the query span now
sampling_s=60, # 1 data point per minute
filter=filter, # The filter specifying the target host
paging=paging, # Paging to limit the result to the 5 busiest containers
datasource_type='container') # The source for our metrics is the container

#
# Show the result!
#
print res[1]
print json.dumps(res[1], sort_keys=True, indent=4)
if not res[0]:
sys.exit(1)
11 changes: 3 additions & 8 deletions examples/get_data_simple.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python
#
# This script shows the basics of getting data out of Sysdig Cloud by creating a
# very simple request that has no filter an no segmentation.
# This script shows the basics of getting data out of Sysdig Monitor by creating a
# very simple request that has no filter and no segmentation.
#
# The request queries for the average CPU across all of the instrumented hosts for
# the last 10 minutes, with 1 minute data granularity
@@ -54,15 +54,10 @@
#
sampling = 60

#
# Paging (from and to included; by default you get from=0 to=9)
#
paging = { "from": 0, "to": 9 }

#
# Load data
#
res = sdclient.get_data(metrics, start, end, sampling, filter = filter, paging = paging)
res = sdclient.get_data(metrics, start, end, sampling, filter = filter)

#
# Show the result
2 changes: 1 addition & 1 deletion sdcclient/_client.py
@@ -412,7 +412,7 @@ def get_data(self, metrics, start_ts, end_ts=0, sampling_s=0,
- **sampling_s**: the duration of the samples that will be returned. 0 means that the whole data will be returned as a single sample.
- **filter**: a boolean expression combining Sysdig Monitor segmentation criteria that defines what the query will be applied to. For example: *kubernetes.namespace.name='production' and container.image='nginx'*.
- **datasource_type**: specify the metric source for the request, can be ``container`` or ``host``. Most metrics, for example ``cpu.used.percent`` or ``memory.bytes.used``, are reported by both hosts and containers. By default, host metrics are used, but if the request contains a container-specific grouping key in the metric list/filter (e.g. ``container.name``), then the container source is used. In cases where grouping keys are missing or apply to both hosts and containers (e.g. ``tag.Name``), *datasource_type* can be explicitly set to avoid any ambiguity and allow the user to select precisely what kind of data should be used for the request. `examples/get_data_datasource.py <https://github.com/draios/python-sdc-client/blob/master/examples/get_data_datasource.py>`_ contains a few examples that should clarify the use of this argument.
- **paging**:
- **paging**: if segmentation of the query generates values for several different entities (e.g. containers/hosts), this parameter specifies which to include in the returned result. It's specified as a dictionary of inclusive values for ``from`` and ``to`` with the default being ``{ "from": 0, "to": 9 }``, which will return values for the "top 10" entities. The meaning of "top" is query-dependent, based on points having been sorted via the specified group aggregation, with the results sorted in ascending order if the group aggregation is ``min`` or ``none``, and descending order otherwise.
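As a quick illustration of how these arguments fit together, here is a minimal sketch mirroring the updated examples/get_data_advanced.py above; the API token and hostname are placeholders, and the filter value and paging window are purely illustrative:

from sdcclient import SdcClient

# Placeholder token; the bundled examples read it from the command line instead.
sdclient = SdcClient("YOUR-API-TOKEN")

metrics = [
    {"id": "container.name"},          # segmentation key, no aggregation
    {"id": "cpu.used.percent",
     "aggregations": {"time": "avg", "group": "avg"}}]

res = sdclient.get_data(metrics=metrics,
                        start_ts=-600,                        # last 10 minutes
                        end_ts=0,                             # ...up to now
                        sampling_s=60,                        # 1 data point per minute
                        filter="host.hostName = 'my-host'",   # illustrative hostname
                        paging={"from": 0, "to": 4},          # top 5 entities, bounds inclusive
                        datasource_type='container')          # force the container datasource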

**Success Return Value**
A dictionary with the requested data. Data is organized in a list of time samples, each of which includes a UTC timestamp and a list of values, whose content and order reflect what was specified in the *metrics* argument.
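For completeness, a hedged sketch of consuming the result, following the pattern in the examples above: get_data returns an indexable pair whose first element signals success and whose second element carries the data. The per-sample key names ("data", "t", "d") used below are an assumption for illustration and are not spelled out in this diff:

import sys
import json

ok, data = res[0], res[1]
if not ok:
    sys.exit(1)

# Pretty-print everything, exactly as the updated examples do.
print(json.dumps(data, sort_keys=True, indent=4))

# Or walk the samples individually.
# Assumed layout: {"data": [{"t": <utc_timestamp>, "d": [value, ...]}, ...]}
for sample in data.get("data", []):
    print("%s -> %s" % (sample["t"], sample["d"]))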