Fix spelling errors in documentation only (#9906)
* Fix typos in readme, docs, metadata, and comments

* Fix metric names
ChristineTChen committed Aug 17, 2021
1 parent e1367f2 commit 3bf453a
Showing 75 changed files with 132 additions and 132 deletions.
6 changes: 3 additions & 3 deletions aerospike/metadata.csv
@@ -70,7 +70,7 @@ aerospike.cluster_is_member,gauge,,,,,0,aerospike,
aerospike.migrate_allowed,gauge,,,,,0,aerospike,
aerospike.system_swapping,gauge,,,,Indicate that the system is currently swapping RAM to disk,-1,aerospike,
aerospike.dlog_free_pct,gauge,,percent,,The percentage of the digest log free and available for use.,1,aerospike,
aerospike.free_pct_disk,gauge,,percent,,The percentage of disk capacity free for this namespace. Deprecated sicne 3.9.,1,aerospike,
aerospike.free_pct_disk,gauge,,percent,,The percentage of disk capacity free for this namespace. Deprecated since 3.9.,1,aerospike,
aerospike.free_pct_memory,gauge,,percent,,The percentage of memory capacity free for this namespace. Deprecated since 3.9.,1,aerospike,
aerospike.free_dlog_pct,gauge,,percent,,The percentage of the digest log free and available for use. Deprecated since 3.9,1,aerospike,
aerospike.heap_efficiency_pct,gauge,,percent,,The heap_allocated_kbytes / heap_mapped_kbytes ratio. A lower number indicates a higher fragmentation rate.,0,aerospike,
@@ -702,8 +702,8 @@ aerospike.datacenter.dc_recs_shipped,gauge,,record,,"The number of records that
aerospike.datacenter.dc_recs_shipped_ok,gauge,,record,,The number of records that have been successfully shipped. [Removed in 5.0.0],0,aerospike,
aerospike.datacenter.dc_remote_ship_avg_sleep,gauge,,millisecond,,The average number of ms of sleep for each record being shipped. [Removed in 5.0.0],0,aerospike,
aerospike.datacenter.dc_size,gauge,,,,The cluster size of the destination DC. [Removed in 5.0.0],0,aerospike,
aerospike.namespace.tps.read,gauge,,,,The throughput performace of reads. [Removed in 5.1.0],0,aerospike,
aerospike.namespace.tps.write,gauge,,,,The throughput performace of writes. [Removed in 5.1.0],0,aerospike,
aerospike.namespace.tps.read,gauge,,,,The throughput performance of reads. [Removed in 5.1.0],0,aerospike,
aerospike.namespace.tps.write,gauge,,,,The throughput performance of writes. [Removed in 5.1.0],0,aerospike,
aerospike.set.memory_data_bytes,gauge,,byte,,The memory used by this set for the data part (does not include index part). Value will be 0 if data is not stored in memory. ,0,aerospike,
aerospike.set.objects,gauge,,record,,The total number of objects (master and all replicas) in this set on this node.,0,aerospike,Set Objects
aerospike.set.stop_writes_count,gauge,,,,The total count this set has hit stop_writes,0,aerospike,
2 changes: 1 addition & 1 deletion agent_metrics/metadata.csv
@@ -49,7 +49,7 @@ datadog.agent.memstats.alloc,gauge,,,,,0,agent_metrics,mem alloc
datadog.agent.memstats.free,gauge,,,,,0,agent_metrics,mem free
datadog.agent.memstats.heap_alloc,gauge,,,,,0,agent_metrics,mem heap alloc
datadog.agent.memstats.heap_idle,gauge,,,,,0,agent_metrics,mem heap idle
datadog.agent.memstats.heap_insue,gauge,,,,,0,agent_metrics,mem heap insue
datadog.agent.memstats.heap_inuse,gauge,,,,,0,agent_metrics,mem heap inuse
datadog.agent.memstats.heap_objects,gauge,,,,,0,agent_metrics,mem heap objects
datadog.agent.memstats.heap_released,gauge,,,,,0,agent_metrics,mem heap released
datadog.agent.memstats.heap_sys,gauge,,,,,0,agent_metrics,mem heap sys
2 changes: 1 addition & 1 deletion avi_vantage/metadata.csv
@@ -50,7 +50,7 @@ avi_vantage.l7_client.avg_waf_rejected,gauge,15,transaction,second,Average numbe
avi_vantage.l7_client.pct_get_reqs,gauge,15,percent,,Number of HTTP GET requests as a percentage of total requests received.,0,avi_vantage,
avi_vantage.l7_client.pct_post_reqs,gauge,15,percent,,Number of HTTP POST requests as a percentage of total requests received.,0,avi_vantage,
avi_vantage.l7_client.pct_response_errors,gauge,15,percent,,Percent of 4xx and 5xx responses,0,avi_vantage,
avi_vantage.l7_client.pct_waf_attacks,gauge,15,percent,,Malicious transactions (attacks) identified by WAF as the pecentage of total requests received.,0,avi_vantage,
avi_vantage.l7_client.pct_waf_attacks,gauge,15,percent,,Malicious transactions (attacks) identified by WAF as the percentage of total requests received.,0,avi_vantage,
avi_vantage.l7_client.pct_waf_disabled,gauge,15,percent,,Transactions bypassing WAF as the percentage of total requests received.,0,avi_vantage,
avi_vantage.l7_client.sum_application_response_time,gauge,15,second,,Total time taken by server to respond to request.,0,avi_vantage,
avi_vantage.l7_client.sum_client_data_transfer_time,gauge,15,second,,Average client data transfer time computed by adding response latencies across all HTTP requests.,0,avi_vantage,
@@ -142,7 +142,7 @@ def _process_nodetool_output(self, output):
nodes = []
datacenter_name = ""
for line in output.splitlines():
# Ouput of nodetool
# Output of nodetool
# Datacenter: dc1
# ===============
# Status=Up/Down
2 changes: 1 addition & 1 deletion cilium/metadata.csv
@@ -34,7 +34,7 @@ cilium.k8s_client.api_latency_time.seconds.count,count,,request,,"Count of proce
cilium.k8s_client.api_latency_time.seconds.sum,gauge,,second,,"Sum of processed API call duration",-1,cilium,api latency sum
cilium.kubernetes.events_received.total,count,,event,,"Number of Kubernetes received events processed",0,cilium,k8s events rcv count
cilium.kubernetes.events.total,count,,event,,"Number of Kubernetes events processed",0,cilium,k8s events count
cilium.nodes.all_datapath_validations.total,count,,unit,,"Number of validation calls to implement the datapath implemention of a node",0,cilium,node validations
cilium.nodes.all_datapath_validations.total,count,,unit,,"Number of validation calls to implement the datapath implementation of a node",0,cilium,node validations
cilium.nodes.all_events_received.total,count,,event,,"Number of node events received",0,cilium,node events rcvd
cilium.nodes.managed.total,gauge,,node,,"Number of nodes managed",0,cilium,node managed
cilium.policy.count,gauge,,unit,,"Number of policies currently loaded",0,cilium,policy count
2 changes: 1 addition & 1 deletion cisco_aci/metadata.csv
@@ -126,7 +126,7 @@ cisco_aci.capacity.leaf.multicast.limit,gauge,,item,,the multicast endpoint capa
cisco_aci.capacity.apic.endpoint_group.utilized,gauge,,item,,the endpoint groups the apic is utilizing,0,cisco_aci,
cisco_aci.capacity.apic.bridge_domain.utilized,gauge,,item,,the bridge domains the apic is utilizing,0,cisco_aci,
cisco_aci.capacity.apic.endpoint.utilized,gauge,,item,,the endpoints the apic is utilizing,0,cisco_aci,
cisco_aci.capacity.apic.fabric_node.utilized,gauge,,item,,the numer of fabric nodes the apic is using,0,cisco_aci,
cisco_aci.capacity.apic.fabric_node.utilized,gauge,,item,,the number of fabric nodes the apic is using,0,cisco_aci,
cisco_aci.capacity.apic.private_network.utilized,gauge,,item,,the number of private networks the apic is using,0,cisco_aci,
cisco_aci.capacity.apic.tenant.utilized,gauge,,item,,the number of tenants the apic is using,0,cisco_aci,
cisco_aci.capacity.apic.vmware_domain.limit,gauge,,item,,the limit on number of vmware domains the apic can use,0,cisco_aci,
4 changes: 2 additions & 2 deletions clickhouse/metadata.csv
@@ -21,8 +21,8 @@ clickhouse.thread.local.total,gauge,,thread,,The number of threads in local thre
clickhouse.thread.local.active,gauge,,thread,,The number of threads in local thread pools running a task.,0,clickhouse,
clickhouse.query.memory,gauge,,byte,,Total amount of memory allocated in currently executing queries. Note that some memory allocations may not be accounted.,0,clickhouse,
clickhouse.merge.memory,gauge,,byte,,Total amount of memory allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.,0,clickhouse,
clickhouse.background_pool.processing.memory,gauge,,byte,,"Total amount of memory allocated in background processing pool (that is dedicated for backround merges, mutations and fetches). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.",0,clickhouse,
clickhouse.background_pool.move.memory,gauge,,byte,,"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.",0,clickhouse,
clickhouse.background_pool.processing.memory,gauge,,byte,,"Total amount of memory allocated in background processing pool (that is dedicated for background merges, mutations and fetches). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.",0,clickhouse,
clickhouse.background_pool.move.memory,gauge,,byte,,"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks.",0,clickhouse,
clickhouse.background_pool.schedule.memory,gauge,,byte,,Total amount of memory allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables).,0,clickhouse,
clickhouse.merge.active,gauge,,merge,,The number of executing background merges,0,clickhouse,
clickhouse.file.open.read,gauge,,file,,The number of files open for reading,0,clickhouse,
@@ -34,15 +34,15 @@ def check_election_status(self, config):
or no record is found. Monitors on the service-check should have
no-data alerts enabled to account for this.
The config objet requires the following fields:
The config object requires the following fields:
namespace (prefix for the metrics and check)
record_kind (leases, endpoints or configmap)
record_name
record_namespace
tags (optional)
It reads the following agent configuration:
kubernetes_kubeconfig_path: defaut is to use in-cluster config
kubernetes_kubeconfig_path: default is to use in-cluster config
"""
try:
record = self._get_record(
@@ -335,7 +335,7 @@ def __init__(self, collector=None, callback=None):
the collector will be called when the result from the Job is
ready
:param callback: when not None, function to call when the
result becomes available (this is the paramater passed to the
result becomes available (this is the parameter passed to the
Pool::*_async() methods.
"""
self._success = False
@@ -396,7 +396,7 @@ def _set_value(self, value):
traceback.print_exc()

def _set_exception(self):
"""Called by a Job object to tell that an exception occured
"""Called by a Job object to tell that an exception occurred
during the processing of the function. The object will become
ready but not successful. The collector's notify_ready()
method will be called, but NOT the callback method"""
@@ -165,7 +165,7 @@ def _finalize_tags_to_submit(self, _tags, metric_name, val, metric, custom_tags=

def _filter_metric(self, metric, scraper_config):
"""
Used to filter metrics at the begining of the processing, by default no metric is filtered
Used to filter metrics at the beginning of the processing, by default no metric is filtered
"""
return False

@@ -258,7 +258,7 @@ def create_scraper_configuration(self, instance=None):
config['type_overrides'].update(instance.get('type_overrides', {}))

# `_type_override_patterns` is a dictionary where we store Pattern objects
# that match metric names as keys, and their corresponding metric type overrrides as values.
# that match metric names as keys, and their corresponding metric type overrides as values.
config['_type_override_patterns'] = {}

with_wildcards = set()
@@ -271,7 +271,7 @@ def create_scraper_configuration(self, instance=None):
for metric in with_wildcards:
del config['type_overrides'][metric]

# Some metrics are retrieved from differents hosts and often
# Some metrics are retrieved from different hosts and often
# a label can hold this information, this transfers it to the hostname
config['label_to_hostname'] = instance.get('label_to_hostname', default_instance.get('label_to_hostname', None))

@@ -743,7 +743,7 @@ def process_metric(self, metric, scraper_config, metric_transformers=None):
self.log.warning('Error handling metric: %s - error: %s', metric.name, err)

return
# check for wilcards in transformers
# check for wildcards in transformers
for transformer_name, transformer in iteritems(metric_transformers):
if transformer_name.endswith('*') and metric.name.startswith(transformer_name[:-1]):
transformer(metric, scraper_config, transformer_name)
@@ -139,7 +139,7 @@ def normalize_metric_config(check_config):

def get_native_transformer(check, metric_name, modifiers, global_options):
"""
Uses whatever the endpoint describes as the metric type in the first occurence.
Uses whatever the endpoint describes as the metric type in the first occurrence.
"""
transformer = None

@@ -142,7 +142,7 @@ def get_scraper(self, instance):
scraper.NAMESPACE = namespace
# Metrics are preprocessed if no mapping
metrics_mapper = {}
# We merge list and dictionnaries from optional defaults & instance settings
# We merge list and dictionaries from optional defaults & instance settings
metrics = default_instance.get("metrics", []) + instance.get("metrics", [])
for metric in metrics:
if isinstance(metric, string_types):
@@ -140,7 +140,7 @@ def __init__(self, *args, **kwargs):
# overloaded/hardcoded in the final check not to be counted as custom metric.
self.type_overrides = {}

# Some metrics are retrieved from differents hosts and often
# Some metrics are retrieved from different hosts and often
# a label can hold this information, this transfers it to the hostname
self.label_to_hostname = None

@@ -143,7 +143,7 @@ def _get_counter_dictionary(self):

# create a table of the keys to the counter index, because we want to look up
# by counter name. Some systems may have an odd number of entries, don't
# accidentaly index at val[len(val]
# accidentally index at val[len(val]
for idx in range(0, len(val) - 1, 2):
WinPDHCounter.pdh_counter_dict[val[idx + 1]].append(val[idx])

@@ -397,7 +397,7 @@ def assert_metrics_using_metadata(
actual_metric_type = AggregatorStub.METRIC_ENUM_MAP_REV[metric_stub.type]

# We only check `*.count` metrics for histogram and historate submissions
# Note: all Openmetrics histogram and summary metrics are actually separatly submitted
# Note: all Openmetrics histogram and summary metrics are actually separately submitted
if check_submission_type and actual_metric_type in ['histogram', 'historate']:
metric_stub_name += '.count'

2 changes: 1 addition & 1 deletion datadog_checks_base/datadog_checks/base/utils/tailfile.py
@@ -66,7 +66,7 @@ def _open_file(self, move_end=False, pos=False):
pos = False

# Check if file has been truncated and too much data has
# alrady been written (copytruncate and opened files...)
# already been written (copytruncate and opened files...)
if size >= self.CRC_SIZE and self._crc is not None and crc != self._crc:
self._log.debug("Beginning of file modified, reopening")
move_end = False
@@ -75,7 +75,7 @@ def get_changes_per_agent(since, to):
changes_per_agent[current_tag] = {}

for name, ver in catalog_now.items():
# at some point in the git history, the requirements file erroneusly
# at some point in the git history, the requirements file erroneously
# contained the folder name instead of the package name for each check,
# let's be resilient
old_ver = (
@@ -357,7 +357,7 @@ def _is_comment(start, config_lines, indent, errors):
idx = start
end = len(config_lines)
if "## @param" in config_lines[idx]:
# If wee see @param, no matter how correctly formatted it is, we expect it to be a param declaration
# If we see @param, no matter how correctly formatted it is, we expect it to be a param declaration
return False

while idx < end:
@@ -366,7 +366,7 @@ def _is_comment(start, config_lines, indent, errors):
idx += 1
continue
elif is_blank(current_line):
# End of bloc with only ## comments, the whole block is indeed only a comment
# End of block with only ## comments, the whole block is indeed only a comment
return True
elif re.match(INCORRECTLY_INDENTED_COMMENT_REGEX, current_line):
# This is still a comment but incorrectly indented
@@ -37,7 +37,7 @@
# Increase requests timeout.
tuf_settings.SOCKET_TIMEOUT = 60

# After we import everything we neeed, shut off all existing loggers.
# After we import everything we need, shut off all existing loggers.
logging.config.dictConfig({'disable_existing_loggers': True, 'version': 1})


2 changes: 1 addition & 1 deletion docs/developer/meta/ci.md
@@ -95,7 +95,7 @@ ddev validate config
```
This verifies that the config specs for all integrations are valid by enforcing our configuration spec [schema](config-specs.md#schema). The most common failure at this validation stage is some version of `File <INTEGRATION_SPEC> needs to be synced.` To resolve this issue, you can run `ddev validate config --sync`

If you see failures regarding formatting or missing parameters, see our [config spec](config-specs.md#schema) documentation for more details on how to construc configuration specs.
If you see failures regarding formatting or missing parameters, see our [config spec](config-specs.md#schema) documentation for more details on how to construct configuration specs.

### Dashboard definition files

4 changes: 2 additions & 2 deletions docs/developer/tutorials/snmp/profiles.md
@@ -24,7 +24,7 @@ Generally, you'll want to search the web and find out about the following:
- Available versions of the device, and which ones we target.

> E.g. HP iLO devices exist in multiple versions (version 3, version 4...). Here, we are specifically targetting HP iLO4.
> E.g. HP iLO devices exist in multiple versions (version 3, version 4...). Here, we are specifically targeting HP iLO4.
- Supported MIBs and OIDs (often available in official documentation), and associated MIB files.

@@ -105,7 +105,7 @@ SNMP-FRAMEWORK-MIB:
- snmpEngine
```

Will include `system`, `interfaces` and `ip` nodes from `RFC1213-MIB`, no node fro, `CISCO-SYSLOG-MIB` and node `snmpEngine` from `SNMP-FRAMEWORK-MIB`.
Will include `system`, `interfaces` and `ip` nodes from `RFC1213-MIB`, no node from `CISCO-SYSLOG-MIB`, and node `snmpEngine` from `SNMP-FRAMEWORK-MIB`.

Note that each `MIB:node_name` correspond to exactly one and only one OID. However, some MIBs report legacy nodes that are overwritten.

2 changes: 1 addition & 1 deletion docs/proposals/TEMPLATE.md
@@ -34,7 +34,7 @@ fulfill. For example:*

*Describe your solution in the bare minimum detail to be understood. Explain
why it's better than the current implementation and better than other options. Address
any critical operational issues (failure modes, failover, redudancy, performance, cost).
any critical operational issues (failure modes, failover, redundancy, performance, cost).
For example:*

Embedding CPython is a well known, documented and supported practice which is quite
