
Merge pull request #117 from guilhemmarchand/docs_update
Docs fixing
guilhemmarchand committed Aug 1, 2020
2 parents 64b9e1b + 336e7ab commit d0e217a
Showing 7 changed files with 108 additions and 113 deletions.
20 changes: 10 additions & 10 deletions docs/FAQ.rst
@@ -11,9 +11,9 @@ See :ref:`Data Sources tracking concept and features`
In the context of data source, the field **"data_name"** represents the unique identifier of the data source.

- for regular data sources created by TrackMe, this is equal to the combination of <index>:<sourcetype>.
-- for Elastic Sources, the definition choosen when the entity is created
+- for Elastic Sources, the definition specified when the entity is created

-The data_name unique identifier is used in different parts of the application, such as the search trackers which rely on it to idenfity the data source.
+The data_name unique identifier is used in different parts of the application, such as the search trackers which rely on it to identify the data source.
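
As an illustration only (a sketch for understanding, not the application's internal code), the regular form of the identifier can be reproduced in SPL::

    index=* | eval data_name = index . ":" . sourcetype | stats count by data_name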

**What are the numbers in the "lag summary" column?**

@@ -31,7 +31,7 @@ In this context:
- event time stamp is the _time field, which is the time stamp that Splunk assigned to a given event when it was indexed on disk
- index time is the _indextime field, which is the epoch time corresponding to the moment when Splunk indexed the event on disk

-Depending on the use cases, both key performance metrics might be very important, or potienally one will matter more than the other.
+Depending on the use cases, both key performance metrics might be very important, or potentially one will matter more than the other.

For a continuous data flow, you need to know that the data is being indexed with the expected performance, and that you are not running late due to some underlying performance issues; this is where the lag_ingestion_sec matters.
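
As a hedged sketch (index and sourcetype are placeholders, this is not a TrackMe search), the ingestion lag described above can be computed manually from the two timestamps::

    index=your_index sourcetype=your_sourcetype
    | eval lag_ingestion_sec = _indextime - _time
    | stats avg(lag_ingestion_sec) AS avg_lag max(lag_ingestion_sec) AS max_lag by index, sourcetype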

@@ -44,7 +44,7 @@ See :ref:`Priority management`

The **"priority"** field is configurable within the UI, when you click on any entity and enter the modification space (``Modify`` bytton).

-Its value is defined automically when the entity is discovered, stored in the KVstores, its default value is defined by the following macro:
+Its value is defined automatically when the entity is discovered and stored in the KVstores; its default value is defined by the following macro:

``[trackme_default_priority]``
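
For illustration, overriding the default in a local macros.conf could look as follows; the stanza body is an assumption based on the behaviour described here, not the shipped definition::

    [trackme_default_priority]
    definition = "high"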

@@ -55,7 +55,7 @@ As well, the OOTB alerts are filtering by default on a given list of priorities,

``[trackme_alerts_priority]``

-By default all entities are added as “medium”, one option can be to update the macro to be looking at high only, such that you qualify for each entity is you want to be alerted on it and define it to high priority.
+By default, all entities are added as “medium”; one option is to update the macro to look at high only, such that you qualify each entity if you want to be alerted on it and define it as high priority.
Another option would be to define everything to low besides what you qualify and want to be monitoring and get alerted.

The purpose of the priority field is to provide granularity over which entities should generate alerts, while all information remains easily visible and summarised in the UI.
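
As a sketch only (the exact expansion shipped with the application may differ), restricting the OOTB alerts to high priority entities via the macro could look like::

    [trackme_alerts_priority]
    definition = priority="high"
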
@@ -121,15 +121,15 @@ How can you see a list of deleted entries? Can you undelete an entry?

A user can delete an entity stored in the KVstore, assuming the user has write permissions over the KVstores and other objects. (admin, part of trackme_admin role or custom allowed)

-The deletion feature is provided natively via the UI, when an entity is deleted the following worklow happens:
+The deletion feature is provided natively via the UI; when an entity is deleted, the following workflow happens:

- The UI retrieves the key id of the record in the KVstore and performs a DELETE rest call over the KVstore endpoint
- In addition, the full entity record is logged to the audit KVstore, and exposed via the UI within the audit changes tab
- When the user deletes an entity, it can be deleted temporarily or permanently
- If the deletion is temporary, the entity will be recreated automatically if it is still actively sending data to Splunk, and the conditions (allow lists, block lists...) permit it
-- If the deletion is permanent, an additional flag is added to the record in the audit, this flag allow the trackers to exclude creating an entitiy that was permanently deleted
+- If the deletion is permanent, an additional flag is added to the record in the audit; this flag allows the trackers to avoid recreating an entity that was permanently deleted

-While it is not supported at the moment to undo the deletion, the audit record contains all the information related to the entitiy previously deleted.
+While undoing a deletion is not supported at the moment, the audit record contains all the information related to the entity previously deleted.

Finally, the audit changes tab provides the relevant filters to allow access to all deletion events, including answers to when / who / how, and why if an update note was filled in during the operation.
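
For illustration, the DELETE call mentioned in the workflow above follows the standard Splunk KVstore REST pattern; the collection name and record key below are hypothetical placeholders::

    curl -k -u admin:changeme -X DELETE \
        "https://localhost:8089/servicesNS/nobody/trackme/storage/collections/data/<collection_name>/<key_id>"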

@@ -144,7 +144,7 @@ How to deal with sourcetypes that are emitting data occasionally or sporadically
There are no easy answers to this question, however:

- From a data source perspective, what matters is monitoring the data from a pipeline point of view, which translated in TrackMe means making sure you have a data source that corresponds to this unique data flow
-- From a data host perspective, there wouldn't be the value one could expcet in having a strict monitoring of every single sourcetype linked to a given host, specially because many of them can be generating data in a sporadic fashion depending on the circumstances
+- From a data host perspective, there would be little value in strictly monitoring every single sourcetype linked to a given host, especially because many of them can generate data in a sporadic fashion depending on the circumstances
- On the contrary, what matters and provides value is being able to detect global failures of hosts (endpoints, whatever you call these) in a way that does not generate noise and alert fatigue
- This is why the data host design takes into consideration the data globally sent on a per-host basis; TrackMe provides many different features (allowlist / blocklist, etc.) to manage use cases with the level of granularity required
-- Finally, from the data host perspective, the outliers detection is a powerful feature that would provide the capability of detecting a siginificant change in the event distrubution, for example when a major sourcetype has stopped to be emitted
+- Finally, from the data host perspective, outliers detection is a powerful feature that provides the capability to detect a significant change in the data volume, for example when a major sourcetype has stopped being emitted
4 changes: 2 additions & 2 deletions docs/compatibility.rst
@@ -4,9 +4,9 @@ Compatibility
Splunk compatibility
####################

-This is application is compatible with Splunk 7.2.x and later.
+This application is compatible with Splunk 7.2.x and later.

-Previous main branch of TrackMe (V1.1.x) was compatible starting from 7.0.x, which changed from 7.2.x due to usage of the mcollect command.
+The previous main branch of TrackMe (V1.1.x) was compatible with Splunk versions starting from Splunk 7.0.x; the requirement moved to 7.2.x due to the usage of the mcollect command.

Web Browser compatibility
#########################
10 changes: 5 additions & 5 deletions docs/configuration.rst
@@ -36,7 +36,7 @@ If Enterprise Security is running on a different search head, one option is to d

**Any kind of CMDB data available in Splunk:**

-Similarly you can use any lookup available in the Splunk instance which provides Assets context looking up a key which in most cases would be host name, dns name or IP address.
+Similarly, you can use any lookup available in the Splunk instance which provides asset context, looking up a key which in most cases would be the host name, DNS name or IP address.

Make sure your asset lookup definition is exported to the system, is case insensitive and contains the relevant information, then customize the macros depending on your configuration, example: ``lookup name_of_lookup key as data_hosts`` for data hosts, ``lookup name_of_lookup key as metric_hosts`` for metric hosts.
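
For instance, a hedged sketch of such a macro definition, assuming a lookup named my_cmdb_assets and output fields owner and environment (all hypothetical)::

    definition = lookup my_cmdb_assets key as data_hosts OUTPUT owner, environment
    iseval = 0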

@@ -76,7 +76,7 @@ By default, summary events are indexed in index=summary, customize this macro if
Indexers macro definition
=========================

-The builtin views "Ops: Indexes queues" and "Ops: Parsing issues" rely on the usage of the following macro:
+The built-in views "Ops: Indexes queues" and "Ops: Parsing issues" rely on the usage of the following macro:

::

@@ -90,7 +90,7 @@ Customise the macro definition to match your indexers host naming convention.
Allowlisting and blocklisting
=============================

-TrackMe version 1.0.22 introduced builtin support for both allowlisting of indexes and blocklisting of indexes, sourcetypes and hosts.
+TrackMe version 1.0.22 introduced built-in support for both allowlisting of indexes and blocklisting of indexes, sourcetypes and hosts.

.. image:: img/allowlist_and_blocklist.png
:alt: allowlist_and_blocklist.png
@@ -113,7 +113,7 @@ Finally, in addition the following macro is used within the searches, and can be
definition = sourcetype!="stash" sourcetype!="*too_small"
iseval = 0
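
Based on the definition above, extending the exclusions is a matter of appending conditions, for example (the extra sourcetype is hypothetical)::

    definition = sourcetype!="stash" sourcetype!="*too_small" sourcetype!="my:custom:sourcetype"
    iseval = 0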

-Activation of builtin alerts
+Activation of built-in alerts
=============================

**TrackMe provides out of the box alerts that can be used to deliver alerting when a monitored component reaches a red state:**
@@ -129,7 +129,7 @@
trackme_admin role for granular access
======================================

-**The application contains a builtin role that can be used for granular permissions:**
+**The application contains a built-in role that can be used for granular permissions:**

- trackme_admin

2 changes: 1 addition & 1 deletion docs/deployment.rst
@@ -25,7 +25,7 @@ Dependencies
Indexes
=======

-**Since the version 1.2.6 TracKme requires the creation of an event index and a metric index:**
+**Since version 1.2.6, TrackMe requires the creation of an event index and a metric index:**

- summary event index defaults to ``trackme_summary``, handled by the macro ``trackme_idx``
- metric index defaults to ``trackme_metrics``, handled by the macro ``trackme_metrics_idx``
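
A minimal indexes.conf sketch for the two indexes above (paths and retention are examples to adapt locally; note that the metric index requires datatype = metric)::

    [trackme_summary]
    homePath   = $SPLUNK_DB/trackme_summary/db
    coldPath   = $SPLUNK_DB/trackme_summary/colddb
    thawedPath = $SPLUNK_DB/trackme_summary/thaweddb

    [trackme_metrics]
    homePath   = $SPLUNK_DB/trackme_metrics/db
    coldPath   = $SPLUNK_DB/trackme_metrics/colddb
    thawedPath = $SPLUNK_DB/trackme_metrics/thaweddb
    datatype   = metric
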
2 changes: 1 addition & 1 deletion docs/download.rst
@@ -1,7 +1,7 @@
Download
========

-**The Splunk application can be downloaded from:**
+**The application can be downloaded from:**

Splunk base
###########
4 changes: 2 additions & 2 deletions docs/itsi_integration.rst
@@ -51,7 +51,7 @@ Note: make sure to edit the permission of the report and shared at the global le

| savedsearch "TrackMe - Entities gen report"

-*Make sure you select a time range that makes sense, TrackMe generate metrics potentially evey 5 minutes but less frequent data sources might not appear if the time range is too short.*
+*Make sure you select a time range that makes sense; TrackMe generates metrics potentially every 5 minutes, but less frequent data sources might not appear if the time range is too short.*

.. image:: img/itsi_entities.png
:alt: itsi_entities.png
@@ -186,7 +186,7 @@ Step 4: create a service that will be used for the service template definition
:alt: itsi_service5.png
:align: center

-**Finally, save but DO NOT activate the pseudo service, this service was required temporarily for the purposes of the service templace creation in the next step:**
+**Finally, save but DO NOT activate the pseudo service; this service was required temporarily for the purposes of the service template creation in the next step:**

.. image:: img/itsi_service6.png
:alt: itsi_service6.png
