Telemetry Data Mapper (TDM)

Telemetry Data Mapper to map data identifiers from SNMP, gRPC, NETCONF, CLI, etc. to each other.

TDM provides an offline, immutable view of the data availability advertised by data models, with search to quickly identify data of interest and the ability to map datapaths to one another, making it easier to keep track of which identifiers are roughly equivalent.

[Index screenshot]

For example, discovering and mapping...

SNMP OID            YANG XPath
bgpLocalAS          Cisco-IOS-XR-ipv4-bgp-oper:bgp/instances/instance/instance-active/default-vrf/global-process-info/global/local-as
ifMTU               Cisco-IOS-XR-pfi-im-cmd-oper:interfaces/interface-xr/interface/mtu
cdpCacheDeviceId    Cisco-IOS-XR-cdp-oper:cdp/nodes/node/neighbors/details/detail/device-id
...                 ...

Problem Statement

In its current state, network telemetry can be accessed in many different ways that are not easily reconciled; for instance, finding the same information in SNMP MIBs and in NETCONF/YANG modules. Discovering the datapaths is often tedious and somewhat arcane, and there is no way of determining whether the information gathered will have the same values, or which source is more accurate than another. Further, the operational methods of deploying this monitoring vary across platforms and implementations. This makes network monitoring a fragmented ecosystem of inconsistent and unverified data. There needs to be manageability and cross-domain insight into data availability.

Solution

TDM seeks to solve this problem by providing an overlay platform for generic access to the network telemetry and capabilities purported to be supported by an OS/release or platform, and for creating relationships between individual datapaths that demonstrate qualities of consistency, validity, and interoperability. This will be exposed both through a UI for human usage and an API for automated usage. TDM does not seek to provide domain-specific manageability, but to serve as an overlay insight tool. There is significantly more documentation in doc/.
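
For a rough sense of the automated path, the sketch below hits the ArangoDB HTTP API that the stack exposes on port 8529 (see Access below). The collection name DataPath and the absence of authentication are assumptions made for illustration; adjust both to the actual TDM schema and deployment.

# Hypothetical example: fetch a few documents from an assumed "DataPath" collection
# via ArangoDB's cursor API. Host, credentials, and collection name are deployment-specific.
curl -s -X POST http://localhost:8529/_api/cursor \
  -H 'Content-Type: application/json' \
  -d '{"query": "FOR d IN DataPath LIMIT 5 RETURN d"}'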

Architecture

[Architecture diagram]

Schema

[TDM schema diagram]

Usage

System Requirements

This has only been lightly evaluated, but we recommend:

  • 4 cores
  • 16 GB RAM (ETL process is wildly inefficient at the moment)
  • 20 GB SSD/HDD

These specifications also depend somewhat on the expected load.

Prerequisites

  • Docker CE (don't you love it when everything is simplified).
    • There are DockerHub/external dependencies, so proxy settings must be configured in the Docker daemon and, if necessary, in some Dockerfiles (one common approach is sketched after this list). This is not handled automatically; open an issue for assistance if required.
  • docker-compose for deployment. Docker Swarm support has been deprecated in favor of Elasticsearch provisioning, since Docker Swarm does not support the required ulimit settings.
  • A Unix-like environment with bash et al. support.
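
One common approach to the proxy requirement (an assumption for a systemd-based Linux host, not something the stack configures for you) is a proxy drop-in for the Docker daemon; build-time proxies for individual Dockerfiles can be passed as build arguments as needed.

# Sketch only: point the Docker daemon at a proxy. proxy.example.com:8080 is a
# placeholder; replace it with your proxy host and port.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker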

Commands

  • setup.sh to install docker-compose for your user.
  • start.sh [http|https] to start the Docker stack.
  • stop.sh [http|https] to stop the Docker stack.
  • reset.sh [http|https] to stop the Docker stack and delete the persisted storage volumes.
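
A typical lifecycle using these scripts, assuming the plain-HTTP variant:

./start.sh http   # build and start the Docker stack
./stop.sh http    # stop the stack, keeping the persisted volumes
./reset.sh http   # stop the stack and delete the persisted volumes
                  # (presumably a subsequent start re-runs the ETL from scratch)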

Installation

Ensure there are no port conflicts, or change the forwarded ports in docker-compose*.yml.
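
One quick way to check for conflicts on the default ports (a sketch for Linux hosts with ss available; the ports are listed under Access below):

# Any output here means something is already listening on a port TDM forwards by default.
sudo ss -tlnp | grep -E ':(80|443|5601|8529)\b'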

git clone https://www.github.com/cisco-ie/tdm.git
# If docker-compose is not installed...
./setup.sh
# Start the stack!
./start.sh [http|https]
# You're good to go :)
# To monitor ETL process...
docker logs -f tdm_etl_1

Once the containers are built and running, it currently takes ~8 hours for all of the data to be parsed and TDM to be fully available. Progress is visible through the etl Docker container logs, usually accessible via docker logs -f tdm_etl_1. Once the etl container has exited, TDM is loaded with the current snapshot of available data. Unfortunately, there is no recurring ETL process at this time.
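
If you want to block until the ETL pass finishes (for example from a provisioning script), docker wait can be used. The container name tdm_etl_1 matches the default docker-compose naming shown above, but may differ with newer Compose releases.

# Block until the ETL container exits, then print its exit code.
docker wait tdm_etl_1
# Inspect the final log lines once it has exited.
docker logs --tail 50 tdm_etl_1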

If deploying with HTTPS, an SSL .crt and .key must be placed under nginx/ as tdm.cisco.com.crt and tdm.cisco.com.key respectively. Using different filenames requires changing them in docker-compose.https.yml and nginx/nginx.https.conf. This allows the components to run over plain HTTP while NGINX proxies and encrypts public web traffic, which is convenient for now, albeit not as inherently secure.
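
For a quick test deployment without a CA-issued certificate, a self-signed pair can be generated into the expected filenames; this is a sketch only, tdm.example.com is a placeholder CN, and a real certificate should be used for anything public-facing.

# Generate a throwaway self-signed certificate/key pair under nginx/ using the
# filenames docker-compose.https.yml expects. Adjust the CN to your hostname.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout nginx/tdm.cisco.com.key \
  -out nginx/tdm.cisco.com.crt \
  -subj "/CN=tdm.example.com"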

Access

All ports are exposed on your Docker host interface. Typically you don't need to worry about what this is; assume everything is available at 127.0.0.1/localhost. A few quick reachability checks are sketched after the list below.

  • Port 80 (HTTP) or 443 (HTTPS) exposes the TDM Web UI.
    • /goaccess_web.html exposes website access statistics.
    • /goaccess_db.html exposes ArangoDB access statistics.
    • /goaccess_kibana.html exposes Kibana access statistics.
  • Port 8529 exposes the ArangoDB Web UI and API.
  • Port 5601 exposes Kibana for exploring the TDM Search cache (Elasticsearch).
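
A few quick reachability checks for the endpoints above (a sketch; /_api/version and /api/status are standard ArangoDB and Kibana status endpoints rather than anything TDM-specific, and may require credentials depending on deployment):

curl -sI http://localhost/ | head -n 1                   # TDM Web UI (use https:// for the HTTPS variant)
curl -s http://localhost:8529/_api/version               # ArangoDB HTTP API
curl -sI http://localhost:5601/api/status | head -n 1    # Kibana status endpoint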

Licensing

TDM is dual-licensed: the Apache License, Version 2.0 applies to its software, and the Community Data License Agreement – Sharing – Version 1.0 applies to data, such as the mappings.

Related Projects

Special Thanks

Drew Pletcher, Einar Nilsen-Nygaard, Joe Clarke, Benoit Claise, Charlie Sestito, Glenn Matthews, Michael Ott

UTDM is dead, long live UTDM.
