
Panic in mt-whisper-importer-reader #1987

Closed
GuillaumeConnan opened this issue Jun 14, 2021 · 1 comment

@GuillaumeConnan (Contributor)

Describe the bug
Hi, we are experiencing panics in the mt-whisper-importer-reader tool on some whisper files when importing them into Metrictank.

Additional context
The vast majority of our whisper files are correctly read and imported, but some make the application crash with the following error:

2021-06-10 14:45:04.057 [DEBUG] Processing file /opt/whisper/test/count.wsp (test.count)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7c8a9a]

goroutine 7 [running]:
github.com/grafana/metrictank/mdata/chunk.(*Chunk).Finish(...)
        /go/src/github.com/grafana/metrictank/mdata/chunk/chunk.go:54
github.com/grafana/metrictank/mdata/importer.encodeChunksFromPoints(0xc000026280, 0x1, 0x1, 0x1c200000003c, 0x1, 0x4, 0x891bd2, 0x7)
        /go/src/github.com/grafana/metrictank/mdata/importer/chunk_encoder.go:44 +0xba
github.com/grafana/metrictank/mdata/importer.NewArchiveRequest(0xc000020700, 0xc0000764b0, 0x1, 0x1, 0xc000192000, 0x4, 0x4, 0x891bd2, 0x7, 0xc00018c3c0, ...)
        /go/src/github.com/grafana/metrictank/mdata/importer/archive_request.go:72 +0x549
main.processFromChan(0xc0000583c0, 0xc000196000, 0xc000026130)
        /go/src/github.com/grafana/metrictank/cmd/mt-whisper-importer-reader/main.go:183 +0x56d
created by main.main
        /go/src/github.com/grafana/metrictank/cmd/mt-whisper-importer-reader/main.go:152 +0x250
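
Our reading of the stack trace is that Chunk.Finish is being reached with a chunk that was never allocated. As a purely illustrative, standalone Go sketch (simplified stand-in types, not the actual metrictank code), this is the pattern that produces exactly this kind of panic:

package main

// chunk is a simplified stand-in for metrictank's chunk type; this file only
// illustrates the nil-receiver pattern the stack trace above points at.
type chunk struct {
	points int
}

// Finish dereferences the receiver, so calling it on a nil *chunk panics,
// much like the frame at mdata/chunk/chunk.go:54 in the trace.
func (c *chunk) Finish() {
	c.points++
}

func encodeChunks(timestamps []uint32) {
	var current *chunk
	for _, ts := range timestamps {
		if ts == 0 {
			// e.g. a point that is filtered out and never starts a chunk
			continue
		}
		current = &chunk{}
	}
	// If no point ever started a chunk, current is still nil here.
	current.Finish() // panic: invalid memory address or nil pointer dereference
}

func main() {
	// A "file" whose only point never makes it into a chunk.
	encodeChunks([]uint32{0})
}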

We were able to reproduce the issue using the official Docker image with the following configuration:

docker-compose.yml
version: "2"

services:
  writer:
    hostname: writer
    image: grafana/metrictank:v1.1
    volumes:
      - "./metrictank.ini:/etc/metrictank/metrictank.ini:ro"
      - "./storage-aggregation.conf:/etc/metrictank/storage-aggregation.conf:ro"
      - "./storage-schemas.conf:/etc/metrictank/storage-schemas.conf:ro"
    command: "/usr/bin/mt-whisper-importer-writer"
    environment:
      WAIT_HOSTS: metrictank:2003
      WAIT_TIMEOUT: 60
    links:
     - cassandra
     - metrictank

  metrictank:
    hostname: metrictank
    image: grafana/metrictank:v1.1
    volumes:
      - "./metrictank.ini:/etc/metrictank/metrictank.ini:ro"
      - "./storage-aggregation.conf:/etc/metrictank/storage-aggregation.conf:ro"
      - "./storage-schemas.conf:/etc/metrictank/storage-schemas.conf:ro"
    environment:
      WAIT_HOSTS: cassandra:9042
      WAIT_TIMEOUT: 60
      MT_HTTP_MULTI_TENANT: "false"
    links:
     - cassandra

  cassandra:
    hostname: cassandra
    image: cassandra:3.11.10
    environment:
      MAX_HEAP_SIZE: 1G
      HEAP_NEWSIZE: 256M
metrictank.ini
## misc ##

# instance identifier. must be unique. used in clustering messages, for naming queue consumers and emitted metrics.
instance = default

## data ##

# see https://github.com/grafana/metrictank/blob/master/docs/memory-server.md for more details

# forego persisting of first received (and typically incomplete) chunk
drop-first-chunk = false
# only ingest data for chunks that have a t0 equal to or higher than the given timestamp. Specified per org. syntax: orgID:timestamp[,...]
ingest-from =
# max age for a chunk before it is considered stale and persisted to Cassandra
chunk-max-stale = 1h
# max age for a metric before it is considered stale and purged from the in-memory ring buffer.
metric-max-stale = 3h
# Interval to run garbage collection job
gc-interval = 1h

# duration until when secondary nodes are considered to have enough data to be ready and serve requests.
# To prevent gaps in charts when running a cluster of nodes you need to either
# 1) have the nodes backfill data from Kafka (set via "offset" in the "kafka-mdm-in" config) or
# 2) set the warm-up-period to a value long enough to ensure all data received before the node started has been persisted in the store.
# See https://github.com/grafana/metrictank/blob/master/docs/clustering.md#priority-and-ready-state
warm-up-period = 1h

# org Id for publicly (any org) accessible data
# leave at 0 to disable.
public-org = 0

## Profiling and logging ##

# see https://golang.org/pkg/runtime/#SetBlockProfileRate
block-profile-rate = 0
# 0 to disable. 1 for max precision (expensive!) see https://golang.org/pkg/runtime/#pkg-variables
mem-profile-rate = 524288 # 512*1024

# heap profiletrigger: triggers a heap (memory) profile for diagnosis when usage threshold is breached
# recommended usage: set proftrigger-heap-thresh-rss such that it is much larger than "normal" usage, but lower
# than how much RAM capacity you have, so that a profile can be captured before the process gets killed by the OOM-killer
# inspect status frequency. set to 0 to disable
proftrigger-freq = 10s
# path to store triggered profiles
proftrigger-path = /tmp
# minimum time between triggered profiles
proftrigger-min-diff = 1h
# threshold for process RSS, the amount of RAM memory used. (0 to disable) (see "rss" on dashboard)
proftrigger-heap-thresh = 25000000000
# threshold for bytes allocated on heap (0 to disable) (see "allocated in heap" on dashboard)
# typically, this is not all that useful, "rss" above is what most people care about (and the heap uses less than rss),
# but this setting can help detect a large heap even if some of the memory is swapped out (and thus not accounted for in rss)
proftrigger-heap-thresh-heap = 0

# only log log-level and lower (read right to left: to the left is lower). panic|fatal|error|warning|info|debug
log-level = info

[jaeger]
enabled = false

## metric data storage in cassandra ##
[cassandra]
# see https://github.com/grafana/metrictank/blob/master/docs/cassandra.md for more details

# enable the cassandra backend store plugin -- This setting is ignored and overridden (set to false) in query mode
enabled = true
# comma-separated list of hostnames to connect to
addrs = cassandra:9042
# keyspace to use for storing the metric data table
keyspace = metrictank
# desired write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one)
consistency = one
# how to select which hosts to query
# roundrobin                : iterate all hosts, spreading queries evenly.
# hostpool-simple           : basic pool that tracks which hosts are up and which are not.
# hostpool-epsilon-greedy   : prefer best hosts, but regularly try other hosts to stay on top of all hosts.
# tokenaware,roundrobin              : prefer host that has the needed data, fallback to roundrobin.
# tokenaware,hostpool-simple         : prefer host that has the needed data, fallback to hostpool-simple.
# tokenaware,hostpool-epsilon-greedy : prefer host that has the needed data, fallback to hostpool-epsilon-greedy.
host-selection-policy = tokenaware,hostpool-epsilon-greedy
# cassandra timeout
timeout = 10s
# max number of concurrent reads to cassandra
read-concurrency = 20
# max number of concurrent writes to cassandra
write-concurrency = 10
# max number of outstanding reads before reads will be dropped. This is important if you run queries that result in many reads in parallel
read-queue-size = 200000
# write queue size per cassandra worker. should be large enough to hold at least the total number of series expected, divided by how many workers you have
write-queue-size = 100000
# how many times to retry a query before failing it
retries = 0
# size of compaction window relative to TTL
window-factor = 20
# if a read is older than this, it will be omitted, not executed
omit-read-timeout = 60s
# CQL protocol version. cassandra 3.x needs v3 or 4.
cql-protocol-version = 4
# enable the creation of the mdata keyspace and tables, only one node needs this
create-keyspace = true
# File containing the needed schemas in case database needs initializing
schema-file = /etc/metrictank/schema-store-cassandra.toml
# enable SSL connection to cassandra
ssl = false
# cassandra CA certficate path when using SSL
ca-path = /etc/metrictank/ca.pem
# host (hostname and server cert) verification when using SSL
host-verification = true
# enable cassandra user authentication
auth = false
# username for authentication
username = cassandra
# password for authentication
password = cassandra
# instruct the driver to not attempt to get host info from the system.peers table
disable-initial-host-lookup = false
# interval at which to perform a connection check to cassandra, set to 0 to disable.
connection-check-interval = 5s
# maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval.
connection-check-timeout = 30s
# Maximum chunkspan size used.
max-chunkspan = 24h

[bigtable-store]
enabled = false

## Retention settings ##
[retention]
# path to storage-schemas.conf file
schemas-file = /etc/metrictank/storage-schemas.conf
# path to storage-aggregation.conf file
aggregations-file = /etc/metrictank/storage-aggregation.conf
# enables/disables the enforcement of the future tolerance limitation
enforce-future-tolerance = true
# defines until how far in the future we accept datapoints. defined as a percentage fraction of the raw ttl of the matching retention storage schema
future-tolerance-ratio = 10

## instrumentation stats ##
[stats]
# enable sending graphite messages for instrumentation
enabled = true
# stats prefix (will add trailing dot automatically if needed)
# The default matches what the Grafana dashboard expects
# $instance will be replaced with the `instance` setting.
# note, the 3rd word describes the environment you deployed in.
prefix = metrictank.stats.docker-env.$instance
# graphite address
addr = localhost:2003
# interval at which to send statistics
interval = 1
# timeout after which a write is considered not successful
timeout = 10s
# how many messages (holding all measurements from one interval. rule of thumb: a message is ~25kB) to buffer up in case graphite endpoint is unavailable.
# With the default of 20k you will use max about 500MB and bridge 5 hours of downtime when needed
buffer-size = 20000

## chunk cache ##
[chunk-cache]
# maximum size of chunk cache in bytes. 512 MB = (1024 ^ 2) * 512 = 536870912
# 0 disables cache
max-size = 536870912

## http api ##
[http]
# tcp address for metrictank to bind to for its HTTP interface
listen = :6060
# use gzip compression
gzip = true
# use HTTPS
ssl = false
# SSL certificate file
cert-file = /etc/ssl/certs/ssl-cert-snakeoil.pem
# SSL key file
key-file = /etc/ssl/private/ssl-cert-snakeoil.key
# lower resolution rollups will be used to try and keep requests below this number of datapoints. (0 disables limit)
max-points-per-req-soft = 1000000
# limit of number of datapoints a request can return. Requests that exceed this limit will be rejected. (0 disables limit)
max-points-per-req-hard = 20000000
# limit of number of series a request can operate on. Requests that exceed this limit will be rejected. (0 disables limit)
# note here we look at all lowlevel series (even if they will be merged or are equivalent), and can't accurately account for
# requests with duplicate or overlapping targets. See PR #1926 and #1929 for details
max-series-per-req = 250000
# require x-org-id authentication to auth as a specific org. otherwise orgId 1 is assumed
multi-tenant = true
# in case our /render endpoint does not support the requested processing, proxy the request to this graphite
fallback-graphite-addr = http://localhost:8080
# proxy to graphite when metrictank considers the request bad
proxy-bad-requests = true
# timezone for interpreting from/until values when needed, specified using [zoneinfo name](https://en.wikipedia.org/wiki/Tz_database#Names_of_time_zones) e.g. 'America/New_York', 'UTC' or 'local' to use local server timezone.
time-zone = local
# maximum number of concurrent threads for fetching data on the local node. Each thread handles a single series.
get-targets-concurrency = 20
# default limit for tagdb query results, can be overridden with query parameter "limit"
tagdb-default-limit = 100
# ratio of peer responses after which speculative querying (aka spec-exec) is used. Set to 1 to disable.
speculation-threshold = 1
# enable pre-normalization optimization
pre-normalization = true
# enable MaxDataPoints optimization (experimental)
mdp-optimization = false
# output query headers in logs
log-headers = true

## metric data inputs ##

[input]
# reject received metrics that have invalid input data (invalid utf8 or invalid tags)
reject-invalid-input = true

### carbon input (optional)
[carbon-in]
# This setting is ignored and overridden (set to false) in query mode
enabled = true
# tcp address
addr = :2003
# represents the "partition" of your data if you decide to partition your data.
partition = 0

[kafka-mdm-in]
enabled = false

## basic clustering settings ##
[cluster]
# Unique name of the cluster.  This node will only be able to join clusters with the same name.
name = metrictank
# The primary node writes data to cassandra. There should only be 1 primary node per shardGroup.
primary-node = true
# maximum priority before a node should be considered not-ready.
max-priority = 10
# TCP addresses of other nodes, comma separated. use this if you shard your data and want to query other nodes.
# If no port is specified, it is assumed the other nodes are using the same port this node is listening on.
peers =
# Operating mode of this node within the cluster. (dev|shard|query)
# * dev: gossip disabled. node is not aware of other nodes but can serve up all data it is aware of (from memory or from the store)
# * shard: gossip enabled. node receives data and participates in fan-in/fan-out if it receives queries but owns only a part of the data set and spec-exec if enabled.
# * query: gossip enabled. node receives no data and fans out queries to shard nodes (e.g. if you would rather not query shard nodes directly)
mode = dev
# minimum number of shards that must be available for a query to be handled.
min-available-shards = 0
# How long to wait before aborting http requests to cluster peers and returning a http 503 service unavailable
http-timeout = 60s
# GOGC value to use when node is not ready.  Defaults to GOGC
# you can use this to set a more aggressive, latency-inducing GC behavior when the node is initializing and hungry for extra memory
# gc-percent-not-ready = 100
# duration until when the cluster topology can be considered up-to-date and this node to be ready to serve requests (when gossip enabled)
gossip-settle-period = 10s

## SWIM/gossip clustering settings ##
# for more details, see https://godoc.org/github.com/hashicorp/memberlist#Config
# all values correspond literally to the memberlist.Config options except where noted
[swim]
# config setting to use. If set to anything but manual, will override all other swim settings.
# Use manual|default-lan|default-local|default-wan. Note all our swim settings correspond to default-lan
# see:
# * https://godoc.org/github.com/hashicorp/memberlist#DefaultLANConfig
# * https://godoc.org/github.com/hashicorp/memberlist#DefaultLocalConfig
# * https://godoc.org/github.com/hashicorp/memberlist#DefaultWANConfig
use-config = manual
# binding TCP Address for UDP and TCP gossip (full ip/dns:port combo unlike memberlist.Config)
bind-addr = 0.0.0.0:7946
# advertised TCP address for UDP and TCP gossip (full ip/dns:port combo, or empty to use bind-addr)
# Useful for traversing NAT such as from inside docker
advertise-addr =
# timeout for establishing a stream connection with peers for a full state sync, and for stream reads and writes
tcp-timeout = 10s
# number of nodes that will be asked to perform an indirect probe of a node in the case a direct probe fails
indirect-checks = 3
# multiplier for number of retransmissions for gossip messages. Retransmits = RetransmitMult * log(N+1)
retransmit-mult = 4
# multiplier for determining when an inaccessible/suspect node is declared dead. SuspicionTimeout = SuspicionMult * log(N+1) * ProbeInterval
suspicion-multi = 4
# multiplier for upper bound on detection time.  SuspicionMaxTimeout = SuspicionMaxTimeoutMult * SuspicionTimeout
suspicion-max-timeout-mult = 6
# interval between complete state syncs. 0 will disable state push/pull syncs
push-pull-interval = 30s
# interval between random node probes
probe-interval = 1s
# timeout to wait for an ack from a probed node before assuming it is unhealthy. This should be set to 99-percentile of network RTT
probe-timeout = 500ms
# turn off the fallback TCP pings that are attempted if the direct UDP ping fails
disable-tcp-pings = false
# will increase the probe interval if the node becomes aware that it might be degraded and not meeting the soft real time requirements to reliably probe other nodes.
awareness-max-multiplier = 8
# number of random nodes to send gossip messages to per GossipInterval
gossip-nodes = 3
# interval between sending messages that need to be gossiped that haven't been able to piggyback on probing messages. 0 disables non-piggyback gossip
gossip-interval = 200ms
# interval after which a node has died that we will still try to gossip to it. This gives it a chance to refute
gossip-to-the-dead-time = 30s
# message compression
enable-compression = true
# system's DNS config file. Override allows for easier testing
dns-config-path = /etc/resolv.conf

[kafka-cluster]
enabled = false

## metric metadata index ##

### in memory, cassandra-backed
[cassandra-idx]
# This setting is ignored and overridden (set to false) in query mode
enabled = true
# Cassandra keyspace to store metricDefinitions in.
keyspace = metrictank
# Cassandra table to store metricDefinitions in.
table = metric_idx
# Cassandra table to archive metricDefinitions in.
archive-table = metric_idx_archive
# comma separated list of cassandra addresses in host:port form
hosts = cassandra:9042
#cql protocol version to use
protocol-version = 4
# write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one)
consistency = one
# cassandra request timeout. valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'
timeout = 10s
# number of concurrent connections to cassandra
num-conns = 10
# Max number of metricDefs allowed to be unwritten to cassandra
write-queue-size = 100000
#Interval at which the index should be checked for stale series. valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'
prune-interval = 3h
# Number of partitions to load concurrently on startup.
init-load-concurrency = 1
# synchronize index changes to cassandra. not all your nodes need to do this.
update-cassandra-index = true
#frequency at which we should update flush changes to cassandra. only relevant if update-cassandra-index is true. valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'. Setting to '0s' will cause instant updates.
update-interval = 4h
# enable SSL connection to cassandra
ssl = false
# cassandra CA certficate path when using SSL
ca-path = /etc/metrictank/ca.pem
# host (hostname and server cert) verification when using SSL
host-verification = true
# enable cassandra user authentication
auth = false
# username for authentication
username = cassandra
# password for authentication
password = cassandra
# enable the creation of the index keyspace and tables, only one node needs this
create-keyspace = true
# File containing the needed schemas in case database needs initializing
schema-file = /etc/metrictank/schema-idx-cassandra.toml
# instruct the driver to not attempt to get host info from the system.peers table
disable-initial-host-lookup = false
# interval at which to perform a connection check to cassandra, set to 0 to disable.
connection-check-interval = 5s
# maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval.
connection-check-timeout = 30s

[memory-idx]
enabled = false

[bigtable-idx]
enabled = false

### in memory, cassandra-backed
[cassandra-meta-record-idx]
enabled = true
# Cassandra keyspace to store metricDefinitions in.
keyspace = metrictank
# Cassandra table to store meta records.
meta-record-table = meta_records
# Cassandra table to store meta data of meta record batches.
meta-record-batch-table = meta_record_batches
# Interval at which to poll store for meta record updates.
meta-record-poll-interval = 10s
# Interval at which meta records of old batches get pruned.
meta-record-prune-interval = 24h
# The minimum age a batch of meta records must have to be pruned.
meta-record-prune-age = 72h
# comma separated list of cassandra addresses in host:port form
hosts = localhost:9042
#cql protocol version to use
protocol-version = 4
# write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one)
consistency = one
# cassandra request timeout. valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'
timeout = 1s
# number of concurrent connections to cassandra
num-conns = 10
# synchronize index changes to cassandra. not all your nodes need to do this.
update-cassandra-index = true
# enable SSL connection to cassandra
ssl = false
# cassandra CA certificate path when using SSL
ca-path = /etc/metrictank/ca.pem
# host (hostname and server cert) verification when using SSL
host-verification = true
# enable cassandra user authentication
auth = false
# username for authentication
username = cassandra
# password for authentication
password = cassandra
# enable the creation of the index keyspace and tables, only one node needs this
create-keyspace = true
# File containing the needed schemas in case database needs initializing
schema-file = /etc/metrictank/schema-idx-cassandra.toml
# instruct the driver to not attempt to get host info from the system.peers table
disable-initial-host-lookup = false
# interval at which to perform a connection check to cassandra, set to 0 to disable.
connection-check-interval = 5s
# maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval.
connection-check-timeout = 30s

[bigtable-meta-record-idx]
enabled = false
storage-aggregation.conf
[count]
pattern = \.count$
xFilesFactor = 0.0
aggregationMethod = max,sum
[max]
pattern = \.max$
xFilesFactor = 0.0
aggregationMethod = max
[min]
pattern = \.min$
xFilesFactor = 0.0
aggregationMethod = min
[sum]
pattern = \.sum$
xFilesFactor = 0.0
aggregationMethod = avg,max,min,sum
[default]
pattern = .*
xFilesFactor = 0.0
aggregationMethod = avg,max,min
storage-schemas.conf
[default]
pattern = .*
retentions = 1m:30d,1h:180d,1d:3y
Docker run command for mt-whisper-importer-reader
docker run -ti --rm \
-v /opt/metrictank/docker/docker-test/whisper:/opt/whisper \
-v /opt/metrictank/docker/docker-test/metrictank-config/storage-schemas.conf:/opt/storage-schemas.conf:ro \
-v /opt/metrictank/docker/docker-test/mt-whisper-importer-reader.position:/opt/mt-whisper-importer-reader.position \
--network="docker-test_default" --entrypoint="" grafana/metrictank:v1.1 \
mt-whisper-importer-reader -http-endpoint "http://writer:8080/metrics/import" -custom-headers "X-Org-Id:1" -whisper-directory /opt/whisper -dst-schemas /opt/storage-schemas.conf -position-file /opt/mt-whisper-importer-reader.position -threads 1 -write-unfinished-chunks -verbose

One of the problematic whisper files has the following attributes:

maxRetention: 15552000
xFilesFactor: 0.0
aggregationMethod: sum
fileSize: 570280

Archive 0
retention: 2592000
secondsPerPoint: 60
points: 43200
size: 518400
offset: 40

Archive 1
retention: 15552000
secondsPerPoint: 3600
points: 4320
size: 51840
offset: 518440

Its only datapoint is the following (which is too old for this archive, by the way):

[...]
Archive 1 data:
0: 1605704400,          4
[...]
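
As a rough sanity check on that (assuming the log timestamp above is UTC): Archive 1 retains 15552000 s (180 days) of data, and the import ran around 2021-06-10 14:45 (~1623336300), so the oldest timestamp still inside the retention window is roughly 1623336300 - 15552000 ≈ 1607784300 (mid-December 2020). The single datapoint at 1605704400 (2020-11-18) is therefore about three weeks older than the window.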

Do you have any idea what could be causing this error?

If you need more details or traces, could you help us provide them?

In the worst case, how could we ignore those errors/files and prevent mt-whisper-importer-reader from crashing?
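
(We could presumably work around this locally with a recover guard around the per-file processing, so that one bad file is logged and skipped instead of crashing the whole importer. A generic Go sketch; the function and variable names here are illustrative and do not correspond to mt-whisper-importer-reader's actual code:)

package main

import (
	"fmt"
	"log"
)

// processFileSafe converts a panic during the processing of one whisper file
// into an error, so the caller can log it and move on to the next file.
// The function and parameter names are hypothetical.
func processFileSafe(path string, process func(string) error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("skipping %s after panic: %v", path, r)
		}
	}()
	return process(path)
}

func main() {
	err := processFileSafe("/opt/whisper/test/count.wsp", func(p string) error {
		panic("simulated importer panic") // stand-in for the crash above
	})
	if err != nil {
		log.Println(err)
	}
}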

Thanks for your help; this is our last step before switching completely to Metrictank!

Helpful Information
Metrictank Version: v1.1
Golang Version: go1.15.2
OS: Alpine 3.10.0 (official Docker image)
Cassandra: 3.11.10 (official Docker image)


stale bot commented Sep 19, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Sep 19, 2021
stale bot closed this as completed on Oct 2, 2021