
No chunks with azureblobs over minio #800

Closed
khauser opened this issue Jul 26, 2019 · 6 comments

khauser commented Jul 26, 2019

Describe the bug
When enabling Loki's AWS S3 storage (pointing at Azure Blob Storage through a MinIO gateway, as I learned from #343), log entries still appear in Grafana, but nothing arrives in the configured "logs" bucket and Loki's chunk data folder stays empty.

To Reproduce
Steps to reproduce the behavior:

  1. Started MinIO as a gateway proxying to Azure Blob Storage:
     docker run -d -p 9000:9000 --name azure-s3 -e "MINIO_ACCESS_KEY=<azure-storage-key>" -e "MINIO_SECRET_KEY=<azure-storage-secret>" minio/minio gateway azure
  2. Tested the gateway with the AWS CLI:
     aws --endpoint-url http://<minio-host>:9000 s3 cp test.log s3://logs
     upload: .\test.log to s3://logs/test.log
     aws --endpoint-url http://<minio-host>:9000 s3 ls s3://logs
     2019-07-26 09:37:35          0 test.log

     Wonderful, so it is possible to talk to Azure Blob Storage through an S3 API!
  3. Started Loki (master-d31577f) with the following config:

  schema_config:
    configs:
    - from: 2018-04-15
      store: boltdb
      object_store: s3
      schema: v9
      index:
        prefix: index_
        period: 168h
  server:
    http_listen_port: 3100
  storage_config:
    boltdb:
      directory: /data/loki/index
    aws:
      s3: s3://<azure-storage-key>:<azure-storage-secret>@http://<minio-host>:9000/logs
      s3forcepathstyle: true
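
For reference, an annotated reading of the aws section above (the annotations are mine, not part of the original report; how Loki actually parses this URL is what the rest of the thread turns out to hinge on):

    aws:
      # <access-key>:<secret-key> before the '@' are the credentials,
      # the part after the '@' is intended to be the MinIO endpoint,
      # and the trailing path segment ("logs") is the bucket name.
      s3: s3://<azure-storage-key>:<azure-storage-secret>@http://<minio-host>:9000/logs
      # Path-style addressing (bucket in the URL path rather than in the
      # hostname), which is commonly needed for MinIO.
      s3forcepathstyle: true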

Expected behavior
The logs that regularly appear in Grafana should also end up in Azure Blob Storage, but this is not happening.

It is great that they still show up in Grafana! ;) But where are they coming from?

Also, the Loki log says nothing about it; I also tried with log.level: debug.

Environment:

  • Infrastructure: Kubernetes
  • Deployment tool: helm

Thanks for your help.


cyriltovena commented Jul 26, 2019

Hello @khauser,

The logs you're seeing in Grafana come from the ingestion path: the ingesters still hold the recent data in memory, so queries can return it before anything has been flushed to object storage.

Chunks are flushed to s3 when they are full or idle for a long time.

config:
  auth_enabled: false
  ingester:
    chunk_idle_period: 15m
    chunk_block_size: 262144

chunk_idle_period is the idle period mentioned above, while chunk_block_size is the amount of data (in bytes) that makes up a single block within a chunk. You need 10 full blocks for a chunk to count as full.

Once you hit one threshold or the other, you should see a chunk in S3; if not, check the Loki logs and let us know.

Does that help?
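
As a rough worked example (the numbers below just restate the settings above together with the ~10-blocks-per-chunk rule of thumb; lowering chunk_idle_period, e.g. to 1m, makes chunks flush much sooner while testing):

  ingester:
    chunk_idle_period: 15m     # a chunk that receives no new entries for 15 minutes is flushed
    chunk_block_size: 262144   # 256 KiB per block; ~10 full blocks, i.e. roughly 2.5 MiB of
                               # log data for a single stream, also trigger a flush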


khauser commented Jul 29, 2019

I understand.

Then I reached one of these thresholds over the weekend, but the flush failed with:

level=error ts=2019-07-29T05:10:48.550723833Z caller=flush.go:156 org_id=fake msg="failed to flush user" err="NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"

like here: #483

cyriltovena commented:

Something is wrong with your credentials; I'm not sure exactly what, since I've never used Azure, but you're on the right track.


khauser commented Jul 30, 2019

Thanks to @duythien I'm now a bit further along.

"RequestError: send request failed\ncaused by: Put https://s3.http:.amazonaws.com/minio:9000/logs/fake/bb8b36685552d9f0:16c422ae9fb:16c422aeb0a:dc14f2a0: invalid URL port \".amazonaws.com\""

So s3://<azure-storage-key>:<azure-storage-secret>@http://<minio-host>:9000/logs with s3forcepathstyle: true ends up being rewritten into a request against amazonaws.com.


khauser commented Jul 30, 2019

I have it working now.

There were two issues:

  • the secret key must not contain a slash, since it is part of a URL that has to be parsed; clear in hindsight, I was just blind to it
  • at first I used a MinIO proxy container simply named minio talking to Azure Blob Storage. This did not work with Loki, because the parser apparently looks for a dot in the URL to decide whether it is an amazonaws.com call or not. I saw somebody (see https://groups.google.com/d/msg/lokiproject/YEMj2pdbg_I/EiWfwrDpAwAJ) using minio.local with Docker, presumably because of this restriction. In Kubernetes I cannot give the container a name containing dots because of the standard DNS-1035 naming rules.
    So I switched to an external MinIO instance instead, and it worked. Great work to all of you 🥇 🥇 🥇

So the dot-based parsing (coming from here? https://github.com/weaveworks/common/blob/master/aws/config.go) seems like a bug to me, since it prevents me from running a local MinIO container.
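
As a wrap-up, a minimal sketch of the storage configuration shape that appears to work based on this thread. The hostname and credentials are placeholders; dropping the http:// that was embedded after the '@' is my assumption, based on the invalid-URL error above, not something spelled out in the comments:

  storage_config:
    boltdb:
      directory: /data/loki/index
    aws:
      # Assumptions: the secret key contains no slash, and the MinIO endpoint has a
      # hostname containing a dot (e.g. an external instance), so the parser treats
      # it as an endpoint rather than an AWS region.
      s3: s3://<azure-storage-key>:<azure-storage-secret>@minio.example.com:9000/logs
      s3forcepathstyle: true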


cyriltovena commented Jul 30, 2019

Feel free to open a PR there and I will try to get this updated in Loki.

Also, if you have some time to add documentation for running Loki over Azure, that would be a great thing to have for the community.

Thanks.
