
Use on non-GCP non-AWS systems #156

Closed
ghost opened this issue Aug 31, 2017 · 25 comments

Comments

@ghost

ghost commented Aug 31, 2017

Are non-GCP non-AWS systems supported by this plugin? Since the Stackdriver APIs can be used from anywhere, the plugin should work on any platform. Also, please document how to configure and use it on other platforms where the metadata server, zone, vm_id, etc. are not applicable.
Thanks.

@qingling128
Contributor

Hi @meta-coder, unfortunately we don't support non-GCP and non-AWS systems as of today. It's on our roadmap, but we don't have a specific timeline yet.

@igorpeshansky
Member

To expand on what @qingling128 said, the plugin does a bit more than call the API -- it also embeds the logic to detect where it's running or where the data is coming from, so that users don't have to supply that information. This logic has not yet been adapted to non-GCP and non-AWS platforms.

@Azuka

Azuka commented Nov 22, 2017

Just wanted to show interest in this as well. I'm building out a service that would benefit from forwarding logs to Stackdriver for monitoring.

@zacman85

zacman85 commented Dec 4, 2017

+1 from my team as well. We use cloud logging from inside our data center and would love to have a more standardized, maintained approach using this fluentd plugin.

@qingling128
Contributor

Thanks all for showing the interest and helping us prioritize our tasks! We've added a note in the agenda for tomorrow's Logging Agent product sync meeting to re-emphasize the impact / interest from customers on this feature. Stay tuned. :)

@bakkerpeter

bakkerpeter commented Dec 6, 2017

It also makes sense, for us at least, to be able to use this in a Minikube environment. Right now it gives the following error:
config error file="/etc/fluent/fluent.conf" error="Unable to obtain metadata parameters: zone vm_id"

It would be nice if you could just set parameters like these via environment variables.
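
For what it's worth, Fluentd evaluates Ruby inside double-quoted config values, so a sketch like the following should let you feed the overrides from the environment (zone and vm_id are the parameters named in the error above; the environment variable names here are just examples):

<match **>
  @type google_cloud
  zone "#{ENV['STACKDRIVER_ZONE']}"
  vm_id "#{ENV['STACKDRIVER_VM_ID']}"
</match>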

@mjkelly

mjkelly commented Dec 6, 2017

This was the first option I looked into for a bare-metal Kubernetes installation we are planning to use for some prod traffic. I would have used it if it were supported.

@igorpeshansky
Member

This is not officially supported. However, you can override the required parameters (project_id, zone, vm_id) via agent configuration. The project_id is generally picked up from the credentials file. In Kubernetes environments, you would have to change the ConfigMap to adjust the logging agent configuration.
Note that the zone and VM id you specify are significant to various Stackdriver subsystems, so with manual overrides you may get something that only works with Stackdriver Logging and not with the rest of Stackdriver.

@bakkerpeter

@igorpeshansky Thanks for pointing that out. I got the project id working, indeed by loading the credentials file. I also tried to figure out how to change the ConfigMap, but since I'm too far from understanding the Fluentd config, it felt like trial and error.
Could you give me a short sample? Thanks a lot.

@igorpeshansky
Member

In the fluentd config, right after @type google_cloud, add:

  vm_id MY_VM_ID
  zone MY_ZONE

The instructions I pointed to for changing the Kubernetes ConfigMap should tell you how to get to that bit of configuration.
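
In context, a minimal output section would look something like this (the ** match pattern is a placeholder; keep whatever pattern your existing ConfigMap uses, and fill in MY_VM_ID and MY_ZONE):

<match **>
  @type google_cloud
  vm_id MY_VM_ID
  zone MY_ZONE
</match>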

@derekperkins

derekperkins commented Dec 28, 2017

@igorpeshansky After following your instructions to manually set the vm_id and zone, fluentd got further along and actually started, but then kicked out an error, presumably on flush. Is this something that can be easily changed/overridden? I do have my GOOGLE_APPLICATION_CREDENTIALS set.

2017-12-28 12:34:18 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-12-28 12:34:48 +0000 error_class="Signet::AuthorizationError" error="Authorization failed.  Server message:\n{\n \"error\": \"invalid_scope\",\n \"error_description\": \"Invalid downscoping, scopes should not be specified as a request parameter.\"\n}" plugin_id="object:3fba9d778d40"

@igorpeshansky
Member

igorpeshansky commented Dec 28, 2017

This is #197 (caused by #179, which was rolled back in #198). You should bump the plugin version to 0.6.12.
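
If you install the plugin gem yourself rather than using the prebuilt fluentd-gcp images, pinning the version would look roughly like this, assuming standard RubyGems tooling:

  gem install fluent-plugin-google-cloud -v 0.6.12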

@derekperkins

Thanks for the clarification. I was just using gcr.io/google-containers/fluentd-gcp:2.0.11, which must have shipped with 0.6.11. I rolled my image back to gcr.io/google-containers/fluentd-gcp:2.0.10 and it started working. Thanks!

@mlushpenko

Hi, any update on this? I spent a few hours trying to set up monitoring with Stackdriver, only to figure out it doesn't work for on-prem environments :)

@rabenhorst

rabenhorst commented Jul 3, 2018

@mlushpenko I got it working with the latest plugin version (0.6.21). Add the following record fields:

<record>
  "logging.googleapis.com/local_resource_id" ${"k8s_container.#{tag_suffix[4].rpartition('.')[0].split('_')[1]}.#{tag_suffix[4].rpartition('.')[0].split('_')[0]}.#{tag_suffix[4].rpartition('.')[0].split('_')[2].rpartition('-')[0]}"}
  message ${record['log']}
</record>

and add these parameters to the google_cloud output plugin config:

  k8s_cluster_name CLUSTER_NAME
  k8s_cluster_location CLUSTER_LOCATION
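
Put together, the relevant sections of my config look roughly like this (the kubernetes.** tag pattern reflects my setup and may differ in yours; enable_ruby is needed because the local_resource_id expression is Ruby):

<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    "logging.googleapis.com/local_resource_id" ${"k8s_container.#{tag_suffix[4].rpartition('.')[0].split('_')[1]}.#{tag_suffix[4].rpartition('.')[0].split('_')[0]}.#{tag_suffix[4].rpartition('.')[0].split('_')[2].rpartition('-')[0]}"}
    message ${record['log']}
  </record>
</filter>

<match **>
  @type google_cloud
  k8s_cluster_name CLUSTER_NAME
  k8s_cluster_location CLUSTER_LOCATION
</match>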

@sarmadali20

@mactr0n, is this for a minikube or VMware k8s cluster? I am getting the following in the logging-agent pod:

2018-08-21T15:44:32.717051457Z 2018-08-21 15:44:32 +0000 [error]: Failed to access metadata service:  error=#<Net::OpenTimeout: execution expired>
2018-08-21T15:44:32.717083776Z 2018-08-21 15:44:32 +0000 [info]: Unable to determine platform
2018-08-21T15:44:32.721066461Z 2018-08-21 15:44:32 +0000 [error]: config error file="/etc/google-fluentd/google-fluentd.conf" error="Unable to obtain metadata parameters: zone vm_id"
2018-08-21T15:44:32.725580067Z 2018-08-21 15:44:32 +0000 [info]: process finished code=256
2018-08-21T15:44:32.725642498Z 2018-08-21 15:44:32 +0000 [error]: fluentd main process died unexpectedly. restarting.

@rabenhorst

@sarmadali20 we use it on a custom (kubeadm) cluster and an AKS cluster. Could you please post the complete config?
It seems as if you forgot to disable the metadata agent:

      enable_metadata_agent false

@duanshiqiang

Would like to see this plugin working on any environment.

@usu-github-bot

@duanshiqiang just follow my hints and it will work anywhere. We use the plugin for log forwarding on a custom cluster and on Azure.

@bryanlarsen

Steps to get Stackdriver logging running in non-Kubernetes environments outside of GCE or AWS:

First get it running on a GCE instance so you can compare against something that actually works.

  1. Create a service account and put its credentials in /etc/google/auth/application_default_credentials.json (see the gcloud sketch after this list)

  2. Follow installation steps from https://cloud.google.com/logging/docs/agent/installation

  3. Edit /etc/google-fluentd/google-fluentd.conf and add, after @type google_cloud:

    project_id XXX
    vm_id my-machine-name
    zone northamerica-northeast1-b

  4. sudo systemctl restart google-fluentd

  5. Look for your logs under "GCE VM Instance", my-machine-name
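
For step 1, a sketch of the service account setup with gcloud (logging-agent and my-project are placeholder names; roles/logging.logWriter is the minimal role for writing logs):

  # Placeholder names; substitute your own project and account.
  gcloud iam service-accounts create logging-agent --project=my-project
  gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:logging-agent@my-project.iam.gserviceaccount.com" \
    --role="roles/logging.logWriter"
  gcloud iam service-accounts keys create key.json \
    --iam-account=logging-agent@my-project.iam.gserviceaccount.com
  sudo mkdir -p /etc/google/auth
  sudo mv key.json /etc/google/auth/application_default_credentials.json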

@caarlos0

@bryanlarsen's suggestion works, thanks!

@AlexandreProenca

@bryanlarsen's suggestion works for me, thanks!

@jkohen
Contributor

jkohen commented Jul 30, 2019

I'm glad you found a solution that works for you. I'll close this ticket because the plugin meets our design goals.

@jkohen jkohen closed this as completed Jul 30, 2019
@adamlukaszczyk

@jkohen Could you mention in the documentation the configuration changes suggested by @bryanlarsen? It would help others a lot ;) 👍

@luckoseabraham

I have a PKS cluster and I have followed the above steps; the logs are coming into Stackdriver under the GCE Compute section.
Are there additional steps needed to push these logs to the Kubernetes cluster group?
