Lodge is an open-source, self-managed logging framework for small distributed applications, built on the Elastic Stack. Lodge allows users to ship, transform, store, visualize, and monitor their logs.
Sam Clark Software Engineer Dallas, TX
Rana Deeb Software Engineer San Francisco, CA
Regina Donovan Software Engineer Atlanta, GA
Justin Lo Software Engineer Vancouver, BC
- Dashboard Overview
- Shipping Logs
- Managing Kafka and ZooKeeper
- Using Lodge Restore
- Using Kibana
- Helpful Resources
This is what Lodge looks like from a user’s perspective at a high level. The user has deployed Lodge on their network, so all the applications in that network can ship logs to the stack using Filebeat, the member of the Beats family dedicated to collecting and forwarding log data. The user can then view those logs from the Lodge dashboard that is deployed with the stack.
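To make the shipping step concrete, a minimal Filebeat configuration pointing at the stack’s Kafka layer might look like the sketch below. The log path, broker address, and topic name are illustrative placeholders, not values Lodge actually generates:

```yaml
# filebeat.yml — hypothetical sketch; paths, hosts, and topic are placeholders
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log   # application log files to ship

output.kafka:
  hosts: ["kafka:9092"]        # Kafka broker reachable on the Lodge network
  topic: "filebeat-logs"       # topic that Logstash will consume from
  codec.json:
    pretty: false              # one compact JSON event per message
```

Shipping through Kafka rather than directly to Logstash buffers log traffic, so a slow or restarting consumer does not drop events.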
Simply remove the existing configuration file and create a brand new one based on the configuration generated from the Shippers section of the Lodge Dashboard. Follow the instructions, pick the module you would like incorporated into your Filebeat config, and download the module configuration file. Lodge dynamically generates this configuration based on the module option you have selected.
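As a hedged illustration of what such a downloaded module file could contain (using the standard Filebeat module format, with nginx chosen arbitrarily — the actual file depends on the module selected in the dashboard):

```yaml
# modules.d/nginx.yml — example only; Lodge's generated output may differ
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # access log locations
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]    # error log locations
```

Dropping a file like this into Filebeat’s `modules.d/` directory enables that module’s parsing pipeline for the listed log paths.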
In the Lodge dashboard, we have built in monitoring tools to manage Kafka and Zookeeper.
Kowl is a Kafka management tool that lets the user see a list of all the topics dynamically generated by Filebeat as it ships logs to Kafka. Within the Consumer Groups tab, we can see that Logstash belongs to a consumer group that is consuming logs from Kafka topics, and the Brokers tab lists all the Kafka brokers.
Heading over to ZooNavigator, a management tool for ZooKeeper, we can inspect both Kafka and ZooKeeper state in one place. Zooming into the Kafka nodes gives us access to everything Kafka keeps in ZooKeeper, for example the list of Kafka brokers, consumers, configuration, and so on.
Lodge Restore is a service that allows you to retrieve archival log data from Amazon S3 given a specific date range and to reindex the log data back into Elasticsearch to be visualized in Kibana.
First, the user defines the start and end dates for retrieving the archived logs from S3. All log files inserted into S3 during that time frame are fetched and listed under Log Files. A success message indicates that the logs have been re-indexed into Elasticsearch and are ready to be visualized in Kibana.
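The date-range retrieval step can be sketched as follows. This is a minimal illustration, not Lodge’s actual code: it assumes a hypothetical key naming scheme of `logs-YYYY-MM-DD.json`, and the bucket name in the commented S3 call is a placeholder.

```python
from datetime import date

def keys_in_range(keys, start, end):
    """Return the S3 object keys whose embedded date falls in [start, end].

    Assumes keys look like 'logs-YYYY-MM-DD.json' (a hypothetical naming
    scheme chosen for illustration).
    """
    selected = []
    for key in keys:
        day = date.fromisoformat(key.removeprefix("logs-").removesuffix(".json"))
        if start <= day <= end:
            selected.append(key)
    return selected

# With real AWS access, the key list would come from S3, e.g.:
#   import boto3
#   s3 = boto3.client("s3")
#   resp = s3.list_objects_v2(Bucket="lodge-archive")  # bucket is a placeholder
#   keys = [obj["Key"] for obj in resp.get("Contents", [])]
# Each selected object would then be fetched and bulk re-indexed into
# Elasticsearch so Kibana can visualize it.

keys = ["logs-2023-04-30.json", "logs-2023-05-01.json", "logs-2023-05-02.json"]
print(keys_in_range(keys, date(2023, 5, 1), date(2023, 5, 2)))
```

Filtering by key name keeps the listing step cheap: only the objects inside the requested window are downloaded and re-indexed.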
You can also download the log files themselves as raw log text.