
OVA Appliance - Cannot Log In No Graylog Server Available #62

Closed
magurwara opened this issue May 22, 2015 · 7 comments

@magurwara

I did the initial configuration of the admin password and timezone. All services are running, but nothing is responding on port 12900 and the web interface says no Graylog server is available. I checked the server log, /var/log/graylog/server/current, and see "ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration".

I have no idea why. This is running in VMware Workstation 7.1.3 on Windows 7 64-bit with 4 GB allocated to the VM.
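
In case it helps, I was watching the log with nothing fancier than (path as above):

tail -f /var/log/graylog/server/current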

@joschi
Contributor

joschi commented May 22, 2015

@magurwara Are there any other messages before or after the one you've mentioned? Ideally post the complete, unabridged server log.
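
If the file is huge, grabbing just the repeating error block is enough. Something like this should work (the log path is the one you mentioned; the context line counts are just a guess):

grep -B 2 -A 20 'Invalid configuration' /var/log/graylog/server/current | head -n 60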

@magurwara
Author

The log file is really big. It has 649,000 lines. Is there a way to upload it rather than copy the logs here?

@magurwara
Author

I think this bit of the log is repeating continuously. I set the number of indices to 10, and this may be causing the problem.

It looks like you are trying to access MongoDB over HTTP on the native driver port.
2015-05-22_08:48:56.05935 2015-05-22 09:48:56,058 INFO : org.graylog2.bootstrap.CmdLineTool - Loaded plugins: []
2015-05-22_08:48:56.19252 2015-05-22 09:48:56,184 ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration
2015-05-22_08:48:56.19253 com.github.joschi.jadconfig.ParameterException: Couldn't convert value "=10" to Integer.
2015-05-22_08:48:56.19253 at com.github.joschi.jadconfig.converters.IntegerConverter.convertFrom(IntegerConverter.java:28)
2015-05-22_08:48:56.19254 at com.github.joschi.jadconfig.converters.IntegerConverter.convertFrom(IntegerConverter.java:11)
2015-05-22_08:48:56.19254 at com.github.joschi.jadconfig.JadConfig.convertStringValue(JadConfig.java:160)
2015-05-22_08:48:56.19254 at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:138)
2015-05-22_08:48:56.19254 at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:99)
2015-05-22_08:48:56.19255 at org.graylog2.bootstrap.CmdLineTool.readConfiguration(CmdLineTool.java:316)
2015-05-22_08:48:56.19255 at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:161)
2015-05-22_08:48:56.19255 at org.graylog2.bootstrap.Main.main(Main.java:58)
2015-05-22_08:48:56.19256 Caused by: java.lang.NumberFormatException: For input string: "=10"
2015-05-22_08:48:56.19256 at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
2015-05-22_08:48:56.19256 at java.lang.Integer.parseInt(Integer.java:580)
2015-05-22_08:48:56.19256 at java.lang.Integer.valueOf(Integer.java:766)
2015-05-22_08:48:56.19257 at com.github.joschi.jadconfig.converters.IntegerConverter.convertFrom(IntegerConverter.java:25)
2015-05-22_08:48:56.19259 ... 7 more

@joschi
Contributor

joschi commented May 22, 2015

2015-05-22_08:48:56.19253 com.github.joschi.jadconfig.ParameterException: Couldn't convert value "=10" to Integer.

This seems to be the culprit. There's some invalid numeric value in your configuration file.

Could you please post the configuration file (with sensitive information like password_secret and root_password_sha2 removed), or at least the line(s) you've changed? You can post it to https://gist.github.com/ and link to the gist in this issue.
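
Given that the error complains about the literal value "=10", a stray doubled equals sign is likely. A quick grep for that pattern would find it (the config path here is an assumption, the OVA default):

sudo grep -n '= *=' /opt/graylog/conf/graylog.conf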

@magurwara
Author

My Graylog config is below:

# If you are running more than one instance of graylog-server, you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting graylog-server from init scripts or similar.
node_id_file = /var/opt/graylog/graylog-server-node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# the default root user is named 'admin'
root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user.
# Default is UTC
root_timezone = GB

# Set plugin directory here (relative or absolute)
plugin_dir = /opt/graylog/plugin

# REST API listen URI. Must be reachable by other graylog-server nodes if you run a cluster.
rest_listen_uri = http://0.0.0.0:12900/

# REST API transport address. Defaults to the value of rest_listen_uri. Exception: If rest_listen_uri
# is set to a wildcard IP address (0.0.0.0) the first non-loopback IPv4 system address is used.
# If set, this will be promoted in the cluster discovery APIs, so other nodes may try to connect on
# this address and it is used to generate URLs addressing entities in the REST API. (see rest_listen_uri)
# You will need to define this, if your Graylog server is running behind a HTTP proxy that is rewriting
# the scheme, host name or URI.
#rest_transport_uri = http://192.168.1.1:12900/

# Enable CORS headers for REST api. This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
# This is disabled by default. Uncomment the next line to enable it.
#rest_enable_cors = true

# Enable GZIP support for REST api. This compresses API responses and therefore helps to reduce
# overall round trip times. This is disabled by default. Uncomment the next line to enable it.
#rest_enable_gzip = true

# Enable HTTPS support for the REST API. This secures the communication with the REST API with
# TLS to prevent request forgery and eavesdropping. This is disabled by default. Uncomment the
# next line to enable it.
#rest_enable_tls = true

# The X.509 certificate file to use for securing the REST API.
#rest_tls_cert_file = /path/to/graylog2.crt

# The private key to use for securing the REST API.
#rest_tls_key_file = /path/to/graylog2.key

# The password to unlock the private key used for securing the REST API.
#rest_tls_key_password = secret

# The maximum size of a single HTTP chunk in bytes.
#rest_max_chunk_size = 8192

# The maximum size of the HTTP request headers in bytes.
#rest_max_header_size = 8192

# The maximal length of the initial HTTP/1.1 line in bytes.
#rest_max_initial_line_length = 4096

# The size of the execution handler thread pool used exclusively for serving the REST API.
#rest_thread_pool_size = 16

# The size of the worker thread pool used exclusively for serving the REST API.
#rest_worker_threads_max_pool_size = 16

# Embedded Elasticsearch configuration file
# pay attention to the working directory of the server, maybe use an absolute path here
#elasticsearch_config_file = /etc/graylog-elasticsearch.yml

# Graylog will use multiple indices to store documents in. You can configure the strategy it uses to determine
# when to rotate the currently active write index.
# It supports multiple rotation strategies:
#   - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
#   - "size" per index, use elasticsearch_max_size_per_index below to configure
# valid values are "count", "size" and "time", default is "count"
rotation_strategy = size

# (Approximate) maximum number of documents in an Elasticsearch index before a new index
# is being created, also see no_retention and elasticsearch_max_number_of_indices.
# Configure this if you used 'rotation_strategy = count' above.
#elasticsearch_max_docs_per_index = 20000000

# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1GB.
# Configure this if you used 'rotation_strategy = size' above.
elasticsearch_max_size_per_index = 1073741824

# (Approximate) maximum time before a new Elasticsearch index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
# Configure this if you used 'rotation_strategy = time' above.
# Please note that this rotation period does not look at the time specified in the received messages, but is
# using the real clock value to decide when to rotate the index!
# Specify the time using a duration and a suffix indicating which unit you want:
#  1w  = 1 week
#  1d  = 1 day
#  12h = 12 hours
# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
elasticsearch_max_time_per_index = 1h

# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
#elasticsearch_disable_version_check = true

# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
#no_retention = false

# How many indices do you want to keep?
elasticsearch_max_number_of_indices = =10

# Decide what happens with the oldest indices when the maximum number of indices is reached.
# The following strategies are available:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
retention_strategy = delete

# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.
elasticsearch_shards = 4
elasticsearch_replicas = 1

# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
elasticsearch_index_prefix = graylog

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also: http://graylog2.org/resources/documentation/general/queries
allow_leading_wildcard_searches = true

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false

# Settings to be passed to Elasticsearch's client (overriding those in the provided elasticsearch_config_file).
# The cluster name must be the same as for your Elasticsearch cluster.
#elasticsearch_cluster_name = graylog2

# you could also leave this out, but makes it easier to identify the graylog client instance
#elasticsearch_node_name = graylog-server

# we don't want the graylog server to store any data, or be master node
#elasticsearch_node_master = false
#elasticsearch_node_data = false

# use a different port if you run multiple Elasticsearch nodes on one machine
#elasticsearch_transport_tcp_port = 9350

# we don't need to run the embedded HTTP server here
#elasticsearch_http_enabled = false

#elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 151.211.7.37:9300

# Change the following setting if you are running into problems with timeouts during Elasticsearch cluster discovery.
# The setting is specified in milliseconds, the default is 5000ms (5 seconds).
# elasticsearch_cluster_discovery_timeout = 5000

# the following settings allow to change the bind addresses for the Elasticsearch client in graylog
# these settings are empty by default, letting Elasticsearch choose automatically,
# override them here or in the 'elasticsearch_config_file' if you need to bind to a special address
# refer to http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.3/modules-network.html
# for special values here
# elasticsearch_network_host =
# elasticsearch_network_bind_host =
# elasticsearch_network_publish_host =

# The total amount of time discovery will look for other Elasticsearch nodes in the cluster
# before giving up and declaring the current node master.
#elasticsearch_discovery_initial_state_timeout = 3s

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
# Elasticsearch documentation: http://www.elasticsearch.org/guide/reference/index-modules/analysis/
# Note that this setting only takes effect on newly created indices.
elasticsearch_analyzer = standard

# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
output_batch_size = 500

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1

# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3

#outputbuffer_processor_keep_alive_time = 5000
#outputbuffer_processor_threads_core_pool_size = 3
#outputbuffer_processor_threads_max_pool_size = 30

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Start server with --statistics flag to see buffer utilization.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

# Enable the disk based message journal.
message_journal_enabled = true

# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and
# must not contain any other files than the ones created by Graylog itself.
message_journal_dir = /var/opt/graylog/data/journal

# The journal holds messages before they can be written to Elasticsearch,
# for a maximum of 12 hours or 5 GB, whichever happens first.
# During normal operation the journal will be smaller.
#message_journal_max_age = 12h
message_journal_max_size = 1gb
#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb

# Number of threads used exclusively for dispatching internal events. Default is 2.
#async_eventbus_processors = 2

# EXPERIMENTAL: Dead Letters
# Every failed indexing attempt is logged by default and made visible in the web-interface. You can enable
# the experimental dead letters feature to write every message that was not successfully indexed into the
# MongoDB "dead_letters" collection to make sure that you never lose a message. The actual writing of dead
# letter should work fine already but it is not heavily tested yet and will get more features in future
# releases.
dead_letters_enabled = false

# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
# shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3

# Every message is matched against the configured streams and it can happen that a stream contains rules which
# take an unusual amount of time to run, for example if its using regular expressions that perform excessive backtracking.
# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other
# streams, Graylog limits the execution time for each stream.
# The default values are noted below, the timeout is in milliseconds.
# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times
# that stream is disabled and a notification is shown in the web interface.
# stream_processing_timeout = 2000
# stream_processing_max_faults = 3

# Length of the interval in seconds in which the alert conditions for all streams should be checked
# and alarms are being sent.
#alert_check_interval = 60

# Since 0.21 the graylog server supports pluggable output modules. This means a single message can be written to multiple
# outputs. The next setting defines the timeout for a single output module, including the default output module where all
# messages end up. This setting is specified in milliseconds.

# Time in milliseconds to wait for all message outputs to finish writing a single message.
#output_module_timeout = 10000

# Time in milliseconds after which a detected stale master node is being rechecked on startup.
#stale_master_timeout = 2000

# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
# shutdown_timeout = 30000

# MongoDB Configuration
mongodb_useauth = false
#mongodb_user = grayloguser
#mongodb_password = 123
mongodb_host = 127.0.0.1
#mongodb_replica_set = localhost:27017,localhost:27018,localhost:27019
mongodb_database = graylog
mongodb_port = 27017

# Raise this according to the maximum connections your MongoDB server can handle if you encounter MongoDB connection problems.
mongodb_max_connections = 100

# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5, then 500 threads can block. More than that and an exception will be thrown.
# http://api.mongodb.org/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
mongodb_threads_allowed_to_block_multiplier = 5

# Drools Rule File (Use to rewrite incoming log messages)
# See: http://graylog2.org/resources/documentation/general/rewriting
#rules_file = /etc/graylog.drl

# Email transport
transport_email_enabled = true
transport_email_hostname = xxxxxxxx
transport_email_port = 587
transport_email_use_auth = false
transport_email_use_tls = true
transport_email_use_ssl = true
transport_email_auth_username = 
transport_email_auth_password = 
transport_email_subject_prefix = [graylog]
transport_email_from_email = graylog@graylog

# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
#
transport_email_web_interface_url = http://graylog

# HTTP proxy for outgoing HTTP calls
#http_proxy_uri =

# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize
# cycled indices.
#disable_index_optimization = true

# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is 1.
#index_optimization_max_num_segments = 1

# Disable the index range calculation on all open/available indices and only calculate the range for the latest
# index. This may speed up index cycling on systems with large indices but it might lead to wrong search results
# in regard to the time range of the messages (i. e. messages within a certain range may not be found). The default
# is to calculate the time range on all open/available indices.
#disable_index_range_calculation = true

# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification
# will be generated to warn the administrator about possible problems with the system. Default is 1 second.
#gc_warning_threshold = 1s

# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
#ldap_connection_timeout = 2000

# https://github.com/bazhenov/groovy-shell-server
#groovy_shell_enable = false
#groovy_shell_port = 6789

# Enable collection of Graylog-related metrics into MongoDB
#enable_metrics_collection = false

# Disable the use of SIGAR for collecting system stats
#disable_sigar = false

# TELEMETRY
# Enable publishing Telemetry data
#telemetry_enabled = false

# Base URL of the Telemetry service
#telemetry_url = https://telemetry-in.graylog.com/submit/

# Authentication token for the Telemetry service
#telemetry_token = 

# How often the Telemetry data should be reported
#telemetry_report_interval = 1m

# Number of Telemetry data sets to store locally if the connection to the Telemetry service fails
#telemetry_max_queue_size = 10

# TTL for Telemetry data in local cache
#telemetry_cache_timeout = 1m

# Connect timeout for HTTP connections
#telemetry_service_connect_timeout =  1s

# Write timeout for HTTP connections
#telemetry_service_write_timeout = 5s

# Read timeout for HTTP connections
#telemetry_service_read_timeout = 5s

@joschi
Contributor

joschi commented May 22, 2015

elasticsearch_max_number_of_indices = =10

This line is the problem. It should read elasticsearch_max_number_of_indices = 10 instead. You can set this with sudo graylog-ctl set-retention --indices=10 (see https://github.com/Graylog2/graylog2-images/tree/master/ova#configuration).
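
If you would rather fix the file in place, a one-liner like this should work too (the config path is again assumed to be the OVA default, and graylog-ctl restart should pick up the change):

sudo sed -i 's/^elasticsearch_max_number_of_indices = =10$/elasticsearch_max_number_of_indices = 10/' /opt/graylog/conf/graylog.conf
sudo graylog-ctl restart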

@magurwara
Author

Thanks, it is working.
