HTTP file server to help DevOps.

Download an archive file which matches your environment from the latest release.
Extract the executable `nvgd` (or `nvgd.exe`) from the archive, then copy it to
a directory in your `PATH` environment variable (ex. `/usr/local/bin`).

Run:

    $ nvgd

Access:

    $ curl 'http://127.0.0.1:9280/file:///var/log/message/httpd.log?tail=limit:25'
NOTE: The pre-compiled binary for Linux is built with a newer glibc, so it can't run on Linux distributions with an old glibc (CentOS 7 or so). In that case you must compile nvgd yourself; see the next section to build from source.
Nvgd uses `replace` directives in its `go.mod`, so it can't be installed with `go install` for now.
Requirements to build:
- Go 1.19 or above (1.20.4 is recommended)
- Git
- C compiler (gcc or clang) for CGO
    # Check out the source.
    $ git clone -b v1.12.2 --depth 1 https://github.com/koron/nvgd.git nvgd
    # Change the current working directory.
    $ cd nvgd
    # Build and install.
    $ go install
See also: How to build on CentOS 7
Nvgd accepts paths in a format like this:

    /{protocol}://{args/for/protocol}[?{filters}]

Nvgd supports these protocols:

- `file` - `/file:///path/to/source`
- `files` - supports globs like `*`: `/files:///var/log/access*.log`
- `command` - results of pre-defined commands
- `s3obj` - get an object: `/s3obj://bucket-name/key/to/object`
- `s3list` - list common prefixes and objects: `/s3list://bucket-name/prefix/of/key`
- `db` - query pre-defined databases
    - query `id` and `email` from `users` in `db_pq`: `/db://db_pq/select id,email from users`
    - supports multiple databases:
      `/db://db_pq2/foo/select id,email from users` and
      `/db://db_pq2/bar/select id,email from users`
      query the `foo` and `bar` databases respectively.
    - show the query form for `db_pq`: `/db://db_pq/`
- `db-dump` - dump tables to XLSX: `curl http://127.0.0.1:9280/db-dump://mysql/ -o dst.xlsx`
  Or access `http://127.0.0.1:9280/db-dump://mysql/` with a web browser to start downloading an Excel file.
- `db-restore` - restore (clear all and import) tables from XLSX:

        curl http://127.0.0.1:9280/db-restore://mysql/ -X POST \
          -H 'Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' \
          --data-binary @src.xlsx

  Or access `http://127.0.0.1:9280/db-restore://mysql/` with a web browser and upload an Excel file from the form.
- `db-update` - update tables by XLSX (upsert):

        curl http://127.0.0.1:9280/db-update://mysql/ -X POST \
          -H 'Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' \
          --data-binary @src.xlsx

  Or access `http://127.0.0.1:9280/db-update://mysql/` with a web browser and upload an Excel file from the form.
- `redis` - access to redis
- `trdsql` - TRDSQL query editor
- `echarts` - ECharts query editor
- `examples` - example files used by the demos/documents of filters
- `config` - current nvgd configuration: `/config://` or `/config/` (alias)
- `help` - show nvgd's help (README.md): `/help://` or `/help/` (alias). It is better combined with the `markdown` filter: `/help/?markdown`
- `version` - show nvgd's version: `/version://` or `/version/` (alias)
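The `/{protocol}://{args}[?{filters}]` shape described above can be sketched as a tiny parser. This is an illustrative model only; the helper name `parse_nvgd_path` is invented and is not part of nvgd:

```python
def parse_nvgd_path(path):
    """Split an nvgd path into (protocol, args, filters), following
    the /{protocol}://{args}[?{filters}] shape."""
    body, _, query = path.lstrip("/").partition("?")
    protocol, sep, args = body.partition("://")
    if not sep:
        raise ValueError("missing '://' separator")
    return protocol, args, query.split("&") if query else []

print(parse_nvgd_path("/file:///var/log/messages?grep=re:error&tail=limit:50"))
# → ('file', '/var/log/messages', ['grep=re:error', 'tail=limit:50'])
```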
Nvgd takes a configuration file in YAML format. A file `nvgd.conf.yml` in the current directory, or a file given with the `-c` option, is loaded at start.

`nvgd.conf.yml` consists of these parts:
    # Listen IP address and port (OPTIONAL, default is "127.0.0.1:9280")
    addr: "0.0.0.0:8080"

    # Path prefix for absolute links, used for sub-path multiple tenancy
    path_prefix: /tenant_name/

    # Destination (path or keyword) for the error log, default is "(stderr)"
    error_log: (stderr)

    # Destination (path or keyword) for the access log, default is "(discard)"
    access_log: (stdout)

    # File which is served as the "/" root.
    root_contents_file: "/opt/nvgd/index.html"

    # Configuration for protocols (OPTIONAL)
    protocols:

      # File protocol handler's configuration.
      file:
        ...

      # Pre-defined command handlers.
      command:
        ...

      # AWS S3 protocol handler configuration (OPTIONAL, see the S3 section).
      s3:
        ...

      # DB protocol handler configuration (OPTIONAL, see below)
      db:
        ...

    # Configuration for each filter (OPTIONAL)
    filters:
      indexhtml:
        ...
      htmltable:
        ...
      markdown:
        ...

    # Default filters: pairs of path prefix and filter description.
    default_filters:
      ...

    # Custom prefix aliases, see the "Prefix Aliases" section below.
    aliases:
      ...

    # Enable "Cross-Origin Resource Sharing" (CORS).
    # This value is put in the "Access-Control-Allow-Origin" header of responses.
    access_control_allow_origin: "*"
Example:

    file:
      locations:
        - '/var/log/'
        - '/etc/'
      forbiddens:
        - '/etc/ssh'
        - '/etc/passwd'
      use_unixtime: true
This configuration has `locations` and `forbiddens` properties, which define the accessible area of the file system.

When paths are given as `locations`, only those paths are permitted to access; all others are forbidden. Otherwise, all paths are accessible.

When `forbiddens` are given, those paths can't be accessed even if they are under a path in `locations`.

If the `use_unixtime` property is set to true, UNIX time is used instead of RFC1123 for all time expressions: `modified_at` or so.
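As a rough illustration of the `locations`/`forbiddens` semantics described above, here is a minimal model assuming simple prefix matching. The helper `is_accessible` is hypothetical, not nvgd's actual implementation:

```python
def is_accessible(path, locations=(), forbiddens=()):
    """Prefix-based access check: forbiddens always win; when
    locations is non-empty, the path must fall under one of them."""
    if any(path.startswith(p) for p in forbiddens):
        return False
    if locations:
        return any(path.startswith(p) for p in locations)
    return True  # no locations configured: everything is accessible

locs, bans = ["/var/log/", "/etc/"], ["/etc/ssh", "/etc/passwd"]
print(is_accessible("/var/log/messages", locs, bans))  # → True
print(is_accessible("/etc/passwd", locs, bans))        # → False
```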
Configuration of the pre-defined command protocol handler maps a key to the corresponding command source.

Example:

    command:
      "df": "df -h"
      "lstmp": "ls -l /tmp"

This enables two resources under the `command` protocol:

    /command://df
    /command://lstmp

You can add filters too, of course, like `/command://df?grep=re:foo`.
Configuration of the S3 protocol handlers consists of two parts: `default` and `buckets`. The `default` part contains the default configuration to connect to S3, and the `buckets` part can contain bucket-specific configurations.
    s3:
      # IANA timezone used to show times (OPTIONAL). "Asia/Tokyo" for JST.
      timezone: Asia/Tokyo

      # default configuration to connect to S3 (REQUIRED for S3)
      default:
        # Access key ID for S3 (REQUIRED)
        access_key_id: xxxxxxxxxxxxxxxxxxxx
        # Secret access key (REQUIRED)
        secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        # Region to connect to (OPTIONAL, default is "ap-northeast-1")
        region: ap-northeast-1
        # Session token to connect (OPTIONAL, default is empty: not used)
        session_token: xxxxxxx
        # MaxKeys for S3 object listing, valid between 1 and 1000.
        # (OPTIONAL, default is 1000)
        max_keys: 10
        # HTTP proxy used to access S3 (OPTIONAL, default is empty: direct access)
        http_proxy: "http://your.proxy:port"

      # bucket-specific configurations (OPTIONAL)
      buckets:
        # a bucket name can be specified as a key.
        "your_bucket_name1":
          # the same properties as in "default" can be placed here.
        # other buckets can be added here.
        "your_bucket_name2":
          ...

      # UNIX time will be used instead of RFC1123 for all time expressions:
      # "modified_at" or so. (OPTIONAL)
      use_unixtime: true
Sample configuration for the DB protocol handler:

    db:
      # each key can be a favorite name for your database
      db_pq:
        # driver supports 'postgres' or 'mysql' for now
        driver: 'postgres'
        # name is a driver-specific data source name (DSN)
        name: 'postgres://pqgotest:password@localhost/pqgotest?sslmode=verify-full'
        # limit the number of rows for a query (default: 100)
        max_rows: 50

      # sample of connecting to MySQL
      db_mysql:
        driver: 'mysql'
        name: 'user:password@/dbname'
With the above configuration, you can access those databases with URLs or commands like these:

    $ curl 'http://127.0.0.1:9280/db://db_pq/select%20email%20from%20users'
    $ curl 'http://127.0.0.1:9280/db://db_mysql/select%20email%20from%20users'
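Since the SQL part of the URL must be percent-encoded (spaces become `%20`, as in the curl examples above), a small helper can build such URLs. This is a sketch; the helper name `db_query_url` is invented and assumes standard percent-encoding:

```python
from urllib.parse import quote

def db_query_url(base, dbname, query):
    """Build a /db:// URL with the SQL percent-encoded
    (spaces become %20, commas become %2C)."""
    return f"{base}/db://{dbname}/{quote(query)}"

print(db_query_url("http://127.0.0.1:9280", "db_pq",
                   "select id,email from users"))
# → http://127.0.0.1:9280/db://db_pq/select%20id%2Cemail%20from%20users
```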
To restore or update a MySQL database with the `db-restore` or `db-update` protocol, we recommend using TRADITIONAL mode to make MySQL check types strictly. Add `?sql_mode=TRADITIONAL` to the connection URL to enable it.
Example:

    db:
      mysql1:
        driver: mysql
        name: "mysql:abcd1234@tcp(127.0.0.1:3306)/mysql?sql_mode=TRADITIONAL"
To make the DB protocol handler connect to multiple databases in one instance, there are three steps:

1. Add a `multiple_database: true` property to the DB configuration.
2. Add the `{{.dbname}}` placeholder to the value of `name`.
3. Access the URL `/db://DBNAME@db_pq/your query`. DBNAME is used to expand `{{.dbname}}` above.
As a result, your configuration would look like this:

    db:
      db_pq:
        driver: 'postgres'
        name: 'postgres://pqgotest:password@localhost/{{.dbname}}?sslmode=verify-full'
        multiple_database: true

      # sample of connecting to MySQL
      db_mysql:
        driver: 'mysql'
        name: 'user:password@/{{.dbname}}'
        multiple_database: true
Some filters can be configured in the `filters` section.

- `custom_css_urls`: array of strings. Each string specifies an external CSS URL.
  Supported filters: htmltable, indexhtml, markdown

Example: the markdown filter outputs two `link` elements to include external CSS, and the indexhtml filter outputs one `link` element for CSS.

    filters:
      markdown:
        custom_css_urls:
          - https://www.kaoriya.net/assets/css/contents.css
          - https://www.kaoriya.net/assets/css/syntax.css
      indexhtml:
        custom_css_urls:
          - https://www.kaoriya.net/assets/css/contents.css
Default filters provide a capability to apply implicit filters depending on path prefixes. See the Filters section for details of each filter.

To apply the `tail` filter under the `/file:///var/log/` path:

    default_filters:
      "file:///var/log/":
        - "tail"

If you want to show the last 100 lines, change it like this:

    default_filters:
      "file:///var/log/":
        - "tail=limit:100"

You can specify different filters for different paths:

    default_filters:
      "file:///var/log/":
        - "tail"
      "file:///tmp/":
        - "head"

Default filters can be bypassed per request with the `all` (pseudo) filter.
Default filters are ignored for directory sources of file protocols.
Nvgd supports these filters:
- Grep filter
- Head filter
- Tail filter
- Cut filter
- Pager filter
- Hash filter
- LTSV filter
- JSONArray filter
- Index HTML filter
- HTML Table filter
- Text Table filter
- Markdown filter
- Refresh filter
- Download filter
- TRDSQL filter
- Echarts filter
- All (pseudo) filter
Where `{filters}` is:

    {filter}[&{filter}...]

Where `{filter}` is:

    {filter_name}[={options}]

Where `{options}` is:

    {option_name}:{value}[;{option_name}:{value}...]

See the following sections for details of each filter.
Example: get the last 50 lines, excluding empty lines.

    /file:///var/log/messages?grep=re:^$;match:false&tail=limit:50
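The filter grammar above can be sketched as a small parser. This is a minimal model for illustration; the function `parse_filters` is invented, not nvgd's code:

```python
def parse_filters(query):
    """Parse {filter}[&{filter}...] where each filter is
    {name}[={opt}:{val}[;{opt}:{val}...]]."""
    parsed = []
    for part in query.split("&"):
        name, _, opts = part.partition("=")
        options = {}
        for pair in opts.split(";") if opts else []:
            key, _, value = pair.partition(":")
            options[key] = value
        parsed.append((name, options))
    return parsed

print(parse_filters("grep=re:^$;match:false&tail=limit:50"))
# → [('grep', {'re': '^$', 'match': 'false'}), ('tail', {'limit': '50'})]
```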
Output lines which match a regular expression. By default, matching is done against the whole line, but when a valid `field` option is given, matching is done against the specified field, which is split out by the `delim` character.

`grep` command equivalent.

- filter_name: `grep`
- options:
    - `re` - regular expression used for matching.
    - `match` - output lines that match (true) or don't match (false). default is true.
    - `field` - match against the N'th field counted from 1. default is none (whole line).
    - `delim` - field delimiter string (default: TAB character).
    - `context` - show a few lines before and after the matched line. default is `0` (no context).
    - `number` - when `true`, prefix each line of output with the 1-based line number.
Output the first N lines.

`head` command equivalent.

- filter_name: `head`
- options:
    - `start` - start line number for output. default is 0.
    - `limit` - number of lines to output. default is 10.
Output the last N lines.

`tail` command equivalent.

- filter_name: `tail`
- options:
    - `limit` - number of lines to output. default is 10.
Output selected fields of lines.

`cut` command equivalent.

- filter_name: `cut`
- options:
    - `delim` - field delimiter string (default: TAB character).
    - `white` - use consecutive whitespace as a single field separator (default: false).
    - `list` - selected fields, combinable by comma `,`:
        - `N` - the N'th field counted from 1.
        - `N-M` - from the N'th to the M'th field (inclusive).
        - `N-` - from the N'th field to the end of the line.
        - `-N` - from the first to the N'th field.
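The `list` option's mini-grammar can be illustrated with a short parser. This sketch assumes ranges are inclusive, as described above; the helper `parse_list` is hypothetical:

```python
def parse_list(spec, nfields):
    """Expand a cut-style list option ("1", "2-4", "3-", "-2",
    comma-combinable) into 1-based field indices."""
    selected = []
    for item in spec.split(","):
        if "-" in item:
            lo, _, hi = item.partition("-")
            start = int(lo) if lo else 1        # "-N" starts at field 1
            end = int(hi) if hi else nfields    # "N-" runs to the last field
            selected.extend(range(start, end + 1))
        else:
            selected.append(int(item))
    return selected

print(parse_list("1,3-5,7-", 9))  # → [1, 3, 4, 5, 7, 8, 9]
print(parse_list("-2", 9))        # → [1, 2]
```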
`pager` is a filter that divides the input stream into pages by lines that match a specified pattern.

- filter_name: `pager`
- options:
    - `eop` - regular expression that matches page separator lines.
    - `pages` - page numbers to output (1-based). You can specify multiple pages separated by commas. Examples:
        - `1` - first page only
        - `2,4,6` - pages 2, 4, and 6
        - `-1` - last page
        - `-3` - 3rd page from the end
        - `1,-1` - first and last pages
        - `10-12` - pages 10 to 12
    - `num` - boolean. Output a page number at the top of each page. Example: `(page 12)`
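One way to interpret the `pages` option is sketched below. The helper `select_pages` is hypothetical; it assumes a lone leading `-` counts from the end, while an inner `-` denotes an inclusive range, matching the examples above:

```python
def select_pages(spec, total):
    """Resolve a pager-style pages option into 1-based page numbers.
    Negative values count from the end (-1 = last page)."""
    pages = []
    for item in spec.split(","):
        if item.startswith("-"):        # e.g. "-1": count from the end
            pages.append(total + int(item) + 1)
        elif "-" in item:               # e.g. "10-12": inclusive range
            lo, hi = item.split("-", 1)
            pages.extend(range(int(lo), int(hi) + 1))
        else:                           # e.g. "3": a single page
            pages.append(int(item))
    return pages

print(select_pages("1,-1", 20))   # → [1, 20]
print(select_pages("10-12", 20))  # → [10, 11, 12]
```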
Output a hash value.

- filter_name: `hash`
- options:
    - `algorithm` - one of `md5` (default), `sha1`, `sha256`, or `sha512`.
    - `encoding` - one of `hex` (default), `base64`, or `binary`.
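The algorithm/encoding options map naturally onto Python's standard `hashlib` and `base64` modules. A sketch of the equivalent computation (illustrative only, not nvgd's code):

```python
import base64
import hashlib

def hash_stream(data, algorithm="md5", encoding="hex"):
    """Compute a digest the way the hash filter's options describe:
    algorithm picks the digest, encoding picks the output form."""
    h = hashlib.new(algorithm)   # md5 / sha1 / sha256 / sha512
    h.update(data)
    if encoding == "hex":
        return h.hexdigest()
    if encoding == "base64":
        return base64.b64encode(h.digest()).decode()
    return h.digest()            # "binary"

print(hash_stream(b"hello"))  # → 5d41402abc4b2a76b9719d911017c592
```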
Count lines.

- filter_name: `count`
- options: (none)
A filter that outputs only the specified labels from LTSV rows whose specified label matches a value.

- filter_name: `ltsv`
- options:
    - `grep` - match parameter: `{label},{pattern}`
    - `match` - output when matched or not matched. default is true.
    - `cut` - selected labels, combinable by comma `,`.
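A rough model of the `grep` and `cut` options on LTSV rows (the `match` option is not modeled). The function `ltsv_filter` is illustrative only; it assumes LTSV fields are TAB-separated `label:value` pairs:

```python
import re

def ltsv_filter(lines, grep=None, cut=None):
    """Keep LTSV rows whose {label} value matches {pattern} (the
    grep option), then emit only the labels listed in cut."""
    out = []
    for line in lines:
        row = dict(f.split(":", 1) for f in line.rstrip("\n").split("\t"))
        if grep:
            label, pattern = grep.split(",", 1)
            if not re.search(pattern, row.get(label, "")):
                continue
        if cut:
            row = {k: row[k] for k in cut.split(",") if k in row}
        out.append("\t".join(f"{k}:{v}" for k, v in row.items()))
    return out

print(ltsv_filter(["host:alpha\tstatus:200", "host:beta\tstatus:500"],
                  grep="status,500", cut="host"))  # → ['host:beta']
```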
Convert each line into a string element of a JSON array.

- filter_name: `jsonarray`
- options: (none)
Convert LTSV to Index HTML. (limited to s3list and files (dir) sources for now)

- filter_name: `indexhtml`
- options:
    - `timefmt` - time layout for "Modified At" or so. default is `RFC1123`. Possible values are (case insensitive): `ANSIC`, `UNIX`, `RUBY`, `RFC822`, `RFC822Z`, `RFC850`, `RFC1123`, `RFC1123Z`, `RFC3339`, `RFC3339NANO`, `STAMP`, and `DATETIME`.
- configurations:
    - `custom_css_urls`: list of URLs to link as CSS.

Example: list objects in the S3 bucket "foo" with Index HTML.

    http://127.0.0.1:9280/s3list://foo/?indexhtml

This filter should be placed last in the filter chain.
Convert LTSV to an HTML table.

- filter_name: `htmltable`
- options:
    - `linefeed` - boolean: expand every `\n` as a linefeed.
- configurations:
    - `custom_css_urls`: list of URLs to link as CSS.

Example: query the id and email columns from the users table on the mine database.

    http://127.0.0.1:9280/db://mine/select%20id,email%20from%20users?htmltable

This filter should be placed last in the filter chain.
Convert LTSV to a plain-text table.

- filter_name: `texttable`
- options: (none)

Example: query the id and email columns from the users table on the mine database.

    http://127.0.0.1:9280/db://mine/select%20id,email%20from%20users?texttable

The above query generates this table:

    +-----+-----------------------+
    | id  | email                 |
    +-----+-----------------------+
    |   0 | foo@example.com       |
    |   1 | bar@example.com       |
    +-----+-----------------------+

This filter should be placed last in the filter chain.
Convert markdown text to HTML.

- filter_name: `markdown`
- options: (none)
- configurations:
    - `custom_css_urls`: list of URLs to link as CSS.

Example: show the help in HTML.

    http://127.0.0.1:9280/help://?markdown
    http://127.0.0.1:9280/help/?markdown
Add a "Refresh" header with the specified time (sec).

- filter_name: `refresh`
- options: interval in seconds to refresh; 0 to disable.

Example: open the URL below in a web browser; it refreshes automatically every 5 seconds.

    http://127.0.0.1:9280/file:///var/log/messages?tail&refresh=5
Add a "Content-Disposition: attachment" header to the response. It makes the browser download the resource instead of showing it.

- filter_name: `download`
- options: (none)

Example: download the file "messages"; it will be saved as a file.

    http://127.0.0.1:9280/file:///var/log/messages?download
The TRDSQL filter provides SQL on CSV. See its document for details.

The Echarts filter provides a chart-drawing feature. See its document for details.
Ignore default filters.

- filter_name: `all`
- options: (none)

Example: if some default filters are specified for `file:///var/`, this ignores them.

    http://127.0.0.1:9280/file:///var/log/messages?all
nvgd supports prefix aliases to keep compatibility with koron/night. Currently these aliases are registered:

- `files/` -> `file:///`
- `commands/` -> `command://`
- `config/` -> `config://`
- `help/` -> `help://`
- `trdsql/` -> `trdsql:///`
- `echarts/` -> `echarts:///`
- `version/` -> `version://`

For example, this URL:

    http://127.0.0.1:9280/files/var/log/messages

works the same as this URL:

    http://127.0.0.1:9280/file:///var/log/messages
You can add custom prefix aliases with the `aliases` section in `nvgd.conf.yml`.

For example, with the settings below:

    aliases:
      'dump/': 'db-dump://'

you can dump a "mytable" table in the "mydb" RDBMS with this URL:

    http://127.0.0.1:9280/dump/mydb/mytable

instead of this:

    http://127.0.0.1:9280/db-dump://mydb/mytable

Custom prefix aliases can be used to avoid having the `://` substring in paths.
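Prefix alias resolution amounts to a leading-prefix rewrite. A minimal sketch; the `ALIASES` table and `resolve_alias` helper are illustrative, not nvgd's implementation:

```python
ALIASES = {
    "files/": "file:///",
    "commands/": "command://",
    "dump/": "db-dump://",   # custom alias from the example above
}

def resolve_alias(path):
    """Rewrite a leading prefix alias into its protocol form."""
    path = path.lstrip("/")
    for prefix, target in ALIASES.items():
        if path.startswith(prefix):
            return target + path[len(prefix):]
    return path

print(resolve_alias("/files/var/log/messages"))  # → file:///var/log/messages
print(resolve_alias("/dump/mydb/mytable"))       # → db-dump://mydb/mytable
```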
- koron/night - the previous implementation, written in NodeJS.