Each Git tag (without the leading 'v') is built and uploaded to S3, from where it can be downloaded:
```
export VERSION=2.69.0-b9
curl -LO "https://s3.amazonaws.com/dp-cli/dp-cli_${VERSION}_Darwin_x86_64.tgz"
curl -LO "https://s3.amazonaws.com/dp-cli/dp-cli_${VERSION}_Darwin_arm64.tgz"
curl -LO "https://s3.amazonaws.com/dp-cli/dp-cli_${VERSION}_Linux_x86_64.tgz"
curl -LO "https://s3.amazonaws.com/dp-cli/dp-cli_${VERSION}_Windows_x86_64.tgz"
```
To see the available commands, run `dp -h`:

```
NAME:
   DataPlane command line tool

USAGE:
   dp [global options] command [command options]

VERSION:
   snapshot-2018-11-19T19:59:11

AUTHOR(S):
   Hortonworks

COMMANDS:
     audit         audit related operations
     blueprint     blueprint related operations
     cloud         information about cloud provider resources
     cluster       cluster related operations
     completion    prints the bash completion function
     configure     configure the server address and credentials used to communicate with this server
     credential    credential related operations
     database      database management related operations
     env           environment related operations
     imagecatalog  imagecatalog related operations
     ldap          ldap related operations
     mpack         management pack related operations
     proxy         proxy related operations
     recipe        recipe related operations
     user          user related operations
     workspace     workspace related operations
     help, h       Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug        debug mode [$DEBUG]
   --help, -h     show help
   --version, -v  print the version
```
Each command provides a help flag with a description and the accepted flags and subcommands, e.g. `dp configure -h`:

```
NAME:
   DataPlane command line tool

USAGE:
   Hortonworks DataPlane command line tool configure [command options]

DESCRIPTION:
   it will save the provided server address and credential to ~/.dp/config

REQUIRED OPTIONS:
   --server value  server address [$CB_SERVER_ADDRESS]

OPTIONS:
   --profile value     selects a config profile to use [$CB_PROFILE]
   --workspace value   name of the workspace [$CB_WORKSPACE]
   --apikeyid value    API key ID
   --privatekey value  API private key
   --output value      supported formats: json, yaml, table (default: "json") [$CB_OUT_FORMAT]
```
Although the connection details can be passed to every command as global flags, it is recommended to save them in a configuration. A configuration entry contains the DataPlane server's address, the username, and optionally the password and the output format. Multiple configuration profiles can be saved by specifying the `--profile` switch; the same switch can be used as a global flag with the other commands to select a specific profile. If the profile switch is omitted, the default profile is saved and used.
```
dp configure --server https://ec2-52-29-224-64...compute.amazonaws.com --workspace your@email --profile dataplane-staging
```
Note: if you are using a mocked UMS, the following command generates an access key/private key pair that the mock accepts:

```
dp generate-mock-apikeys --tenant-name default --tenant-user myusername@cloudera.com
```

You need to update your ~/.dp/config file with these values.
The provided parameters are saved into the configuration profile in the user's home directory; to see its content, run `cat ~/.dp/config`. If this config file is present, you no longer need to specify the connection flags; otherwise you need to pass them to every command:
```
dp cluster list --server https://ec2-52-29-224-64...compute.amazonaws.com --workspace your@email
```
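For illustration only, a saved default profile might look roughly like this; the key names and layout below are an assumption inferred from the `configure` flags above, so treat your real `~/.dp/config` as the source of truth:

```yaml
# Hypothetical ~/.dp/config content; key names inferred from the configure flags.
default:
  server: https://ec2-52-29-224-64...compute.amazonaws.com
  workspace: your@email
  apikeyid: <your-api-key-id>
  privatekey: <your-private-key>
  output: json
```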
The `--apikeyid` and `--privatekey` configuration parameters are generated by Cloudera Altus; please refer to its documentation.
To create a cluster with the CLI, put together a cluster descriptor file and pass it as input to the `cluster create` command:

```
dp cluster create --cli-input-json create_cluster.json
```
The cluster descriptor is essentially the JSON request that is sent to the DataPlane API. The full reference for this descriptor file can be found in the API docs. The CLI can help with creating the skeleton of the cluster descriptor JSON; the following command outputs a descriptor file with empty values:
```
dp cluster generate-template aws existing-subnet --blueprint-name "my-custom-blueprint"
```
The `aws` and `existing-subnet` keywords are subcommands of the `cluster generate-template` command and help create a skeleton with the proper entries for the selected cloud provider and network configuration. Use the `-h` option to see the available subcommands, e.g.:

```
dp cluster generate-template -h
```
Direct the output to a file to save the skeleton locally:

```
dp cluster generate-template aws existing-subnet > create_cluster.json
```
To create a cluster, fill in the empty values or change the existing ones, then use the `cluster create` command shown above.
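As a sketch of that round trip, the skeleton's values can be filled in with any text tool before submitting; the JSON below is a stand-in for the real `generate-template` output, and the `name` field is hypothetical:

```shell
#!/bin/sh
# Stand-in skeleton; the real one comes from `dp cluster generate-template ...`
# and its fields are defined by the API docs.
cat > create_cluster.json <<'EOF'
{
  "name": ""
}
EOF
# Fill in the empty value (sed -i.bak works on both GNU and BSD sed).
sed -i.bak 's/"name": ""/"name": "my-cluster"/' create_cluster.json
cat create_cluster.json
# dp cluster create --cli-input-json create_cluster.json   # then submit it
```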
To terminate the previously created cluster, use the `cluster delete` command:

```
dp cluster delete --cluster-name my-cluster
```
To check the properties of a running cluster, the `cluster describe` command can be useful:

```
dp cluster describe --cluster-name my-cluster
```
To enable bash completion, run the following command and follow the instructions:

```
dp completion
```
To enable debug logging, use the `--debug` global switch:

```
dp --debug cluster list
```

or provide it as an environment variable:

```
export DEBUG=1
```

or inline:

```
DEBUG=1 dp cluster list
```
To use the dp CLI behind a proxy, set the following environment variable:

```
export HTTP_PROXY=10.0.0.133:3128
```

or, with basic auth:

```
export HTTP_PROXY=http://user:pass@10.0.0.133:3128/
```
This project uses Go modules for dependency management, introduced in Go 1.11. The Makefile enables module support explicitly on build goals, but you can also enable it system-wide by setting:

```
export GO111MODULE=on
```
Top-level commands like `cluster` are separated into a cmd package; see the `dataplane/cmd` folder for examples. Each of these resource-specific files contains an `init` function that adds the resource's commands to an internal array:

```go
func init() {
	DataPlaneCommands = append(DataPlaneCommands, cli.Command{})
}
```
The `init()` function is automatically invoked for each file in the `cmd` folder, because the package is referenced from the `main.go` file:

```go
import (
	"github.com/hortonworks/cb-cli/dataplane/cmd"
)
```
and then added to the main app:

```go
app.Commands = append(app.Commands, cmd.DataPlaneCommands...)
```
To implement new top-level commands, create your own folder structure and reproduce the steps above:

- Create your own resource separation files/folders
- Implement the `init()` function in these files
- Reference these files from the `main.go` file
- Add your commands to `app.Commands`
If you'd like to introduce sub-commands for already existing top-level commands that are not DataPlane specific, you can move the `cluster.go` file (for example) from the `dataplane/cmd` folder to a top-level `cmd` folder and reference it from the `main.go` file as described above.
The CLI also supports a plugin model: you can create similar CLI tools, build them independently of this repository (but with the same framework), and have them invoked by the main CLI tool. Let's assume you'd like to create a `dlm` CLI this way. If you build the `dlm` binary and put it in your `$PATH`, you can invoke it through the main CLI tool:

```
dp dlm my-command
```
In order to do this, the main CLI needs to be built with plugin mode enabled:

```
PLUGIN_ENABLED=true make build
```
This way you have introduced another top-level command, with the advantage that you can:

* dynamically install/enable/disable top-level commands without rebuilding or re-downloading the top-level CLI
* dynamically upgrade commands without rebuilding or re-downloading the top-level CLI
* develop the plugin independently of this repository
* keep an independent CLI tool that can also be invoked without the top-level CLI
This repository also contains functional tests for the CB-CLI (End to End and Integration suites):

- You can read more about these tests and how to run them in the `tests` folder README
- You can find the test project in the `tests` folder