Merged
28 changes: 20 additions & 8 deletions docs/AppFramework.md
@@ -7,7 +7,7 @@ The Splunk Operator provides support for Splunk App and Add-on deployment using
Utilizing the App Framework requires:

* An Amazon S3 or S3-API-compliant remote object storage location. App framework requires read-only access to the path containing the apps.
* The remote object storage credentials.
* The remote object storage credentials, provided either through a Kubernetes secret or an IAM role.
* Splunk Apps and Add-ons in a .tgz or .spl archive format.
* Connections to the remote object storage endpoint must be secured using TLS 1.2 or later.

@@ -17,8 +17,12 @@ Utilizing the App Framework requires:
In this example, you'll deploy a Standalone CR with a remote storage volume, the location of the app archives, and set the installation location for the Splunk Enterprise Pod instance by using `scope`.

1. Confirm your S3-based remote storage volume path and URL.
2. Create a Kubernetes Secret Object with the storage credentials.
* Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=AKIAIOSFODNN7EXAMPLE --from-literal=s3_secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`

2. Configure credentials to connect to the remote store by either:
* Configuring an IAM role for the Operator and Splunk instance pods using a service account or annotations, or
* Creating a Kubernetes Secret object with the static storage credentials.
* Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=AKIAIOSFODNN7EXAMPLE --from-literal=s3_secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`
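As a sketch of the IAM role option, the service-account route on Amazon EKS might look like the following. This is a hypothetical example: the annotation mechanism shown (IRSA) is specific to EKS, and the service account name, namespace, and role ARN are illustrative, not part of this repository.

```yaml
# Hypothetical sketch: a service account annotated for EKS IRSA, so that
# pods using it receive IAM role credentials instead of static keys.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: splunk-s3-reader           # illustrative name
  namespace: splunk-operator       # illustrative namespace
  annotations:
    # Illustrative role ARN; the role must grant read access to the app bucket.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/splunk-app-reader
```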

3. Create folders on the remote storage volume to use as App Source locations.
* An App Source is a folder on the remote storage volume containing a subset of Splunk Apps and Add-ons. In this example, we split the network and authentication Splunk Apps into different folders and named them `networkApps` and `authApps`.

@@ -72,8 +76,12 @@ For more information, see the [Description of App Framework Specification fields
This example describes the installation of apps on an Indexer Cluster as well as the Cluster Manager. This is achieved by deploying a ClusterMaster CR with a remote storage volume, the location of the app archives, and the installation scope set to support both local and cluster app distribution.

1. Confirm your S3-based remote storage volume path and URL.
2. Create a Kubernetes Secret Object with the storage credentials.
* Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=AKIAIOSFODNN7EXAMPLE --from-literal=s3_secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`

2. Configure credentials to connect to the remote store by either:
* Configuring an IAM role for the Operator and Splunk instance pods using a service account or annotations, or
* Creating a Kubernetes Secret object with the static storage credentials.
* Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=AKIAIOSFODNN7EXAMPLE --from-literal=s3_secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`

3. Create folders on the remote storage volume to use as App Source locations.
* An App Source is a folder on the remote storage volume containing a subset of Splunk Apps and Add-ons. In this example, we have Splunk apps that are installed and run locally on the cluster manager, and apps that will be distributed to all cluster peers by the cluster manager.
* The apps are split across 3 folders named `networkApps`, `clusterBase`, and `adminApps`. The apps placed into `networkApps` and `clusterBase` are distributed to the cluster peers, but the apps in `adminApps` are for local use on the cluster manager instance only.
@@ -131,8 +139,12 @@ For more information, see the [Description of App Framework Specification fields
This example describes the installation of apps on a Search Head Cluster as well as the Deployer. This is achieved by deploying a SearchHeadCluster CR with a storage volume, the location of the app archives, and the installation scope set to support both local and cluster app distribution.

1. Confirm your S3-based remote storage volume path and URL.
2. Create a Kubernetes Secret Object with the storage credentials.
* Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=AKIAIOSFODNN7EXAMPLE --from-literal=s3_secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`

2. Configure credentials to connect to the remote store by either:
* Configuring an IAM role for the Operator and Splunk instance pods using a service account or annotations, or
* Creating a Kubernetes Secret object with the static storage credentials.
* Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=AKIAIOSFODNN7EXAMPLE --from-literal=s3_secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`

3. Create folders on the remote storage volume to use as App Source locations.
* An App Source is a folder on the remote storage volume containing a subset of Splunk Apps and Add-ons. In this example, we have Splunk apps that are installed and run locally on the Deployer, and apps that will be distributed to all cluster search heads by the Deployer.
* The apps are split across 4 folders named `searchApps`, `machineLearningApps`, `adminApps`, and `ESapps`. The apps placed into `searchApps`, `machineLearningApps`, and `ESapps` are distributed to the search heads, but the apps in `adminApps` are for local use on the Deployer instance only.
@@ -287,7 +299,7 @@ Here is a typical App framework configuration in a Custom resource definition:
* `storageType` describes the type of remote storage. Currently `s3` is the only supported type
* `provider` describes the remote storage provider. Currently `aws` & `minio` are the supported providers
* `endpoint` helps configure the URI/URL of the remote storage endpoint that hosts the apps
* `secretRef` refers to the K8s secret object containing the remote storage access key
* `secretRef` refers to the K8s secret object containing the static remote storage access key. This parameter is not required when using IAM role-based credentials.
* `path` describes the path (including the bucket) of one or more app sources on the remote store
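Putting the fields above together, a volume entry might look like the following sketch. The bucket, endpoint, and names are illustrative, not taken from this repository; `secretRef` is dropped when IAM role credentials are used.

```yaml
volumes:
  - name: volume_app_repo        # referenced by appSources
    storageType: s3              # currently the only supported type
    provider: aws                # or minio
    endpoint: https://s3-us-west-2.amazonaws.com
    path: bucket-app-framework/apps-us-west/   # illustrative bucket and prefix
    secretRef: s3-secret         # omit when using IAM role-based credentials
```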

### appSources
12 changes: 8 additions & 4 deletions docs/SmartStore.md
@@ -11,16 +11,18 @@ The Splunk Operator includes a method for configuring a SmartStore remote storag
* Data in already existing indexes should be migrated from local storage to the remote store as a prerequisite before configuring those indexes in the Custom Resource of the Splunk Operator. For more details, please see [Migrate existing data on an indexer cluster to SmartStore](https://docs.splunk.com/Documentation/Splunk/latest/Indexer/MigratetoSmartStore#Migrate_existing_data_on_an_indexer_cluster_to_SmartStore).


SmartStore configuration involves indexes, volumes, and the volume credentials. Indexes and volume configurations are configured through the Custom Resource specification. However, the volume credentials are configured securely in a Kubernetes secret object, and that secret object is referred by the Custom Resource with SmartStore volume spec, through `SecretRef`
SmartStore configuration involves indexes, volumes, and the volume credentials. Indexes and volume configurations are set through the Custom Resource specification. The volume credentials can be provided either through an IAM role or through static credentials. IAM roles can be configured via service accounts or annotations. Static credentials are stored securely in a Kubernetes secret object, which the Custom Resource references in the SmartStore volume spec through `SecretRef`

## Storing Smartstore Secrets
Here is an example command to encode and load your remote storage volume secret key and access key in the kubernetes secret object: `kubectl create secret generic <secret_store_obj> --from-literal=s3_access_key=<access_key> --from-literal=s3_secret_key=<secret_key>`
Here is an example command to load your static remote storage volume access key and secret key into a Kubernetes secret object: `kubectl create secret generic <secret_store_obj> --from-literal=s3_access_key=<access_key> --from-literal=s3_secret_key=<secret_key>`

Example: `kubectl create secret generic s3-secret --from-literal=s3_access_key=iRo9guRpeT2EWn18QvpdcqLBcZmW1SDg== --from-literal=s3_secret_key=ZXvNDSfRo64UelY7Y4JZTO1iGSZt5xaQ2`


## Creating a SmartStore-enabled Standalone instance
1. Create a Secret object with Secret & Access credentials, as explained in [Storing SmartStore Secrets](#storing-smartstore-secrets)
1. Configure remote store credentials by either:
* Configuring IAM role-based credentials via a service account or annotations, or
* Creating a Secret object with the secret and access credentials, as explained in [Storing SmartStore Secrets](#storing-smartstore-secrets)
2. Confirm your S3-based storage volume path and URL.
3. Confirm the name of the Splunk indexes being used with the SmartStore volume.
4. Create/Update the Standalone Custom Resource specification with volume and index configuration (see Example below)
@@ -66,7 +68,9 @@ Note: Custom apps with higher precedence can potentially overwrite the index and


## Creating a SmartStore-enabled Indexer Cluster
1. Create a Secret object with Secret & Access credentials, as explained in [Storing SmartStore Secrets](#storing-smartstore-secrets)
1. Configure remote store credentials by either:
* Configuring IAM role-based credentials via a service account or annotations, or
* Creating a Secret object with the secret and access credentials, as explained in [Storing SmartStore Secrets](#storing-smartstore-secrets)
2. Confirm your S3-based storage volume path and URL.
3. Confirm the name of the Splunk indexes being used with the SmartStore volume.
4. Create/Update the Cluster Manager Custom Resource specification with volume and index configuration (see Example below)
23 changes: 16 additions & 7 deletions pkg/splunk/client/awss3client.go
@@ -79,15 +79,24 @@ func InitAWSClientSession(region, accessKeyID, secretAccessKey string) SplunkAWS
tr.ForceAttemptHTTP2 = true
httpClient := http.Client{Transport: tr}

sess, err := session.NewSession(&aws.Config{
Region: aws.String(region),
Credentials: credentials.NewStaticCredentials(
accessKeyID, // id
secretAccessKey, // secret
""),
var err error
var sess *session.Session
config := &aws.Config{
Region: aws.String(region),
MaxRetries: aws.Int(3),
HTTPClient: &httpClient,
})
}

if accessKeyID != "" && secretAccessKey != "" {
config.WithCredentials(credentials.NewStaticCredentials(
accessKeyID, // id
secretAccessKey, // secret
""))
} else {
scopedLog.Info("No valid access/secret keys. Attempt to connect without them")
}

sess, err = session.NewSession(config)
if err != nil {
scopedLog.Error(err, "Failed to initialize an AWS S3 session.")
return nil
15 changes: 12 additions & 3 deletions pkg/splunk/client/minioclient.go
@@ -99,10 +99,19 @@ func InitMinioClientSession(appS3Endpoint string, accessKeyID string, secretAcce
// New returns an Minio compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
scopedLog.Info("Connecting to Minio S3 for apps", "appS3Endpoint", appS3Endpoint)
s3Client, err := minio.New(appS3Endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
var s3Client *minio.Client
var err error

options := &minio.Options{
Secure: useSSL,
})
}
if accessKeyID != "" && secretAccessKey != "" {
options.Creds = credentials.NewStaticV4(accessKeyID, secretAccessKey, "")
} else {
scopedLog.Info("No Access/Secret Keys, attempt connection without them using IAM", "appS3Endpoint", appS3Endpoint)
options.Creds = credentials.NewIAM("")
}
s3Client, err = minio.New(appS3Endpoint, options)
if err != nil {
scopedLog.Info("Error creating new Minio Client Session", "err", err)
return nil
65 changes: 42 additions & 23 deletions pkg/splunk/enterprise/configuration.go
@@ -847,29 +847,33 @@ func AreRemoteVolumeKeysChanged(client splcommon.ControllerClient, cr splcommon.
return false
}

scopedLog := log.WithName("CheckIfsmartstoreConfigMapUpdatedToPod").WithValues("name", cr.GetName(), "namespace", cr.GetNamespace())
scopedLog := log.WithName("AreRemoteVolumeKeysChanged").WithValues("name", cr.GetName(), "namespace", cr.GetNamespace())

volList := smartstore.VolList
for _, volume := range volList {
namespaceScopedSecret, err := splutil.GetSecretByName(client, cr, volume.SecretRef)
// Ideally, this should have been detected in Spec validation time
if err != nil {
*retError = fmt.Errorf("Not able to access secret object = %s, reason: %s", volume.SecretRef, err)
return false
}
if volume.SecretRef != "" {
namespaceScopedSecret, err := splutil.GetSecretByName(client, cr, volume.SecretRef)
// Ideally, this should have been detected in Spec validation time
if err != nil {
*retError = fmt.Errorf("Not able to access secret object = %s, reason: %s", volume.SecretRef, err)
return false
}

// Check if the secret version is already tracked, and if there is a change in it
if existingSecretVersion, ok := ResourceRev[volume.SecretRef]; ok {
if existingSecretVersion != namespaceScopedSecret.ResourceVersion {
scopedLog.Info("Secret Keys changed", "Previous Resource Version", existingSecretVersion, "Current Version", namespaceScopedSecret.ResourceVersion)
ResourceRev[volume.SecretRef] = namespaceScopedSecret.ResourceVersion
return true
// Check if the secret version is already tracked, and if there is a change in it
if existingSecretVersion, ok := ResourceRev[volume.SecretRef]; ok {
if existingSecretVersion != namespaceScopedSecret.ResourceVersion {
scopedLog.Info("Secret Keys changed", "Previous Resource Version", existingSecretVersion, "Current Version", namespaceScopedSecret.ResourceVersion)
ResourceRev[volume.SecretRef] = namespaceScopedSecret.ResourceVersion
return true
}
return false
}
return false
}

// First time adding to track the secret resource version
ResourceRev[volume.SecretRef] = namespaceScopedSecret.ResourceVersion
// First time adding to track the secret resource version
ResourceRev[volume.SecretRef] = namespaceScopedSecret.ResourceVersion
} else {
scopedLog.Info("No valid SecretRef for volume. No secret to track.", "volumeName", volume.Name)
}
}

return false
@@ -1034,6 +1038,8 @@ func validateRemoteVolumeSpec(volList []enterpriseApi.VolumeSpec, isAppFramework

duplicateChecker := make(map[string]bool)

scopedLog := log.WithName("validateRemoteVolumeSpec")

// Make sure that all the Volumes are provided with the mandatory config values.
for i, volume := range volList {
if _, ok := duplicateChecker[volume.Name]; ok {
@@ -1050,8 +1056,9 @@
if volume.Path == "" {
return fmt.Errorf("Volume Path is missing")
}
// Make the secretRef optional if they are using IAM roles
if volume.SecretRef == "" {
return fmt.Errorf("Volume SecretRef is missing")
scopedLog.Info("No valid SecretRef for volume.", "volumeName", volume.Name)
}

// provider is used in App framework to pick the S3 client(aws, minio), and is not applicable to Smartstore
@@ -1146,21 +1153,33 @@ func ValidateSplunkSmartstoreSpec(smartstore *enterpriseApi.SmartStoreSpec) erro
func GetSmartstoreVolumesConfig(client splcommon.ControllerClient, cr splcommon.MetaObject, smartstore *enterpriseApi.SmartStoreSpec, mapData map[string]string) (string, error) {
var volumesConf string

scopedLog := log.WithName("GetSmartstoreVolumesConfig")

volumes := smartstore.VolList
for i := 0; i < len(volumes); i++ {
s3AccessKey, s3SecretKey, _, err := GetSmartstoreRemoteVolumeSecrets(volumes[i], client, cr, smartstore)
if err != nil {
return "", fmt.Errorf("Unable to read the secrets for volume = %s. %s", volumes[i].Name, err)
}
if volumes[i].SecretRef != "" {
s3AccessKey, s3SecretKey, _, err := GetSmartstoreRemoteVolumeSecrets(volumes[i], client, cr, smartstore)
if err != nil {
return "", fmt.Errorf("Unable to read the secrets for volume = %s. %s", volumes[i].Name, err)
}

volumesConf = fmt.Sprintf(`%s
volumesConf = fmt.Sprintf(`%s
[volume:%s]
storageType = remote
path = s3://%s
remote.s3.access_key = %s
remote.s3.secret_key = %s
remote.s3.endpoint = %s
`, volumesConf, volumes[i].Name, volumes[i].Path, s3AccessKey, s3SecretKey, volumes[i].Endpoint)
} else {
scopedLog.Info("No valid secretRef configured. Configure volume without access/secret keys", "volumeName", volumes[i].Name)
volumesConf = fmt.Sprintf(`%s
[volume:%s]
storageType = remote
path = s3://%s
remote.s3.endpoint = %s
`, volumesConf, volumes[i].Name, volumes[i].Path, volumes[i].Endpoint)
}
}

return volumesConf, nil
4 changes: 2 additions & 2 deletions pkg/splunk/enterprise/configuration_test.go
Expand Up @@ -636,8 +636,8 @@ func TestValidateAppFrameworkSpec(t *testing.T) {

AppFramework.VolList[0].SecretRef = ""
err = ValidateAppFrameworkSpec(&AppFramework, &appFrameworkContext, false)
if err == nil {
t.Errorf("Missing Secret Object reference should error out")
if err != nil {
t.Errorf("Missing Secret Object reference is a valid config that should not cause error: %v", err)
}
AppFramework.VolList[0].SecretRef = "s3-secret"
