onestep image from AWS #1261

Merged · 28 commits · Jul 14, 2020
14 changes: 14 additions & 0 deletions cli_tools/common/utils/logging/service/log_entry.go
@@ -85,6 +85,7 @@ type InputParams struct {
InstanceImportParams *InstanceImportParams `json:"instance_import_input_params,omitempty"`
MachineImageImportParams *MachineImageImportParams `json:"machine_image_import_input_params,omitempty"`
WindowsUpgradeParams *WindowsUpgradeParams `json:"windows_upgrade_input_params,omitempty"`
OnestepImageImportParams *OnestepImageImportParams `json:"onestep_image_import_input_params,omitempty"`
}

// ImageImportParams contains all input params for image import
@@ -116,6 +117,19 @@ type ImageExportParams struct {
Format string `json:"format,omitempty"`
}

// OnestepImageImportParams contains all input params for onestep image import
type OnestepImageImportParams struct {
*ImageImportParams

CloudProvider string `json:"cloud_provider,omitempty"`

// AWS related params
AWSAMIID string `json:"aws_ami_id,omitempty"`
AWSExportLocation string `json:"aws_export_location,omitempty"`
AWSExportedAMIPath string `json:"aws_exported_ami_path,omitempty"`
AWSResumeExportedAMI bool `json:"aws_resume_exported_ami"`
}

// InstanceImportParams contains all input params for instance import
type InstanceImportParams struct {
*CommonParams
1 change: 1 addition & 0 deletions cli_tools/common/utils/logging/service/logger.go
@@ -48,6 +48,7 @@ const (
ImageExportAction = "ImageExport"
InstanceImportAction = "InstanceImport"
MachineImageImportAction = "MachineImageImport"
OneStepImageImportAction = "OneStepImageImport"
WindowsUpgrade = "WindowsUpgrade"

// These strings should be interleaved to construct the real URL. This is just to (hopefully)
103 changes: 103 additions & 0 deletions cli_tools/gce_onestep_image_import/README.md
@@ -0,0 +1,103 @@
## Compute Engine One-step Image Import

The `gce_onestep_image_import` tool imports a VM image from another cloud provider as a
Google Compute Engine image. It uses Daisy to perform the import, adding logic for
import setup and clean-up, such as creating a temporary bucket and validating flags.

### Build
Download and install [Go](https://golang.org/doc/install). Then pull and
install the `gce_onestep_image_import` tool; this places the binary in the
[Go bin directory](https://golang.org/doc/code.html#GOPATH):

```
go get github.com/GoogleCloudPlatform/compute-image-tools/cli_tools/gce_onestep_image_import
```

### Flags

#### Required flags
+ `-image_name=IMAGE_NAME` Name of the disk image to create.
+ `-client_id=CLIENT_ID` Identifies the client of the importer. For example: `gcloud` or
`pantheon`.
+ `-cloud_provider=CLOUD_PROVIDER` Identifies the cloud provider of the source image.
Currently only one cloud provider is supported: CLOUD_PROVIDER must be `aws`.
+ `-os=OS` Specifies the OS of the image being imported.
This must be specified if cloud provider is specified.
OS must be one of: centos-6, centos-7, debian-8, debian-9, rhel-6, rhel-6-byol, rhel-7,
rhel-7-byol, ubuntu-1404, ubuntu-1604, ubuntu-1804, windows-10-byol, windows-2008r2, windows-2008r2-byol,
windows-2012, windows-2012-byol, windows-2012r2, windows-2012r2-byol, windows-2016,
windows-2016-byol, windows-7-byol.

To import from AWS, all of these must be specified:
+ `-aws_access_key_id=AWS_ACCESS_KEY_ID` The Access Key Id for an AWS credential.
This credential is associated with an IAM user or role.
This IAM user must have permissions to import images.
+ `-aws_secret_access_key=AWS_SECRET_ACCESS_KEY` The Secret Access Key for an AWS credential.
This credential is associated with an IAM user or role.
This IAM user must have permissions to import images.
+ `-aws_region=AWS_REGION` The AWS region for the image that you want to import.

To import from AWS, exactly one of the following groups of flags must be specified:

+ To import from an exported image file in S3:
+ `-aws_exported_ami_path=AWS_EXPORTED_AMI_PATH` The S3 resource path of
the exported image file.

+ To import from a VM instance:
+ `-aws_ami_id=AWS_AMI_ID` The AWS AMI ID of the image to import.
  + `-aws_export_location=AWS_EXPORT_LOCATION` The AWS S3 bucket location
  where you want to export the image.

#### Optional flags
+ `-aws_session_token=AWS_SESSION_TOKEN` The AWS session token value that is
required if you are using temporary security credentials.
+ `-no_guest_environment` Google Guest Environment will not be installed on the image.
+ `-family=FAMILY` Family to set for the translated image.
+ `-description=DESCRIPTION` Description to set for the translated image.
+ `-network=NETWORK` Name of the network in your project to use for the image import. The network
must have access to Google Cloud Storage. If not specified, the network named 'default' is used.
+ `-subnet=SUBNET` Name of the subnetwork in your project to use for the image import. If the
network resource is in legacy mode, do not provide this property. If the network is in auto subnet
mode, providing the subnetwork is optional. If the network is in custom subnet mode, then this
field should be specified. Region or zone should be specified if this field is specified.
+ `-zone=ZONE` Zone of the image to import. The zone in which to do the work of
importing the image. Overrides the default compute/zone property value for
this command invocation.
+ `-timeout=TIMEOUT` Maximum time a build can last before it is failed as "TIMEOUT". For example,
specifying 2h will fail the process after 2 hours.
+ `-project=PROJECT` Project to run in, overrides what is set in workflow.
+ `-scratch_bucket_gcs_path=PATH` GCS scratch bucket to use, overrides default set in Daisy.
+ `-oauth=OAUTH_PATH` Path to oauth json file, overrides what is set in workflow.
+ `-compute_endpoint_override=ENDPOINT` Compute API endpoint to override default.
+ `-disable_gcs_logging` Do not stream logs to GCS.
+ `-disable_cloud_logging` Do not stream logs to Cloud Logging.
+ `-disable_stdout_logging` Do not display individual workflow logs on stdout.
+ `-kms-key=KMS_KEY_ID` ID of the key or fully qualified identifier for the key. This flag
must be specified if any of the other arguments below are specified.
+ `-kms-keyring=KMS_KEYRING` The KMS keyring of the key.
+ `-kms-location=KMS_LOCATION` The Cloud location for the key.
+ `-kms-project=KMS_PROJECT` The Cloud project for the key.
+ `-no_external_ip` Set if the VPC does not allow external IPs.
+ `-labels=[KEY=VALUE,...]` List of label KEY=VALUE pairs to add. Keys must start with a
lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and
numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
+ `-storage_location` Location for the imported image which can be any GCS location. If the location
parameter is not included, images are created in the multi-region associated with the source disk,
image, snapshot or GCS bucket.

### Usage

```
gce_onestep_image_import -image_name=IMAGE_NAME -client_id=CLIENT_ID
-cloud_provider=CLOUD_PROVIDER -os=OS -aws_access_key_id=AWS_ACCESS_KEY_ID
-aws_secret_access_key=AWS_SECRET_ACCESS_KEY -aws_region=AWS_REGION
([-aws_exported_ami_path=AWS_EXPORTED_AMI_PATH]|
[-aws_ami_id=AWS_AMI_ID -aws_export_location=AWS_EXPORT_LOCATION])
[-aws_session_token=AWS_SESSION_TOKEN] [-no_guest_environment]
[-family=FAMILY] [-description=DESCRIPTION] [-network=NETWORK]
[-subnet=SUBNET] [-zone=ZONE] [-timeout=TIMEOUT] [-project=PROJECT]
[-scratch_bucket_gcs_path=PATH] [-oauth=OAUTH_PATH]
[-compute_endpoint_override=ENDPOINT] [-disable_gcs_logging]
[-disable_cloud_logging] [-disable_stdout_logging]
[-kms-key=KMS_KEY -kms-keyring=KMS_KEYRING -kms-location=KMS_LOCATION
-kms-project=KMS_PROJECT] [-labels=KEY=VALUE,...]
```
83 changes: 83 additions & 0 deletions cli_tools/gce_onestep_image_import/main.go
@@ -0,0 +1,83 @@
// Copyright 2020 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// GCE one-step image import tool
package main

import (
"fmt"
"log"
"os"

"github.com/GoogleCloudPlatform/compute-image-tools/cli_tools/common/utils/logging/service"
"github.com/GoogleCloudPlatform/compute-image-tools/cli_tools/gce_onestep_image_import/onestep_importer"
)

func main() {
log.SetPrefix("[import-image-from-cloud] ")
// 1. Parse flags
importerArgs, err := importer.NewOneStepImportArguments(os.Args[1:])
if err != nil {
log.Println(err)
os.Exit(1)
}

importEntry := func() (service.Loggable, error) {
return importer.Run(importerArgs)
}

// 2. Run Onestep Importer
if err := service.RunWithServerLogging(
service.OneStepImageImportAction, initLoggingParams(importerArgs), importerArgs.ProjectPtr, importEntry); err != nil {
os.Exit(1)
}
}

func initLoggingParams(args *importer.OneStepImportArguments) service.InputParams {
return service.InputParams{
OnestepImageImportParams: &service.OnestepImageImportParams{
ImageImportParams: &service.ImageImportParams{
CommonParams: &service.CommonParams{
ClientID: args.ClientID,
Network: args.Network,
Subnet: args.Subnet,
Zone: args.Zone,
Timeout: args.Timeout.String(),
Project: *args.ProjectPtr,
ObfuscatedProject: service.Hash(*args.ProjectPtr),
Labels: fmt.Sprintf("%v", args.Labels),
ScratchBucketGcsPath: args.ScratchBucketGcsPath,
Oauth: args.Oauth,
ComputeEndpointOverride: args.ComputeEndpoint,
DisableGcsLogging: args.GcsLogsDisabled,
DisableCloudLogging: args.CloudLogsDisabled,
DisableStdoutLogging: args.StdoutLogsDisabled,
},
ImageName: args.ImageName,
DataDisk: args.DataDisk,
OS: args.OS,
NoGuestEnvironment: args.NoGuestEnvironment,
Family: args.Family,
Description: args.Description,
NoExternalIP: args.NoExternalIP,
SourceFile: args.SourceFile,
StorageLocation: args.StorageLocation,
},
CloudProvider: args.CloudProvider,
AWSAMIID: args.AWSAMIID,
AWSExportLocation: args.AWSExportLocation,
AWSExportedAMIPath: args.AWSExportedAMIPath,
},
}
}
169 changes: 169 additions & 0 deletions cli_tools/gce_onestep_image_import/onestep_importer/aws_args.go
@@ -0,0 +1,169 @@
// Copyright 2020 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package importer

import (
"fmt"
"regexp"
"strings"

"github.com/GoogleCloudPlatform/compute-image-tools/cli_tools/common/utils/param"
"github.com/GoogleCloudPlatform/compute-image-tools/cli_tools/common/utils/validation"
"github.com/GoogleCloudPlatform/compute-image-tools/daisy"
)

// awsImportArguments holds the structured results of parsing CLI arguments,
// and optionally allows for validating and populating the arguments.
type awsImportArguments struct {
// Passed in by user
accessKeyID string
amiID string
executablePath string
exportLocation string
exportedAMIPath string
gcsComputeEndpoint string
gcsProjectPtr *string
gcsZone string
gcsRegion string
gcsScratchBucket string
gcsStorageLocation string
region string
secretAccessKey string
sessionToken string

// Internal generated
exportBucket string
exportFolder string
exportKey string
exportFileSize int64
}

// Flags
const (
awsAMIIDFlag = "aws_ami_id"
awsExportLocationFlag = "aws_export_location"
awsAccessKeyIDFlag = "aws_access_key_id"
awsSecretAccessKeyFlag = "aws_secret_access_key"
awsSessionTokenFlag = "aws_session_token"
awsRegionFlag = "aws_region"
awsExportedAMIPathFlag = "aws_exported_ami_path"
)

var (
bucketNameRegex = `[a-z0-9][-_.a-z0-9]*`
s3PathRegex = regexp.MustCompile(fmt.Sprintf(`^s3://(%s)(\/.*)?$`, bucketNameRegex))
)

// newAWSImportArguments creates a new awsImportArguments instance.
func newAWSImportArguments(args *OneStepImportArguments) *awsImportArguments {
return &awsImportArguments{
accessKeyID: args.AWSAccessKeyID,
amiID: args.AWSAMIID,
executablePath: args.ExecutablePath,
exportLocation: args.AWSExportLocation,
exportedAMIPath: args.AWSExportedAMIPath,
gcsComputeEndpoint: args.ComputeEndpoint,
gcsProjectPtr: args.ProjectPtr,
gcsZone: args.Zone,
gcsRegion: args.Region,
gcsScratchBucket: args.ScratchBucketGcsPath,
gcsStorageLocation: args.StorageLocation,
region: args.AWSRegion,
secretAccessKey: args.AWSSecretAccessKey,
sessionToken: args.AWSSessionToken,
}
}

// validateAndPopulate validates args related to import from AWS, and populates
// any missing parameters.
func (args *awsImportArguments) validateAndPopulate(populator param.Populator) error {
err := args.validate()
if err != nil {
return err
}

err = populator.PopulateMissingParameters(args.gcsProjectPtr, &args.gcsZone, &args.gcsRegion,
&args.gcsScratchBucket, "", &args.gcsStorageLocation)
if err != nil {
return err
}

return args.generateS3PathElements()
}

func (args *awsImportArguments) validate() error {
if err := validation.ValidateStringFlagNotEmpty(args.accessKeyID, awsAccessKeyIDFlag); err != nil {
return err
}
if err := validation.ValidateStringFlagNotEmpty(args.secretAccessKey, awsSecretAccessKeyFlag); err != nil {
return err
}
if err := validation.ValidateStringFlagNotEmpty(args.region, awsRegionFlag); err != nil {
return err
}

needsExport := args.amiID != "" && args.exportLocation != "" && args.exportedAMIPath == ""
isResumeExported := args.amiID == "" && args.exportLocation == "" && args.exportedAMIPath != ""

if !(needsExport || isResumeExported) {
return daisy.Errf("specify -%v to import from "+
"exported image file, or both -%v and -%v to "+
"import from AMI", awsExportedAMIPathFlag, awsAMIIDFlag, awsExportLocationFlag)
}

return nil
}
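The mutual-exclusion check above can be exercised in isolation. This sketch extracts the group logic into a hypothetical standalone helper (`validateGroups` is not part of the tool's API; flag names are inlined for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// validateGroups mirrors the check in validate(): either both the AMI ID and
// the export location are set (a fresh export is needed), or only the
// exported-AMI path is set (resume from an already-exported image).
// Any other combination is rejected.
func validateGroups(amiID, exportLocation, exportedAMIPath string) error {
	needsExport := amiID != "" && exportLocation != "" && exportedAMIPath == ""
	isResumeExported := amiID == "" && exportLocation == "" && exportedAMIPath != ""
	if !(needsExport || isResumeExported) {
		return errors.New("specify -aws_exported_ami_path, or both -aws_ami_id and -aws_export_location")
	}
	return nil
}

func main() {
	// Fresh export: AMI ID plus export location.
	fmt.Println(validateGroups("ami-123", "s3://bucket/exports/", "") == nil)
	// Resume: exported image path only.
	fmt.Println(validateGroups("", "", "s3://bucket/image.vmdk") == nil)
	// Mixing the two groups is an error.
	fmt.Println(validateGroups("ami-123", "", "s3://bucket/image.vmdk") != nil)
}
```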

// isExportRequired returns true if AMI needs to be exported, false otherwise.
func (args *awsImportArguments) isExportRequired() bool {
return args.exportedAMIPath == ""
}

// generateS3PathElements extracts the bucket name, and the folder or object
// key depending on whether the AMI has already been exported. An error is
// returned if the path is not a valid S3 object path.
func (args *awsImportArguments) generateS3PathElements() error {
var err error

if args.isExportRequired() {
// Export required, get metadata from provided export location.
args.exportBucket, args.exportFolder, err = splitS3Path(args.exportLocation)
if err != nil {
return err
}

if args.exportFolder != "" && !strings.HasSuffix(args.exportFolder, "/") {
args.exportFolder += "/"
}
} else {
// AMI already exported, get metadata from the provided object path.
args.exportBucket, args.exportKey, err = splitS3Path(args.exportedAMIPath)
if err != nil {
return err
}
if args.exportKey == "" {
return daisy.Errf("%v is not a valid S3 file path", args.exportedAMIPath)
}
}
return nil
}

// splitS3Path splits S3 path into bucket and object path portions
func splitS3Path(path string) (string, string, error) {
matches := s3PathRegex.FindStringSubmatch(path)
if matches != nil {
return matches[1], strings.TrimLeft(matches[2], "/"), nil
}
return "", "", daisy.Errf("%v is not a valid AWS S3 path", path)
}
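The bucket/key split can be tried standalone. This sketch reproduces the regex and function above in a self-contained program (error handling uses plain `fmt.Errorf` instead of `daisy.Errf`):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same pattern as in aws_args.go: an S3 bucket name followed by an
// optional "/key" tail.
var s3PathRegex = regexp.MustCompile(`^s3://([a-z0-9][-_.a-z0-9]*)(\/.*)?$`)

// splitS3Path splits an S3 path into bucket and object-key portions.
func splitS3Path(path string) (bucket, key string, err error) {
	matches := s3PathRegex.FindStringSubmatch(path)
	if matches == nil {
		return "", "", fmt.Errorf("%v is not a valid AWS S3 path", path)
	}
	return matches[1], strings.TrimLeft(matches[2], "/"), nil
}

func main() {
	bucket, key, _ := splitS3Path("s3://my-bucket/exports/image.vmdk")
	fmt.Println(bucket, key)

	// A bare bucket yields an empty key, which generateS3PathElements
	// rejects when resuming from an already-exported image.
	bucket, key, _ = splitS3Path("s3://my-bucket")
	fmt.Printf("%q %q\n", bucket, key)
}
```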