This repository has been archived by the owner on Jan 12, 2024. It is now read-only.

Deploy customized solution #2

Open
sachalau opened this issue Jun 8, 2020 · 10 comments

Comments

@sachalau

sachalau commented Jun 8, 2020

Hello!

I'm trying to customize the workflow (mainly making some changes to the AWS::EC2::LaunchTemplate resource) and have so far failed to deploy the customized solution from my own S3 bucket. I did manage to interact with the solution using the "as-is" template.

I've made my changes in the files (batch.cfn.yaml) and tried following the README to get my solution ready.

First, I'm a bit confused about how the S3 bucket for the source code should be named. Is it my-bucket-name-us-east-1 or my-bucket-name? Looking at the template file:

Mappings:
  Send:
    AnonymousUsage:
      Data: Yes
  SourceCode:
    General:
      S3Bucket: 'my-bucket-name'

The region identifier is not present.

Anyway, I've uploaded the global-s3-assets folder (and not dist) to both my-bucket-name and my-bucket-name-us-east-1.
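Roughly, the upload looked like this (the key prefix and version below are placeholders for whatever the build script produced, not the actual values I used):

# Placeholder key prefix/version; adjust to the values expected by the build.
aws s3 sync global-s3-assets/ s3://my-bucket-name/solution-name/v1.0.0/
aws s3 sync global-s3-assets/ s3://my-bucket-name-us-east-1/solution-name/v1.0.0/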

When I try to create the stack in CloudFormation using the S3 path of the template in my buckets directly, the template is read, but when I click Next I get the following error:

Domain name specified in my-bucket-name-us-east-1 is not a valid S3 domain

However, when I upload the generated template file directly from my hard drive, the CloudFormation stack creation starts.

However, I'm afraid I'm getting the same error as in #1:

Failed to create resource. Code Build job 'GenomicsWorkflow2-Setup:c3605228-d405-4413-b3a4-134ca89e97d8' in project 'GenomicsWorkflow2-Setup' exited with a build status of 'FAILED'.

Thanks for your help!

Sorry, I found the error:

aws s3 cp s3://customgenomicsworkflow-us-east-1/myawssolution/1/samples/NIST7035_R1_trim_samp-0p1.fastq.gz s3://genomicsworkflow2zone-zonebucket-1icns0ac2ai79/samples/NIST7035_R1_trim_samp-0p1.fastq.gz
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden

I copied the fastq files manually and am now resuming... I'm leaving this open in case I don't manage to build all the way to the end. So I guess it's really the regional-s3-assets folder that I should have uploaded, since it's the one containing the fastqs?

Anyway, I'm still puzzled by why I can't start the CloudFormation stack creation by entering the S3 path of the template.
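(If I understand it correctly, the console's "Amazon S3 URL" field and the TemplateURL parameter expect an https object URL rather than an s3:// path, so something like this should also work from the CLI; the key below is just a placeholder for wherever the template landed:)

# Illustrative only; the template key is a placeholder.
aws cloudformation create-stack \
  --stack-name GenomicsWorkflow2 \
  --template-url https://my-bucket-name-us-east-1.s3.amazonaws.com/path/to/template.cfn.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM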

Edit 2: Success!

Edit 3: And my change to the LaunchTemplate appears to be working too, so I'm closing this now!

@sachalau sachalau closed this as completed Jun 8, 2020
@wleepang
Contributor

wleepang commented Jun 9, 2020

@sachalau - Glad you were able to resolve this on your own.

Note, the README instructions are for deploying the launch assets into your own privately hosted buckets. This is for customizing the solution end-to-end - e.g. if you want to add additional resources to the "Zone" stack beyond what is provided by the publicly available solution.

For the purposes of customizing any of the workflow execution resources - e.g. batch compute environments, launch templates, etc. - you can do the following:

  1. launch the solution from its landing page
  2. clone the CodeCommit repo created by the "Pipe" stack - this has all the source code for all the underlying AWS resources for running workflows
  3. make edits, commit, and push the changes up to the repo

The latter will trigger a CodePipeline pipeline to re-deploy the workflow execution resources with any updates you've made.
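For example (the repository name and region below are placeholders; use the repo created by your "Pipe" stack):

# Assumes git credentials or the git-remote-codecommit helper are configured.
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/GenomicsWorkflowPipe
cd GenomicsWorkflowPipe
# edit e.g. batch.cfn.yaml, then:
git add batch.cfn.yaml
git commit -m "Customize the Batch launch template"
git push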

@sachalau
Author

sachalau commented Jun 9, 2020 via email

@sachalau
Author

sachalau commented Jun 10, 2020

Hello @wleepang @rulaszek

Actually, I would like to ask an additional question about the architecture of this solution, and whether you have any advice regarding implementation.

During the setup of the stack, I'm downloading some specific references to the S3 zone bucket. Those references will then be used by the workflow defined for each of the samples I would like to process. At the moment I'm downloading all the references I need from NCBI in the setup/setup.sh file, with an additional Python script for instance.

However, before these files can be used, they need additional transformation using some of the tools for which Docker images are built during the build. It could be as simple as indexing the fasta references with samtools or bwa, or something more complex like building a kraken database from multiple references.
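For the simple case I mean something like this, run with the tool images built by the solution (reference.fasta is just a placeholder filename):

# One-time reference preparation; filenames are placeholders.
samtools faidx reference.fasta
bwa index reference.fasta

samtools faidx writes the .fai index and bwa index writes the .amb/.ann/.bwt/.pac/.sa files next to the fasta, and those are what my workflows need as inputs.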

At the moment, after the CloudFormation deployment is complete, I can manually submit a Batch job using the job definition that I want and write the outputs into the S3 results bucket. I can then use these files as inputs for all my workflows. However, I think these submissions could be automated as part of the CloudFormation deployment.

My idea was to submit Batch jobs directly at the end of the build using the AWS CLI. However, to do so I need access to the name of the S3 bucket that was just created, and I'm not sure I can do that in the setup/setup.sh file. Another possibility would be to define a separate workflow for each of these tasks that has to be run only once initially, and then trigger it once. However, those workflows would each contain a single step run only once, so I'm not sure that solution actually makes sense. Do you have any opinion on that?

To access the S3 bucket name, could I do something like this in the setup?

S3_RESULT_BUCKET=$(aws cloudformation describe-stacks --stack-name $STACKNAME_CODE --query 'Stacks[].Outputs[?OutputKey==`JobResultsBucket`].OutputValue' --output text)

And then feed the S3_RESULT_BUCKET variable to

aws batch submit-job ...

Do you think that is the proper way to proceed? Would it be more appropriate to put all the one-time Batch jobs in a different file than setup.sh (like test.sh or something else)?
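For illustration, the whole thing at the end of the build could look roughly like this (the job queue and job definition names are placeholders I made up, not the solution's actual resource names):

# Look up the results bucket from the stack outputs, then submit a
# one-time reference-preparation job that writes under that bucket.
S3_RESULT_BUCKET=$(aws cloudformation describe-stacks \
  --stack-name "$STACKNAME_CODE" \
  --query 'Stacks[].Outputs[?OutputKey==`JobResultsBucket`].OutputValue' \
  --output text)

# Placeholder queue and job definition names.
OVERRIDES=$(printf '{"environment":[{"name":"JOB_OUTPUT_PREFIX","value":"s3://%s/references"}]}' "$S3_RESULT_BUCKET")
aws batch submit-job \
  --job-name prepare-references \
  --job-queue default-queue \
  --job-definition samtools-index \
  --container-overrides "$OVERRIDES"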
Thanks a lot!

@rulaszek
Contributor

rulaszek commented Jun 16, 2020 via email

@sachalau
Author

sachalau commented Jun 17, 2020

Thanks for getting back to me @rulaszek. It has helped me better understand how to use the solution, and I'll try to stick to the intended use.

I've moved my instructions to buildspec.yml as you advised. For the one-time jobs I think I'll define workflows anyway; do you have any advice on where it would make the most sense to define these jobs?

I have a couple of questions.

  1. What is the preferred way of designing new workflows? At the moment I'm doing so directly in the Step Functions web interface, but I'm not sure that's the proper way. However, formatting YAML and JSON at the same time, as in workflow-variantcalling-simple.cfn.yaml, is not ideal...

  2. Why aren't you resolving JOB_OUTPUTS in stage_out with envsubst in entrypoint.aws.sh (used to build the Docker images), the way JOB_INPUTS is resolved in stage_in? I just had a problem with this and realized it was because my JOB_OUTPUTS were not resolved with the environment variables.

Thanks a lot for the call proposal. I'm still wrapping my head around things, so I'm not sure now is the best time for a call; maybe wait until I'm more comfortable with every part.

@rulaszek
Contributor

@sachalau Developing the workflow in the Step Functions console or using the new Visual Studio Code plugin is probably ideal. After the workflow is working, you can create a new state machine resource in workflow-variantcalling-simple.cfn.yaml and paste that workflow in. Also, make sure to substitute in the variables, i.e., ${SOMETHING}, while ignoring ${!SOMETHING}. Finally, commit and push your changes.

The second issue sounds like a bug. Let me look into this more and get back to you.

https://aws.amazon.com/blogs/compute/aws-step-functions-support-in-visual-studio-code/

@rulaszek rulaszek reopened this Jun 17, 2020
@sachalau
Author

For the second issue, I think you also need to resolve JOB_OUTPUT_PREFIX with envsubst, as is done for JOB_INPUT_PREFIX.
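Something like this in the entrypoint is what I mean (my guess at the fix, not the script's actual contents):

# Resolve outputs with envsubst the same way inputs are resolved.
JOB_INPUTS=$(echo "$JOB_INPUTS" | envsubst)
JOB_OUTPUTS=$(echo "$JOB_OUTPUTS" | envsubst)
JOB_OUTPUT_PREFIX=$(echo "$JOB_OUTPUT_PREFIX" | envsubst)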

Thanks for the advice on the state machines.

@wleepang
Contributor

@sachalau - can you provide more details on how you are defining your job outputs?
You can add envsubst evaluation as needed to the entrypoint script, push the code, and the containers will rebuild.

@wleepang
Contributor

Also, I've updated the README to clarify the customized deployment instructions.

@sachalau
Author

sachalau commented Jun 20, 2020 via email
