Investigate a fully separated Peril system, using AWS Lambda for a Danger run #159

Closed
orta opened this Issue Oct 13, 2017 · 10 comments

orta commented Oct 13, 2017

I once had a great chat with @bkeepers about Lambda as an option for user security, and today it really clicked into place how this could work with Peril.

In Danger JS, there is a tool called danger process - http://danger.systems/js/usage/danger-process - by separating the Danger setup from the Danger run, it allows any language to create its own runtime without all the faffing.

By the same process, Peril can do all the setup, and Lambda can handle running the dangerfile. This could handle my two biggest problems: running a lot of dangerfiles, and the security issues around running a Dangerfile safely.

In this setup:

  • GitHub sends events to Peril
  • Peril decides what to run
  • For everything that needs to run, Peril will download and set up a lambda "job" (I don't know the terminology)
  • The lambda job receives a post with the DSL, and an access token (and anything else that is critical)
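The post described in the last bullet could look something like this; a minimal sketch, where the type and function names (`PerilJobPayload`, `buildJobPayload`) are invented for illustration, not Peril's actual API:

```typescript
// Hypothetical shape of the payload Peril might POST to a Lambda job.
// The DSL is pre-generated by Peril; the token is scoped to the run.
interface PerilJobPayload {
  dsl: object;            // the Danger DSL for this event
  accessToken: string;    // GitHub access token for posting results
  dangerfilePath: string; // which dangerfile the job should evaluate
}

function buildJobPayload(
  dsl: object,
  accessToken: string,
  dangerfilePath: string
): PerilJobPayload {
  // Anything else that is "critical" (per the bullet above) would be
  // added here before the payload is serialized and sent to the job.
  return { dsl, accessToken, dangerfilePath };
}
```

The job would then deserialize this, evaluate the dangerfile against the DSL, and use the token to report results back to GitHub.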

Questions for me to figure out:

  • What is a lambda job? 🥇
  • Can I provide node modules in some customizable way? (I probably don't need to for the main Peril, though)
  • This feels like a pretty scalable approach 👍 - is it?
orta commented Oct 14, 2017

I ran through the serverless tutorial, and then took a look at how AWS Lambdas work in the docs. The idea seems to fit; there's a 6 MB limit on data sent to a job, which I think we can keep the DSL under.

https://hd6geu40s4.execute-api.us-east-1.amazonaws.com/dev/api/danger/run
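That 6 MB limit could be checked before dispatching a job; a minimal sketch (the limit value comes from the AWS docs, the function name is hypothetical):

```typescript
// AWS Lambda caps synchronous invocation payloads at 6 MB, so verify
// the serialized DSL fits before sending it to a job.
const LAMBDA_PAYLOAD_LIMIT = 6 * 1024 * 1024; // 6 MB

function dslFitsInLambdaPayload(dsl: object): boolean {
  const bytes = Buffer.byteLength(JSON.stringify(dsl), "utf8");
  return bytes <= LAMBDA_PAYLOAD_LIMIT;
}
```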

orta commented Oct 14, 2017

Though I could do this:

There’s also effectively a fourth way to exit – by crashing or calling process.exit(). For example, if you include a binary library with a bug and it segfaults, you’ll effectively terminate execution of that container.

@orta referenced this issue Oct 16, 2017: "Separate Danger Run into two processes" #395 (merged, 3 of 3 tasks complete)
orta commented Oct 21, 2017

I spent a bit more time thinking out loud about this with @craigspaeth.

I'd like the role of the "peril" server here to be about setup and execution of jobs, so perhaps it makes sense to move more of the logic into AWS. Here's how it could work.

  • A webhook from GitHub goes to Peril
  • Peril then looks up the org internally, figures out all their settings, then decides if anything should happen
  • If something should happen, Peril sends off a lambda job posting something similar to GitHubRunSettings
  • The job generates the DSL
  • Maybe there could be a second job, or maybe the first job will then execute the JS Dangerfiles.
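The routing step in the list above could be sketched roughly like this; the settings shape and function names are invented for illustration, not Peril's real data model:

```typescript
// Sketch of Peril-as-router: given an incoming webhook event name and
// an org's settings, decide which dangerfiles need a Lambda job.
interface OrgSettings {
  // event name -> dangerfile reference to run for that event
  rules: Record<string, string>;
}

function jobsForWebhook(event: string, settings: OrgSettings): string[] {
  // Everything that matches the incoming event gets dispatched as a job
  return Object.entries(settings.rules)
    .filter(([eventName]) => eventName === event)
    .map(([, dangerfile]) => dangerfile);
}
```

Peril itself does no DSL generation here; it only looks up settings and dispatches, which is what makes the router framing plausible.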

The advantage is that the "hard work" is moved completely out of Peril. Generating a DSL requires many API calls to GitHub, and a good chunk of number crunching on the diff JSON. Moving this out into Lambda makes Peril effectively a router and not a work machine, which should make it more feasible to scale.

This is blocked by danger/danger-js#395 which provides a lot of tooling/typing inside danger for handling process separated JS.

orta commented Jan 6, 2018

TBH, https://hyper.sh looks like a better option than AWS Lambda

orta commented Jan 28, 2018

WIP happening; it's been merged and is building on https://hub.docker.com/r/dangersystems/peril/

orta commented Apr 15, 2018

This has been working for 2 months on the danger org; I've started looking at moving this infra to a staging env

SD10 commented May 4, 2018

Hey @orta, I got a little bit lost in this thread. Is the reasoning behind this the security concerns of executing the dangerfile? Essentially, you're allowing users to run the dangerfile in their own environment?

What I'm looking into is using Peril to trigger some heavier work on a remote machine, where I need to download resources from multiple repositories.

orta commented May 4, 2018

This issue revolves around the idea of allowing multiple orgs to eval code without being able to access other orgs' settings; Lambda didn't work for that due to the sharing of runtime environments between runs.

I think it makes more sense to have the dangerfile trigger a webhook via an exposed API; if you wanted to do it for every dangerfile, this is the current API
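For the use case above, a dangerfile could build such a webhook trigger itself; a minimal sketch, where the endpoint URL and body shape are made up, and the request is only constructed, not sent:

```typescript
// Hypothetical: a dangerfile assembles a POST request that kicks off
// heavier work (e.g. cloning several repos) on a remote machine.
interface WebhookRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildWorkTrigger(repos: string[]): WebhookRequest {
  return {
    url: "https://worker.example.com/run", // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ repos }),
  };
}
```

The dangerfile would hand this to whatever HTTP client its runtime provides; the remote machine then does the multi-repo download work outside Peril's process.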

orta commented May 24, 2018

This is now done and sorted.

@orta orta closed this May 24, 2018
