
Watch Mode #3448

Open
1 of 8 tasks
lukehoban opened this issue Nov 5, 2019 · 2 comments
Labels
area/cli UX of using the CLI (args, output, logs) kind/epic Large new features or investments

Comments

lukehoban commented Nov 5, 2019

In #3391 we are introducing a new experimental pulumi watch command. This command can be used during active development of a Pulumi project to automatically update the active stack as soon as changes are made to any files in the working directory.
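A minimal way to try the command, assuming the `PULUMI_EXPERIMENTAL=true` gate described in the commit referenced in this thread, might look like this (the `pulumi watch` invocation is commented out because it requires a real Pulumi project and backend):

```shell
# Enable experimental CLI commands (gate taken from PR #3391;
# the same flag also unlocks `query` and `policy`).
export PULUMI_EXPERIMENTAL=true

# From a Pulumi project directory, start the watch session:
# pulumi watch
echo "PULUMI_EXPERIMENTAL=$PULUMI_EXPERIMENTAL"
```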

For inner-loop development on many modern application architectures (serverless, Kubernetes, and container-based services, but increasingly many other managed services as well), developers work in one of two modes: stable production maintenance, or active development of a new project or capability. In the former, the existing pulumi up, pulumi preview, and other commands provide robust tools to carefully manage the exact changes that will be made to infrastructure. In the latter, developers often just want to move quickly, with cloud resources created, updated, and even destroyed as fast as they make changes.

By making the cloud resources feel so immediately malleable, pulumi watch encourages patterns of usage that would otherwise feel "clunky" or slow with standard Pulumi commands.

On top of watching the filesystem and automatically deploying updates, pulumi watch also incorporates runtime logs to provide insight into what is happening inside the deployed infrastructure. This uses the same log capabilities as pulumi logs, but interleaves log output and update progress into a single output stream.

This experimental command works today for simple serverless apps (based around AWS Lambda for compute), but to more fully flesh out the interesting scenarios, several additional capabilities will be explored:

- [x] Initial `pulumi watch` command (Add an experimental pulumi watch command #3391)
- [ ] Support for Kubernetes logs
- [ ] Moving logs support into providers
- [ ] Providing more options to control logs output in watch mode (resource filter, batching, limits)
- [ ] Ability for the user to run "commands" from within an active watch session, to get outputs, see a resource tree view, etc.
- [ ] Improvements to resource provisioning and update times for awsx ECS, to improve usability generally but especially during active development for watch-style use cases
- [ ] Pluggable model for interleaving other long-running queries over the runtime system into the watch output
- [ ] Improved CLI UX (colorize each output stream source uniquely; simpler/clearer update progress messages)

We'll use this issue to track progress on these.

[animation: Watch - Preview 2019-11-03 12_14_45]

[animation: counter]

@lukehoban lukehoban added this to the 0.29 milestone Nov 5, 2019
@lukehoban lukehoban self-assigned this Nov 5, 2019
@lukehoban lukehoban added the area/cli UX of using the CLI (args, output, logs) label Nov 5, 2019
pgavlin pushed a commit that referenced this issue Nov 6, 2019
Adds a new experimental `pulumi watch` CLI command which can be used for inner-loop development on a Pulumi stack. While in active development, this command is only available via `PULUMI_EXPERIMENTAL=true`.

The `watch` command does the following:
1. Watches the workspace (the tree rooted at the `Pulumi.yaml` file) for changes
2. Triggers an `update` to the stack whenever there is a change
3. Streams output containing summaries of key update events as well as logs from any resources under management into a combined CLI output

Part of #3448.

The PULUMI_EXPERIMENTAL flag also makes `query` and `policy` available.
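The three steps in the commit message amount to a watch-and-update loop. A very rough, polling-based sketch of one iteration follows; the real implementation reacts to filesystem events and runs an actual update, and `workspace_digest` and `trigger_update` here are hypothetical stand-ins:

```shell
# Step 1: compute a cheap digest of the workspace file tree.
# (This only tracks file paths; a real watcher also hashes contents/mtimes.)
workspace_digest() {
  find "$1" -type f | sort | cksum
}

# Step 2: stand-in for triggering `pulumi up` when a change is seen.
trigger_update() {
  echo "change detected: running update"
}

# One iteration of the loop, demonstrated against a temporary workspace.
dir=$(mktemp -d)
old=$(workspace_digest "$dir")
echo 'export const x = 1;' > "$dir/index.ts"   # simulate an edit
new=$(workspace_digest "$dir")
if [ "$new" != "$old" ]; then
  trigger_update   # prints "change detected: running update"
fi
```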
joeduffy commented:
As part of this, do we plan to improve how we handle errors? It seems we swallow the stderr stream (maybe that is what is meant by treating output streams differently).

E.g.:

```
λ pulumi watch
Watching (dev):
10:57:26.349[                    ] Updating...
10:57:26.961[                    ] Update failed.
10:57:35.340[                    ] Updating...
10:57:35.816[                    ] Update failed.
10:57:35.816[                    ] Updating...
10:57:36.352[                    ] Update failed.
^C
10:57:45 joeduffy@joedu-wallawalla ~/temp/kx λ pulumi up
Previewing update (dev):

     Type                           Name    Plan       Info
     pulumi:pulumi:Stack            kx-dev
 ~   ├─ kubernetes:apps:Deployment  nginx   update     [diff: ~spec]
 +   │  └─ kubernetes:core:Service  nginx   create
 -   └─ kubernetes:core:Namespace   my-app  delete

Resources:
    + 1 to create
    ~ 1 to update
    - 1 to delete
    3 changes. 1 unchanged

Do you want to perform this update? yes
Updating (dev):
Permalink: https://app.pulumi.com/joeduffy/kx/dev/updates/10
error: the current deployment has 1 resource(s) with pending operations:
  * urn:pulumi:dev::kx::kubernetes:apps/v1:Deployment$kubernetes:core/v1:Service::nginx, interrupted while creating

These resources are in an unknown state because the Pulumi CLI was interrupted while
waiting for changes to these resources to complete. You should confirm whether or not the
operations listed completed successfully by checking the state of the appropriate provider.
For example, if you are using AWS, you can confirm using the AWS Console.

Once you have confirmed the status of the interrupted operations, you can repair your stack
using 'pulumi stack export' to export your stack to a file. For each operation that succeeded,
remove that operation from the "pending_operations" section of the file. Once this is complete,
use 'pulumi stack import' to import the repaired stack.

refusing to proceed
```
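The repair flow that error message describes can be sketched end to end against a mock state file. The JSON shape below is an assumption (in exported v3 state, `pending_operations` sits under `deployment`), and python3 is used for the edit only because the state file is plain JSON; in a real project the file would come from `pulumi stack export --file stack.json` and go back via `pulumi stack import --file stack.json`:

```shell
# Mock of an exported stack file (structure is illustrative; real files
# come from: pulumi stack export --file stack.json).
cat > stack.json <<'EOF'
{
  "version": 3,
  "deployment": {
    "resources": [],
    "pending_operations": [
      {
        "resource": { "urn": "urn:pulumi:dev::kx::kubernetes:apps/v1:Deployment$kubernetes:core/v1:Service::nginx" },
        "type": "creating"
      }
    ]
  }
}
EOF

# Remove the operations you have confirmed completed (here: all of them).
python3 - <<'EOF'
import json
with open("stack.json") as f:
    state = json.load(f)
state["deployment"].pop("pending_operations", None)
with open("stack.json", "w") as f:
    json.dump(state, f, indent=2)
EOF

# Then re-import the repaired state:
# pulumi stack import --file stack.json
```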

lukehoban (author) commented:

> do we plan to improve how we handle errors?

Yes. Most errors are reported correctly when they come through the event stream, but the error above is one we report out of band. I suspect this also causes problems for correctly storing the update log in the backend, so it should ideally be fixed by reporting these errors through the standard event stream. Either way, we definitely want to surface them in watch mode.

@lukehoban lukehoban modified the milestones: 0.29, 0.30 Nov 26, 2019
@joeduffy joeduffy mentioned this issue Dec 2, 2019
6 tasks
@lukehoban lukehoban modified the milestones: 0.30, 0.31, 0.32 Dec 20, 2019
@lukehoban lukehoban changed the title [Experimental] Watch Mode Watch Mode Feb 24, 2020
@lukehoban lukehoban removed this from the 0.32 milestone Feb 24, 2020
@mikhailshilkov mikhailshilkov added kind/enhancement Improvements or new features kind/epic Large new features or investments and removed kind/enhancement Improvements or new features labels May 23, 2022