This repository has been archived by the owner on Aug 3, 2023. It is now read-only.

[dev] request for feedback #1047

Closed
EverlastingBugstopper opened this issue Feb 12, 2020 · 101 comments
Labels
dev `wrangler dev`

Comments

@EverlastingBugstopper
Contributor

EverlastingBugstopper commented Feb 12, 2020

wrangler dev is now in alpha! If you're using it, we want to hear from you. What do you like about it, what do you think should change, have you found issues with it? Comment below!

It's possible that your feedback deserves a new issue entirely. If you're filing a new issue, please start the title with [dev], mention me with @EverlastingBugstopper, and add a link to the issue in this issue.

Current issues in the milestone can be found here.

EverlastingBugstopper added this to the dev server milestone Feb 12, 2020
EverlastingBugstopper changed the title from "request for feedback: wrangler dev" to "[dev] request for feedback" Feb 12, 2020
@jaymakes11

On change, dev enters a never-ending rebuild cycle (issue #1078)

@jaymakes11

Worker updates not seen in terminal until restarting dev (issue #1082)

@deini

deini commented Feb 20, 2020

I just stumbled into this. Doing a GET request with body crashes.

@jaymakes11

On change, cloudflared tunnel breaks (issue #1107)

@maggo

maggo commented Feb 25, 2020

The watcher seems to be really slow; I guess it's timeout-based and not based on filesystem changes?

Edit: Also it seems like it's not actually picking up changes. Testing with workers-graphql-server right now

@ackerleytng

I'd just like to add that this is a really awesome feature. It was terrible trying to develop without feedback before.

@EverlastingBugstopper
Contributor Author

EverlastingBugstopper commented Mar 2, 2020

Hey y'all - I'm back from vacation so responses here should be a bit more frequent.


I just stumbled into this. Doing a GET request with body crashes.

@deini do you mind filing an issue with steps to reproduce? We definitely want to support every imaginable type of HTTP request and that is 100% a bug.


The watcher seems to be really slow; I guess it's timeout-based and not based on filesystem changes?

@maggo my guess is that it's not really the watcher that is slow, but rather the fact that Wrangler is required to re-build and re-upload your script on every change. We have some tentative plans to address performance but for now what we probably want to do is actually explain what Wrangler is doing and when - the output is a bit cluttered right now.


Also it seems like it's not actually picking up changes. Testing with workers-graphql-server right now

This is troubling - would you mind filing an issue with steps to reproduce?


@ackerleytng Thanks so much for the kind words, always looking for ways to make your experience better 😄

@defjosiah
Contributor

defjosiah commented Mar 4, 2020

This isn't a reproduction yet, but since it's early days I wanted to give you information on a crash!
CF_API_TOKEN=REDACTED yarn wrangler preview works correctly in the repo, but wrangler dev crashes.

Version 1.8.0 crashes with a listener bind error:
wrangler.toml

name = "odfiles"
type = "javascript"
account_id = "REDACTED"
workers_dev = false
route = "*odfil.es/*"
zone_id = "REDACTED"

run cmd:

RUST_BACKTRACE=1 CF_API_TOKEN=REDACTED yarn wrangler dev --host odfil.es

response:

💁  JavaScript project found. Skipping unnecessary build!
thread 'main' panicked at 'error binding to 127.0.0.1:8787: error creating server listener: Address already in use (os error 48)', /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.13.2/src/server/mod.rs:124:13
stack backtrace:
💁  watching "./"
   0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
   1: core::fmt::write
   2: std::io::Write::write_fmt
   3: std::panicking::default_hook::{{closure}}
   4: std::panicking::default_hook
   5: std::panicking::rust_panic_with_hook
   6: rust_begin_unwind
   7: std::panicking::begin_panic_fmt
   8: hyper::server::Server<hyper::server::tcp::AddrIncoming,()>::bind
   9: <std::future::GenFuture<T> as core::future::future::Future>::poll
  10: <futures_util::future::maybe_done::MaybeDone<Fut> as core::future::future::Future>::poll
  11: <futures_util::future::join::Join<Fut1,Fut2> as core::future::future::Future>::poll
  12: <std::future::GenFuture<T> as core::future::future::Future>::poll
  13: tokio::runtime::enter::Enter::block_on
  14: tokio::runtime::thread_pool::ThreadPool::block_on
  15: tokio::runtime::context::enter
  16: tokio::runtime::handle::Handle::enter
  17: wrangler::commands::dev::dev
  18: wrangler::run
  19: wrangler::main
  20: std::rt::lang_start::{{closure}}
  21: std::panicking::try::do_call
  22: __rust_maybe_catch_panic
  23: std::rt::lang_start_internal
  24: main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error Command failed with exit code 101.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

@EverlastingBugstopper
Contributor Author

Hey @defjosiah - that error message could definitely be a bit clearer but the important bit is Address already in use - looks like you already have something else running (maybe another instance of wrangler dev) on port 8787.

I'm not sure what OS you're on - there are a few ways of finding out which process is listening on a specific port, but I think if you find and kill that process it should work. Let me know! If not and you have more details you can share - please file a separate issue 😄

@defjosiah
Contributor

@EverlastingBugstopper that definitely fixed it. I had killed the process running on that port before starting wrangler, but with further investigation, it looks like the process I killed kept restarting itself 😆

@NicholasHazen

Is there a simple way to detect in the code when the worker is running locally in dev mode?

@EverlastingBugstopper
Contributor Author

Hey @NicholasHazen - not necessarily. The best way to do it is probably to set up environments with environment variables.

For instance, your wrangler.toml might look like this

workers_dev = true
type = "javascript"
account_id = "12657839048768728910"
vars = { MODE = "development" }

[env.prod]
routes = ["example.com/*"]
zone_id = "1768940583781982"
vars = { MODE = "production" }

Then, when you ran wrangler dev, MODE === "production" would evaluate to false in your worker.

When you went to publish with the prod environment (or ran wrangler dev --env prod), MODE === "production" would evaluate to true.

Hope this helps!
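
For illustration, a minimal sketch of the worker side of this setup (assuming the service-worker syntax of the time; the MODE global comes from the vars entries in the example wrangler.toml above):

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // vars defined in wrangler.toml are exposed as globals, so MODE is
  // "development" under plain `wrangler dev` and "production" when the
  // prod environment is used.
  if (MODE === "production") {
    return fetch(request);
  }
  return new Response("Running in " + MODE + " mode");
}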

@NicholasHazen

That seems perfectly reasonable to me, thanks for the quick feedback.

@bglw

bglw commented Mar 8, 2020

Hi, I need to access the original host of the request to the dev server, as the request is handled based on the subdomain. Currently I can't do that because wrangler dev rewrites it to example.com, or whatever I pass in.

It would be great if there were a flag to not rewrite the host. This worker only serves synthetic requests from a different backend, so whatever the upstream host is set to doesn't need to be accessible from the worker.

Alternatively if the original host was passed through as a header that could work.

Is anything like that possible at the moment?

@bglw

bglw commented Mar 9, 2020

Separately - I'm trying to set up the untested flow of making a request to the dev worker, which snakes its way back to localhost via a trycloudflare tunnel (or a tunnel on a custom hostname).

I can't seem to get it to work. The tunnel works fine directly, but the worker reports a 502 when trying to access it.

Edit: This actually seems to be the case for any host I set - do subrequests just not work from the dev worker?

@craigmulligan

craigmulligan commented Mar 13, 2020

@bigelowcc I managed to do subrequests from the worker; at least fetch('https://google.com') works. Unfortunately, localhost is routed to some Cloudflare server - I guess this is because of the tunnel?

@EverlastingBugstopper
Contributor Author

@bigelowcc - I'm not sure exactly what you mean; unfortunately there's currently no way to pass in a host header. But you can pass a subdomain as part of the --host flag, so you could make the back end subdomain.example.com and that should work.

As for trying to set up the localhost routing, you can check out this template by @GregBrimble which should take care of everything until we get this functionality actually introduced to Wrangler.

@aleclarson

This comment has been minimized.

@aleclarson

aleclarson commented Mar 16, 2020

Is there a workaround for using Node.js --inspect to debug my worker?

The following command doesn't work:

NODE_OPTIONS="--inspect" wrangler dev

The above command fails with this error message:

Starting inspector on 127.0.0.1:9229 failed: address already in use
Error: failed to execute `"~/.nvm/versions/node/v12.13.0/bin/node" "~/Library/Caches/.wrangler/wranglerjs-67f8bc72e4a57eef" "--output-file=/var/folders/yt/_43z9xgn67sf7jcytzz1rt5m0000gn/T/.wranglerjs_output2eiW0" "--wasm-binding=wasm" "--webpack-config=webpack.config.js"`: exited with exit code: 12

Note: I used lsof -i tcp:9229 before running that command, and no processes were matched.

@EverlastingBugstopper
Contributor Author

EverlastingBugstopper commented Mar 17, 2020

Hey @aleclarson - glad you got the first question resolved. Going to hide it for now since it is not on topic (this issue is specifically for wrangler dev questions/feedback).

As for the second question, Cloudflare Workers are not executed within the context of a Node runtime, and Wrangler is a completely separate tool from Node written in Rust. You can use wrangler preview --watch for now to open your worker and use the inspector, but you should note that the Workers Runtime does not implement the debugger so you will only be able to use the console and network tabs.

That being said, we do have some plans to implement wrangler dev --inspect which should behave similarly to node --inspect. You can follow along with #946 to keep up with progress towards that goal (though it's important to know that there are no guarantees on time frame for this).

@aleclarson

@EverlastingBugstopper Good to know, thanks! Do you think Cloudflare would consider maintaining an "environment polyfill" that would make a NodeJS environment behave like a Cloudflare Worker environment? That would allow me to use the NodeJS debugger. :)

@aleclarson

aleclarson commented Mar 17, 2020

When using a custom webpack.config.js with source maps enabled, stack traces aren't mapping back to the original source. Will wrangler dev have support for this soon? Is there a workaround?

I'm using inline source maps via devtool: "inline-cheap-module-source-map", and I even tried using browser-source-map-support.js from source-map-support.

edit: Oh, found this: cloudflare/workers-sdk#1315

@PierBover

wrangler dev is a super feature! Thanks for working on this.

I've been using it since yesterday and so far the only problem is using console.log(). Objects get truncated, and when using the good old trick of JSON.stringify(someObject, null, ' ') the resulting string is not parsed correctly.

Another nitpick: there is no cf object in the request. That makes sense, obviously, but it could break some features that rely on it. It would be great if there was at least a mock object in its place.
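
For reference, a small sketch of the pretty-printing workaround described above, assuming a Workers-style fetch handler; someObject here is just a placeholder:

addEventListener("fetch", (event) => {
  const someObject = { path: new URL(event.request.url).pathname, method: event.request.method };
  // Pretty-printing via JSON.stringify avoids relying on the console's
  // object preview, which the wrangler dev terminal truncates.
  console.log(JSON.stringify(someObject, null, 2));
  event.respondWith(fetch(event.request));
});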

@johnelliott

johnelliott commented Mar 19, 2020

This is an enormous improvement to my workflow. 🎉

Here's my extremely hot take on the first 10 minutes:

  • It wasn't clear at first that --host wouldn't hit a local origin server. The first thing I tried was a host that resolved to 127.0.0.1. After a few minutes I reasoned that this probably worked more like the in-browser dashboard and went hunting. I see this is covered in [dev] disallow localhost as host #902 and [dev] when host set to localhost, spin up trycloudflared endpoint #901
  • I'm still not sure, but I assume this does not yet have 100% compatibility with things like KV, the private cache API, and behaviors from the overall hosting product, e.g. page rules. I have yet to find a note outlining what is part of the alpha features and what I should expect to work around.
  • Having STDOUT/console in the shell is really glorious. This is enormously better than my old setup.

Old setup: I was using the web-based interface to debug top-level error handling (such as an exception created in a returned HTML rewriter promise chain) and using my own HTTP logging setup for debugging custom cache keys and origin cache control.

@EverlastingBugstopper
Contributor Author

EverlastingBugstopper commented Mar 19, 2020

@PierBover

wrangler dev is a super feature! Thanks for working on this.

Thanks so much for saying so! Really glad we can help out 😄


I've been using it since yesterday and so far the only problem is using console.log(). Objects get truncated

Unfortunately there isn't a straightforward way to show the entire object in the console.log output because the Chrome DevTools Protocol is typically interactive. If you open the inspector with wrangler preview you can see what I mean - logging objects will give you a brief preview and then you have to click the dropdown to get more info. Since the terminal is linear and doesn't really have a way to display that dropdown (and getting that extra info means sending another WebSocket message), we truncate it. #946 is probably going to be the best solution to this.


when using the good old trick of JSON.stringify(someObject, null, ' ') the resulting string is not parsed correctly.

Could you file a more detailed bug report on this in a separate issue? I'm not able to reproduce it.


@johnelliott

This is an enormous improvement to my workflow. 🎉

So glad to hear it! Thanks for the feedback, it's really appreciated 😄


It wasn't clear at first that --host wouldn't hit a local origin server.

Yes, we'd love to solve this with #901 but it is not a high priority for our team right now. One of our community members/resident power users @GregBrimble has set up an automated example for you that may bridge the gap for now. Like I said, eventually we'd like to integrate this directly.


I'm still not sure, but I assume this does not yet have 100% compatibility with things like KV, the private cache API, and behaviors from the overall hosting product, e.g. page rules. I have yet to find a note outlining what is part of the alpha features and what I should expect to work around.

KV should work, but note that right now we do not use a separate namespace for your values unless you are using Workers Sites; this should be fixed with #1032 and is a blocker on getting this feature out of alpha. Additionally, we are working on making the overall experience more realistic by adding support for things like the Cache API, etc., by running the code directly on our edge (right now it runs in GCS). For now, all of the same restrictions that applied to wrangler preview apply to wrangler dev - though I'm not sure those are documented. The biggest ones I know of are no access to the Cache API, CPU limits are not really enforced, and request.cf is undefined.
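
As a rough illustration only (not an official workaround), here is a defensive sketch that keeps worker code running under those preview/dev restrictions; the mock cf fields are made up:

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // request.cf is undefined in the preview sandbox, so fall back to a
  // hypothetical mock with whatever fields your code reads.
  const cf = request.cf || { colo: "DEV", country: "XX" };
  // The Cache API may be missing or a no-op in preview/dev, so guard it.
  const cache = (typeof caches !== "undefined" && caches.default) || null;
  const cached = cache ? await cache.match(request) : null;
  return cached || new Response("colo: " + cf.colo);
}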


Having STDOUT/console in the shell is really glorious. This is enormously better than my old setup.

Really really glad to hear this, our goal here was to get something into your hands, even if it's not quite perfect yet. Incremental improvement over perfectionism 😄

@EverlastingBugstopper
Contributor Author

Oh! @aleclarson I missed your comments, my apologies.

Do you think Cloudflare would consider maintaining an "environment polyfill" that would make a NodeJS environment behave like a Cloudflare Worker environment? That would allow me to use the NodeJS debugger. :)

Unfortunately not any time soon. The Cloudflare Workers Runtime is a proprietary, closed-source implementation of the Service Workers API and it's not portable. We have some ideas around how to improve the local testing experience but making it run in Node is not on the list.


When using a custom webpack.config.js with source maps enabled, stack traces aren't mapping back to the original source. Will wrangler dev have support for this soon? Is there a workaround?

There are some ideas around how to get this working and we are hashing that out internally - it will likely require some changes to the runtime for this to work smoothly. There is no timeline on when exactly it will be supported, but it is on our radar.

@arunesh90
Contributor

arunesh90 commented Mar 26, 2020

Would love to be able to fetch from local IP ranges.
My use case is that I often use my local Elasticsearch instance for developing/debugging, and being able to skip the step of exposing it publicly (behind auth) would be nice to have.

@ecerroni

TLDR:

  • deployed code works
  • same code running wrangler dev does not work (no errors either)

I am developing a Graphql server building upon https://github.com/signalnerve/workers-graphql-server

Using the base template with wrangler dev gave no issues. However, as soon as I started stitching local and remote schemas together, I could not debug locally anymore.

If I do wrangler publish the changes are deployed and the production version on the edge behaves exactly as expected.

Locally, though, I keep getting redirected to example.com. No errors either, although I guess there are errors being swallowed by wrangler dev. There is something it does not like, but I cannot figure out what it is.

Any clues?

@EverlastingBugstopper
Contributor Author

Hey @ecerroni - so what you're running into is probably due to the fact that wrangler dev actually makes requests via the Workers preview API (wrangler dev is not a "local" development server, in that you need an internet connection for it to work). That means that any time you want to make a request to something else that's running on your local network, you'll need to expose that service to the public internet so the Cloudflare Workers runtime can make requests to it.

@GregBrimble has an unofficial template for setting something like this up here, but the gist of it is that it uses cloudflared to create a public endpoint for the service running locally on your computer. It will then give you a public hostname, which you can pass to wrangler dev via the --host flag. Hope this helps!

@ecerroni

@EverlastingBugstopper Thank you. It seems straightforward. However, I tried it and it is not working for me. Maybe I am missing something obvious.

First I start the tunnel

$ cloudflared tunnel --url localhost:8787 --metrics localhost:8081
[INFO] Cannot determine default configuration path. No file [config.yml config.yaml] in [~/.cloudflared ~/.cloudflare-warp ~/cloudflare-warp /usr/local/etc/cloudflared /etc/cloudflared]
[INFO] Version 2020.6.5
[INFO] GOOS: linux, GOVersion: go1.12.9, GoArch: amd64
[INFO] Environment variables map[metrics:localhost:8081 proxy-dns-upstream:https://1.1.1.1/dns-query, https://1.0.0.1/dns-query url:localhost:8787]
[INFO] cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
[INFO] Starting metrics server on 127.0.0.1:8081/metrics
[INFO] Proxying tunnel requests to http://localhost:8787
[INFO] Connected to VIE
[INFO] Each HA connection's tunnel IDs: map[0:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200]
[INFO] +-----------------------------------------------------------+
[INFO] |  Your free tunnel has started! Visit it:                  |
[INFO] |    https://offering-towers-satin-carol.trycloudflare.com  |
[INFO] +-----------------------------------------------------------+
[INFO] Route propagating, it may take up to 1 minute for your new route to become functional
[INFO] Connected to FRA
[INFO] Connected to VIE
[INFO] Each HA connection's tunnel IDs: map[0:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200 1:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200]
[INFO] +-----------------------------------------------------------+
[INFO] |  Your free tunnel has started! Visit it:                  |
[INFO] |    https://offering-towers-satin-carol.trycloudflare.com  |
[INFO] +-----------------------------------------------------------+
[INFO] Route propagating, it may take up to 1 minute for your new route to become functional
[INFO] Connected to FRA
[INFO] Each HA connection's tunnel IDs: map[0:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200 1:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200 2:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200]
[INFO] +-----------------------------------------------------------+
[INFO] |  Your free tunnel has started! Visit it:                  |
[INFO] |    https://offering-towers-satin-carol.trycloudflare.com  |
[INFO] +-----------------------------------------------------------+
[INFO] Route propagating, it may take up to 1 minute for your new route to become functional
[INFO] Each HA connection's tunnel IDs: map[0:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200 1:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200 2:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200 3:ecas8dbrglaz56yinus4ctdqiykxc8byti9hvzv0p3egc1ivy200]
[INFO] +-----------------------------------------------------------+
[INFO] |  Your free tunnel has started! Visit it:                  |
[INFO] |    https://offering-towers-satin-carol.trycloudflare.com  |
[INFO] +-----------------------------------------------------------+
[INFO] Route propagating, it may take up to 1 minute for your new route to become functional

Then I start wrangler dev with the public endpoint as the host

$ wrangler dev --host https://offering-towers-satin-carol.trycloudflare.com
`wrangler dev` is currently unstable and there are likely to be breaking changes!
For this reason, we cannot yet recommend using `wrangler dev` for integration testing.

Please submit any feedback here: https://github.com/cloudflare/wrangler/issues/1047
 Built successfully, built project size is 741 KiB.
 Listening on http://127.0.0.1:8787
 Detected changes...
 Ignoring stale first change
[INFO] Remote schema ready
[2020-06-30 11:31:43] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:43] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:43] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:42] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:42] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:41] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:41] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:41] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:40] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:40] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:40] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:40] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:40] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:39] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:39] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:39] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:39] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:46] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:46] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:45] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:45] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:45] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:45] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:44] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:44] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:44] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:44] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:44] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:43] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:43] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:43] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:42] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:42] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:42] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
[2020-06-30 11:31:42] GET offering-towers-satin-carol.trycloudflare.com/___graphql HTTP/1.1 403 Forbidden
thread 'tokio-runtime-worker' panicked at 'Could not determine status code of response', src/commands/dev/gcs/headers.rs:73:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'Could not determine status code of response', src/commands/dev/gcs/headers.rs:73:18
thread 'tokio-runtime-worker' panicked at 'Could not determine status code of response', src/commands/dev/gcs/headers.rs:73:18
thread 'tokio-runtime-worker' panicked at 'Could not determine status code of response', src/commands/dev/gcs/headers.rs:73:18

This is the screen I see when I try to access http://127.0.0.1/___graphql or any other url on localhost.

@autarc

autarc commented Jun 30, 2020

Going through the thread I couldn't see whether this was already mentioned before, but while the dev mode is great for local development, there is a small issue:

request.headers.get('Accept-Encoding');

In the wrangler dev runtime this returns gzip, deflate, br, while after deployment it will always be gzip, as it's set internally. While it is certainly a specific use case, it would be good to have the behavior aligned. Even though it would be great to be able to access the real information somehow, that currently doesn't seem to be possible, so having it statically set during development would at least make you aware of the difference in time.
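
To make the difference concrete, here is a small sketch (service-worker syntax assumed) that logs the header and only branches on gzip support, which both environments report:

addEventListener("fetch", (event) => {
  const acceptEncoding = event.request.headers.get("Accept-Encoding") || "";
  // wrangler dev reports "gzip, deflate, br" while the deployed runtime
  // pins this to "gzip", so only rely on the part both environments agree on.
  const supportsGzip = acceptEncoding.includes("gzip");
  console.log("Accept-Encoding:", acceptEncoding, "gzip supported:", supportsGzip);
  event.respondWith(fetch(event.request));
});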

On another note, it seems that a script.js file gets created in the worker directory. Since the webpack configuration already has an output path, perhaps that could be used to configure the destination; it was a bit surprising to see it generated in that place 😅

@EverlastingBugstopper
Contributor Author

EverlastingBugstopper commented Jun 30, 2020

@ecerroni

cloudflared tunnel --url localhost:8787 --metrics localhost:8081

This won't work because you're exposing wrangler dev to the public internet and then trying to use it as a back end - I assumed you wanted to make requests to some other sort of back end you were developing side by side with your worker?

Perhaps I'm misunderstanding what you're trying to do - could you open a new issue and list out steps to reproduce the issue you're having? (It's especially helpful if you can link a repository I can check out and futz around with.)


@autarc - could you file a new issue for the request header behavior you're seeing and we'll take a look?

As for the script.js in the worker dir - that is tracked in #1046.

@autarc

autarc commented Jun 30, 2020

@EverlastingBugstopper

Sure created it here: #1425

Also thanks for clarifying the script.js file and referring to the existing issue. Keep up the great work!

@ecerroni

ecerroni commented Jul 2, 2020

@EverlastingBugstopper

cloudflared tunnel --url localhost:8787 --metrics localhost:8081

This won't work because you're exposing wrangler dev to the public internet and then trying to use it as a back end - I assumed you wanted to make requests to some other sort of back end you were developing side by side with your worker?

Perhaps I'm misunderstanding what you're trying to do - could you open a new issue and list out steps to reproduce the issue you're having? (It's especially helpful if you can link a repository I can check out and futz around with.)

Trying to recreate the issue from scratch using a different remote schema, I realized that the culprit is the JSON file I am trying to require and pass to the remoteExecutableSchema.

If the file is small there is no problem.
If the size is around 100 kB it does not work in wrangler dev, but it does after publishing (although I might have to retry a couple of times).
If the file is larger, then wrangler dev does not work and I cannot publish anymore; I always get "script has timed out".

At this point I am confused: I do not know whether I am doing something wrong in the code, whether graphql's makeRemoteExecutableSchema has issues with larger files, whether I cannot load large local JSON files (under .src/) with Workers, or whether I need additional webpack configuration to handle this case 🤷‍♂️

@EverlastingBugstopper
Contributor Author

🤔 - hmmmm... unfortunately, I have no clue what your issue is from reading this. If you could link a repository in a new issue with detailed steps to reproduce, I'll be more than happy to take a look. For now, logging off for the long weekend here in the US!

@ecerroni

ecerroni commented Jul 3, 2020

@EverlastingBugstopper

Sure, enjoy your weekend :)

In the meantime, I created an issue here with a reproducing repo

EverlastingBugstopper added the dev `wrangler dev` label Jul 16, 2020
EverlastingBugstopper removed this from the wrangler dev milestone Jul 16, 2020
@normanr

normanr commented Jul 26, 2020

The following command doesn't work:

NODE_OPTIONS="--inspect" wrangler dev

There are at least two node instances, so I found that making the inspector use a random port with NODE_OPTIONS="--inspect=:0" at least gets things running successfully. I haven't been able to inspect my code, though - I suspect that's because, as EverlastingBugstopper said, "Cloudflare Workers are not executed within the context of a Node runtime". So the node instances you can attach to are running other things, not your workers.

Edit: and as mentioned later: "wrangler dev is actually making requests via the Workers preview API", and so there's nothing even running locally to attach to.

@EverlastingBugstopper
Contributor Author

Hey @normanr - we have an issue for that here: #946, but it has not been prioritized and I'm not sure when it will be.

@normanr

normanr commented Jul 28, 2020

ack, I've subscribed to it, thanks!

@mskd12

mskd12 commented Jul 29, 2020

@EverlastingBugstopper is it possible to automatically set some environment variables when using wrangler dev (which should be turned off otherwise)? It sounds like something basic, so sorry if I missed it in the docs somehow.

@PierBover

@mskd12 you can set up different environments and create vars for each environment.

See the docs:

https://developers.cloudflare.com/workers/tooling/wrangler/environments/

@mskd12

mskd12 commented Jul 30, 2020

@PierBover I did check it, but I was not sure about the interaction between environments and wrangler dev. Does calling wrangler dev automatically set the environment to dev?

@EverlastingBugstopper
Contributor Author

@mskd12 - no, environments work the exact same way they do for any other command. If you don't specify an --env, it will use your top-level configuration. If you want to use the dev environment, you must run wrangler dev --env dev.

@SupremeTechnopriest

Would be nice to be able to define a port in wrangler dev --host https://localhost:3000. Currently the port is stripped out.

@normanr

normanr commented Aug 12, 2020

I think you need to pass the port using the --port flag (#1477 made it so that it can also be set in wrangler.toml).

@EverlastingBugstopper
Contributor Author

@SupremeTechnopriest there are two things going on here

  1. --host is meant for setting your upstream host. as in, when you access request.url in your Worker, what host is it? You can't really set --host to anything that isn't running on the public Internet so if you want request.url to go to something running on port 3000 you must first expose it to the Internet with a tool like cloudflared or ngrok.

  2. If what you really want to do is change the port that wrangler dev is running on, then you should use the --port flag.

@SupremeTechnopriest

@EverlastingBugstopper So even though I'm running dev, the worker is still running in the Cloudflare backplane and it is just forwarding the logs to my machine? I assumed the worker was actually running on my machine, and therefore localhost as an upstream host would be OK. I ended up just using a tunnel.

@EverlastingBugstopper
Contributor Author

Yup, running on Cloudflare's servers

@SupremeTechnopriest

@EverlastingBugstopper Thanks for the info! Makes sense now.

@ispivey
Contributor

ispivey commented Aug 22, 2020

As wrangler dev is now GA with 1.11.0, I'm going to close this issue. If you've got more feedback about dev, open a new issue or look for an existing one you can add to.

Thanks to everyone who gave us feedback over the past few months!

ispivey closed this as completed Aug 22, 2020
cloudflare locked as resolved and limited conversation to collaborators Aug 22, 2020