Question: Plans for Node.js ESM Module Support? #1946

Closed
1 task done
astorm opened this issue Feb 19, 2021 · 90 comments · Fixed by #3698

Labels
Discussion Issue or PR that needs/is extended discussion.

Comments

@astorm
Contributor

astorm commented Feb 19, 2021

  • This only affects the JavaScript OpenTelemetry library

Opening this issue based on a conversation with @dyladan and @obecny in order to start a wider discussion.

Has there been any talk about how the Node side of the project is going to address the coming ESM/ES6 module migration that (parts of) the Node community is planning once Node 10 reaches end of life?

Specifically -- Node.js has added native support (no bundler needed) for the import statement and ESM/ES6 modules. These modules can't be monkey-patched in the same way that CommonJS modules can via require-in-the-middle. This means a lot of auto-instrumentation is going to stop working as packages begin to adopt ESM modules and end users begin to use them.
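
For readers less familiar with the CommonJS side, here is a minimal sketch of the require-in-the-middle style of patching that instrumentations rely on today (the exact export shape of require-in-the-middle differs between versions, so treat this as illustrative only):

```js
// Minimal require-in-the-middle sketch: intercept require('http') and wrap
// one of its functions. This only works for CommonJS; native ESM imports
// bypass require() entirely and never hit this hook.
const Hook = require('require-in-the-middle')

Hook(['http'], (exports, name, basedir) => {
  const originalGet = exports.get
  exports.get = function patchedGet(...args) {
    console.log(`[instrumentation] ${name}.get called`)
    return originalGet.apply(this, args)
  }
  return exports
})

// Any later require('http') now receives the patched module.
require('http').get('http://example.com', (res) => res.resume())
```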

Node's ESM modules have a loader API which might offer a way to monkey-patch modules in the same way, but I don't know if anyone's done any public research on that yet. (waves in the direction of @michaelgoin for reasons)
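
For context, here is roughly what a custom loader looks like. This is a sketch only: the hook names and signatures below match one iteration of the experimental API (the newer resolve/load shape), and they have changed between Node versions, so don't treat it as a stable recipe.

```js
// loader.mjs -- sketch of an ESM loader that could intercept module loading.
// Run with: node --experimental-loader=./loader.mjs app.mjs
export async function resolve(specifier, context, nextResolve) {
  // A tracing loader could redirect specific specifiers to wrapper modules here.
  return nextResolve(specifier, context)
}

export async function load(url, context, nextLoad) {
  const result = await nextLoad(url, context)
  if (result.source && url.includes('/node_modules/express/')) {
    // In principle the source text could be rewritten before evaluation.
    // Tools like import-in-the-middle instead generate a wrapper module that
    // re-exports the original through a mutable proxy.
  }
  return result
}
```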

In addition to the question of instrumentation, there's the question of what sort of modules this project will publish to NPM (CommonJS, ESM, both) and whether the project's individual modules will ship with the type attribute set or not.

Finally, FWIW, these questions come out of a discussion with @delvedor and @mcollina about fastify's plans for ESM support; @mcollina pulled together a list of the relevant Node.js core working groups and issues for folks interested in getting involved on that side of things.

@Flarna
Member

Flarna commented Feb 19, 2021

The main problem with the loader API is that it is still experimental, and it doesn't seem that this will change soon.

The current limitations are severe, e.g. only one loader per process is allowed. Monitoring tools like OTel should not eat up that unique resource.

I think the OTel project can't do much about this as it has to rely on the infrastructure available. Unfortunately that is only monkey patching for now - which seems to be "broken" by ESM. I remember some Node PRs to improve the loader APIs, but some of them were abandoned because of endless discussions. But I think this is more a topic for the Node repo than this one.

In short I fear that APM tools (incl Otel) and ESM will not fall in love soon.

@Flarna added the Discussion label (Issue or PR that needs/is extended discussion) on Feb 19, 2021
@Flarna
Member

Flarna commented Feb 22, 2021

Another option to avoid the need for monkey patching / loader hooks would be to avoid creating separate instrumentation packages.
Instead, integrate the OpenTelemetry API directly into the frameworks in question. If they reject such a dependency, at least hooks/events could be added to allow an instrumentation to get the needed data via a public, documented interface.

@dyladan
Member

dyladan commented Feb 22, 2021

Another option to avoid the need for monkey patching / loader hooks would be to avoid creating separate instrumentation packages.
Instead, integrate the OpenTelemetry API directly into the frameworks in question. If they reject such a dependency, at least hooks/events could be added to allow an instrumentation to get the needed data via a public, documented interface.

This is the best case for us, but it will most likely be a slow process. It is already beginning in some places though, e.g. typeorm/typeorm#7406

@mcollina

FYI, Fastify offers integration points to gather all sorts of information. There are a few modules that provide OpenTelemetry integration:

https://www.npmjs.com/package/fastify-opentelemetry

https://www.npmjs.com/package/@autotelic/fastify-opentelemetry
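
As a usage illustration only (option names and defaults are the plugin's business; check its README, this is not an official snippet), registering one of those plugins looks roughly like:

```js
// Rough sketch: enabling OpenTelemetry-aware request hooks in a Fastify app
// via one of the community plugins linked above. Options are omitted on
// purpose; consult the plugin documentation for the real configuration surface.
const fastify = require('fastify')()

fastify.register(require('@autotelic/fastify-opentelemetry'))

fastify.get('/', async () => ({ hello: 'world' }))

fastify.listen({ port: 3000 }, (err) => {
  if (err) throw err
})
```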

@weyert
Contributor

weyert commented Feb 22, 2021

Another option to avoid the need for monkey patching / loader hooks would be to avoid creating separate instrumentation packages.
Instead, integrate the OpenTelemetry API directly into the frameworks in question. If they reject such a dependency, at least hooks/events could be added to allow an instrumentation to get the needed data via a public, documented interface.

This is the best case for us, but it will most likely be a slow process. It is already beginning in some places though, e.g. typeorm/typeorm#7406

Yeah, I created that ticket. We are actively using TypeORM at work and trying to find out if TypeORM would be interested in me making a pull request to build in OpenTelemetry support.

@michaelgoin
Contributor

Think I got pinged cause I'd done some early experimental module loader investigation and had proven out some concepts.

I was also going to mention that I thought an early goal of these efforts (even before the merge into OpenTelemetry) was to encourage framework authors to integrate OpenTelemetry, given that it is vendor-agnostic and hopefully a future go-to standard. So perhaps focusing there is a better use of time, as others have said before me. (Along with the integration points approach, although that then requires further instrumentation writing.)

If auto-instrumentation is going to be a long-term goal, then I do think a project like this is perfect for helping Node evolve its story for providing good support for instrumenting ESM. Really, that applies to all APM/observability vendors and their various needs, but certainly to this project as an open standard.

@astorm
Contributor Author

astorm commented Feb 22, 2021

Thanks for the background and the thoughts @Flarna -- they're appreciated and useful. I agree that an optimal end goal for the project is to have enough influence and sway that library and framework authors welcome adding instrumentation directly to their libraries and frameworks. I also agree that ESM loaders aren't quite there yet and aren't something oTel can do by itself with the current ESM implementation in Node.js.

That said -- right now there aren't many libraries with OTel instrumentation built in, and monkey patching allows anyone to write instrumentation for a framework. It seems like this patching of require is something that, at least in the medium term, is going to be needed if this project wants to see wider adoption. My assumption is there's a need for similar monkey patching of modules loaded via the native import statement in Node.js, and that the loader mechanism is what the Node.js core team has proposed as a replacement for directly patching require.

It's fair to say that it's ultimately up to Node's core team to provide a replacement for this lost functionality -- but it's equally fair to say that Node's core can't do this alone. They need both feedback and direct help from APM/Observability vendors. It seems like OpenTelemetry/CNCF are ideally situated to be a conduit for this.

This is just me thinking out loud, but I wonder if the next best step would be for someone on the OpenTelemetry side of the fence to develop an ESM loader prototype that could be used in place of require-in-the-middle. With a prototype written we'd have a piece of code to bring to Node's core team that would highlight the issues with the current loader implementation and provide focus for future work on ESM loaders.

Also -- speaking of future work on ESM loaders, based on these comments it sounds like Node.js core contributor @GeoffreyBooth is up for reviving a working group and mentoring folks who might be interested in working on these features and bringing some of the debates @Flarna mentioned to a close. This sounds like a great opportunity for anyone that's wanted to get involved and influence Node.js core development.

@astorm
Contributor Author

astorm commented Feb 22, 2021

Think I got pinged cause I'd done some early experimental module loader investigation and had proven out some concepts.
No passive voice needed @michaelgoin, that's exactly why I pinged you. :)

Do you think that work is something that New Relic would be willing/able to share? It seems like it might be a good basis for the prototype I mentioned above (if OpenTelemetry decides an ESM loader is needed/appropriate).

@Flarna
Member

Flarna commented Feb 22, 2021

@astorm A while ago I ran across this, which seems to be what you are looking for. But I'm not sure if it fits the current loader hooks or is based on an older variant.

And I agree that something like monkey patching will be needed. My main point was that once the OTel API is GA we will have something in place that is not experimental and is open source to approach libraries with. We should at least try to get monitoring support into the libraries instead of blindly continuing the error-prone monkey patching approach.

@michaelgoin
Contributor

@astorm I don't think I'd have issues getting NR to share, but it looks like that article @Flarna shared is pretty similar to what we'd experimented with (leveraging params, etc.). If we do find a need, I can dig back through the old notes and see if there's anything additionally useful to surface. We also experimented pretty early on, so I'm unsure how out of date things are. Or I could likely carve out time to help prototype (and document), as desired.

One thing that was the case back when I first researched, which may or may not still be the case, is that CJS modules imported into ESM had fully resolved paths, so our module/instrumentation name matching was off for auto-instrumentation even with the common loader. We have not gotten to bridging that gap yet and I'm unsure if that will be a problem for this project or not (I can't remember how the auto-instrumentation is hooked up anymore). But that may be a small stop-gap-ish thing that may want to get solved regardless, if it's an issue.

On a further note, I'm confident NR would be willing to help instrument willing frameworks out there to help with Open Telemetry's success.

@chuanqisun

Might be relevant: I was using a downstream library built with OpenTelemetry and it requires excessive configuration to make the rollup bundler happy with OpenTelemetry. But I was still stuck because I was using vite, which is built on top of rollup and conflicts with the vanilla rollup config. Then I cracked open the Azure SDK and realized it has written a monkey patch fix for OpenTelemetry's export issue.

So I wonder if OpenTelemetry can do something upstream so that more bundlers can directly consume it. I also wonder if the folks from the Azure SDK would be able to provide some help, as they have already spent so much effort hacking around it. Here is their PR

@lizthegrey
Member

https://github.com/DataDog/dd-trace-js/pull/1582/files is the equivalent change that was done to dd-trace-js.

I was bitten by this breaking for some (but not all) of my instrumentation when I moved to ESM: I imported knex (which is CJS), which in turn loads postgres, and that instrumentation kept working. But the code loaded directly from my ESM main module (express, http) didn't get automatically patched because the hook was never called.

Discussion here: https://cloud-native.slack.com/archives/C01NL1GRPQR/p1632260506086900

@dyladan
Member

dyladan commented Sep 22, 2021

Seems like the changes are relatively simple to make.

They create a small shim that returns import-in-the-middle (IITM) only on supported Node versions, like this:

'use strict'

const semver = require('semver')
const logger = require('./log')

if (semver.satisfies(process.versions.node, '^12.20.0 || >=14.13.1')) {
  module.exports = require('import-in-the-middle')
} else {
  logger.warn('ESM is not fully supported by this version of Node.js, ' +
    'so dd-trace will not intercept ESM loading.')
  module.exports = () => {}
}

Then when they call hook they also call esmHook:

    hook(instrumentedModules, this._hookModule.bind(this))
    esmHook(instrumentedModules, this._hookModule.bind(this))

Simple in theory, though I'm not sure if this would have other impacts we're not aware of at the moment. Particularly concerning is that the documentation for the hooks API specifically states it should not be used: https://nodejs.org/api/esm.html#esm_hooks

@lizthegrey
Member

"experimental" heh

so in order to enable import-in-the-middle, it requires running Node with --loader, so users will need to specify that flag when starting the process to opt in; otherwise I believe registering an override with import-in-the-middle will be a no-op, given the hook won't be called at all.

but if we don't implement this, it becomes an area of missing features/lack of parity I fear :)
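
To make the opt-in flow described above concrete, here is a rough sketch of what the import-in-the-middle side could look like. The hook export shape and loader entry point are taken from IITM's docs as I remember them, so verify against the current release before relying on this.

```js
// Sketch: hooking an ES module with import-in-the-middle. The process must be
// started with the loader flag, e.g.
//   node --experimental-loader=import-in-the-middle/hook.mjs app.mjs
// otherwise the callback below never fires and this is effectively a no-op.
const Hook = require('import-in-the-middle')

Hook(['express'], (exported, name, baseDir) => {
  // `exported` is backed by a proxy, so its properties can be replaced even
  // though real ESM namespace bindings are immutable.
  const original = exported.default
  exported.default = function patchedExpress(...args) {
    // create a span / attach instrumentation here
    return original.apply(this, args)
  }
})
```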

@mcollina

Supporting ESM is going to be a moving target for a year or so. I recommend following up on this and informing the loaders team of any blockers: https://github.com/nodejs/loaders.

(The API for loaders is going to change soon)

@dyladan
Member

dyladan commented Sep 22, 2021

If we implement this it can be done in the instrumentation package where we already wrap ritm exactly for this type of scenario. Implementation in one place would make it available to all instrumentations.

It seems we have at least a few options:

  1. Update our hook method so all instrumentations automatically get ESM support.
  2. Expose a second hook method with experimental ESM support that instrumentations can opt in to.
  3. Update our hook method to support ESM only when experimental ESM support is enabled somehow, such as via a config or env variable (a rough sketch follows below).
  4. Others?
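
A rough sketch of what option 3 could look like, modeled on the dd-trace shim above; the environment variable name is made up for illustration and is not an existing OpenTelemetry setting:

```js
// Hypothetical gating of the experimental ESM hook behind an env variable.
// OTEL_EXPERIMENTAL_ESM_HOOK is an invented name, used only to show the shape.
const ritmHook = require('require-in-the-middle')

let esmHook = () => {} // default: ESM support disabled, the hook is a no-op
if (process.env.OTEL_EXPERIMENTAL_ESM_HOOK === 'true') {
  esmHook = require('import-in-the-middle')
}

function hookModules(modules, onRequire) {
  ritmHook(modules, onRequire) // CommonJS require() path (today's behavior)
  esmHook(modules, onRequire)  // ESM import path, only active when opted in
}

module.exports = { hookModules }
```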

Supporting ESM is going to be a moving target for a year or so. I recommend to follow up on this and inform the loaders team of any blockers: nodejs/loaders.

That's roughly what @Flarna said earlier. There are still some quite severe restrictions on ESM loaders. In a quick scan of the documentation I can't see whether the restrictions @Flarna mentioned, like only allowing a single loader, have been lifted.

(The API for loaders is going to change soon)

When the API changes I assume IITM will be broken. The question becomes, in what way is it broken? If it simply fails to load ESM modules then it's not that big of a risk. If it causes a process crash, that is a huge risk

edit: it looks like chaining is "in progress" according to the loaders team https://github.com/nodejs/loaders/blob/main/doc/design.md#chaining-resolve-hooks

@weyert
Contributor

weyert commented Sep 22, 2021

My understanding is that diagnostics_channel would be the right way to approach this, but then you still have the chicken-and-egg problem as the package needs to instrument itself

@mcollina

My understanding is that diagnostics_channel would be the right way to approach this, but then you still have the chicken-and-egg problem as the package needs to instrument itself

I have recently added this to both fastify and undici.

@vmarchaud
Member

My understanding is that diagnostics_channel would be the right way to approach this, but then you still have the chicken-and-egg problem as the package needs to instrument itself

I don't think so. diagnostics_channel is useful for some instrumentations (ones that only create spans, like for databases), but as soon as we need to propagate context (across process boundaries) it's not sufficient.

@mcollina

this is not strictly true. In Fastify we only publish when we create a new server - mainly for folks to instrument in their own way.

@weyert
Contributor

weyert commented Sep 23, 2021

I am wondering which is easier: convincing package maintainers to instrument with diagnostics_channel or with @opentelemetry/api. Personally, I would prefer the latter :D

@dyladan
Member

dyladan commented Sep 23, 2021

There's no real reason a package maintainer couldn't support both

@astorm
Contributor Author

astorm commented Sep 24, 2021

Thanks all for reviving this old issue, the original poster (me) appreciates it. It seems like there's a great opportunity here for collaboration between APM vendor engineers (via OpenTelemetry) and the node core engineers who've noticed that APM vendors aren't always as involved as they'd like :)

@mcollina it sounds like @dyladan's main concern is about the stability of the current loader hooks implementation and what happens when it's next revved.

When the API changes I assume IITM will be broken. The question becomes, in what way is it broken? If it simply fails to load ESM modules then it's not that big of a risk. If it causes a process crash, that is a huge risk

@mcollina as the (possibly reluctant) thread participant with node core project ties, would the node core team be able to suggest something that would alleviate this concern? Basically: "how can we use the current loader(s) API (either through import-in-the-middle or some other way) and be sure that an upgraded Node wouldn't crash the application?" I don't think the experimental label is a problem (makes long eye contact with the experimental async_hooks module that most APM vendors including OTel rely on), but being able to know loader hooks have the sort of API stability that institutional APM customers are looking for is important. @mcollina if you're not the person who can help with this could you suggest someone who is?

@bengl hello! Looping you in to this discussion re: import-in-the-middle and loader hooks. If you have the time/attention (no worries if not), do you have any thoughts on the above, and what sort of contract you're providing with import-in-the-middle? Put another way, will import-in-the-middle blow up when the loader hooks API is next revved, or have you taken steps to make sure it won't?

@Flarna
Member

Flarna commented Sep 24, 2021

Regarding loaders: there is a complete rework ongoing and it will take a while. The first step has already landed on master and I assume it will break import-in-the-middle.
There is a dedicated loaders team in Node core working on this (see https://github.com/nodejs/loaders/). I would not expect loaders to get out of experimental that soon.

Please note also that these loaders are a pure Node.js thing and not applicable/present in e.g. browsers.

Regarding diagnostics channel vs OTel: diagnostics channel is by far more generic as it allows more consumers and it doesn't specify what consumers should do with the data. OTel on the other hand has a well specified export chain.
Note also that diagnostics channel is still marked as experimental and only available in recent Node.js versions (14.17.0 and 16) - it's not available in e.g. browsers.

@bengl

bengl commented Sep 24, 2021

@dyladan

When the API changes I assume IITM will be broken. The question becomes, in what way is it broken?

Modules will fail to load and apps will likely crash. This will be fixed very soon. See below.

@astorm

Put another way, will import-in-the-middle blow up when the loader hooks API is next revved, or have you taken steps to make sure it won't?

The change currently on Node.js master replaces a few hooks with a single one. We'll make that change (very soon) in import-in-the-middle, supporting both the old hooks and the new one. Our plan is to remain up-to-date with whatever's happening in Node.js core, supporting as far back as we can. We'll take any new changes into account, including upcoming loader composability. If needed, we'll switch based on Node.js version, but for now I think we'll be fine just exporting all the old and new hooks.

If folks want to help out in future-proofing IITM, PRs are most certainly welcome. 😄

@weyert

I am wondering which is easier: convincing package maintainers to instrument with diagnostics_channel or with @opentelemetry/api. Personally, I would prefer the latter :D

I would say the former. There are already examples of it in fastify and undici, and hopefully many more in the future. Like others have said, this enables OTel and APM vendors, and other kinds of observability tools too, e.g. security. The good news for OpenTelemetry is that it should be straightforward to use library-provided diagnostics_channel events where available, without monkey-patching and heavy-handed tools like RITM/IITM.

We're in the early stages of using diagnostics_channel to underlie all of our instrumentation in dd-trace-js, hoping to push more of the pattern into the libraries we're instrumenting wherever/whenever possible.

@mcollina

Thanks all for reviving this old issue, the original poster (me) appreciates it. It seems like there's a great opportunity here for collaboration between APM vendor engineers (via OpenTelemetry) and the node core engineers who've noticed that APM vendors aren't always as involved as they'd like :)

It would be amazing to have more APM vendors involved in implementing and maintaining Node.js instrumentation.

@mcollina it sounds like @dyladan's main concern is about the stability of the current loader hooks implementation and what happens when it's next revved.

Unfortunately it's going to be a moving target and there is nothing we can do about it. Part of the instability is also due to certain corner cases that need to be clarified/specified by TC39.

Given Node.js's release cadence, changes will not be weekly but rather monthly/quarterly. I recommend that somebody from OpenTelemetry join the loaders team.

@mcollina as the (possibly reluctant) thread participant with node core project ties, would the node core team be able to suggest something that would alleviate this concern?

I think the best approach would be for a company to allocate somebody's time to keep everything up to date and even do part of the work.

Basically "how can we use the current loader(s) API (either through import-in-the-middle or some other way) and be sure that an upgraded node wouldn't crash the application. I don't think the experimental label is a problem (makes long eye contact with the experimental async_hooks module that most APM vendors including oTel rely on), but being able to know loader hooks has the sort of API stability that institutional APM customers are looking for is important. @mcollina if you're not the person who can help with this could you suggest someone who is?

I'm not the most informed person on the Loaders work as it is a spec- and detail-heavy project with a lot of complexity (I'm more of a distributed systems engineer).

Anyhow, I would expect Loaders to end up in a place similar to async_hooks: experimental, but mostly stable and reliable in LTS versions.

@astorm
Contributor Author

astorm commented Sep 25, 2021

@dyladan it sounds like @bengl, the maintainer of import-in-the-middle, is committed to staying on top of preventing and fixing crashes if/when the loaders API is revved. This seems (from my naive point of view) enough to make import-in-the-middle no more risky than the other third-party dependencies used by the OpenTelemetry project. Is this enough for you to be comfortable using it? If not, what would you need to hear in order to be comfortable moving forward with import-in-the-middle?

@Qard

Qard commented Oct 28, 2022

Apparently I wasn't mentioned in here before so entirely missed the discussion about diagnostics_channel, but I can provide some context as the creator of it.

The original reason for creating diagnostics_channel was specifically to side-step all these concerns about ESM loaders and the early talk about module records being immutable. By providing a simple emitter for named diagnostics events we could define some loose structure to data captured at our usual injection points while having the lowest impact possible. We can also capture this data directly in the source modules without any fragile monkey-patching.

Yes, diagnostics_channel on its own does not handle correlation between disparate events and that is by-design. The intent is for it to be paired with another tool like AsyncLocalStorage to provide the context to which to bind this data. Context awareness and gathering of data are two completely different tasks which are often conflated due to how often they are needed together. By breaking them down into their respective components though there's a whole lot more flexibility in how to compose the two and also enables using them in isolation in cases where control flow context is not relevant like active request metrics tracking.
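
A small sketch of that pairing, with made-up channel and field names (the diagnostics_channel and AsyncLocalStorage APIs themselves are real Node.js modules):

```js
// Producer/consumer sketch: the framework publishes named events, and the
// tracing layer subscribes to the channels it cares about, correlating them
// with context stored in AsyncLocalStorage. Names here are illustrative.
const dc = require('diagnostics_channel')
const { AsyncLocalStorage } = require('async_hooks')

const als = new AsyncLocalStorage()
const requestStart = dc.channel('myframework.request.start')

// Consumer side (e.g. a tracing SDK): pick only the channels that matter.
requestStart.subscribe((message) => {
  const ctx = als.getStore() // context established by whoever owns the flow
  console.log('request started:', message.url, 'trace:', ctx && ctx.traceId)
})

// Producer side (e.g. the framework): broadcast at its usual injection point.
function handleRequest(req) {
  als.run({ traceId: 'abc123' }, () => {
    if (requestStart.hasSubscribers) {
      requestStart.publish({ url: req.url })
    }
    // ...actual request handling...
  })
}

handleRequest({ url: '/hello' })
```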

Node.js core specifically opted against integrating a richer tracing API at the time because for one thing that's a much heavier proposition, but also the ecosystem had just got burned by OpenTracing supposedly being the "standard" and then OTel coming along with different ideas. Both OpenTracing and OpenTelemetry are fairly opinionated, which is fine but means there's a high risk of differing opinions emerging, resulting in forks or new designs, further churning the ecosystem. This would mean anything provided in Node.js core could quickly become obsolete and removing things from Node.js takes a long time. I would much rather have a lower-level, uncontroversial primitive to simply broadcast metadata and leave it up to higher-level tools to consume that data and decide what to do with it.

The combination of diagnostics_channel and AsyncLocalStorage is designed intentionally to feed well into richer tracing systems by selecting the appropriate channels to consume and structuring the data how the consumer sees fit using AsyncLocalStorage for context storage. You can think of diagnostics_channel much like static probe systems like DTrace and SystemTap, but in JavaScript--there's a bunch of pre-defined channels you could listen to, if you're interested, but you can pick and choose which are relevant to your particular scenario. If we integrated a tracing library more directly then we would again need to find a way to reduce the overall data stream down to the subset a given consumer actually feels is relevant, which then creates a ton more complexity with questions like how do we reformulate the tree if we want to take spans out of it? There's a lot of complexity added by filtering at the higher-level where structure has already been imposed. It's much easier and less resource-intensive to filter at the lower-level of just selectively listening to interesting channels.

@djMax

djMax commented Oct 28, 2022

^^ 1000 times that. My endorsement is worth precisely zero, but that approach gets it without reservation.

Has there been any thought about channel names and discovery? I couldn't find it.

@Qard

Qard commented Oct 28, 2022

It has largely been left up to each library to choose its own naming scheme, though the docs do suggest a module prefix to avoid collisions, and suggest providing docs for each of their channels along with the input object shapes. There are docs for the limited set of events currently emitted by Node.js core though: https://nodejs.org/api/diagnostics_channel.html#built-in-channels

@astorm
Contributor Author

astorm commented Oct 28, 2022

Thanks for dropping by @Qard -- I'm not sure I'd seen the case/theory-of-operation for the diagnostics channel laid out that plainly and well before. (Y'all should do more blogging/evangelizing to folks not cozy with the foundation and core project 😸)

To sum up what's been said by Stephen and others, a diagnostic channel approach would require library writers to self-instrument their code if they wanted it to be instrumented/traced. Legit -- but orthogonal to this particular issue.

@Qard

Qard commented Oct 28, 2022

Yes, I intend on being more vocal about it going forward. We kind of kept a bit quiet about it for a while to give it a trial period to prove the concept. It seems to have succeeded at that, so we're going to start pushing it more publicly going forward. :)

Also, a small note that the diagnostics_channel approach doesn't exactly require that libraries self-instrument. The monkey-patching approach remains usable for the old libraries that might never gain the diagnostics_channel changes necessary, and you can simply make patches that emit diagnostics_channel events. The benefit of using channels with patches is that there's a clear and easy way to turn individual instrumentations on or off, and it also provides a nice clean path to move instrumentation out of APMs with low effort. This is the approach Datadog has taken and we're quite satisfied with it.
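
A sketch of that hybrid shape, with an invented module and channel name; the point is that the patch only publishes, and the tracing layer decides separately what to do with the event:

```js
// Monkey-patch whose only job is to broadcast a diagnostics_channel event.
// 'somedb' and the channel name are placeholders, not real packages/contracts.
const dc = require('diagnostics_channel')
const Hook = require('require-in-the-middle')

const queryStart = dc.channel('somedb.query.start')

Hook(['somedb'], (exports) => {
  const originalQuery = exports.query
  exports.query = function patchedQuery(sql, ...args) {
    if (queryStart.hasSubscribers) {
      queryStart.publish({ sql })
    }
    return originalQuery.apply(this, [sql, ...args])
  }
  return exports
})
```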

@astorm
Contributor Author

astorm commented Oct 28, 2022

As much as I've appreciated the discussion here, as the person who first opened this issue a year and a half ago I'd like to pull things back on track and just sum up where things are so that folks googling this know the state of play.

  1. People want to use import in their Node.js programs without relying on compilers/bundlers

  2. Native Node ESM modules and Node's "load a CommonJS module in node_modules via import" functionality provide a way to do that

  3. However, native ESM modules and Node's "load a CommonJS module in node_modules via import" functionality don't allow a module to be redefined at runtime, which means none of the third-party library tracing instrumentation modules provided by OpenTelemetry (or others) can work with an ESM-based Node.js program.

  4. This blocks folks who are required to have a tracing library from using ESM. While it's true they could manually instrument using the API/SDK, the expectation in industry/the market is that a tracing solution will pick up things like web frameworks and db calls on its own

  5. Node's loader hooks feature allows a module's source to be redefined prior to runtime

  6. The import-in-the-middle module uses loader hooks to provide an experience that seems like monkey patching and provides similar functionality (it actually provides a proxy object -- the implementation's pretty neat/wild)

  7. At least one other APM/Observability vendor uses import-in-the-middle for their wrapping, which makes it (by some definitions) production acceptable

  8. Therefore, import-in-the-middle (or some other loader-hooks-based solution) appears to be the most expedient path forward to allow OpenTelemetry users to write "ESM first" programs and still take advantage of the third-party tracing instrumentation provided by the OpenTelemetry project. (See packages with instrumentation in the name for the sort of functionality this affects: https://github.com/open-telemetry/opentelemetry-js/tree/a3e40da38aafd0809a5a9702bd181c4d4bebfcae/experimental/packages and https://github.com/open-telemetry/opentelemetry-js-contrib/tree/20767c4fffee34bc51392894001bbb667576e91d/plugins/node ).

  9. Work to do this was begun multiple times (feat(instrumentation): support ESM in node via import-in-the-middle #2763 feat(instrumentation): support ESM in node via import-in-the-middle #2640) but petered out for reasons. The latest and most up to date attempt is at feat(instrumentation): add support for esm module #2846

  10. My general read on the situation is that the OpenTelemetry maintainers would welcome this, but their limited resources are being applied elsewhere and there's no plan to put this on the schedule. If folks were interested in giving it a try themselves, hopping on the CNCF slack and saying hi in #otel-js is probably the best place to start

@vmarchaud
Member

The key point of that system is called pm2; it is from a company called Keymetrics; it is the de facto standard at present in Node.js for metrics and instrumentation, at least in the real world, and it works

Well, I've maintained pm2 and worked as the CTO at Keymetrics between 2016 and 2019, and I wrote the library used to generate metrics/traces (https://github.com/keymetrics/pm2-io-apm/graphs/contributors), so I think I'm well suited to say that it's not magical, doesn't support "petabyte" scale, and uses both require-in-the-middle (https://github.com/keymetrics/pm2-io-apm/blob/master/package.json#L91) and OpenCensus (the project that pre-dates OpenTelemetry): https://github.com/keymetrics/pm2-io-apm/tree/master/src/census/plugins.
I started to work on OpenTelemetry after I left in 2019 (just when the OTel SIG JS was formed in June) because I saw the way forward as vendors sharing client-side implementations. After I left, both Keymetrics and pm2 were left without much maintenance (clearly visible in the activity graphs of the repos), so they weren't kept up to date with more modern approaches, including OTel. I might add that pm2 doesn't support ESM either.

My general read on the situation is that the OpenTelemetry maintainers would welcome this, but their limited resources are being applied elsewhere and there's no plan to put this on the schedule. If folks were interested in giving it a try themselves, hopping on the CNCF slack and saying hi in #otel-js is probably the best place to start

I reiterate that we do welcome this. The most advanced PR was mine (because I based it on both prior PRs), #2846; the current to-do is:

@weyert
Contributor

weyert commented Oct 30, 2022

Hmmm, I was now reviewing the current project state of OpenTelemetry overall and I see nothing at all. If I get it right and am not missing any important code, then what exactly does this offer me? It defines function naming patterns for my loggers that I later turn into insights like traces?

What do you mean exactly? OpenTelemetry offers a standard for telemetry, see the spec. This repo offers similar functionality to your suggested pm2, but in a vendor-agnostic way. I can send my metrics and traces to Google Trace by using their otel-js exporters, or I could use any other tracing/telemetry vendor, like Dynatrace, that supports OTLP directly.

OpenTelemetry, like pm2, instruments http and other popular packages; see the opentelemetry-js-contrib repo for those.

@frank-dspeed

I talked to some engineers and they informed me that you may be missing some information for your considerations: Firefox's SpiderMonkey (WarpOracle) already has structural instrumentation and adds it by default, while V8 + Chrome have console.time("zone") and console.timeEnd("zone") as well as chrome://tracing to expose the data in a nicely viewable way.

They strongly suggest simply wrapping that on demand, and also not calling the console functions directly but abstracting them so that you can short-circuit them:

// during development: delegate to the console timing APIs
const trace = (zone) => console.time(zone)
const traceEnd = (zone) => console.timeEnd(zone)

// in production: swap in no-op implementations
const trace = (zone) => { /** NoOp */ }
const traceEnd = (zone) => { /** NoOp */ }

@legendecas
Member

legendecas commented Oct 30, 2022

@frank-dspeed you send a lot of long messages to this thread, and I have a hard time following what your goals or questions are most of the time.

It's very difficult for me to determine if everything you're posting is even on the topic of OpenTelemetry Node.js ESM integration, and the frequency of your posts could be discouraging others from participating.

I'd like to ask you to be more considerate in how much you expand and how much context you give. Thanks!

@frank-dspeed

@legendecas I also have a hard time; I can't even assert what belongs where, and that shows me that we need some more thinking before coding. But I got your point, and I will clean this issue up for you. I also have good news: I invested another whole day evaluating whether this has any reasonable future in the next 3 years, and the result is that you will see no messages from me or my affiliates in this project for the next few years.

@gabrielemontano

@vmarchaud do you have any updates on this thread?

@vmarchaud
Member

Nothing new since the discussion on the PR here: #2846. I didn't have the time to continue it since October, though.

@schickling

👏 👏 👏 @pichlermarc

@pichlermarc
Member

@schickling I only merged the PR 😅
All the work was done by @JamieDanielson @pkanal 🙂
