
Add license, contributing, fix coverage script #2

Merged: 1 commit merged into master from add-license on Nov 8, 2016

Conversation

yurishkuro
Member

No description provided.

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling affe078 on add-license into 66ef10f on master.

@yurishkuro yurishkuro merged commit 1071221 into master Nov 8, 2016
@yurishkuro yurishkuro deleted the add-license branch December 16, 2016 20:29
spongedu added a commit to spongedu/jaeger that referenced this pull request Jun 13, 2019
@destinygujun destinygujun mentioned this pull request Jun 15, 2022
bigfleet pushed a commit to bigfleet/jaeger that referenced this pull request Feb 16, 2023
yurishkuro pushed a commit that referenced this pull request Aug 18, 2023
## Which problem is this PR solving?
Resolves #4680

## Description of the changes
- Add an opt-in flag `--query.enable-tracing` that enables tracing for the jaeger-query component (a sketch of how such a flag can gate tracer setup follows this list).
- The jaeger all-in-one component does not expose this flag, since traces are emitted to port 4317 by default, which all-in-one already listens on.
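
For illustration only, a minimal hypothetical Go sketch (not the actual Jaeger code) of how such a flag can gate tracer setup: with the flag off no exporter is created, so no connection attempts to `localhost:4317` are made.

```go
// Hypothetical sketch: an opt-in flag gates OTLP trace export. Flag name and
// wiring are illustrative; the real jaeger-query integration differs.
package main

import (
	"context"
	"flag"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	enableTracing := flag.Bool("query.enable-tracing", false, "emit the service's own traces")
	flag.Parse()

	if !*enableTracing {
		// The global OTel tracer provider defaults to a no-op, so doing
		// nothing here means no spans are recorded or exported.
		log.Println("tracing disabled")
		return
	}

	// Export spans over OTLP/gRPC; the exporter defaults to localhost:4317,
	// which all-in-one listens on. WithInsecure assumes a plain-text local endpoint.
	ctx := context.Background()
	exp, err := otlptracegrpc.New(ctx, otlptracegrpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer tp.Shutdown(ctx)
	otel.SetTracerProvider(tp)
	log.Println("tracing enabled")
}
```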

## How was this change tested?

```
# Run the jaeger-query component with tracing enabled and verify that connection errors appear in stdout (nothing is listening on localhost:4317 yet).
$ SPAN_STORAGE_TYPE=memory go run -tags ui ./cmd/query/main.go --query.enable-tracing
...
{"level":"info","ts":1692363754.9049716,"caller":"grpc/clientconn.go:1301","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
{"level":"info","ts":1692363754.9050152,"caller":"grpc/clientconn.go:1414","msg":"[core][Channel #1 SubChannel #2] Subchannel picks a new address \"localhost:4317\" to connect","system":"grpc","grpc_log":true}
{"level":"warn","ts":1692363754.9058733,"caller":"grpc/clientconn.go:1476","msg":"[core][Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"localhost:4317\", ServerName: \"localhost:4317\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:4317: connect: connection refused\"","system":"grpc","grpc_log":true}
{"level":"info","ts":1692363754.9067123,"caller":"grpc/clientconn.go:1303","msg":"[core][Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:4317: connect: connection refused\"","system":"grpc","grpc_log":true}
...

# Run jaeger-query component with tracing disabled and verify that the connection errors no longer appear.
# Of course, we can't see traces in Jaeger UI because there's nothing to receive the traces.
$ SPAN_STORAGE_TYPE=memory go run -tags ui ./cmd/query/main.go

# Start an all-in-one instance just as a quick and dirty way to bring up an in-memory jaeger stack to
# receive traces from jaeger-query
$ make run-all-in-one

# Run jaeger-query as a separate component, listening on different ports from all-in-one to avoid port binding collisions.
$ SPAN_STORAGE_TYPE=memory go run -tags ui ./cmd/query/main.go --query.enable-tracing --query.grpc-server.host-port :17685 --query.http-server.host-port :17686 --admin.http.host-port :17687

# Open localhost:17686 in a browser and refresh a few times to emit traces to jaeger all-in-one.
```

Confirmed that `jaeger-query` is visible and contains traces:

<img width="1572" alt="Screenshot 2023-08-18 at 11 45 26 pm"
src="https://github.com/jaegertracing/jaeger/assets/26584478/a348e804-6d37-49f9-9d9f-f73854e9b6bc">


## Checklist
- [x] I have read
https://github.com/jaegertracing/jaeger/blob/master/CONTRIBUTING_GUIDELINES.md
- [x] I have signed all commits
~- [ ] I have added unit tests for the new functionality~
- [x] I have run lint and test steps successfully
  - for `jaeger`: `make lint test`
  - for `jaeger-ui`: `yarn lint` and `yarn test`

---------

Signed-off-by: albertteoh <see.kwang.teoh@gmail.com>
yurishkuro added a commit that referenced this pull request Sep 27, 2023
## Which problem is this PR solving?
- Third prototype of "Jaeger-v2"
- Another alternative approach to #3500

## Description of the changes
- Adds a new binary `jaeger-v2` built on the OTEL Collector framework (a hypothetical bootstrap sketch follows the command below)
- A minimal set of extensions is included, to mimic what `jaeger-collector` normally has
- It will combine all previous functions of agent/collector/query in one binary, controllable via a config file

```
$ go run -tags=ui ./cmd/jaeger-v2 --config ./cmd/jaeger-v2/config.yaml
```
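
For context, the following is a hypothetical bootstrap sketch, not the jaeger-v2 code: it shows how a binary can be assembled on the OTEL Collector framework by registering component factories and delegating to the collector's command, which handles `--config` and the service lifecycle. Package paths, field names, and signatures are assumptions based on a recent `go.opentelemetry.io/collector` release and may not match what this PR uses.

```go
// Hypothetical bootstrap sketch, NOT the jaeger-v2 code.
package main

import (
	"log"

	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/confmap"
	"go.opentelemetry.io/collector/confmap/provider/envprovider"
	"go.opentelemetry.io/collector/confmap/provider/fileprovider"
	"go.opentelemetry.io/collector/exporter"
	"go.opentelemetry.io/collector/exporter/otlpexporter"
	"go.opentelemetry.io/collector/otelcol"
	"go.opentelemetry.io/collector/receiver"
	"go.opentelemetry.io/collector/receiver/otlpreceiver"
)

// components registers the factories this distribution ships with; a
// jaeger-v2 style binary would also register its own extensions
// (query, storage, ...) here.
func components() (otelcol.Factories, error) {
	var (
		f   otelcol.Factories
		err error
	)
	if f.Receivers, err = receiver.MakeFactoryMap(otlpreceiver.NewFactory()); err != nil {
		return otelcol.Factories{}, err
	}
	if f.Exporters, err = exporter.MakeFactoryMap(otlpexporter.NewFactory()); err != nil {
		return otelcol.Factories{}, err
	}
	return f, nil
}

func main() {
	settings := otelcol.CollectorSettings{
		BuildInfo: component.BuildInfo{
			Command:     "jaeger-v2-sketch", // illustrative name only
			Description: "collector-based distribution sketch",
			Version:     "0.0.0",
		},
		Factories: components,
		// Recent collector versions require the confmap providers used to
		// resolve --config URIs to be listed explicitly.
		ConfigProviderSettings: otelcol.ConfigProviderSettings{
			ResolverSettings: confmap.ResolverSettings{
				ProviderFactories: []confmap.ProviderFactory{
					fileprovider.NewFactory(),
					envprovider.NewFactory(),
				},
			},
		},
	}
	// NewCommand returns a cobra command that parses --config and runs the service.
	if err := otelcol.NewCommand(settings).Execute(); err != nil {
		log.Fatal(err)
	}
}
```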

## Roadmap


https://docs.google.com/document/d/1s4_6VgAS7qAVp6iEm5KYvpiGw3h2Ja5T5HpKo29iv00/edit

## Design

* the ingestion and storage of traces will be done via standard OTEL Collector receiver/processor/exporter components
* jaeger-query and the UI are implemented as a `jaeger_query` extension (already working in this PR)

### Storage
In order to keep the flexibility of mixing & matching storage implementations, all backends can be configured via a `jaeger_storage` extension (we may need to add a `jaeger_metrics_storage` extension in the future). It might look like this:
```yaml
jaeger_storage:
  memory:  # defines Factory
    memstore:
      max_traces: 100000
  cassandra:
    cassandra_primary:
      servers: [...]
      namespace: jaeger
    cassandra_archive:
      servers: [...]
      namespace: jaeger_archive
```

The `jaeger_query` extension then references specific storage factories
by name:
```yaml
  jaeger_query:
    trace_storage: memstore
    dependencies: something_else
    metrics_store: prometheus_store
```
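
To make the by-name reference concrete, here is a minimal hypothetical Go sketch (interfaces and names are illustrative, not the actual Jaeger storage API) of a registry that a `jaeger_storage`-style extension could populate from its config and a `jaeger_query`-style extension could consult via its `trace_storage` setting.

```go
// Hypothetical sketch of a named storage-factory registry.
package main

import "fmt"

// Factory is a stand-in for a span-storage factory (memory, cassandra, ...);
// the real Jaeger interfaces are richer than this.
type Factory interface {
	Kind() string
}

// memoryFactory mimics the `memory: memstore: max_traces: 100000` entry above.
type memoryFactory struct{ maxTraces int }

func (memoryFactory) Kind() string { return "memory" }

// registry maps the user-chosen instance name from the config
// (e.g. "memstore", "cassandra_primary") to a concrete factory.
var registry = map[string]Factory{}

// Register is what a jaeger_storage-style extension would do while reading its config.
func Register(instanceName string, f Factory) { registry[instanceName] = f }

// Lookup is what a jaeger_query-style extension would call with the value
// of its `trace_storage` (or `metrics_store`) setting.
func Lookup(instanceName string) (Factory, error) {
	f, ok := registry[instanceName]
	if !ok {
		return nil, fmt.Errorf("no storage factory registered under name %q", instanceName)
	}
	return f, nil
}

func main() {
	Register("memstore", memoryFactory{maxTraces: 100000})

	f, err := Lookup("memstore")
	if err != nil {
		panic(err)
	}
	fmt.Printf("trace_storage %q resolved to a %s-backed factory\n", "memstore", f.Kind())
}
```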

It's not yet clear whether the `jaeger_query` extension should simply subsume the `jaeger_storage` extension: Query is the only component that needs this _generic_ access to storage, while things like exporters or the Kafka ingester (receiver) always deal with a single implementation (because the OTEL Collector pipeline allows connecting them to each other, which is not possible with extensions).

## Trade-offs
- This does not use the OTEL Collector builder (`ocb`). That means people won't be able to assemble a different version of the collector with other extensions.
- We may want to support `ocb` in the future, as it makes it easier to write custom in-process exporters for custom storage. That will require converting all the components into their own modules.

## Next steps

* [x] Get feedback from the community on the approach
* [x] Fully implement all-in-one by wiring receivers / exporters
correctly

## Open Questions
* How can we implement an all-in-one equivalent that can be run without any config file?
* Do we want
[healthcheckextension](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/extension/healthcheckextension/README.md)
to be included by default?
* Investigate startup error `2023-09-23T19:55:46.661-0400 warn
zapgrpc/zapgrpc.go:195 [core] [Channel #2 SubChannel #3] grpc:
addrConn.createTransport failed to connect to {Addr: ":16685",
ServerName: "localhost:16685", }. Err: connection error: desc =
"transport: Error while dialing: dial tcp :16685: connect: connection
refused" {"grpc_log": true}`

---------

Signed-off-by: Yuri Shkuro <github@ysh.us>
Signed-off-by: Yuri Shkuro <yurishkuro@users.noreply.github.com>
Co-authored-by: Albert <26584478+albertteoh@users.noreply.github.com>