
[workaround] Drivers from server #122

Merged: 10 commits merged into bblfsh:master from drivers-from-server on Apr 25, 2018

Conversation

@dpordomingo (Member) commented Apr 19, 2018

fix #89
blocks #123

I followed the suggestions made by @mcuadros at #89 (comment), so the backend will be able to call the gRPC service at https://godoc.org/github.com/bblfsh/bblfshd/daemon/protocol to get the list of drivers:

```go
protocol.NewProtocolServiceClient(conn). // conn is a *grpc.ClientConn
	DriverStates(ctx, &protocol.DriverStatesRequest{})
```

at `fetchDrivers(ctx, protocol.ProtocolServiceClient) ([]*protocol.DriverImageState, error)`
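
For illustration, a minimal sketch of that helper, assuming (per the godoc linked above) that the response's `State` field carries the driver list:

```go
import (
	"context"

	"github.com/bblfsh/bblfshd/daemon/protocol"
)

// fetchDrivers asks bblfshd, through its control gRPC service, for the
// state of every installed driver.
func fetchDrivers(ctx context.Context, client protocol.ProtocolServiceClient) ([]*protocol.DriverImageState, error) {
	resp, err := client.DriverStates(ctx, &protocol.DriverStatesRequest{})
	if err != nil {
		return nil, err
	}
	return resp.State, nil
}
```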

Testing it:

Run bblfshd:

```bash
$ docker run --privileged -d \
    -p 9432:9432 \
    -p 9433:9433 \
    --name bblfsh \
    bblfsh/bblfshd:v2.4.3 \
    -ctl-network=tcp \
    -ctl-address=0.0.0.0:9433
```

And the dashboard:

```bash
$ docker run -d \
    -p 8080:80 \
    --link bblfsh \
    --name bblfsh_dashboard \
    bblfsh/dashboard:v0.5.0 \
    -bblfsh-addr=bblfsh:9432 \
    -bblfshctl-addr=bblfsh:9433
```
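
To sanity-check the control endpoint from Go rather than through the dashboard, something like the following should work, reusing the `fetchDrivers` sketch above (plain TCP with no TLS on the control port, hence `WithInsecure`; the address assumes the port mapping from the `docker run` above):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/bblfsh/bblfshd/daemon/protocol"
	"google.golang.org/grpc"
)

func main() {
	// Dial the control endpoint exposed by -ctl-network/-ctl-address above.
	conn, err := grpc.Dial("localhost:9433", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// fetchDrivers is the helper sketched earlier in this PR description.
	drivers, err := fetchDrivers(ctx, protocol.NewProtocolServiceClient(conn))
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range drivers {
		log.Println(d)
	}
}
```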

Drivers can then be installed or listed by adding the control endpoint parameters to the bblfshctl command:

```bash
$ docker exec -it bblfsh \
    bblfshctl driver install --recommended --ctl-endpoint=tcp --ctl-address=localhost:9433
$ docker exec -it bblfsh \
    bblfshctl driver list --ctl-endpoint=tcp --ctl-address=localhost:9433
```

@dpordomingo self-assigned this Apr 19, 2018
@dpordomingo requested review from smacker and bzz on April 19, 2018 15:11
@smola (Member) commented Apr 19, 2018

I guess this is a temporary fix until we get this functionality implemented in the clients, right?

Related: bblfsh/scala-client#68

@dpordomingo force-pushed the drivers-from-server branch 4 times, most recently from 3123fc9 to b38ac01 on April 19, 2018 17:15
@bzz (Contributor) commented Apr 20, 2018

@smola:

> I guess this is a temporary fix until we get this functionality implemented in the clients, right?

exactly! bblfsh/sdk#253

```js
export const load = () => (dispatch, getState) => {
  dispatch({ type: LOADING });
  const { options: { customServer, customServerUrl } } = getState();
  const from = customServer ? customServerUrl : undefined;
  // ...
```
A Collaborator commented on this snippet:

We might need a selector for it; I remember we already have such code in the codebase a few times.

@bzz (Contributor) commented Apr 20, 2018

Looks really good, 👍 @dpordomingo. We can deliver this feature, which makes adding new drivers easier, use this approach until the SDK is changed, and then refactor to the new one.

One question: is there a reason why it's now POST /drivers instead of GET?

@smacker (Collaborator) left a comment

LGTM.

But two points:

  1. Needs a rebase: commit 3cd4990 is named "Prepare the fetch drivers from server" but contains only an empty-line removal.
  2. Please try to split work into separate PRs in the future. The current PR contains:
  • makefile refactoring
  • adding vendoring
  • the code itself

@dpordomingo (Member, Author) commented Apr 20, 2018

I did the same as was done for /version (it sends the bblfsh server URL to the backend too).
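
(To make the POST explicit: a hypothetical handler sketch. The route shape mirrors /version, but the handler and field names below are illustrative, not the actual dashboard code.)

```go
import (
	"encoding/json"
	"net/http"
)

// driversHandler sketches POST /drivers: as with /version, the request
// body carries the bblfsh server address selected in the UI, which is
// why the route is a POST rather than a GET.
func driversHandler(w http.ResponseWriter, r *http.Request) {
	var req struct {
		ServerURL string `json:"server_url"` // illustrative field name
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// ...dial the control endpoint behind req.ServerURL and call fetchDrivers...
	w.WriteHeader(http.StatusOK)
}
```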

@smacker (Collaborator) commented Apr 20, 2018

codecov reports a -15.68% change in code coverage. Do you think you can add a few tests?

@dpordomingo (Member, Author)

Yes @smacker, I MUST add some tests to keep (at least) the original test coverage. I just wanted to gather some reviews before adding tests 👯‍♂️
I'll rebase and ping you once this is ready for a second pass.
Many thanks for your reviews!

@dpordomingo changed the title from [RFC] Drivers from server to [workaround] Drivers from server on Apr 24, 2018
@dpordomingo mentioned this pull request on Apr 24, 2018
@dpordomingo (Member, Author)

Thanks for your review;
@smacker you're right about the benefit of having a separate PR for:

  • the makefile refactor (aabbafc + 2e4e223)
  • the changes required by the issue itself (the rest of the commits)

Feel free to confirm that you prefer separate PRs, and I'll do so.


@rporres commented Apr 24, 2018

If I understand how this works, it will need changes in the way we deploy things if we want it to work properly. Driver versions are currently pulled from the values in the Helm deployment and are created inside the container. We can update them, but if the pod is recreated, the driver versions will revert to those of the Helm release, not whatever was installed afterwards.

We would need to attach a volume per pod and introduce logic to determine whether we want to deploy drivers or not, which will definitely cause trouble.

I don't see a good way of making this work. Please organise a meeting to discuss it.

@dpordomingo (Member, Author)
Thanks @rporres for taking a look at this!
It is not required to update the drivers automatically, only to deploy new ones.
I'll open a PR against the bblfsh-dashboard charts with the expected changes, and we can continue the discussion there.

@smacker (Collaborator) commented Apr 24, 2018

@dpordomingo I think we can keep this PR as it is for now. Just try to make smaller PRs in the future. (For example, the refactoring that makes things private, plus the comments, could be another PR; it would help with review.)

@mcuadros (Contributor)

@rporres, why a volume? The drivers can be downloaded inside the container. For me, every deploy of a new dashboard or bblfshd requires downloading all the repositories.

We should treat k8s, from now on and forever, as a stateless environment. As soon as we accept this, everything will go better.

@mcuadros (Contributor)

@dpordomingo also, taking care of the charts is Infrastructure's job.

BTW I don't know why this topic is being treated in this issue about a frontend thing; the server is the same and has been working like this for months. This change just brings to the Web a functionality that was already in the CLI.

@rporres commented Apr 24, 2018

@mcuadros I understood we wanted to update the driver versions from the dashboard. I was suggesting scenarios that could make that possible if we needed it. I very much agree that k8s should be as stateless as possible.

@bzz (Contributor) left a comment

LGTM

In the future, it would be easier to review if such work were split into multiple PRs.

@dpordomingo merged commit c401648 into bblfsh:master on Apr 25, 2018
dpordomingo added a commit that referenced this pull request Apr 25, 2018
@dpordomingo deleted the drivers-from-server branch on April 25, 2018 14:22
Successfully merging this pull request may close issue #89: Recover driver list from gRPC and not have a fixed list.