Static docker image name fails with custom build for other architectures #52

Closed
al-sabr opened this issue May 16, 2017 · 13 comments · Fixed by #53
Assignees

Comments

@al-sabr

al-sabr commented May 16, 2017

See : neunhoef/ArangoDBStarter#13

@al-sabr
Author

al-sabr commented May 16, 2017

This comes as output in the docker container:

docker volume create arangodb2 && \
    docker run -it --name=adb2 --rm -p 4005:4000 -v arangodb2:/data \
    -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
    --dockerContainer=adb2 --ownAddress= --join=

docker volume create arangodb3 && \
    docker run -it --name=adb3 --rm -p 4010:4000 -v arangodb3:/data \
    -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
    --dockerContainer=adb3 --ownAddress= --join=

@al-sabr
Author

al-sabr commented May 16, 2017

arangodb/arangodb-starter should actually be the image name of the parent container, or be dynamically assignable

@ewoutp
Contributor

ewoutp commented May 16, 2017

Note that this output is only there to advise you on what to do.
If you package the starter in another image, it should be no problem to use that image name instead (as long as the starter is the entrypoint).

Now we can probably detect the image name of the containing docker container and use that.
We'll look into that.

@ewoutp ewoutp self-assigned this May 16, 2017
@al-sabr
Author

al-sabr commented May 16, 2017

I don't understand — is this output not actually executed on the host?

Why do those commands have to be entered manually?

@ewoutp
Contributor

ewoutp commented May 16, 2017

The commands should be executed on different hosts to achieve high availability of your cluster.
If you want a local cluster for testing purposes (WITHOUT high availability), use the --starter.local option.
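For example, a single-machine test setup along the lines of the docker commands earlier in this thread might look like this (a sketch only; the volume and container names here are placeholders, and the flag combination is an assumption based on the commands quoted above):

```shell
# Sketch: single-machine test cluster, no high availability.
# "arangodb-local"/"adb-local" are placeholder names.
docker volume create arangodb-local && \
    docker run -it --name=adb-local --rm -p 8528:8528 -v arangodb-local:/data \
    -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
    --dockerContainer=adb-local --starter.local
```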

@al-sabr
Author

al-sabr commented May 16, 2017

I was asking myself why I need to do that manually, when the starter could send a command to the swarm cluster and tell the other nodes to run the commands...

I don't think this is 21st-century thinking... There is a lack of design in this solution.

If you can already communicate with Docker, then why not take advantage of the power of Docker Swarm?

@ewoutp
Contributor

ewoutp commented May 16, 2017

That would work when Docker is running in swarm mode,
but not when running servers with individual Docker instances.

@al-sabr
Author

al-sabr commented May 16, 2017

There is no point in running the starter with individual instances; its purpose is clustering and swarming.

@ewoutp
Contributor

ewoutp commented May 16, 2017

I disagree on that. There are plenty of companies that use lots of Docker servers without using Swarm; instead they use some other orchestration system.

That said, Docker Swarm could make an interesting addition to the starter. We'll discuss what to do with that.

@al-sabr
Author

al-sabr commented May 16, 2017

Alright, fair enough... I think what is missing for arangodb-starter is a video tutorial with a demo, instead of a text file and README.md. It is a product on its own and needs care and an introduction.

Thanks a lot, have a good day

@sloniki

sloniki commented May 18, 2017

I'm trying to build a cluster on 3 physical Docker hosts. I started arangodb-starter on the first host, 192.168.1.98, with parameters:
-p 8528:8528
--starter.address=192.168.1.98 --starter.join=192.168.1.98

Then I start another starter instance on another host, 192.168.1.99, with parameters:
-p 8528:8528
--starter.address=192.168.1.99 --starter.join=192.168.1.98

I'm getting this on the second agency container:
17T21:35:06Z [1] ERROR {cluster} cannot create connection to server '' at endpoint 'tcp://192.168.1.98:8541'
I noticed that the first agent is listening on port 8531. How can I change the port numbers for the automatically created containers? Or could you advise how to build a cluster across several physical hosts? I need authentication and TLS as well.

@ewoutp
Contributor

ewoutp commented May 18, 2017

@sloniki Make sure to leave out the --starter.join option for the first starter.
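Applied to the setup above, that would look roughly like this (a sketch; addresses and flags are taken from the earlier comments in this thread, and the volume names are placeholders):

```shell
# On the first host (192.168.1.98): no --starter.join.
docker volume create arangodb && \
    docker run -it --rm -p 8528:8528 -v arangodb:/data \
    -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
    --starter.address=192.168.1.98

# On each additional host (e.g. 192.168.1.99): join the first host.
docker volume create arangodb && \
    docker run -it --rm -p 8528:8528 -v arangodb:/data \
    -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
    --starter.address=192.168.1.99 --starter.join=192.168.1.98
```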

About the port numbers: they are automatically derived from a base port (--starter.port).
The coordinator gets an offset of 1, the dbserver an offset of 2, and the agent an offset of 3.

You cannot change these offsets, but you can change the base port.
E.g. --starter.port=1234 will result in ports 1234 (for the starter), 1235 (for the coordinator), and so on.
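The derivation can be checked with a bit of shell arithmetic (a sketch, assuming the default base port 8528 — this matches the agent listening on 8531 observed above):

```shell
# Ports derived from the base port, per the offsets described above.
BASE=8528
STARTER=$BASE                # base + 0
COORDINATOR=$((BASE + 1))    # base + 1
DBSERVER=$((BASE + 2))       # base + 2
AGENT=$((BASE + 3))          # base + 3
echo "starter=$STARTER coordinator=$COORDINATOR dbserver=$DBSERVER agent=$AGENT"
```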

@ewoutp
Contributor

ewoutp commented May 18, 2017

@sloniki TLS can be activated by the --ssl.keyfile option, for authentication use the --auth.jwtsecret option.
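Combining the two with the docker invocation style used above might look like this (a sketch; the key and secret file paths under the mounted /data volume are hypothetical placeholders):

```shell
# Sketch: starter with TLS and JWT authentication enabled.
# /data/server.keyfile and /data/jwtsecret are placeholder paths.
docker run -it --rm -p 8528:8528 -v arangodb:/data \
    -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter \
    --starter.address=192.168.1.98 \
    --ssl.keyfile=/data/server.keyfile \
    --auth.jwtsecret=/data/jwtsecret
```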
