
Commit

fetch tools from remote repo
openwms committed Apr 11, 2020
1 parent 36b89cf commit c2c177e
Showing 4 changed files with 37 additions and 11 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -26,4 +26,4 @@ script:
- if [[ ( "$TRAVIS_BRANCH" == "master" ) ]]; then mvn clean package -Denforcer.skip=true -Dci.buildNumber=$TRAVIS_BRANCH-$TRAVIS_BUILD_NUMBER -Prelease,gpg,travis -B $MAVEN_OPTS; fi

after_success:
- ./doc/docToolchain-master/bin/doctoolchain doc publishToConfluence
- ./docToolchain-master/bin/doctoolchain doc publishToConfluence
Binary file modified doc/images/07-op-multiple-server.png
43 changes: 34 additions & 9 deletions doc/src/07_deployment_view.adoc
@@ -10,16 +10,17 @@ Most often a warehouse management system and material flow control are installed
server or on multiple ones. This on-premise deployment has the advantage of increased availability and lower latency.

==== Simple Single Box Deployment
The simplest way to deploy OpenWMS.org is the Single Box deployment. All components are installed on one single physical or virtual server. All
processes run and communicate in-memory and do not require external network access. In its simplest form, all microservices can be
installed as single instances, as Unix daemons or MS Windows services. Scalability and elasticity are not a concern in a typical warehouse
project, therefore there is no need to scale out processes on demand.

[#img-07-single-server]
.Simplest deployment on single server
image::07-op-single-server.png["Simplest deployment on single server"]
image::07-op-single-server.png["Simplest deployment on a single server"]

The services could also be installed as Docker containers to increase operability and reliability but this is not required.
The services could also be installed as Docker containers, managed with https://docs.docker.com/compose/[Docker Compose], to increase operability
and reliability, but this is not a requirement.
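
As a minimal sketch of what such a Compose-based single-box setup could look like (the service and image names below are placeholders, not the official OpenWMS.org images):

[source,yaml]
----
# Hypothetical docker-compose.yml for a single-box setup; names are illustrative only
version: "3.7"
services:
  postgres:
    image: postgres:11            # single database instance on the same host
    environment:
      POSTGRES_PASSWORD: change-me
  rabbitmq:
    image: rabbitmq:3-management  # message broker, optional in the simplest setup
  tms-service:                    # placeholder for one OpenWMS.org microservice
    image: example/openwms-tms:latest
    depends_on:
      - postgres
      - rabbitmq
----

Started with `docker-compose up -d`, all containers run on the one host and communicate over Docker's default network.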

Motivation::

@@ -33,8 +34,8 @@ Quality and/or Performance Features::
- High level of operability
- With multiple warehouse projects per site, maintainability decreases

*Mapping of Building Blocks to Infrastructure*

Mapping of Building Blocks to Infrastructure::
[cols="1,2" options="header"]
|===
| **Component** | **Responsibility**
@@ -49,22 +50,46 @@ optional)
|===

==== Multiple Box Deployment
Similar to the one-box deployment, OpenWMS.org can also be split and deployed on multiple machines. For this scenario we propose to run the
microservices within Docker containers and let the container scheduling infrastructure distribute the instances as needed. We also propose
https://docs.docker.com/swarm/overview/[Docker Swarm] as the container scheduling runtime, but if customers have other schedulers in place,
like Kubernetes or OpenShift, it works the same way. The main point is that OpenWMS.org does not need to run on Kubernetes or any PaaS
solution; the basic requirements of this scenario are met with Docker Swarm.
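
For illustration, a hypothetical excerpt of a stack file that lets Swarm distribute and restart instances; the service and image names are assumptions, not the project's actual artifacts:

[source,yaml]
----
# Hypothetical excerpt of stack.yml, deployed with: docker stack deploy -c stack.yml openwms
version: "3.7"
services:
  tms-service:                    # placeholder for one OpenWMS.org microservice
    image: example/openwms-tms:latest
    deploy:
      replicas: 2                 # Swarm places the two instances on the available nodes
      restart_policy:
        condition: on-failure     # crashed containers are rescheduled automatically
----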

[#img-07-multiple-server]
.Deployment distributed on multiple servers
image::07-op-multiple-server.png["Deployment distributed on multiple servers"]

Motivation::
The benefit of a container scheduler in a _distributed_ environment is tremendous. We do not need to take care of low-level infrastructure
details ourselves and can rely on proven scheduling technologies.

_<explanation in text form>_
Motivation::
- A container scheduler is already in place
- A more robust and reliable setup is required, with process restarts and monitoring
- Load can be distributed across multiple servers
- Processes can run in multiple instances
- One part of the system (e.g. TMS) is independent of the availability of other parts

Quality and/or Performance Features::
- Higher latency compared to the Single Box Deployment
- High level of reliability and failure tolerance
- High level of operability
- Advantages when a project comprises multiple warehouses
- A distributed system is more complex, which decreases maintainability

*Mapping of Building Blocks to Infrastructure*

_<explanation in text form>_
The mapping below applies in addition to the Single Box Deployment.

Mapping of Building Blocks to Infrastructure::
_<description of the mapping>_
[cols="1,2" options="header"]
|===
| **Component** | **Responsibility**
| Server 1 | One managed Docker Swarm node (could also be a Kubernetes node)
| Server 2 | A second managed Docker Swarm node (could also be a Kubernetes node)
|===

Both Docker Swarm nodes use several ports for cluster and container management. The database and the RabbitMQ broker are made transparently
available to both nodes and to the microservices on each node.
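
In a standard Swarm setup the cluster ports are TCP 2377 for cluster management, TCP/UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic. A hedged sketch of how the shared database and broker could be expressed in a stack file: one overlay network spans both nodes, and every service reaches PostgreSQL and RabbitMQ by service name regardless of which node it runs on (all names and images below are placeholders):

[source,yaml]
----
# Hypothetical stack excerpt; the overlay network is shared by services on both Swarm nodes
version: "3.7"
networks:
  openwms-net:
    driver: overlay               # spans all nodes of the Swarm cluster
services:
  postgres:
    image: postgres:11
    networks: [openwms-net]
  rabbitmq:
    image: rabbitmq:3-management
    networks: [openwms-net]
  tms-service:                    # placeholder microservice
    image: example/openwms-tms:latest
    networks: [openwms-net]       # resolves postgres/rabbitmq by name from either node
----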

=== Microsoft Azure Deployment

3 changes: 2 additions & 1 deletion scripts/toolsetup
@@ -1,2 +1,3 @@
#!/bin/bash
unzip ./doc/docToolchain.zip -d ./doc
wget https://github.com/openwms/docToolchain/archive/master.zip -P doc/
unzip ./doc/master.zip
