Issue in Media Streaming benchmark #87

Closed · ebarlaskar opened this issue May 26, 2017 · 5 comments

@ebarlaskar commented May 26, 2017

Hello all,

I am trying to run the media streaming benchmark with the server on one machine and the client on another. I have exposed and published the port while running the server, which should allow incoming requests from outside. I am wondering whether I am doing anything wrong in the port binding or in setting up the container network (I am using a user-defined network, i.e. streaming_network).

I downloaded all sources from the CloudSuite GitHub repo and followed the steps below to run a test:

To build the dataset:
hostmachine1:

~/cloudsuite-master/benchmarks/media-streaming/dataset$ sudo docker build -t dataset .
~/cloudsuite-master/benchmarks/media-streaming/dataset$ sudo docker create --name streaming_dataset dataset

To create a network between client and server:
hostmachine1:

~/cloudsuite-master/benchmarks/media-streaming/dataset$ sudo docker network create streaming_network

To run the server:
hostmachine1:

~/cloudsuite-master/benchmarks/media-streaming/server$ sudo docker build -t server .
~/cloudsuite-master/benchmarks/media-streaming/server$ sudo docker run -p 80:80 -d --name streaming_server --volumes-from streaming_dataset --net streaming_network server

Now on another machine I built the client by following the steps below:

To create a network between client and server:
hostmachine2:

~/cloudsuite-master/benchmarks/media-streaming/client$ sudo docker network create streaming_network

To run the client:
hostmachine2:

~/cloudsuite-master/benchmarks/media-streaming/client$ sudo docker build -t client .
~/cloudsuite-master/benchmarks/media-streaming/client$ sudo docker run -t --name=streaming_client -v /path/to/output:/output --volumes-from streaming_dataset --net streaming_network client streaming_server

This gives me an error message:

docker: Error response from daemon: No such container: streaming_dataset.

This error is obvious, since the streaming_dataset container does not exist on host machine 2. I tried replacing streaming_dataset with the container ID and streaming_server with the host IP:

~/cloudsuite-master/benchmarks/media-streaming/client$ sudo docker run -t --name=streaming_client -v /path/to/output:/output --volumes-from 7be24343fb69 --net streaming_network client 143.117.95.33

docker: Error response from daemon: No such container: 7be24343fb69.
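One way past the error above: `--volumes-from` resolves container names only against the local Docker daemon, so the dataset container would have to be built and created on hostmachine2 as well. A sketch, repeating the same dataset steps used on hostmachine1 (paths are the ones from this report):

```shell
# On hostmachine2: build the dataset image and create the named container
# locally, so that --volumes-from streaming_dataset can resolve it.
cd ~/cloudsuite-master/benchmarks/media-streaming/dataset
sudo docker build -t dataset .
sudo docker create --name streaming_dataset dataset
```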

Then I removed the --volumes-from dataset option and also changed the hostlist.server script (in files/run), replacing streaming_server with host machine 1's IP address. Then I tried to run the client again to see whether I could connect to host machine 1, but once again it failed:

~/cloudsuite-master/benchmarks/media-streaming/client$ sudo docker run -t --name=streaming_client -v /path/to/output:/output --net streaming_network client 143.117.95.33

Total clients = 4
Minimum number of sessions = 25
Maximum number of sessions = 500
Launching 4 clients on localhost
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result1.log --num-sessions=25 --rate=2 2>>/output/bt1.trace
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result2.log --num-sessions=25 --rate=2 2>>/output/bt2.trace
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result3.log --num-sessions=25 --rate=2 2>>/output/bt3.trace
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result4.log --num-sessions=25 --rate=2 2>>/output/bt4.trace
sizeof(fd_set) = 128
sizeof(fd_set) = 128
sizeof(fd_set) = 128
sizeof(fd_set) = 128
peak_hunter/launch_hunt_bin.sh: line 55: 0*100/0: division by 0 (error token is "0")
Benchmark succeeded for 25 sessions
Launching 4 clients on localhost
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result1.log --num-sessions=500 --rate=50 2>>/output/bt1.trace
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result2.log --num-sessions=500 --rate=50 2>>/output/bt2.trace
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result3.log --num-sessions=500 --rate=50 2>>/output/bt3.trace
Running command httperf --hog --server 143.117.95.34 --videosesslog=[/videos/logs/cl*],[0.1,0.3,0.4,0.2],[localhost,localhost,localhost,localhost] --epoll --recv-buffer=524288 --port 80 --output-log=/output/result4.log --num-sessions=500 --rate=50 2>>/output/bt4.trace
sizeof(fd_set) = 128
sizeof(fd_set) = 128
sizeof(fd_set) = 128
sizeof(fd_set) = 128
peak_hunter/launch_hunt_bin.sh: line 55: 0*100/0: division by 0 (error token is "0")
Benchmark succeeded for 500 sessions
Maximum limit for number of sessions too low.
Requests: 0
Replies: 0
Runtime error (func=(main), adr=9): Divide by zero
Reply rate:
Runtime error (func=(main), adr=9): Divide by zero
Reply time:
Net I/O: 0
sed: -e expression #1, char 19: unterminated `s' command
sed: -e expression #1, char 19: unterminated `s' command
sed: -e expression #1, char 19: unterminated `s' command
sed: -e expression #1, char 19: unterminated `s' command
........
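The "division by 0" and "unterminated `s' command" lines in the log above both look like symptoms of the same thing: the run produced zero requests and replies, so the peak-hunting script divides by a zero total and then splices empty values into sed expressions. A minimal sketch of the kind of guard that avoids both failure modes (the variable names here are assumptions, not the script's actual ones):

```shell
#!/bin/bash
# Assumed shape of the failing computations: a percentage over a zero total,
# and a sed substitution fed an empty value. Guarding both avoids the errors.
replies=0
requests=0

# Guard the percentage: bash arithmetic aborts on division by zero.
if [ "$requests" -gt 0 ]; then
  pct=$((replies * 100 / requests))
else
  pct=0
fi
echo "reply percentage: ${pct}"

# Guard the sed substitution: a value built from an empty httperf result can
# leave the s/// expression malformed, producing "unterminated `s' command".
reply_rate=""
if [ -n "$reply_rate" ]; then
  echo "rate=RATE" | sed "s/RATE/${reply_rate}/"
else
  echo "rate=0"
fi
```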

I was wondering whether I need to change anything in the Dockerfile or in any of the provided scripts. Can someone please advise me on how to run the streaming server and client containers on two different machines, since the benchmark seems designed to run both the server and the client on the same host?
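Before re-running the client, it may also help to confirm that the server's port is actually reachable from hostmachine2. A small sketch using bash's /dev/tcp redirection (the function name and the host/port used are illustrative, not part of the benchmark):

```shell
#!/bin/bash
# Probe a TCP port to verify the streaming server is reachable from the
# client machine before launching the benchmark client.
check_server() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

check_server 127.0.0.1 1   # example: probing a closed port
```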

Many thanks,
Esha Barlaskar

@aledaglis (Contributor)

Hi Esha,

You can use Swarm to build an overlay network over which Docker containers running on different machines can communicate. Please take a look at these instructions and let me know if you face any problems: http://cloudsuite.ch//2015/12/20/swarm/

Cheers,
Alex

@ebarlaskar (Author)

Hi Alex,

Thanks for replying.

I am doing bandwidth experiments on client-server interaction as part of my cloud research, so I need the client to be outside the cloud setup. My media streaming server runs in a VM on OpenStack, and the client is installed on another machine outside the OpenStack servers. I want to create a client-server scenario that represents a real-world media streaming service, where multiple clients on different machines stream from the media streaming server. Do you think this is doable without Swarm? I would prefer that the client and server not run in the same overlay network.

Many thanks.
Esha.

@aledaglis (Contributor)

Swarm creates an overlay network to form a "virtual" cluster and thereby make it easier to manage all the participating nodes. While I haven't tried your specific deployment scenario, I don't think its use is limited to a single physical cluster. Swarm is not functionally required, but it becomes useful once the number of nodes grows.

Using the IPs directly should work, but you should make sure the client and server machines can reach each other and that the ports used are open (in this case, port 80). As a first step, can you try repeating the process you posted above with a client and a server machine within the same OpenStack cluster?

@ebarlaskar (Author) commented Jun 8, 2017

Hi Alex,

As per your advice, I have created a swarm and an overlay network. The swarm nodes work fine, and I can create Docker services using official images from Docker Hub (redis, nginx, etc.) without any issues. But if I try to use the images from CloudSuite or from my private repo, the service fails to start.

Swarm issues with images from a private registry:

In my case I build the image on the nodes in the cluster, and I have also tried using images from my Docker Hub registry. I can pull the images from my registry on each of the swarm nodes, but they do not work with the docker service create command. It seems I can only use official Docker Hub images, not images I build myself and push to/pull from my own Docker Hub registry. I have tried to debug this and attempted several possible solutions, but nothing worked.

These are my start commands:

Note: the commands for running a service given in http://cloudsuite.ch//2015/12/20/swarm/ do not work in my swarm mode, as they are for an older version of Swarm. Hence I used the docker service create commands from https://docs.docker.com/engine/reference/commandline/service_create/

ubuntu@swarmmasternew:/$ sudo docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username : mydockerID
Password:
Login Succeeded

ubuntu@swarmmasternew:/$ sudo docker service create --name streaming_dataset --network swarm-network --constraint "node.id==ffx5v6f1ddnxtb1g8434q27dn" --with-registry-auth mydockerID/repository:dataset
l3emhilu51c00ivijkk4hag3z
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.

But with service ls I see that the service fails to start:

ubuntu@swarmmasternew:/$ sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
81ddszzjwjl9 streaming_dataset replicated 0/1 mydockerID/repository-dataset

ubuntu@swarmmasternew:/$ sudo docker service ps streaming_dataset
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ohxmvg2vel6e streaming_dataset.1 mydockerID/repository:dataset swarm1 Running Pending 29 seconds ago

And this keeps showing Pending even after 3-4 hours.
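A task stuck in Pending usually carries a reason in the full (untruncated) error column of `docker service ps`, and it often comes down to the constrained node being unable to pull the image with its local credentials. A possible way to dig in (an assumption about the failure cause, using standard Docker CLI flags):

```shell
# Show the full, untruncated error/status columns for the stuck task.
sudo docker service ps --no-trunc streaming_dataset

# On the node named in the --constraint, check that the image can actually be
# pulled there (service tasks pull images on the node, not on the manager).
sudo docker pull mydockerID/repository:dataset
```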

CloudSuite-related issue in swarm mode:

Then I tried to run dc-server from CloudSuite, this time without pulling the image from my registry. These are the commands I used:

ubuntu@swarmmasternew:/$ sudo docker service create --name dc-server --network swarm-network --constraint "node.id==ffx5v6f1ddnxtb1g8434q27dn" -d cloudsuite/data-caching:server -t 4 -m 4096 -n 550
satyv06nlrxh9bwzv1kwo7opk

ubuntu@swarmmasternew:/$ sudo docker service ps dc-server
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
i2iq7yd0feq2 dc-server.1 cloudsuite/data-caching:server swarm1 Running Preparing 2 seconds ago

After an hour or so, when I executed the service ps command again, I found that it was running.

ubuntu@swarmmasternew:/$ sudo docker service ps dc-server
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
i2iq7yd0feq2 dc-server.1 cloudsuite/data-caching:server swarm1 Running Running about an hour ago

However, the problem now shows up in dc-client, which can't connect to dc-server; and if I run dc-client directly on a swarm node, it fails to attach to the swarm overlay network, since services have to be deployed via the swarm manager.

ubuntu@swarm2:~$ sudo docker run -it --net swarm-network --name dc-client cloudsuite/data-caching:client bash
docker: Error response from daemon: Could not attach to network swarm-network: rpc error: code = 7 desc = network swarm-network not manually attachable.
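The "not manually attachable" error above typically means the overlay network was created without the `--attachable` flag; only attachable overlays allow standalone `docker run` containers (as opposed to swarm services) to join. A possible fix, run on the manager (assumes Docker 1.13+ and that no services still depend on the network):

```shell
# Recreate the overlay network as attachable so plain `docker run --net ...`
# containers can join it alongside swarm services.
sudo docker network rm swarm-network
sudo docker network create --driver overlay --attachable swarm-network
```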

Then I tried running it without the --net flag; the container was created, but it couldn't connect to the server.

Below are the steps which I followed:

sudo docker run -it --name dc-client cloudsuite/data-caching:client bash

+ '[' bash = -rps ']'
+ exec bash
memcache@2036dbdf7c8a:/$ cd /usr/src/memcached/memcached_client/
memcache@2036dbdf7c8a:/usr/src/memcached/memcached_client$ vi docker_servers.txt

In the docker_servers.txt file I entered the IP and port of the swarm node on which dc-server is running. Then I ran the following, which failed with a "Connection error" message:

memcache@2036dbdf7c8a:/usr/src/memcached/memcached_client$ ./loader -a ../twitter_dataset/twitter_dataset_unscaled -o ../twitter_dataset/twitter_dataset_30x -s docker_servers.txt -w 4 -S 30 -D 4096 -j -T 1
stats_time = 1
Configuration:

nProcessors on system: 4
nWorkers: 4
runtime: -1
Get fraction: 0.900000
Naggle's algorithm: False

host: 10.0.0.103
address: 10.0.0.103
Loading key value file...Average Size = 1057.34758
Keys to Preload = 3557357
created uniform distribution 1000
rps -1 cpus 4
Overridge n_connections_total because < n_workers
num_worker_connections 1
Connection error

Next, I tried to create the dc-client from the manager node; it starts but shuts down quickly, as expected, since the manager obviously cannot get into a bash session on another node. Below are the commands I tried:

ubuntu@swarmmasternew:/$ sudo docker service create --name dc-client --network swarm-network --constraint "node.id==rcx96urvzsyrvfy9cwttn152n" cloudsuite/data-caching:client bash
dllovu3wu05lm9tc86mi2nu76
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
ubuntu@swarmmasternew:/$ sudo docker service ps dc-client
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
6vc7h24yko26 dc-client.1 cloudsuite/data-caching:client swarm2 Ready Ready 2 seconds ago
h1nc1jzbm0mu _ dc-client.1 cloudsuite/data-caching:client swarm2 Shutdown Complete 2 seconds ago
42l1ekylfiyg _ dc-client.1 cloudsuite/data-caching:client swarm2 Shutdown Complete 8 seconds ago
ubuntu@swarmmasternew:/$ sudo docker service ps dc-client
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
qh25ulxcsjjt dc-client.1 cloudsuite/data-caching:client swarm2 Running Starting 1 second ago
h6833dl7x93h _ dc-client.1 cloudsuite/data-caching:client swarm2 Shutdown Complete 6 seconds ago
km3v3pupcnp2 _ dc-client.1 cloudsuite/data-caching:client swarm2 Shutdown Complete 12 seconds ago
n19r9a1jn0ov _ dc-client.1 cloudsuite/data-caching:client swarm2 Shutdown Complete 18 seconds ago
shozekxvhaf2 _ dc-client.1 cloudsuite/data-caching:client swarm2 Shutdown Complete 25 seconds ago

I would really appreciate it if you could help me with the above issues.

Many thanks,
Esha

@aledaglis (Contributor)

Please reopen if this problem still remains.
