
Welcome to the Multitor wiki!

If something is missing from this documentation, you can file an issue and ask that it be added.



Creating processes

Example of starting the tool:

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900

Creates new TOR processes; the value specifies how many processes to create:

  • --init 2

Specifies the system user under which the new processes will run (the user must exist on the system):

  • -u debian-tor

Specifies the SOCKS port number used for TOR communication; it is increased by 1 for each subsequent process:

  • --socks-port 9000

Specifies the control port number of the TOR process; it is also increased by 1 for each subsequent process:

  • --control-port 9900

    If there are connection problems after starting the tool (e.g. from the web browser), it may be necessary to wait a few moments for the TOR connection to be fully established.
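
For example, with --init 2 --socks-port 9000 --control-port 9900 the two created processes end up with the following port layout:

TOR process 1: SocksPort 9000, ControlPort 9900
TOR process 2: SocksPort 9001, ControlPort 9901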

Reviewing processes

Examples of obtaining information about a given TOR process created by multitor:

multitor --show-id --socks-port 9000

Requests information about a given TOR process:

  • --show-id

You can use the all value to display all processes.

Specifies the SOCKS port number; the process is looked up by this port number:

  • --socks-port 9000
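
To display information about every process created by multitor, the all value mentioned above can be passed in place of the port number (a sketch; check multitor --help for the exact syntax of your version):

multitor --show-id --socks-port all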

New TOR identity

There is a "Use new identity" button in TOR Browser or Vidalia. It sends a signal to the TOR control port to switch to a new identity. An alternative solution is to restart multitor or to wait for the time defined by the NewCircuitPeriod option, which multitor sets to 15 seconds (see TOR options below).

If there is a need to create a new identity:

multitor --new-id --socks-port 9000

Requests a new identity for the given TOR process:

  • --new-id

You can use the all value to regenerate the identity of all processes. Alternatively, restarting multitor also results in new identities.

Specifies the SOCKS port number; the process is looked up by this port number:

  • --socks-port 9000
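
The same signal can also be sent manually over the standard TOR control protocol. A minimal sketch, assuming the first process listens on control port 9900; the password is a placeholder for the control password printed in the multitor report:

printf 'AUTHENTICATE "<control password>"\r\nSIGNAL NEWNYM\r\nQUIT\r\n' | nc 127.0.0.1 9900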

TOR options

By default, TOR instances are initialized with the following options:

sudo -u "$_arg_uname" tor -f "${_torrc_config}" \
     --RunAsDaemon 1 \
     --CookieAuthentication 0 \
     --SocksPort "$_arg_socks" \
     --ControlPort "$_arg_control" \
     --PidFile "${_proc_dir}/${_arg_socks}.pid" \
     --DataDirectory "${_proc_dir}" \
     --SocksBindAddress 127.0.0.1 \
     --NewCircuitPeriod 15 \
     --MaxCircuitDirtiness 15 \
     --NumEntryGuards 8 \
     --CircuitBuildTimeout 5 \
     --ExitRelay 0 \
     --RefuseUnknownExits 0 \
     --ClientOnly 1 \
     --StrictNodes 1 \
     --AllowSingleHopCircuits 1 \
     >>"$_log_stdout" 2>&1 ; _kstate="$?"

Proxy

See Load balancing.

Output example

For example, if we create 5 TOR processes with multitor, the output will look like this:

multitor --init 5 -u debian-tor --socks-port 9000 --control-port 9900 --proxy privoxy

     Set processes: 5
           Created: 5
       Not created: 0
  Control password: TI24tO2k0E8f8jqoIr

       Proxy state: running (privoxy » haproxy » socks)

Load balancing

Multitor uses two types of proxy to create a load-balancing mechanism: a socks proxy and an http-proxy. Both types work well, but their purposes are slightly different.

For browsing websites (generally for http/https traffic) it is recommended to use an http proxy. In this configuration, the polipo, privoxy or hpts services are used; they provide many useful features, although not all of them are relevant when working with TOR. In addition, ssl traffic is handled more reliably.

The socks proxy type is also reliable; however, it can cause more problems when browsing websites through TOR nodes.

Types of connection

Multitor provides two types of connection:

  • http-to-haproxy-to-socks
    • the frontend process is an http-proxy
    • the broker process is HAProxy
    • the backend process is a socks proxy
  • haproxy-to-http-to-socks
    • the frontend process is HAProxy
    • the broker process is an http-proxy
    • the backend process is a socks proxy

By default, Multitor uses the http-proxy to create a local proxy server for all created TOR instances. The next service is HAProxy, which distributes traffic (round-robin) between the TOR processes.

If you want to change this order (put HAProxy in front), add the --haproxy parameter.

The default configuration file for HAProxy is in templates/haproxy-template.cfg.
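
For orientation, a round-robin backend in HAProxy terms looks roughly like the sketch below. This is illustrative only; the names and ports are assumptions, and the authoritative version is templates/haproxy-template.cfg:

frontend multitor_in
    bind 127.0.0.1:16379 name proxy
    mode tcp
    default_backend tor_pool

backend tor_pool
    mode tcp
    balance roundrobin
    server tor_1 127.0.0.1:9000 check
    server tor_2 127.0.0.1:9001 check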

SOCKS

Communication architecture:

Client_1
   |
   |--------> TOR Instance (127.0.0.1:9000)

Client_2
   |
   |--------> TOR Instance (127.0.0.1:9001)

To run the TOR socks processes:

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: GjdwJmJvLlSkI136yC

       Proxy state: disable (only tor)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      642/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      717/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      642/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      717/tor

This mode initializes many TOR processes as quickly as possible and is intended mainly for user-land programs, e.g. web browsers.
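
For example, to send a single request through the first instance directly (using --socks5-hostname so that DNS resolution also goes through TOR):

curl --socks5-hostname 127.0.0.1:9000 http://ipinfo.io/ip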

SOCKS Proxy

Communication architecture:

Client
   |
   |--------> HAProxy (127.0.0.1:16379)
                 |
                 |--------> TOR Instance (127.0.0.1:9000)
                 |
                 |--------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy socks parameter to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy socks

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: Gr0SQvfMGHkur4DFQ9

       Proxy state: running (haproxy » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      2773/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      2773/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      2589/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      2666/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      2589/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      2666/tor

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --socks5 127.0.0.1:16379 http://ipinfo.io/ip ; \
done

req  1: 5.254.79.66
req  2: 178.175.135.99
req  3: 5.254.79.66
req  4: 178.175.135.99

Communication through the socks proxy takes place without a cache (except for browsers, which have their own cache). Curl and other low-level programs should work without any problems.

Polipo

Frontend: Polipo, Broker: HAProxy

Communication architecture:

Client
   |
   |--------> Polipo (127.0.0.1:15379)
                 |
                 |--------> HAProxy Instance (127.0.0.1:16379)
                               |
                               |---------> TOR Instance (127.0.0.1:9000)
                               |
                               |---------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy polipo parameter to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy polipo

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: 0euFHlsWsfvYYKeeH5

       Proxy state: running (polipo » haproxy » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:15379         0.0.0.0:*               LISTEN      5867/polipo
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      5869/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      5869/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      5681/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      5782/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      5681/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      5782/tor

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --proxy 127.0.0.1:15379 http://ipinfo.io/ip ; \
done

req  1: 178.209.42.84
req  2: 185.100.85.61
req  3: 178.209.42.84
req  4: 185.100.85.61

Frontend: HAProxy, Broker: Polipo

Communication architecture:

Client
   |
   |--------> HAProxy (127.0.0.1:16379)
                 |
                 |--------> Polipo Instance (127.0.0.1:15379)
                 |             |
                 |             |---------> TOR Instance (127.0.0.1:9000)
                 |
                 |--------> Polipo Instance (127.0.0.1:15380)
                               |
                               |---------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy polipo --haproxy parameters to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy polipo --haproxy

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: BRtMEimbFRdZi4BvAp

       Proxy state: running (haproxy » polipo » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:15379         0.0.0.0:*               LISTEN      11169/polipo
tcp        0      0 127.0.0.1:15380         0.0.0.0:*               LISTEN      11182/polipo
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      11184/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      11184/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      11018/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      11077/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      11018/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      11077/tor

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --proxy 127.0.0.1:16379 http://ipinfo.io/ip ; \
done

req  1: 178.209.42.84
req  2: 185.100.85.61
req  3: 178.209.42.84
req  4: 185.100.85.61

Summary

In the default configuration, the Polipo cache is turned off (see the configuration template). If you configure the browser so that traffic passes through HAProxy, remember that browsers have their own cache, which can cause every request to a page to appear to come from the same IP address. This is not a big problem, and it does not always happen. After clearing the browser cache, the web server will receive the request from a different IP address.

You can check this, for example, in the Firefox browser by installing the "Empty Cache Button by mvm" add-on and visiting http://myexternalip.com/.

Privoxy

Frontend: Privoxy, Broker: HAProxy

Communication architecture:

Client
   |
   |--------> Privoxy (127.0.0.1:15379)
                 |
                 |--------> HAProxy Instance (127.0.0.1:16379)
                               |
                               |---------> TOR Instance (127.0.0.1:9000)
                               |
                               |---------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy privoxy parameter to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy privoxy

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: euyquAXx4nppbp7tbN

       Proxy state: running (privoxy » haproxy » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:15379         0.0.0.0:*               LISTEN      28002/privoxy
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      28017/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      28017/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      27747/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      27833/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      27747/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      27833/tor

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --proxy 127.0.0.1:15379 http://ipinfo.io/ip ; \
done

req  1: 178.209.42.84
req  2: 185.100.85.61
req  3: 178.209.42.84
req  4: 185.100.85.61

Frontend: HAProxy, Broker: Privoxy

Communication architecture:

Client
   |
   |--------> HAProxy (127.0.0.1:16379)
                 |
                 |--------> Privoxy Instance (127.0.0.1:15379)
                 |             |
                 |             |---------> TOR Instance (127.0.0.1:9000)
                 |
                 |--------> Privoxy Instance (127.0.0.1:15380)
                               |
                               |---------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy privoxy --haproxy parameters to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy privoxy --haproxy

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: 15OSRWLFzNcuDuNb4D

       Proxy state: running (haproxy » privoxy » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:15379         0.0.0.0:*               LISTEN      32142/privoxy
tcp        0      0 127.0.0.1:15380         0.0.0.0:*               LISTEN      32159/privoxy
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      32249/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      32249/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      31992/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      32051/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      31992/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      32051/tor

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --proxy 127.0.0.1:16379 http://ipinfo.io/ip ; \
done

req  1: 178.209.42.84
req  2: 185.100.85.61
req  3: 178.209.42.84
req  4: 185.100.85.61

HPTS (http-proxy-to-socks)

Frontend: HPTS, Broker: HAProxy

Communication architecture:

Client
   |
   |--------> HPTS (127.0.0.1:15379)
                 |
                 |--------> HAProxy Instance (127.0.0.1:16379)
                               |
                               |---------> TOR Instance (127.0.0.1:9000)
                               |
                               |---------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy hpts parameter to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy hpts

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: SnsaS80HcbU1qsJ4XO

       Proxy state: running (hpts » haproxy » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      10343/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      10343/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      10127/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      10231/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      10127/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      10231/tor
tcp6       0      0 :::15379                :::*                    LISTEN      10341/node

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --proxy 127.0.0.1:15379 http://ipinfo.io/ip ; \
done

req  1: 178.209.42.84
req  2: 185.100.85.61
req  3: 178.209.42.84
req  4: 185.100.85.61

Frontend: HAProxy, Broker: HPTS

Communication architecture:

Client
   |
   |--------> HAProxy (127.0.0.1:16379)
                 |
                 |--------> HPTS Instance (127.0.0.1:15379)
                 |             |
                 |             |---------> TOR Instance (127.0.0.1:9000)
                 |
                 |--------> HPTS Instance (127.0.0.1:15380)
                               |
                               |---------> TOR Instance (127.0.0.1:9001)

To run the load balancer you need to add the --proxy hpts --haproxy parameters to the command specified in the example.

multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy hpts --haproxy

     Set processes: 2
           Created: 2
       Not created: 0
  Control password: OuPV0RLLuiJ1E2Lk6h

       Proxy state: running (haproxy » hpts » socks)

After launching, let's see the working processes:

netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo\|privoxy\|node"
tcp        0      0 127.0.0.1:16379         0.0.0.0:*               LISTEN      22286/haproxy
tcp        0      0 127.0.0.1:16380         0.0.0.0:*               LISTEN      22286/haproxy
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      22110/tor
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      22173/tor
tcp        0      0 127.0.0.1:9900          0.0.0.0:*               LISTEN      22110/tor
tcp        0      0 127.0.0.1:9901          0.0.0.0:*               LISTEN      22173/tor
tcp6       0      0 :::15379                :::*                    LISTEN      22264/node
tcp6       0      0 :::15380                :::*                    LISTEN      22280/node

In order to test the correctness of the setup, you can run the following command:

for i in $(seq 1 4) ; do \
  printf "req %2d: " "$i" ; \
  curl -k --location --proxy 127.0.0.1:16379 http://ipinfo.io/ip ; \
done

req  1: 178.209.42.84
req  2: 185.100.85.61
req  3: 178.209.42.84
req  4: 185.100.85.61

Port convention

The port numbers for the TOR processes are set by the user with the --socks-port (and --control-port) parameters. In addition, the standard port on which HAProxy listens is 16379 (and 16380 for the stats interface). Polipo, Privoxy and hpts use a port 1000 lower than the one set for HAProxy.
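
A short shell sketch of this convention (the variable names are illustrative and not part of multitor):

# Reproduce the port layout for 3 TOR processes started with --socks-port 9000 --control-port 9900.
SOCKS_BASE=9000 ; CONTROL_BASE=9900 ; COUNT=3
HAPROXY_PORT=16379 ; STATS_PORT=16380 ; FRONTEND_PORT=$((HAPROXY_PORT - 1000))
for i in $(seq 0 $((COUNT - 1))) ; do
  echo "tor $((i + 1)): socks $((SOCKS_BASE + i)), control $((CONTROL_BASE + i))"
done
echo "haproxy: ${HAPROXY_PORT}, stats: ${STATS_PORT}, http-proxy frontend: ${FRONTEND_PORT}"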

HAProxy stats interface

If you want to view traffic statistics, go to http://127.0.0.1:16380/stats.

By default, the HAProxy statistics page does not require authentication. If you want to enable it, uncomment the following line in templates/haproxy-template.cfg:

# stats           auth ha_admin:__PASSWORD__

Login: ha_admin

Password: generated automatically (see etc/haproxy.cfg)
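
The same statistics can also be fetched from the command line, e.g. in CSV form (a sketch; add -u ha_admin:<password> once authentication is enabled):

curl -s "http://127.0.0.1:16380/stats;csv"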

Gateway

If you are building a gateway for TOR connections, you can put HAProxy on an external IP address by changing the bind directive in haproxy-template.cfg:

bind 0.0.0.0:16379 name proxy

This works only if the --haproxy parameter is set.
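
You can then test the gateway from another machine; a sketch assuming the haproxy » privoxy » socks chain from the earlier example and 192.0.2.10 as a placeholder gateway address:

curl --proxy 192.0.2.10:16379 http://ipinfo.io/ip

If the frontend is the plain socks chain (--proxy socks) instead, pass the gateway address with --socks5 rather than --proxy.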

Password authentication

Multitor uses a password for authentication on the control port. The password is generated automatically, contains 18 random characters, and is displayed in the final report after the new processes have been created.
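
A quick way to verify the password is to authenticate against one of the control ports; a sketch, assuming control port 9900 and with a placeholder for the password from your own report:

printf 'AUTHENTICATE "<control password>"\r\nGETINFO version\r\nQUIT\r\n' | nc 127.0.0.1 9900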