Optimizing 3proxy for high load

Precaution 1: 3proxy was not initially developed for high load and is positioned as a SOHO product; the main reason is the "one connection - one thread" model 3proxy uses. 3proxy is known to work with more than 200,000 connections under proper configuration, but use it in a production environment under high load at your own risk and do not expect too much.

Precaution 2: This documentation is incomplete and not sufficient on its own. High loads may require very specific system tuning including, but not limited to, specific or customized kernels, builds, settings, sysctls, options, etc. All of this is not covered by this documentation.

Configuring 'maxconn'

The number of simultaneous connections per service is limited by the 'maxconn' option. The default maxconn value since 3proxy 0.8 is 500. You may want to set 'maxconn' to a higher value. Under this configuration:
maxconn 1000
socks
proxy -p3129
proxy -p3128
maxconn for every service is 1000, and there are 3 services running (2 proxy and 1 socks), so for all services there can be up to 3000 simultaneous connections to 3proxy.

Avoid setting 'maxconn' to an arbitrarily high value; it should be carefully chosen to protect the system and the proxy from resource exhaustion. Setting maxconn above the available resources can lead to denial of service conditions.

Understanding resource requirements

Each running service requires:
  • 1*thread (process)
  • 1*socket (file descriptor)
  • 1 stack memory segment + some heap memory, ~64K-128K depending on the system
Each connected client requires:
  • 1*thread (process)
  • 2*socket (file descriptor). For FTP, 4 sockets are required.
    Under Linux, splice() is used since 0.9. It is much more efficient, but requires
    2*socket (file descriptor) + 2*pipe (file descriptors) = 4 file descriptors.
    For FTP, 4 sockets and 2 pipes are required with splice().
    1 additional socket (file descriptor) is used during name resolution for non-cached names.
    1 additional socket is used for RADIUS authentication or logging.
  • 1*ephemeral port (3*ephemeral ports for FTP connection).
  • 1 stack memory segment of ~32K-128K depending on the system + at least 16K and up to a few MB (for 'proxy' and 'ftppr') of heap memory. If you are short of memory, prefer 'socks' to 'proxy' and 'ftppr'.
  • a lot of system buffers, especially in the case of slow network connections.
Also, additional resources like system buffers are required for network activity.
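As a rough, illustrative calculation (assuming Linux with splice(), the 'socks' service, ~64K of stack and ~16K of heap per client): 50,000 simultaneous clients need about 50,000 threads, 50,000 * 4 = 200,000 file descriptors (plus descriptors for listening sockets, name resolution and logging), 50,000 ephemeral ports and roughly 50,000 * 80K ≈ 4 GB of stack and heap, so maxconn, ulimits and kernel limits must all be sized above such numbers.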

Setting ulimits

Hard and soft ulimits must be set above the calculated requirements. Validate that the ulimits match your expectations, especially if you run 3proxy under a dedicated account, e.g. by adding
system "ulimit -Ha >>/tmp/3proxy.ulim.hard"
system "ulimit -Sa >>/tmp/3proxy.ulim.soft"
at the beginning (before the first service is started) and at the end of the config file. Perform both a hard restart (that is, kill and start the 3proxy process) and a soft restart (by sending SIGUSR1 to the 3proxy process), and check that the ulimits recorded to the files match your expectations.
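As an illustration (the limits and paths below are examples, not recommendations), the limits can be raised in a wrapper script that starts 3proxy:
#!/bin/sh
# raise descriptor and process/thread limits, then start 3proxy
ulimit -n 262144        # open file descriptors
ulimit -u 262144        # processes/threads (where the shell supports -u)
exec /usr/local/3proxy/bin/3proxy /usr/local/3proxy/conf/3proxy.cfg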

Extending system limitations

Check the manuals / documentation for your system's limitations. You may need to change sysctls or even rebuild the kernel from source. To help with system-dependent settings, since 0.9-devel 3proxy supports different socket options, which can be set via the -ol option for the listening socket, -oc for the proxy-to-client socket and -os for the proxy-to-server socket. Example:
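For instance, a hypothetical service line combining such options could look like:
proxy -p3128 -olSO_REUSEADDR -osTCP_NODELAY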
Available options are system dependent.

Extending ephemeral port range

Check the ephemeral port range for your system and extend it to the number of ports required (a Linux sysctl sketch is given at the end of this section). The ephemeral range is always limited to the maximum number of ports (64K). To extend the number of outgoing connections above this limit, extending the ephemeral port range is not enough; you need additional actions:
  1. Configure multiple outgoing IPs
  2. Make sure 3proxy is configured to use different outgoing IPs, either by setting the external IP via RADIUS
    radius secret
    auth radius
    or by using multiple services with different external interfaces, example:
    allow user1,user11,user111
    proxy -p1001 -e1.1.1.1
    allow user2,user22,user222
    proxy -p1001 -e1.1.1.2
    allow user3,user33,user333
    proxy -p1001 -e1.1.1.3
    allow user4,user44,user444
    proxy -p1001 -e1.1.1.4
    or via "parent extip" rotation, e.g.
    allow user1,user11,user111
    parent 1000 extip 1.1.1.1 0
    allow user2,user22,user222
    parent 1000 extip 1.1.1.2 0
    allow user3,user33,user333
    parent 1000 extip 1.1.1.3 0
    allow user4,user44,user444
    parent 1000 extip 1.1.1.4 0
    allow *
    parent 250 extip 1.1.1.1 0
    parent 250 extip 1.1.1.2 0
    parent 250 extip 1.1.1.3 0
    parent 250 extip 1.1.1.4 0
    Under recent Linux versions you can also start multiple services with different
    external addresses on a single port, using SO_REUSEPORT on the listening socket to
    evenly distribute incoming connections between outgoing interfaces:
    socks -olSO_REUSEPORT -p3128 -e1.1.1.1
    socks -olSO_REUSEPORT -p3128 -e1.1.1.2
    socks -olSO_REUSEPORT -p3128 -e1.1.1.3
    socks -olSO_REUSEPORT -p3128 -e1.1.1.4
    For Web browsing the last two examples are not recommended, because the same client can get a different external address for different requests; you should choose the external interface with user-based rules instead.
  3. You may need additional system-dependent actions to use the same port on different IPs, usually by adding the SO_REUSEADDR (SO_PORT_SCALABILITY for Windows) socket option to the external socket. This option can be set (since 0.9-devel) with the -os option:
    proxy -p3128 -e1.2.3.4 -osSO_REUSEADDR
    The behavior of SO_REUSEADDR and SO_REUSEPORT differs between systems, and even between kernel versions, and can lead to unexpected results. Use these options only if actually required and only if you fully understand the possible consequences. E.g. SO_REUSEPORT can help to establish more connections than the number of available client ports, but it can also lead to connections randomly failing due to ip+port pair collisions if the remote or local system doesn't support this trick.
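As a sketch of the system-side tuning (Linux-specific; the values are examples only), the ephemeral port range mentioned above can be inspected and extended with sysctl, and made persistent via /etc/sysctl.conf:
sysctl net.ipv4.ip_local_port_range
sysctl -w net.ipv4.ip_local_port_range="1024 65535"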

Setting stacksize

'stacksize' is a size added to all stack allocations and can be both positive and negative. Stack is required for function calls. 3proxy itself doesn't require a large stack, but a large stack can be required if a poorly written libc, 3rd-party libraries or system functions are called. There is known dirty code in Unix ODBC implementations and built-in DNS resolvers, especially in the case of IPv6 and a large number of interfaces. Under most 64-bit systems, extending stacksize leads to additional address space usage but does not require actually committed memory, so you can increase stacksize to a relatively large value (e.g. 1024000) without the need to add physical memory; however, this is system/libc dependent and requires additional testing on your installation. Don't forget about memory-related ulimits.
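For example, the value mentioned above could be applied (before the first service is started) with:
stacksize 1024000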

For 32-bit systems, address space can be a bottleneck you should consider. If you're short of address space, you can try to use a negative stack size.

Known system issues

There are known race condition issues in the Linux / glibc resolver. The probability of a race condition increases with configurations that have IPv6, a large number of interfaces, IP addresses or resolvers. In this case, install a local recursor and use the 3proxy built-in resolver (nserver / nscache / nscache6).
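A minimal sketch, assuming a caching recursor already listens on 127.0.0.1 (the cache sizes are arbitrary examples):
nserver 127.0.0.1
nscache 65536
nscache6 65536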

Do not use public resolvers

Public resolvers like the ones from Google have rate limits. For a large number of requests, install a local caching recursor (ISC BIND named, PowerDNS Recursor, etc).

Avoid large lists

Currently, 3proxy is not optimized for large ACLs, user lists, etc. All lists are processed linearly. In the devel version you can use RADIUS authentication to avoid user lists and ACLs in 3proxy itself. RADIUS also allows you to easily set the outgoing IP on a per-user basis or implement more sophisticated logic. RADIUS is a new beta feature; test it before using it in production.

Avoid changing configuration too often

Every configuration reload requires additional resources. Avoid frequent changes, like user addition/deletion via the configuration; use alternative authentication methods instead, like RADIUS.

Consider using 'noforce'

The 'force' behaviour (default) re-authenticates all connections after a configuration reload, which may be resource consuming with a large number of connections. Consider adding the 'noforce' command before services are started to prevent connection re-authentication.
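For example (the service and port are illustrative), 'noforce' is placed before the services it should affect:
noforce
proxy -p3128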

Do not monitor configuration files directly

Using a configuration file directly in 'monitor' can lead to a race condition where the configuration is reloaded while the file is still being written. To avoid race conditions:
  1. Update config files only if there is no lock file
  2. Create a lock file when the 3proxy configuration is updated, e.g. with "touch /some/path/3proxy/3proxy.lck". If you generate config files asynchronously, e.g. on a user's request via the web, you should consider implementing existence checking and file creation as an atomic operation.
  3. Add
    system "rm /some/path/3proxy/3proxy.lck"
    at the end of the config file to remove the lock file after the configuration is successfully loaded.
  4. Use a dedicated version file to monitor, e.g.
    monitor "/some/path/3proxy/3proxy.ver"
  5. After the config is updated, change the version file so 3proxy reloads the configuration, e.g. with "touch /some/path/3proxy/3proxy.ver".
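Putting these steps together, an external update procedure might look like the following sketch (generate_3proxy_cfg and the config path are hypothetical; the lock and version file paths are the ones from the examples above):
# wait until no reload is pending or in progress
while [ -f /some/path/3proxy/3proxy.lck ]; do sleep 1; done
# create the lock, write the new configuration, then bump the version file
touch /some/path/3proxy/3proxy.lck
generate_3proxy_cfg > /some/path/3proxy/3proxy.cfg
touch /some/path/3proxy/3proxy.ver
# 3proxy reloads on the version file change and removes the lock itself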

Use TCP_NODELAY to speed up connections with small amounts of data

If most requests exchange small amounts of data in both directions without the need for bandwidth, e.g. messengers or small web requests, you can eliminate the Nagle's algorithm delay with the TCP_NODELAY flag. Usage example:
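The following line is an illustration (the service type and port are arbitrary):
proxy -p3128 -ocTCP_NODELAY -osTCP_NODELAY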
This sets TCP_NODELAY for both the client (-oc) and the server (-os) connection.

Do not use TCP_NODELAY on slow connections with high delays, or when connection bandwidth is the bottleneck.

Use splice to speed up transfers of large amounts of data

splice() allows copying data between connections without copying it to the process address space. It can speed up the proxy on high-bandwidth connections if most connections require large data transfers. "-s" enables splice usage. Example:
proxy -s
Splice is only available on Linux and is currently a beta option available in the devel version. Do not use it in production without testing. Splice requires more system buffers, but reduces process memory usage. Do not use splice if there are a lot of short-lived connections with no bandwidth requirements.

Use splice only on high-speed connections (e.g. 10GbE), if the processor or bus is the bottleneck.

TCP_NODELAY and splice are not contrary to each other and can be combined on high-speed connections.
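An illustrative combination (the port is arbitrary):
proxy -p3128 -s -ocTCP_NODELAY -osTCP_NODELAY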
