How TCP connections are counted in an unlimited proxy
In our unlimited traffic plan, there are no restrictions on data volume.
👉 You can transfer as many megabytes and gigabytes as you want.
But there is another limitation — the number of concurrent TCP connections.
This limitation often raises questions because it is less obvious than traffic or speed.
Simply put:
👉 traffic = “how much data you transferred”
👉 connections = “how many communication channels you have open at the same time”
And it is the second one that we limit.
What exactly we limit
We count only concurrent TCP connections.
👉 Important:
- not traffic (it is unlimited)
- not speed
- not number of requests
Only the number of active connections at a single moment.
Important to understand
A TCP connection is not a request.
One client can:
- open one connection and make 100 requests
- or open 100 connections and make the same 100 requests
👉 for the limit, these are completely different loads
How the limit is counted
Each open TCP connection = 1 connection in the limit.
- tracking is done per Login ID
- all protocols are counted together:
  - HTTP(S)
  - SOCKS5
- there is no difference between them — everything is summed up
👉 If you have:
- 30 connections via HTTP
- 20 via SOCKS5
Total = 50 active connections
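The counting above can be sketched as a tiny model. This is an illustration only — names like `ConnectionLimiter` and `try_open` are hypothetical, not part of our API:

```python
class ConnectionLimiter:
    """Toy model of a per-login cap on concurrent TCP connections."""

    def __init__(self, limit):
        self.limit = limit
        self.active = {}  # login_id -> number of currently open connections

    def try_open(self, login_id):
        """Return True if a new connection is allowed, False if rejected."""
        count = self.active.get(login_id, 0)
        if count >= self.limit:
            return False  # limit reached: new connection refused
        self.active[login_id] = count + 1
        return True

    def close(self, login_id):
        """Closing a connection frees a slot for a new one."""
        self.active[login_id] -= 1


limiter = ConnectionLimiter(limit=50)

# Protocol is irrelevant: 30 "HTTP" + 20 "SOCKS5" connections are just 50 connections
opened = sum(limiter.try_open("login-123") for _ in range(50))
print(opened)                         # 50 — all accepted
print(limiter.try_open("login-123"))  # False — the 51st is rejected
limiter.close("login-123")            # one old connection is freed...
print(limiter.try_open("login-123"))  # True — ...so a new one fits again
```

Note that the limiter never asks which protocol a connection uses — matching the rule above that HTTP(S) and SOCKS5 are simply summed.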
What happens when the limit is exceeded
If the limit is reached:
👉 all new connections are rejected (connection refused / reset)
At the same time:
- already established connections continue working
- new ones are not accepted until old ones are freed
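A practical client-side reaction to a rejected connection is to retry with exponential backoff, giving older connections time to close. A minimal sketch — the `open_connection` callable is a placeholder for whatever opens your proxy connection:

```python
import random
import time

def connect_with_backoff(open_connection, attempts=5, base_delay=0.5):
    """Retry a connection attempt that is refused (e.g. because the limit is reached)."""
    for attempt in range(attempts):
        try:
            return open_connection()
        except ConnectionRefusedError:
            if attempt == attempts - 1:
                raise  # out of attempts: give up
            # Exponential backoff with jitter: wait for old connections to free up
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))


# Demo with a stub that is "refused" twice, then succeeds
state = {"calls": 0}

def fake_open():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionRefusedError("connection limit reached")
    return "connected"

print(connect_with_backoff(fake_open, base_delay=0.01))  # connected (on the 3rd try)
```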
Visual: how the limit works
```mermaid
flowchart LR
    A[Your application] --> B[Connection 1]
    A --> C[Connection 2]
    A --> D[Connection 3]
    A --> E[Connection 4]
    A --> F[Connection 5]
    B --> P[Proxy]
    C --> P
    D --> P
    E --> P
    F --> P
    P --> OK[All are working]
    A --> G[Attempt to open one more]
    G --> X[❌ Rejected<br/>connection refused]
```
What a TCP connection is (simple explanation)
A TCP connection is a “separate communication channel” between your application and the proxy.
Every time:
- a browser opens a website
- a script makes a request
- a bot calls an API
— a TCP connection is created (or reused if already open).
Important note: keep-alive
If keep-alive is enabled:
👉 multiple HTTP requests can reuse the same TCP connection instead of opening a new one each time (depending on client/server behavior and connection lifetime)
This:
- conserves your connection limit
- reduces load
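You can observe this locally with nothing but the Python standard library. The sketch below starts a throwaway HTTP/1.1 server in a thread and sends five requests over a single `http.client` connection; the server sees exactly one TCP connection (no real proxy involved — this is purely an illustration of keep-alive):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

seen = set()  # distinct (ip, port) pairs = distinct TCP connections seen by the server

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps the connection open by default

    def do_GET(self):
        seen.add(self.client_address)  # each TCP connection has a unique client port
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, five requests over it (keep-alive)
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
for _ in range(5):
    conn.request("GET", "/")
    conn.getresponse().read()  # drain the body so the connection can be reused
conn.close()
server.shutdown()

print(len(seen))  # 1 — five requests counted as a single connection against the limit
```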
Visual: how keep-alive works
```mermaid
flowchart LR
    A[Your application] -->|1 connection| P[Proxy]
    P --> R1[Request 1]
    P --> R2[Request 2]
    P --> R3[Request 3]
    P --> R4[Request 4]
    R1 --> S[Website]
    R2 --> S
    R3 --> S
    R4 --> S
```
HTTP/1 — each request often creates a new TCP connection
In HTTP/1.0, or in HTTP/1.1 with keep-alive disabled, the model is simple:
👉 one request = one TCP connection
This means your application:
- opens a connection
- makes a request
- closes it
- repeats
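The open/request/close pattern can be observed with the same stdlib tools: below, every request gets a fresh `http.client` connection, and a throwaway local server counts four distinct TCP connections for four requests (illustration only, no real proxy involved):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

seen = set()  # distinct (ip, port) pairs = distinct TCP connections

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        seen.add(self.client_address)  # each new connection arrives from a new client port
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# New connection per request: open, request, close, repeat
for _ in range(4):
    conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
    conn.request("GET", "/")
    conn.getresponse().read()
    conn.close()
server.shutdown()

print(len(seen))  # 4 — four requests cost four connections against the limit
```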
Why this matters
With high activity:
👉 the number of connections grows very quickly
And you may:
- hit the limit
- get connection errors
- experience instability
Visual: HTTP/1 behavior
```mermaid
flowchart LR
    A[Your application]
    A -->|Request 1| C1[Separate connection]
    A -->|Request 2| C2[Separate connection]
    A -->|Request 3| C3[Separate connection]
    A -->|Request 4| C4[Separate connection]
    C1 --> P[Proxy]
    C2 --> P
    C3 --> P
    C4 --> P
```
HTTP/2 — more efficient model
HTTP/2 can significantly reduce the number of connections.
👉 How it works:
- one TCP connection
- multiple parallel streams inside it
👉 Important:
- only TCP connections are counted
- streams are not limited
Visual: HTTP/2 behavior
```mermaid
flowchart LR
    A[Your application] -->|1 connection| P[Proxy]
    P --> R1[Request 1]
    P --> R2[Request 2]
    P --> R3[Request 3]
    P --> R4[Request 4]
    P --> R5[Request 5]
    P --> R6[Request 6]
    R1 --> S[Website]
    R2 --> S
    R3 --> S
    R4 --> S
    R5 --> S
    R6 --> S
```
What this means in practice
- HTTP/1 → many TCP connections
- HTTP/2 → fewer TCP connections
👉 HTTP/2:
- uses the limit more efficiently
- may still carry the same or higher load
Typical scenarios
1. Browser
Browsers open multiple connections per site.
👉 With HTTP/2, fewer connections are used.
2. Scraping / parsing
Common mistake:
- no keep-alive
- no connection pooling
- new connection per request
👉 Result: the limit is exhausted quickly
3. Bots / automation tools
Common issues:
- too many threads
- no connection reuse
- connections not properly closed
👉 This is the most frequent cause of hitting limits
Common mistakes
❌ “I have low traffic, why is it blocked?”
→ because connections are counted, not traffic
❌ “I only made 100 requests”
→ those may be 100 separate TCP connections
❌ “It works sometimes”
→ you are intermittently hitting the connection limit
How to use the system efficiently
👉 Recommendations:
- enable keep-alive
- use connection pooling
- limit the number of threads
- prefer HTTP/2 when possible
- close unused connections
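Pooling and thread limiting can be combined in one small pattern: a bounded pool that reuses idle connections and never opens more than a fixed number at once. A minimal sketch — the pool class and the `factory` callable are illustrative, not part of our API:

```python
import queue
import threading

class BoundedConnectionPool:
    """Reuse idle connections; never hold more than `maxsize` open at once."""

    def __init__(self, factory, maxsize):
        self._factory = factory            # callable that opens a new connection
        self._idle = queue.Queue(maxsize)  # connections waiting to be reused
        self._created = 0
        self._maxsize = maxsize
        self._lock = threading.Lock()

    def acquire(self, timeout=None):
        try:
            return self._idle.get_nowait()  # reuse an idle connection if one exists
        except queue.Empty:
            pass
        with self._lock:
            if self._created < self._maxsize:
                self._created += 1
                return self._factory()      # still under the cap: open a new one
        return self._idle.get(timeout=timeout)  # cap reached: wait for a free one

    def release(self, conn):
        self._idle.put(conn)                # hand the connection back for reuse


# Demo with a fake factory that records how many "connections" were ever opened
opened = []

def fake_factory():
    opened.append(object())
    return opened[-1]

pool = BoundedConnectionPool(fake_factory, maxsize=3)
conns = [pool.acquire() for _ in range(3)]  # opens 3 connections
for c in conns:
    pool.release(c)
for _ in range(10):                         # 10 more acquires...
    pool.release(pool.acquire())            # ...all reuse the same 3 connections
print(len(opened))  # 3 — the cap holds no matter how many requests you make
```

This is essentially what `requests.Session` or a browser connection pool does for you; the point is that the cap on open connections is enforced by reuse, not by slowing down requests.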
Summary
- only concurrent TCP connections are counted
- tracked per Login ID
- HTTP and SOCKS5 are combined
- exceeding limit → connection refused/reset
- HTTP/2 reduces number of connections
If you have questions, contact support — we can help optimize your setup.
Ready to test with real IPs?
Register now to get immediate access to our proxy pools.