The server takes up a lot of memory after hundreds of millions of requests #347
Comments
Can you provide more context? How many requests? How much memory? Did removing `ConnectInfo` change things? I have tested the code you posted and saw no increase in memory as the number of requests grew. My numbers are:
The code sending the requests was:

```rust
use hyper::{Body, Client, Request};
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Duration;

static COUNT: AtomicU64 = AtomicU64::new(0);
const N: u64 = 100_000_000;

#[tokio::main]
async fn main() {
    println!("N = {}", N);
    // Progress reporter: only reads the counter, so it doesn't consume request slots.
    std::thread::spawn(|| {
        while COUNT.load(Ordering::Relaxed) < N {
            println!(
                "{}",
                ((COUNT.load(Ordering::Relaxed) as f64 / N as f64) * 100.0).round() as i64
            );
            // `sleep_ms` is deprecated; use `sleep` with a `Duration`.
            std::thread::sleep(Duration::from_secs(1));
        }
    });
    let tasks = (0..100)
        .map(|_| {
            tokio::spawn(async move {
                let client = Client::new();
                loop {
                    if COUNT.fetch_add(1, Ordering::Relaxed) >= N {
                        break;
                    }
                    client
                        .request(
                            Request::builder()
                                .uri("http://localhost:8380/ip")
                                .body(Body::empty())
                                .unwrap(),
                        )
                        .await
                        .unwrap();
                }
            })
        })
        .collect::<Vec<_>>();
    for t in tasks {
        t.await.unwrap();
    }
}
```

The server code is what you posted.
@davidpdrsn

> How many requests?

2,000 requests/second × 60 × 60 × 24 = 172,800,000 requests/day.

> How much memory?

The service used 6 MB of memory when it was started, 500 MB after a day of requests, and about 2 GB after roughly three days.

> Did removing ConnectInfo change things?

No, I need to get the remote IP. Maybe you can provide a way to get the remote IP without using `ConnectInfo`; I would test it.

> Can you reproduce it using hyper directly without axum?

OK, I will give you the result in 48 hours.
That's not possible due to hyper's design.
Can you share code for that as well? In general the code you posted is doing very little, so if there is an issue it is unlikely that axum is the cause.
```rust
use std::convert::Infallible;
use std::net::SocketAddr;

use hyper::server::conn::AddrStream;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Response, Server};

async fn show_headers(addr: SocketAddr) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(format!("{}", addr.ip()).into()))
}

#[tokio::main]
async fn main() {
    let make_service = make_service_fn(move |conn: &AddrStream| {
        // `SocketAddr` is `Copy`, so it can be moved into both closures directly.
        let addr = conn.remote_addr();
        async move { Ok::<_, Infallible>(service_fn(move |_| show_headers(addr))) }
    });
    Server::bind(&SocketAddr::from(([0, 0, 0, 0], 8380)))
        .serve(make_service)
        .await
        .unwrap();
}
```

Cargo.toml:

```toml
[dependencies]
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }
```

Result: the hyper version is OK; axum used too much memory.
Are you able to make a reproduction script that I can run?
Python script; needs Python >= 3.8:

```python
import asyncio
import time
from asyncio import TimeoutError

import aiohttp
from aiohttp import ClientProxyConnectionError, ServerDisconnectedError, ClientOSError, ClientHttpProxyError
from attr import attrs, attrib

EXCEPTIONS = (
    ClientProxyConnectionError,
    ConnectionRefusedError,
    TimeoutError,
    ServerDisconnectedError,
    ClientOSError,
    ClientHttpProxyError,
    AssertionError,
)

TEST_URL = 'http://localhost:8380/ip'
TEST_VALID_STATUS = [200]
TEST_TIMEOUT = 10


@attrs(hash=True)
class Proxy(object):
    """
    proxy schema
    """
    host = attrib(type=str, default=None, hash=True)
    port = attrib(type=int, default=None, hash=True)

    @staticmethod
    def of(raw_ip: str):
        ris = raw_ip.split(':')
        return Proxy(ris[0], int(ris[1]))

    def __str__(self):
        """
        to string, for printing
        :return:
        """
        return f'{self.host}:{self.port}'

    def string(self):
        """
        to string
        :return: <host>:<port>
        """
        return self.__str__()


async def check_batch(ips: list[Proxy]):
    results = await asyncio.gather(*[check_single(ip) for ip in ips])
    return [r for r in results if r]


async def check_single(proxy: Proxy):
    try:
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(proxy.host, proxy.port), timeout=5)
        writer.close()
    except Exception:
        return proxy
    async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(ssl=False)) as session:
        try:
            async with session.get(TEST_URL, proxy=f'http://{proxy.string()}',
                                   timeout=TEST_TIMEOUT, allow_redirects=False) as response:
                if response.status in TEST_VALID_STATUS:
                    return
                else:
                    return proxy
        except EXCEPTIONS:
            return proxy
        except Exception:
            return proxy


def load_proxy():
    # load all proxies
    return []


if __name__ == '__main__':
    while True:
        proxy_list = load_proxy()
        asyncio.run(check_batch(proxy_list))
        time.sleep(1)
```
I don't understand — can I run the script or not? It's hard for us to figure out what's wrong if the bug cannot be reproduced.
```python
def load_proxy():
    # load all HTTP proxies from your data source
    return []
```

HTTP proxies are required to run this script and reproduce this issue.
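For anyone without a proxy source handy, a hypothetical stand-in for `load_proxy` that parses `host:port` lines (the input format is assumed from `Proxy.of` above):

```python
# Hypothetical stand-in: turn "host:port" lines into (host, port) pairs,
# skipping blank or malformed entries.
def parse_proxies(lines):
    proxies = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        host, sep, port = line.rpartition(':')
        if not sep or not port.isdigit():
            continue
        proxies.append((host, int(port)))
    return proxies

print(parse_proxies(["1.2.3.4:8080", "", "no-port", "5.6.7.8:3128"]))
# → [('1.2.3.4', 8080), ('5.6.7.8', 3128)]
```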
Do you think using an HTTP proxy actually matters? If the problem is caused by axum, I suppose it shouldn't matter.
@davidpdrsn Very little memory is used when not using an HTTP proxy.
Alright, I guess that's good. How do you suggest we debug the issue then?
@davidpdrsn Maybe I can use jemalloc to dump the memory and submit it. Or is there a better way to debug Rust memory usage?
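For reference, jemalloc's built-in heap profiler is one common way to track Rust memory usage. A minimal sketch using the `tikv-jemallocator` crate (the crate choice, feature name, and environment settings below are assumptions for illustration, not something verified in this thread):

```toml
# Cargo.toml (sketch): opt into jemalloc with profiling support
[dependencies]
tikv-jemallocator = { version = "0.5", features = ["profiling"] }
```

```rust
// main.rs (sketch): route all allocations through jemalloc
// so its profiler can see them.
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
```

Heap dumps can then be enabled at startup through jemalloc's `MALLOC_CONF` options (for example `prof:true,lg_prof_interval:30`) and inspected with `jeprof`; depending on how the crate builds jemalloc, the environment variable may be prefixed (e.g. `_RJEM_MALLOC_CONF`).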
@davidpdrsn |
@zzl221000 Do you see anything in that from axum? I've never used it before.
@zzl221000 any news?
@davidpdrsn I can't continue working on this until the day after my vacation ends. I found the same issue in the hyper project, but that is an issue that has already been solved.
@davidpdrsn It's hyper's bug. The hyper version of the program had the same problem after running for seven days.
Alright, good to know! I'll close this for now, but I suggest you re-open that hyper issue or file a new one.
Bug Report
Version
axum v0.2.4
Platform
Docker image based on distroless `cc-debian10`
Crates
Description
Using `ConnectInfo` to get the remote IP takes up a lot of memory after hundreds of millions of requests.