How can I improve this very slow performance after adding database?

⚓ rust    📅 2025-05-28    👤 surdeus


I’m building a simple full-stack website. The performance was excellent before I added the database: it was hitting 180,000 requests per second when returning an HTML page to the browser (I tested the “/” endpoint, which returns an HTML page).

Here is the minimal reproducible code:

main.rs

use bb8::Pool;
use bb8_postgres::PostgresConnectionManager;
use ntex::web::{self, types::State, HttpResponse, HttpServer, Responder};
use rust_embed::RustEmbed;
use serde_json::{Map, Value};
use tokio_postgres::{types::Type, NoTls, Row};

#[derive(RustEmbed)]
#[folder = "ui/"]
pub struct Frontend;

async fn frontend_handler() -> HttpResponse {
    let html = Frontend::get("index.html").unwrap();
    HttpResponse::Ok()
        .content_type("text/html")
        .body(html.data.into_owned())
}

async fn get_api(
    pool: State<Pool<PostgresConnectionManager<NoTls>>>
) -> impl Responder {
    let conn = pool.get().await.unwrap();
    let data = conn.query("SELECT * FROM produk", &[]).await.unwrap();
    let json = {
        let result: Vec<serde_json::Value> = data
            .iter()
            .map(crate::row_to_json_object)
            .collect();
        serde_json::Value::Array(result)
    };

    web::types::Json(json)
}

fn row_to_json_object(row: &Row) -> Value {
    let mut obj = Map::new();

    for col in row.columns() {
        let name = col.name();
        let typ = col.type_();

        // `Type` constants can't be used directly as match patterns,
        // so compare with match guards instead.
        let value: Value = match typ {
            t if *t == Type::INT2 => row.get::<_, i16>(name).into(),
            t if *t == Type::INT4 => row.get::<_, i32>(name).into(),
            t if *t == Type::INT8 => row.get::<_, i64>(name).into(),
            t if *t == Type::FLOAT4 => row.get::<_, f32>(name).into(),
            t if *t == Type::FLOAT8 => row.get::<_, f64>(name).into(),
            t if *t == Type::TEXT || *t == Type::VARCHAR => {
                row.get::<_, String>(name).into()
            }
            t if *t == Type::BOOL => row.get::<_, bool>(name).into(),
            t if *t == Type::TIMESTAMPTZ => {
                // TIMESTAMPTZ deserializes as DateTime<Utc>, not NaiveDateTime.
                let dt: chrono::DateTime<chrono::Utc> = row.get(name);
                dt.to_rfc3339().into()
            }
            t if *t == Type::TIMESTAMP => {
                let dt: chrono::NaiveDateTime = row.get(name);
                dt.to_string().into()
            }
            t if *t == Type::DATE => {
                let d: chrono::NaiveDate = row.get(name);
                d.to_string().into()
            }
            _ => Value::Null,
        };

        obj.insert(name.to_string(), value);
    }

    Value::Object(obj)
}

#[ntex::main]
async fn main() -> std::io::Result<()> {
    let conn_str = "host=localhost port=5432 user=postgres password=fuji dbname=tes";
    let manager =
        PostgresConnectionManager::new_from_stringlike(conn_str, NoTls).unwrap();
    let pool = Pool::builder()
        .max_size(200) // Increase pool size
        .min_idle(Some(50))
        // .max_lifetime(Some(std::time::Duration::from_secs(3600)))
        // .idle_timeout(Some(std::time::Duration::from_secs(600)))
        .build(manager)
        .await
        .unwrap();

    HttpServer::new(move || {
        web::App::new()
            .state(pool.clone())
            .route("/", web::get().to(frontend_handler))
            .route("/api", web::get().to(get_api))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

Cargo.toml


[package]
name = "my_app"
version = "0.1.0"
edition = "2024"

[dependencies]
ntex = { version = "*", features = ["tokio"] }
serde = { version = "*", features = ["derive"] }
serde_json = "*"
bb8 = "0.9.0"
bb8-postgres = "0.9.0"
tokio-postgres = { version = "0.7", features = ["with-chrono-0_4"] }
chrono = { version = "0.4", features = ["serde"] }
rust-embed = "8.6.0"

[profile.release]
opt-level = 3
codegen-units = 1
panic = "abort"
lto = "thin"
debug = false
incremental = false
overflow-checks = false

After adding the "/api" endpoint with the database code above, testing "/api" showed performance dropping to only 15,000 requests per second, down from the previous 180,000.

So far I’ve observed:

  • The JSON encoder function isn’t the main culprit, because when I commented out that code and returned a hard-coded JSON string like "hello", performance stayed at around 15,000 req/s.
  • The database code is what’s causing the drop: when I comment out the query itself, performance jumps back up to about 50,000 req/s.
  • It turns out that even just acquiring a connection from the pool has a very big impact: if I comment out the pool.get() call, performance immediately spikes to 230,000 req/s, even exceeding the 180,000 req/s I saw when serving the (larger) HTML page.
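For anyone wanting to reproduce these per-stage measurements, a minimal std-only timing helper could look like the sketch below. The name `time_stage` is made up for illustration; in the actual async handler you would instead record `Instant::now()` before and after each individual `.await`.

```rust
use std::time::{Duration, Instant};

/// Run `f` once, print how long it took, and return the result
/// together with the elapsed time. (Illustrative helper only.)
fn time_stage<T>(label: &str, f: impl FnOnce() -> T) -> (T, Duration) {
    let start = Instant::now();
    let out = f();
    let elapsed = start.elapsed();
    eprintln!("{label}: {elapsed:?}");
    (out, elapsed)
}
```

Wrapping the pool acquisition and the query separately this way makes it obvious which of the two dominates the per-request cost.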

So in summary:

  • Returning HTML alone = 180,000 req/s
  • Returning a plain hard-coded JSON string = 230,000 req/s
  • Adding code to acquire a connection from the pool (but not running any database query) = drops to 50,000 req/s
  • Then adding the database query = drops further to 15,000 req/s
  • Then adding the JSON encoding on top = still 15,000 req/s
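To put those figures in perspective: with a fixed number of in-flight benchmark connections, throughput and per-request latency are linked by Little's law, latency ≈ concurrency / throughput. A small sketch, assuming a hypothetical benchmark concurrency of 100 (my real setting may differ):

```rust
/// Implied average per-request latency in microseconds, via Little's law:
/// latency = concurrency / throughput.
fn implied_latency_us(concurrency: f64, req_per_sec: f64) -> f64 {
    concurrency / req_per_sec * 1_000_000.0
}
```

Plugging in the observed numbers under that assumed concurrency, 50,000 req/s implies roughly 2,000 µs per request and 15,000 req/s roughly 6,700 µs, i.e. the database path adds several milliseconds of service time per request.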

I’ve already tried tuning the pool settings (raising the max size from 50 to 200 and setting min_idle to 50), but it had no effect.

My questions:

  1. Why does simply acquiring a connection from the pool in the code above (not doing any database query) reduce performance from 230,000 req/s to 50,000 req/s?
  2. Why does querying a table that contains only 1 very simple row—[{"nama":"produk 1"}]—further reduce performance from 50,000 req/s to 15,000 req/s?

I would greatly appreciate any insight.
