My Node.js API Was Slow. Here's What Actually Helped.
Response times were bad. Users complained. I tried everything. Some things worked, most didn't. This is what made a real difference.
My Node.js API in one of my MERN stack projects was running slow. Not so slow that it was unusable, but slow enough for my users to notice. The dashboard took around 3 seconds to load, which is a long time to stare at a loading screen.
I spent two weeks working on making it faster. Here is what actually helped.
Finding the Problem First
My first mistake: I started trying to make things faster without checking what was actually slow. I should have measured first. So I added logging with timestamps to every operation.
That is how I found out the database queries were taking about 80 percent of the total response time. I had been optimizing the wrong things entirely, which was a waste of time. The queries were what needed fixing, not the code I had been tuning.
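For reference, here is a minimal sketch of that timing approach. The `timed` helper and its labels are my illustration, not the exact code from the project:
// Wraps any async operation and logs how long it took
async function timed(label, fn) {
  const start = process.hrtime.bigint()
  const result = await fn()
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`${label} took ${ms.toFixed(1)}ms`)
  return result
}
// Usage: wrap the operations you suspect
const posts = await timed('Post.find', () => Post.find())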
The N+1 Query Problem
This was my issue: the classic N+1 query problem. I was fetching a list of items, then fetching the related data for each item one by one, which meant running a separate query for every single item.
// Bad: N+1 queries
const posts = await Post.find()
for (const post of posts) {
  post.author = await User.findById(post.authorId)
}
50 posts meant 51 database queries. That's insane.
The fix was to batch the related queries:
// Good: 2 queries
const posts = await Post.find()
const authorIds = posts.map(p => p.authorId)
const authors = await User.find({ _id: { $in: authorIds } })
// Map authors back onto their posts
const authorsById = new Map(authors.map(a => [a._id.toString(), a]))
for (const post of posts) {
  post.author = authorsById.get(post.authorId.toString())
}
Response time dropped from 2 seconds to 200ms.
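For what it's worth, Mongoose can do this batching for you: .populate() issues a similar $in query behind the scenes. A sketch of the alternative, assuming the schema declares the reference; it is not what I originally shipped:
// Works if the schema has: authorId: { type: Schema.Types.ObjectId, ref: 'User' }
const posts = await Post.find().populate('authorId')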
Indexes
I did not have indexes on the fields I query often, so MongoDB was scanning entire collections to filter and sort. Since most of my backend work runs on MongoDB and Node.js, the missing indexes were slowing down nearly everything.
I added indexes to the fields that I use to filter and sort data:
postSchema.index({ authorId: 1 }) // lookups by author
postSchema.index({ createdAt: -1 }) // newest-first sorting
postSchema.index({ status: 1, createdAt: -1 }) // filter by status, then sort by date
Some queries went from 500ms to 5ms. Not a typo.
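If you want to confirm an index is actually being used, Mongoose queries support .explain(), which passes MongoDB's query plan through. A diagnostic sketch; the query shape here is illustrative:
// Look for IXSCAN (index scan) rather than COLLSCAN (full collection scan)
const plan = await Post.find({ status: 'published' })
  .sort({ createdAt: -1 })
  .explain('executionStats')
console.log(JSON.stringify(plan.queryPlanner.winningPlan, null, 2))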
Caching Hot Data
Some data rarely changes but gets requested constantly: user profiles, app settings, category lists. That kind of data is perfect for caching.
I started out with Redis because it was part of my larger DevOps plans. Then I realized that for my workload a simple in-process cache was enough, which is when I found node-cache. It did exactly what I needed for hot data like user profiles, app settings, and category lists.
const NodeCache = require('node-cache')

const cache = new NodeCache({ stdTTL: 300 }) // entries expire after 5 minutes

async function getCategories() {
  const cached = cache.get('categories')
  if (cached) return cached
  const categories = await Category.find()
  cache.set('categories', categories)
  return categories
}
Cache invalidation is genuinely hard. I settled on time-based expiry, which is not perfect, but it is good enough for my use case.
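When data does change before the TTL expires, node-cache can also evict a key explicitly with cache.del(), so write paths can clear stale entries. A sketch; createCategory is a hypothetical helper of mine, not from the original post:
async function createCategory(data) {
  const category = await Category.create(data)
  cache.del('categories') // drop the stale list; the next read repopulates it
  return category
}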
Pagination
I was returning all results by default, which meant sending thousands of records in a single response. Looking back, I have no idea why I thought that was a good idea. I added limit and skip options.
Now by default I show 20 items at a time. This change has made my responses much smaller and faster.
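As a sketch of what the route ended up looking like (the query parameter names and the cap on limit are my choices, not necessarily the original code):
app.get('/posts', async (req, res) => {
  const page = Math.max(1, parseInt(req.query.page, 10) || 1)
  const limit = Math.min(100, parseInt(req.query.limit, 10) || 20) // default 20, cap at 100
  const posts = await Post.find()
    .sort({ createdAt: -1 })
    .skip((page - 1) * limit)
    .limit(limit)
  res.json(posts)
})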
Compression
I turned on gzip compression, and response sizes dropped by about 70%. The CPU overhead was not noticeable at all.
One line of middleware (plus its require):
const compression = require('compression')
app.use(compression())
Should have done this from the start.
What Did Not Help
- I tried changing how I handle JSON serialization. It made no measurable difference.
- I added memory to the server. The problem was not memory; it was I/O bound. I found this out the hard way while setting up the project on a virtual private server, which I covered in my post on deploying MERN on a VPS.
- I used clustering. It actually made things worse, because my code was not written to run safely across multiple processes.
Current State
The dashboard now loads in 400 milliseconds. It is still not perfect, but users have stopped complaining.
The biggest lesson I learned: measure first. I spent a lot of time on optimizations that made no difference; the real gains came from fixing the bottlenecks I had been ignoring.
Before vs After Performance
Here’s a quick comparison of what actually changed:
| Metric | Before | After |
|---|---|---|
| API Response Time | ~2000ms | ~200ms |
| DB Query Time | ~500ms | ~5ms |
| Payload Size | Uncompressed | ~70% smaller (gzip) |
| Dashboard Load | ~3 seconds | ~400ms |
These numbers are not estimates. I measured them before and after each change using the application's logs and real user traffic.