Dockerizing My MERN App: 3 Days of Confusion Before It Finally Clicked
Trying to Dockerize a MERN app nearly broke my brain for three days. If you're stuck with Docker, this is the simple explanation and real experience that finally made everything click.
Everyone talks about Docker like it’s some kind of magic fix for deployment.
“Just containerize your app,” they say. “It’ll run the same everywhere,” they promise.
So I thought—how hard could it really be?
I was working on my MERN application, and it seemed simple enough: write a Dockerfile, run a couple of commands, and call it a day.
Yeah… not even close.
It took me three full days to get everything working. And even after that, I couldn’t confidently explain why it was working.
My First Dockerfile Was… Honestly Terrible
I started by copying a Dockerfile I found online and tweaking a few paths, assuming that would be enough.
Fifteen minutes later, I had built a Docker image that was somehow almost 2GB in size.
That should’ve been my first warning.
When I tried to run it, the app crashed immediately. Errors like “Cannot find module” kept popping up—even though the packages were clearly installed.
So I started checking everything.
I checked package.json.
I checked node_modules.
I checked it all again.
Everything looked fine.
And that’s what made it even more frustrating—because something was obviously wrong, and I had no idea what Docker was actually doing behind the scenes.
The Node Modules Problem
Here's what I was doing wrong, and it's embarrassing in hindsight.
I was running npm install on my Mac, then copying the entire project including node_modules into a Linux container. Some npm packages have native bindings that get compiled for your specific operating system. Mac binaries don't work on Linux. Who knew?
Well, apparently everyone except me.
The fix:
```dockerfile
COPY package*.json ./
RUN npm install
COPY . .
```
The fix turned out to be surprisingly simple—but not obvious at all.
Copy the package files first. Then run npm install inside the container so it builds the correct Linux binaries. After that, copy the rest of the application.
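One gotcha with that last `COPY . .` line: without a `.dockerignore`, it still drags your local node_modules (Mac binaries and all) back into the image on top of the freshly installed Linux ones. A minimal `.dockerignore` keeps them out — the exact entries depend on your project, but mine looks roughly like this:

```
node_modules
npm-debug.log
.env
.git
```

Ignoring `.env` here also matters later, so secrets never end up baked into an image by accident.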
Once I understood this, everything started to make sense.
As a bonus, it made my builds way faster. Docker caches each layer, so if package.json hasn’t changed, it just reuses the previous npm install step instead of reinstalling everything from scratch.
What used to take 15 minutes suddenly dropped to about 30 seconds—at least when I wasn’t adding new dependencies.
That was the moment Docker finally started to feel… useful.
MongoDB Refused to Connect
I finally got the Node container running. Great.
Now it needed to talk to MongoDB.
I added MongoDB to my docker-compose.yml, updated the connection string, and used the same URI I’d always used: mongodb://localhost:27017/myapp.
It should’ve worked.
It didn’t.
“Connection refused.”
I tried everything—changing ports, restarting containers, rebuilding images. Nothing made a difference. No matter what I did, the Node app just couldn’t see MongoDB.
That’s when I learned something important.
Inside a Docker container, localhost doesn’t mean your machine. It doesn’t even mean other containers.
It only refers to that specific container.
So when my Node app was trying to connect to MongoDB on localhost, it was basically looking inside itself… where MongoDB obviously didn’t exist.
The solution was using Docker's internal DNS:
```yaml
services:
  api:
    build: ./server
    environment:
      - MONGO_URI=mongodb://mongo:27017/myapp
  mongo:
    image: mongo:6
```
See that mongo in the connection string? That's not a hostname I set up. That's the service name from docker-compose. Docker automatically creates DNS entries for each service name, so containers can find each other.
Changed localhost to mongo and everything connected instantly. Would've saved myself two hours if I'd known that from the start.
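In the Node code itself, I stopped hard-coding the host and read it from the environment instead, falling back to the compose service name. A small sketch — `MONGO_URI` matches the variable in the compose file above, and the fallback URI is just my setup:

```javascript
// db.js — pick the Mongo connection string from the environment.
// Falls back to the compose service name "mongo", not localhost,
// because inside a container localhost is the container itself.
function resolveMongoUri(env = process.env) {
  return env.MONGO_URI || 'mongodb://mongo:27017/myapp';
}

module.exports = { resolveMongoUri };
```

The actual connect call (e.g. `mongoose.connect(resolveMongoUri())`) then never changes between Docker and bare-metal runs — only the environment variable does.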
The Image Size Problem
Even after fixing the build process, my images were still massive. The Node app image was over 2GB, for an app that's maybe 50MB of actual code.
The culprit was the default node:20 base image, which includes a ton of stuff I didn't need: build tools, Python, git, the whole kitchen sink.
Then I learned about multi-stage builds. You can use one image for building (with all the dev dependencies and tools), then copy just the production files into a smaller base image.
Went from this:
```dockerfile
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]
```
To this:
```dockerfile
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app .
CMD ["node", "server.js"]
```
The alpine variant is built on Alpine Linux, a stripped-down distribution without all the extra tooling. My final image dropped from 2GB to around 200MB.
Not perfect, but way better.
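One thing the multi-stage file above still does is ship devDependencies, since it copies the builder's node_modules wholesale. A variant I'd try next prunes them before the handoff — a sketch, not tested against every native module (some need rebuilding against Alpine's musl libc, and `npm prune --omit=dev` needs a reasonably recent npm; older versions spell it `--production`):

```dockerfile
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Drop devDependencies before handing off to the runtime stage
RUN npm prune --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app .
CMD ["node", "server.js"]
```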
Development Was Painfully Slow
Okay, production builds were working. But development was miserable.
Every time I changed a line of code, I had to stop the container, rebuild the image, and restart. This took at least a minute each time. Completely killed my workflow.
I needed hot reload. I needed my changes to show up immediately like they did without Docker.
Volumes solved this. Instead of copying the code into the image, you mount your local directory as a volume. The container uses your actual source files, so when you edit them, nodemon picks up the changes and restarts automatically.
```yaml
services:
  api:
    build: ./server
    volumes:
      - ./server:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
```
That second volume line (/app/node_modules) is important. Without it, your local node_modules folder would override the one in the container. Since I often don't even have node_modules locally (because I'm running everything in Docker), this would break the app.
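For the container to actually run nodemon instead of plain node, the dev service also needs a command override — a sketch, assuming nodemon is in your devDependencies and the entry point is server.js:

```yaml
services:
  api:
    command: npx nodemon server.js
```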
Now I could edit code and see changes instantly. Finally felt like normal development again.
Environment Variables Were Confusing
I had .env files for local development. But how do you handle those in Docker?
Tried copying the .env file into the image. Bad idea. Now my secrets are baked into the image, which is terrible for security and flexibility.
Docker Compose has an env_file option:
```yaml
services:
  api:
    build: ./server
    env_file:
      - ./server/.env
```
This loads environment variables at runtime, not build time. Much better.
For production, I pass environment variables directly through my DevOps deployment workflow or server configuration.
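Whichever way the variables arrive, I now fail fast at startup if a required one is missing, instead of debugging a half-configured app later. A small sketch — the variable names here (`MONGO_URI`, `JWT_SECRET`) are just the ones from my setup:

```javascript
// config.js — validate required environment variables at startup.
const REQUIRED = ['MONGO_URI', 'JWT_SECRET'];

function loadConfig(env = process.env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return { mongoUri: env.MONGO_URI, jwtSecret: env.JWT_SECRET };
}

module.exports = { loadConfig };
```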
What Finally Made It Click
For the longest time, I treated Docker like some kind of black box.
Containers, images, layers, orchestration—it all sounded important, but none of it really meant anything to me.
Then someone explained it in the simplest way possible:
Docker is just Linux processes with isolation.
A container is basically a process that thinks it has its own machine, when in reality it’s just sharing the host system’s kernel.
That one idea changed everything.
Suddenly, debugging started to make sense. The container isn’t a virtual machine. It’s not some completely separate world. It’s just a controlled process running on your computer.
Around the same time, I also changed how I approached errors.
Instead of immediately googling every issue, I started actually reading the error messages.
Turns out… Docker’s errors are pretty helpful most of the time.
I was just too busy panicking to notice.
Still Not an Expert
I still don’t know everything about Docker.
I google basic commands all the time. I haven’t touched Kubernetes yet—and honestly, it still feels a bit intimidating.
But things are different now.
My deployments are consistent. I can spin up the entire app with a single command, and it works the same on my laptop, staging, and production.
That’s all I really wanted.
I also wrote about doing this without containers in my guide on deploying a MERN app on a VPS, if you’re curious about a simpler approach.
It took three frustrating days to get here—but now I finally understand what people mean when they talk about Docker.
If you’re just getting started, expect confusion. Expect things to break in weird ways.
That’s part of the process.
You’ll figure it out.
Just remember one thing:
localhost doesn’t mean what you think it means inside a container.