Dockerizing My MERN App: The Mess Before the Magic
I spent three days fighting Docker before it finally clicked. Here's what actually helped me understand containers.
Everyone talks about Docker like it's this magical solution that makes deployment easy. "Just containerize your app," they said. "It'll work the same everywhere," they said.
So I figured, how hard could it be? Write a Dockerfile, run a couple commands, done.
Took me three full days to get it working. And even then, I wasn't totally sure why it worked.
My First Dockerfile Was Terrible
I reused a Dockerfile I found online and changed a few paths, thinking it would work.
Fifteen minutes later, I had a Docker image that was almost 2GB in size. When I finally ran it, the app crashed immediately with errors like “Cannot find module” — even though the packages were clearly installed.
I checked package.json.
I checked node_modules.
Everything looked correct.
But something was clearly wrong, and I didn’t yet understand what Docker was actually doing behind the scenes.
The Node Modules Problem
Here's what I was doing wrong, and it's embarrassing in hindsight.
I was running npm install on my Mac, then copying the entire project including node_modules into a Linux container. Some npm packages have native bindings that get compiled for your specific operating system. Mac binaries don't work on Linux. Who knew?
Well, apparently everyone except me.
The fix:
COPY package*.json ./
RUN npm install
COPY . .
Copy the package files first. Run npm install inside the container so it builds Linux binaries. Then copy everything else.
This also made builds way faster because Docker caches layers. If package.json hasn't changed, Docker reuses the cached npm install layer instead of reinstalling everything from scratch.
Suddenly my 15-minute builds were down to 30 seconds when I wasn't adding new packages.
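Related tip I picked up around the same time: a .dockerignore file keeps COPY . . from dragging your local node_modules (and other junk) into the image at all, so you can't reintroduce the Mac-binaries problem by accident. Something like:

```
node_modules
npm-debug.log
.git
.env
```

Docker reads this the same way git reads .gitignore, so anything listed never even gets sent to the build.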
MongoDB Refused to Connect
Got the Node container running. Great. Now it needed to talk to MongoDB.
I added MongoDB to my docker-compose.yml and updated my connection string in the Node app. Used mongodb://localhost:27017/myapp like I always had.
Didn't work. Connection refused.
Tried changing ports, restarting containers, rebuilding images. Nothing. The Node app just couldn't see MongoDB no matter what I did.
Turns out, localhost inside a Docker container means that specific container, not your host machine or other containers. Each container is isolated. If the Node app looks for MongoDB on localhost, it's looking inside its own container where MongoDB doesn't exist.
The solution was using Docker's internal DNS:
services:
  api:
    build: ./server
    environment:
      - MONGO_URI=mongodb://mongo:27017/myapp
  mongo:
    image: mongo:6
See that mongo in the connection string? That's not a hostname I set up. That's the service name from docker-compose. Docker automatically creates DNS entries for each service name, so containers can find each other.
Changed localhost to mongo and everything connected instantly. Would've saved myself two hours if I'd known that from the start.
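One more thing I'd add to that compose file now: depends_on, so Compose starts mongo before the api service. A sketch (note that this only controls start order, not whether MongoDB is actually ready, so the app should still retry its connection):

```yaml
services:
  api:
    build: ./server
    environment:
      - MONGO_URI=mongodb://mongo:27017/myapp
    depends_on:
      - mongo   # start mongo first; doesn't wait for it to be ready
  mongo:
    image: mongo:6
```

Without it, the api container can race ahead of the database on a cold start and crash once before the restart gets it connected.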
The Image Size Problem
Even after fixing the build process, my images were still massive. The Node app image was over 2GB. For an app that's maybe 50MB of actual code.
I was using the default node:20 base image, which includes a ton of stuff I didn't need. Build tools, Python, git, the whole kitchen sink.
Then I learned about multi-stage builds. You can use one image for building (with all the dev dependencies and tools), then copy just the production files into a smaller base image.
Went from this:
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]
To this:
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app .
CMD ["node", "server.js"]
The alpine image is a stripped-down version of Linux. Doesn't have all the extra stuff. My final image dropped from 2GB to around 200MB.
Not perfect, but way better.
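One caveat with the version above: COPY --from=builder /app . still carries over devDependencies, because npm install grabbed everything in the builder stage. If you want to squeeze further, you can prune them before the copy. Something like this should work (I haven't measured the exact savings):

```dockerfile
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# drop devDependencies once any build steps are done
RUN npm prune --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app .
CMD ["node", "server.js"]
```

Same structure as before, just one extra RUN line in the builder stage so the final image only inherits production dependencies.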
Development Was Painfully Slow
Okay, production builds were working. But development was miserable.
Every time I changed a line of code, I had to stop the container, rebuild the image, and restart. This took at least a minute each time. Completely killed my workflow.
I needed hot reload. I needed my changes to show up immediately like they did without Docker.
Volumes solved this. Instead of copying the code into the image, you mount your local directory as a volume. The container uses your actual source files, so when you edit them, nodemon picks up the changes and restarts automatically.
services:
  api:
    build: ./server
    # assumes nodemon is a devDependency; this is what actually
    # restarts the server when the mounted files change
    command: npx nodemon server.js
    volumes:
      - ./server:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
That second volume line (/app/node_modules) is important. It's an anonymous volume that shields the container's node_modules from the bind mount above it — without it, mounting ./server over /app would replace the container's node_modules with whatever's on my machine. Since I often don't even have node_modules locally (because I'm running everything in Docker), that would break the app.
Now I could edit code and see changes instantly. Finally felt like normal development again.
Environment Variables Were Confusing
I had .env files for local development. But how do you handle those in Docker?
Tried copying the .env file into the image. Bad idea. Now my secrets are baked into the image, which is terrible for security and flexibility.
Docker Compose has an env_file option:
services:
  api:
    build: ./server
    env_file:
      - ./server/.env
This loads environment variables at runtime, not build time. Much better.
For production, I don't use .env files at all. I pass environment variables directly through my CI/CD pipeline or server configuration. Keeps secrets out of version control.
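If you run Compose in production too, it can read variables straight from the host environment instead of an env_file — Compose substitutes ${VAR} references at runtime. A sketch, assuming MONGO_URI has been exported by the CI/CD pipeline or server config:

```yaml
services:
  api:
    build: ./server
    environment:
      # pulled from the shell environment that runs `docker compose up`
      - MONGO_URI=${MONGO_URI}
```

Same idea as env_file, just with the values living in the deployment environment instead of a file on disk.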
What Finally Made It Click
I spent so long thinking of Docker as this black box of magic. Containers, images, layers, orchestration. All these abstract concepts that didn't mean anything to me.
Then someone explained it simply: Docker is just Linux processes with isolation. A container is a process that thinks it's running on its own machine, but it's actually sharing the host's kernel.
Once I understood that, debugging made way more sense. The container isn't a virtual machine. It's not this completely separate thing. It's just a controlled process on your computer.
Also, I started actually reading error messages instead of immediately googling them. Turns out Docker's error messages are pretty helpful most of the time. I was just panicking and skipping over them.
Still Not an Expert
I don't know everything about Docker. I still google basic commands. I've never touched Kubernetes and honestly it still intimidates me.
But my deployments are consistent now. I can spin up the entire app with one command. It works exactly the same on my laptop, our staging server, and production.
That's what I wanted from Docker. It took three frustrating days to get there, but I finally understand what everyone was talking about.
If you're just starting with Docker, expect to be confused. Expect things to break in weird ways. That's normal. You'll figure it out.
Just remember: localhost doesn't mean what you think it means inside a container.