Setting Up CI/CD for MERN: My GitHub Actions Journey
Automated deployments sounded scary until I actually set them up. Here's how I went from manual FTP uploads to proper CI/CD.
For the longest time, I deployed my apps by uploading files through FTP. Yeah, in 2024. I'm not proud of it.
Every single deployment made me nervous. Did I remember to upload that new API route? Did I accidentally put the files in the wrong directory? Did I just break production because I forgot to update an environment variable?
It was stressful, error-prone, and honestly embarrassing when other developers asked about my deployment process.
Then I finally set up CI/CD, and I wish I'd done it years earlier.
Why I Avoided It For So Long
Those YAML configuration files looked terrifying. All the indentation, the nested structures, the syntax that would break if you added one extra space. It felt like something only DevOps engineers with years of experience could understand.
Turns out I was overthinking it. A CI/CD workflow is really just a list of commands that run automatically when something happens in your repository. Push code? Run tests. Merge to main? Deploy to production. That's basically it.
Starting Small: Just Run the Tests
I didn't jump straight into automated deployments. My first workflow was dead simple: run tests whenever someone opens a pull request.
Here's the entire thing:
name: Test
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm test
A dozen lines of YAML. That's it. But now if someone (including me) tries to merge broken code, the tests catch it automatically. No more "oops, forgot to run tests before merging."
The Deployment Part Was Trickier
Getting tests to run was straightforward. Actually deploying to my VPS? That took some figuring out.
I'm not using Vercel or Netlify. I've got a VPS where I run my own Docker containers. So the workflow needs to SSH into my server, pull the latest code, rebuild containers, and restart everything.
The part that made me uncomfortable was giving GitHub's servers SSH access to my production machine. What if someone compromised my repository? What if the keys leaked somehow?
Turns out you can create deploy keys with very limited permissions. They can only pull code and restart specific services, nothing else. And GitHub stores them encrypted. Still felt weird the first time, but it's actually pretty secure.
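The SSH step itself can be a single action in the workflow. Here's a rough sketch, not my exact setup: it assumes secrets named VPS_HOST, VPS_USER, and VPS_SSH_KEY, a hypothetical deploy directory, and uses the community appleboy/ssh-action to run commands on the server.

```yaml
# Sketch only — secret names and paths are placeholders.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: appleboy/ssh-action@v1
      with:
        host: ${{ secrets.VPS_HOST }}
        username: ${{ secrets.VPS_USER }}
        key: ${{ secrets.VPS_SSH_KEY }}
        script: |
          cd /srv/app          # hypothetical deploy directory
          docker compose pull  # fetch the freshly pushed image
          docker compose up -d # restart containers on the new image
```

The nice part of this shape is that the private key only ever lives in the repository's encrypted secrets, never in the workflow file itself.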
I Leaked a Secret (But GitHub Saved Me)
GitHub has this feature where you store sensitive values like API keys and SSH keys as "secrets" in your repository settings. The workflow references them by name, and GitHub never exposes the actual values in logs.
Cool feature. Very useful.
I still managed to mess it up.
I was debugging why my environment variables weren't loading, so I added a step to print out my .env file. Pushed the code, watched the workflow run, and there was my database password. In the public logs. For anyone to see.
Panicked for about 30 seconds before I noticed GitHub had automatically masked it. They detected it was a secret and replaced it with *** in the output. Crisis averted, but I learned to be way more careful about what I print during debugging.
What My Deployment Actually Does
After a bunch of trial and error, here's what happens when I push to the main branch:
- Run all the tests (if these fail, everything stops)
- Build the React frontend with environment variables
- Build a Docker image with both frontend and backend
- Push the image to Docker Hub
- SSH into my VPS
- Pull the new image
- Stop the old containers
- Start new containers with the updated image
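The build-and-push half of that list can be sketched as a workflow job. This is an illustration, not my literal config: the Docker Hub secret names and the image tag are placeholders you'd swap for your own.

```yaml
# Sketch — assumes secrets DOCKERHUB_USERNAME / DOCKERHUB_TOKEN
# and a placeholder image name yourname/yourapp.
build:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    - uses: docker/build-push-action@v5
      with:
        push: true
        tags: yourname/yourapp:latest
```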
The whole process takes about 4 minutes. I used to spend 20-30 minutes doing this manually, double-checking every step, and still making mistakes. Now I just merge the PR and go get coffee.
Caching Made It Way Faster
At first, every workflow run took forever because npm was installing all dependencies from scratch. Hundreds of packages, every single time.
Then I learned about caching. GitHub Actions can save npm's download cache (~/.npm) and restore it on later runs, keyed on your package-lock.json, so npm ci doesn't re-download every package from the registry.
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
This one addition cut my build time in half. Now npm only goes back to the registry when I actually add or update packages.
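If you're already using actions/setup-node, there's an even shorter route that I believe does the same thing: its built-in cache option, which handles the key and path for you.

```yaml
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'   # caches ~/.npm, keyed on package-lock.json
```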
Things That Broke (And How I Fixed Them)
Not everything worked on the first try. Or the second. Or the fifth.
Different Node versions: My tests passed locally but failed in CI. Turned out I was running Node 18 on my machine and the workflow was using Node 16. Added an explicit version to the workflow and the problem disappeared.
Missing MongoDB: My tests expected a local MongoDB instance. Obviously GitHub's runners don't have that. I ended up using mongodb-memory-server for tests so they run in an isolated environment without needing a real database.
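If you'd rather test against a real MongoDB instead of an in-memory one, GitHub Actions service containers are another option. A sketch, assuming your tests read the connection string from a MONGO_URL environment variable (that variable name is my placeholder):

```yaml
# Alternative sketch: spin up a real MongoDB next to the test job.
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mongo:
        image: mongo:7
        ports:
          - 27017:27017
    env:
      MONGO_URL: mongodb://localhost:27017/test  # placeholder env var
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```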
Environment-specific bugs: Code that worked on my Mac would fail on the Ubuntu runner because of path separators or case sensitivity differences. Now I try to write code that doesn't depend on OS-specific behavior.
Flaky tests: Some tests would pass 90% of the time and randomly fail the other 10%. Super frustrating. Turned out they had timing issues or relied on external APIs. Fixed them to be more deterministic.
Each failure was annoying at the time, but I learned something from every one. Now I barely have to touch the workflow. It just runs.
The Real Benefit Isn't Speed
Yeah, automated deployments are faster than doing it manually. But that's not why I love CI/CD.
The real benefit is confidence.
Before, every deployment was a gamble. I'd push code and hope nothing broke. I'd watch server logs for errors, ready to roll back if something went wrong.
Now? If the tests pass and the workflow succeeds, I know the code works. I can merge a PR, close my laptop, and not worry about it. That peace of mind is worth way more than the time saved.
I Still Watch It Run Sometimes
Old habits die hard. Even though I trust the workflow, I sometimes open GitHub Actions and watch the steps execute. See the tests pass, watch the build complete, confirm the deployment succeeded.
I don't have to do this. The workflow will send me a notification if something fails. But there's something satisfying about seeing all those green checkmarks.
Maybe I'll stop eventually. Or maybe I won't. Either way, I'm not going back to FTP uploads.
If you're still deploying manually, just start with a simple test workflow. You don't need to automate everything at once. Get comfortable with YAML, add one step at a time, and before you know it, you'll wonder how you ever deployed any other way.