A real-world guide for devs to ditch outdated container habits and build like it’s the 2020s
Introduction
Let’s get this out of the way: Docker is not dying. But if your workflow still looks like a Frankenstein script of `docker run`, scattered `Dockerfile`s, and a `docker-compose.yml` copied from Stack Overflow... you might be.
It’s 2025. We’ve got distroless images, BuildKit, rootless containers, VSCode devcontainers, even AI pair programmers, and yet so many devs are still treating Docker like a toy they just unwrapped at their first bootcamp.
This article isn’t here to roast you (well, maybe just a little). It’s here to help you upgrade not just your toolset, but your mindset. We’ll go through the worst Docker sins developers still commit, show you real-world examples of how the pros do it, and drop some tools and best practices you can start using today to containerize smarter.
Whether you’re a junior trying to look senior, or a senior secretly copy-pasting from outdated blog posts, this one’s for you.
Section 1: Docker has grown up. Have you?
Back in the early days of Docker, just getting a container to run felt like wizardry. You’d `docker run -it ubuntu bash` your way into glory, install Node or Python inside the container like it was your personal playground, and call it a day.
But here’s the thing: Docker has evolved. It’s not just a local dev tool anymore; it’s now a critical part of production pipelines, CI/CD workflows, edge deployments, and even Kubernetes clusters. So if you’re still using Docker like it’s just a fancy replacement for your terminal… you’re using a supercar to go grocery shopping.
Here’s what’s new (and what you’re probably not using yet):
- Docker Compose v2 is now a plugin, fully integrated with the Docker CLI. You no longer need to install it separately or treat it like an afterthought.
- BuildKit is the default backend for building images. It’s faster, supports advanced caching, and can handle secrets, SSH forwarding, and parallel builds (see the sketch below). Still running plain `docker build .` on an old engine without BuildKit? You’re missing out.
- Docker Extensions have opened the door to a plugin ecosystem that gives you GUI-based insights, logs, and metrics. Stuff you had to glue together with shell scripts is now a click away.
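For instance, BuildKit can mount a secret into a single build step without it ever landing in an image layer. A minimal sketch, assuming an npm token; the `npm_token` id and the token file are placeholders:
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# the secret exists only during this RUN step; it is never written to a layer
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
Build it with `docker build --secret id=npm_token,src=./npm_token.txt .`.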
And yet… so many devs are stuck running Docker like they did when Pokémon Go launched.
If you’re not evolving your Docker usage, you’re not just being inefficient; you’re making life harder for your future self, your team, and even your CI/CD budget (bloated images = bloated costs).
Section 2: stop hand-writing Dockerfiles like it’s a novel
Let’s be real: most beginner Dockerfiles look like someone tried to write a short story with `RUN` commands.
You’ve seen it before; maybe you’ve written one. It starts with `FROM node:latest`, then proceeds to install 50 packages, manually copy over files, run a build, install `curl` for no reason, and ends with some `CMD ["npm", "start"]` glued on at the bottom like an afterthought.
What’s wrong with this?
- No caching awareness: one code change and the entire image rebuilds.
- Massive image sizes, because you left the `.git` folder, test files, and `node_modules` in there.
- Debug-only tools in production, like leaving `vim`, `nano`, or `curl` in your container for that “just in case” moment.
- Missing multi-stage builds, meaning your final image includes build tools it never actually uses at runtime.
Here’s how pros write Dockerfiles today
Multi-stage builds are the MVP. You use one stage for compiling, bundling, or building your app, and another lean stage for running it.
Dockerfile:
# Stage 1 - Build
FROM node:20-alpine AS builder
WORKDIR /app
# copy the manifests first so the dependency layer stays cached
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2 - Serve
FROM node:20-alpine
WORKDIR /app
RUN npm install -g serve
# only the built artifacts make it into the final image
COPY --from=builder /app/dist ./dist
CMD ["serve", "dist"]
Boom. Smaller image, faster build, no leftovers, and works like a charm in CI.
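Build and run it the usual way (the image name here is arbitrary; `serve` listens on port 3000 by default):
docker build -t my-app .
docker run --rm -p 3000:3000 my-app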
You don’t need to reinvent the wheel; just stop treating the Dockerfile like a place to “figure things out.” Think of it as your production recipe, and no one wants spaghetti in production.
If you’re still writing Dockerfiles like a diary entry, it’s time to move on.
Section 3: the copy-paste docker-compose.yml curse
Let’s talk about the Great Copy-Paste Epidemic, specifically the `docker-compose.yml` files floating around GitHub, Stack Overflow, and random blog posts from 2017. You know the ones:
version: "3"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
It works… until it doesn’t. Then you spend 4 hours debugging why your container can’t connect to Postgres or why hot-reloading suddenly died on a Thursday.
Common sins in paste-first docker-compose setups:
- Using default bridge networks and wondering why services can’t talk to each other.
- Mounting everything (`.:/app`) and destroying performance (especially on macOS).
- Hardcoding secrets right into the YAML.
- No environment separation: running dev and prod with the same `docker-compose.yml` file like a chaos gremlin.
Smarter ways to use Compose today:
- Use `.env` files properly. Don’t shove all your secrets into the Compose file. Let your `.env` handle it:
POSTGRES_PASSWORD=supersecret
Then in Compose:
environment:
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- Separate dev and prod configs. Use `docker-compose.override.yml` or even different files:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up
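What goes into the prod file is up to you; a minimal sketch (the service name and values here are assumptions):
# docker-compose.prod.yml
services:
  web:
    restart: always
    environment:
      - NODE_ENV=production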
- Use named volumes and networks. Define them instead of relying on implicit behavior:
volumes:
  db-data:
networks:
  backend:
- Understand service healthchecks. Don’t wait 30 seconds for your app to fail silently when Postgres isn’t ready. Use:
depends_on:
  db:
    condition: service_healthy
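For `service_healthy` to mean anything, the `db` service needs a healthcheck of its own. A minimal Postgres sketch (the image tag and user are assumptions):
db:
  image: postgres:16-alpine
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 3s
    retries: 5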
Bottom line: a Compose file is not a script, it’s your dev environment’s blueprint. Stop using random blueprints and expecting a stable house.
Section 4: the real dev magic is in Docker + Make or Taskfiles
Here’s a dirty little secret of productive dev teams: nobody actually types `docker compose up` repeatedly. And if they do, they’re one bad typo away from a broken workflow.
Serious devs script their commands. Not in bash, but in `Makefile` or Taskfile form. Why? Because typing long Docker commands every time is like writing HTML without a framework: technically fine, but painfully inefficient.
You might be doing this:
docker compose -f docker-compose.dev.yml up --build --remove-orphans
Cool. Now try typing that 5 times a day on 3 projects. Enjoy your carpal tunnel.
Instead, do this:
Makefile example:
.PHONY: up down logs

up:
	docker compose up --build

down:
	docker compose down

logs:
	docker compose logs -f
Or even better, use Taskfile, which gives you YAML syntax, cross-platform support, better output, and task dependencies:
version: '3'

tasks:
  dev:
    cmds:
      - docker compose up --build

  stop:
    cmds:
      - docker compose down
Now your team just runs:
task dev
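And because tasks can depend on each other, you can chain setup steps. A small sketch (the task and service names are assumptions):
version: '3'

tasks:
  db-up:
    cmds:
      - docker compose up -d db

  dev:
    deps: [db-up] # bring the database up before the app
    cmds:
      - docker compose up --build web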
Why this matters:
- Your junior devs don’t need to memorize Docker incantations.
- Onboarding becomes a one-command setup.
- Your scripts are versioned, shared, and extendable.
- You can bundle in testing, linting, DB setup, and teardown too.
This isn’t “just Docker best practices”; this is how real engineering teams move fast without breaking stuff.
Section 5: stop shelling into containers to debug
Ah yes, the classic move:
docker exec -it my-app-container bash
Then you’re inside the matrix, manually inspecting logs, tweaking environment variables, maybe even installing `curl` to poke something. For a second, you feel like a hacker. Until your team asks, “Did you just hotfix that inside the running container?”
Yeah… don’t be that dev.
Why this is a bad habit:
- You’re making changes that disappear on container restart.
- It’s untracked, so nobody knows what you just did.
- You might be debugging a problem that only exists because you’re inside the container.
Better debugging techniques for 2025:
1. Use logs like a normal person
docker logs -f my-app-container
Or if you’re using Compose:
docker compose logs -f web
2. Mount your code instead of baking it in
In development, mount your project with a volume so changes reflect live, no rebuild needed.
volumes:
  - .:/app
3. Use healthchecks to catch silent failures
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000"]
  interval: 30s
  timeout: 10s
  retries: 5
4. Use devcontainers (VSCode or JetBrains)
No more debugging through the CLI. Open the container as a full-featured dev environment with autocomplete, breakpoints, and everything.
5. When you must go inside, use `sh` and be quick
docker exec -it my-container sh
But treat it like a read-only visit, not a place to live.
Section 6: image bloat is real, trim it down
Let’s talk about the silent killer of CI/CD pipelines, developer machines, and your team’s sanity: bloated Docker images.
You built a “simple app” and ended up with a 2.6GB image. Why? Because you threw everything but the kitchen sink into it: build tools, test files, `node_modules`, `.git`, your hopes and dreams.
And now, every time someone runs `docker pull`, they can hear their laptop fan whisper: “I hate you.”
Common causes of Docker bloat:
- Using full `ubuntu` or `debian` images when a slim base would do.
- Not using multi-stage builds (see Section 2, you rebel).
- Including unnecessary files via `COPY . .` (yes, your `.env`, `.git`, and `node_modules` are in the image now).
- Forgetting `.dockerignore` even exists.
Your cleanup toolkit:
1. Use lean base images
- Go from `node:20` ➝ `node:20-alpine`.
- Python? Use `python:3.12-slim`.
- Want extreme? Try distroless images from Google (see the sketch below).
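A distroless runtime stage might look like this. A minimal sketch, assuming the build outputs `dist/` with an entrypoint at `dist/index.js`:
# build as usual
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# distroless: no shell, no package manager, just the Node runtime
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["dist/index.js"]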
2. Add a `.dockerignore` file
You wouldn’t commit `node_modules`, so don’t bake it into your image either.
.git
node_modules
tests
.env
Dockerfile
3. Explore with Dive
Want to see what’s taking up space in your image? Run:
dive your-image-name
You’ll get a visual breakdown of each layer and maybe cry a little.
4. Don’t install dev tools in prod images
Use multi-stage builds to separate build dependencies (TypeScript, pip, compilers) from the final runtime image.
Section 7: modern tools to stop doing Docker the hard way
If your Docker workflow still revolves around just `docker build` and `docker run`, it’s like playing Elden Ring with a broken sword: unnecessarily painful and wildly inefficient.
The Docker ecosystem has grown. There are tools that make your life easier, safer, and 10x faster. But most devs aren’t using them… because they’re too busy debugging YAML indentation errors.
Tools that modern devs should be using in 2025:
🔹 BuildKit (enabled by default now)
Faster builds, better caching, and advanced features like secrets mounting and SSH forwarding.
DOCKER_BUILDKIT=1 docker build .
Bonus: You can bake this into your Docker config and never worry again.
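On older engines where BuildKit isn’t the default yet, you can enable it daemon-wide in `/etc/docker/daemon.json`:
{
  "features": { "buildkit": true }
}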
🔹 Podman
A drop-in replacement for Docker that runs rootless by design, works great on Fedora, and doesn’t need a daemon.
podman run -it alpine sh
No Docker daemon, fewer permissions, fewer risks.
🔹 Nerdctl + Lima (macOS & Linux)
Want a Docker-compatible CLI that works without Docker Desktop? Nerdctl plus Lima gives you containers without licensing drama.
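Typical usage looks something like this, assuming Lima’s default VM template (which bundles containerd and nerdctl):
limactl start default
lima nerdctl run -it --rm alpine sh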
🔹 Docker Extensions
Yes, Docker has plugins now: GUI dashboards for logs, resource usage, networking, and even Kubernetes, right inside Docker Desktop.
🔹 VSCode Dev Containers
Containerized dev environments: fully reproducible and shareable.
// .devcontainer/devcontainer.json
{
  "name": "My App",
  "image": "node:20-alpine",
  "mounts": ["source=${localWorkspaceFolder},target=/workspace,type=bind"],
  // editor settings now live under customizations.vscode in the current spec
  "customizations": {
    "vscode": {
      "settings": {
        // note: alpine ships /bin/sh; install bash in the image if you keep this
        "terminal.integrated.defaultProfile.linux": "bash"
      }
    }
  }
}
Open in VSCode ➝ write code inside container ➝ no more “it works on my machine.”

Section 8: what good Docker usage looks like in 2025
Let’s zoom out. We’ve dunked on bad practices, but what does a clean, modern, real-world Docker setup actually look like in 2025?
Imagine you clone a project, run `task dev`, and boom: the app is running, the database is seeded, and everything’s containerized. Zero config hell. No weird `.bashrc` hacks. No Slack messages like “hey, what version of Node are we using again?”
Here’s what that dream setup looks like in action:
1. Clean multi-stage Dockerfiles
- No dev tools in the production image.
- `node_modules` excluded from the build context.
- Build artifacts copied in, nothing else.
2. Lean, minimal images
- `node:20-alpine` or `python:3.12-slim`, not `ubuntu:full-of-regret`.
- Final image under 200MB.
- Verified with Dive.
3. Proper `.dockerignore`
.git
.env
node_modules
tests/
Dockerfile
4. Docker Compose with override support
docker compose -f docker-compose.yml -f docker-compose.override.yml up
- Local dev mounts source code.
- Production build uses prebuilt assets and optimized config.
5. Taskfile/Makefile for dev workflow
task dev # Spin up everything
task test # Run tests inside container
task lint # Lint via containerized linter
6. CI/CD using Docker layer caching
- Uses GitHub Actions or GitLab CI with BuildKit cache.
- Tagged builds like `my-app:1.3.2` instead of `latest`.
- Auto-pushed to registries with semver-based tagging.
7. Docker Scout or Trivy security scans
- Scan images for known CVEs during CI.
- Report and block builds if vulnerabilities are critical.
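In CI, that gate can be a single step with the Trivy GitHub Action; a sketch (pin a real release instead of `master`, and the image ref is a placeholder):
- name: Scan image for CVEs
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-app:1.3.2
    severity: CRITICAL,HIGH
    exit-code: '1' # fail the job when critical/high CVEs are found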
This kind of setup makes onboarding a junior dev take minutes — not hours or days. It also means fewer bugs, faster deploys, and less stress when you inevitably have to ship on a Friday.
Section 9: stop ignoring security and caching
Security and caching: the broccoli and fiber of Docker. Not sexy, but if you ignore them, things get… messy.
Most developers skip over these two like they’re optional — until their image gets compromised or their CI pipeline takes 15 minutes to build a single container. Let’s fix that.
Security: stop living dangerously
The crime: using `latest` tags
FROM node:latest
This works until the image updates silently, breaks your build, and no one knows why.
The fix: pin your versions
Dockerfile:
FROM node:20.10.0-alpine
Be specific. It’s the difference between a stable app and chasing random bugs on Friday night.
Scan your images
Use tools like Docker Scout or Trivy. These check for known CVEs and bad packages. Integrate them in your CI, and they’ll block insecure builds before they reach production.
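Locally, a scan is a one-liner (the image name is a placeholder):
trivy image --severity HIGH,CRITICAL my-app:1.2.3
Or, with Docker’s own tooling: `docker scout cves my-app:1.2.3`.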
Caching: because time is money (and coffee)
The crime: changing Dockerfile order every 5 minutes
Docker builds layers top-down. If you change the early layers frequently (like putting `COPY . .` before `RUN npm install`), you bust the cache and rebuild everything.
The fix: structure your Dockerfile like a pro
Dockerfile:
# copy both package.json and package-lock.json; npm ci needs the lockfile
COPY package*.json ./
RUN npm ci
COPY . .
This way, changes to app code don’t trigger a full rebuild — just the final layer.
Use BuildKit with layer caching
In GitHub Actions:
- name: Build with cache
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: your-registry/app:1.2.3
    cache-from: type=registry,ref=your-registry/app:buildcache
    cache-to: type=registry,ref=your-registry/app:buildcache,mode=max
CI builds go from 5 minutes ➝ 30 seconds. Your coffee gets cold less often.
Security and speed aren’t optional anymore. You’re building real-world apps; treat your containers like they matter.
Section 10: conclusion. Don’t be that dev
Here’s the truth: Docker isn’t hard; lazy Docker is. And outdated workflows aren’t just embarrassing… they’re expensive. Slow builds, bloated images, hard-to-reproduce bugs, mysterious prod issues: all signs of a dev stuck in a container time warp.
You don’t need to become a DevOps wizard overnight, but you do need to stop doing the following:
- Writing Dockerfiles like bash scripts.
- Treating Compose like a magical incantation.
- Shelling into containers like you’re on a rescue mission.
- Shipping 2GB images like it’s nobody’s business.
- Ignoring security and caching because “it works on my machine.”
Modern Docker isn’t about doing more; it’s about doing it smarter.
Use the tools. Embrace the ecosystem. And next time someone asks if your app is “Dockerized,” you won’t just say yes; you’ll be proud of it.
Helpful resources to go even further:
- Docker Dev Best Practices (Official)
- Dive: visualize image bloat
- Trivy: security scanning tool
- Taskfile.dev: a modern Makefile replacement
- VSCode DevContainers: local environments, done right
- BuildKit docs: better, faster Docker builds
Now go forth and containerize responsibly.
And if you learned something, share it with that one teammate still using `docker run` like a caveman.
Also… comment below if you want a follow-up on “DevContainers vs Localhost: The Final Showdown.”