Docker for local web development, conclusion: where to go from here

Last updated: 2021-07-11 :: Published: 2020-07-06
In this series
- Introduction: why should you care?
- Part 1: a basic LEMP stack
- Part 2: put your images on a diet
- Part 3: a three-tier architecture with frameworks
- Part 4: smoothing things out with Bash
- Part 5: HTTPS all the things
- Part 6: expose a local container to the Internet
- Part 7: using a multi-stage build to introduce a worker
- Part 8: scheduled tasks
- Conclusion: where to go from here ⬅️ you are here
Subscribe to email alerts at the end of this article or follow me on Twitter to be informed of new publications.
If you've been following this series from the beginning and made it this far, you deserve a pat on the back. You're clearly committed to putting in the necessary time and effort to understand Docker, and you should be proud – this is no small feat!
I hope you enjoyed going through these tutorials as much as I enjoyed writing them, and that you now clearly see how to use Docker to your advantage, and feel comfortable doing so.
That being said, you might still feel like some stones are left unturned, so I will try and flip some of them in this conclusion, the same way I used the introduction to try and address some of the concerns you might have had before taking the plunge.
And if that's not enough, comments are always open!
"You said it wouldn't be too slow, but really, it is"
If you feel that way, you're probably running Docker Desktop on macOS. And even though I happily run a similar setup myself, I appreciate that performance might still feel suboptimal to you, especially around script execution (e.g. Composer commands).
Improvement in this regard is long overdue, as evidenced by discussions that have been going on for years now, with no clear solution on the horizon yet. You can experiment with technologies like Mutagen or docker-sync, which seem to yield some results, but I'm personally waiting for a solution that seamlessly integrates with Docker Desktop without the need for some esoteric configuration.
Another seemingly popular approach is to mix Docker services with native tools. For instance, some people advocate using Laravel Valet on macOS to run your PHP application, and to use Docker containers for all of the other services (such as MySQL and Redis). This is the philosophy behind projects like Takeout, for instance. This way, you get near-native performance for the applications you actually write code for, while still enjoying Docker's flexibility for everything else.
I personally do not favour this approach, however, as it means a lot is left to the developer to set up, while what I'm trying to achieve is a unique, ready-to-use development environment that works for everyone in the team. But if you operate as a freelancer or if your company lets you deal with your working environment as you please, this could be a good solution.
Then again, and as I've kept repeating throughout this series, Docker is just a tool. There are many ways in which you can use it – find one that works for you!
"How do I deploy this?"
While this tutorial series strictly focuses on building a development environment, once you get there, one of the next logical questions is how to deploy the result to a staging or production environment. I won't cover this in depth here, but I can give you a few hints to get you started.
What's actually deployed
The first thing to mention is that I never deploy the local setup as is. While some cloud platforms support docker-compose.yml files out of the box, I normally don't use one in a production context.
What usually happens is that the frontend and backend images are built and deployed separately through their own CI/CD pipelines, and the connections to whatever external services they need are configured via environment variables.
For instance, in this series the application's backend uses a container based on the official MySQL image to provide a database locally, whose connection details are set in the backend's .env file. In production, it could use a managed database such as Amazon RDS instead, whose connection details would be injected as environment variables.
From the backend's perspective, all that changes from one environment to the other are the values of the connection parameters; yet, the way the database is actually managed is completely different.
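To make that difference concrete, here is a sketch using Laravel-style variable names (the keys follow Laravel's default .env conventions, but the specific values and hostnames are illustrative, not taken from the series):

```ini
# Local development (.env file): the host is simply the name
# of the MySQL service declared in docker-compose.yml
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=root
DB_PASSWORD=root

# Production (injected as environment variables by the platform):
# same keys, but pointing at a managed database such as Amazon RDS
DB_HOST=demo.abc123.eu-west-1.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=app
DB_PASSWORD=<retrieved-from-a-secret-store>
```

The application reads the same variables in both cases, and neither knows nor cares where the database actually lives.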
Infrastructure as a Service
Another thing to mention is that Infrastructure as a Service (IaaS) platforms will not always allow you to follow Docker's best practices, including the "one Dockerfile = one process" principle.
For instance, Azure's App Service doesn't make it easy to manage multi-container applications (you'd basically need one App Service per image, which significantly increases the cost of your infrastructure). For a staging environment we were experimenting with, we didn't feel like paying for three different App Service instances to manage Nginx, the API and a worker separately, so we ended up bundling them up into a single image, using Supervisor to manage the corresponding processes. We achieved this by simply adding a production build stage to the API's Dockerfile.
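For reference, the Supervisor side of such a bundled image could look something like the sketch below (the program names, commands and paths are illustrative, not our actual configuration):

```ini
; supervisord.conf – one image, several supervised processes
[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g 'daemon off;'
autorestart=true

[program:php-fpm]
command=php-fpm --nodaemonize
autorestart=true

[program:worker]
command=php /var/www/backend/artisan queue:work
autorestart=true
```

Supervisor then becomes the container's single entry point, starting the three processes and restarting them if they die.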
Of course the above isn't really an issue when you use something like Kubernetes, but not all teams are comfortable with it.
When deploying an image of your application to staging or production, that image should contain copies of your application files (including its dependencies), as mounting a local folder doesn't really apply in such an environment.
Copying the files to the image is a fairly simple instruction:

```dockerfile
COPY . .
```

The line above essentially means "copy the content of the host's current directory (where the Dockerfile is located) to the image's working directory".
A small caveat to this is that we don't need all of the files to be copied over – some of them are not essential to the production environment (e.g. READMEs, or even the Dockerfile itself), if not downright dangerous (e.g. Git-related files). How do we deal with this?
Much like Git's .gitignore files, Docker has .dockerignore files that allow you to specify what to exclude from the build context when performing a COPY operation. I won't go into too much detail here, but this article should give you all the info you need.
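As a starting point, a .dockerignore file for a project like ours could look like this (the entries below are typical examples to adjust to your own project):

```
.git
.gitignore
.dockerignore
Dockerfile
docker-compose.yml
README.md
.env
vendor
node_modules
```

Excluding vendor is safe here because the dependencies are installed in a dedicated build stage rather than copied from the host.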
As for the application's dependencies, using our backend application as an example again, you would typically add a new stage to the Dockerfile to install the Composer packages, whose files the production stage would copy over, without keeping Composer itself (once the dependencies are installed, your production environment doesn't need it anymore).
If that intermediary stage were named build and assuming there's also a base stage featuring PHP-FPM and the necessary PHP extensions only, it would look like the following:
```dockerfile
FROM base as build

# Copy application files
COPY . .

# Install application dependencies
RUN composer install --no-scripts --no-autoloader --verbose --prefer-dist --no-progress --no-interaction --no-dev --no-suggest

FROM base as production

# Copy files from the build stage to the working directory
COPY --from=build . .
```
The resulting image is the one you would push to your container registry (e.g. Docker Hub), which in turn would be pulled by whatever IaaS platform you're using.
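In practice, producing and publishing that image can boil down to a couple of commands (the registry address and tag below are placeholders):

```shell
# Build the production stage only, and tag the resulting image
docker build --target production -t registry.example.com/backend:1.0.0 .

# Push it to the registry, from which the IaaS platform will pull it
docker push registry.example.com/backend:1.0.0
```

The `--target` flag tells Docker to stop at the named stage, so none of the intermediary build stages end up in the published image.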
"How about Kubernetes?"
Kubernetes is a topic on its own, but as I mentioned in the introduction, it has now established itself as the industry standard for container orchestration, which makes it more and more unavoidable when talking about Docker. It is supported by pretty much all of the major IaaS providers, which is a nice hedge against vendor lock-in and a welcome consolidation of the Docker ecosystem overall.
That made me wonder about running a Kubernetes setup locally that could be deployed as is to staging and production environments, or that would at least allow me to use the same orchestration technology across environments. I haven't come up with anything concrete yet, but tools like Minikube, Skaffold and K3s were brought to my attention, and initiatives like the Compose Specification also seem to be aiming at bridging that gap.
Potentially also related to this is Docker app, a tool that implements the Cloud Native Application Bundle (CNAB) standard, aiming at facilitating the bundling, installation and management of container-native applications (with Kubernetes support). I'm not entirely sure what to make of it just yet, but I've got the feeling it might be something interesting to follow in the near future. Their GitHub repository contains a few examples if you wish to have a gander.
It is not yet clear which standard will come out on top (I would expect some convergence at some point), but once again it looks like Docker Compose is to remain at the center of it all, which I certainly identify as a good thing.
It was all about the journey
By now it should be clear that Docker is a technology that can be approached from a lot of different angles, which is probably why it feels so intimidating to many. Not knowing where to start is the best way not to start at all, and my aim here was merely to show you a way in.
While this series is a reflection of my current knowledge of Docker, my own journey is far from finished. There is a lot more to explore as I've hinted before, and I fully expect the way I use Docker to evolve in the future.
Don't let yourself be overwhelmed by the pace of Docker's evolution, though. Take your time, and don't forget to regularly tune out the noise to put into practice what you already know.
There's no need to always pursue the latest fad – consolidate your current knowledge and pick up new tools only as you need them. As I said in the introduction, the reason I looked into Docker in the first place is because I started to feel hindered by Homestead.
It's a bit awkward to add this section as if I had just completed a book, but it's certainly how I felt writing this series at times. Almost six months have gone by since I started working on the introduction – a lot of my free time went into this!
But I haven't been doing this completely on my own – I want to send my thanks and love to my girlfriend Anna who patiently reviewed every single one of those lengthy articles (including this one), helping me sound a lot less like a non-native speaker ❤️
This post ends the Docker for web development series for now, but I'll keep writing on the subject and about web development in general in the future.
You can subscribe to email alerts below or follow me on Twitter to be informed of new publications.