
Docker for local web development, conclusion: where to go from here


Congratulations!

If you've been following this series from the beginning and made it this far, you deserve a pat on the back. You're clearly committed to putting in the necessary time and effort to understand Docker, and you should be proud – this is no small feat!

I hope you enjoyed going through these tutorials as much as I enjoyed writing them, and that you now clearly see how to use Docker to your advantage, and feel comfortable doing so.

That being said, you might still feel like some stones have been left unturned, so I will try and turn a few of them over in this conclusion, the same way I used the introduction to address some of the concerns you might have had before taking the plunge.

And if that's not enough, comments are always open!

"You said it wouldn't be too slow, but really, it is"

If you feel that way, you're probably running Docker Desktop on macOS or Windows. And even though I happily run a similar setup on macOS myself, I appreciate that performance might still feel suboptimal to you.

While improvement in this regard has been long overdue on both operating systems, Microsoft has recently addressed the issue with the introduction of WSL 2 on Windows, which the community seems to be adopting rapidly. Things are slower on the Apple side, however, with discussions that have been going on for years and no clear solution on the horizon yet. You can experiment with technologies like Mutagen or docker-sync, which seem to yield some results, but I'm personally waiting for a solution that seamlessly integrates with Docker Desktop, without the need for some esoteric configuration.

There are also other ways to improve performance. The main bottleneck on both platforms is file sharing between the host and the containers. To mitigate this, you could for example bundle the HTTP server and the backend together in a single image, using Supervisor as the container's main process to run both Nginx and PHP-FPM. That would reduce the number of mounted folders, which should improve performance a bit.
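As an illustration, here is a minimal sketch of what the Supervisor configuration could look like, assuming both Nginx and PHP-FPM are available in the image (the exact commands may vary depending on your setup):

[supervisord]
# Keep Supervisor in the foreground, as the container's main process
nodaemon=true

[program:php-fpm]
# Run PHP-FPM in the foreground so Supervisor can manage it
command=php-fpm --nodaemonize
autorestart=true

[program:nginx]
# Same for Nginx
command=nginx -g 'daemon off;'
autorestart=true

The image's command would then start Supervisor with that configuration, instead of running Nginx or PHP-FPM directly.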

Another seemingly popular approach is to mix Docker services with native tools. For instance, people like Jose Soto advocate using Laravel Valet on macOS to run your PHP application, and to use Docker containers for all of the other services (such as MySQL and Redis). This is the philosophy behind Takeout, of which Jose is a maintainer. This way, you get near-native performance for the applications you actually write code for, while still enjoying Docker's flexibility for everything else.

I personally do not favour this approach, as it leaves a lot for each developer to set up, whereas what I'm trying to achieve is a single, ready-to-use development environment that works for everyone on the team. But if you operate as a freelancer, or if your company lets you deal with your working environment as you see fit, this could be a good solution.

Then again, and as I've kept repeating throughout this series, Docker is just a tool, and there are many ways in which you can use it. Find one that works for you!

"How do I deploy this?"

While this tutorial series strictly focuses on building a development environment, once you get there, one of the next logical questions is how to deploy the result to a staging or production environment. I won't cover this in depth here (I have limited experience on the matter anyway), but I can give you a few hints to get you started.

What's actually deployed

The first thing to mention is that I never deploy the local setup as is – while some cloud solutions support docker-compose.yml files, I normally don't use one in a production context. What usually happens is the frontend and backend images are built and deployed separately, through their own CI/CD pipelines, and the connections to whatever external services they need are configured via environment variables.

For instance, our application's backend uses a container based on the official MySQL image to provide a database locally, whose connection details are set in the backend's .env file. In production, it could use a managed database such as Amazon RDS instead, whose connection details would be injected as environment variables. From the backend's perspective, all that changes from one environment to the other are the values of the connection parameters; yet, the way the database is actually managed is completely different.
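To illustrate with made-up values, the backend's local .env file could contain something like the following, using the MySQL service's name as the hostname:

DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=demo
DB_PASSWORD=secret

In production, the same variables would be injected into the container's environment by the platform, with DB_HOST pointing at the managed database's endpoint instead – the application code itself remains unchanged.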

Infrastructure as a Service

Another thing to mention is that Infrastructure as a Service (IaaS) platforms will not always allow you to follow Docker's best practices, including the "one Dockerfile = one process" principle. For instance, Azure's App Service doesn't make it easy to manage multi-container applications (you'd basically need one App Service per image, which significantly increases the cost of your infrastructure). For a staging environment we were experimenting with, we didn't feel like paying for three separate App Service instances to run Nginx, the API and a worker, so we ended up bundling them into a single image, using Supervisor to manage the corresponding processes. We achieved this by simply adding a production build stage to the API's Dockerfile.
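As a rough sketch, such a stage could look like this, assuming an Alpine-based image and a supervisord.conf file describing the Nginx, API and worker processes (both the package name and the paths are illustrative):

FROM base as production

# Install Supervisor (apk being Alpine's package manager)
RUN apk add --no-cache supervisor

# Copy the Supervisor configuration describing the processes to run
COPY supervisord.conf /etc/supervisord.conf

# Run Supervisor as the container's main process
CMD ["supervisord", "-c", "/etc/supervisord.conf"]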

Of course, the above isn't really an issue when you use something like Kubernetes, but not all teams are comfortable with it (we were not).

Application files

When deploying an image of your application to staging or production, that image should contain copies of your application files (including its dependencies), as mounting a local folder doesn't really apply in such an environment.

Copying the files to the image is a fairly simple instruction:

COPY . .

The line above essentially means "copy the content of the host's current directory (where the Dockerfile is located) to the image's working directory".

A small caveat to this is that we don't need all of the files to be copied over – some of them are not essential to the production environment (e.g. READMEs, or even the Dockerfile itself), if not downright dangerous (e.g. Git-related files). How do we deal with this?

Much like Git's .gitignore files, Docker has .dockerignore files that allow you to specify what to exclude from the build context when performing an ADD or COPY operation. I won't go into too much detail here, but this article should give you all the info you need.
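As a starting point, a .dockerignore file for our backend could look something like this (to be adjusted to your own project's content):

# Version control
.git
.gitignore

# Local environment values and documentation
.env
README.md

# Docker-related files
Dockerfile
docker-compose.yml

# Dependencies (installed during the image build instead)
vendor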

Application dependencies

As for the application's dependencies, using our backend application as an example again, you would typically add a new stage to the Dockerfile to install the Composer packages; the production stage would then copy the resulting files over, without keeping Composer itself (once the dependencies are installed, your production environment doesn't need it anymore).

If that intermediate stage were named build, and assuming there's also a base stage featuring PHP-FPM and the necessary PHP extensions only, it would look like the following:

FROM base as build

# Copy the application files
COPY . .

# Install the application's dependencies and generate an optimised autoloader
# (Composer won't be available in the production stage to generate it later)
RUN composer install --no-scripts --optimize-autoloader --prefer-dist --no-progress --no-interaction --no-dev --no-suggest

FROM base as production

# Copy the files from the build stage to the working directory
# (assuming the base stage sets the working directory to /var/www/backend)
COPY --from=build /var/www/backend .
The resulting image is the one you would push to your container registry (e.g. Docker Hub), which in turn would be pulled by whatever IaaS platform you're using.
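For reference, building that final stage and pushing the resulting image could look something like this (the image name and registry are made up):

# Build the image up to the production stage only
docker build --target production -t registry.example.com/backend:1.0.0 .

# Push the image to the container registry
docker push registry.example.com/backend:1.0.0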

Recent partnerships

Finally, Docker has recently announced partnerships with both Azure and AWS to simplify deploying Docker applications from desktop to the cloud using Docker Compose.

The corresponding features are currently in preview, but definitely look promising.

"How about Kubernetes?"

Kubernetes is a topic on its own, but as I mentioned in the introduction, it has now established itself as the industry standard for container orchestration, which makes it increasingly unavoidable when talking about Docker. It is supported by pretty much all of the major IaaS providers, which is a nice hedge against vendor lock-in and a welcome consolidation of the Docker ecosystem overall.

That made me wonder about running a Kubernetes setup locally that could be deployed as is to staging and production environments, or at least one that would allow me to use the same orchestration technology across environments. I haven't come up with anything concrete yet, but tools like Minikube, Skaffold or K3s were brought to my attention, and initiatives like the Compose Specification also seem to be aiming at bridging that gap.

Potentially also related to this is Docker App, a new tool implementing the Cloud Native Application Bundle (CNAB) standard, which aims to facilitate the bundling, installation and management of container-native applications (with Kubernetes support). I'm not entirely sure what to make of it just yet, but I've got a feeling it might be something interesting to follow in the near future. The project's GitHub repository contains a few examples if you wish to have a gander.

It is not yet clear which standard will come out on top (I would expect some convergence at some point), but once again it looks like Docker Compose is set to remain at the center of it all, which I certainly see as a good thing.

It was all about the journey

By now it should be clear that Docker is a technology that can be approached from a lot of different angles, which is arguably the biggest reason why it feels so intimidating to many. Not knowing where to start is the best way not to start at all – that is why the main purpose of this series was to show you a way in, more so than to build a local development environment.

While this series is a reflection of my current knowledge of Docker, my own journey is far from finished. There is a lot more to explore as I've hinted before, and I know there's a good chance that the way I use Docker will have evolved significantly just a couple of years from now.

I've already updated these articles multiple times since I started writing them, based on recent improvements, readers' comments or Twitter conversations, and I wouldn't be surprised if a more serious overhaul is needed in the near future, maybe to use one of the aforementioned tools, or one that is yet to make itself known.

Don't let yourself be overwhelmed by the pace of Docker's evolution, though; take your time, and don't forget to regularly tune out the noise to put into practice what you already know. There's no need to always pursue the latest fad – consolidate your current knowledge and pick up new tools only as you need them. As I said in the introduction, I initially looked into Docker because I started to feel somewhat hindered by Homestead.

Special thanks

It's a bit awkward to add this section as if I had just completed a book, but it's certainly how writing this series made me feel at times. I am publishing this conclusion at the beginning of July, and I started working on the introduction back in February. A lot of my free time has gone into this series in the meantime!

But I haven't been doing this completely on my own – I want to send my thanks and love to my girlfriend Anna who patiently reviewed every single one of those lengthy articles (including this one), helping me sound a lot less like a non-native speaker ❤️


This post ends the Docker for web development series for now, but I'll keep writing on the subject and about web development in general in the future. You can subscribe to email alerts below or follow me on Twitter to be informed of new publications.

