
Docker for local web development, part 3: a three-tier architecture with frameworks

LEMP, Laravel and Vue.js on Docker


Foreword

In all honesty, what we've covered so far is pretty standard. Articles about LEMP stacks on Docker are legion, and while I hope to add some value through a beginner-friendly approach and a certain level of detail, there was hardly anything new (after all, I was already writing about this back in 2015).

I believe this is about to change with today's article. There are many ways to manage a multitiered project with Docker, and while the approach I am about to describe certainly isn't the only one, I also think this is a subject that doesn't get much coverage at all.

In that sense, today's article is probably where things will start to get concrete for some of you. That is not to say the previous ones are negligible – they constitute a necessary introduction contributing to making this series comprehensive – but this is where the theory meets the practical complexity of modern web applications.

The assumed starting point of this tutorial is where we left things at the end of the previous part, corresponding to the repository's part-2 branch.

If you prefer, you can also directly check out the part-3 branch, which is the final result of today's article.

Again, this is by no means the one and only approach, just one that has been successful for me and the companies I set it up for.

A three-tier architecture?

After setting up a LEMP stack on Docker and shrinking down the size of the images, we are about to complement our MySQL database with a frontend application based on Vue.js and a backend application based on Laravel, in order to form what we call a three-tier architecture.

Behind this somewhat intimidating term is a popular way to structure an application, which consists of separating the presentation layer (a.k.a. the frontend), the application layer (a.k.a. the backend) and the persistence layer (a.k.a. the database), ensuring each part is independently maintainable, deployable, scalable, and easily replaceable if need be. Each of these layers represents one tier of the three-tier architecture. And that's it!
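To make this more concrete, here is a rough sketch of how those tiers will map to our Docker services by the end of this article, with Nginx sitting in front and routing HTTP traffic to the right container based on the domain name:

Presentation tier → the frontend service (Vue.js)
Application tier  → the backend service (Laravel, served by PHP-FPM)
Persistence tier  → the mysql service (MySQL 8)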

In such a setup, it is common for the backend and frontend applications to each have their own repository. For simplicity's sake, however, we'll stick to a single repository in this tutorial.

The backend application

Before anything else, let's get rid of the previous containers and volumes (not the images, as we still need them) by running the following command from the project's root directory:

$ docker-compose down -v

Remember that down destroys the containers, and -v deletes the associated volumes.
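If you want to double-check that the volumes are indeed gone, listing the remaining ones should no longer show the project's (Docker Compose prefixes their names with the project directory's name, e.g. docker-tutorial_mysqldata):

$ docker volume ls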

Let's also get rid of the previous PHP-related files, to make room for the new backend application. Delete the .docker/php folder, the .docker/nginx/conf.d/php.conf file and the src/index.php file. Your file and directory structure should now look similar to this:

docker-tutorial/
├── .docker/
│   ├── mysql/
│   │   └── my.cnf
│   └── nginx/
│       └── conf.d/
│           └── phpmyadmin.conf
├── src/
├── .env
├── .env.example
├── .gitignore
└── docker-compose.yml

Replace the content of docker-compose.yml with the following (the changes are detailed right after):

version: '3.7'

# Services
services:

  # Nginx Service
  nginx:
    image: nginx:1.17-alpine
    ports:
      - 80:80
    volumes:
      - ./src/backend:/var/www/backend:ro,delegated
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d:ro
      - phpmyadmindata:/var/www/phpmyadmin:ro,delegated
    depends_on:
      - backend
      - phpmyadmin

  # Backend Service
  backend:
    build: ./src/backend
    working_dir: /var/www/backend
    volumes:
      - ./src/backend:/var/www/backend:delegated
    depends_on:
      - mysql

  # MySQL Service
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf:ro
      - mysqldata:/var/lib/mysql:delegated

  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5-fpm-alpine
    environment:
      PMA_HOST: mysql
    volumes:
      - phpmyadmindata:/var/www/html:delegated
    depends_on:
      - mysql

# Volumes
volumes:

  mysqldata:

  phpmyadmindata:

The biggest update is the removal of the PHP service in favour of the backend service, although they are very similar. The new service looks for a Dockerfile located in the backend application's directory (src/backend), which is mounted as a volume on the container. The only real difference is the appearance of :delegated at the end of the volume declaration.

This option improves performance on macOS by caching the content of the folder for faster read access, which becomes necessary as the number of files grows (typically the case when a framework is involved). Three options are available: consistent, cached and delegated. I find the official documentation somewhat confusing on this, so here is another attempt at explaining them:

  • consistent is the default option: it ensures perfect synchronisation between the host (your local machine) and the container at all times, and it is also the option that performs poorly on macOS;
  • cached means the host's view of the mounted folder is authoritative: changes made on the host might not be reflected immediately in the container;
  • delegated is the opposite: the container's view is authoritative, and changes occurring in the container might not be reflected immediately on the host.

In a development context, it is more important that our local changes are passed on to the container as soon as possible, while changes made by the container itself (log files, framework caches) can afford to reach the host with a slight delay. That is precisely the trade-off delegated makes, and as it is also the option permitting the most performance optimisations, it is generally the most appropriate one here.

This superficial explanation is enough for the scope of this article, but more details are available on the Docker blog if you are interested.

Note that on systems other than macOS, these options will simply be ignored, and consistent will be used instead.
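Also note that these flags are set per mount, so nothing prevents you from mixing them. For instance, if strict synchronisation mattered more to you than performance for the backend's code, that single line of docker-compose.yml could hypothetically become:

    volumes:
      - ./src/backend:/var/www/backend:consistent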

Pump up that RAM!

If the response time still feels slow, a quick and easy trick for Docker Desktop is to increase the amount of RAM it is allowed to use. Open the preferences and adjust the Memory slider under Resources. The default value is 2 GB; I doubled that and it made a world of difference (if you've got 8 GB or more, it will most likely not impact your machine's performance in any visible way).

If you wish to explore further possibilities on macOS, more tips are laid out in this article, or you could look into something like docker-sync, or the more recent Mutagen-based file synchronization.

If you're on Windows and not using WSL 2 yet, looking into it might be a good idea.

The rest of the changes brought to docker-compose.yml mostly consist of the delegated option being added where relevant.

As the backend application will be built with Laravel, let's create an Nginx server configuration based on the one provided in the official documentation. Create a new backend.conf file in .docker/nginx/conf.d:

server {
    listen      80;
    listen      [::]:80;
    server_name backend.demo.test;
    root        /var/www/backend/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass  backend:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include       fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

Mind the values for server_name and fastcgi_pass (now pointing to port 9000 of the backend container).

You also need to update your local hosts file with the new domain names (have a quick look here if you've forgotten how to do that):

127.0.0.1 backend.demo.test frontend.demo.test phpmyadmin.test

Note that we've also added the frontend's domain to save us a trip later.
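Before moving on, you can quickly confirm that the new domains resolve to your local machine, as a simple sanity check (on Windows, replace -c with -n):

$ ping -c 1 backend.demo.test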

Create the src/backend directory and add a Dockerfile with the following content to it:

FROM php:7.4-fpm-alpine

Your file structure should now look like this:

docker-tutorial/
├── .docker/
│   ├── mysql/
│   │   └── my.cnf
│   └── nginx/
│       └── conf.d/
│           ├── backend.conf
│           └── phpmyadmin.conf
├── src/
│   └── backend/
│       └── Dockerfile
├── .env
├── .env.example
├── .gitignore
└── docker-compose.yml

Laravel requires a few PHP extensions to function properly, so we need to ensure those are installed. The Alpine version of the PHP image comes with a number of pre-installed extensions which we can list by running the following command (from the project's root, as usual):

$ docker-compose run --rm backend php -m

Now if you remember, in the first part of this series we used exec to run Bash on a container, whereas this time we are using run to execute the command we need. What's the difference?

exec simply allows us to execute a command on an already running container, whereas run does so on a new container which is immediately stopped after the command is over. It does not delete the container by default, however; we need to specify --rm after run for it to happen.
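To make the difference tangible, both of the commands below will land you in a shell on the backend service: the first one attaches to the already running container, while the second one spins up a disposable one for the occasion:

$ docker-compose exec backend sh
$ docker-compose run --rm backend sh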

The command essentially runs php -m on the backend container, and gives the following result:

[PHP Modules]
Core
ctype
curl
date
dom
fileinfo
filter
ftp
hash
iconv
json
libxml
mbstring
mysqlnd
openssl
pcre
PDO
pdo_sqlite
Phar
posix
readline
Reflection
session
SimpleXML
sodium
SPL
sqlite3
standard
tokenizer
xml
xmlreader
xmlwriter
zlib

[Zend Modules]

That is quite a lot of extensions, which might come as a surprise after reading the previous part praising Alpine images for featuring the bare minimum by default. The reason is also hinted at in the previous article: since it is not always simple to install things on an Alpine distribution, the PHP image's maintainers chose to make their users' lives easier by preinstalling a bunch of extensions.

The final result is around 80 MB, which is still very small.
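If you'd rather check for a specific extension than eyeball the whole list, you can filter the output; for instance, pdo_mysql, which Laravel needs to talk to MySQL, is not part of it yet:

$ docker-compose run --rm backend php -m | grep pdo_mysql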

Now that we know which extensions are missing, we can complete the Dockerfile to install them, along with Composer, which is needed to create the Laravel project:

FROM php:7.4-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

Updating the Dockerfile means we need to rebuild the image:

$ docker-compose build backend
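You can optionally confirm that Composer made it onto the new image by printing its version:

$ docker-compose run --rm backend composer --version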

Once this is done, run the following command:

$ docker-compose run --rm backend composer create-project --prefer-dist laravel/laravel tmp "8.*"

This will use the version of Composer installed on the backend container (no need to install Composer locally!) to create a new Laravel 8 project in the container's /var/www/backend/tmp folder.

As per docker-compose.yml, the container's working directory is /var/www/backend, onto which the local folder src/backend was mounted – if you look into that directory now on your local machine, you will find a new tmp folder containing the files of a fresh Laravel application. But why did we not create the project in backend directly?

Behind the scenes, composer create-project requires the target directory to be empty, and backend isn't: it already contains the Dockerfile, which is necessary to run the command in the first place. We essentially created the tmp folder as a temporary home for our project, and we now need to move the files back to their final location:

$ docker-compose run --rm backend sh -c "mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp"

This will run the content between the double quotes on the container, sh -c basically being a trick allowing us to run more than a single command at once (if we ran docker-compose run --rm backend mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp instead, only the first mv instruction would be executed on the container, and the rest would be run on the local machine).

Shouldn't Composer be in its own container?

A popular way to deal with package managers is to isolate them into their own containers, the main reason being that they are external tools mostly used during development, and that they've got no business shipping with the application itself (which is all correct).

I actually used this approach for a while, but it comes with downsides that are often overlooked: as Composer allows developers to specify a package's requirements (necessary PHP extensions, PHP's minimum version, etc.), by default it will check the system on which it installs the application's dependencies to make sure it meets those criteria. In practice, this means the configuration of the container hosting Composer must be as close as possible to the application's, which often means doubling the work for the maintainer. As a result, some people choose to run Composer with the --ignore-platform-reqs flag instead, ensuring dependencies will always install regardless of the system's configuration.

This is a dangerous thing to do: while most of the time dependency-related errors will be spotted during development, in some instances the problem could go unnoticed until someone stumbles upon it, either on staging or even in production (this is especially true if your application doesn't have full test coverage). Moreover, multi-stage builds are an effective way to separate the package manager from the application in a single Dockerfile, but that's a topic I will broach later in this series. Bear with!

Laravel created a .env file for you during the installation; let's replace its content with the following (you will find the file under src/backend):

APP_NAME=demo
APP_ENV=local
APP_KEY=base64:BcvoJ6dNU/I32Hg8M8IUc4M5UhGiqPKoZQFR804cEq8=
APP_DEBUG=true
APP_URL=http://backend.demo.test

LOG_CHANNEL=single

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=root
DB_PASSWORD=root

BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file

Not much to see here, apart from the database configuration (mind the value of DB_HOST, which matches the name of the MySQL service in docker-compose.yml) and some standard application settings.

Let's try out our new setup:

$ docker-compose up -d

Once it is up, visit backend.demo.test; if all went well, you should see Laravel's home page:

Laravel home

Let's also verify our database settings by running Laravel's default migrations:

$ docker-compose exec backend php artisan migrate

You can log into phpmyadmin.test (with the root / root credentials) to confirm the presence of the demo database and its newly created tables (users, password_resets and failed_jobs).
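If you prefer the command line to phpMyAdmin, the same check can be performed through the MySQL container directly:

$ docker-compose exec mysql mysql -uroot -proot -e "SHOW TABLES" demo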

There is one last thing we need to do before moving on to the frontend application: since our aim is to have it interact with the backend, let's add the following endpoint to routes/api.php:

Route::get('/hello-there', function () {
    return 'General Kenobi';
});

Try it out by accessing backend.demo.test/api/hello-there, which should display "General Kenobi".
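If you'd rather stay in the terminal, a quick curl will do the job just as well:

$ curl http://backend.demo.test/api/hello-there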

Our API is ready!

We are done for this section but, if you wish to experiment further, while the backend's container is up you can run Artisan and Composer commands like this:

$ docker-compose exec backend php artisan
$ docker-compose exec backend composer

And if it's not running:

$ docker-compose run --rm backend php artisan
$ docker-compose run --rm backend composer

The frontend application

The third tier of our architecture is the frontend application, for which we will use the ever more popular Vue.js.

The steps to set it up are actually quite similar to the backend's; first, let's add the corresponding service to docker-compose.yml, right after the backend one:

# Frontend Service
frontend:
  build: ./src/frontend
  working_dir: /var/www/frontend
  volumes:
    - ./src/frontend:/var/www/frontend:delegated
  depends_on:
    - backend

Nothing that we haven't seen before here. Then, create a new frontend folder under src and add the following Dockerfile to it:

FROM node:13.10-alpine

We simply pull the Alpine version of Node.js' official image for now, which ships with both Yarn and npm (JavaScript package managers, similar to Composer for PHP). I will be using Yarn, as I am told this is what the cool kids use nowadays.

Let's build the image:

$ docker-compose build frontend

Once the image is ready, create a fresh Vue.js project with the following command:

$ docker-compose run --rm frontend sh -c "yarn global add @vue/cli && vue create tmp --default --force"

By using the same sh -c trick as earlier in order to run multiple commands at once on the container, we install Vue CLI (yarn global add @vue/cli) and use it straight away to create a new Vue.js project with some default presets, in the tmp directory (vue create tmp --default --force). This directory is located under /var/www/frontend, which is the container's working directory as per docker-compose.yml.

We don't install Vue CLI via the Dockerfile here, because the only use we have for it is to create the project. Once that's done, there's no need to keep it around.

Just like the backend, let's move the files out of tmp and back to the parent directory:

$ docker-compose run --rm frontend sh -c "mv -n tmp/.* ./ && mv tmp/* ./ && rm -Rf tmp"

If all went well, you will find the application's files under src/frontend on your local machine.

We've already added frontend.demo.test to the hosts file earlier, so let's move on to creating the Nginx server configuration. Add a new frontend.conf file to .docker/nginx/conf.d, with the following content (most of the location block comes from this article):

server {
    listen      80;
    listen      [::]:80;
    server_name frontend.demo.test;

    location / {
        proxy_pass         http://frontend:8080;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
        proxy_set_header   Host $host;
    }
}

This simple configuration proxies the traffic received on the domain's port 80 to the container's port 8080, which is the port Vue.js' development server listens on.

Let's also add a new vue.config.js file in src/frontend:

module.exports = {
    devServer: {
        disableHostCheck: true,
        sockHost: 'frontend.demo.test',
        watchOptions: {
            ignored: /node_modules/,
            aggregateTimeout: 300,
            poll: 1000,
        }
    },
};

This will ensure the hot-reload feature is functional. I won't go into details about each option here, as the point is not so much about configuring Vue.js as seeing how to articulate a frontend and a backend application with Docker, regardless of the chosen frameworks. You are welcome to look them up though!

Let's complete our Dockerfile by adding the command that will start the development server:

FROM node:13.10-alpine

# Start application
CMD ["yarn", "serve"]

Rebuild the image:

$ docker-compose build frontend

And bring the project up again so Docker picks up the image changes (the restart command won't do that, as it doesn't recreate the containers):

$ docker-compose up -d

Once everything is up and running, access frontend.demo.test and you should see Vue.js' welcome page:

Vue.js home

Give it a minute if it doesn't show up immediately, as the server takes some time to start. If need be, you can monitor it with the following command:

$ docker-compose logs -f frontend

Open src/frontend/src/components/HelloWorld.vue and update some content (one of the <h3> tags, for example). Go back to your browser and you should see the change happen in real time: this is hot-reload doing its magic!

To make sure our setup is complete, all we've got left to do is to query the API endpoint we defined earlier in the backend, with the help of Axios. Let's install the package with the following command:

$ docker-compose exec frontend yarn add axios

While Yarn is doing its thing, let's replace the content of src/frontend/src/App.vue with this one:

<template>
  <div id="app">
    <HelloThere :msg="msg"/>
  </div>
</template>

<script>
import axios      from 'axios'
import HelloThere from './components/HelloThere.vue'

export default {
  name: 'App',
  components: {
    HelloThere
  },
  data () {
    return {
      msg: null
    }
  },
  mounted () {
    axios
      .get('http://backend.demo.test/api/hello-there')
      .then(response => (this.msg = response.data))
  }
}
</script>

All we are doing here is hitting the hello-there endpoint we created earlier and assigning its response to the msg property, which is passed on to the HelloThere component. Once again I won't linger too much on this, as this is not a Vue.js tutorial – I merely use it as an example.
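You may have noticed this is a cross-origin request, from frontend.demo.test to backend.demo.test; it works out of the box because Laravel 8 ships with a config/cors.php file that allows any origin on api/* routes by default. If you're curious, you can inspect the corresponding headers by simulating the browser's request with curl:

$ curl -i -H "Origin: http://frontend.demo.test" http://backend.demo.test/api/hello-there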

Delete src/frontend/src/components/HelloWorld.vue, and create a new HelloThere.vue file in its place:

<template>
  <div>
    <img src="https://tech.osteel.me/images/2020/03/04/hello.gif" alt="Hello there" class="center">
    <p>{{ msg }}</p>
  </div>
</template>

<script>
export default {
  name: 'HelloThere',
  props: {
      msg: String
  }
}
</script>

<style>
p {
  font-family: "Arial", sans-serif;
  font-size: 90px;
  text-align: center;
  font-weight: bold;
}

.center {
  display: block;
  margin-left: auto;
  margin-right: auto;
  width: 50%;
}
</style>

The component contains a little bit of HTML and CSS code, and displays the value of msg in a <p> tag.

Save the file and go back to your browser: the content of our API endpoint's response should now display at the bottom of the page.

Hello There

If you want to experiment further, while the frontend's container is up you can run Yarn commands like this:

$ docker-compose exec frontend yarn

And if it's not running:

$ docker-compose run --rm frontend yarn

Using separate repositories

As I mentioned at the beginning of this article, in a real-life situation the frontend and backend applications are likely to be in their own repositories, and the Docker environment in a third one. How to articulate the three of them?

The way I do it is by adding the src folder to the .gitignore file at the root, and I check out both the frontend and backend applications in it, in separate directories. And that's about it! Since src is git-ignored, you can safely check out other codebases in it, without conflicts. Theoretically, you could also use Git submodules to achieve this, but in practice it adds little to no value, especially when applying what we'll cover in the next part.
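As a hypothetical illustration (the repository URLs are made up), the whole setup boils down to a git-ignored src entry and a couple of clones:

$ echo "src/" >> .gitignore
$ git clone git@github.com:acme/backend-app.git src/backend
$ git clone git@github.com:acme/frontend-app.git src/frontend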

Conclusion

That was another long one, well done if you made it this far!

This article once again underscores the fact that, when it comes to building such an environment, a lot is left to the maintainer's discretion. There is seldom any clear way of doing things with Docker, which is both a strength and a weakness – a somewhat overwhelming flexibility. These little detours contribute to making these articles dense, but I think it is important for you to know that you are allowed to question the way things are done.

On the same note, you might also start to wonder about the practicality of such an environment, with the numerous commands and syntaxes one needs to remember to navigate it properly. And you would be right. That is why the next article will be about using Bash to abstract away some of that complexity, to introduce a nicer, more user-friendly interface in its place.

You can subscribe to email alerts below to make sure you don't miss it, or you can also follow me on Twitter where I will share my posts as soon as they are published.
