
Docker for local web development, part 4: smoothing things out with Bash

Introduction

As our development environment is taking shape, the number of commands we need to remember starts to build up. Here are a few of them, as a reminder:

  • docker-compose up -d to start the containers;
  • docker-compose logs -f nginx to watch the logs of Nginx;
  • docker-compose exec backend php artisan to run Artisan commands (or with run --rm if the container isn't running);
  • docker-compose exec frontend yarn to run Yarn commands (ditto);
  • etc.

Clearly, none of the examples above is impossible to remember and, with some practice, anyone would eventually know them by heart. Yet that is a lot of text to type, repeatedly, and if you haven't used a specific command for a while, looking for the right syntax can end up taking a significant amount of time.

Moreover, the scope of this tutorial series is rather limited; in practice, you're likely to deal with projects much more complex than this, requiring many more commands.

There is little point in implementing an environment that ends up increasing the developer's mental load. Thankfully, there is a great tool out there that can help us mitigate this issue, one you've probably at least heard of and that is present pretty much everywhere: Bash. With little effort, Bash will allow us to add a layer on top of Docker to abstract away most of the complexity, and introduce a standardised, user-friendly interface instead.

The assumed starting point of this tutorial is where we left things at the end of the previous part, corresponding to the repository's part-3 branch.

If you prefer, you can also directly check out the part-4 branch, which is the final result of today's article.

Bash?

Bash has been around since 1989, meaning it's pretty much as old as the Internet as we know it. It is essentially a command processor (a shell), executing commands either typed in a terminal or read from a file (a shell script).

Bash allows its users to automate and perform a great variety of tasks, which I am not even going to try and list. What's important to know in the context of this series, is that it can run pretty much everything a human usually types in a terminal, that it is natively present on Unix systems (Linux and macOS), and can easily be installed on Windows (that's your cue to click the link if you are unsure how to run Bash on it 😉).

Its flexibility and portability make it an ideal candidate for what we want to achieve today. Let's dig in!

The application menu

For starters, let's create a file named demo at the root of our project (alongside docker-compose.yml) and give it execution permissions:

$ touch demo
$ chmod +x demo

This file will contain the Bash script allowing us to interact with the application.

Open it and add the following line at the very top:

#!/bin/bash

This is just to indicate that Bash shall be the interpreter of our script, and where to find it (/bin/bash is the standard location on just about every Unix system, and also on Windows' Git Bash).

The first thing we want to do is to create a menu for our interface, listing the available commands and how to use them.

Update the content of the file with the following:

#!/bin/bash

case "$1" in
    *)
        cat << EOF

Command line interface for the Docker-based web development environment demo.

Usage:
    demo <command> [options] [arguments]

Available commands:
    artisan ................................... Run an Artisan command
    build [image] ............................. Build all of the images or the specified one
    composer .................................. Run a Composer command
    destroy ................................... Remove the entire Docker environment
    down [-v] ................................. Stop and destroy all containers
                                                Options:
                                                    -v .................... Destroy the volumes as well
    init ...................................... Initialise the Docker environment and the application
    logs [container] .......................... Display and tail the logs of all containers or the specified one's
    restart [container] ....................... Restart all containers or the specified one
    start ..................................... Start the containers
    stop ...................................... Stop the containers
    update .................................... Update the Docker environment
    yarn ...................................... Run a Yarn command

EOF
        exit 1
        ;;
esac

case is a basic control structure (referred to as a switch in some programming languages) allowing us to do different things based on the value of $1, $1 being the first parameter passed on to the demo script.

For example, with the following command, $1 would contain the string unicorn:

$ demo unicorn

For now, we only address the default case, which is represented by *. In other words, if we call our script without any parameter, or one whose value is not a specific case of the switch, the menu will be displayed.
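
If case is new to you, here is a minimal, self-contained sketch you can try separately (the hello command and file name are made up for the example):

#!/bin/bash

case "$1" in
    hello)
        echo "Hello there!"
        ;;
    *)
        echo "Unknown command: '$1'"
        ;;
esac

Saved as example.sh, running bash example.sh hello prints the greeting, while any other value – or none at all – falls through to the * branch, just like our menu does.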

We now need to make this script available from anywhere in a terminal. To do so, add the following function to your local .bashrc file (or .zshrc, or anything else according to your configuration):

function demo {
    cd /PATH/TO/YOUR/PROJECT && bash demo $*
    cd -
}

"Wait. What?" Each time you open a new terminal window, Bash will try and read the content of some files, if it can find them. These files contain commands and instructions you basically want Bash to run at start-up, such as updating the $PATH variable, running a script somewhere or, in our case, make a function available globally. Different files can be used, but to keep it simple we'll stick to updating or creating the .bashrc file in your home folder, and add the demo function above to it:

$ vi ~/.bashrc

From then on, every time you open a terminal window, this file will be read and the demo function made available globally. This will work whatever your operating system is (including Windows, as long as you do this from Git Bash, or from your terminal of choice).

Make sure to replace /PATH/TO/YOUR/PROJECT with the absolute path of the project's root (if you are unsure what that is, run pwd from the folder where the docker-compose.yml file sits and copy and paste the result). The function essentially changes the current directory to the project's root (cd /PATH/TO/YOUR/PROJECT) and executes the demo script using Bash (bash demo), passing on all of the command's parameters to it ($*), which are basically all of the characters found after demo.

For example, if you'd type:

$ demo I am a teapot

This is what the function would do behind the scenes:

$ cd /PATH/TO/YOUR/PROJECT && bash demo I am a teapot

The last instruction of the function (cd -) simply changes the current directory back to the previous one. In other words, you can run demo from anywhere – you will always be taken back to the directory the command was initially run from.
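
To picture the effect, here is what a quick session could look like (the paths are made up), assuming the project lives somewhere else entirely:

$ cd /tmp
$ demo start
...
$ pwd
/tmp

Note that cd - also prints the directory it switches back to, so don't be surprised by an extra path at the end of the output.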

Save the changes and open a new console window or source the file for them to take effect:

$ source ~/.bashrc

source will essentially load the content of the sourced file into the current shell, without having to restart it entirely.

Let's display our menu:

$ demo

If all went well, you should see something similar to this:

[Screenshot: the demo menu]

Looks fancy, doesn't it? Yet, so far none of these commands is doing anything. Let's fix this!

Basic commands

We will start with a simple command, to give you a taste. Update the switch in the demo file so it looks like this:

case "$1" in
    start)
        start
        ;;
    *)
        cat << EOF
...

We've added the start case, in which we call the start function without any parameters. That function doesn't exist yet – at the top of the file, under #!/bin/bash, add the following code:

# Create and start the containers and volumes
start () {
    docker-compose up -d
}

This short function simply runs the now familiar docker-compose up -d, which starts the containers in the background. Notice that we don't need to change the current directory, as when we invoke the demo function, we are automatically taken to the folder where the demo file is, which is also where docker-compose.yml resides.

Save the file and try out the new command (it doesn't matter whether the containers are already running):

$ demo start

That's it! You can now start your project from anywhere in a console using the command above, which is much simpler to type and remember than docker-compose up -d.

Let's give this another go, this time to display the logs. Add another case to the structure:

case "$1" in
    logs)
        logs
        ;;
    start)
        start
        ;;
    *)
        cat << EOF
...

And the corresponding function:

# Display and tail the logs
logs () {
    docker-compose logs -f
}

Try it out:

$ demo logs

You now have a shortcut command to access the containers' logs easily. That's nice, but how about displaying the logs of a specific container?

Let's modify the case slightly:

case "$1" in
    logs)
        logs "${@:2}"
        ;;
    start)
        start
        ;;
    *)
        cat << EOF
...

Instead of directly calling the logs function, we are now also passing on the script's parameters to it, starting from the second one, if any (that's the "${@:2}" bit). The reason is that when we type demo logs nginx, the first parameter of the script is logs, and we only want to pass on nginx to the logs function.

Update the logs function accordingly:

# Display and tail the logs
logs () {
    docker-compose logs -f "${@:1}"
}

Using the same syntax, we append the function's parameters, if any, to the command, starting from the first one ("${@:1}").
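
If this slicing syntax is new to you, a throwaway script makes it easy to visualise (the script's name is made up – save it as slices, for instance):

#!/bin/bash

echo "\$1     -> $1"       # the first parameter only
echo "\${@:2} -> ${@:2}"   # every parameter from the second one onwards
echo "\${@:1} -> ${@:1}"   # every parameter from the first one onwards

Running bash slices logs nginx mysql would then print:

$1     -> logs
${@:2} -> nginx mysql
${@:1} -> logs nginx mysql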

Save the file again and give it a try:

$ demo logs nginx

[Screenshot: tailing the Nginx container's logs]

Now that you get the principle, and as most of the other functions work in a similar fashion, here is the rest of the file, with some block comments to make it more readable:

#!/bin/bash


#######################################
# FUNCTIONS
#######################################

# Run an Artisan command
artisan () {
    docker-compose run --rm backend php artisan "${@:1}"
}

# Build all of the images or the specified one
build () {
    docker-compose build "${@:1}"
}

# Run a Composer command
composer () {
    docker-compose run --rm backend composer "${@:1}"
}

# Remove the entire Docker environment
destroy () {
    read -p "This will delete containers, volumes and images. Are you sure? [y/N]: " -r
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit; fi
    docker-compose down -v --rmi all
}

# Stop and destroy all containers
down () {
    docker-compose down "${@:1}"
}

# Display and tail the logs of all containers or the specified one's
logs () {
    docker-compose logs -f "${@:1}"
}

# Restart all containers or the specified one
restart () {
    docker-compose restart "${@:1}"
}

# Start the containers
start () {
    docker-compose up -d
}

# Stop the containers
stop () {
    docker-compose stop
}

# Run a Yarn command
yarn () {
    docker-compose run --rm frontend yarn "${@:1}"
}


#######################################
# MENU
#######################################

case "$1" in
    artisan)
        artisan "${@:2}"
        ;;
    build)
        build "${@:2}"
        ;;
    composer)
        composer "${@:2}"
        ;;
    destroy)
        destroy
        ;;
    down)
        down "${@:2}"
        ;;
    logs)
        logs "${@:2}"
        ;;
    restart)
        restart "${@:2}"
        ;;
    start)
        start
        ;;
    stop)
        stop
        ;;
    yarn)
        yarn "${@:2}"
        ;;
    *)
        cat << EOF

Command line interface for the Docker-based web development environment demo.

Usage:
    demo <command> [options] [arguments]

Available commands:
    artisan ................................... Run an Artisan command
    build [image] ............................. Build all of the images or the specified one
    composer .................................. Run a Composer command
    destroy ................................... Remove the entire Docker environment
    down [-v] ................................. Stop and destroy all containers
                                                Options:
                                                    -v .................... Destroy the volumes as well
    init ...................................... Initialise the Docker environment and the application
    logs [container] .......................... Display and tail the logs of all containers or the specified one's
    restart [container] ....................... Restart all containers or the specified one
    start ..................................... Start the containers
    stop ...................................... Stop the containers
    update .................................... Update the Docker environment
    yarn ...................................... Run a Yarn command

EOF
        exit 1
        ;;
esac

Note that run --rm is used to execute Artisan, Composer and Yarn commands on the backend and frontend containers respectively, which allows us to run them whether the containers are already up or not.
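
To make the difference concrete, here are the two ways of running the same Artisan command (--version being an innocuous example):

# exec requires the backend container to be up and running already
$ docker-compose exec backend php artisan --version

# run --rm spins up a one-off container instead, and removes it once
# the command exits, so it works whether the environment is up or not
$ docker-compose run --rm backend php artisan --version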

Also, as the destroy function's job is to delete all of the containers, volumes and images, it would be quite a pain to run it by mistake, so I made it failsafe by adding a confirmation prompt.

Most of the commands are now covered, but you might have noticed that a couple of them are still missing: init and update. These are a bit special, so the next section is dedicated to them.

Initialising and updating the project

Let's take a step back for a minute. Imagine you've been given access to the project's repository in order to install it on your machine. The first thing you'd do is to clone it locally, and to add the demo function to .bashrc so you can interact with the application.

From there, you would still need to perform the following actions:

  • Copy .env.example to .env at the root of the project and complete the latter;
  • Do the same in src/backend;
  • Download and build the images;
  • Install the frontend's dependencies;
  • Install the backend's dependencies;
  • Run the backend's database migrations;
  • Generate the backend's application key;
  • Start the containers.

While the Bash layer facilitates going through that list, that's still quite some work to do in order to obtain a functional setup, and it would be easy to miss a step. What if you need to reset the environment? Or to install it on another developer's machine? Or guide a client through the process?

Thankfully, now that we've introduced Bash to the mix, automating the tasks above is fairly simple.

First, add the two missing cases to demo:

init)
    init
    ;;
update)
    update
    ;;

And the corresponding functions (if your file is getting messy, you can also take a look at the final result here):

# Create .env from .env.example
env () {
    if [ ! -f .env ]; then
        cp .env.example .env
    fi
}

# Initialise the Docker environment and the application
init () {
    env \
        && down -v \
        && build \
        && docker-compose run --rm --entrypoint="//opt/files/init" backend \
        && yarn install \
        && start
}

# Update the Docker environment
update () {
    git pull \
        && build \
        && composer install \
        && artisan migrate \
        && yarn install \
        && start
}

Let's have a look at init first, whose job is to initialise the whole project. The first thing the function does is to call another function, env, which we defined right above it, and which is responsible for creating the .env file as a copy of .env.example if it doesn't exist. The next thing init does is to ensure the containers and volumes are destroyed, as the aim is to be able to both initialise the project from scratch and reset it. It then builds the images (which will also be downloaded if necessary), and goes on with running some sort of script on the backend container. Finally, it installs the frontend's dependencies and starts the containers.

You will probably recognise most of the items from the list at the beginning of this section, but what are this init file and that entrypoint we are referring to?

Since the backend requires a bit more work, we can isolate the corresponding steps into a single script that we will mount on the container in order to run it there. This means we need to make a small addition to the backend service in docker-compose.yml:

# Backend Service
backend:
  build: ./src/backend
  working_dir: /var/www/backend
  volumes:
    - ./src/backend:/var/www/backend:delegated
    - ./.docker/backend/init:/opt/files/init:delegated,ro
  depends_on:
    - mysql

Since the script will be run on the container, we need Bash to be installed on it, which is not the case by default on Alpine. We need to update the backend's Dockerfile accordingly (in src/backend):

FROM php:7.4-fpm-alpine

# Install extensions
RUN docker-php-ext-install pdo_mysql bcmath

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Install extra packages
RUN apk --no-cache add bash mysql-client mariadb-connector-c-dev

We've also added the mysql-client and mariadb-connector-c-dev packages as we're going to need them (more on that in a minute).

Build the image:

$ demo build backend

Finally, let's create the init file, in the .docker/backend folder:

#!/bin/bash

# Install Composer dependencies
composer install -d "/var/www/backend"

# Deal with the .env file if necessary
if [ ! -f "/var/www/backend/.env" ]; then
    # Create .env file
    cat > "/var/www/backend/.env" << EOF
APP_NAME=demo
APP_ENV=local
APP_KEY=
APP_DEBUG=true
APP_URL=http://backend.demo.test

LOG_CHANNEL=single

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=demo
DB_USERNAME=root
DB_PASSWORD=root

BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
EOF

    # Generate application key
    php "/var/www/backend/artisan" key:generate --ansi
fi

# Make sure the MySQL database is available
echo 'Waiting for MySQL to be available'
count=1
while [ $count -le 10 ] && ! mysql -uroot -proot -hmysql -P3306 -e 'exit' ; do
    sleep 5
    ((count++))
done
if [ "$count" -ge 10 ]; then
    echo >&2 'error: failed to connect to MySQL after 10 attempts'
    exit 1
fi
echo 'MySQL connection successful!'

# Database
php "/var/www/backend/artisan" migrate

Let's break this down a bit. First, we install the Composer dependencies, specifying the folder to run composer install from with the -d option. We then check whether there's already a .env file, and if there isn't, we create one with some pre-configured settings matching our Docker setup. Notice that we leave the APP_KEY environment variable empty; that is why we run the command to generate the Laravel application key right after creating the .env file. We then move on to setting up the database.

At the very beginning of the project's initialisation, there is no database. Even though we specified in docker-compose.yml that the MySQL container should be started before the backend container, being up and running doesn't mean it has had the chance to create the demo database yet. In most situations, the corresponding delay isn't a problem, but in our case the init script will be executed almost immediately, meaning there may be a few seconds during which it won't be able to reach the database. For the script not to exit with an error, we use a loop that periodically tries to establish a connection with the MySQL CLI: this is what we installed the mysql-client package for. On top of that, we also need the mariadb-connector-c-dev package to make the caching_sha2_password authentication plugin available to the MySQL CLI.

I don't want to linger on this too much, but if you're curious, the info box below is there for you.

MySQL authentication plugin

With MySQL 8, the default authentication plugin changed to caching_sha2_password, and in some instances systems started to fail to connect to MySQL databases after the upgrade, since that plugin isn't available everywhere by default.

A quick solution is to revert to the previous plugin, which one can do by setting default-authentication-plugin to mysql_native_password in the MySQL configuration. The change was made for security reasons, however, so using the new plugin is recommended whenever possible.
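
For reference, such a revert would look something like this hypothetical excerpt of the MySQL server's configuration file (we don't need it for our setup, since we install the client-side plugin instead):

[mysqld]
default-authentication-plugin=mysql_native_password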

On Alpine, the client-side plugin is built into the libmysqlclient library, which comes with the mariadb-connector-c-dev package, hence we installed it along with mysql-client. This is actually a good example of the kind of research effort it sometimes takes to make something work.

Just like the demo file at the beginning of this article, we need to make the init file executable:

$ chmod +x .docker/backend/init

Since we set it as the entrypoint of the container we invoke, it will be the first and only script to run before the container is destroyed.

You can try the command out straight away, regardless of the current state of your project:

$ demo init

At this point though, you've probably taken care of most of the steps covered by the script already (e.g. generating the .env files or installing the dependencies). If you wish to test the complete process, you can run demo destroy, then either delete the entire project or start afresh by cloning your own repository or this one (in the latter case, check out the part-4 branch), without forgetting to update the function in .bashrc if the path has changed. Then run demo init again.

I personally find the experience of seeing the whole project setting itself up incredibly satisfying, but maybe that's just me.

Why the double slash?

You might have noticed that, in the init function, the path to the init file is preceded with a double slash:

...
docker-compose run --rm --entrypoint="//opt/files/init" backend
...

This is not a typo. For some reason, when there's only one slash Windows will prepend the current local path to the script's, consequently complaining that the file cannot be found (duh). Adding another slash prevents that behaviour, while being ignored on other platforms.

That leaves us with the update function, whose job is to make sure our environment is up to date:

# Update the Docker environment
update () {
    git pull \
        && build \
        && composer install \
        && artisan migrate \
        && yarn install \
        && start
}

This is a convenience method that will pull the repository, build the images in case the Dockerfiles have changed, make sure any change of dependency is applied and new migrations are run, and restart the containers that need it (i.e. whose image has changed).

Managing separate repositories

As I mentioned before, in a regular setup you are more likely to have the Docker environment, the backend application and the frontend application in separate repositories. Bash can also help in this situation: assuming the src folder is git-ignored and the code is hosted on GitHub, a function pulling the applications' repositories could look like this:

# Clone or update the repositories
repositories () {
    repos=(frontend backend)
    cd src
    for repo in "${repos[@]}";
    do
        git clone "git@github.com:username/${repo}.git" "$repo" || (cd "$repo" ; git pull ; cd ..) || true
    done
    cd ..
}

Conclusion

The aim of this series is to build a flexible environment that makes our lives easier. That means the user experience must be as slick as possible, and having to remember dozens of complicated commands doesn't quite fit the bill.

Bash is a simple yet powerful tool that, when combined with Docker, makes for a great developer experience. After today's article, it will be much simpler to interact with our environment, and if you happen to forget a command, a refresher is now just one demo away.

I kept things as simple as possible to avoid cluttering the post, but there is obviously much more you can get out of this duo. Many more commands can be simplified – just tailor the layer to your needs.

In the next part of this series, we will see how to generate a self-signed SSL/TLS certificate in order to bring HTTPS to our environment. Subscribe to email alerts below so you don't miss it, or follow me on Twitter where I will share my posts as soon as they are published.
