Why hello again. What's on today's agenda? Well, a good old bit of performance monitoring, and setting it all up using our all-time favourite, Docker. And why? Well, why not? This site is a small fish in the www, so worrying about site performance isn't a big deal, but for people who run online businesses it's definitely something to keep an eye on.

So what is Sitespeed.io ...

Sitespeed.io is an open source toolkit that helps you analyse and optimise your website's speed and performance. It's based on performance best practice advice from the Coach, and collects browser metrics using the Navigation Timing API, User Timings and Visual Metrics (FirstVisualChange, SpeedIndex & LastVisualChange).

In simpler terms, it's a bunch of open source (£££ FREE £££) software that uses industry standard checks and measures to give you a report (or a dashboard with Grafana) on how well your site performs. There are a shit ton of other features and settings to this thing, too many for me to write out here. For now, I'll run through the following:

  • Installing Docker (if you haven't already)
  • Spinning up Sitespeed.io container
  • Creating Network Environments to simulate 3G, 4G and broadband speeds
  • CRON jobs to run at set periods
  • An NGINX directory listing page to access the HTML reports

Sounds like fun! Let's get to it...

Docker

In order to run Sitespeed.io we'll need to install Docker so let's begin!

Always good practice to update your existing packages first:

sudo apt update

Next a few prerequisites:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Now add the GPG key for the official Docker repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo:

sudo apt update

Finally we can install Docker:

sudo apt install docker-ce

Let's check everything has installed correctly:

sudo systemctl status docker

Good stuff! Now we've got Docker installed, let's copy and paste one last command to allow us to run docker commands without needing sudo ...

sudo usermod -aG docker ${USER}
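
If you want to double-check that last step worked, log out and back in first (the group change only applies to new sessions) and then spin up the throwaway hello-world container, no sudo required:

docker run --rm hello-world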

Sitespeed.io

First thing to do is to pull the Sitespeed.io docker image by running the following command:

docker pull sitespeedio/sitespeed.io

It's a hefty image so it may take some time. Once it's downloaded we're ready to run our first test. Sitespeed.io doesn't have a web UI to interact with, so each test needs to be run manually via a docker command (we'll go into CRON jobs later). Before we run the test, let's go over some variables that'll need to be amended or added.

docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.8.0 -b chrome https://blog.swakes.co.uk/
  • Using -v "$(pwd)":/sitespeed.io maps your current directory into the container and outputs the results directory there, so remember where you ran the command from!
  • The -b chrome flag, you may have guessed it, runs the test using a desktop Chrome browser. You can use either -b chrome (the default) or -b firefox.
  • And finally, change https://blog.swakes.co.uk/ to your desired URL.

Here are some other parameters which may be handy.

  • By default each test does 3 runs. If you want to change this, add -n 5 (for 5 runs) to the command.
  • If you want your tests to run through a mobile device browser, add --mobile to the command. This will use a Chrome browser emulating an Apple iPhone 6 (360x640 viewport). See the combined example below.
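
Pulled together, a run using both of those tweaks would look something like this (swap in your own URL, same as before):

docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.8.0 -b chrome -n 5 --mobile https://blog.swakes.co.uk/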

Once you've made your changes, run the command and watch the magic happen, well, at least a run-down of the script. When the test has finished you'll be able to run ls -l and find the final report in a newly created output folder (the structure looks like the tree below). In that folder, find and open index.html to review the report.

blog.swakes.co.uk/
└── [Date-Time]
    ├── css
    ├── font
    ├── img
    │   └── ico
    ├── index.html
    ├── js
    ├── logs
    └── pages
        └── blog.swakes.co.uk
            └── data
                ├── screenshots
                └── video
                    └── images
                        ├── 1
                        ├── 2
                        └── 3

The report should look similar to the one below. I won't go into every detail of the report, so have a browse around and find out what nasties, or goodies, Sitespeed.io has found. You can find even more detail under Pages and then clicking on a URL.

Fucking A! Now you've got Sitespeed.io all set up! Well, at least we've gone over the basics. There's still a shit ton more you can configure, so check out their documentation here. These next steps are purely optional, so if you want to bail out here, that's all good. However, for those who want to delve in more, let's go ahead and set up Network Environments, CRON jobs and NGINX.

Network Environments

For some test scenarios you may want or need to simulate a variety of network speeds to emulate real user behaviour. A rough (really rough) example of this would be testing a mountain-climbing map site. There's no point testing using WiFi speeds if your users will mainly be accessing it outdoors via 3G/4G coverage. Simulating this helps to identify issues such as huge files or images which could affect performance. By default, any test you run in Sitespeed will use whatever speed is available on your network or device. To set up our own network speeds, we need to create Docker network bridges. Easy enough to accomplish. Instead of bashing out tons of commands, we're going to knock up a script to do it for us.

sudo nano ~/docker/sitespeed/setupntw.sh

If you're happy with the default settings in the code below, proceed on; however, if you want to tweak the speeds or names, you'll need to amend:

  • The name of each network is the last argument on each docker network create line (...network.bridge.name"="docker1" 3g). To amend it, simply change 3g and not "docker..".
  • To change any of the speeds, amend the two identical values after rate and ceil (...htb rate 2.5mbit ceil 2.5mbit).
#!/bin/bash
echo 'Starting Docker networks'

# 3g - 1.6mbit with 150ms of added latency
docker network create --driver bridge --subnet=192.168.33.0/24 --gateway=192.168.33.10 --opt "com.docker.network.bridge.name"="docker1" 3g
tc qdisc add dev docker1 root handle 1: htb default 12
tc class add dev docker1 parent 1:1 classid 1:12 htb rate 1.6mbit ceil 1.6mbit
tc qdisc add dev docker1 parent 1:12 netem delay 150ms

# cable - 5mbit with 14ms of added latency
docker network create --driver bridge --subnet=192.168.34.0/24 --gateway=192.168.34.10 --opt "com.docker.network.bridge.name"="docker2" cable
tc qdisc add dev docker2 root handle 1: htb default 12
tc class add dev docker2 parent 1:1 classid 1:12 htb rate 5mbit ceil 5mbit
tc qdisc add dev docker2 parent 1:12 netem delay 14ms

# 3gfast - 1.6mbit with 75ms of added latency
docker network create --driver bridge --subnet=192.168.35.0/24 --gateway=192.168.35.10 --opt "com.docker.network.bridge.name"="docker3" 3gfast
tc qdisc add dev docker3 root handle 1: htb default 12
tc class add dev docker3 parent 1:1 classid 1:12 htb rate 1.6mbit ceil 1.6mbit
tc qdisc add dev docker3 parent 1:12 netem delay 75ms

# 3gslow - 0.4mbit with 200ms of added latency
docker network create --driver bridge --subnet=192.168.36.0/24 --gateway=192.168.36.10 --opt "com.docker.network.bridge.name"="docker4" 3gslow
tc qdisc add dev docker4 root handle 1: htb default 12
tc class add dev docker4 parent 1:1 classid 1:12 htb rate 0.4mbit ceil 0.4mbit
tc qdisc add dev docker4 parent 1:12 netem delay 200ms
sudo chmod +x ~/docker/sitespeed/setupntw.sh
sudo ~/docker/sitespeed/setupntw.sh
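
If you want to sanity-check that the bridges and their traffic rules are in place, these two commands will show them:

docker network ls
tc qdisc show dev docker1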

Cool beans. Once that script has run successfully, you'll be able to add the --network=3g and -c 3g parameters to your tests.

docker run --network=3g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.8.0 -b chrome -c 3g https://blog.swakes.co.uk/ --mobile

CRON Jobs

There may be a time of day when you want or need to run a test but really can't be arsed with the hassle of doing it manually. Well, where you can, automate that shit. To do this we can set up a CRON job by running crontab -e (select option 1 for nano). Now we need to define two bits: the time/date/frequency and the command. I'm not going to go into detail about CRON so visit this site to help define the 'job'. The example below shows a job that runs every 30 mins during odd hours, on every day, month and year. Build a suitable timing rule for your job, then simply copy the output into crontab -e on the bottom line.
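
As a rough sketch, a finished entry could look something like the line below. The output path and URL are placeholders (swap in your own), and note that cron needs absolute paths rather than $(pwd):

*/30 1-23/2 * * * docker run --rm -v /home/youruser/sitespeed:/sitespeed.io sitespeedio/sitespeed.io:9.8.0 -b chrome https://blog.swakes.co.uk/ >> /home/youruser/sitespeed-cron.log 2>&1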

Some words of wisdom when it comes to CRON jobbing these Sitespeed tests: make sure you have enough disk space! My first mistake was running tons of jobs every 15 mins for a solid few days before finding that most of my disk space had been eaten up!
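
One way to keep on top of it is a second cron'd entry that bins old reports. A rough sketch, assuming your reports live in /home/youruser/sitespeed and you're happy to keep two weeks' worth (both are placeholders, adjust to taste):

0 4 * * * find /home/youruser/sitespeed -mindepth 2 -maxdepth 2 -type d -mtime +14 -exec rm -rf {} +

The -mindepth/-maxdepth bit targets the dated folders in the domain/date structure shown earlier, so only whole old test runs get deleted.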

NGINX

So now we have all these fantastic reports, but the only way to view them is by manually going onto the server or grabbing them via SFTP etc. Using the wonders of NGINX, we can set up a site, password protect it and be able to navigate through and view each test. In order to do this you'll need an NGINX server and a domain name. Lucky for you, I've already written a post on how to set this up. Check it out here.

Assuming you've got NGINX setup and a domain name to access your reports through, let's crack on! Three parts to this bit...

/var/www/sitespeed

To keep everything 'in house', it's best to repoint your Sitespeed outputs to a newly created folder in /var/www by running sudo mkdir -p /var/www/sitespeed. The annoying part now is that you may need to amend all those CRON jobs we just set up. Apart from that, remember to replace -v "$(pwd)":/sitespeed.io with -v /var/www/sitespeed:/sitespeed.io in any future commands you run manually.
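
So, creating the folder and then a manual run pointed at it would look something like this (swap in your own URL as usual):

sudo mkdir -p /var/www/sitespeed
docker run --rm -v /var/www/sitespeed:/sitespeed.io sitespeedio/sitespeed.io:9.8.0 -b chrome https://blog.swakes.co.uk/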

NGINX Directory Listings

So now we've got our test outputs saved in a new directory, it's time to set up a site to view them through. We're not going down the route of building any HTML or CSS to view these reports; instead we can utilise a feature in NGINX called directory listing. With this enabled, NGINX will display the contents of the folder in the browser instead of serving up an HTML page (see below).

To enable this and see your Sitespeed reports, you'll need to include the following lines in your NGINX site config.

        location / {
                autoindex on;
        }

Dropped into the full server block, the config ends up looking like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /var/www/sitespeed;

        index index.php index.html index.htm;

        server_name speed.swakes.co.uk www.speed.swakes.co.uk;

        location / {
                autoindex on;
        }
}

Cool beans. Before you go ahead and run sudo nginx -s reload, we need to 'secure' the site from any unwanted users and their prying eyes. There are two quick and easy ways to do so: one using a 'local.conf' NGINX snippet to define IP whitelisting/blacklisting, and the other by creating a .htpasswd file to enable a username and password prompt. The local.conf setup is covered under my NGINX SSL guide here, so let's delve into .htpasswd'ing this bitch.
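
(If you do fancy the IP route instead, it boils down to a couple of allow/deny lines in the same location block; a rough sketch is below, where 203.0.113.4 is just a placeholder for your own address.)

        location / {
                allow 203.0.113.4;
                deny all;
                autoindex on;
        }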

First we'll need to create the .htpasswd file and assign a username and password (you may need to install the apache2-utils package to get the htpasswd command). Replace username with your own, then follow the prompts to enter a password.

sudo htpasswd -c /etc/nginx/.htpasswd username

Now go back to your NGINX site configuration and add the following lines:

        auth_basic "Private Property";
        auth_basic_user_file /etc/nginx/.htpasswd;
server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /var/www/sitespeed;

        auth_basic "Private Property";
        auth_basic_user_file /etc/nginx/.htpasswd;

        index index.php index.html index.htm;

        server_name speed.swakes.co.uk www.speed.swakes.co.uk;

        location / {
                autoindex on;
        }
}

Once you've updated and saved the site config, run a quick sudo nginx -t to ensure we (I) haven't made any fuck-ups before going all the way and running sudo nginx -s reload to publish those changes. Now go to your site and you should be greeted with a login prompt!
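
If you'd rather check from the terminal first, a quick curl does the trick (using the example hostname from the config above); the first request should come back with a 401 and the second, with your credentials, a 200:

curl -I http://speed.swakes.co.uk/
curl -I -u username:yourpassword http://speed.swakes.co.uk/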

Hopefully by the end of this you should have a nicely automated testing solution set up and accessible from anywhere! As usual, if I've screwed up or missed anything in this guide, please let me know in the comments below.