Website setup 2017

— 30 minute read —
Jekyll + Sass + ES6 + Promises + Fetch + Gitlab CI + Service Workers + NPM Scripts

This post will differ from my usual in many ways. It will be largely technical, though I shall try to explain things simply. I will be rewriting this post as I see fit, updating the information where I have changed my methodology, adding to it where I have improved the system and removing that which is no longer relevant. What follows is a complete overview of my approach to building the perfect website.

  1. Criteria
  2. Jekyll
  3. NPM Scripts
  4. Gitlab CI w/ FTP Deploy
  5. Design
  6. Accessibility
  7. Service Workers
  8. Page Transitions
  9. Server
  10. Refactoring
  11. General
  12. Conclusion

1. Criteria

First, let's outline the minimum criteria:

  • Be secure, impossible to breach from the front end
  • Use only open source software
  • Have a completely automated workflow
  • Be as fast as technically possible
  • Work offline
  • Have smooth and custom page transitions (they’re currently all the same — work in progress)
  • Work easily as a blog that can be written in markdown
  • Be fully responsive (of course)
  • Use content delivery networks for certain assets
  • Be optimised for accessibility
  • Have super lightweight CSS and JavaScript files
  • Beautiful code to work with, ugly code for production
  • Scalable
  • Testable

2. Jekyll

I use Jekyll as a lightweight, static content management system. It allows me to effortlessly write posts and template pages with standardised content (i.e. navigation, footer, meta content), all whilst rendering to flat HTML files. This was important for a few reasons. First, static pages are served far faster than ones which must dynamically include content (say, with PHP), as there is no server-side processing per request. Second, security is always important, and there is a far greater risk of a site being compromised when a server-side language is heavily involved. Third, a site which relies heavily on server-side processing, PHP includes and logic (such as WordPress) cannot scale cheaply, as more server resources are consumed for each user. Fourth, I wanted to make use of the latest Service Worker caching mechanism, and to do that I needed a static architecture. And fifth, Jekyll is open source.

On a side note, Jekyll is written in Ruby, which is fine but in an ideal world it would have made my life so much easier and simpler had it been written in Node.js. Maybe one day.

I previously ran Jekyll through Gulp, giving me automation and complete control over my build process. But I eventually scrapped this for the far simpler NPM Scripts; Jekyll is now run through a script in my package.json (shown in the snippet below).

	"scripts": {
		"watch:jekyll": "jekyll build --drafts --watch",
		"build:jekyll": "jekyll build --lsi"
	}

The Jekyll build task, which is only run in my Gitlab CI process, uses the --lsi flag to optimise related posts. Whilst this does slow build times, I do not have enough posts for it to make a significant difference, though I intend to look into speeding it up with a new Ruby library.
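For context, the posts that --lsi selects are exposed to templates as site.related_posts; a minimal Liquid sketch (illustrative markup, not my exact template):

```liquid
{% for post in site.related_posts limit:2 %}
	<a href="{{ post.url }}">{{ post.title }}</a>
{% endfor %}
```

Without --lsi, site.related_posts simply falls back to the most recent posts.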

Jekyll will also generate a sitemap using the jekyll-sitemap plugin. There is a unit test to verify this was completed, and done so correctly. An Atom XML feed is also generated using only standard Liquid markup. The Atom feed allows integration with many third-party news aggregation services and is almost mandatory for any good site which intends to make regular updates.

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

	<title>{{ site.title }}</title>
	<link href="{{ site.url }}/atom.xml" rel="self"/>
	<link href="{{ site.url }}/"/>
	<logo>{{ site.url }}/public/img/logo.png</logo>
	<updated>{{ site.time | date_to_xmlschema }}</updated>
	<id>{{ site.url }}</id>
	<author>
		<name>Jack Edgson</name>
	</author>

	{% for post in site.posts %}
	<entry>
		<title>{{ post.title }}</title>
		<link href="{{ site.url }}{{ post.url }}"/>
		<url>{{ site.url }}/public/img/{{ post.image }}</url>
		<updated>{{ post.date | date_to_xmlschema }}</updated>
		<id>{{ site.url }}{{ post.id }}</id>
		<content type="html">{{ post.content | xml_escape }}</content>
	</entry>
	{% endfor %}

</feed>


Markdown, the holy grail of web-based (dare I say all) formatted writing, ships by default with Jekyll via the Ruby-based library Kramdown. This meets the criteria I had laid out in advance, with the added benefit of YAML front matter allowing extra customisation of each post's metadata. For example, this post's image, caption, and title are all specified in the front matter.

I have yet to decide if I want to integrate a comment system into this site. If I do, which eventually I will, Staticman will be my system of choice, as it is fully customisable from a design perspective and incredibly lightweight.

3. NPM Scripts

My more naive self once used Gulp to automate my build and development process. However, once I realised it wasn't needed, I opted instead for NPM Scripts built right into my package.json file. Fewer devDependencies, and fewer files. Minimalism 1, Gulp 0.

Autoprefixer is perhaps one of my most valued integrations, allowing me to write clean CSS without having to remember all the latest vendor prefixes. My current setup just prefixes everything; honestly, it doesn't add much weight (~1 KB ungzipped), so I'm not too worried about targeting specific browsers and versions.
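As a rough illustration of what Autoprefixer does with a prefix-everything setup (the exact prefixes emitted depend on the configured browser list):

```css
/* source */
.fade { transition: opacity .3s; }

/* output */
.fade { -webkit-transition: opacity .3s; transition: opacity .3s; }
```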

The CSS is all written in Sass, and separated into .scss partials which are included in a main scss file. NPM Scripts handle the conversion from Sass to regular CSS, as well as compression, minification and concatenation (if ever the need arises). The JavaScript is transpiled from ES2015 to ES5, so that I can continue to write beautiful code where possible whilst still retaining decent browser support.
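The main Sass file then does nothing but pull in the partials; something like this, with illustrative partial names:

```scss
// src/_sass/main.scss
@import 'variables';
@import 'base';
@import 'layout';
@import 'journal';
```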

In an attempt to maximise my Google PageSpeed score I also automated the inlining of critical CSS for each page. However, due to the lightweight nature of the main.css file, I opted to remove this feature for now to see what effect it has on speed. If I reintroduce it, it will be solely on the home page, so as not to add unnecessary bytes to all other pages.

Obviously I'm not going to specify the service worker assets by hand, so I let SW Precache take care of this. It automatically generates a service-worker.js file in the build process and assigns a version hash to each file. The script I'm currently using is in the package.json below.

Since I know you’re curious, here is my entire package.json file as of version 0.1.2 for your reading pleasure.

{
  "name": "jack-edgson",
  "version": "0.1.2",
  "description": "Jack Edgson's personal website and journal",
  "main": "",
  "engines": {
    "node": ">=0.12.0"
  },
  "scripts": {
    "clean": "rimraf _site/public/{css/*,js/*,img/*}",
    "watch:jekyll": "jekyll build --drafts --watch",
    "build:jekyll": "jekyll build --lsi",
    "autoprefixer": "postcss -u autoprefixer -r _site/public/css/*",
    "scss": "node-sass --output-style compressed -o _site/public/css src/_sass",
    "build:css": "npm run scss && npm run autoprefixer",
    "watch:css": "onchange \"src/_sass\" -- npm run scss",
    "lint": "eslint src/_js",
    "babel": "babel src/_js -d _site/public/js/tmp --presets es2015",
    "uglify": "uglifyjs _site/public/js/tmp/*.js --screw-ie8 -m -o _site/public/js/app.js && rimraf _site/public/js/tmp",
    "build:js": "npm run lint && npm run babel && npm run uglify",
    "watch:js": "onchange \"src/_js\" -- npm run build:js",
    "build:all": "npm run build:jekyll && npm run build:css && npm run build:js && npm run service-worker",
    "watch:all": "npm-run-all -p http watch:jekyll watch:css watch:js",
    "service-worker": "sw-precache --root=_site --static-file-globs='_site/**/*.{js,html,css,png,jpg,gif,svg,eot,ttf,woff,woff2}'",
    "http": "live-server _site --wait 1000 --port 9000 --host localhost",
    "serve": "npm run build:all && npm run watch:all",
    "build": "npm run build:all"
  },
  "devDependencies": {
    "autoprefixer": "^6.3.6",
    "babel-cli": "^6.23.0",
    "babel-preset-es2015": "^6.22.0",
    "eslint": "^2.10.2",
    "eslint-config-standard": "^5.3.1",
    "eslint-plugin-promise": "^1.3.0",
    "eslint-plugin-standard": "^1.3.2",
    "node-sass": "^3.7.0",
    "npm-run-all": "^2.1.1",
    "onchange": "^2.4.0",
    "postcss-cli": "^2.5.2",
    "rimraf": "^2.5.4",
    "sw-precache": "^5.0.0",
    "uglify-js": "^2.6.2"
  }
}

4. Gitlab CI

As every developer should, I use version control. And since I like free and open source, I'm using Gitlab (currently the free hosted version, though I'll eventually switch to the self-hosted alternative). Since I'm using Gitlab, I opted for Gitlab's built-in Continuous Integration system rather than, say, Travis CI. So every time I make a git commit, which is any time I finish writing a post or complete a general site update, Gitlab automatically runs the code I tell it to.

Now, this was a challenge: since I'm using both NPM and Jekyll, I needed Node.js as well as Ruby on the runner. That doesn't sound like a challenge, but somehow the things you think will be easy are always the hardest. What I ended up with was a Gitlab CI YAML file (.gitlab-ci.yml) that looks like the following:

image: ruby:latest

before_script:
    - curl -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
    - source ~/.nvm/nvm.sh
    - nvm install 4.0
    - nvm use 4.0
    - apt-get update
    - apt-get install -y locales >/dev/null
    - apt-get -qq -y install bzip2
    - echo "en_US UTF-8" > /etc/locale.gen
    - locale-gen en_US.UTF-8
    - export LANG=en_US.UTF-8
    - export LANGUAGE=en_US:en
    - export LC_ALL=en_US.UTF-8
    - gem install bundler
    - gem install jekyll
    - bundle install --jobs $(nproc) --path=/cache/bundler
    - npm set strict-ssl false
    - npm install

stages:
    - test-deploy-staging
    - deploy-production

test-deploy-staging:
    stage: test-deploy-staging
    script:
        - scripts/test
        - scripts/deploy
    only:
        - master

deploy-production:
    stage: deploy-production
    script:
        - scripts/deploy
    when: manual

I use a public runner preconfigured with the latest Ruby image, and from there it pulls in Node Version Manager. It then runs all the necessary commands to get everything in proper order and installs Jekyll, plus everything in my Gemfile (i.e. jekyll-sitemap). The stages are sectioned off into test-then-deploy-to-staging, and deploy-to-production. The deploy to production is manually triggered, whereas the staging server deploys automatically. Unfortunately I haven't got caching working properly, so the deploy-to-production stage requires redownloading all the dependencies, reconfiguring the runner and rerunning a whole site build.
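For reference, a cache block would take roughly this shape (the key is illustrative). One catch I'm aware of: Gitlab CI only caches paths inside the project directory, so the /cache/bundler path used above can't be cached this way as-is:

```yaml
cache:
  key: "$CI_BUILD_REF_NAME"
  paths:
    - node_modules/
```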

The unit-testing aspect of my site is mostly covered by HTML Proofer. It has a series of checks for links, images, favicons, OpenGraph metadata, scripts, HTML and CSS. It's pretty good at checking my code from a very generic point of view. I've been a bit lazy though, and have written zero actual tests for the site itself. Sure, I lazily check that the directories all exist, and I've considered adding an automated accessibility check, but I plan to fix this up at some point. I swear.

#!/usr/bin/env bash
set -e # Halt script on error.

# build site
npm run build

# unit test site with htmlproofer
htmlproofer _site --only-4xx

# check all directories exist
if [ ! -d "_site" ]; then exit 42; else echo '_site/ directory exists'; fi;
if [ ! -d "_site/public/css" ]; then exit 42; else echo 'css/ directory exists'; fi;
if [ ! -d "_site/public/js" ]; then exit 42; else echo 'js/ directory exists'; fi;
if [ ! -d "_site/public/img" ]; then exit 42; else echo 'img/ directory exists'; fi;
if [ ! -f "_site/sitemap.xml" ]; then exit 42; else echo 'sitemap exists'; fi;

# custom tests
# 1. all html files should have appropriate meta tags
# 2. ...

# accessibility checker

For the deployment bash script, I deploy from Gitlab CI to FTP using pure command-line bash. I sincerely hope that someone, somewhere, in need of an FTP upload that works from the command line without any additional packages, stumbles across this line of code. It deploys my entire website in the Gitlab CI process, but only if all the unit tests have passed. I spent far, far too long getting this single line to work. I tried any number of things, from installing FTP packages, to curl-uploading in a for loop, to zipping files up and automatically unzipping them on the server. In the end, this one line is all it took.

find . -type f -exec curl -u $FTP_USER:$FTP_PASS --ftp-create-dirs -T {} ftp://$FTP_URL{} \;
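To see exactly what that command will do, here is a dry run with echo substituted for curl, using a toy build directory and a stand-in FTP_URL (GNU find substitutes {} even inside a larger argument, which is what the one-liner relies on):

```shell
# dry run: print each upload the one-liner would perform
mkdir -p _site/public/css
echo 'body{}' > _site/public/css/main.css
cd _site
FTP_URL=example.com/
find . -type f -exec echo "upload {} -> ftp://$FTP_URL{}" \;
# prints: upload ./public/css/main.css -> ftp://example.com/./public/css/main.css
```

Because find is run from inside _site, the paths it emits mirror the remote directory layout, and --ftp-create-dirs makes curl create any missing remote directories.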

Also, this single deployment bash script is used for both staging and production. It can tell which one to deploy to through Gitlab's environment variables. The full deploy script is below:

#!/usr/bin/env bash
set -e # Halt script on error.

# choose the target server from Gitlab's build stage
# ($FTP_PROD_URL and $FTP_STAGING_URL are illustrative secret-variable names)
if [ "$CI_BUILD_STAGE" = "deploy-production" ]
then
	FTP_URL=$FTP_PROD_URL
else
	FTP_URL=$FTP_STAGING_URL
fi

# upload the build output, mirroring the local directory layout
cd _site
find . -type f -exec curl -u $FTP_USER:$FTP_PASS --ftp-create-dirs -T {} ftp://$FTP_URL{} \;

5. Design

The design was foremost minimalist and lightweight. No bloated CSS frameworks, not even a grid system (maybe overkill). I plan to continue refactoring the CSS for optimal performance; it certainly has a little way to go.

Aligned with my desire for the most lightweight design possible, I opted to have no raster images on the site; there isn't even in-post support for them. Since the posts needed graphics, I knew I needed some solution, and I did consider responsive images, or perhaps the new <picture> tag, but eventually decided on SVGs. Responsive images not only require more work, upfront and ongoing, but ultimately aren't as well supported as SVGs, are far heavier, and in the end they just didn't fit the design style I was going for. Not to mention vector graphics are infinitely future-proof: if the future brings us 8K 200-inch OLED screens, my site will look great, though maybe the UX will need some attention.

I also required smooth page transitions, which you can read more about in the Page Transitions section.

6. Accessibility

Since this site has to implement everything I could ever think of, it also has to be accessible. By that I mean: work well with screen readers. Obviously there are the basics: contrast ratios, legible font sizes, alt tags. But that wasn't enough. I admit I still have some way to go in this department, but I also wanted to add ARIA labels wherever I could, plus clearly structured markup for SEO and accessibility. An online accessibility checker was a huge help in this regard; I considered adding it to my unit-testing suite, but don't quite think it's necessary just yet. The Node a11y project was also very useful for finding and fixing any errors it threw.
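As a small illustrative example of the sort of markup I mean (not taken from my actual templates): icon-only links get an aria-label, and purely decorative SVGs are hidden from screen readers:

```html
<a href="/atom.xml" aria-label="Atom feed">
	<svg aria-hidden="true" focusable="false" width="16" height="16" viewBox="0 0 16 16"></svg>
</a>
```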

7. Service Workers

If a site can work offline, mine had to. If files could be cached in such a way that the programmer has complete control over how they are served, perfect. Service workers solve this problem, and whilst support is still a little spotty, all the good browsers have it. If you're using Internet Explorer, get a grip and download Chrome or Firefox; you're missing out on everything good about the web.

Service worker asset management is handled with NPM Scripts; see the code for it back in the NPM Scripts section. The basic service worker registration, which lives in one of my main JavaScript files, is below.

if (window.location.hostname !== 'localhost') {

	if ('serviceWorker' in navigator) {
		navigator.serviceWorker.register('/service-worker.js')
			.then(function() { console.log('Service Worker Registered') })
	}
}

Since I'm essentially storing my entire website on a user's computer, I felt the need to keep it as lightweight as possible. In total, I'm storing less than 2 megabytes on your computer right now. Almost indefinitely; sorry about that one. The good news: it works offline. Turn your wifi off and keep browsing. How cool, and how useless. Maybe when I have a bunch of really good posts it'll be useful for reading the massive, incredibly well written, and extremely valuable backlog.

8. Page Transitions

To ensure a smooth user experience I refused to simply accept the traditional page loading mechanism, with that ugly flash of white and no animation. Initially I used SmoothState.js to achieve this effect, which is great in every way except one: it uses jQuery, which forced me to convert my entire project to jQuery. I couldn't let that stand, so I wrote my own pure-JavaScript page transition function using Fetch and Promises. Because Fetch and Promises are still not universally supported, I also load the appropriate polyfills where needed. The complete HotSwap function is below. It's not perfect, and it is still very much a work in progress (it only supports CSS animations at the moment). But hey, it's a start.


function HotSwap(startCallback, endCallback, contentSwapCallback, anchorHandling, container) {

	this.init = function() {

		document.addEventListener('click', this.onLinkClick.bind(this))
		window.addEventListener('popstate', this.onStateChange.bind(this))

		// polyfill window.location.origin for older browsers
		if (!window.location.origin) {
			window.location.origin = window.location.protocol + '//' + window.location.hostname + (window.location.port ? ':' + window.location.port : '')
		}

		// set up cross-browser animation-end detection
		this.onPageExitComplete()
	}

	this.onLinkClick = function(event) {

		let node = event.target

		// walk up from the clicked element to the nearest anchor
		do {
			if (node === null || node.nodeName.toLowerCase() === 'a') { break }
			node = node.parentNode
		} while (node)

		if (node && node.href && this.sameOrigin(node.href)) {
			event.preventDefault()
			this.swapUrl(node.href)
		}
	}

	this.swapUrl = function(url) {

		var state = {
			scrollY: window.scrollY
		}

		// remember the scroll position so back/forward can restore it
		window.history.replaceState(state, null, window.location.href)
		window.history.pushState(null, null, url)
		return this.onStateChange()
	}

	this.onStateChange = function(popStateEvent) {

		this.loadNewPath().then(response => {
			this.callBackFunction(response).then(() => {
				this.onLoadSuccess(response, popStateEvent)
			})
		})
	}

	this.loadNewPath = function() {

		const path = window.location.pathname + window.location.search

		return fetch(path)
			.then(response => {
				logger('[Page] New page successfully fetched')
				return response.text()
			})
			.catch(error => logger(error))
	}

	this.callBackFunction = function() {

		return new Promise((resolve, reject) => {

			if (this.hasUrlHashParameter(window.location.href)) {

				logger('[Page] Anchor tag encountered - skipping animation')
				resolve()

			} else {

				logger('[Page] Exiting page - animate out')
				startCallback()
				this.onCSSAnimationEnd(container, function() {
					logger('[Page] Exiting page - animation complete')
					resolve()
				})
			}
		})
	}

	this.onLoadSuccess = function(response, popStateEvent) {

		const responseObject = document.createElement('html')
		responseObject.innerHTML = response

		// hand the parsed document to the user's swap callback
		contentSwapCallback(responseObject)

		if (this.hasUrlHashParameter(window.location.href)) {
			// allow custom anchor handling (give user element id)
			anchorHandling(this.getUrlHashParameter(window.location.href))
		}

		if (popStateEvent && popStateEvent.state) {
			window.scrollTo(0, popStateEvent.state.scrollY)
		} else if (this.hasUrlHashParameter(window.location.href)) {
			var element = document.getElementById(this.getUrlHashParameter(window.location.href).substr(1))
			window.scrollTo(0, element.offsetTop)
		} else {
			window.scrollTo(0, 0)
		}

		endCallback()
	}

	this.onPageExitComplete = function () {

		let s = document.body || document.documentElement,
			prefixAnimation = ''

		s = s.style

		if( s.WebkitAnimation == '' )	prefixAnimation	 = '-webkit-'
		if( s.MozAnimation == '' )		prefixAnimation	 = '-moz-'
		if( s.OAnimation == '' )		prefixAnimation	 = '-o-'

		this.onCSSAnimationEnd = function(container, callback){

			var runOnce = e => {
				callback(), container.removeEventListener(e.type, runOnce)
			}

			container.addEventListener('webkitAnimationEnd', runOnce)
			container.addEventListener('mozAnimationEnd', runOnce)
			container.addEventListener('oAnimationEnd', runOnce)
			container.addEventListener('oanimationend', runOnce)
			container.addEventListener('animationend', runOnce)

			// no animation support, or a zero-duration animation - fire immediately
			if( ( prefixAnimation == '' && !( 'animation' in s ) ) || getComputedStyle( container )[ prefixAnimation + 'animation-duration' ] == '0s' ) callback()
			return this
		}
	}

	this.sameOrigin = function(href) {
		return href.indexOf(window.location.origin) === 0
	}

	this.hasUrlHashParameter = function(href) {
		return href.indexOf('#') > 0
	}

	this.getUrlHashParameter = function(href) {
		return href.substr(href.indexOf('#'))
	}
}

The HotSwap function then returns content to the user for handling. Here is how I’m handling the data and starting the function.

function transitionStart() {
	// kick off the page-exit animation (e.g. toggle a class on the container)
}

function transitionEnd() {
	// kick off the page-entry animation once the new content is in place
}

function contentSwap(response) {

	// assign data to be hotswapped
	const responseObjects = {
		'title': response.querySelector('head > title').innerHTML,
		'description': response.querySelector('meta[name="description"]').getAttribute('content'),
		'content': response.querySelector('#main').innerHTML
	}

	const DOMDesc = document.querySelector('meta[name="description"]'),
		DOMContent = document.querySelector('#main')

	// hotswap data
	document.title = responseObjects.title
	DOMDesc.setAttribute('content', responseObjects.description)
	DOMContent.innerHTML = responseObjects.content

	// re-run functions that are page dependent
}

function anchorHandling(anchorID) {
	const scrollElement = document.querySelector(anchorID)
	animateScroll(scrollElement, 1000, 'easeInOutQuint', 10, top)
}

const container = document.querySelector('#main')

new HotSwap(transitionStart, transitionEnd, contentSwap, anchorHandling, container).init()


9. Server

Service workers take care of cache control for newer browsers, but I don't want to forget about everyone else and force them to re-download all the assets on every page load or session, so I made use of Apache's .htaccess configuration file (yes, Nginx is probably better). CSS and JS files stay cached for a week for now; I may increase this when I feel the site is more complete. SVGs are cached for half a year.

Of course, GZIP and DEFLATE are enabled for all assets, fonts, and files to keep the site as fast and as data-friendly as possible. The 404 redirect is also specified in the .htaccess, and eventually I will add custom pages for a whole bunch of different errors (and I do mean eventually).
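For reference, the caching and compression rules described above look roughly like this in .htaccess (a sketch assuming mod_expires and mod_deflate are enabled; the 404 path is illustrative):

```apacheconf
<IfModule mod_expires.c>
	ExpiresActive On
	ExpiresByType text/css "access plus 1 week"
	ExpiresByType application/javascript "access plus 1 week"
	ExpiresByType image/svg+xml "access plus 6 months"
</IfModule>

<IfModule mod_deflate.c>
	AddOutputFilterByType DEFLATE text/html text/css application/javascript image/svg+xml
</IfModule>

ErrorDocument 404 /404.html
```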

Now, the site needed to be HTTPS, for a few reasons but mainly because I wanted it to be. The problem arose when I didn't want to pay for an SSL certificate; I couldn't stand the idea (maybe I'm just cheap). Luckily, I had stumbled upon Let's Encrypt about six months earlier. This, combined with an open-source cPanel plugin for generating Let's Encrypt certificates and setting up cron jobs to renew them, gave me exactly the solution I was looking for.

10. Refactoring

My code quality needs some attention, thought every programmer ever. But seriously, I'd like to refactor my CSS (Sass) now that the entire site is built and I know what is required. It can be cleaner, simpler and more uniform. Currently, I use Sass mixins organised by page. I now realise this was a mistake. I'd like to go back and atomise my project: make every style standalone and reusable.

Now that I've successfully eliminated jQuery, transitioned my project to ES2015 (maybe someday ES2017), refactored my code, and split up my JavaScript files, I'm temporarily satisfied with the JavaScript code base. I still intend to go back regularly, though, to clear out the crap and optimise already perfectly fine code. :)

11. General

It wasn't much extra work, so I added a manifest.json file. This allows me to specify exactly how I want the browser to display and handle my site. If you add my site to your home screen, it will behave as I tell it to. Unfortunately I can't mimic this file for iOS (it is purely for Android), but that's okay.

{
	"lang": "en",
	"name": "Jack Edgson",
	"short_name": "Jack Edgson",
	"display": "standalone",
	"background_color": "#272727",
	"theme_color": "#202020",
	"orientation": "portrait",
	"description": "Jack Edgson's personal website and journal",
	"start_url": "/journal/",
	"icons": [{
		"src": "/public/img/favicon/favicon-196x196.png",
		"sizes": "196x196",
		"type": "image/png"
	}]
}
<link rel="manifest" href="/manifest.json">

I considered adding a cache JSON file, but between the few browsers that actually use it, Service Workers, and the Apache expires headers, it seemed worthless; I figured I'd be fine without it. Of course, I've got a robots.txt file and a humans.txt file too.

Meta tags are important: each page has the basic title and description meta tags, but beyond that it has a meta theme-color tag for when manifest.json isn't supported.

<meta name="theme-color" content="#202020">

I'd like a full suite of OpenGraph meta tags for nicely rendered social sharing, but I'm still figuring out how to make this work with blog images, since they are all SVGs and OpenGraph, I think, only accepts raster images. I also know feed aggregation services like Feedly require these images to be JPG or PNG to work correctly. So I'll likely convert the SVGs to high-quality JPG images for this purpose. The site will still use SVGs, but external services will be pointed at the JPGs.
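A sketch of how that conversion could be automated, assuming ImageMagick's convert is available; the image directory here is hypothetical:

```shell
# hypothetical paths: rasterise each post SVG to a high-quality JPG
for svg in src/public/img/posts/*.svg; do
	[ -e "$svg" ] || continue        # skip if the glob matched nothing
	convert -density 300 -quality 90 "$svg" "${svg%.svg}.jpg"
done
```

The ${svg%.svg}.jpg expansion just swaps the file extension, so logo.svg becomes logo.jpg alongside the original.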

12. Conclusion

Basically, this site is my playground. I test all the latest features I want, and ceaselessly refactor and redesign everything, always in search of the perfect site. I monitor everything with GTmetrix and Google PageSpeed Insights to make sure changes improve speed and score. My Google PSI score is currently 97/100, which bugs me, but sometimes I have to sacrifice score to actually provide better speeds (sounds funny, but it's true).

If you’d like to add to this, or want the entire project’s source code, just let me know.
