
New Site Part 2 – Development Workflow


Hello, this is the second in a multi-part series about the redesign of TimNash.co.uk. In the first part of this series, I went through some of the design decisions and then spent the rest of the article talking about performance and improving load times. In this part I want to cover the workflow for building the site and how it differs from many of my previous projects.

In case you haven’t come across me or read the first part of this series: I’m not a “frontend” developer, and as such this guide should not be considered “best practice”. It’s simply what did, and in some cases didn’t, work for me.

In the beginning

When I first considered redesigning, I went through a lot of designs and mockups; some were doodles in a notebook, others were HTML mockups. This proved a useful reminder that I’m not a designer, and it provided hard boundaries between what I could and couldn’t do.

Once I was ready to go, I started setting up the project: where it would live and how it would be structured.

Normally my projects reside in Git, in either CodebaseHQ for web stuff or a self-hosted GitLab for non-web stuff.

My initial reaction was to create a new CodebaseHQ project; after all, this is “web stuff”. Now, I could tell you I had a deep and meaningful reflection on life and chose not to, but the reality was I had hit my plan limit and didn’t really fancy paying £10 more a month.
Not that I don’t think it’s great value, it is worth it, just not for me right now. Likewise I could have archived one of my projects, but I do like the exception handling reports, which did occasionally come through.

So instead I decided to use the built-in 34SP.com WordPress Hosting hosted Git solution. After all, it’s probably a good idea to dogfood your own products where possible. I was always going to hook into our Git integration at some point in the flow anyway, so I’m simply removing another step. In future I can still set the 34SP.com hosted Git repo as a remote origin for a CodebaseHQ project if I feel I want the ticketing and exception tracking.

The 34SP.com Git solution is simple to use: you SSH into your container, run a wp-cli command, and a Git repo is generated in var/repo/yoursite with a master and a staging branch. This Git repo represents your wp-content/ folder, and you can choose either to add all of its contents or to add selectively.
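As a rough illustration only (the exact wp-cli command is specific to the 34SP.com hosting, and the username and hostname below are placeholders), working with that server-side repo from your own machine looks something like this:

# Placeholders: swap user, host and site name for your own details
git clone ssh://user@yoursite.example.com/var/repo/yoursite
cd yoursite

git branch -a        # should show the master and staging branches the hosting created
git checkout staging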

This is a flexible solution but it can get you in a bit of a muddle. The hosting also auto-updates plugins where possible, so if you were to use Git to store ALL the content then your repo would get out of date quite quickly.

Therefore I use it simply to store my custom content, and in the admin panel I have those plugins set to “disable updates”, leaving the rest of the plugins to auto-update.

For this project I simply cloned the Git repo that is on the server, including both branches, and added a third branch, dev. At this point several of you are going to be screaming “where are the feature branches, how do you do fixes?”, but really this approach works well for a solo developer.

Keep in mind this is a personal site and I’m the only one touching it, so:
Code gets committed to dev, merged into staging and then pushed on to live.
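In practice that flow is just ordinary Git commands; a minimal sketch, assuming the remote is called origin and the branches are named as above:

# One-off: create the local dev branch alongside the hosting's master and staging
git checkout -b dev

# Day to day: commit work on dev...
git add .
git commit -m "Tweak front page template"

# ...then fold it into staging and push so the hosting can pick it up
git checkout staging
git merge dev
git push origin staging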

Hang on a minute, pushed?

It turned out I didn’t really use the master branch for pushing to production; instead I used the 34SP.com hosting staging features, which allow me to push the database and files over. What horror is this: Tim, one of the biggest proponents of continuous integration, manually pushing a button.

Well, I did it for one big reason: convenience. By using the staging features I could have the hosting handle database pushes and keep the databases in sync, at least during development. This meant I could do a bunch of database clean-up on staging and then push it to live; it also allowed some tweaking. While this wouldn’t have been impossible if I had relied on the Git integration, it would have been extra work.

While it was tempting to simply use the staging site as my “dev” site, especially during the initial development, the inner Tim was screaming at me, and as such I used a local dev setup.

Development Environment

My setup is pretty much the same on my work laptop and my home one; the difference is simply the nine-year gap in hardware.

I have made a few changes since I wrote my development workflow post, and I intend to do a more detailed post, but basically my development setup is:

Docker

Docker-based images, similar to the hosting setup. This is sort of not what containers are for; in reality I’m using them in a similar way people would use Vagrant, but without the overheads.

For this project the container is a CentOS 7.4 container running Nginx, MariaDB, PHP 7.2 and Redis.
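I won’t reproduce my actual Docker config here, but the shape of it is a single container built from a CentOS 7.4 base with the working files mounted in; the image name, port and mount path below are made-up placeholders:

# Illustrative only: image name, port and mount path are placeholders
docker build -t timnash-dev .    # Dockerfile starting FROM centos:7.4.1708, adding Nginx, MariaDB, PHP 7.2 and Redis

docker run -d \
  --name timnash-dev \
  -p 8080:80 \
  -v "$PWD/timnash.co.uk/deploy":/var/www/site/wp-content \
  timnash-dev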

The best code editor ever…

I use Atom as my main editor; as I find myself writing Python, PHP and Go these days, I like having something that just works and that I can swap between the three in. I’m not wedded to it and really want to try out Code by Microsoft at some point, which I have heard great things about. I do have Atom heavily configured to my needs and, like most people who use Atom, I really like it but wish it wasn’t trying to take off out of my computer. On my older Mac, Atom’s memory and CPU usage are enough to get the fan whirring after prolonged use.

Webpack, directories and making a mess

This project was also the first time I had used Webpack, after watching Laracasts tutorial videos.

I’m not a huge fan of the JavaScript ecosystem, which is a mess, and using Webpack was done under silent duress, but it’s something that I will be forced to use in another project soon, so it was better to learn it now.

It was fairly simple to set up. With hindsight I should perhaps have looked at the various Atom packages for Webpack to run tasks on save in Atom, but instead I used fswatch. fswatch watches a directory for changes and, when they occur, runs a task; in this case, Webpack.

It all worked, but I feel it’s not really optimised. My file structure layout ended up slightly confusing:

/timnash.co.uk/deploy/
/timnash.co.uk/workingcopy/
/timnash.co.uk/tests/

The folders only contained content that was to go into Git; other plugins and the WordPress core files were not included.

The deploy folder is the cloned Git repo’s dev branch; it’s also mounted into the Docker container and symlinked into the wp-content folder in the Docker version of my site.

The working copy is the copy I was editing; this meant I was doing all the minification and merging with Webpack locally, and only minified files went into Git.

The problem being that, while I have the site contents in version control, it could be argued I don’t have a site that can be immediately developed, as the tooling sits outside it. This is something I’m addressing, but for now the final versions of the files sit in the Git repo. If my laptop gets crushed it’s going to suck a tiny bit, but I will be more worried about the laptop than about recreating the Webpack setup.

So the flow is: open files in Atom in workingcopy; fswatch runs in the background to monitor changes in the workingcopy folder and then uses Webpack to minify and run any additional tasks before moving the contents to the deploy folder.
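A minimal sketch of that watch loop, assuming Webpack’s config is set to read from workingcopy and emit into deploy (paths and flags are illustrative):

#!/usr/bin/env bash
# Watch workingcopy and rebuild on each batch of changes.
# -o prints one line per batch of events rather than one line per file.
fswatch -o ./timnash.co.uk/workingcopy | while read -r _; do
  npx webpack --config webpack.config.js
done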

Webpack also takes my merge.css file and puts the contents, minified, into the head tag of my header.php. Originally the plan was to use Sass for the CSS, for no other reason than an opportunity to learn, however this was overkill for my very small CSS file: in total there are 213 lines in merge.css, and most of that is whitespace and formatting. On the page itself the minified CSS is around 2KB.

Other than the merge, Webpack is doing, well, nothing more; it excludes some files, but that’s it. Really super basic usage, and if I had realised that from the outset I probably wouldn’t have used it.

Finally, for plugins and bits not handled inside the Git repo, I have a simple Bash script (yes, again, this could be orchestrated better) which runs a series of wp-cli commands to install various plugins, with WP-CFM used to do a little bit of config management. This is not a great solution and will ultimately be changed; like most “simple” scripts it has already started to grow.
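The script itself is little more than wp-cli calls in sequence; a trimmed-down, hypothetical version (the plugin slugs and bundle name are examples, not my real list):

#!/usr/bin/env bash
set -euo pipefail

# Plugins that live outside the Git repo, installed and activated via wp-cli (example slugs)
for plugin in redis-cache wp-cfm; do
  wp plugin install "$plugin" --activate
done

# Pull stored settings back into the database with WP-CFM
# (check your WP-CFM version's docs for its exact CLI subcommand; the bundle name is an example)
wp config pull production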

Testing

For testing I mainly use Codeception, and have done for years; I have even given talks about using it. I run Codeception within the Docker container for functional and integration tests. My theme and supporting plugins don’t have unit tests and I’m OK with that, mostly because they lack any major logic, though as I bring comments back online this might change.

My integration tests fall broadly into the following categories (there’s a sketch of how they’re run after the list):

  • Acceptance testing – does it do what I expect
  • Performance tests – is it doing it as well as I expect
  • Security tests – is it misbehaving
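Those categories map onto separate Codeception suites, so running them inside the container is just a case of picking the suite (the suite names below mirror the categories and are an assumption about my layout rather than Codeception defaults):

# Run everything
vendor/bin/codecept run

# Or a single suite, e.g. just the acceptance checks after a plugin update
vendor/bin/codecept run acceptance

# The performance and security suites only run locally against the Docker instance
vendor/bin/codecept run performance
vendor/bin/codecept run security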

One big area currently missing is accessibility, which is an oversight I’m looking to correct; I’m currently looking at headless accessibility tools that could be used to generate meaningful tests.

My belief is that tests should help, not hinder, and provide usable output that helps you prevent things going wrong.

To this end, when I’m writing a feature I tend to decide what I expect the output to be and write an integration test for that expected output prior to coding.

I can then go ahead and write the feature; by keeping the test broad I’m simply setting a scope and expected behaviour. Once I have a passing test I can move on. It may be that the original test was too broad, in which case I break it into smaller tests. Likewise, if I hit logic that requires more brain power, then I might write a unit test, especially if I have a non-binary return.

Testing methodology purists are screaming, but as this post develops I seem to be angering every group in tech. This approach keeps me moving while still providing a core testing methodology.

I tend to see these acceptance tests as “did it do what I expect”, as much about maintenance as development. Once I have these tests I can run them fairly quickly in numerous scenarios, including whenever I update a plugin, or against a backup to test it’s working.

Security and performance tests are more specific. For performance, I currently only run these locally, as they call a separate Docker instance running WebPageTest; to be honest these tests are slow and crude, returning failures if certain parameters regarding load times and specific optimisation scores are not met.

Likewise, the security tests are a set of integration tests checking for certain features: if you log in with the dummy user on my Docker instance, do they get challenged for their 2FA key? It also runs OWASP ZAP against the local instance. Finally, a simple integration test checks and compares the headers to make sure security headers are set.
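The header comparison is the easiest of those to show; something along these lines, with the ZAP baseline scan run from its official Docker image (the hostname and the header list are placeholders):

#!/usr/bin/env bash
# Placeholder URL for the local Docker instance
SITE="http://localhost:8080"

# Fail if any expected security header is missing from the response
for header in strict-transport-security x-frame-options content-security-policy; do
  curl -sI "$SITE" | grep -qi "^${header}:" || { echo "Missing header: ${header}"; exit 1; }
done

# Passive OWASP ZAP baseline scan against the same instance
docker run --rm -t owasp/zap2docker-stable zap-baseline.py -t "$SITE"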

For the staging site, Codeception is run from within the container but only runs the acceptance integration tests; this is triggered using the built-in tests feature within the 34SP.com hosting. By default the hosting will run any shell scripts in the tests folder and will only deploy to the staging site if they return a 1. This is great if you’re running, say, CodeSniffer or indeed unit tests, but not for anything that needs the site to load. Instead the script returns a 1 and triggers a PHP script that sleeps for 60 seconds and then triggers the acceptance tests, which gives the deployment scripts a chance to run. This is a bit of a hacky way to run the tests. In theory the hosting also allows triggering a WordPress action when it’s fully deployed, and I could have hooked in there.
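My real version hands off to a PHP script, but the shape of the trick translated into shell is roughly this (the delay and return value are as described above; paths and everything else are illustrative):

#!/usr/bin/env bash
# Runs as the hosting's test step: return straight away so the deployment continues,
# then check the freshly deployed staging site once it has had time to settle.
(
  sleep 60                              # give the deployment scripts a chance to run
  vendor/bin/codecept run acceptance    # results are reported via Slack, not the exit code
) > /tmp/staging-tests.log 2>&1 &
disown

exit 1    # the value the hosting's tests feature expects, as described above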

Because of this hacky way of doing things, failing tests do not prevent deployments; rather, they notify me of the failure. This is done using Slack, with a bot written in BotMan. I have been slowly adding more commands to manage the site using BotMan; for example, it posts comments for approval with a simple interface for approving them. It also notifies me of fatal errors and some thrown exceptions. Finally, it notifies me when I log in, if I’m not on my home or work networks.

My test coverage is not 100%, and is really quite broad-brush, but it is slowly being added to over time.

Backups

I have a very funny/not funny story about backups, but for this site I have the hosting’s own backups, in addition an rsync script that runs post-update and dumps the database to remote storage, and a third option using a service called CodeGuard to also store files.
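The database part of that is only a couple of commands; a rough sketch, assuming wp-cli for the dump (the remote host and paths are placeholders, and my real script may differ):

#!/usr/bin/env bash
set -euo pipefail

# Placeholder destination for the off-site copy
REMOTE="backup@storage.example.com:/backups/timnash.co.uk/"

# Dump the database, compress it, then sync it off the server
wp db export - | gzip > "/tmp/timnash-$(date +%F).sql.gz"
rsync -az --remove-source-files /tmp/timnash-*.sql.gz "$REMOTE"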

The Git repo is held locally and on the server, and it is also pulled to a separate server at home. With the exception of images, this means everything that is custom is in at least three locations at any one time, and images are in two.

Backups are important… do backups.
Because this site is not updated frequently, hourly backups are not needed, but if you’re in a scenario where losing an hour of your site would be a disaster then do hourly backups.

Testing backups: I don’t explicitly test backups, however the rsync’d database is pulled by the development Docker image on creation, meaning it is in effect “tested” each time I do any work; similarly, the Git repo holds the other information. I could automate this, or pull backwards from live to staging and run tests that way, to have a fully tested backup solution.

Centralising setup

This is my current project and is in a bit of a state of flux, but getting everything up and running at the moment does take a few minutes of my time, and the goal is to make the whole process as automated as possible.

Then there is also the matter of the tests, Webpack configs and other bits not being centralised in Git and just sitting in multiple locations.

My intention, therefore, is to set up a Git repo on GitLab with the tests and Webpack config, with the main Git repo being pulled into it as a Git submodule.
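Setting that up is a one-off submodule command; a sketch assuming the GitLab repo already exists (all URLs are placeholders):

# Inside the new tooling repo cloned from GitLab: pull the site repo in as a submodule
git submodule add ssh://user@yoursite.example.com/var/repo/yoursite deploy
git commit -m "Add site repo as a submodule"

# On a fresh machine, clone the tooling repo and its submodule in one go
git clone --recurse-submodules git@gitlab.com:example/site-tooling.git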

The next stage is creating a more reliable build script; for a lot of projects I use Puppet, which here would be massively overkill. It was suggested I could use Webpack, which is an interesting choice. It doesn’t run on production, but then the current script uses aliases and is run locally anyway.

Like many small side projects, when I set things up initially I had the best intentions in the world, but midway through it was a total mess, and now I’m slowly working it back into an ordered project.


Lots of rambling later, this article covered the “how it works” and general bits that I never believed people would be interested in, but apparently you are!

In the next article I will be taking a look at the security side of things, including setting up two-factor authentication, security headers and lots of the stuff happening behind the scenes.

 
