Making – Part 2, The developer strikes back


In part 1 of my “how did I make this site” series (which, in case you are using an RSS reader or are on a site that scraped mine, lives on my own site), I went through my choice to focus on the writing experience of the site rather than it being a developer playground. The article covered my choice of plugins and theme, as well as the idea that I wanted a site that made it easy for me to write.

Those who know me well may have stared at that article with suspicion: Tim using off-the-shelf components and nothing else? I simply don’t believe it!

So ok, you got me, I couldn’t entirely leave the developer side alone, so in this article I’m going to look at the custom side of the site. I do recommend reading Part 1, Making – Plugins and Theme, first if you haven’t done so; we have plenty of time and can wait.


Code Management

Before we get to the custom code let’s roll it back a bit and talk about how I manage the site. 

My site sits on Managed WordPress hosting that comes with built-in Git integration. The way we set up Git on the Managed Hosting is that the host creates a Git repository with a master and a staging branch. If you push to master, it copies the contents to your live site; if you push to staging, it copies to your staging site.

So you can set up the Git repo as an origin, clone it and off you go, or you can add it as a remote and push to it. Both ways work. The big advantage of doing it this way, over adding the “server” as a user to your existing repo, is that the server controls what it will accept in push requests. For example, by default it lints PHP code; it will also run anything in your tests folder, allowing you to specify tests for deployment. If things fail, it rejects the push and lets you know directly in the Git client feedback.

There is nothing wrong with just using this Git repo, and for lots of projects that’s what I do; however, for my own site I also keep my custom code in a separate Git repository on CodebaseHQ.


CodebaseHQ is a code hosting service similar to GitHub/GitLab/BitBucket. Built by aTech Media, Codebase has been my on-and-off code storage solution for over 10 years, alongside other aTech products.

There are many reasons to use Codebase over the larger competitors. It has some fantastic features, some of which the bigger companies have never replicated, but the main reason is they are a local UK company and, unless they are going to DM me, have literally no evil clients.

A few reasons you might want to check them out:

  • Really good issue tracking that effectively allows you to run projects directly from within Codebase
  • Wiki features in the form of notebooks
  • Time tracking 
  • Exception and error handling (see below)
  • Really simple yet flexible API and Webhook system

I sound like a shill for them, but I really do think that for a lot of smaller companies they are the ideal choice, especially over GitHub private repositories.

If I was smart I would totally have an affiliate link… I don’t, carry on.

I keep the site in a single project, and I organise most of the site tasks within Codebase. For example, right now I have a pair of Milestones:

  • Project Speed – A general milestone for exploring and experimenting with speed/performance improvements
  • V2 2020 Overhaul – A milestone to collect any big-ticket items I’m looking to change in the second half of the year

Outside of these two big milestones I have my general tickets; these are either raised within CodebaseHQ itself or, more likely, by me sending an email.

Whenever a ticket is created it also adds a Todo within Todoist via some horrifying spaghetti code which will never see the light of day. Likewise, if the issue is closed, either through email/the site or through a Git commit, the Todo is removed. In theory, closing the Todo in Todoist should also close the issue, but in reality that has never worked reliably.

So my usual workflow for a bug I don’t have time to fix immediately is: send an email to Codebase, which generates a ticket and a Todo; work on the ticket and close it, which removes the Todo. Prompting to work on the ticket, and escalation, is all done in Todoist.

Pushing to Multiple origins

Pushing to two places at once in Git turns out to be remarkably, for Git, easy. To get it set up:

  • Clone your original repo; in my case that’s the hosting one
  • Add the second as a Git remote (the URL here is a placeholder):
    git remote add codebase <codebase-repo-url>
  • Set origin to push to both repos by adding both as push URLs:
    git remote set-url --add --push origin <hosting-repo-url>
    git remote set-url --add --push origin <codebase-repo-url>

If you then run `git remote show origin`, both push URLs are now showing, and when you push, both are pushed to.

IMPORTANT – The second repo you add as a remote needs to be blank or match the first, otherwise this is going to become fun.

It’s worth noting that only push is mirrored, so other commands that call origin, for example pull, will still only operate against your primary repo, which in my case is the one I cloned from.
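If you want to see the mechanics without touching a real server, the whole thing can be tried with two local bare repositories standing in for the hosting and Codebase remotes (all paths here are illustrative, not my actual setup):

```shell
set -e

# Two stand-in "servers": the hosting repo (primary) and the Codebase mirror
tmp=$(mktemp -d)
git init -q --bare "$tmp/hosting.git"
git init -q --bare "$tmp/codebase.git"

# Clone the primary, exactly as you would from the hosting
git clone -q "$tmp/hosting.git" "$tmp/site" 2>/dev/null
cd "$tmp/site"
git config user.email "you@example.com"
git config user.name "You"

# Add both URLs as push URLs on origin (fetch/pull still use the primary only)
git remote set-url --add --push origin "$tmp/hosting.git"
git remote set-url --add --push origin "$tmp/codebase.git"

# One commit, one push, both repos receive it
echo "<?php // hello" > index.php
git add index.php
git commit -qm "first commit"
git push -q origin HEAD

# Both push URLs are now listed for origin
git remote get-url --push --all origin
```

After the push, running `git log` against either bare repo shows the same commit, which is the whole trick.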

This way I can keep my code neatly in two separate repositories. Now, I’m aware I have basically broken Git, and this only realistically works if one person is working on a project. I have also turned a decentralised system into a centralised system in two places.

One of the bigger issues is making sure the two repos stay in sync, and the main cause of drift, at least for me, is testing. The Managed Hosting Git integration provides a way to run tests against your deployed code and rejects the deployment if, for example, it fails a code lint.

This is great, and stops code that will kill your site from being deployed; however, it can mean that one of the origin servers has rejected the commit while the other accepted it. Normally this isn’t an issue, as you just fix and push again and they resync.

I tend to work with three branches. The Git integration specifies two, master and staging; new commits pushed to the origin on either of these branches will be deployed. So I do my actual work on a dev branch and merge into master or staging depending on what I’m doing.

What goes into the repo?

I have kept saying custom code: I don’t commit any code that is available via the auto-updater system, so nothing from sources which hook into the WordPress auto-update system goes into the repo.

So my git repo looks like:


So there is a Composer file in the root of my site, but no vendor folder.

Where is the vendor folder…?

Not in Git. I’m not here to start a flamewar, but the point of package managers is that they manage your packages. The moment you put something in Git yourself, the manager is no longer in control, you are. Which might be ok in certain scenarios, but for me, I would rather let Composer do its thing.

Also that Composer file, it has nothing to do with WordPress… 

Well, that’s not exactly true; it’s for handling code that currently sits outside of the WordPress plugin/theme eco-system. Specifically, I use Composer to manage a pair of packages I use for environment variables and exception tracking. Both of these need setting up before the point in the WordPress loading sequence where there is a suitable hook.

Consequently, they are set up in the my-config.php file, which is a file provided by the hosting that is “required” in the non-writable wp-config.php file for additional directives. Normally it’s used for adding things like WP_DEBUG defines or similar, but I use it as an early-stage location to add some code.

So why don’t I put all the plugins and theme into Composer and let Composer manage everything?

It’s a good question, and the simple answer is that it would require me to write a package management update system or use a third-party one, and the hosting already has a plugin update system that works well.

If you have read my Back to Basics – Updating WordPress Strategies you will hopefully have got the impression that, when it comes to keeping things up to date, I am very much pro full automation wherever possible. Therefore one of the criteria for the site is that it should manage itself if I stop looking after it for prolonged periods.

At the moment, all the plugins on my live/staging site auto-update daily. On my dev site, plugins update when I open the project files in my IDE, Atom. In addition, I have a pre-commit hook in Git that runs the auto-update and fails the commit if a plugin has been updated, allowing me to retest if need be, or simply commit again.

Yes, in theory, I could do all of this with Packagist, wpackagist and some cron jobs, but the current setup has thousands of sites using it, is robust and has decent feedback systems. Why reinvent the wheel?
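The pre-commit hook idea can be sketched in a throwaway repository. This is a toy demonstration rather than my actual hook: `touch plugin.updated` stands in for running the real plugin auto-update, and the point is only that the commit is refused whenever that step changed something:

```shell
set -e

# Throwaway repo with one committed file
tmp=$(mktemp -d)
git init -q "$tmp/site"
cd "$tmp/site"
git config user.email "you@example.com"
git config user.name "You"
echo "v1" > plugin.php
git add plugin.php
git commit -qm "initial"

# The hook: run the "updater", then block the commit if it changed anything
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
touch plugin.updated   # stand-in for running the plugin auto-update
if [ -n "$(git status --porcelain | grep plugin.updated)" ]; then
    echo "plugins updated during pre-commit: retest, then commit again" >&2
    exit 1
fi
HOOK
chmod +x .git/hooks/pre-commit

# This commit is blocked because the "updater" made a change
echo "v2" > plugin.php
git add plugin.php
if git commit -qm "change" 2>/dev/null; then
    result="committed"
else
    result="blocked"
fi
echo "$result"
```

The real hook swaps the `touch` for the updater command and watches the plugins directory instead of a single marker file.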

So what are the two packages?

Keeping things organised with .env

The first package in my Composer file is a dotenv library (phpdotenv), though if I were to recommend one I would lean towards an alternative.

This reads a bunch of variables in from a .env file and lets me use them wherever I like within my code. Why might I do this?

I have, in effect, three environments – my local machine, staging and live – and each at times needs different variables, so each has its own .env file. Within the my-config.php file I load in the env at the start:

use Dotenv\Dotenv;
$dotenv = Dotenv::createImmutable(__DIR__.'/../');
$dotenv->load();

And then I can use them at any time, for example:

$value = $_ENV['MY_VARIABLE'];
This means any of my custom code can make use of the .env file, but I also have a simple mu-plugin that looks for variables called OPTION_* and then applies a `pre_option_{$option}` filter, allowing me to serve any option normally held in the wp_options table via the env file instead. This lets me set separate API keys etc. for plugins on local/dev/live.
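For illustration, a .env file is just KEY=value lines; the names below are made-up examples, not my actual configuration:

```shell
# .env – one of these per environment (local / staging / live)
AIRBRAKE_ID=123456
AIRBRAKE_KEY=abc123def456

# Picked up by the mu-plugin and served via the pre_option_blogname filter
OPTION_blogname="My Dev Copy"
```

The same keys exist in each environment’s file; only the values differ.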

Exception Tracking

The second package is Airbrake’s phpbrake. Airbrake is a language-agnostic exception tracking service and open-source standard. The idea is that, instead of reporting your error or thrown exception to your local logs, you send it to an exception tracking service. There are lots of these services, and most have their own API for handling the data sent to them. Airbrake opened up their API, which means multiple services can act as Airbrake endpoints, including Codebase.

What does this mean? Well, with Airbrake set up and configured on the server and in Codebase, whenever my site throws an exception or triggers a warning/fatal error, it sends an HTTP notification to Codebase, which generates an exception report. This contains a stack trace and other useful information. Codebase then groups them together, so if you have the same error over and over it just includes them in the same report.

This means you can go into Codebase’s Exceptions tab, see all the errors, raise tickets and notes, and ultimately close/delete them directly from the interface.

To get it going I just install the ‘phpbrake’ package from Airbrake, then:

/**
 * Setup Airbrake for exception tracking to Codebase
 */

// Create new Notifier instance, pointing to Codebase
$notifier = new Airbrake\Notifier(array(
    'projectId'  => getenv('AIRBRAKE_ID'),
    'projectKey' => getenv('AIRBRAKE_KEY'),
    'host'       => '', // the Codebase endpoint URL goes here
));

// Set global notifier instance.
Airbrake\Instance::set($notifier);

// Register error and exception handlers.
$handler = new Airbrake\ErrorHandler($notifier);
$handler->register();
All of this sits within the my-config.php file.

That’s it. I can now explicitly throw an exception and it will appear, and any errors will show up naturally. This makes debugging quicker and easier, and because of notifications in CodebaseHQ I get told about errors quickly, not just when I happen to look in my PHP error log.

Custom Plugins

I have a few custom plugins and mu-plugins. I have already talked about my options_to_env plugin above. Beyond that, the two of most interest to people will be my security headers and theme_fiddles plugins, and both will disappoint.

I really try to keep things small and single-purpose. Indeed the entire code in my tn-security-headers plugin is:

function tn_security_headers() {
	header( 'strict-transport-security: max-age=31536000; includeSubDomains; preload' );
	header( 'X-Frame-Options: SAMEORIGIN' );
	header( 'X-Xss-Protection: 1; mode=block' );
	header( 'X-Content-Type-Options: nosniff' );
	header( 'Referrer-Policy: strict-origin-when-cross-origin' );
}
add_action( 'send_headers', 'tn_security_headers' );

Likewise, my tn-theme-fiddles is similarly lightweight:

remove_action( 'wp_head', 'wp_generator' );
remove_action( 'wp_head', 'wlwmanifest_link' );
remove_action( 'wp_head', 'rsd_link' );
remove_action( 'wp_head', 'wp_shortlink_wp_head' );
remove_action( 'wp_head', 'adjacent_posts_rel_link_wp_head', 10 );
add_filter( 'the_generator', '__return_false' );
remove_action( 'wp_head', 'print_emoji_detection_script', 7 );
remove_action( 'wp_print_styles', 'print_emoji_styles' );
remove_action( 'wp_head', 'rest_output_link_wp_head' );
remove_action( 'wp_head', 'wp_resource_hints', 2 );
add_filter( 'the_seo_framework_indicator', '__return_false', 10 );
// Make sure all images come from the cdn
// (assumption: this hangs off the wp_calculate_image_srcset filter;
// the original and CDN URLs were stripped from this snippet)
add_filter(
	'wp_calculate_image_srcset',
	function( $sources ) {
		$return = array();
		foreach ( $sources as $source ) {
			$source['url'] = str_replace( '', '', $source['url'] );
			$return[]      = $source;
		}
		return $return;
	}
);

// Strip the Stream comment from the generated output
add_action(
	'template_redirect',
	function() {
		ob_start(
			function( $o ) {
				return preg_replace( '/^\n?<!--.*?[S]tream.*?-->\n?$/mi', '', $o );
			}
		);
	}
);

The security headers plugin just applies the headers on every page. Really this could be done in Nginx, but at one point some headers changed depending on the page; it has been simplified over time.

The theme fiddles plugin is very much the “functions.php” file of the site, but in plugin form so I can keep some control over it. It’s mostly removing things in the head I am uninterested in, though it does include a fix (for posts using the classic editor block) where the ‘subscr’ link would point to the wrong URL.

That’s it, how dull and boring, but that’s the point: I’m trying to be dull and boring. Just a few very small custom plugins, with everything else using existing plugins, was the goal. Many of the plugins I use I could make myself, especially if I were chasing total performance and willing to sacrifice settings pages for configs.

What other code is running?


The hosting takes backups daily in the morning, just after any updates are done, and stores them for 28 days. I also use a free service called CodeGuard that backs up all the code on the site daily as well, excluding the .env file and the wp/ folder itself, as I don’t manage that; the hosting does.

In addition, I have a small script that logs on, runs ‘wp db export’ and stores the result on my home NAS. When I start developing locally (by opening the project in Atom while on my home network or connected to the VPN), it grabs the latest backup and applies it to my local environment. This way I am always testing my backup.
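A sketch of what that script boils down to – the host and paths are placeholders rather than my real setup, and by default it only prints the command it would run:

```shell
# Hypothetical nightly export: pull a DB dump from the site to the NAS.
HOST="${HOST:-me@my-wordpress-host}"
SITE_PATH="${SITE_PATH:-sites/example.com}"
NAS_DIR="${NAS_DIR:-/mnt/nas/backups}"
STAMP=$(date +%Y-%m-%d)

# `wp db export -` writes the SQL dump to stdout, so it can be streamed
# straight over SSH into a dated file on the NAS.
CMD="ssh $HOST 'cd $SITE_PATH && wp db export -' > $NAS_DIR/db-$STAMP.sql"

# DRY_RUN=1 (the default here) just shows the command
if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $CMD"
else
    eval "$CMD"
fi
```

Set DRY_RUN=0 and real values for the placeholders to actually run it, typically from cron.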

So my backups are:

  • The hosts own daily backup
  • All custom code is in 2 repos
  • I back up the code to CodeGuard
  • I back up the MySQL DB to a local NAS, as well as using it as the basis for local development.

I’m totally not paranoid, and have definitely not lost years of posts in the past and had to rely on someone else’s archive to get them back, oh no.

WP-CLI commands

I have a few maintenance commands that I hold in a separate package and just require as and when I need them. Mostly these are old test commands, option exports and a few quick commands for clearing the cache. These are the dirty bash scripts of the WordPress admin: you wouldn’t share them, but they quickly allow you to test things. My test for finding Gutenberg blocks is an example of the sort of script that lives in my wp-cli commands folder.

While typing this I realised how unproud I am of this collection, so I have raised a ticket in Codebase to clean them out and organise things a little more logically.

So is part 3 Return of the Writer?

So there you have it: in part 1 we looked at how I was trying to simplify and make sure my over-engineering self wouldn’t dominate my site, so I could get on with writing. In this second part, we can see that’s not exactly how it’s gone, and I clearly have more work to do.

So what of part 3? This is an ongoing project, after all. Since I wrote part 1 things have changed: plugins have gone, and a new plugin has arrived. But I think I will save the things that have changed since I started writing these posts for part 4. Instead, part 3 will be how I write posts and my Gutenberg workflow!

Want to learn more?

This post is from a series called Making; here is the series so far:


This post was written by me, Tim Nash. I write and talk about WordPress, Security & Performance.
If you enjoyed it, please do share it!
