In the last year I’ve either deployed or inherited about 10 new WordPress installations, and managing them became a mess that quickly ate too much of my time. Quite a few of my friends seem to have the same problem – so here’s a quick overview of how I approach it.
Everything I describe here works on OS X or Linux and probably on Windows, as the tools are all either PHP or Python based.
Keeping up with updates
Clients don’t update their plugins or WordPress itself, and when they do, they won’t read the changelogs carefully enough to judge whether an upgrade would break something. I use InfiniteWP for this. It’s a standalone PHP installation that connects to your WordPress sites via the InfiniteWP Client plugin. It’s free, with some commercial add-ons. You can set it up to email you when there are new updates, and it supports remote backups of your sites, which will be useful in later stages.
From a security standpoint, a central dashboard with access to every site is definitely not optimal, but at the moment – not updating seems the greater risk.
Local development environment
For each client’s site, I keep a local copy running on my computer. Depending on your preferences you might use something like MAMP or XAMPP, which package MySQL, PHP and an Apache server together. One thing to watch out for is running your local environment under the same major version of PHP as the server, since a mismatch is often a source of bugs (my local PHP would support newer syntax than the one on the server).
For each site, I would have a local alias – http://sitename.local/ – to ensure that I don’t accidentally change things on production.
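On OS X or Linux, such an alias can be as simple as an /etc/hosts entry pointing the name at your own machine (sitename.local is of course a placeholder):

```
# /etc/hosts
127.0.0.1   sitename.local
```

Your MAMP/XAMPP virtual host configuration then serves the site under that name.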
For things I would develop, usually a theme and an extra plugin, I would store them in git to keep revision history and feature branches.
I have yet to find a good way to version plugins, so for now the tactic is to keep up with the latest version of each plugin, use as few plugins as possible, and use only ones from developers that have release blogs and sane release practices.
Synchronising production to local environment (manually)
Sometimes I don’t have shell access to the server – in that case I would use either InfiniteWP to generate a database dump (from the InfiniteWP dashboard) or UpdraftPlus from within the WordPress dashboard.
Locally, I would then use wp-cli to reset local database:
wp db reset
and import new database:
wp db import sitename_db.sql
wp-cli can rewrite URLs in the database (wp search-replace), but it’s usually not needed. What I do instead is modify my local wp-config.php to have:
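Something along these lines, with the hostname being whatever local alias you set up:

```
// wp-config.php (local copy only) – override the URLs stored in the database
define( 'WP_HOME',    'http://sitename.local' );
define( 'WP_SITEURL', 'http://sitename.local' );
```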
This allows me to use copy of production database, without WordPress redirecting my logins to production URL.
For the contents of wp-content/uploads I usually don’t bother, as I can easily fix things without seeing the images in the last few blog posts.
Synchronising production to local environment (automated)
For the sites where I have shell access and can install wp-cli on the server, I have Ansible scripts (more on that later) that run:
wp db dump
on the server and then copy the dump to my dev environment, where they import it using the wp db reset and wp db import combination.
This means that I can sync production to my local environment in less than a minute, making it a no-brainer to test and tweak things locally and not on production.
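As a rough sketch, the whole round trip looks like this – the host, user and paths are placeholders, and with DRY_RUN=1 (the default here) the script only prints its steps instead of executing them:

```shell
#!/bin/sh
# Sketch of the production-to-local sync; host, user and paths are placeholders.
# With DRY_RUN=1 (the default here) each step is printed, not executed.
set -e

DRY_RUN="${DRY_RUN:-1}"
REMOTE="${REMOTE:-servername}"
SITE_PATH="${SITE_PATH:-/home/username/sitename}"
DUMP="sitename.sql"

# Print the command in dry-run mode, otherwise execute it.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. dump the production database on the server
run ssh "$REMOTE" "cd $SITE_PATH && wp db dump ~/tmp/$DUMP"
# 2. copy the dump down to the local machine
run scp "$REMOTE:tmp/$DUMP" "$DUMP"
# 3. reset the local database and import the fresh copy
run wp db reset --yes
run wp db import "$DUMP"
```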
Applying changes to production
For themes and custom plugins on sites where I only have FTP access, I use git-ftp, which lets me push to an FTP server using git ftp push. It keeps track of which revision is on the server and uploads only the difference. It does mean that you never change things on the server directly, but have to go through committing to git first (which I consider a good thing).
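The one-time setup is small; a typical session looks something like this (the URL and credentials are placeholders):

```
git config git-ftp.url "ftp://ftp.example.com/public_html"
git config git-ftp.user "username"
git config git-ftp.password "secret"   # better: keep credentials out of the repo

git ftp init    # first run: uploads everything and records the revision
git ftp push    # later runs: uploads only files changed since the last push
```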
For environments with shell access you can just ssh in and use git on the other side to pull in changes. It works, but it’s a couple of additional steps.
Lately, I’ve been automating these tasks with Ansible playbooks, which allow me to have simple scripts like:
- hosts: server1
  tasks:
    - name: update theme
      git: repo=git@server:themename.git dest=/home/username/sitename/wp-content/themes/themename
or to grab database dump
- hosts: server
  tasks:
    - name: wp db dump
      command: /home/username/.wp-cli/bin/wp db dump /home/username/tmp/sitename.sql chdir=/home/username/sitename
    - name: copy db to ~/dbdumps/
      local_action: command scp servername:tmp/sitename.sql /home/username/dbdumps/sitename.sql
These can then be easily extended, or a separate playbook file can drop the local database and import the new copy. To run a playbook you just use ansible-playbook dbdump.yml (and similar), and it gives you a full report of what’s happening.
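A sketch of such an import playbook could look like this – the local paths are assumptions, using the same old-style Ansible syntax as above and local_action to run both steps on the workstation:

```
- hosts: server
  tasks:
    - name: drop local database
      local_action: command wp db reset --yes chdir=/home/username/dev/sitename
    - name: import fresh dump
      local_action: command wp db import /home/username/dbdumps/sitename.sql chdir=/home/username/dev/sitename
```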
For bigger and more complex setups you would extend this to support rollbacks and different revision models, but that’s beyond the scope of my current WordPress projects.
Scripting these tasks always seemed like something not worth doing, as they were just a couple of shell commands or clicks away. But as the number of projects grew, it became annoying and much harder to remember the specifics of each server setup: passwords, phpMyAdmin locations and so on.
With things fully scripted, I can now get a request from a client, automatically sync whatever state their WordPress is in at that moment in just a minute, and see why the theme broke on a specific article. It saves me a crazy amount of time.
At the moment I try to script anything I see myself typing into a shell more than three times, and so far it has been worth it every time, as these scripts suddenly become reusable across different projects.