Jem’s Guide to WordPress Website Performance Auditing and Optimisation
Website performance is an important factor in the success of your website. Site speed affects conversions, search engine traffic and sales. Lighter websites also use less power (climate change – argh!), less bandwidth, and less disk space. But you knew that already, which is why you’re here.
What you perhaps don’t know is where to start. Here are some of the tricks and tools I use to perform a basic WordPress website performance audit for our clients, and how you can do the same for your sites.
Assumptions
This guide assumes you’re auditing or tinkering with a WordPress website. Some of the tips work for other platforms, but for the most part they’re WP-centric.
This guide also assumes you have basic admin knowledge of WordPress, know your way around the web a bit, or generally have some degree of tech literacy.
Important Notes
It’s important to note that the cost of ‘fixing’ some of these points vs the value of doing so may not always add up – micro optimisations and tiny tweaks are great for nerds chasing the big green ‘100’ in Google’s PageSpeed Insights tool, but may not offer any real world improvement to actual customers.
Some of these points may be more important for one type of website but not another, e.g. a big e-commerce site is going to have different issues to Joe Smith’s recipe blog.
Lastly, this isn’t the be-all and end-all of web performance. This is about tackling some of the big easy wins for tinkerers and devs, not a complete course on performance optimisation and auditing.
First Steps
These first steps apply to any website, and should take priority in your checks:
Check for HTTP/2 support
HTTP (HyperText Transfer Protocol) is, in simple terms, the method by which the browser communicates with a web server. Your browser makes an HTTP request, and the server responds with an HTTP response. HTTP/2 is the successor to HTTP/1.1 and brings many improvements. In particular, and relevant to web performance, it has the ability to send multiple requests over a single connection, which increases the speed at which a website gets from the server to the browser.
At the time of writing, w3techs estimates that 41.6% of websites are using HTTP/2, and it’s crucial that yours is one of them. You can use this tool to check if your website is hosted on a server that supports HTTP/2. If it is, move on to the next point. If it isn’t, your number one priority should be to contact your host to see if HTTP/2 is available, or to upgrade the site’s web hosting, before doing anything else.
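curl can answer the same question from the command line, too. A quick sketch, substituting your own domain:

curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://www.mywebsite.com/

This prints ‘2’ if the response came over HTTP/2 (or ‘3’ for HTTP/3), and ‘1.1’ if not.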
Check for redirect chains
A redirect chain is a series of redirects in a row from URL A (the page you request) to URL Z (the page you end up on). Although these are quite common for internal pages, especially when clients start messing with e.g. product category structure, they also occur on homepages frequently.
Redirect chains are costly to both performance and SEO, and while discussing the latter is out of scope for this post, you will want to thank me later for fixing this one.
A common redirect chain might be from http://mywebsite.com to https://mywebsite.com to https://www.mywebsite.com – and sometimes there’s a https://www.mywebsite.com/index.htm in there for good measure.
Each of the ‘hops’ in a redirect chain adds response time that can be avoided. Use a redirect checker tool to check how many hops your homepage and key pages contain before they actually load, and where possible trim out any ‘middle’ hops.
Better yet, avoid the need for hops in the first place by avoiding changing URL structure and ensuring that any external links (where under your control, e.g. social media) point to the final destination, not a part of the chain.
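Again, curl makes a quick job of this; a rough sketch (hypothetical domain) that prints the status line and Location header of every hop:

# -s silent, -I HEAD requests only, -L follow redirects
curl -sIL http://mywebsite.com | grep -iE '^(HTTP|location)'

One status line in the output means no chain; three or four means you have hops to trim. (Some servers treat HEAD requests differently to GET, so verify in a browser too.)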
Check for DNS issues
Similar to internal redirect chain issues, it’s possible to end up with DNS chaining issues. I see it quite often with clients who’ve changed host a few times. Usually the client originally hosted with an agency but has since moved elsewhere. It generally looks a bit like this:
- The agency keeps the domain in their control with the nameservers pointing to server A (original host)
- The client, to save paying admin fees to the old agency (yes, some charge), doesn’t bother to move the domain and just asks for it to be pointed to server B
- The DNS lookup now goes registrar → server A → server B
This chain of DNS hops creates a significant delay that could be resolved by pointing the domain’s nameservers or A record straight from the domain registrar to server B.
MX Toolbox DNS Lookup Tool can help diagnose DNS issues.
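If you have dig available, it’s also a quick way to confirm where the nameservers and A record actually point (hypothetical domain below):

# Which nameservers answer for the domain? These should belong to the current host.
dig +short NS mywebsite.com

# And where does the site itself resolve to?
dig +short A www.mywebsite.com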
Check the PHP version
This usually requires access to the hosting control panel, e.g. cPanel or Plesk, or access to the file system to dump phpinfo() somewhere. You need to look for a minimum of PHP 7.0. This is probably controversial as PHP 7.4 has not long reached end of life (Nov 28th 2022 to be precise) but there are still questions over WordPress’ support of PHP 8.0 and beyond*.
What you definitely don’t want is PHP 5 (any version, or earlier!): Altamira benchmarks WordPress websites running on PHP 7 as nearly twice as fast as those running on PHP 5.6, and Kinsta benchmarks WordPress 5.9 on PHP 8.1 as running around 47% faster than on PHP 7.4.
*I’ve been actively developing in PHP 8 for around 12 months without issue.
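If you have file access but no control panel, a minimal sketch to confirm the version (the filename is arbitrary, and you should delete the file as soon as you’re done, since phpinfo() leaks server details):

<?php
// version-check.php - upload, visit in a browser, then DELETE.
echo 'PHP ' . PHP_VERSION;
// Or, for full configuration details: phpinfo();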
Check for the existence of robots.txt and favicon.ico
Manually creating robots.txt and favicon.ico files is easily overlooked when a site is launched, and WordPress will ‘helpfully’ try to serve these for you. However, this isn’t without performance implications.
In the case of robots.txt, WordPress has to load and run through the template loader before it generates the stand-in robots.txt output.
In the case of the favicon.ico file, WordPress will try to find the favicon via get_site_icon_url() (which looks to see if the ‘site_icon’ option is set) before redirecting and, likely, eventually throwing a 404.
It’s easiest to look for the presence of these files in the filesystem itself, but without that access (e.g. auditing a 3rd party site) there are clues in the Network tab of your browser Dev Tools. The robots.txt generally returns a ‘Link’ response header referencing the REST API URL if it’s automatically generated:
[Screenshot: the Network tab showing the ‘Link’ header on an auto-generated robots.txt]
The favicon, if missing, will return a 404 status code, or an abnormally large file size if it’s generating it from an image.
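The fix for both is simply to create real files. As a sketch, here’s a static robots.txt that mirrors what WordPress would otherwise generate dynamically (the Sitemap line assumes the default WordPress sitemap URL):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.mywebsite.com/wp-sitemap.xml

Drop a real favicon.ico into the web root alongside it and both requests become flat files, served without touching WordPress at all.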
Check for pagebuilders
The performance of pagebuilders is a whole post in itself, and something I covered in my blog post “Why is my WordPress website so slow?” in detail, but the long and short of it is:
- The overall size of the mark-up generated by pagebuilders like Visual Composer/WP Bakery, Elementor, Divi etc. is significantly higher than that of equivalent pages created in Gutenberg or decent bespoke themes. A bigger total page size means slower load times.
- The <div> soup created by pagebuilders (unnecessary levels of <div>s several deep) creates a larger than necessary DOM tree which takes longer to parse.
The presence of pagebuilders is usually fairly obvious – just look for common prefixes in the source (vc_, elementor-, and et_). If you spot these in your code, consider addressing this before moving on to the next section, either by chatting to a specialist who can help you optimise the output of these editors or by replacing pages with ‘slimmer’ equivalents.
Utilising Performance Measuring Tools
There are a multitude of performance measuring tools available and they all have pros and cons. I generally recommend having a play with them all, and seeing what results you get from each one. The three I use the most are:
Look at initial server response times
You can make all the changes and optimisations in the world, but if your server is slow it’s all a bit pointless. Once you’ve checked your website is hosted on a server that supports HTTP/2, you can use any of the performance tools above to check for server response time issues. If this is a problem, you may see something like this screenshot, where Google PageSpeed Insights shows an initial server response time of 2.14 seconds:
[Screenshot: Google PageSpeed Insights flagging an initial server response time of 2.14 seconds]
Anything over 500ms is a red flag for me, and although this is sometimes high because of a temporary issue (e.g. requesting a page that needs a lot of database/plugin interactions and is yet to be cached), if it’s persistently high you want to find new hosting.
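You can also measure response time from the command line using curl’s timing variables; a rough sketch (your own URL here):

curl -so /dev/null -w 'TTFB: %{time_starttransfer}s\n' https://www.mywebsite.com/

Bear in mind that time_starttransfer includes DNS, connection and TLS time as well as the server’s ‘think time’, so run it a few times and look at the trend rather than a single figure.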
Look out for image issues
“Image issues” covers several common problems. You may spot any or all of the following:
- Recommendations to avoid large network payloads
- Recommendations to serve images in “next gen” formats, e.g. as WebP
- Recommendations to serve assets with an efficient cache policy
Although it’s not always because of images, the most critical one of these issues – avoid large network payloads – is usually related to huge, unoptimised, original size/high resolution photography. I strongly recommend running all images through third party optimisation, e.g. JPEGmini or TinyPNG either before uploading to your website or after the fact in batches from the uploads folder.
There are several WordPress plugins available that convert your images to more modern formats automatically, including Converter for Media and Performance Lab. I personally prefer to do this after I have already optimised the base images: partly because converting an optimised image feels like it would perform better, and also so that users who require a fallback to JPEG/PNG get an optimised experience too.
When you’re sure all of your images are sufficiently optimised or reformatted, check that the pages are loading the images in a suitable size for the space given. There’s absolutely no point in loading a 1400×1200 photo in a 250×250 thumbnail space.
Check that your theme supports the loading="lazy" attribute for images that appear on pages below the fold. It’s not recommended to use this attribute on “hero” or cover images at the top of pages.
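As a minimal sketch (hypothetical image path), a below-the-fold image might look like this:

<!-- Below-the-fold image: sized to its slot, lazily loaded. The hero image
     at the top of the page should NOT carry loading="lazy". -->
<img src="/wp-content/uploads/2023/01/gallery-thumb-250x250.jpg"
     alt="Gallery thumbnail" width="250" height="250" loading="lazy">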
Lastly, check for the presence of ‘Expires’ headers on the various image formats and implement if necessary (although this should cover all assets, not just images). Kinsta’s guide on fixing the leverage browser caching warning is comprehensive and helpful.
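For reference, a typical mod_expires block, assuming an Apache or LiteSpeed server that reads .htaccess – your host’s setup may differ:

<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 year"
  ExpiresByType image/png "access plus 1 year"
  ExpiresByType image/webp "access plus 1 year"
  ExpiresByType image/svg+xml "access plus 1 year"
</IfModule>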
You may see other recommendations for images, such as setting explicit width and height attributes, but these are related to layout shift rather than specifically to do with performance. Addressing layout shift is important and I would recommend doing so, but that’s beyond the scope of this already-hugely-long guide.
Look for render-blocking resources (especially JavaScript)
A render-blocking resource is an asset (e.g. a script or stylesheet) that temporarily slows or prevents the browser from rendering the page. In my experience, JavaScript files are usually the biggest culprit.
This image from GTmetrix’s guide on eliminating render-blocking resources demonstrates how render-blocking resources affect content loading.
If the tool you’re using flags render-blocking resources, take a look at what JavaScript files you’re loading on your page and, where possible, use the defer or async attribute to allow JavaScript files to be loaded in the background or later on, while the browser continues to load the rest of the page.
Where it’s not possible to use defer or async – because it interrupts the loading of important functionality, for example – instead look at how you can adjust the order of blocking resources on the page; there’s no point loading a JavaScript library before everything else for functionality that is only triggered at the bottom of a page three layers deep in your sitemap. You can also use the $in_footer argument in wp_enqueue_script to force the script to load in the site’s footer.
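Putting those together, a hedged sketch with a hypothetical handle and file path: it enqueues the script in the footer and adds defer via the script_loader_tag filter (WordPress 6.3+ also accepts a 'strategy' => 'defer' argument passed to wp_enqueue_script directly):

// Enqueue in the footer: the final 'true' is the $in_footer argument.
add_action( 'wp_enqueue_scripts', function () {
	wp_enqueue_script(
		'my-slider', // hypothetical handle
		get_stylesheet_directory_uri() . '/js/slider.js',
		array(),
		'1.0',
		true
	);
} );

// Add the defer attribute to that script's tag.
add_filter( 'script_loader_tag', function ( $tag, $handle ) {
	if ( 'my-slider' === $handle ) {
		$tag = str_replace( '<script ', '<script defer ', $tag );
	}
	return $tag;
}, 10, 2 );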
Look for external resources
The loading of external resources is extremely common on webpages. From smaller ‘harmless’ scripts like JavaScript libraries loading from a third party content delivery network (CDN), to larger tracking scripts, widgets and embeds. There are multiple problems with this reliance on third party content:
- When loading assets from the same server address, i.e. loaded locally, the HTTP connection to the server can be reused (if HTTP/2). When loading assets from elsewhere this is not the case, and another connection must be made – this is particularly problematic if the server hosting the asset is very slow or is located halfway around the world. This delay, combined with render-blocking issues, can add up to significant delays in load time that could be nullified by loading from your own server.
- Loading assets from unverified locations is a security risk, and there’s been more than one JS library affected by malware that then gets loaded onto victims’ websites without their knowledge. Terence Eden highlights this issue in a post entitled Please stop using CDNs for external Javascript libraries.
- It doesn’t help with caching. Despite the long held view that loading e.g. jQuery from a third party CDN will mean it’s cached for the next site that employs the same CDN, browser cache partitioning means this is no longer a thing and hasn’t been for a while. For more information, see Stefan Judis’ post Say goodbye to resource-caching across sites and domains.
This also covers things like YouTube video embeds, Google Maps and other iframe-based services. Consider using a facade technique to effectively lazy load videos (this technique also works with Google Maps; I utilised similar code for the map in my client Wall Flowers Seasonal Blooms’ footer).
If you’re using Gutenberg and frequently embed YouTube videos, there are options for filtering the embed block to enforce the facade technique site-wide. I use a snippet of my own making, but there are plugin alternatives.
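For illustration only – this isn’t my own snippet – a rough sketch of the idea: filter embed_oembed_html to swap YouTube embeds for a click-through thumbnail, so the heavy player JavaScript is never loaded with the page:

add_filter( 'embed_oembed_html', function ( $html, $url ) {
	// Leave non-YouTube embeds alone.
	if ( false === strpos( $url, 'youtube.com' ) && false === strpos( $url, 'youtu.be' ) ) {
		return $html;
	}
	// Pull the 11-character video ID out of the generated iframe markup.
	if ( ! preg_match( '#embed/([\w-]{11})#', $html, $matches ) ) {
		return $html;
	}
	$video_id = esc_attr( $matches[1] );
	return '<a href="https://www.youtube.com/watch?v=' . $video_id . '">'
		. '<img src="https://i.ytimg.com/vi/' . $video_id . '/hqdefault.jpg"'
		. ' alt="Play video" width="480" height="360" loading="lazy"></a>';
}, 10, 2 );

A fuller facade would swap the real iframe back in with a little JavaScript on click rather than linking out to YouTube, but the principle is the same.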
Look for opportunities to reduce HTTP requests
Although HTTP/2 increases the number of requests we can receive at any one time over a single connection from our servers, it’s not infinite. Reducing the number of files we request from our servers speeds up the delivery of the files we’re asking for.
Plugins and third party widgets often request multiple resources unnecessarily, for example. Reduce the number of requests by using wp_dequeue_script and wp_dequeue_style to filter out assets you don’t need, e.g. plugin stylesheets for styling that’s otherwise covered by your theme.
Where plugins load multiple stylesheets or scripts, consider building a single CSS file that houses all of the customisations you do need, rather than blindly loading everything. (Warning: this can significantly impact dev upkeep)
Where plugins load assets globally but that are only used on one or a few pages, look at opportunities to dequeue the assets and then re-‘enqueue’ only when required. Contact Form 7 is a good example of this, and it’s possible to make the loading conditional on the presence of the appropriate shortcode:
function dequeue_scripts_styles() {
	/* dequeue Contact Form 7 style if form not detected */
	global $post;
	if ( is_a( $post, 'WP_Post' ) && ! has_shortcode( $post->post_content, 'contact-form-7' ) ) {
		wp_dequeue_style( 'contact-form-7' );
	}
}
add_action( 'wp_print_styles', 'dequeue_scripts_styles', 999 );
As well as speeding up interactions with the server, reducing the number of HTTP requests also reduces the overall size of the pages loaded.
Look for caching issues
Caching is a big complicated subject and I don’t want to get outside my paygrade here, but what I will say is this:
- Pay attention to the advice to leverage browser caching – see Image issues
- Use a decent WordPress plugin that’s appropriate for your web host, e.g. my host utilises LiteSpeed so I use LiteSpeed Cache
- OR use a service like Cloudflare
I don’t recommend using both a caching plugin and Cloudflare, because you will spend the rest of your life trying to figure out which level of cache your dev changes are stuck in, and why half of management sees one thing and the other half sees another. Trust me on this.
Utilising browser tools
Once you’ve used and abused the three main performance tools and gleaned all the information you can, it’s time to start testing your sites in the browser. I tend to use Chrome for this but Firefox is just as capable.
What throttling can show us
All the performance tools give you is a theoretical view of what’s causing issues. Actually testing our sites on slow connections under certain conditions gives us a much better view of what the user experience is like. To do this, I like to use browser throttling.
In Chrome, bring up the dev tools (normally F12) and look for the Throttling dropdown under the Network tab. Choose Slow 3G, and tick the ‘Disable cache’ box alongside it.
[Screenshot: Chrome Dev Tools Network tab with Slow 3G throttling selected and ‘Disable cache’ ticked]
Once you’ve set these, visit the site you’re assessing. What can you actually see happening as the page loads? Here are some things to look out for:
- Sliders (bloody sliders) – are they rotating through images before the images have actually had time to load?
- Animations – is the speed of the connection causing these to stutter or proving problematic to normal scrolling?
- Are large JavaScript files blocking content from loading?
- Are large images causing massive gaps in layout – would these be better swapped for alternative versions on mobile?
Testing a site on a throttled connection is going to give a different result each time. There are no catch-all rules for what you need to change as a result of this, but slowing everything down makes it a lot easier to spot where problems are happening.
Are we loading resources in an optimal order?
While we’re still in the Network tab, look at the list of requests as the site loads to check if assets are loading in an appropriate order.
On one inherited client site, we can see that the very first thing to load (outside of the actual page request) is the styles for fancybox:
[Screenshot: Network tab request list showing the fancybox stylesheet loading first]
But the fancybox JavaScript isn’t actually called until right at the bottom of the page, and isn’t used until way past the fold. It’s entirely pointless having this stylesheet load first.
We can reorder asset loading in WordPress plugins/themes by setting the dependency value in wp_enqueue_style and wp_enqueue_script.
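A sketch of what that looks like, with hypothetical handles and paths:

add_action( 'wp_enqueue_scripts', function () {
	wp_enqueue_style( 'theme-main', get_stylesheet_uri(), array(), '1.0' );

	// The third argument is $deps: the fancybox CSS now loads after the theme CSS...
	wp_enqueue_style( 'fancybox', get_stylesheet_directory_uri() . '/css/fancybox.css', array( 'theme-main' ), '3.0' );

	// ...and the fancybox JS loads after jQuery, in the footer.
	wp_enqueue_script( 'fancybox', get_stylesheet_directory_uri() . '/js/fancybox.js', array( 'jquery' ), '3.0', true );
} );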
Why we should test multiple pages
A huge mistake that a lot of people make is to run the homepage through the tools/throttling and then stop there. Why is this bad?
- Because not all visitors come to your website via the homepage – especially if your site is utilising ad campaigns.
- Because even if your visitors do come via the homepage, you probably don’t want them to stay there.
- Anecdotally, in my opinion: if a site is quick to begin with then gets progressively slower as you get further in, that’s more frustrating than a site that’s slow from start to finish, because I feel like I’ve already invested time in the process.
You likely have several important pages on your site – that might be landing pages used in Google ads, pages that are on the other end of a call to action, e-commerce shop categories, etc. Test and optimise all of these.
Other considerations
When to use server cron
WordPress has a “cron” system built in, designed to mimic the command/job scheduling we see on *nix-like systems. It’s used to make sure scheduled posts go out on time, to check if WordPress updates are available, to run clean-up tasks, etc.
WordPress’ cron is triggered on every page load, which for high traffic sites can cause issues. For these large sites, particularly high traffic blogs, e-commerce sites or big WordPress multisite installations, consider utilising your server cron over WordPress cron. Kinsta has a guide on how to disable WordPress cron and utilise the system cron to trigger the jobs that would otherwise be actioned. I also have a cobbled together trigger of my own for multisite installs, and trigger that via a proper cron job instead.
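The gist of the approach, assuming a fairly standard setup:

// In wp-config.php: stop WordPress spawning its cron on every page load.
define( 'DISABLE_WP_CRON', true );

# Then in the system crontab: run any due jobs every 5 minutes instead.
*/5 * * * * wget -q -O - "https://www.mywebsite.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1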
Consider user location
When choosing your hosting and caching options, consider your primary audience’s location. Do not pick hosting or caching solutions based on the opposite side of the world to your target audience; the further across the world the customer is from a server, the longer it takes for the content to get down all those lovely little cables into the customer’s home. It might just be a difference of fractions of a second, but it all adds up.
Avoid resource intensive plugins
Resource intensive plugins tend to have a big impact on CPU and memory usage, which – especially on cheaper shared hosting – can cripple a website. Online Media Masters have a list of 75+ WordPress plugins which impact CPU usage. Not all of these are ‘bad’, and some of them may be unavoidable (WooCommerce, anyone?) but stacking lots of these together on a cheap server is going to cause you a world of pain.
Be particularly careful of plugins that replicate functionality that should be done at server level – firewalls, query monitoring, etc.
The Heartbeat API
The WordPress Heartbeat API allows the browser and the website server to communicate, sending commands which trigger things like automatic post saving, locking posts that are being edited to stop other users from editing them at the same time, real time data updates in plugins, that sort of thing.
The API works by sending regular requests to the WordPress admin-ajax.php file. Lots and lots of requests to that file can cause an increase in resource usage, particularly CPU utilisation. Higher traffic multi-author / WooCommerce sites on shared hosting may find that this CPU utilisation is too much, slowing the site to a crawl or even bringing it down altogether.
Although I don’t recommend turning off the Heartbeat API completely, it’s possible to adjust it to slow the frequency of the ‘pulse’ without having a huge impact on functionality. Consider this option for large sites on low quality hosting where there’s limited budget to improve.
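A minimal sketch using the heartbeat_settings filter:

// Slow the Heartbeat 'pulse' to once a minute (WordPress accepts 15-120 seconds).
add_filter( 'heartbeat_settings', function ( $settings ) {
	$settings['interval'] = 60;
	return $settings;
} );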
Why that optimisation plugin might be making things worse
Believe it or not, the optimisation plugin you’ve been heartily recommended by some random article might actually be making your issues worse.
Several of the popular optimisation plugins work by grabbing the contents of all plugin output, smashing all the code together, minifying it and serving it in two large files. This actually increases the likelihood of experiencing blocking issues compared to serving several smaller files over HTTP/2.
Conversely, some optimisation plugins work by inlining basically everything. This is problematic too, because files that could be cached and reused across pages on the same site are effectively redownloaded for each page load. Inlining should be reserved for smaller snippets, not entire site styles.
Things I haven’t covered
These things might help, but are too big a topic to go into here.
Code coverage
Chrome has code coverage tools which can help you reduce the fluff, particularly useful in old sites and inherited builds. I’ve not done a huge amount with this, but reducing the weight of your files is always a win.
DOM size/depth issues
I touched upon this in passing when I mentioned pagebuilders, but large DOM trees take longer for the browser to parse. Consider investigating if you can reduce the depth of your HTML.
Different types of caching
Caching is a whole subject of its own. Object caching, dynamic caching, third party caching providers vs plugin-based caching. I’m not knowledgeable enough on the sysadmin side to make specific recommendations here, but your hosting provider probably can.
Database optimisation
There are a variety of things you can do to help keep your databases ticking over nicely, including making sure old transients/actionscheduler logs are cleared, running the OPTIMIZE TABLE command once in a while, making sure your database engine is set appropriately, etc. However, this is DANGER SITE DELETION RISK territory and I don’t need that kind of pressure in my life. Talk to your host about this one too.
There you have it – a pretty comprehensive list of things that I check for and/or adjust when working on performance audits. I hope you’ve found this useful, and welcome questions/comments below.
tl;dr? Don’t have time to run these checks and tools on your own website? We can take care of that for you. Get in touch to talk about our WordPress performance auditing and optimisation services.