Creating a Performance Culture in eCommerce Businesses - Avoiding a Site Speed Blame Game

By Paul Rogers - 13 December, 2021

I’ve been meaning to write this article for a number of years, having seen different brands and retailers struggle with performance due to operating in certain ways (but still blaming developers). Conversely, I’ve also seen the opposite: brands and retailers building strong processes and layers of governance to retain quality and keep site speeds optimal.

In my experience with the brands we work with, the development agency (or occasionally the internal team) is usually blamed for a slow site. However, very often, this agency is not actually to blame - or is only partly to blame. Although the developers are writing the code and managing integrations, the client team is making lots of decisions that will impact site speed.

In addition to blaming developers, I regularly see eCommerce teams getting frustrated at having to invest large amounts of budget into investigating and optimising performance - so we’ve tried to explain where this often comes from too.

This article is focused on eCommerce site speed and is written as a collaboration between me (representing the business stakeholder) and Liam Quinn (giving a technical view / opinion).

Why Is Performance Important in eCommerce?

Performance and site speed have been a high priority for eCommerce teams for a long time now - with the shift towards mobile devices in eCommerce being a big catalyst for change and investment in this area.

Ultimately, the speed of a site has a significant impact on the customer experience generally, having an impact on conversion rates and now organic search visibility. There are various reports and studies that quantify this impact, but I also think the continued improvement in both new DTC brands and larger brands now means it’s something that can’t be ignored - particularly with the growth of PWAs and headless sites being introduced (not that this is a silver bullet).

The Core Web Vitals update has further raised awareness of the importance of both performance and the rendering experience - with their impact on organic search rankings only set to increase.

What Client Decisions & Actions Will Impact Site Speed?

Client teams often prioritise decisions based on needs other than site performance, with the implementation of third-party technologies, design changes, the introduction of rich content etc. all being examples of items that can impact performance.

Examples of things that clients typically ask for that they may not realise can impact performance:

  • Introduction of new Javascript-heavy third parties - e.g. search solution, product recommendations / personalisation engine, product reviews, live chat etc.

  • Other third party Javascript tags added to the site - e.g. A/B testing solutions, additional pixels, tracking solutions (most will use a tag management solution - but equally these should be optimised and sequenced to reduce performance impact).

  • Introduction of unoptimised rich media assets - e.g. hi-res imagery, interactive content, video content etc. and this is particularly an issue in fashion eCommerce. Even changing fonts without consideration can have a noticeable effect on performance.

  • Variable-led themes - the more variables and logic you have in your theme, the longer pages will take to load. Solutions that deliver different content and functionality to different users based on variables (e.g. location) will also use more resources.

  • Rushed changes - pushing developers to do things quickly, without allowing time for proper performance testing, can have a big impact over time - this is covered in the processes section.

  • General workarounds and quick fixes.
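Tag sequencing, mentioned in the list above, is one of the simpler wins here. As a minimal sketch (tag names and priority values are illustrative, not from any real tag manager), the idea is to order third-party scripts so critical ones load first and nice-to-haves wait:

```javascript
// Minimal sketch of sequencing third-party tags by priority.
// Tag names and priority values are illustrative only.
function sequenceTags(tags) {
  // Lower priority number loads first; non-critical tags sink to the end.
  return [...tags].sort((a, b) => a.priority - b.priority).map(t => t.src);
}

const loadOrder = sequenceTags([
  { src: 'ab-testing.js', priority: 2 },
  { src: 'analytics.js', priority: 1 },
  { src: 'heatmaps.js', priority: 3 },
]);
// → ['analytics.js', 'ab-testing.js', 'heatmaps.js']
// In the browser you would then inject these one by one - for example on
// requestIdleCallback - so they don't compete with critical rendering.
```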

What Is a Performance Culture and Why Is It Important?

Ultimately, creating a performance culture means building a set of processes that are followed at the different stages of requesting, defining and deploying changes and new features. In most cases, this comes via a series of documentation, checklists and checks used at various points of the process, by various team members - both internally and externally.

Really, what you’re trying to do here is:

  • Ask questions on the performance impact of changes before you define them and add them to your backlog.

  • If there is an impact - is this the best way to deliver the functionality or is there an alternative approach that can reduce the impact?

  • If it’s a third party - is this the optimal option? Are there alternative routes for integration that could improve performance?

  • Equally, is the functionality critical and does it warrant an impact to page load speed?

  • During pre-deployment (as part of UAT), performance should be tested and compared against previous versions.

  • During post-deployment, there should be further testing.
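The pre-deployment comparison above can be sketched as a simple budget check: compare the candidate build’s lab metrics against the previous version and flag anything that has regressed beyond a tolerance. The metric names and numbers below are illustrative only:

```javascript
// Compare post-change lab metrics against a baseline, flagging any metric
// that has regressed beyond a tolerance. Metric names and values are
// illustrative, not from a real audit.
function findRegressions(baseline, candidate, tolerance = 0.05) {
  const regressions = [];
  for (const [metric, before] of Object.entries(baseline)) {
    const after = candidate[metric];
    if (after !== undefined && after > before * (1 + tolerance)) {
      regressions.push(metric);
    }
  }
  return regressions;
}

// Largest Contentful Paint regressed beyond the 5% tolerance; TTI did not.
const failing = findRegressions(
  { lcp_ms: 2400, tti_ms: 3800 },
  { lcp_ms: 2900, tti_ms: 3850 }
);
// → ['lcp_ms']
```

A check like this is easy to wire into UAT sign-off or a CI step, which is where the automation bullet below comes in.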

In addition to this, some other key changes include:

  • Assigning responsibility to different stakeholders and making people accountable.

  • Building performance metrics into trade and business reporting.

  • Creating documentation to ensure new team members adopt the same processes and approaches.

  • Automating testing and scanning.

  • Assigning ongoing budget (this could be per project or a set % of retainer) to optimising the core theme and assets.

The clients that I’ve seen create the best performance cultures have a person in a Product Owner role that really governs all of this - holding developers and other internal team members to account with regular reviews.

Balancing Functionality vs Speed: How Should Teams & Stakeholders Work Together to Govern Site Speed?

The key here is having the initial conversation around the performance impact and the approach - this may add more time or cost slightly more, but it’s important to get the two types of stakeholders working together to set parameters. Improving page speed often comes as a tradeoff with functionality, so it is very much a two-way street.

A good example is introducing a new search solution - there may be a very simple solution available, or the native search functionality could be extended, but equally the business team needs to ensure requirements are met - e.g. the ability to merchandise results, an optimised JS overlay, a certain level of automation etc. In this instance, you’ll likely end up going with a third party in order to create a good solution, and the discussion then becomes how the JS can be optimised and which vendor can achieve the requirements without a major impact on performance.

This is one example where the functionality will require a third party in most instances, but often that’s not the case - with things like product recommendations or bundling for example. You also then have scenarios where you can implement solutions better - e.g. hiding a live chat widget behind a click so the JS isn’t loaded every time.
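The live chat example above is straightforward to implement: defer the widget’s script until the visitor actually clicks the launcher. A minimal sketch (the launcher id and widget URL are hypothetical):

```javascript
// Sketch of loading a chat widget's JS only on first interaction.
// Wrap the loader so repeated clicks can't inject the script twice.
function once(loadFn) {
  let loaded = false;
  return function () {
    if (!loaded) {
      loaded = true;
      loadFn();
    }
  };
}

// Browser wiring (commented out so the helper stays environment-agnostic;
// the launcher id and widget URL below are hypothetical):
// const loadChat = once(() => {
//   const s = document.createElement('script');
//   s.src = 'https://chat.example.com/widget.js';
//   s.async = true;
//   document.body.appendChild(s);
// });
// document.getElementById('chat-launcher').addEventListener('click', loadChat);
```

The result is that visitors who never open the chat never pay its JavaScript cost.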

This is all part of the balancing act and it’s why the two teams need to work together and accept joint responsibility.

The Great Image Quality Debate - Quality vs Load Times

In recent years the average page weight on an eCommerce site has increased hugely and, as well as video becoming far more common, this is primarily down to much more focus on large, high quality imagery. Naturally, any brand wants the best possible photography to showcase their products, but there are a few points that should be considered:

  • Always have a process in place to optimise images before they’re uploaded to the site. There are great tools that do this quickly and effectively, resulting in visually lossless images - options like TinyPNG or Kraken.io are my go-tos. This should apply to every image uploaded, and developers will certainly push back on performance issues if it isn’t done, as it is such an impactful point. There are one or two apps like Crush.pics that can help automate this, but they’re not perfect, and automation shouldn’t be necessary when it only takes a few extra seconds before uploading to get right.

  • Consider how many large images are displayed per page. There is no hard and fast rule - again, it’s a balancing act, because the more images there are to load, the slower the page will be. Look at how people are browsing the page: are all images actually being seen? Are slides 2/3/4 in that carousel ever actually viewed?

  • When the above has been streamlined as far as possible, one point does sit with the developers: loading the image into the page at the right size. If an image is only ever rendered within a 3-column grid, it shouldn’t be loaded in at 2000x2000px, for example. This is a waste of resources.
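The right-size point above is usually solved with responsive images. As a sketch, a small helper can build a srcset so the browser picks a candidate close to the rendered width; the `width` query parameter mimics what many image CDNs (including Shopify’s) accept, but treat the URL format as an assumption for your own stack:

```javascript
// Build a srcset so the browser downloads a size close to the rendered
// width, instead of always fetching a 2000x2000 master image.
// The `width` query parameter is an assumption; adjust to your image CDN.
function buildSrcset(baseUrl, widths) {
  return widths.map(w => `${baseUrl}?width=${w} ${w}w`).join(', ');
}

const srcset = buildSrcset('/images/product.jpg', [400, 800, 1200]);
// → "/images/product.jpg?width=400 400w, /images/product.jpg?width=800 800w, /images/product.jpg?width=1200 1200w"
// Paired with sizes="(min-width: 768px) 33vw, 100vw" on the <img> tag,
// a 3-column grid image downloads at roughly a third of the viewport width.
```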

Who Should Be Responsible for Site Speed?

In my view, you want to get to a point where there’s joint responsibility and people are working together, but I’ve still seen the best results from having an internal Product Owner as the real guardian of performance. Although they’re not responsible for writing or deploying the code, they hold people to account, push developers, and question new features. This is the ideal situation but it only really works if you have the right person in place that really understands the situation and is able to work with the developers.

This said, anyone involved in defining or delivering backlog items is also responsible.

How Should Teams Be Monitoring Site Speed?

An important element of this is not precisely which metrics to measure, but measuring them with the right approach. By this, I mean avoiding a scattergun approach of pointing at the lowest red number with tunnel vision, then moving on to the next one. There are changes you can make aimed at improving one metric that will have a domino effect and positively impact others.

However, there are also tactics that get the numbers to look better, but don’t actually improve anything for the end user - these are tactics that simply trick Google into thinking performance has improved.

For these reasons, there needs to be a holistic view and a longer term strategy to getting to a place where page speed and technical performance is solid. The steps would look something like:

  • Select which monitoring tools to use for reporting. There are a number of tools that do very similar jobs. Some might give you slightly more favourable numbers because of the way they report, but you shouldn’t select a tool based on vanity. Unless you have specific needs that another tool meets, you can’t go far wrong with the Google suite (Lighthouse, web.dev, and Search Console) to audit individual pages and give a high-level view of the status of your URLs as a whole. I also like Pingdom for a quick check on things like resources and page weights. One point to bear in mind is that Google can take 28 days to take your improvements into account; Search Console won’t improve overnight.

  • You’ll also need to decide which metrics are key for your reporting. Again, different goals and objectives can lead to a focus on different metrics, but the Core Web Vitals (intentionally) cover a good, broad perspective of site performance. I like to keep Time to Interactive (TTI) in the ongoing reporting as a good judge of impact on usability, while Time to First Byte (TTFB) can be vital on self-hosted platforms.

  • Once these previous two points are in place, be consistent. For accuracy, it’s important to use the same tools and metrics, and run tests on the same URLs over the same timeframes. Repeat.

  • Identify fires across the site. Focus on critical drains on performance, leaks etc., and fix those first.

  • The next step is to stop and prioritise. Which elements affect critical pages and the customer buying journey? Which elements are global and affect a high percentage of pages? Look for apps that are particularly heavy, with inefficient code causing problems on specific pages. A benefit of Shopify is that you can split a theme out Section by Section and run an audit on each Section to find problems.

  • After exhausting these and getting down to very minor wins, there are some technologies that can be introduced to help. With Magento, we’ve helped achieve some really impressive results with Cloudflare and caching technologies. On Shopify projects, Yottaa, Edgemesh and Render Better have added another layer of performance gains. Due diligence is needed here: as alluded to earlier, there are ‘black hat’ technologies that will look like they’re having a big impact when in reality their sole purpose is to trick Google and other tools into giving higher scores - which benefits nobody.
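On the consistency point above: lab scores vary run to run, so week-on-week reporting is more trustworthy if you record the median of several runs rather than a single audit. A minimal sketch, assuming the scores come from repeated runs of a tool like the Lighthouse CLI:

```javascript
// Median of repeated lab runs smooths out run-to-run variance, so
// week-on-week comparisons reflect real change rather than noise.
// The scores here are illustrative; in practice they would be parsed
// from repeated Lighthouse CLI JSON output for the same URL.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const weeklyScore = median([62, 71, 58, 69, 65]);
// sorted: [58, 62, 65, 69, 71] → 65
```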

For Developers: Balancing Push Back and Client Servicing

I think the key is making sure performance is part of discussions from the beginning. At this point, it isn’t just pushing back but rather making the client aware which design or development decisions will later impact performance. It becomes an education piece, rather than an awkward standoff where someone has to back down. We’ve sometimes seen initial sales pitches where a focus on technical performance is promised, but then nothing tangible is delivered on that side.

Clear, ongoing communication from both sides means technical performance becomes an inherent part of ongoing development, sets expectations, and limits or removes the risk of a blame game being played over slow site speeds.

In Conclusion

Overall, I think any brand doing a good level of volume online should be looking to create ongoing processes and reporting to help keep their online store optimised from a performance perspective. You’ll definitely need to compromise in places, but this should just be a balance, and decisions should be made by different stakeholders with a good view on the overall impacts.

As I mentioned earlier, I do feel that having an individual governing performance internally is really valuable, particularly as you grow and start adding more functionality, doing more releases, having more people making decisions etc.
