When you enter the world of web performance, you’re likely to encounter two terms relatively quickly: Synthetic Monitoring and Real User Monitoring (RUM). Much of the blog, video, and social media chatter of late seems to be focused on Synthetic Monitoring. I’d like to suggest that Real User Monitoring deserves a lot more of our attention.
To be clear, both Synthetic Monitoring and Real User Monitoring have a place in modern engineering systems. They are both needed to see the full picture and are an important part of a healthy engineering performance culture. But unfortunately, engineering organizations are often content to put the minimal infrastructure in place, check the box on their list, pat themselves on the back, and remain blissfully unaware of their real performance problems until it’s too late.
Synthetic Monitoring provides its best value when integrated with CI as a core part of your engineering process. It involves a laboratory-like setup where well-known operations and user interaction sequences are automatically performed in strictly defined environments. This type of monitoring is pretty simple to configure and provides a relatively small amount of easy-to-understand and track data.
You can get started by integrating Lighthouse and/or WebPageTest into your processes. Do this as early as possible and pair it with a deep discussion that establishes your performance baseline. Using Synthetic Monitoring tools to measure against your baseline is a must. It enables your team to clearly see obvious regressions and to unite in a healthy culture of quality where personal biases and assumptions are supplanted by objective data and decision-making.
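As a sketch of what that CI integration might look like, a Lighthouse CI configuration along these lines fails the build when scores drop below an agreed baseline. The URL, run count, and threshold values here are placeholder assumptions; your real numbers should come out of that baseline discussion.

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:8080/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["warn", { "minScore": 0.9 }]
      }
    }
  }
}
```

With a file like this in place, running `npx lhci autorun` in the pipeline collects the runs and asserts against the thresholds in one step, turning your baseline into an enforced gate rather than a suggestion.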
Unfortunately, not only is Synthetic Monitoring often implemented without properly established baselines, but it is also frequently the only type of performance monitoring used. As valuable as it is, Synthetic Monitoring is still synthetic. It can help you steer clear of big, disastrous surprises but it’s not fully representative of the real world. It’s not enough to ensure that customers consistently have a quality experience and that the business is getting the most out of its software development efforts.
Real User Monitoring (RUM)
Real User Monitoring extends beyond the boundaries of a testing environment and into the space where real users interact with your software. RUM involves running code in the browser to send back various metrics from real user interactions happening in real user browsers, on real user devices, and on real user networks.
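To make the in-browser side of that concrete, here is a minimal sketch of deriving a few timing metrics from a navigation entry and beaconing them home. The field names, metric choices, and `/rum` endpoint are illustrative assumptions, not any particular tool’s API.

```typescript
// A minimal RUM sketch. NavTiming mirrors just the fields this example
// needs from the browser's PerformanceNavigationTiming entry; the
// payload shape and endpoint are made up for illustration.
interface NavTiming {
  startTime: number;
  responseStart: number;          // when the first response byte arrived
  domContentLoadedEventEnd: number;
  loadEventEnd: number;
}

interface RumPayload {
  ttfb: number;                   // time to first byte
  domContentLoaded: number;
  pageLoad: number;
}

// Pure function: turn raw timing data into the metrics we report.
function buildRumPayload(t: NavTiming): RumPayload {
  return {
    ttfb: t.responseStart - t.startTime,
    domContentLoaded: t.domContentLoadedEventEnd - t.startTime,
    pageLoad: t.loadEventEnd - t.startTime,
  };
}

// In a real page you would feed this from the Performance API, e.g.:
//   const [nav] = performance.getEntriesByType("navigation");
//   navigator.sendBeacon("/rum", JSON.stringify(buildRumPayload(nav)));
```

Keeping the metric derivation as a pure function, separate from the browser APIs that feed it, also makes this kind of collection code easy to unit test.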
RUM is more involved than Synthetic Monitoring, which is likely one of the reasons why it’s less common. The challenge isn’t so much in setting up the base infrastructure to collect the data (for which there are many tools), it’s in dealing with the large quantity of data, interpreting it, and making the necessary cultural changes to turn the findings into business results.
In the same way that Synthetic Monitoring needs to be paired with objective baselines, RUM needs to be paired with explicit business goals. In fact, there are at least two key ingredients to success with RUM…
Leadership and Cultural Alignment
While an engineering group may be able to set up Synthetic Monitoring with little to no buy-in from any leader, expanding to include RUM is rarely possible without strong cross-functional leadership support. RUM isn’t a purely engineering process; it is deeply connected with the business and its culture. So, there needs to be a real commitment from leadership to support the process, go where the data leads, and fund action in accordance with findings. This isn’t a one-time event. It’s a commitment to a continual transformation in the way that software is built.
Clear Business Goals
How do we know what we’re looking for, though? What do we do with all the data that RUM produces? The answer begins with goals. Setting goals upfront provides a means to scour the sea of incoming data with purpose, focusing on and interpreting metrics in ways that translate to business impact. Leveraging histograms and percentiles can help surface problems with goal alignment. This requires an ongoing commitment to continually evaluate how that data is sliced, diced, and customized to clearly report on the goals.
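As one way to picture a goal check like that, the sketch below computes percentiles over a batch of RUM samples using the nearest-rank method. The sample values and the p75 budget of one second are made-up assumptions, not a recommendation from any tool or standard.

```typescript
// Sketch: nearest-rank percentiles over collected RUM samples.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest rank: the smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Hypothetical page-load times (ms) reported by real user sessions.
const loadTimes = [420, 610, 530, 980, 2400, 710, 640, 5100, 880, 760];

const p50 = percentile(loadTimes, 50);
const p75 = percentile(loadTimes, 75);
const p95 = percentile(loadTimes, 95);

// A check a team might run against an agreed budget, e.g. p75 under 1s.
console.log(
  `p50=${p50}ms p75=${p75}ms p95=${p95}ms`,
  p75 <= 1000 ? "within budget" : "over budget"
);
```

Note how the p95 here is wildly worse than the median: that long tail is exactly the kind of real-world signal that averages and lab runs hide, and why percentiles are the natural way to phrase RUM goals.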
To reiterate, the challenge isn’t in setting up the tools to collect the data. In fact, many teams have that much in place. But if you don’t go through the effort to get leadership buy-in, shift the culture, and set proper goals that are regularly measured with your RUM data, you aren’t really going to get the value from it.
Even though it can be challenging to truly embrace RUM in your organization, the benefits are worth the hard work. If you haven’t heard of or explored RUM before, consider taking a deep look into this space and thinking through what kinds of cultural transformation, processes, and practices you can embrace in this area.
Performance monitoring is an important part of establishing a healthy engineering culture. It must be more than merely a box that’s ticked on a list of requirements. It’s easy to send a Lighthouse screenshot around your company or via social media, touting how amazing your application performance is. But that often portrays an incomplete or incorrect picture of reality and is little more than self-congratulation masking an underlying complacency. Start with Synthetic Monitoring, integrated into your CI process, and paired with a carefully designed objective baseline. But don’t stop there. Work to build a healthy engineering culture of performance and quality. Layer on Real User Monitoring guided by clear business goals to better ensure that your customers and your business are truly succeeding.
Now, move smartly, otherwise ye be walkin’ the plank! Yarrr!!!
If you enjoyed this article, you might want to check out my Web Component Engineering course. I’d also love it if you would subscribe to this blog, subscribe to my YouTube channel, or follow me on Twitter. Your support greatly helps me continue writing and bringing this kind of content to the broader community. Thank you!