This is a step-by-step guide on how I use Chrome DevTools (DevTools from now on) to detect Web Performance issues on a website, as well as validate hypotheses to fix some of the problems found.
Disclaimer
Before diving in, I want to clarify that this is not a guide on how DevTools works; it’s a very comprehensive application, and I’ll only focus on the steps I usually take during a preliminary performance analysis of a website. There are many other tools (I’ll talk about them in future articles), but today I want to talk about DevTools, which is built into Google Chrome. I’ll focus on some of the features available for performance analysis, or those that, although not designed as performance analysis tools, help us detect issues that can degrade performance, ultimately degrading the UX of our website or product.
Give me a URL…
Almost every time someone gives me a URL, I must confess that the first thing I do is open DevTools. I’m passionate about analyzing web performance. So let’s start by getting to know the website we’ll analyze in this guide.
Recently, I saw a post by Aleyda Solis, “Top Black Friday Organic Search Traffic Winners & Trends in 2025“, which compiles the websites with the highest organic traffic during Black Friday 2025, a very comprehensive and interesting article. In it, we have websites from the USA and some European countries, and the types of websites analyzed are retailers, news media, review/technology sites, social platforms, price comparators, and deal aggregators.
I chose one of the websites from Spain: being in the same country means I don’t have to emulate geolocation, making it a more realistic test. From the list, Zara’s website caught my attention, so let’s start analyzing https://www.zara.com/es/en/-pT9796937300.html?v1=501611224.
The Browser
As the title indicates, we’ll use Google Chrome DevTools. I want to highlight that for this type of analysis I use the Canary version, a development version where we have the latest features (some in beta). Also, since it’s not my regular browser, I don’t have extensions installed, as they interfere with the experience and can slightly alter the metrics.
We also have the option to create a Chrome user profile without extensions installed and use it for this type of analysis, or to open a temporary guest profile.
DevTools
Before Chrome version 129, the first thing I did was open DevTools and go to the Network tab. Once there, I would sort by the Size column, which gives us a quick view of the heaviest resources.

Here we already have interesting information. Let’s focus on resources weighing more than 200kB; to do this, I add larger-than:200k in the filter field.

Here we can already see some interesting data:
First, we can see there’s a fetch request to categories weighing 218kB, but just below we see 1827kB: that’s the uncompressed file weight. The server serves a Brotli-compressed version, as we can see in the content-encoding: br header. What catches my attention about this resource, beyond its 1.8MB, is that it’s cached for only 72 seconds.
I don’t know the product, but from my experience, I’d say eCommerce categories don’t rotate or change often enough to require the client to re-fetch that list every 72 seconds.
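As a quick validation from the console, we can re-request the endpoint and read its Cache-Control header. A minimal sketch; the path below is a placeholder, copy the real URL from the Network tab (the header is readable here because the request is same-origin):
// Re-request a same-origin resource and log how long it may be cached.
fetch('/path/to/categories') // placeholder: use the real URL from the Network tab
  .then((response) => {
    console.log('cache-control:', response.headers.get('cache-control'));
  });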
Second, we have a 254kB WebP image. The format is considered “modern” (it’s 15 years old), which is good. In my opinion, for a fashion eCommerce where image quality is critical, I would suggest using AVIF or JPEG XL with a fallback. Regarding the weight, a WebP over 200kB made me suspicious, so I selected the row and opened the Preview tab to validate the resolution.

As we can see, the image resolution is 1440 × 2160 px, and it’s the image visible in the viewport, which is not being rendered at that resolution. This validates that we’re downloading an image at a higher resolution than we need.
There’s another tool in DevTools that I like to show for validating this: the preview we get in the Elements tab when hovering over an image resource.

Here we can see that the image we downloaded at 1440 × 2160 px is being rendered at 372 × 558 px. Even accounting for device pixel densities, that’s still too high a resolution. So, to optimize that request, which we can see is https://static.zara.net/assets/public/.../T9796937300-p.jpg?ts=1764580554499&w=1440&f=auto with a w=1440 parameter defining the width, we could easily adapt that value (assuming there’s no added complexity).
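We can extend this validation to every image on the page with a small console script. A minimal sketch of my own (not a DevTools feature) that flags images downloaded at a much higher resolution than they’re rendered at:
// Flag images whose intrinsic width is much larger than the width they
// are rendered at, taking the device pixel ratio into account.
document.querySelectorAll('img').forEach((img) => {
  const neededWidth = img.getBoundingClientRect().width * devicePixelRatio;
  if (neededWidth > 0 && img.naturalWidth > neededWidth * 1.5) {
    console.log(
      img.currentSrc,
      `intrinsic: ${img.naturalWidth}px, needed: ~${Math.round(neededWidth)}px`
    );
  }
});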
Finally, the heaviest resource is a 1MB compressed JavaScript file, 4MB uncompressed. Also, the name recom-chat.js suggests it’s a chat, which we don’t see in the viewport.
Resources like chatbots or reCAPTCHA, which we only need in a flow where the user submits data through a form, shouldn’t be downloaded until necessary.
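A common pattern here is to inject the script only on the first user interaction. A minimal sketch, assuming the widget can be bootstrapped from a single script URL (the URL below is hypothetical):
// Load a heavy third-party script (e.g. a chat widget) on the first
// user interaction instead of during the initial page load.
let chatLoaded = false;
function loadChat() {
  if (chatLoaded) return;
  chatLoaded = true;
  const script = document.createElement('script');
  script.src = 'https://example.com/recom-chat.js'; // hypothetical URL
  script.async = true;
  document.head.appendChild(script);
}
// once: true removes each listener after it fires.
['pointerdown', 'keydown'].forEach((type) =>
  window.addEventListener(type, loadChat, { once: true })
);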
A New Beginning
Let’s go back to the beginning for a moment, where I said “Before Chrome version 129, the first thing I did was open DevTools and go to the Network tab”. That’s because that version brought a very interesting change to DevTools: Live metrics observations, a new Performance tab home that shows real-time information about Core Web Vitals, both on your local machine and based on field data from the Chrome UX Report.
Now, the first thing I do is open this tab, since it gives us information from real website visits, like Core Web Vitals and the mobile/desktop split, as well as the 75th percentile of network speed for those visits.

Let’s analyze what we see on this screen:
Local and field metrics
- The 75th percentile Core Web Vitals from real visits recorded in CrUX:
  - A Largest Contentful Paint (LCP) of 7.15 s
  - A Cumulative Layout Shift (CLS) of 0.11
  - An Interaction to Next Paint (INP) of 779 ms
- We also see the Core Web Vitals from local readings (on my device with my connection):
  - A Largest Contentful Paint (LCP) of 4.05 s
  - A Cumulative Layout Shift (CLS) of 0.08
  - The local Interaction to Next Paint (INP) metric is not available. That’s because I haven’t interacted with the page; I simply loaded it. We’ll analyze this in a bit.
Field metrics
We have a dropdown with the URL and the device, “Auto (Mobile)” in this case. Notice that I’ve changed the URL: the previous one doesn’t have visit information in CrUX because it doesn’t reach the minimum data volume. This is very common on sites where products are temporary, like in an eCommerce.
Environment settings
Here we have very interesting and important information:
- Device: 55% mobile, 45% desktop. That is, 55% of the visits recorded in CrUX were on mobile. I have to say this surprised me, as on many sites 80-90% of visits come from mobile devices.
- Network: 75th percentile is similar to Slow 4G throttling. In other words, the 75th percentile of visits has a connection similar to Slow 4G.
With this information, we can configure our DevTools environment to be as similar as possible to the website’s actual visitors.
With the recommended configuration, we now have data closer to field data, which helps us get a vision closer to the experience of the website visitors we’re analyzing.

Core Web Vitals
Following Google’s recommendations, and coinciding with the Experience metrics from Google Search Console, let’s start by analyzing the Core Web Vitals.
Largest Contentful Paint (LCP)
In the LCP box, we can see that after configuring the environment, the local values and the 75th percentile of field data are similar. And it’s clear that there’s a lot of room for improvement in this metric.

Just below, we have very valuable information:
LCP element img.media-image__image.media__wrapper--media is the element that generates the LCP metric. When hovering over this selector, we see how the referenced element is highlighted in the viewport.
When clicking on the selector, it takes us to the Elements tab with the DOM element selected. At a glance, I see we don’t have fetchpriority="high", which makes me suspect that the image responsible for the page’s LCP is not being prioritized.
<img
  class="media-image__image media__wrapper--media"
  data-qa-qualifier="media-image"
  alt="Mid-knit ecru jumper set with a roll neck and long sleeves, paired with dark socks and high heels."
  src="https://static.zara.net/assets/public/06b4/86e0/c99d4e959492/ec7ecb3e3a0a/05802109715-p/05802109715-p.jpg?ts=1764087684969&w=744&f=auto"
/>
But we’ll validate this in a bit, as there are several ways to prioritize resource loading.
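In the meantime, we can already confirm from the console which element the browser reports as the LCP and how its image is annotated, with a small sketch using the standard largest-contentful-paint entry:
// Log the LCP candidate element and its prioritization attributes.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate:', entry.element, `at ${Math.round(entry.startTime)} ms`);
    if (entry.element instanceof HTMLImageElement) {
      console.log('fetchpriority:', entry.element.getAttribute('fetchpriority') ?? '(not set)');
      console.log('loading:', entry.element.getAttribute('loading') ?? '(default)');
    }
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });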
Cumulative Layout Shift (CLS)
Regarding CLS, there’s a difference between field and local data; IMHO, it’s the most complicated metric to emulate, as it depends on device resolution. Even so, let’s see the useful information we have available in DevTools.

If we click on Worst cluster 1 shift, it shows us Layout shifts with a list of elements that have been affected by a shift.

Just like with the LCP selector, when hovering over the list elements, we see how the element is highlighted in the viewport. Similarly, if we click, it selects the DOM element in the Elements tab.
In this case, the cause of this shift is the element above, the native app banner, which pushes the content down when it appears.
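The same diagnosis can be scripted with the Layout Instability API. A minimal sketch that logs each shift together with the nodes that moved:
// Log layout shifts and the DOM nodes that moved in each one.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.hadRecentInput) continue; // shifts right after input don't count for CLS
    console.log(`layout shift, score: ${entry.value.toFixed(4)}`);
    (entry.sources ?? []).forEach((source) => console.log('  moved:', source.node));
  }
}).observe({ type: 'layout-shift', buffered: true });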
Interaction to Next Paint (INP)
As we’ve discussed and seen in the screenshots so far, we don’t have a local INP metric until we interact with the page, so I’m going to click on the hamburger menu icon.

Here we also see a big difference between the local INP value and the 75th percentile of field data. We could try different CPU throttling options or debug with a device connected via USB, but that’s another article. I think we already have a scenario that allows us to look for improvement opportunities.
At the bottom, we see a list of interactions. Every interaction we make on the page will be shown here, and the one with the highest value gets the INP label (with the color corresponding to the threshold that value falls into).

If we expand one of those interactions, we’ll see information about the INP value’s sub-parts. In this case, we see values (in ms) of Input delay: 41, Processing duration: 216, and Presentation delay: 55. Clearly, the problem is in the processing step, at 216 ms.
If we click on Local duration (ms), thanks to the Long Animation Frames API, Chrome shows us in the console a list of scripts that executed during the interaction. This list is a result of Chrome’s instrumentation for INP (Interaction to Next Paint) and shows which JavaScript code contributed to the processing time of the selected interaction. Each row corresponds to an event listener or callback that executed as a direct or indirect consequence of the input (click, tap, pointer, etc.).
This helps us detect which scripts and functions are causing a poor experience.
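We can also subscribe to these frames ourselves. A minimal sketch with the Long Animation Frames API that logs the scripts running inside each long frame:
// Observe long animation frames (>50 ms) and print the scripts
// that ran in each one, with their invoker and source URL.
new PerformanceObserver((list) => {
  for (const frame of list.getEntries()) {
    console.log(`long frame: ${Math.round(frame.duration)} ms`);
    for (const script of frame.scripts) {
      console.log(`  ${script.invoker}: ${Math.round(script.duration)} ms`, script.sourceURL);
    }
  }
}).observe({ type: 'long-animation-frame', buffered: true });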
Performance Panel
Let’s take a look at the most comprehensive part of DevTools for analyzing web performance: the Performance panel, when we record a page load or an interaction.
Another disclaimer: we could dedicate several articles (or a book) to this DevTools tab, so we’ll see some things I think can help you detect improvement opportunities.
Record and reload page
If we do a page load from the Performance tab with the “Record and reload” icon or ⌘ ⇧ E (on Mac), we’ll see something like this.

The Chrome DevTools team keeps updating DevTools with features that make debugging easier, so I want to thank and congratulate them for the fantastic work they’re doing.
One of those features that make analysis and debugging easier is the Insights sidebar, where we already have tips and advice that will help us improve the experience, such as:
Use efficient cache lifetimes

Here we have a list of resources with very short cache lifetimes. Optimizing this will help returning visitors, as they’ll have a cached version of those resources in the browser.
LCP request discovery

Here we can validate what we saw earlier when analyzing the HTML code of the element affecting the page’s LCP.
More things to highlight
At the top, we can see that the graph shows many yellow areas, indicating JavaScript execution. We can also see this at the bottom in the Summary, which tells us the browser spent 7 seconds executing JavaScript.
In the Network row, I’ve highlighted the HTML load, which was very slow on this occasion. Here I can see resources that take longer to download, or even the number of resources downloading in parallel, as in the following screenshot, where we have many render-blocking (red corner) JavaScript and CSS resources downloading in parallel.

By the way, in the Insights sidebar we’ll see Render blocking requests; if we select it, it highlights the render-blocking resources in the Network area.
Finally, I’ve highlighted one of the long tasks, which takes more than a second. That’s one of the points where we should dig deeper to improve the experience.
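The usual fix for a task like that is to break the work into chunks and yield back to the main thread between them, so input can be handled in the gaps. A generic sketch (not this website’s code), using scheduler.yield() where available:
// Split long-running work into chunks, yielding to the main thread
// between chunks so a >1 s task becomes many short ones.
async function processInChunks(items, processItem) {
  for (const item of items) {
    processItem(item);
    if (globalThis.scheduler?.yield) {
      await scheduler.yield(); // lets input and rendering run in between
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0)); // fallback
    }
  }
}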
Recording interactions
Another option we have in the Performance panel is recording interactions. We can do this with the Record button or ⌘ E; this starts the Performance Profiler recording until we stop it with the Stop button.
In this case, I recorded the action of opening the menu, surely one of the most used actions by this website’s visitors.

As we can see, it generates an INP of 279 ms, and if we select INP breakdown, it shows us the sub-parts graphically in the flame charts: Input delay: 24, Processing duration: 191, and Presentation delay: 62 (in ms). Clearly, this is a candidate for improvement.
Interactions with side effects
When analyzing that interaction, I noticed something interesting. When opening the menu, some images and videos are shown, and they download at that moment; we can see this in the Network row.

One of them really caught my attention, as it took 6 seconds to download. Let’s investigate (spoiler: I found something else interesting).
We go back to the Network tab, this time with the media filter, to list only video resources.

By the way, download times are now faster because there’s no throttling; what I’m interested in right now is the resource weight.
And there we have a couple of videos weighing 3.3MB and 29MB; let’s focus on the latter. Opening the video in a new tab, I see it’s a 5-second sequence of products on a white background, which makes me suspect it can definitely be optimized. Later we’ll see how I validated that hypothesis, but first, I want to show you something more interesting.

When opening the menu a couple more times, I see there’s another request for the video, but this time instead of using the parameter w=400, it uses w=398. It’s a different URL, so it downloads the same video again, just 2 pixels narrower.
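One way to avoid this kind of duplicate download is to snap the measured width to a fixed set of breakpoints before building the image or video URL. A hypothetical sketch:
// Round a measured width up to a fixed bucket so near-identical sizes
// (398 px vs 400 px) produce the same, cacheable URL.
const WIDTH_BUCKETS = [200, 400, 800, 1440]; // illustrative breakpoints

function bucketedWidth(measured) {
  return WIDTH_BUCKETS.find((w) => w >= measured) ?? WIDTH_BUCKETS.at(-1);
}

console.log(bucketedWidth(398)); // 400
console.log(bucketedWidth(400)); // 400 -> same URL, downloaded once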
Layers
Another thing I like to validate is whether the rendering of components outside the viewport is being optimized. For this, I use the Layers tab, which, although outdated, is still very useful for this validation.

Here we can see what the browser is rendering, and the fact that the product images below the viewport are not being rendered is good for performance. Let’s scroll to the footer.

Now we can see the images rendered: scrolling vertically and horizontally brings the components into the viewport, so the necessary resources are rendered.
#WebPerfTip Here we could do something to improve performance, and with it the page’s INP: use the content-visibility property to free up memory and reduce the number of elements styled in Recalculate style operations. In the post “content-visibility: the new CSS property that boosts your rendering performance” you’ll find information about this property, and in this Pull Request an implementation example and a video of the Layers tab showing how it works.
WebPerf Snippets
One of the tools I like for analyzing web performance is running scripts from the console. Over time, I’ve been creating and adapting scripts that help me detect improvement opportunities. Additionally, in the Sources tab of DevTools, we have a Snippets subtab where we can create and edit scripts to run. This saves us from having to copy and paste from external sources; we have everything integrated in the browser.
I’ll show you some of them. You can find them at WebPerf Snippets, a project that started as a GitHub repository to compile them and share them with the community; I ended up creating a site to make searching, browsing, and documentation easier.
Largest Contentful Paint Sub-Parts (LCP)
This is a script that displays a table in the console with the LCP Sub-Parts of the page.

This information is now integrated in Performance Insights, but we didn’t have it before, so this script helped me for quite a while.
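For reference, here is a much-reduced sketch of what such a script measures (the full snippet on the site is more robust): it combines the LCP entry with the resource timing of the LCP image to split the metric into TTFB, load delay, load duration, and render delay.
// Reduced sketch of an LCP sub-parts breakdown.
new PerformanceObserver((list) => {
  const lcp = list.getEntries().at(-1);
  const nav = performance.getEntriesByType('navigation')[0];
  const res = performance.getEntriesByType('resource').find((r) => r.name === lcp.url);
  const ttfb = nav.responseStart;
  const loadDelay = res ? res.requestStart - ttfb : 0;
  const loadDuration = res ? res.responseEnd - res.requestStart : 0;
  const renderDelay = lcp.startTime - (res ? res.responseEnd : ttfb);
  console.table({ ttfb, loadDelay, loadDuration, renderDelay });
}).observe({ type: 'largest-contentful-paint', buffered: true });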
CSS Media Queries Analysis
This snippet analyzes all @media rules in your CSS stylesheets to identify how much desktop-specific CSS you’re unnecessarily sending to mobile users.
It includes two complementary functions:
- analyzeCSSMediaQueries(): Identifies and quantifies CSS that’s only used in large viewports (by default >768px). It shows you exactly how many bytes mobile users could save.
- analyzeCSSPerformanceImpact(): Calculates the real performance impact of that unnecessary CSS on different device types (high-end, mid-range, and low-end). It shows you:
- Render blocking time
- Impact on Core Web Vitals (FCP, LCP, INP, TBT)
- Memory overhead
- Estimated conversion impact
The results will give you specific numbers and an implementation strategy to optimize.
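As a taste of how the first function works, here’s a much-simplified sketch of my own that walks document.styleSheets and sums the size of @media rules that only apply above 768px (same-origin stylesheets only, since cross-origin rules aren’t readable from the console):
// Rough estimate of how much CSS lives in desktop-only @media rules.
let desktopOnlyBytes = 0;
for (const sheet of document.styleSheets) {
  let rules;
  try {
    rules = sheet.cssRules; // throws for cross-origin stylesheets
  } catch {
    continue;
  }
  for (const rule of rules) {
    const match =
      rule instanceof CSSMediaRule &&
      rule.conditionText.match(/min-width:\s*(\d+)px/);
    if (match && Number(match[1]) > 768) {
      desktopOnlyBytes += rule.cssText.length; // ~1 byte per character
    }
  }
}
console.log(`~${(desktopOnlyBytes / 1024).toFixed(1)} KiB of desktop-only CSS`);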

Content Visibility
Related to what we’ve seen in the Layers tab, this other snippet helps us detect potential improvements with the CSS content-visibility property.
When running the script, detectContentVisibility() executes and tells us whether the website is using the content-visibility property. This CSS property is a powerful rendering optimization that allows browsers to skip the layout and painting work of off-screen content, significantly improving initial page load performance.

We also see the message To find optimization opportunities, run: analyzeContentVisibilityOpportunities(), which suggests running the function that will show us a list of potential DOM elements we could apply the CSS property to, showing the selector, height, distanceFromViewport, childElements, and even estimatedSavings with an estimated rendering time savings.

And an example implementation of the solution:
/* Optimize offscreen content */
.your-selector {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px; /* Use actual height */
}
I’ll let you take a look at the rest of the Snippets, as they help me detect performance improvements.
AI to Speed Up Performance Reviews
As you may have noticed, all the steps are very hands-on; I really think that’s what I like about this process. But AI can help us speed things up, interpret information (especially from the Performance Profiler), and generate reports that keep a consistent style and logic when prioritizing improvements.
The DevTools team has been working for some time on offering us AI tools for this, like AI assistance. We even have the recent Chrome DevTools MCP, but we’ll cover that in another article.
Bonus: Video Optimization
Whenever I see a video as a resource on a website, I can’t help but try to get a more optimized version. So in this case, I quickly tested an optimization with HandBrake, a free, cross-platform desktop application that lets us convert video between different formats.
In this case, it’s as simple as right-clicking the downloaded file and opening it with HandBrake from the context menu.

The next step for this example is to choose Web -> Creator 720p60 (High quality video for publishing via online services such as Vimeo and YouTube. H.264 video (up to 720p60) and high bit rate AAC stereo audio in an MP4 container.), one of the available presets, and click the “Start Encoding” button.

And the magic converts a 29 MB video into a 1 MB one.

This process is very manual, of course, but it helps me validate that we can get a more optimized version of the video. The next step would be to automate the conversion, either in the content creation step or in a server process running FFmpeg. Or even better, use a Media CDN service like Cloudinary that optimizes videos on demand, adapting them to devices (this is also material for another article).
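As a sketch of that server-side step, here’s a minimal Node.js wrapper around FFmpeg (my own example, not Zara’s pipeline; it assumes ffmpeg is on the PATH, and the flags are a reasonable starting point rather than the exact HandBrake preset):
// Minimal sketch: compress a video with FFmpeg from Node.js.
const { execFile } = require('node:child_process');

function compressVideo(input, output) {
  const args = [
    '-i', input,
    '-c:v', 'libx264',     // H.264, like the HandBrake Creator preset
    '-crf', '28',          // quality-based rate control; tune per content
    '-preset', 'slow',     // better compression, slower encode
    '-vf', 'scale=720:-2', // cap width at 720 px, keep aspect ratio
    '-an',                 // drop the audio track if it isn't needed
    output,
  ];
  execFile('ffmpeg', args, (error) => {
    if (error) throw error;
    console.log(`Encoded ${output}`);
  });
}

compressVideo('product-video.mp4', 'product-video-optimized.mp4');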
Conclusion
As you’ve seen, the process I use to detect performance improvement opportunities with DevTools is very hands-on. And we must keep in mind that:
- Every website is different.
- DevTools is continuously evolving with improvements that we need to adapt to. I’m sure this article will become outdated at some point.
- I couldn’t fit everything I do in a Performance Review into one article.
- It wasn’t the goal of this article to be a mega-manual of DevTools either.
I hope this has been useful, and don’t hesitate to share your processes or tricks; that’s what makes us one of the best communities.