If you use Playwright or you’re interested in how it’s evolving, you probably know that it offers numerous integrations that extend its default functionality for both API and UI testing.
Some time ago, I wrote about using Playwright’s built-in functionality for visual regression testing, and I also covered Visual Regression Tracker, a helpful open-source tool that extends Playwright’s default visual regression capabilities.
Today, I’d like to present how to integrate Lighthouse with Playwright.
What is Lighthouse?
Lighthouse is an open-source tool developed by Google that allows us to generate reports containing information about a web application’s performance from the user’s perspective. Lighthouse is also built into the Chrome browser and available in Chrome DevTools, allowing users to generate reports for any web application. We can select whether we want the report data to reflect a mobile emulator or desktop view.
The Lighthouse report contains key information related to performance, SEO, and accessibility for a particular web page.
What aspects does Lighthouse analyze?
Performance:
Measures how quickly a webpage loads and responds to user interactions. Key metrics (described in the Chromium blog post introducing Web Vitals: https://blog.chromium.org/2020/05/introducing-web-vitals-essential-metrics.html) include:
- First Contentful Paint (FCP): Measures the time from page load to when the first piece of content (like text or image) appears on the screen.
- Largest Contentful Paint (LCP): Marks the time when the largest visible content element (like a hero image or large text block) finishes loading.
- Total Blocking Time (TBT): Calculates the total time between FCP and TTI where the main thread is blocked and can’t respond to user input.
- Cumulative Layout Shift (CLS): Quantifies how much visible content shifts on the page during its lifetime, affecting visual stability.
- Time to Interactive (TTI): Measures how long it takes for a page to become fully interactive and reliably respond to user input.
Accessibility:
Evaluates whether the webpage is usable by people with disabilities, focusing on compliance with WCAG (Web Content Accessibility Guidelines). Examples include:
- Color contrast issues
- Missing alt text for images
If you need extended accessibility tests beyond Lighthouse’s capabilities, you can use specialized tools like axe-core or pa11y, helpful open-source solutions that integrate easily with Playwright. Another useful library I’ve already written about is Visual Regression Tracker, which extends Playwright’s built-in visual regression capabilities (link).
SEO (Search Engine Optimization):
Checks whether the page is optimized for search engines. Examples include:
- Proper meta tags (title, description)
- Mobile-friendliness
- Structured data (schema.org) markup
Best Practices:
Evaluates overall adherence to web-development best practices. Examples include:
- Use of HTTPS
- Secure JavaScript libraries
- Avoidance of deprecated APIs
Progressive Web App (PWA):
Ensures the site is installable and provides a reliable offline experience. This includes checking for:
- Service worker presence
- Web app manifest
- Offline support
When should you run these tests?
Single-page Applications (SPA):
When dealing with Single-page Applications (SPAs) developed with frameworks like React, Vue, or Angular, client-side performance testing is crucial. It’s essential to verify how quickly and smoothly the application responds to user interactions, as these applications heavily rely on dynamic content loading without page reloads.
Optimization for Resource Loading:
When the web application loads numerous resources (e.g., images, CSS files, JavaScript files), testing client-side performance helps ensure acceptable loading times, contributing to a positive user experience.
Response to User Interactions:
Tests should confirm that interactions trigger smooth, immediate feedback and short response times, enhancing overall usability and satisfaction.
Mobile Device Performance:
Client-side performance is particularly crucial for mobile devices, which typically have fewer resources, limited RAM, and often slower internet connections. Testing in a mobile context helps confirm that the application functions smoothly despite these constraints.
JavaScript Rendering and Resource Optimization:
If the application heavily relies on large JavaScript files or extensive DOM operations (e.g., animations, UI updates), testing helps identify processes causing performance drops and allows verification of efficient page rendering.
When pages are used for sales purposes—for example, landing pages leading users to a shopping cart—it’s worth improving their performance. This applies to subpages as well, especially those frequently visited or drawing users’ attention.
How to integrate Lighthouse with Playwright?
One idea for integrating Playwright with Lighthouse is using the playwright-lighthouse library—but does this library actually work without problems?
Some time ago, during a PoC (Proof of Concept) integration of Lighthouse with Playwright for one of my projects, I encountered some challenges with using this library. I already knew it was possible to integrate Lighthouse with Playwright, and that a library existed for this purpose. However, when I tried to set up the configuration, I encountered some issues, for example:
- Problems passing cookies for authentication to the library.
- Lack of updates—the latest version is over a year old.
- Other issues reported by users, for instance:
- Issue #82
- Issue #109
In this case, I wondered how I could achieve this in another way. I realized that I could integrate the original Lighthouse library with Playwright. In today’s article, I’m going to show you how you can do that and what’s important about this challenge.
Let’s start with the basics.
Our test will involve navigating to the homepage of my blog and then running the Lighthouse audit. The URL will be passed via a YAML file. Apart from the URL, I will also pass other parameters, such as thresholds, which define the minimum acceptable scores for the areas measured by Lighthouse, including performance, SEO, best practices, and accessibility.
Configure Playwright with Lighthouse
First of all, I’m going to start by creating a project with Playwright. Run the following command:
npm init playwright@latest
Next, I’m going to install the latest version of Lighthouse by running:
npm install lighthouse
It’s important to remember that the Node.js version should be at least v18.
Add configuration via YAML file
Our YAML settings file looks like this:
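A minimal sketch of such a file, with field names matching the Types.ts interfaces described later in this article; the URL and threshold values are placeholders, and the headless flag under chrome is an assumption based on how the test later reads it:

```yaml
baseUrl: "https://example.com"   # page under test (placeholder)
chrome:
  debuggingPort: 9222            # port reused by Lighthouse
  headless: true                 # run Chromium without a visible window
thresholds:                      # minimum acceptable Lighthouse scores
  performance: 90
  accessibility: 90
  best-practices: 90
  seo: 90
formFactor: "desktop"            # or "mobile"
logLevel: "info"                 # silent | error | info | verbose
onlyCategories:
  - performance
  - accessibility
  - best-practices
  - seo
```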
It’s a simple configuration that lets us supply values externally (for example, from CI pipelines) instead of hardcoding them in the tests. We define key values such as the performance and accessibility thresholds.
We start by setting the URL, then we configure a port for the Chrome browser. This step is essential for running Lighthouse because we must reuse the same browser instance.
The next step involves configuring Chromium to run in headless mode. This means the test executes without opening a visible browser window, running instead as a background process.
We also need interfaces that describe the shape of the parsed YAML configuration.
Types.ts
At the beginning, we define the LighthouseScoreThresholds interface, which determines the expected values for different Lighthouse metrics such as performance, accessibility, best practices, and SEO.
Next, we declare the ScoreKey type, which represents keys from the LighthouseScoreThresholds interface.
The ChromeConfig interface contains only one field—debuggingPort—which indicates the port used by Chrome during automated tests. However, it’s an interface that we can easily extend in the future.
The main interface, LighthouseConfig, aggregates all essential settings needed for running a Lighthouse test. It contains:
- baseUrl – the URL of the page under test
- chrome – configuration for the browser (represented by ChromeConfig)
- thresholds – threshold values defining acceptable Lighthouse scores
- formFactor – specifies whether the test runs in mobile or desktop mode
- logLevel – specifies the level of logging details (silent, error, info, verbose)
- onlyCategories – an optional list of Lighthouse categories to test
The last definition is ScoreResult, which is a map (Record) where the keys are Lighthouse metrics (ScoreKey) and the values are their numerical results.
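A sketch of how Types.ts could look based on this description; the exact property names are assumptions, and an optional headless flag is included because the test code later reads config.chrome.headless:

```typescript
// Types.ts – type definitions inferred from the description above
export interface LighthouseScoreThresholds {
  performance: number;
  accessibility: number;
  'best-practices': number;
  seo: number;
}

// Keys of the thresholds interface, e.g. 'performance' | 'seo' | ...
export type ScoreKey = keyof LighthouseScoreThresholds;

export interface ChromeConfig {
  debuggingPort: number;
  headless?: boolean; // assumed: read later as config.chrome.headless
}

export interface LighthouseConfig {
  baseUrl: string;                           // URL of the page under test
  chrome: ChromeConfig;                      // browser configuration
  thresholds: LighthouseScoreThresholds;     // minimum acceptable scores
  formFactor: 'mobile' | 'desktop';          // Lighthouse emulation mode
  logLevel: 'silent' | 'error' | 'info' | 'verbose';
  onlyCategories?: string[];                 // optional category filter
}

// Map of Lighthouse metrics to their numerical results
export type ScoreResult = Record<ScoreKey, number>;
```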
The next step is to create a file named lighthouse.config.ts, which defines the path to the configuration file in YAML format. The default path is passed as an argument to the constructor, but there is an option to specify an alternative configuration, for example, with different settings.
If the file path is not provided as a parameter, the system will use the LIGHTHOUSE_CONFIG_PATH environment variable. If neither value is set, the code throws an error stating that a configuration path must be provided.
Additionally, this implementation supports loading a YAML file and converting threshold values to numbers. It also creates the lighthouse-reports directory if it doesn’t already exist.
package.json
lighthouse.config.ts
First of all, we need to initialize envConfigPath, which is taken from process.env.LIGHTHOUSE_CONFIG_PATH; this path is then used to load the config.
One of the essential methods is ensureReportsDirectory(), which is responsible for creating the directory for Lighthouse reports. If the directory doesn’t exist, the method creates it to store the generated results.
The loadConfig() method loads the Lighthouse configuration from a YAML file, then parses it into the LighthouseConfig interface, and returns an object containing all the defined fields. It’s a key element because test parameters are defined based on this object.
The next step is to add a key method—audit()—which is responsible for auditing a page using Lighthouse:
- This method accepts a URL parameter, which by default is taken from the YAML file containing all settings. The user can provide another URL if they want to test a different site.
- Next, we import and define Lighthouse so that we can use it.
- In the next stage, we configure the report by setting crucial parameters such as:
- Port – taken from the YAML file
- Log level – determines the detail level in logs
- formFactor – indicates whether tests run in mobile or desktop mode
Finally, I need to define the report format. In our case, it’s HTML, but it’s easy to extend the implementation to support other formats such as JSON or CSV. These formats are more straightforward for further processing or analyzing results compared to HTML.
The last methods in the class are getThresholds() and getConfig(), which will be used in tests.
- getThresholds() – returns an object containing threshold values for specific Lighthouse metrics. Using Readonly<LighthouseScoreThresholds> ensures these values cannot be modified outside of the class.
- getConfig() – returns the entire test configuration as a LighthouseConfig object. Similarly to the getThresholds() method, using Readonly<LighthouseConfig> ensures that the configuration can’t be changed after being returned.
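A condensed sketch of how lighthouse.config.ts could be put together from the description above; the method bodies are illustrative, and js-yaml as the parser and the report file name are my assumptions rather than the exact original implementation:

```typescript
// lighthouse.config.ts – service class sketch based on the description above
import fs from 'fs';
import path from 'path';
import yaml from 'js-yaml';          // assumed YAML parser
import lighthouse from 'lighthouse';
import type { LighthouseConfig, LighthouseScoreThresholds } from './Types';

export class LighthouseService {
  private readonly config: LighthouseConfig;
  private readonly reportsDir = path.resolve('lighthouse-reports');

  constructor(configPath?: string) {
    // Fall back to the LIGHTHOUSE_CONFIG_PATH environment variable.
    const envConfigPath = process.env.LIGHTHOUSE_CONFIG_PATH;
    const resolvedPath = configPath ?? envConfigPath;
    if (!resolvedPath) {
      throw new Error('Path to the Lighthouse YAML configuration is required.');
    }
    this.config = this.loadConfig(resolvedPath);
    this.ensureReportsDirectory();
  }

  // Load the YAML file and parse it into the LighthouseConfig shape.
  private loadConfig(filePath: string): LighthouseConfig {
    return yaml.load(fs.readFileSync(filePath, 'utf8')) as LighthouseConfig;
  }

  // Create the lighthouse-reports directory if it doesn't exist yet.
  private ensureReportsDirectory(): void {
    if (!fs.existsSync(this.reportsDir)) {
      fs.mkdirSync(this.reportsDir, { recursive: true });
    }
  }

  // Audit a page with Lighthouse, reusing the already running Chrome
  // instance via its remote-debugging port.
  async audit(url: string = this.config.baseUrl) {
    const result = await lighthouse(url, {
      port: this.config.chrome.debuggingPort,
      logLevel: this.config.logLevel,
      // Desktop runs may additionally need matching screen-emulation
      // settings; see the Lighthouse configuration docs.
      formFactor: this.config.formFactor,
      onlyCategories: this.config.onlyCategories,
      output: 'html',
    });
    if (result?.report) {
      fs.writeFileSync(
        path.join(this.reportsDir, 'report.html'),
        result.report as string,
      );
    }
    return result;
  }

  getThresholds(): Readonly<LighthouseScoreThresholds> {
    return this.config.thresholds;
  }

  getConfig(): Readonly<LighthouseConfig> {
    return this.config;
  }
}
```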
Example of a Playwright test using Lighthouse
performanceTest.spec.ts file
extractScore and assertScores methods
Tests start by declaring lighthouseService in the beforeEach block, which ensures initialization before each test.
1. Take configuration
Using the getConfig() method, we obtain the settings that were previously parsed from the YAML file. This approach helps us avoid problems related to hardcoded values.
2. Launch the Chrome browser
- The --remote-debugging-port argument specifies the port number used by Lighthouse; taking it from the configuration helps us avoid potential issues related to hardcoded values.
- We specify the browser launch mode (headless or UI mode). Since Chromium 109, launching the browser in headless mode requires the --headless=new argument.
- In the code (line 21), we check the value of config.chrome.headless. If it’s true, we use the new --headless=new parameter; otherwise, the browser runs in UI mode.
3. Creating a context and page in Playwright
- We create a new context and page in Playwright, which we pass to the HomePage object.
- navigate() – this method navigates to the page under test.
4. Execution of the Lighthouse audit
- The audit() method runs the Lighthouse test.
- The results are processed by the extractScore(audit) method.
- Next, we compare the scores against the thresholds using the assertScores(scores, lighthouseService.getThresholds()) method.
5. Close the browser
- At the end of the test, the browser is closed by calling browser.close().
6. The extractScore method returns a score record containing all the necessary data for the specified keys. The assertScores method asserts all scores against the expected thresholds.
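Putting the steps above together, performanceTest.spec.ts could look roughly like this; the helper implementations and the HomePage import follow the description, but their exact signatures are my assumptions rather than the article’s original code:

```typescript
// performanceTest.spec.ts – test flow reconstructed from the steps above
import { test, expect, chromium, type Browser } from '@playwright/test';
import { LighthouseService } from './lighthouse.config';
import { HomePage } from './HomePage';
import type { LighthouseScoreThresholds, ScoreKey, ScoreResult } from './Types';

let lighthouseService: LighthouseService;

test.beforeEach(() => {
  // Initialize the service (and parse the YAML config) before each test.
  lighthouseService = new LighthouseService();
});

test('homepage meets the Lighthouse thresholds', async () => {
  const config = lighthouseService.getConfig();

  // Launch Chrome with the remote-debugging port that Lighthouse will reuse.
  const browser: Browser = await chromium.launch({
    headless: false, // visibility is controlled by the --headless=new arg below
    args: [
      `--remote-debugging-port=${config.chrome.debuggingPort}`,
      ...(config.chrome.headless ? ['--headless=new'] : []),
    ],
  });

  // Create a context and page, then open the page under test.
  const context = await browser.newContext();
  const page = await context.newPage();
  const homePage = new HomePage(page, config);
  await homePage.navigate();

  // Run the Lighthouse audit and compare the scores with the thresholds.
  const audit = await lighthouseService.audit();
  const scores = extractScore(audit);
  assertScores(scores, lighthouseService.getThresholds());

  // Close the browser at the end of the test.
  await browser.close();
});

// Convert the 0–1 category scores from the Lighthouse result into 0–100 values.
function extractScore(audit: any): ScoreResult {
  const categories = audit?.lhr?.categories ?? {};
  const entries = Object.entries(categories).map(
    ([key, category]: [string, any]) => [key, Math.round((category.score ?? 0) * 100)],
  );
  return Object.fromEntries(entries) as ScoreResult;
}

// Assert every score against its expected threshold.
function assertScores(
  scores: ScoreResult,
  thresholds: Readonly<LighthouseScoreThresholds>,
): void {
  for (const [key, threshold] of Object.entries(thresholds)) {
    expect(scores[key as ScoreKey], `${key} score`).toBeGreaterThanOrEqual(threshold);
  }
}
```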
HomePage class – provides navigation during tests
Constructor of the HomePage class
The HomePage class constructor takes two parameters:
- Page – this is a page object, which is necessary for interacting with the browser page via Playwright.
- Config – this is a LighthouseConfig object, which contains settings parsed from a YAML file.
navigate() method
The navigate() method is responsible for opening the homepage of the web application before the Lighthouse audit runs.
Here’s how it works:
- It navigates to the page defined by baseUrl, taken from the YAML configuration.
- It waits for the h1 heading to load, ensuring that the page is fully loaded before running the test.
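A sketch of the HomePage class along those lines; the h1 wait follows the description, while the file name and locator details are assumptions:

```typescript
// HomePage.ts – minimal page object for the Lighthouse test
import type { Page } from '@playwright/test';
import type { LighthouseConfig } from './Types';

export class HomePage {
  constructor(
    private readonly page: Page,
    private readonly config: LighthouseConfig,
  ) {}

  // Open the page under test and wait for the main heading, so the audit
  // starts from a fully loaded page.
  async navigate(): Promise<void> {
    await this.page.goto(this.config.baseUrl);
    await this.page.locator('h1').waitFor();
  }
}
```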
Benefits of Using YAML Configuration
One of the main advantages of using external configuration files is flexibility.
The path to the YAML file is stored in .env, which makes it easy to change configurations without modifying the code.
Thanks to this, we can run different test scenarios by simply updating the configuration path.
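For example, the .env file can contain nothing more than the path itself (the file name below is a placeholder):

```
LIGHTHOUSE_CONFIG_PATH=./config/lighthouse.yaml
```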
What Does Our Generated Report Look Like?
I ran a Lighthouse test for the homepage of my blog.
The current results are strong. However, in earlier stages — during the blog’s creation and configuration process — the results were worse.
I had to rescale images, enable caching, and introduce several other improvements to achieve efficient resource loading.
Optimization is an ongoing process.
As the page evolves and the context changes, it’s important to run these tests regularly and apply necessary improvements.
I encourage you to use Lighthouse tests consistently for your web applications.
Analyzing the reports helps identify potential issues, and implementing improvements boosts performance, accessibility, and overall user experience.
It’s also important to remember that there is an almost infinite number of test configurations — from internet speed to screen resolution to the type of device used.
Running tests under different conditions helps uncover areas that need further optimization.
Are the tests in Lighthouse stable?
When starting to work with client-side performance testing, it’s important to note that the results can be highly dependent on the quality of the internet connection and the environment in which the tests are run. Therefore, it’s recommended to execute them from a location with a stable internet connection and on a machine with sufficient resources.
It’s also important to consider the time when these tests are performed. For example, running tests during peak hours (e.g., 11:00 a.m.), when more users are active in the same environment, can affect the results and make them less reliable. I suggest running the tests at the same time each day, ideally during periods when the application is not in active use and system resources are more available.
Why do I suggest using Lighthouse?
Regularly running client-side performance tests allows us to quickly detect potential performance drops in the web application.
Lighthouse can emulate slower internet connections, which is useful — not everyone in the world has access to 4G or 5G, so resources may load more slowly.
Thanks to Lighthouse report analysis, we can identify which elements of our page need improvements, whether all resources are optimized in size, or if we need to implement caching mechanisms.
How to integrate Lighthouse into our CI/CD process?
GitHub Actions & Other CI/CD Tools
One idea for using Lighthouse is to configure a job that runs tests daily and sends the reports to Teams or Slack. This allows the whole team to monitor how the application’s performance evolves.
In this approach, we can set thresholds — for example, to ensure the performance score never drops below 90%. Additionally, we can easily add logic to compare current results with previous ones and detect potential regressions.
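As a rough illustration, a scheduled GitHub Actions job along these lines could run the audit daily and publish the HTML report as an artifact; the workflow name, schedule, and script invocation are assumptions, and the Teams/Slack notification step is omitted:

```yaml
# .github/workflows/lighthouse.yml – a sketch, not a complete pipeline
name: lighthouse-audit
on:
  schedule:
    - cron: '0 4 * * *'   # run daily at 04:00 UTC
  workflow_dispatch:       # allow manual runs

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install chromium
      - run: npx playwright test performanceTest.spec.ts
        env:
          LIGHTHOUSE_CONFIG_PATH: ./config/lighthouse.yaml
      - uses: actions/upload-artifact@v4
        with:
          name: lighthouse-reports
          path: lighthouse-reports/
```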
Alternatives?
Sitespeed.io
Sitespeed.io is one of the most interesting open-source tools for performance testing. It includes various tools to measure key performance metrics.
What additional features does it offer?
Thanks to its extended capabilities, we can also test how quickly a video loads on a page, among other things.
Integration with Playwright?
Some might wonder whether Sitespeed.io integrates with Playwright. This question was raised some time ago on their GitHub page.
https://github.com/sitespeedio/sitespeed.io/issues/3452
Unlighthouse
Unlighthouse is an interesting Lighthouse-based alternative: it runs Lighthouse audits on all detected pages in your application and generates a comprehensive report for each discovered URL.
It’s also possible to pass authentication data, allowing you to test restricted or protected pages.
Puppeteer & Lighthouse
Another option is using the officially supported approach of combining Lighthouse with Puppeteer to test authenticated areas. Since Google maintains both tools, this integration is well supported.
WebPageTest
WebPageTest is another interesting tool for advanced client-side performance testing. Unlike Lighthouse — which performs a quick audit and provides key insights — WebPageTest offers more in-depth analysis.
One feature that Lighthouse lacks is video rendering analysis. WebPageTest allows you to examine how videos are loaded and rendered on a page in more detail.
It’s a good idea to combine both tools, for example via Sitespeed.io — Lighthouse can be used for frequent, quick tests, while WebPageTest is ideal after major changes. The ultimate choice depends on your project’s goals and specific needs.
Is it possible to use Lighthouse with Playwright?
Absolutely! While it’s possible to use Lighthouse on its own, integrating it with Playwright offers some significant advantages.
One key benefit is the ability to easily navigate to specific web pages in an authenticated state. This allows you to run tests under more realistic conditions.
Running such tests directly in Lighthouse is more difficult because it doesn’t natively support login flows. There are workarounds for passing authentication tokens, but tools like Puppeteer or Playwright simplify the process significantly, making it easier to test protected areas.
Summary
In this article, I demonstrated how to integrate Lighthouse with Playwright, highlighted key considerations, and explained how to avoid common pitfalls.
In the future, I’ll demonstrate how to configure a GitHub Actions pipeline that helps detect client-side performance issues automatically.