Lighthouse with Puppeteer
Google Lighthouse run programmatically with Google Puppeteer.
I wanted to explore running Lighthouse as part of our CI/CD delivery pipeline. Fortunately, Lighthouse provides a means of running it programmatically. All it needed was some Puppeteer magic to do things like log in to the site before Lighthouse could be run against the page I wanted audited, and then generate the reports.
In a nutshell, this is what it does:
Launch chrome
Connect lighthouse to chrome
Run puppeteer to login
Run lighthouse against the desired URL post authentication
Generate HTML and JSON reports
This is a simple implementation. A full example with a README can be found on GitHub at https://github.com/joviano-dias-springernature/lighthouse-puppeteer
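The five steps above can be sketched roughly as follows. This is a sketch under stated assumptions: the `lighthouse`, `chrome-launcher`, and `puppeteer` packages are installed (older, CommonJS-era APIs), and the login URL and form selectors are hypothetical placeholders, not the real site's — see the linked repo for the working version.

```javascript
// Sketch only: assumes lighthouse, chrome-launcher and puppeteer are installed.
// Selectors (#username, #password, #login) are hypothetical placeholders.
async function runAudit(url, loginUrl, credentials) {
  const chromeLauncher = require('chrome-launcher');
  const puppeteer = require('puppeteer');
  const lighthouse = require('lighthouse');
  // Note: the require path for ReportGenerator varies by Lighthouse version.
  const reportGenerator = require('lighthouse/report/generator/report-generator');

  // 1. Launch Chrome on a known debugging port
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});

  // 2./3. Connect Puppeteer to that same Chrome instance and log in
  const browser = await puppeteer.connect({
    browserURL: `http://localhost:${chrome.port}`,
  });
  const page = await browser.newPage();
  await page.goto(loginUrl);
  await page.type('#username', credentials.username);
  await page.type('#password', credentials.password);
  await Promise.all([page.click('#login'), page.waitForNavigation()]);

  // 4. Run Lighthouse against the desired URL, post authentication
  const report = await lighthouse(url, {port: chrome.port, output: 'json'});

  // 5. Generate HTML and JSON reports from the Lighthouse result (lhr)
  const html = reportGenerator.generateReport(report.lhr, 'html');
  const json = reportGenerator.generateReport(report.lhr, 'json');

  await browser.disconnect();
  await chrome.kill();
  return {html, json};
}
```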
An Advanced Example
Going further, I wanted to do the following additional things:
Run lighthouse against multiple pages/URLs using puppeteer
Read the scores from the JSON report
Compare the scores against baseline value
Fail the run and alert on Slack if the scores fall below the baseline values
How to generate an HTML report using Google Puppeteer
When run programmatically with Puppeteer, Google Lighthouse can generate reports in the format you choose.
In HTML
const html = reportGenerator.generateReport(report.lhr, 'html');
In JSON
const json = reportGenerator.generateReport(report.lhr, 'json');
How do you sieve out scores from the Lighthouse JSON?
So now you have the JSON report. How do you parse its nodes to get the relevant scores, so you can compare them programmatically against a baseline set? Like this:
const categories = JSON.parse(json).categories;
let scores = {
    Performance: categories.performance.score,
    Accessibility: categories.accessibility.score,
    "Best Practices": categories["best-practices"].score,
    SEO: categories.seo.score
};
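To see the extraction in isolation, here is the same parsing applied to a minimal, made-up fragment shaped like a Lighthouse JSON report (real reports carry many more fields):

```javascript
// A minimal, made-up fragment shaped like a Lighthouse JSON report.
const json = JSON.stringify({
  categories: {
    performance: {score: 0.92},
    accessibility: {score: 0.87},
    'best-practices': {score: 0.79},
    seo: {score: 1.0},
  },
});

const categories = JSON.parse(json).categories;
const scores = {
  Performance: categories.performance.score,
  Accessibility: categories.accessibility.score,
  'Best Practices': categories['best-practices'].score,
  SEO: categories.seo.score,
};

console.log(scores.Performance); // → 0.92
```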
How do you compare the Lighthouse JSON scores against your baseline set?
We set our baseline at 0.8 (80%), so every audited page needs to score at least 0.8 on each of these metrics.
let baselineScores = {
    "Performance": 0.80,
    "Accessibility": 0.80,
    "Best Practices": 0.80,
    "SEO": 0.80
};

const BreakException = {};

try {
    Object.keys(baselineScores).forEach(key => {
        let baselineValue = baselineScores[key];
        if (scores[key] != null && baselineValue > scores[key]) {
            console.log(`${app_name}: ${key} score ${scores[key] * 100}% for ${reportName} is less than the defined baseline of ${baselineValue * 100}%`);
            // [ADD_SLACK_ALERT]
            throw BreakException;
        }
    });
} catch (e) {
    if (e !== BreakException) throw e;
}
So our test now fails if any captured score falls below our baseline:
- scores["Performance"] is compared against baselineScores["Performance"]
- scores["Performance"] should be above the defined 0.8, else we alert
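The comparison can also be factored into a small pure function that returns every failing category, which makes it easy to unit-test. A minimal sketch — `findFailures` is my own name, not from the original repo:

```javascript
// Returns the categories whose score falls below the baseline.
// Hypothetical helper: the function name is illustrative.
function findFailures(scores, baselineScores) {
  return Object.keys(baselineScores).filter(
    key => scores[key] != null && scores[key] < baselineScores[key]
  );
}

const baselineScores = {
  Performance: 0.80,
  Accessibility: 0.80,
  'Best Practices': 0.80,
  SEO: 0.80,
};

const failures = findFailures(
  {Performance: 0.92, Accessibility: 0.87, 'Best Practices': 0.79, SEO: 1.0},
  baselineScores
);
console.log(failures); // → [ 'Best Practices' ]
```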
How do you add Slack alerts when a score falls below the baseline?
You can use slack-notify to achieve this, in place of [ADD_SLACK_ALERT]:
var MY_SLACK_WEBHOOK_URL = 'https://myaccountname.slack.com/services/hooks/incoming-webhook?token=myToken';
var slack = require('slack-notify')(MY_SLACK_WEBHOOK_URL);

slack.alert('Something important happened!'); // Posts to #alerts by default
This is the full version of the implementation. Note that:
- It waits for the full test to run before the Slack alerts are initiated.
- It creates a report with a different name for each test run as per the name provided.
A complete example in code can be found on my GitHub at https://github.com/joviano-dias-springernature/lighthouse-puppeteer/blob/master/lighthouse-puppeteer-slack/lighthouse-tests.js