How to Scrape a Web Page with Node.js
Web scraping with Node.js can streamline your data extraction process, especially when paired with the Puppeteer library. This guide will walk you through building a simple web scraper to handle dynamic content. If you’re interested in another approach, check out my previous blog post, "Instant Data Scraping Techniques: A Next.js Guide", which explores data scraping using Next.js. Both guides offer valuable insights into different scraping techniques.
A Step-By-Step Guide
1. Set Up the Environment
First, install Node.js and npm on your machine if you haven’t already. Then initialize a Node.js project by running this command in the terminal:
npm init
2. Install Puppeteer Library
Next, we need Puppeteer as the web scraping library. Install it with the command:
npm install puppeteer
3. Determine the Target URL
If you haven’t already, create a file named “index.js” in the root directory of your project; this is where the main function will live.
Determine the page you want to scrape with the URL of the page. In this example, we’re going to scrape “https://en.wikipedia.org/wiki/Web_scraping”.
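Before launching a browser, it can be worth sanity-checking the target. Node’s built-in `URL` class throws on malformed input, so invalid targets fail fast (the `checkUrl` helper below is illustrative, not part of Puppeteer):

```javascript
// new URL() throws a TypeError on malformed input, so this doubles
// as a cheap validity check before any browser work starts.
function checkUrl(raw) {
  const parsed = new URL(raw); // throws if `raw` is not a valid URL
  return parsed.href;          // the normalized URL string
}
```

For example, `checkUrl("https://en.wikipedia.org/wiki/Web_scraping")` returns the URL unchanged, while `checkUrl("not a url")` throws.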
const url = "https://en.wikipedia.org/wiki/Web_scraping";
4. Set Up the Scraping Function
Set up the main function for the scraping activity. Since we’re using Puppeteer, don’t forget to import the library.
const puppeteer = require("puppeteer");
async function scrape(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
}
5. Define the Data Selector
Next, define the selector of the data you want to scrape. In this example, we want the references for the Wikipedia page, which has the selector “.references li”.
const references = await page.evaluate(() => {
  return [...document.querySelectorAll(".references li")].map(
    (element) => element.innerText
  );
});
After extracting the data, close the Puppeteer browser with:
await browser.close();
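Note that if `page.goto` or `page.evaluate` throws, the line above it never runs and the browser process is left open. A common pattern is to close the browser in a `try`/`finally` block. Here is a minimal sketch of that pattern; `launch` stands in for `puppeteer.launch`, and `work` is whatever scraping you do with the browser:

```javascript
// Ensures the browser always closes, whether `work` succeeds or throws.
async function withBrowser(launch, work) {
  const browser = await launch();
  try {
    return await work(browser);
  } finally {
    await browser.close(); // runs on success and on error alike
  }
}
```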
6. Store the Scraping Result
Finally, after successfully extracting the data, save the result in a structured format such as JSON or CSV. In this example, we’ll export it as JSON.
const fs = require("fs");
fs.writeFileSync("result.json", JSON.stringify(references));
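If you prefer CSV, the references are plain strings, so a one-column file is enough. The `toCsv` helper below is an illustrative sketch, not part of Puppeteer: it doubles embedded quotes and wraps each field in quotes, per the usual CSV convention:

```javascript
// Convert an array of strings into a one-column CSV document.
function toCsv(rows, header = "reference") {
  const escape = (field) => `"${String(field).replace(/"/g, '""')}"`;
  return [header, ...rows.map(escape)].join("\n");
}
```

You would then write it out with `fs.writeFileSync("result.csv", toCsv(references));`.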
The complete function should look like this:
const puppeteer = require("puppeteer");
const fs = require("fs");

const url = "https://en.wikipedia.org/wiki/Web_scraping";

async function scrape(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const references = await page.evaluate(() => {
    return [...document.querySelectorAll(".references li")].map(
      (element) => element.innerText
    );
  });
  await browser.close();
  fs.writeFileSync("result.json", JSON.stringify(references));
}

scrape(url);
Conclusion
Using Puppeteer with Node.js simplifies web scraping by enabling you to automate the extraction of dynamic content from websites. With a straightforward setup, you can configure Puppeteer to navigate web pages, select and extract data, and export the results in a structured format. This approach not only enhances efficiency but also provides flexibility for various scraping tasks, making it a powerful solution for gathering and managing web information.
While it is easy to scrape a web page with Node.js, it can be even easier with MrScraper. We provide a no-code web scraping tool designed for users who prefer a straightforward, intuitive interface. Just provide the URL of the website you want to scrape and tell ScrapeGPT, our AI, what data you need; it will handle the scraping process for you.