How to Scrape a Web Page with Node.js

Web scraping with Node.js can streamline your data extraction process, especially when paired with the Puppeteer library. This guide will walk you through building a simple web scraper to handle dynamic content. If you’re interested in another approach, check out my previous blog post, "Instant Data Scraping Techniques: A Next.js Guide", which explores data scraping using Next.js. Both guides offer valuable insights into different scraping techniques.
A Step-By-Step Guide
1. Set Up the Environment
First, install Node.js and npm on your machine if you haven’t already. Then initialize a new Node.js project by running this command in the terminal:
npm init
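Tip: npm init -y skips the interactive prompts and accepts the defaults.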
2. Install Puppeteer Library
Next, we’ll use Puppeteer as our web scraping library. Install it with the command:
npm install puppeteer
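Note that Puppeteer downloads its own compatible browser build as part of the installation, so this command can take a little while to finish.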
3. Determine the Target URL
If you haven’t already, create a JavaScript file named “index.js” in the root directory of your project; this is where the main function will live.
Decide which page you want to scrape and note its URL. In this example, we’re going to scrape “https://en.wikipedia.org/wiki/Web_scraping”.
const url = "https://en.wikipedia.org/wiki/Web_scraping";
4. Set Up the Scraping Function
Set up the main function that will perform the scraping. Since we’re using Puppeteer, don’t forget to import the library at the top of the file.
const puppeteer = require("puppeteer");

async function scrape(url) {
  // Launch a headless browser instance
  const browser = await puppeteer.launch();
  // Open a new tab and navigate to the target page
  const page = await browser.newPage();
  await page.goto(url);
}
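By default, page.goto resolves once the page fires its load event, which can be too early for pages that render content with JavaScript. If the data you need appears late, one option is to wait for network activity to settle; a minimal sketch:

// Wait until there are no more than 2 network connections
// for at least 500 ms before continuing
await page.goto(url, { waitUntil: "networkidle2" });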
5. Define the Data Selector
Next, define a selector for the data you want to scrape. In this example, we want the references on the Wikipedia page, which match the selector “.references li”.
const references = await page.evaluate(() => {
  // Runs in the page context: collect the text of every
  // list item inside the references section
  return [...document.querySelectorAll(".references li")].map(
    (element) => element.innerText
  );
});
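The same pattern works for attributes as well as text. For example, here’s a hypothetical variant that collects the href of each citation link instead; the “.references li a” selector is an assumption about the page’s markup:

const links = await page.evaluate(() => {
  // Hypothetical selector: every link inside a reference entry
  return [...document.querySelectorAll(".references li a")].map(
    (element) => element.href
  );
});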
After extracting the data, close the Puppeteer browser with:
await browser.close();
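As the script grows, it’s worth guaranteeing the browser closes even if a step throws. A minimal sketch using try/finally:

const browser = await puppeteer.launch();
try {
  const page = await browser.newPage();
  await page.goto(url);
  // ...extraction steps go here...
} finally {
  // Always release the browser process, even on error
  await browser.close();
}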
6. Store the Scraping Result
Finally, after successfully extracting the data, export the result in a structured format such as JSON or CSV. In this example, we’re going to export it as JSON.
const fs = require("fs");
// Serialize the array and write it to disk
fs.writeFileSync("result.json", JSON.stringify(references));
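If you’d rather have CSV, here’s a minimal sketch with no extra dependencies; each reference is wrapped in quotes (with inner quotes doubled) so embedded commas don’t break the format:

// Quote each value and escape embedded quotes, per CSV convention
const csv = references
  .map((ref) => `"${ref.replace(/"/g, '""')}"`)
  .join("\n");
fs.writeFileSync("result.csv", csv);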
The complete function should look like this:
const puppeteer = require("puppeteer");
const fs = require("fs");

const url = "https://en.wikipedia.org/wiki/Web_scraping";

async function scrape(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const references = await page.evaluate(() => {
    return [...document.querySelectorAll(".references li")].map(
      (element) => element.innerText
    );
  });
  await browser.close();
  fs.writeFileSync("result.json", JSON.stringify(references));
}

scrape(url);
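Run the script from your project’s root directory with:
node index.js
The extracted references will be written to result.json in the same directory.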
Conclusion
Using Puppeteer with Node.js simplifies web scraping by enabling you to automate the extraction of dynamic content from websites. With a straightforward setup, you can configure Puppeteer to navigate web pages, select and extract data, and export the results in a structured format. This approach not only enhances efficiency but also provides flexibility for various scraping tasks, making it a powerful solution for gathering and managing web information.
While scraping a web page with Node.js is simple, it can be even easier with MrScraper. We provide a no-code web scraping tool designed for users who prefer a straightforward, intuitive interface. Just provide the URL of the website you want to scrape and tell ScrapeGPT, our AI, what data to extract; it’ll handle the scraping process for you.