How to Scrape a Web Page with Node.js

Web scraping with Node.js can streamline your data extraction process, especially when paired with the Puppeteer library. This guide will walk you through building a simple web scraper to handle dynamic content. If you’re interested in another approach, check out my previous blog post, "Instant Data Scraping Techniques: A Next.js Guide", which explores data scraping using Next.js. Both guides offer valuable insights into different scraping techniques.
A Step-By-Step Guide
1. Set Up the Environment
First, install Node.js and npm on your device if you haven’t already. Then initialize a new Node.js project by running this command in the terminal:
npm init
2. Install Puppeteer Library
Next, install Puppeteer, the headless-browser library we’ll use for scraping:
npm install puppeteer
3. Determine the Target URL
If you haven’t already, create a file named “index.js” in the root directory of your project; this is where the main function will live.
Decide which page you want to scrape and note its URL. In this example, we’re going to scrape “https://en.wikipedia.org/wiki/Web_scraping”.
const url = "https://en.wikipedia.org/wiki/Web_scraping";
4. Set Up the Scraping Function
Set up the main function for the scraping activity. Since we’re using Puppeteer, don’t forget to import the library.
const puppeteer = require("puppeteer");

async function scrape(url) {
  // Launch a headless browser and open a new tab
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Navigate to the target page; for heavily dynamic pages, consider
  // passing { waitUntil: "networkidle2" } as a second argument
  await page.goto(url);
}
5. Define the Data Selector
Next, define the selector of the data you want to scrape. In this example, we want the references for the Wikipedia page, which has the selector “.references li”.
const references = await page.evaluate(() => {
  // Collect the text of every list item inside the references section
  return [...document.querySelectorAll(".references li")].map(
    (element) => element.innerText
  );
});
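The callback passed to page.evaluate runs inside the browser page, where document.querySelectorAll returns a NodeList, an array-like object without a .map() method; the spread operator [...] copies it into a real array first. A minimal sketch of that pattern outside a browser, using a hypothetical stand-in object in place of a real NodeList:

```javascript
// Stand-in for a NodeList: array-like (indexed entries plus a length)
// and iterable, but not an actual Array. The values are illustrative.
const nodeList = {
  0: { innerText: "Ref one" },
  1: { innerText: "Ref two" },
  length: 2,
  [Symbol.iterator]: Array.prototype[Symbol.iterator],
};

// Same pattern as in the evaluate callback: spread into an array, then map
const texts = [...nodeList].map((element) => element.innerText);
console.log(texts); // ["Ref one", "Ref two"]
```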
After extracting the data, close the Puppeteer browser with:
await browser.close();
6. Store the Scraping Result
Finally, after successfully extracting the data, export the result in a structured format such as JSON or CSV. In this example, we’ll write it out as JSON.
const fs = require("fs");
fs.writeFileSync("result.json", JSON.stringify(references));
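If you prefer CSV, the same array can be serialized line by line. A minimal sketch, assuming references is an array of strings as extracted above; the sample values and the toCsv helper below are illustrative, not part of Puppeteer:

```javascript
// Illustrative sample of what `references` might contain
const references = [
  "Example reference, with a comma",
  'Another "quoted" reference',
];

// Escape each value per RFC 4180: wrap in quotes, double any inner quotes,
// then join one value per line
const toCsv = (rows) =>
  rows.map((value) => `"${String(value).replace(/"/g, '""')}"`).join("\n");

const csv = toCsv(references);
// fs.writeFileSync("result.csv", csv);
```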
The complete function should look like this:
const puppeteer = require("puppeteer");
const fs = require("fs");

const url = "https://en.wikipedia.org/wiki/Web_scraping";

async function scrape(url) {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url);
    // Extract the text of each item in the references list
    const references = await page.evaluate(() => {
      return [...document.querySelectorAll(".references li")].map(
        (element) => element.innerText
      );
    });
    fs.writeFileSync("result.json", JSON.stringify(references));
  } finally {
    // Close the browser even if scraping fails
    await browser.close();
  }
}

scrape(url);
Run the scraper with node index.js; the extracted references will be saved to result.json.
Conclusion
Using Puppeteer with Node.js simplifies web scraping by enabling you to automate the extraction of dynamic content from websites. With a straightforward setup, you can configure Puppeteer to navigate web pages, select and extract data, and export the results in a structured format. This approach not only enhances efficiency but also provides flexibility for various scraping tasks, making it a powerful solution for gathering and managing web information.
While scraping a web page with Node.js is easy, it can be even easier with MrScraper. We provide a no-code web scraping tool designed for users who prefer a straightforward, intuitive interface. Just provide the URL of the website you want to scrape and describe the data you need to ScrapeGPT AI, and it will handle the scraping process for you.