How to Scrape a Web Page with Node.js
Web scraping with Node.js can streamline your data extraction process, especially when paired with the Puppeteer library. This guide will walk you through building a simple web scraper to handle dynamic content. If you’re interested in another approach, check out my previous blog post, "Instant Data Scraping Techniques: A Next.js Guide", which explores data scraping using Next.js. Both guides offer valuable insights into different scraping techniques.
A Step-By-Step Guide
1. Set Up the Environment
First, install Node.js and npm on your machine if you haven't already. Then initialize a new Node.js project by running this command in your project directory:
npm init
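This command walks you through creating a package.json file interactively. If you'd rather accept the defaults and skip the prompts, you can pass the -y flag:
npm init -y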
2. Install Puppeteer Library
Next, we'll use Puppeteer as the web scraping library. Install it with the command below; this also downloads a compatible version of Chromium for Puppeteer to control:
npm install puppeteer
3. Determine the Target URL
If you haven't already, create a JavaScript file named "index.js" in the root directory of your project; this is where the main function will live.
Decide which page you want to scrape and note its URL. In this example, we're going to scrape "https://en.wikipedia.org/wiki/Web_scraping".
const url = "https://en.wikipedia.org/wiki/Web_scraping";
4. Set Up the Scraping Function
Set up the main function for the scraping activity. Since we’re using Puppeteer, don’t forget to import the library.
const puppeteer = require("puppeteer");
async function scrape(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
}
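By default, puppeteer.launch() starts a headless browser with no visible window. If you want to watch the scraper work while debugging, you can pass launch options; this is an optional sketch, not required for the rest of the guide:

// Optional: launch with a visible browser window for debugging
const browser = await puppeteer.launch({ headless: false });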
5. Define the Data Selector
Next, define the selector for the data you want to scrape. In this example, we want the references from the Wikipedia page, which can be targeted with the selector ".references li".
const references = await page.evaluate(() => {
  return [...document.querySelectorAll(".references li")].map(
    (element) => element.innerText
  );
});
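Since Puppeteer is typically used for dynamic content, it can be safer to wait for the selector to appear before extracting anything. This optional line, placed before the page.evaluate call, is one way to do that:

await page.waitForSelector(".references li");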
After extracting the data, close the Puppeteer browser with:
await browser.close();
6. Store the Scraping Result
Finally, after successfully extracting the data, export the result to a structured format such as JSON or CSV. In this example, we'll export it as JSON.
const fs = require("fs");
fs.writeFileSync("result.json", JSON.stringify(references));
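If you'd rather have CSV, a minimal sketch could look like this (assuming each reference becomes one row, with double quotes escaped for basic CSV safety):

const csv = references
  .map((ref) => `"${ref.replace(/"/g, '""')}"`)
  .join("\n");
fs.writeFileSync("result.csv", csv);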
The complete function should look like this:
const puppeteer = require("puppeteer");
const fs = require("fs");

const url = "https://en.wikipedia.org/wiki/Web_scraping";

async function scrape(url) {
  // Launch a headless browser and open a new page
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Extract the text of every reference list item
  const references = await page.evaluate(() => {
    return [...document.querySelectorAll(".references li")].map(
      (element) => element.innerText
    );
  });

  await browser.close();

  // Save the extracted data as JSON
  fs.writeFileSync("result.json", JSON.stringify(references));
}

scrape(url);
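To run the scraper, execute node index.js in your terminal; the references will be written to result.json. One refinement worth considering (a sketch, not part of the original example) is wrapping the page work in try/finally so the browser always closes, even if navigation or extraction throws:

async function scrape(url) {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url);
    const references = await page.evaluate(() =>
      [...document.querySelectorAll(".references li")].map((el) => el.innerText)
    );
    fs.writeFileSync("result.json", JSON.stringify(references));
  } finally {
    // Always close the browser, even on errors
    await browser.close();
  }
}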
Conclusion
Using Puppeteer with Node.js simplifies web scraping by enabling you to automate the extraction of dynamic content from websites. With a straightforward setup, you can configure Puppeteer to navigate web pages, select and extract data, and export the results in a structured format. This approach not only enhances efficiency but also provides flexibility for various scraping tasks, making it a powerful solution for gathering and managing web information.
While it is easy to scrape a web page with Node.js, it can be even easier with MrScraper. We provide a no-code web scraping tool designed for users who prefer a straightforward, intuitive interface. Just provide the URL of the website you want to scrape, tell ScrapeGPT AI what data you need, and it will handle the scraping process for you.