How to Scrape a Web Page with Node.js
Web scraping with Node.js can streamline your data extraction process, especially when paired with the Puppeteer library. This guide will walk you through building a simple web scraper to handle dynamic content. If you’re interested in another approach, check out my previous blog post, "Instant Data Scraping Techniques: A Next.js Guide", which explores data scraping using Next.js. Both guides offer valuable insights into different scraping techniques.
A Step-By-Step Guide
1. Set Up the Environment
First, install Node.js and npm on your device if you haven’t already. Then initialize a new Node.js project by running this command in the terminal:
npm init
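If you’d rather skip the interactive prompts, you can pass the -y flag to accept the defaults:
npm init -y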
2. Install Puppeteer Library
Next, install Puppeteer, the library we’ll use for web scraping:
npm install puppeteer
3. Determine the Target URL
If you haven’t already, create a JavaScript file named “index.js” in the root directory of your project; this is where the main function will live.
Decide which page you want to scrape and store its URL. In this example, we’re going to scrape “https://en.wikipedia.org/wiki/Web_scraping”.
const url = "https://en.wikipedia.org/wiki/Web_scraping";
4. Set Up the Scraping Function
Set up the main function for the scraping activity. Since we’re using Puppeteer, don’t forget to import the library.
const puppeteer = require("puppeteer");

async function scrape(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
}
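By default this returns as soon as the initial HTML has loaded. If the page builds part of its content with JavaScript, it can help to wait until network activity settles before extracting anything. A small sketch of the same setup using Puppeteer’s standard launch and goto options (whether you need them depends on the page you’re scraping):

const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
// Wait until there are at most 2 open network connections for 500 ms,
// which gives client-side scripts a chance to finish rendering.
await page.goto(url, { waitUntil: "networkidle2" });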
5. Define the Data Selector
Next, define the selector for the data you want to scrape. In this example, we want the references on the Wikipedia page, which match the selector “.references li”.
const references = await page.evaluate(() => {
  return [...document.querySelectorAll(".references li")].map(
    (element) => element.innerText
  );
});
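The same evaluate pattern works for any selector. As a rough sketch (the “.references li a” selector and the field names below are illustrative additions, not part of the original example), you could also collect each reference’s links together with their URLs:

const referenceLinks = await page.evaluate(() => {
  // Grab every anchor inside the references list and keep its text and href.
  return [...document.querySelectorAll(".references li a")].map((link) => ({
    text: link.innerText,
    href: link.href,
  }));
});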
After extracting the data, close the Puppeteer browser with:
await browser.close();
6. Store the Scraping Result
Finally, after extracting the data, export the result to a structured format such as JSON or CSV. In this example, we’ll save it as JSON.
const fs = require("fs");
fs.writeFileSync("result.json", JSON.stringify(references));
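Since CSV was mentioned as an alternative, here is a minimal sketch of writing the same array as a one-column CSV file (the quoting is deliberately simple and whitespace is collapsed so each value stays on one line):

// Write the references as a one-column CSV; double quotes are escaped by doubling.
const rows = references.map(
  (ref) => `"${ref.replace(/\s+/g, " ").replace(/"/g, '""')}"`
);
fs.writeFileSync("result.csv", ["reference", ...rows].join("\n"));

The rest of this guide sticks with the JSON output.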
The complete function should look like this:
const puppeteer = require("puppeteer");
const fs = require("fs");

const url = "https://en.wikipedia.org/wiki/Web_scraping";

async function scrape(url) {
  // Launch a headless browser and open a new tab.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Navigate to the target page.
  await page.goto(url);

  // Collect the text of every item in the references list.
  const references = await page.evaluate(() => {
    return [...document.querySelectorAll(".references li")].map(
      (element) => element.innerText
    );
  });

  // Close the browser and write the results to disk as JSON.
  await browser.close();
  fs.writeFileSync("result.json", JSON.stringify(references));
}
scrape(url);
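Run the script with node index.js from the project directory, and result.json should appear next to it. In practice you may also want the browser to close even when navigation or extraction fails. A hedged sketch of the same function wrapped in try/finally (using the same requires and url constant as above):

async function scrape(url) {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url);
    const references = await page.evaluate(() =>
      [...document.querySelectorAll(".references li")].map((el) => el.innerText)
    );
    fs.writeFileSync("result.json", JSON.stringify(references));
  } finally {
    // Make sure the browser is closed even if an earlier step throws.
    await browser.close();
  }
}

scrape(url).catch((error) => {
  console.error(error);
  process.exit(1);
});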
Conclusion
Using Puppeteer with Node.js simplifies web scraping by enabling you to automate the extraction of dynamic content from websites. With a straightforward setup, you can configure Puppeteer to navigate web pages, select and extract data, and export the results in a structured format. This approach not only enhances efficiency but also provides flexibility for various scraping tasks, making it a powerful solution for gathering and managing web information.
While it is easy to scrape a web page with Node.js, it can be even easier with MrScraper. We provide a no-code web scraping tool designed for users who prefer a straightforward, intuitive interface. All you need to do is provide the URL of the website you want to scrape and tell ScrapeGPT AI what data you need, and it will handle the scraping process for you.