How to Send a cURL GET Request: A Guide with Mrscraper API

In this article, we’ll explore how to make a GET request using cURL, a command-line tool commonly used for making HTTP requests. We will also demonstrate how to retrieve data from Mrscraper using its API endpoint.
What is cURL?
cURL (Client URL) is a command-line tool used for transferring data using URLs. It's a versatile tool that supports several protocols such as HTTP, FTP, and more. In the context of web scraping or APIs, cURL is frequently used to make HTTP requests to fetch or send data.
Syntax of a Basic cURL GET Request
A simple GET request using cURL looks like this:
curl https://api.example.com/resource
This command sends a GET request to the provided URL and, if successful, retrieves the data from that endpoint.
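Beyond the bare command, cURL has a few standard flags that are useful when inspecting GET responses; the URL below is the same placeholder endpoint:
# -i includes the response headers in the output
curl -i https://api.example.com/resource

# -s silences the progress meter and -o writes the body to a file
curl -s -o response.json https://api.example.com/resource

# -v prints the full request/response exchange for debugging
curl -v https://api.example.com/resource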
Key Options
- -X GET: Explicitly specifies that we’re making a GET request (although GET is the default for cURL).
- -H: Used to set headers (like Authorization or Content-Type).
- -d: Adds data in the case of a POST or PUT request.
For GET requests, you typically don’t need to send a body, but you might need to include headers like authentication tokens.
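To illustrate how these options fit together, here is a sketch against the same placeholder endpoint: a GET request that sets two headers, and, for contrast, a request that uses -d (which makes cURL send a POST by default):
# GET with headers; -X GET is optional since GET is cURL's default method
curl -X GET "https://api.example.com/resource" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Accept: application/json"

# For comparison: -d attaches a body, and cURL switches to POST automatically
curl "https://api.example.com/resource" \
  -H "Content-Type: application/json" \
  -d '{"name": "example"}'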
Example: Fetching Data with cURL
Let’s consider a simple example where we fetch data from a public API.
curl -X GET "https://api.example.com/v1/items" \
-H "Authorization: Bearer YOUR_API_TOKEN"
Here’s what’s happening:
- -X GET: Specifies the request type as GET.
- -H "Authorization: Bearer YOUR_API_TOKEN": Adds an authentication token to the request header.
The server will return data (usually in JSON format) if the request is successful.
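Since the response is usually JSON, a common pattern is to pipe cURL’s output through the jq command-line tool to pretty-print it or pull out a single field. This assumes jq is installed; the endpoint and field name are illustrative:
# Pretty-print the whole JSON response
curl -s "https://api.example.com/v1/items" \
  -H "Authorization: Bearer YOUR_API_TOKEN" | jq '.'

# Extract one field (the .items path is hypothetical)
curl -s "https://api.example.com/v1/items" \
  -H "Authorization: Bearer YOUR_API_TOKEN" | jq '.items'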
Using cURL to Get Results from Mrscraper API
Now that we’ve covered the basics of cURL and GET requests, let’s see how you can use it to fetch scraping results from the Mrscraper API.
Mrscraper API Overview
The Get a Result endpoint in the Mrscraper API allows you to retrieve scraping results by result_id. Below is a step-by-step guide on how to do this.
Step-by-Step Guide:
- Get Your API Key: First, you need an API key to authenticate your request. This can be obtained from your Mrscraper account dashboard.
- Make the GET Request: Using cURL, the request to the Get a Result endpoint would look like this:
curl -X GET "https://api.mrscraper.com/results/{result_id}" \
-H "Authorization: Bearer YOUR_API_KEY"
- Example Response: If successful, you’ll receive a response like the following in JSON format:
{
  "data": {
    "id": 1,
    "scraper_id": 88683,
    "scraping_run_id": 12,
    "scraper_name": "My scraper 1",
    "scraped_url": "https://example.com/scrape-url",
    "status": "succeeded",
    "content": "<your-extracted-data>",
    "created_at": "2022-11-20T11:54:52.000000Z",
    "updated_at": "2022-11-20T11:54:52.000000Z"
  }
}
- Handling Errors: If an error occurs, the API will return an appropriate status code and message (see the sketch after this list for one way to check the status code from the command line). For example:
{
  "error": "Result not found",
  "status": 404
}
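For scripting, it helps to surface the HTTP status code so you can tell a successful response from an error like the one above. This is a minimal sketch using cURL’s -w write-out option; the API key and result ID are placeholders you would replace with your own values:
# Placeholders: replace with your own values
API_KEY="YOUR_API_KEY"
RESULT_ID="YOUR_RESULT_ID"

# -o saves the response body to a file; -w "%{http_code}" prints only the status code
status=$(curl -s -o result.json -w "%{http_code}" \
  "https://api.mrscraper.com/results/$RESULT_ID" \
  -H "Authorization: Bearer $API_KEY")

if [ "$status" -ne 200 ]; then
  echo "Request failed with status $status" >&2
fi
Alternatively, cURL’s --fail flag makes the command exit with a non-zero status on HTTP errors, which is often enough for simple scripts.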
Important Note
- Replace {result_id} with the actual result ID you want to fetch.
- Always ensure you use your API key in the request header (a sketch of one way to manage the key follows below).
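To follow the second note without hardcoding the key in every command, one common approach is to export it as an environment variable and reference it in the header. A minimal sketch, where MRSCRAPER_API_KEY is just an example variable name:
# Set once per shell session (or in your shell profile)
export MRSCRAPER_API_KEY="YOUR_API_KEY"

# Reference the variable instead of pasting the key into the command
curl "https://api.mrscraper.com/results/YOUR_RESULT_ID" \
  -H "Authorization: Bearer $MRSCRAPER_API_KEY"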
Conclusion:
Using cURL to send GET requests is a powerful and straightforward way to interact with APIs like Mrscraper. By following the steps in this guide, you can quickly retrieve data from any compatible API endpoint.