
How to Send a cURL GET Request

Tired of gathering data inefficiently? Well, have you tried cURL? It’s a powerful and versatile command-line tool for transferring data with URLs. Its simplicity and wide range of capabilities make it a go-to solution for developers, data analysts, and businesses alike.

So, why should you care about the cURL GET request method? Simply put, it’s the cornerstone of web scraping and data gathering. It enables you to access publicly available data without the need for complex coding or expensive software. In this blog post, we’ll explain how to send cURL GET requests, so you’re ready to harness its fullest potential.

Martin Ganchev

Jan 02, 2024

7 min. read


What is a cURL GET request

cURL’s inception dates back to 1996. It’s a command-line tool and library for transferring data with URLs, and it quickly became indispensable thanks to its ease of use, its no-setup-required nature, and the fact that it comes pre-installed on many systems.

Various HTTP request methods facilitate different types of communication. Some of the key HTTP methods cURL can use include:

  • POST. Sends data to a server to create or update a resource. Often used for submitting form data or uploading files. (-X POST)
  • PUT. Applied for updating or replacing an existing resource on the server. It's similar to POST but idempotent, meaning repeating the request doesn't have additional effects. (-X PUT)
  • DELETE. Requests that the specified resource be deleted. (-X DELETE)
  • HEAD. Retrieves only the HTTP headers of a response, not the body. It's useful for getting metadata about a resource, like its size or last-modified date. (-I)
  • PATCH. Used for making partial updates to a resource. Unlike PUT, which typically replaces the whole resource, PATCH modifies only specified parts. (-X PATCH)
  • OPTIONS. Employed to describe the communication options for the target resource. It can help determine what HTTP methods are supported by the server. (-X OPTIONS)
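To make the mapping concrete, here’s a small sketch that prints the corresponding curl invocation for each method. The endpoint is a made-up placeholder, and the commands are echoed rather than executed, so nothing is actually sent:

```shell
# Print the curl invocation for each method against a made-up endpoint.
# The commands are echoed, not executed, so nothing is sent anywhere.
for method in POST PUT DELETE PATCH OPTIONS; do
  echo curl -X "$method" https://api.example.com/items/1
done
# HEAD has its own shorthand flag instead of -X:
echo curl -I https://api.example.com/items/1
```

Notice that GET needs no flag at all, which is exactly what the next section covers.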

Meanwhile, a GET request is an HTTP method primarily used to request data from a specified resource, such as a web page or an API endpoint. It’s the default method used by cURL when no other method is specified. GET requests are for retrieving information without affecting the resource's state. They’re idempotent, meaning multiple identical requests yield the same result.

Unlike its counterpart, the POST method, which sends data to a server (often for things like submitting form data), a GET request is all about receiving data. It’s the most common method used for retrieving information from the web.

Basic cURL GET request

Let's try a data retrieval exercise using cURL to understand how a GET request works. First, we need to get cURL on our operating system.

If you’re on macOS or Linux, cURL is most likely pre-installed. If not, you can easily install it on macOS by opening Terminal, installing Homebrew, and then typing `brew install curl`. On Linux, you can use the command `sudo apt-get install curl`. Finally, Windows 10 and later versions have cURL included by default. On earlier versions, you can download cURL from its official website.

Moving on to firing off a cURL GET request, we’ll use a cURL command to go to a Smartproxy webpage that can tell you your current IP address. The command is this simple:

curl https://ip.smartproxy.com/

In this example, “curl” is the command that initiates the cURL program, while the URL https://ip.smartproxy.com/ is the resource you’re requesting data from. Even though we’re not specifying any method with the -X option, cURL defaults to a GET request. Type the command and hit Enter in your Command Prompt (Windows) or Terminal (macOS/Linux). cURL sends a GET request to the specified URL, the server processes it, and the response body is printed, in this case, your current IP address.

By sending a cURL GET request to https://ip.smartproxy.com/, you retrieve information about your IP address because that’s all that this webpage does. If you select a different target, you’ll receive its raw HTML content.
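You can also watch the default GET behavior without any network access, because cURL speaks the file:// protocol too. A minimal sketch (the file path is arbitrary):

```shell
# curl defaults to GET here exactly as it does for http(s) URLs.
printf 'hello from cURL\n' > /tmp/curl-get-demo.txt
curl -s file:///tmp/curl-get-demo.txt   # prints "hello from cURL"
```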

However, be aware that a straightforward cURL command may not always work seamlessly with all websites. This can be due to various factors such as dynamic content, client-side rendering, restrictions on scraping, and requirements for authentication and cookies.

Empower cURL GET requests with proxies

While cURL is a powerful tool for data retrieval, making too many cURL GET requests can lead to issues such as being blocked by websites, particularly those with robust anti-scraping measures. This is where residential proxies can lend a helping hand.

Residential proxies provide IP addresses that belong to real household devices, enabling you to bypass anti-scraping measures, access geo-restricted content, and keep your own IP address hidden.

At Smartproxy, we offer 55M+ residential proxies from 195+ locations with an average response time of <0.61 seconds, a 99.47% success rate, and HTTP(S) or SOCKS5 compatibility, perfect for complementing your cURL-based data scraping efforts.

Rely on our help documentation for a comprehensive walkthrough on setting up and implementing our proxies, but it can be as easy as generating a proxy list on our dashboard and using it like this:

curl -U "username:password" -x "us.smartproxy.com:10000" "https://ip.smartproxy.com/json"

Here, the -U option passes your proxy user’s credentials for authentication in username:password format. The -x option specifies the proxy server: we’ve used a US endpoint, but you can customize this and other parameters on our dashboard.
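As a side note, cURL also accepts the credentials embedded directly in the proxy URL. The sketch below only echoes the command (the credentials are placeholders), but the syntax is valid for a real run:

```shell
# Same request with credentials embedded in the proxy URL.
# username:password are placeholders; the command is echoed, not run.
echo curl -x "http://username:password@us.smartproxy.com:10000" "https://ip.smartproxy.com/json"
```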

Alternatively, you can whitelist your IP address and skip the credentials: use the -x option followed by the proxy server address and port, then the target website:

curl -x "us.smartproxy.com:10000" "https://ip.smartproxy.com/json"

Various cURL GET request examples

Now that we’ve tried a basic cURL GET request, let's explore other examples to showcase how you can leverage this tool for different scenarios.

1. Receive data in JSON

Many APIs and web services can return data in JSON format. You can request JSON by setting the Accept header with the -H option.

curl -H "Accept: application/json" https://api.example.com/data

2. Get only HTTP headers

Sometimes, you might only need the headers of a response, not the content. The HTTP headers can provide interesting information like content type, server, cookies, caching policies, etc. You can use the -I option for this.

curl -I https://example.com
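You can try -I offline as well: for file:// URLs, cURL synthesizes a few header lines itself (the path below is arbitrary):

```shell
# With -I, curl's file:// handler synthesizes Content-Length,
# Accept-ranges, and Last-Modified headers for a local file.
printf 'twelve bytes' > /tmp/curl-head-demo.txt
curl -s -I file:///tmp/curl-head-demo.txt
```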

3. Follow redirects

It’s common for web pages to redirect to other pages. Manually following these can be cumbersome. Use the -L option to follow redirects automatically.

curl -L https://example.com

4. Send cookies with your cURL GET request

Cookies are used by websites for tracking sessions, maintaining user state, and storing preferences. To send custom cookies with your request, use the -b option.

curl -b "name=value" https://example.com

5. Use the GET request with parameters

GET requests can include parameters in the URL. The example below sends a GET request to https://example.com/search with the query parameter query set to curl. It’s good practice to wrap the URL in quotes so your shell doesn’t interpret special characters like ? and &.

curl "https://example.com/search?query=curl"
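If a parameter contains spaces or other special characters, hand-encoding the URL gets error-prone. cURL can do the encoding itself: the -G flag moves --data-urlencode fields into the URL as query parameters. The command is echoed here as a dry run against a placeholder endpoint:

```shell
# Let curl build and percent-encode the query string itself:
# -G appends --data-urlencode fields to the URL, e.g. ?query=web%20scraping.
# Echoed as a dry run; example.com is a placeholder target.
echo 'curl -G --data-urlencode "query=web scraping" https://example.com/search'
```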

6. Save the output to a file

Instead of displaying the output in the Command Prompt or Terminal, you might want to save it to a file for further analysis or archival. To save the output of your request, use the -o option followed by the desired filename and the URL of the target website.

curl -o filename.html https://example.com
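Here’s a self-contained sketch of -o in action; it uses the file:// protocol and arbitrary /tmp paths so it runs without network access:

```shell
# Save the response body to a file instead of printing it.
printf '<html>demo page</html>' > /tmp/curl-demo-src.html
curl -s -o /tmp/curl-demo-copy.html file:///tmp/curl-demo-src.html
cat /tmp/curl-demo-copy.html   # prints "<html>demo page</html>"
```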

Each of these examples demonstrates the flexibility and power of cURL in all sorts of data retrieval scenarios. Whether you're dealing with APIs or dynamic websites or need to manage cookies and redirects, cURL offers a command-line solution.

cURL option shorthand

The following table presents common cURL options, especially useful for data collection and working with proxies. For a comprehensive list of all available cURL options and their detailed descriptions, visit the official cURL manual page.

  • -d. Sends the specified data in a POST request to the HTTP server.
  • -H (long form: --header). Adds a header to the request. For example, -H "Content-Type: application/json" sets the content type header.
  • -I. Fetches the HTTP headers only, without the body.
  • -L. Follows HTTP redirects, useful for pages that have moved.
  • -o. Saves the output to a file instead of displaying it in the console.
  • -X. Specifies a custom request method (e.g., GET, POST, PUT, DELETE).
  • -x. Specifies a proxy server through which the cURL request is sent.
  • -U. Passes proxy user credentials for authentication. Format: username:password.
  • -A. Sets the User-Agent header to simulate different browsers/devices.
  • -b. Sends cookie data along with the request.
  • -k. Allows connections to SSL sites without certificates or with invalid certificates.
  • -v. Provides verbose output. Useful for debugging.
  • --data-urlencode. URL-encodes the given data for safe transmission in a URL.
  • --compressed. Requests a compressed response using one of the algorithms libcurl supports.
  • -F. Sends multipart/form-data. Useful for file uploads.
  • -T. Uploads a file to a specified remote server.
  • --connect-timeout. Sets a timeout for the connection to the server. Useful for preventing your scraping script from hanging indefinitely on a slow or unresponsive server.
  • --limit-rate. Limits the maximum transfer rate for data sent and received. Useful to avoid triggering anti-scraping measures.

Summing up

Mastering cURL GET requests is an essential skill for any scraping project. With it, you can fetch simple web content, dive into API interactions, or explore advanced web scraping. Remember, while cURL can handle a wide array of tasks, it’s not always a one-size-fits-all solution, especially for complex websites.

Complementing cURL with proxies can greatly enhance your web scraping capabilities. And you’re in the best place for proxies! We’ve got industry-leading residential proxies with a 14-day money-back option and award-winning 24/7 tech support, from $4/GB.

About the author

Martin Ganchev

VP Enterprise Partnerships

Martin, aka the driving force behind our business expansion, is extremely passionate about exploring fresh opportunities, fostering lasting relationships in the proxy market, and, of course, sharing his insights with you.

Linkedin: https://www.linkedin.com/in/martinganchev/

All information on Smartproxy Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Smartproxy Blog or any third-party websites that may be linked therein.



Frequently asked questions

What is a cURL command?

A cURL command is a tool used in the command-line to transfer data to or from a server, supporting various protocols like HTTP, HTTPS, FTP, and more.

How to use cURL command?

To use the cURL command, you first need to open a command-line interface, such as Terminal on macOS/Linux or Command Prompt on Windows. Then, type "curl", followed by various options and a URL. For example, "curl -X GET http://example.com".

How do I use cURL for a GET request?

To use cURL for a GET request, type "curl" followed by the URL. For example, "curl https://example.com". This will fetch data from the specified URL.

How do I get data from cURL?

To get data using cURL, execute a command like "curl https://example.com". The data from the specified URL will be displayed in your command-line interface.
