
How to Send a cURL GET Request

Tired of gathering data inefficiently? Well, have you tried cURL? It’s a powerful and versatile command-line tool for transferring data with URLs. Its simplicity and wide range of capabilities make it a go-to solution for developers, data analysts, and businesses alike. Simply put, the cURL GET request method is the cornerstone of web scraping and data gathering. It enables you to access publicly available data without the need for complex coding or expensive software. In this blog post, we’ll explain how to send cURL GET requests, so you’re ready to harness its fullest potential.

Dominykas Niaura

Jan 02, 2024

7 min read


What is a cURL GET request?

cURL’s inception dates back to 1996. A command-line tool (backed by the libcurl library) for transferring data with URLs, it quickly became indispensable thanks to its user-friendliness, convenience, no-setup-required nature, and the fact that it comes pre-installed on many systems.

Various HTTP request methods facilitate different types of communication. Some of the key HTTP methods cURL can use include:

  • POST. Sends data to a server to create or update a resource. Often used for submitting form data or uploading files. (-X POST)
  • PUT. Applied for updating or replacing an existing resource on the server. It's similar to POST but idempotent, meaning repeating the request doesn't have additional effects. (-X PUT)
  • DELETE. Requests that the specified resource be deleted. (-X DELETE)
  • HEAD. Retrieves only the HTTP headers of a response, not the body. It's useful for getting metadata about a resource, like its size or last-modified date. (-I)
  • PATCH. Used for making partial updates to a resource. Unlike PUT, which typically replaces the whole resource, PATCH modifies only specified parts. (-X PATCH)
  • OPTIONS. Employed to describe the communication options for the target resource. It can help determine what HTTP methods are supported by the server. (-X OPTIONS)
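
Here’s a quick sketch of how a few of these methods look as actual commands, using https://httpbin.org – a public request-testing service that echoes requests back – as a stand-in target (the -d option supplies the data being sent):

curl -X POST -d "name=value" https://httpbin.org/post
curl -X PUT -d "name=value" https://httpbin.org/put
curl -X DELETE https://httpbin.org/delete
curl -I https://httpbin.org/get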

Meanwhile, a GET request is an HTTP method primarily used to request data from a specified resource, such as a web page or an API endpoint. It’s the default method used by cURL when no other method is specified. GET requests are for retrieving information without affecting the resource's state. They’re idempotent, meaning multiple identical requests yield the same result.

Unlike its counterpart, the POST method, which sends data to a server (often for things like submitting form data), a GET request is all about receiving data. It’s the most common method used for retrieving information from the web.

Basic cURL GET request

Let's try a data retrieval exercise using cURL to understand how a GET request works. First, we need to get cURL on our operating system.

If you’re on macOS or Linux, cURL is most likely pre-installed. If not, you can easily install it on macOS by opening Terminal, installing Homebrew, and then typing `brew install curl`. On Linux, you can use the command `sudo apt-get install curl`. Finally, Windows 10 and later versions have cURL included by default. On earlier versions, you can download cURL from its official website.
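
Whichever route you take, you can confirm that cURL is ready by checking its version:

curl --version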

Now let’s fire off an actual cURL GET request. We’ll target a Smartproxy webpage that tells you your current IP address. The command is this simple:

curl https://ip.smartproxy.com/

In this example, curl is the command that initiates the cURL program, while the URL https://ip.smartproxy.com/ is the resource you’re requesting data from. Even though we’re not specifying any method with the -X option, cURL defaults to performing a GET request. Type the command and hit Enter in your Command Prompt (Windows) or Terminal (macOS/Linux), and cURL sends a GET request to the specified URL. The server then processes the request and returns the content of the page – in this case, your current IP address.

By sending a cURL GET request to https://ip.smartproxy.com/, you retrieve information about your IP address because that’s all this webpage does. If you select a different target, you’ll receive its raw HTML content.

However, be aware that a straightforward cURL command may not always work seamlessly with all websites. This can be due to various factors such as dynamic content, client-side rendering, restrictions on scraping, and requirements for authentication and cookies.
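
In those cases, the -v (verbose) option is a handy first diagnostic: it prints the full exchange of request and response headers so you can see exactly how the server is responding. For example:

curl -v https://ip.smartproxy.com/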

Empower cURL GET requests with proxies

While cURL is a powerful tool for data retrieval, making too many cURL GET requests can lead to issues such as being blocked by websites, particularly those with robust anti-scraping measures. This is where residential proxies can lend a helping hand.

Residential proxies provide IP addresses that belong to household devices, enabling you to bypass anti-scraping measures, access geo-restricted content, and keep your real IP address hidden.

At Smartproxy, we offer 55M+ residential proxies from 195+ locations with an average response time of <0.5 seconds, a 99.68% success rate, and HTTP(S) or SOCKS5 compatibility, perfect for complementing your cURL-based data scraping efforts.

Rely on our help documentation for a comprehensive walkthrough on setting up and implementing our proxies, but it can be as easy as generating a proxy list on our dashboard and using it like this:

curl -U "username:password" -x "us.smartproxy.com:10000" "https://ip.smartproxy.com/json"

Here, the -U option passes your proxy user’s credentials for authentication in the username:password format. The -x option then specifies the proxy server: we’ve used a US location, but you can customize this and other parameters on our dashboard.

Alternatively, you can simply whitelist your IP and use the -x option alone, followed by the address of the proxy server, port, and target website:

curl -x "us.smartproxy.com:10000" "https://ip.smartproxy.com/json"
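
The proxy options combine freely with the rest of cURL’s flags. As a sketch (with example.com standing in for your actual target), this request routes through the same US endpoint, follows any redirects, and saves the response to a file:

curl -x "us.smartproxy.com:10000" -L -o output.html "https://example.com"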

Various cURL GET request examples

Now that we’ve tried a basic cURL GET request, let's explore other examples to showcase how you can leverage this tool for different scenarios.

1. Receive data in JSON

Many APIs and web services can return data in JSON format. You can ask for JSON by setting the Accept header with the -H (header) option.

curl -H "Accept: application/json" https://api.example.com/data
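
For a live example, the Smartproxy IP checker we used earlier also serves JSON:

curl -H "Accept: application/json" https://ip.smartproxy.com/json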

2. Get only HTTP headers

Sometimes, you might only need the headers of a response, not the content. The HTTP headers can provide interesting information like content type, server, cookies, caching policies, etc. You can use the -I option for this.

curl -I https://example.com

3. Follow redirects

It’s common for web pages to redirect to other pages. Manually following these can be cumbersome. Use the -L option to follow redirects automatically.

curl -L https://example.com
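
By default, cURL gives up after 50 redirects to avoid infinite loops. If you’d like a stricter cap, add the --max-redirs option:

curl -L --max-redirs 5 https://example.com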

4. Send cookies with your cURL GET request

Cookies are used by websites for tracking sessions, maintaining user state, and storing preferences. To send custom cookies with your request, use the -b option.

curl -b "name=value" https://example.com
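
The -b option also accepts a file, and its counterpart -c (the cookie jar) writes whatever cookies the server sets into one. Together they let you persist a session across requests – a sketch, with the /login and /account paths as placeholders:

curl -c cookies.txt https://example.com/login
curl -b cookies.txt https://example.com/account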

5. Use the GET request with parameters

GET requests can include parameters in the URL. The following example sends a GET request to https://example.com/search with the query parameter query set to curl.

curl "https://example.com/search?query=curl"
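
It’s good practice to wrap such URLs in quotes, since characters like ? and & have special meanings in most shells. And if a parameter value contains spaces or other special characters, you can let cURL build the query string for you: the -G option appends --data-urlencode values to the URL as an encoded query. For instance:

curl -G --data-urlencode "query=curl get request" "https://example.com/search"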

6. Save the output to a file

Instead of displaying the output in the Command Prompt or Terminal, you might want to save it to a file for further analysis or archival. To do that, use the -o option followed by the output filename and the URL of the target website.

curl -o filename.html https://example.com
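
A related uppercase option, -O, saves the file under its remote name instead of one you choose:

curl -O https://example.com/filename.html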

Each of these examples demonstrates the flexibility and power of cURL in all sorts of data retrieval scenarios. Whether you're dealing with APIs or dynamic websites or need to manage cookies and redirects, cURL offers a command-line solution.

cURL option shorthand

The following list presents common cURL options, especially useful for data collection and working with proxies. For a comprehensive list of all available cURL options and their detailed descriptions, visit the official cURL manual page.

  • -d. Sends the specified data in a POST request to the HTTP server.
  • -H (--header). Adds a header to the request. For example, -H "Content-Type: application/json" sets the content type header.
  • -I. Fetches the HTTP headers only, without the body.
  • -L. Follows HTTP redirects, useful for pages that have moved.
  • -o. Saves the output to a file instead of displaying it in the console.
  • -X. Specifies a custom request method (e.g., GET, POST, PUT, DELETE).
  • -x. Specifies a proxy server through which the cURL request is sent.
  • -U. Passes proxy user credentials for authentication in the username:password format.
  • -A. Sets the User-Agent header to simulate different browsers or devices.
  • -b. Sends cookie data along with the request.
  • -k. Allows connections to SSL sites without certificates or with invalid certificates.
  • -v. Provides verbose output, useful for debugging.
  • --data-urlencode. URL-encodes the given data for safe transmission in a URL.
  • --compressed. Requests a compressed response using one of the algorithms libcurl supports.
  • -F. Sends multipart/form-data, useful for file uploads.
  • -T. Uploads a file to a specified remote server.
  • --connect-timeout. Sets a timeout for the connection to the server, preventing your scraping script from hanging indefinitely on a slow or unresponsive server.
  • --limit-rate. Limits the maximum transfer rate for data sent and received, useful to avoid triggering anti-scraping measures.
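
To tie several of these together, here’s a sketch of a GET request that presents a browser-like User-Agent, follows redirects, asks for a compressed response, gives up if connecting takes longer than 10 seconds, and writes the result to a file (example.com stands in for your actual target):

curl -A "Mozilla/5.0" -L --compressed --connect-timeout 10 -o page.html "https://example.com"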



Summing up

Mastering cURL GET requests is an essential skill for any scraping project. With it, you can fetch simple web content, dive into API interactions, or explore advanced web scraping. Remember, while cURL can handle a wide array of tasks, it’s not always a one-size-fits-all solution, especially for complex websites.

Complementing cURL with proxies can greatly enhance your web scraping capabilities. And you’re in the best place for proxies! We’ve got industry-leading residential proxies with a 14-day money-back option and award-winning 24/7 tech support, from $4/GB.

About the author

Dominykas Niaura

Copywriter

As a fan of digital innovation and data intelligence, Dominykas delights in explaining our products’ benefits, demonstrating their use cases, and demystifying complex tech topics for everyday readers.



Frequently asked questions

What is a cURL command?

A cURL command is a tool used in the command-line to transfer data to or from a server, supporting various protocols like HTTP, HTTPS, FTP, and more.

How do I use the cURL command?

Type curl in your Command Prompt or Terminal, followed by any options and the URL you want to interact with. For example, curl -I https://example.com fetches only that page’s HTTP headers.

How do I use cURL for a GET request?

Simply run curl followed by the target URL, such as curl https://ip.smartproxy.com/. GET is cURL’s default method, so no extra option is needed.

How do I get data from cURL?

By default, cURL prints the server’s response straight to your terminal. To keep it for later, save it to a file with the -o option, e.g., curl -o filename.html https://example.com.

