Web Scraping Proxies Made Simple

Get started scraping the web in as little as 2 minutes


Getting Started

Using ScrapeNetwork is easy. Just send the URL you would like to scrape to the API along with your API key and the API will return the HTML response from the URL you want to scrape.

API Key & Authentication

ScrapeNetwork uses API keys to authenticate requests. To use the API you need to sign up for an account and include your unique API key in every request.

If you haven’t signed up for an account yet, you’ll need to do so before you can make your first request.

Making Requests

You can use the API to scrape web pages, API endpoints, images, documents, PDFs, or other files just as you would any other URL. Note: there is a 2MB limit per request.

There are four ways in which you can send GET requests to ScrapeNetwork:

  • Via our Async Scraper service http://async.scrapenetwork.com
  • Via our API endpoint http://api.scrapenetwork.com
  • Via one of our SDKs (only available for some programming languages)
  • Via our proxy port http://scrapenetwork:APIKEY@proxy-server.scrapenetwork.com:8001

Choose whichever option best suits your scraping requirements.

Important note: regardless of how you invoke the service, we highly recommend you set a 60-second timeout in your application to get the best possible success rates, especially for hard-to-scrape domains.

Method #1: Requests to the API Endpoint

ScrapeNetwork exposes a single API endpoint for you to send GET requests. Simply send a GET request to http://api.scrapenetwork.com with two query string parameters and the API will return the HTML response for that URL:

  • api_key, which contains your API key, and
  • url, which contains the URL you would like to scrape

You should format your requests to the API endpoint as follows:

    curl "http://api.scrapenetwork.com?api_key=APIKEY&url=http://httpbin.org/ip"

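If you are scripting requests rather than using curl, most languages can assemble the query string for you. Below is a minimal Python sketch; the build_request_url helper is illustrative, not part of an official SDK:

```python
from urllib.parse import urlencode

API_ENDPOINT = "http://api.scrapenetwork.com/"

def build_request_url(api_key, target_url, **params):
    """Assemble an API request URL from the two required parameters
    plus any optional feature flags (render, country_code, ...)."""
    query = {"api_key": api_key, "url": target_url, **params}
    return API_ENDPOINT + "?" + urlencode(query)

# Equivalent to the curl example above (APIKEY is a placeholder):
print(build_request_url("APIKEY", "http://httpbin.org/ip"))
```

Note that urlencode also percent-encodes the target URL, which matters once the URL you want to scrape contains its own query string.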
To enable other API functionality when sending a request to the API endpoint simply add the appropriate query parameters to the end of the ScrapeNetwork URL.

For example, if you want to enable Javascript rendering with a request, then add render=true to the request:

    curl "http://api.scrapenetwork.com/?api_key=APIKEY&url=http://httpbin.org/ip&render=true"

To use two or more parameters, simply separate them with an ampersand (&).

    curl "http://api.scrapenetwork.com?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us"

Method #2: Send Requests to the Proxy Port

To simplify implementation for users with existing proxy pools, we offer a proxy front-end to the API. The proxy will take your requests and pass them through to the API which will take care of proxy rotation, captchas, and retries.

The proxy mode is a light front-end for the API and has all the same functionality and performance as sending requests to the API endpoint.

The username for the proxy is scrapenetwork and the password is your API key.

    curl -x "http://scrapenetwork:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

Note: So that we can properly direct your requests through the API, your code must be configured to not verify SSL certificates.

To enable extra functionality while using the API in proxy mode, you can pass parameters to the API by adding them to the username, separated by periods.

For example, if you want to enable Javascript rendering with a request, the username would be scrapenetwork.render=true

    curl -x "http://scrapenetwork.render=true:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

Multiple parameters can be included by separating them with periods; for example:

    curl -x "http://scrapenetwork.render=true.country_code=us:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

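The username packing rule is easy to automate. The small Python helper below is an illustrative sketch, not part of an official SDK:

```python
def build_proxy_url(api_key, **params):
    """Build a proxy-mode URL, packing optional API parameters into the
    username separated by periods."""
    username = ".".join(["scrapenetwork"] + [f"{k}={v}" for k, v in params.items()])
    return f"http://{username}:{api_key}@proxy-server.scrapenetwork.com:8001"

# No parameters -> plain username (APIKEY is a placeholder):
print(build_proxy_url("APIKEY"))
# JavaScript rendering plus US geotargeting:
print(build_proxy_url("APIKEY", render="true", country_code="us"))
```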
API Status Codes

The API will return a specific status code after every request depending on whether the request was successful, failed or some other error occurred. ScrapeNetwork will retry failed requests for up to 60 seconds to try and get a successful response from the target URL before responding with a 500 error indicating a failed request.

Note: To avoid timing out your request before the API has had a chance to complete all retries, remember to set your timeout to 60 seconds.

In cases where a request fails after 60 seconds of retrying, you will not be charged for the unsuccessful request (you are only charged for successful requests, 200 and 404 status codes).

Errors will inevitably occur, so make sure you catch them in your code. You can configure your code to immediately retry the request, and in most cases it will succeed. If a request is consistently failing, check your request to make sure it is configured correctly. If you are consistently getting ban messages from an anti-bot, create a ticket with our support team and we will try to bypass this anti-bot for you.

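The retry advice above can be sketched as follows. Here `fetch` is a hypothetical placeholder for whatever HTTP call your client makes (with its timeout set to 60 seconds, per the note above); only retryable status codes trigger another attempt:

```python
import time

RETRYABLE = {500, 429}  # failed request / concurrency limit exceeded

def fetch_with_retries(fetch, max_retries=3, backoff=1.0):
    """Call fetch() (your HTTP request, returning (status, body)) and
    retry on retryable status codes. Illustrative sketch only."""
    for attempt in range(max_retries + 1):
        status, body = fetch()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_retries:
            time.sleep(backoff * (attempt + 1))  # simple linear backoff
    return status, body
```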
If you receive a successful 200 status code response from the API but the response contains a CAPTCHA, please contact our support team and they will add it to our CAPTCHA detection database. Once included in our CAPTCHA database the API will treat it as a ban in the future and automatically retry the request.

Below are the possible status codes you will receive:

Status Code  Details
200          Successful response.
403          You have used up all your API credits.
404          Page requested does not exist.
410          Page requested is no longer available.
429          You are sending requests too fast and exceeding your concurrency limit.
500          After retrying for 60 seconds, the API was unable to receive a successful response.

Credits and Requests

Every time you make a request, you consume credits. The credits you have available are defined by the plan you are currently using. The amount of credits you consume depends on the domain and parameters you add to your request.

Domains

We have created custom scrapers to target certain sites. If you scrape one of these domains, the custom scraper is activated and the credit cost changes.

Category           Domain Examples                   Credit Cost
SERP               Google, Bing                      25
E-Commerce         Amazon, Allegro                   5
Protected Domains  LinkedIn, G2, social media sites  30

Geotargeting is included in these credit costs.
If your domain does not fit in any category in the above list, it will cost 1 credit if no additional parameters are added.

Parameters

According to your needs, you may want to access different features on our platform.

premium=true – requests cost 10 credits

render=true – requests cost 10 credits

premium=true + render=true – requests cost 25 credits

ultra_premium=true – requests cost 30 credits

ultra_premium=true + render=true – requests cost 75 credits

In any requests, with or without these parameters, we will only charge for successful requests (200 and 404 status codes). If you run out of credits sooner than planned, you can renew your subscription early as explained in the next section – Dashboard.

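The parameter pricing above reduces to a small lookup. The function below is an illustrative sketch of the published per-request costs, not an official billing formula; it assumes an uncategorized domain (the SERP, E-Commerce, and Protected Domain categories change the base cost):

```python
def credit_cost(premium=False, render=False, ultra_premium=False):
    """Credits charged for one successful request, per the table above."""
    if ultra_premium:
        return 75 if render else 30   # ultra_premium=true (+ render=true)
    if premium:
        return 25 if render else 10   # premium=true (+ render=true)
    if render:
        return 10                     # render=true alone
    return 1  # base cost: uncategorized domain, no extra parameters

print(credit_cost(render=True))                 # 10
print(credit_cost(premium=True, render=True))   # 25
```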
Dashboard

The Dashboard shows you all the information so you can keep track of your usage. It is divided into Usage, Billing, Documentation, and links to contact us.

Usage
Here you will find your API key, sample codes, monitoring tools, and usage statistics such as credits used, concurrency, failed requests, your current plan, and the end date of your billing cycle.

Billing
In this section, you are able to see your current plan, the end date of the current billing cycle, as well as subscribe to another plan. This is also the right place to set or update your billing details, payment method, view your invoices, manage your subscription, or cancel it.

Here you can also renew your subscription early, in case you run out of API credits sooner than planned. This option will let you renew your subscription at an earlier date than planned, charging you for the subscription price and resetting your credits. If you find yourself using this option regularly, please reach out to us so we can find a plan that can accommodate the number of credits you need.

Documentation
This will direct you to our documentation – exactly where you are right now.

Contact Support
Direct link to reach out to our technical support team.

Contact Sales
Direct link to our sales team, for any subscription or billing-related inquiries.

Advanced API Functionality

Customize API Functionality

ScrapeNetwork enables you to customize the API’s functionality by adding additional parameters to your requests. The API will accept the following parameters:

Parameter       Description

render          Activate JavaScript rendering by setting render=true in your request. The API will automatically render the JavaScript on the page and return the HTML response after the JavaScript has been rendered. Requests using this parameter cost 10 API credits, or 75 if used in combination with ultra_premium=true.

country_code    Activate country geotargeting by setting country_code=us, for example, to use US proxies. This parameter does not increase the cost of the API request.

premium         Activate premium residential and mobile IPs by setting premium=true. Using premium proxies costs 10 API credits, or 25 API credits if used in combination with JavaScript rendering (render=true).

session_number  Reuse the same proxy by setting session_number=123, for example. This parameter does not increase the cost of the API request.

keep_headers    Use your own custom headers by setting keep_headers=true along with sending your own headers to the API. This parameter does not increase the cost of the API request.

device_type     Set your requests to use mobile or desktop user agents by setting device_type=desktop or device_type=mobile. This parameter does not increase the cost of the API request.

autoparse       Activate auto parsing for select websites by setting autoparse=true. The API will parse the data on the page and return it in JSON format. This parameter does not increase the cost of the API request.

ultra_premium   Activate our advanced bypass mechanisms by setting ultra_premium=true. Requests using this parameter cost 30 API credits, or 75 if used in combination with JavaScript rendering.

Custom Headers

If you would like to use your own custom headers (user agents, cookies, etc.) when making a request to the website, simply set keep_headers=true and send the API the headers you want to use. The API will then use these headers when sending requests to the website.

Note: Only use this feature if you need to send custom headers to retrieve specific results from the website. Within the API we have a sophisticated header management system designed to increase success rates and performance on difficult sites. When you send your own custom headers you override our header system, which oftentimes lowers your success rates. Unless you absolutely need to send custom headers to get the data you need, we advise that you don’t use this functionality.

If you need to get results for mobile devices, use the device_type parameter to set the user-agent header for your request, instead of setting your own.

    curl --header "X-MyHeader: 123" "http://api.scrapenetwork.com/?api_key=APIKEY&url=http://httpbin.org/anything&keep_headers=true"

    curl --header "X-MyHeader: 123" -x "http://scrapenetwork.keep_headers=true:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/anything"

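In an HTTP client the same request looks like the sketch below; the X-MyHeader header is just an example, APIKEY is a placeholder, and no request is actually sent here:

```python
from urllib.parse import urlencode

# keep_headers=true tells the API to forward the headers you send.
params = {
    "api_key": "APIKEY",
    "url": "http://httpbin.org/anything",
    "keep_headers": "true",
}
request_url = "http://api.scrapenetwork.com/?" + urlencode(params)

# These headers would be forwarded to the target website:
headers = {"X-MyHeader": "123"}

print(request_url)
```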
Sessions

To reuse the same proxy for multiple requests, simply use the session_number parameter, setting it to a unique integer for every session you want to maintain (e.g. session_number=123). This allows you to keep using the same proxy for each request with that session number. To create a new session, simply send a new integer in the session_number parameter. The session value can be any integer. Sessions expire 15 minutes after the last usage.

    curl "http://api.scrapenetwork.com/?api_key=APIKEY&url=http://httpbin.org/ip&session_number=123"

    curl -x "http://scrapenetwork.session_number=123:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

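A common pattern is to pick one random session number per crawl, so every request in that crawl goes through the same proxy. The helper below is an illustrative sketch:

```python
import random
from urllib.parse import urlencode

def session_params(api_key, target_url, session_number):
    """Attach a session_number so every request carrying the same number
    reuses the same proxy (sessions expire 15 minutes after last use)."""
    return urlencode({"api_key": api_key, "url": target_url,
                      "session_number": session_number})

# One session number per crawl; any integer works.
session = random.randint(1, 1_000_000)
print("http://api.scrapenetwork.com/?" +
      session_params("APIKEY", "http://httpbin.org/ip", session))
```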
Geographic Location

Certain websites (typically e-commerce stores and search engines) display different data to different users based on the geolocation of the IP used to make the request to the website. In cases like these, you can use the API’s geotargeting functionality to easily use proxies from the required country to retrieve the correct data from the website.

To control the geolocation of the IP used to make the request, simply set the country_code parameter to the country you want the proxy to be from and the API will automatically use the correct IP for that request.

For example: to ensure your requests come from the United States, set the country_code parameter to country_code=us.

Business and Enterprise Plan users can geotarget their requests to the following 13 regions (Hobby and Startup Plans can only use US and EU geotargeting) by using the country_code in their request:

Country Code  Region          Plans
us            United States   Hobby Plan and higher.
eu            Europe          Hobby Plan and higher.
ca            Canada          Business Plan and higher.
uk            United Kingdom  Business Plan and higher.
ge            Germany         Business Plan and higher.
fr            France          Business Plan and higher.
es            Spain           Business Plan and higher.
br            Brazil          Business Plan and higher.
mx            Mexico          Business Plan and higher.
in            India           Business Plan and higher.
jp            Japan           Business Plan and higher.
cn            China           Business Plan and higher.
au            Australia       Business Plan and higher.

Geotargeting “eu” will use IPs from any European country.
Other countries are available to Enterprise customers upon request.

    curl "http://api.scrapenetwork.com/?api_key=APIKEY&url=http://httpbin.org/ip&country_code=us"

    curl -x "http://scrapenetwork.country_code=us:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

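If you want to fail fast on a country code your plan does not support, the availability rules in the table above reduce to a small check. This validator is an illustrative sketch, not an official client feature:

```python
ALL_PLANS = {"us", "eu"}  # available on Hobby Plan and higher
BUSINESS_PLUS = {"ca", "uk", "ge", "fr", "es", "br", "mx",
                 "in", "jp", "cn", "au"}  # Business Plan and higher

def country_code_allowed(code, business_or_higher):
    """Check whether a country_code is available on the current plan."""
    if code in ALL_PLANS:
        return True
    return business_or_higher and code in BUSINESS_PLUS

print(country_code_allowed("us", business_or_higher=False))  # True
print(country_code_allowed("jp", business_or_higher=False))  # False
```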
Premium Residential/Mobile Proxy Pools

Our standard proxy pools include millions of proxies from over a dozen ISPs and should be sufficient for the vast majority of scraping jobs. However, for a few particularly difficult to scrape sites, we also maintain a private internal pool of residential and mobile IPs. This pool is only available to users on the Business plan or higher.

Requests through our premium residential and mobile pool are charged at 10 times the normal rate (every successful request will count as 10 API credits against your monthly limit). Each request that uses both javascript rendering and our premium proxy pools will be charged at 25 times the normal rate (every successful request will count as 25 API credits against your monthly limit). To send a request through our premium proxy pool, please set the premium query parameter to premium=true.

We also have a higher premium level that you can use for really tough targets, such as LinkedIn. You can access these pools by adding the ultra_premium=true query parameter. These requests will use 30 API credits against your monthly limit, or 75 if used together with rendering. Please note, this is only available on our paid plans.

    curl "http://api.scrapenetwork.com/?api_key=APIKEY&url=http://httpbin.org/ip&premium=true"

    curl -x "http://scrapenetwork.premium=true:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

Device Type

If your use case requires you to exclusively use either desktop or mobile user agents in the headers it sends to the website then you can use the device_type parameter.

  • Set device_type=desktop to have the API set a desktop (e.g. macOS, Windows, or Linux) user agent. Note: This is the default behavior. Not setting the parameter will have the same effect.
  • Set device_type=mobile to have the API set a mobile (e.g. iPhone or Android) user agent.

Note: The device type you set will be overridden if you use keep_headers=true and send your own user agent in the requests header.

    curl "http://api.scrapenetwork.com/?api_key=APIKEY&url=http://httpbin.org/ip&device_type=mobile"

    curl -x "http://scrapenetwork.device_type=mobile:APIKEY@proxy-server.scrapenetwork.com:8001" -k "http://httpbin.org/ip"

Account Information

If you would like to monitor your account usage and limits programmatically (how many concurrent requests you’re using, how many requests you’ve made, etc.) you may use the /account endpoint, which returns JSON.

Note: the requestCount and failedRequestCount numbers only refresh once every 15 seconds, while the concurrentRequests number is available in real-time.

    curl "http://api.scrapenetwork.com/account?api_key=APIKEY"
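Since the /account endpoint returns JSON, parsing it is a one-liner in most languages. The sketch below uses the field names from the note above (requestCount, failedRequestCount, concurrentRequests); the sample payload values are invented for illustration:

```python
import json

# Example payload shaped like the /account response; numbers are made up.
sample = '{"concurrentRequests": 2, "requestCount": 1500, "failedRequestCount": 15}'

def usage_summary(payload):
    """Return (concurrent requests, failure rate) from an /account response."""
    data = json.loads(payload)
    failure_rate = data["failedRequestCount"] / max(data["requestCount"], 1)
    return data["concurrentRequests"], failure_rate

concurrent, failure_rate = usage_summary(sample)
print(concurrent, failure_rate)  # 2 0.01
```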