Sometimes a customer has neither the opportunity nor the time to create scrapers,
maintain them, set up schedules, monitor the scraping process, and retrieve data manually or via the API.
All they want is to receive datasets with properly prepared data, delivered periodically.
For these cases, we offer our exceptional service: the "Full Managed Solution".
What you get if you decide to use this service:
- Our developers build all the scrapers required to extract data from the source websites you need.
- Our team sets up the schedule and monitors the scraping process to make sure the scrapers work properly and the data is correct.
- If a source website changes (layout or mechanics) and a scraper becomes inoperable, we make the required fixes to the scraper.
- We take care of data validation and quality control.
- Datasets are delivered to you in the required format by any means you need.
- You don't need to think about subscription limits :)
If this is exactly what you need, please contact us via
online chat or email, or click the button below and fill out the form.
Description of terms
The number of projects that you can have on your account.
Projects allow you to group your scrapers (diggers) to organize your workspace better.
This becomes useful when you have hundreds of scrapers used for different purposes,
but technically you can have any number of scrapers in a single project.
The number of scrapers (diggers) that you can have on your account.
A digger is a script written in a special meta-language that allows you to extract and collect the required
information from one or several sources, process this data, and form a specific data structure from it.
The scraped data can be downloaded manually in various formats or retrieved via the API as many times as you
need without any extra charge. Typically, one digger is designed to handle one source (for example, a website),
because the result of a single digger job is a session with a single dataset that has a specific structure. However, you are
free to scrape multiple sources with a single digger if your data design or workflow requires it, or if you simply
prefer to work that way.
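As a minimal sketch of retrieving a finished session's dataset over HTTP: the base URL, endpoint path, and token header below are illustrative assumptions, not the product's documented API; consult your account's API documentation for the real values.

```python
import json
import urllib.request

API_BASE = "https://api.example.com/v1"  # placeholder base URL (assumption)
API_TOKEN = "YOUR_API_TOKEN"             # placeholder API token (assumption)

def dataset_url(session_id: str) -> str:
    """Build a hypothetical URL for one session's dataset."""
    return f"{API_BASE}/sessions/{session_id}/data"

def fetch_dataset(session_id: str) -> list:
    """Download the dataset of one completed digger session as JSON."""
    req = urllib.request.Request(
        dataset_url(session_id),
        headers={"Authorization": f"Token {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because downloads and API retrievals are free of extra charge, a client like this can safely re-fetch the same session's data as often as needed.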
The number of requests to data sources. A request is an HTTP request made
by a scraper, for example, a fetched page, an ajax call, or a document or image request. Requests are the primary
consumable resource on your account. If your available requests are depleted, your diggers cannot start a new job,
and all running diggers will be stopped. The request balance is renewed at the beginning of each new billing period
(approximately every month). The balance is set to the number of requests allowed by your subscription plan.
For example, on the Free plan, it resets to 5,000 requests every month. Please note that unused requests do not accumulate
and are not carried over to the next month. If you want to increase the number of available monthly requests, you need to upgrade
your subscription plan. If you need additional page requests just once, please contact us via online chat or email.
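Since every page, ajax call, document, and image fetch counts against the balance, it helps to estimate your monthly consumption before choosing a plan. A rough back-of-the-envelope estimate (the parameters are hypothetical examples, not product limits):

```python
def monthly_requests_needed(pages_per_run: int,
                            extra_requests_per_page: int,
                            runs_per_month: int) -> int:
    """Estimate monthly request consumption.

    Each page fetch costs one request, plus any ajax/document/image
    requests the page triggers; multiply by how often the digger runs.
    """
    return pages_per_run * (1 + extra_requests_per_page) * runs_per_month

# Example: 100 pages per run, 3 extra requests per page, daily runs.
# 100 * (1 + 3) * 30 = 12000 requests/month, which exceeds the
# Free plan's 5000-request monthly balance.
estimate = monthly_requests_needed(100, 3, 30)
```

If the estimate exceeds your plan's monthly balance, upgrading (or a one-time top-up arranged via chat or email) is the way to avoid diggers being stopped mid-month.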
Binary files uploaded to third-party storage.
The number of simultaneously running digger sessions in Data on Demand mode.
Data Retention (days)
The number of days that your data is stored in our database before it is
automatically deleted. Don't worry: the system only deletes the data of completed sessions, so the retention
period starts when a digger finishes its job. If you download your data, you can keep it in your own storage
as long as you want; the data is deleted only from our cloud.
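The retention rule above can be sketched as a small date calculation (the 7-day retention value is an arbitrary example, not a plan's actual limit): deletion is scheduled from the moment the session finishes, not from when it started.

```python
from datetime import datetime, timedelta

def deletion_date(session_finished: datetime, retention_days: int) -> datetime:
    """Return when a completed session's data is due for deletion.

    The clock starts when the digger finishes its job; sessions that
    are still running are never deleted.
    """
    return session_finished + timedelta(days=retention_days)

# Example: a session finishing on Jan 10 with 7-day retention
# is deleted on Jan 17.
due = deletion_date(datetime(2024, 1, 10), 7)
```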
"Pay As You Go" is a special plan with no monthly fee and none of the Free plan's restrictions.
Top up your balance at any time and work for as long as your balance lasts.