Some customers do not have the opportunity or the time to create scrapers,
maintain them, set up schedules, control the scraping process, and retrieve data manually or via the API on their own.
All they want is datasets with proper data delivered to them periodically.
For these cases, we offer you our special service - "Full Managed Solution".
What you will get if you decide to use this service:
- Our developers will code all required scrapers to extract data from the source websites you need
- Our team will set up schedules and monitor the scraping process to make sure the scrapers work properly and the data is correct
- If a source website changes (layout or mechanics) and a scraper becomes inoperable, we will make the required fixes to the scraper
- We will take care of data validation and quality control
- Datasets will be delivered to you in the required format by any means you need
- You don't need to think about subscription limits :)
If this is exactly what you need, please contact us via
online chat or email, or click the button below and fill out the form.
Description of terms
Number of projects that you can have on your account. Projects allow you
to group your scrapers (diggers) to better organize your workspace. This becomes useful when you have hundreds
of scrapers used for different purposes, but technically you can have any number of scrapers in a single project.
The number of scrapers (diggers) that you can have on your account.
A digger is a script written in a special meta-language that allows you to extract and collect the required
information from one or several sources, process this data, and form a certain data structure from it.
The scraped data can be downloaded manually in various formats or retrieved via the API as many times as you
need without any extra charge. Typically, one digger is designed to handle one source (for example, a website),
because the result of a single digger job is a session with a single dataset that has a specific structure, but you are
free to scrape multiple sources with a single digger if that is required by your data design or workflow, or you
simply want to do it this way.
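As a rough sketch, fetching a completed session's dataset over the API might look like the snippet below. The base URL, endpoint path, and `token` query parameter are hypothetical placeholders, not documented API details; check the actual API reference for the real endpoints:

```python
import json
import urllib.request

# Hypothetical base URL and parameter names - replace with the real API details.
API_BASE = "https://api.example.com/v1"

def dataset_url(session_id: str, token: str) -> str:
    """Build the URL for downloading one completed session's dataset as JSON."""
    return f"{API_BASE}/sessions/{session_id}/data?token={token}"

def fetch_dataset(session_id: str, token: str) -> list:
    """Download and parse the dataset; each item is one scraped record."""
    with urllib.request.urlopen(dataset_url(session_id, token)) as resp:
        return json.load(resp)
```

Because downloads and API retrievals are not charged against your request balance, you can re-fetch the same session's data as often as you need.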
Number of requests to data sources. A request is any HTTP request made by a scraper,
for example a page fetch, an AJAX request, or a document or image request. Requests are the main consumable resource on your
account. If your available requests get depleted, your diggers cannot start new jobs, and all running diggers will be stopped.
The request balance is renewed at the beginning of each new billing period (approximately every month) and is set to
the number of requests allowed by your subscription plan. For example, on the Free plan it is reset to 5000 requests
every month. Please note that unused requests do not accumulate and are not carried over to the next month. If you want to increase
the number of available monthly requests, you need to upgrade your subscription plan. If you need additional page requests just
once, please contact us via online chat or email.
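To illustrate how the quota translates into scraping capacity, here is a minimal planning sketch (the 50-requests-per-job figure is an invented example; only the 5000-request Free plan quota comes from the text above):

```python
# Capacity planning: how many digger jobs fit into a monthly request quota.
def jobs_per_month(monthly_requests: int, requests_per_job: int) -> int:
    """Whole jobs that a monthly quota covers, given each job's HTTP request count."""
    return monthly_requests // requests_per_job

# On the Free plan (5000 requests/month), a digger that makes about 50
# HTTP requests per job (pages, AJAX calls, images) can run 100 times.
print(jobs_per_month(5000, 50))
```

Remember that every HTTP request counts, so a single page that also pulls in images and AJAX data consumes several requests, not one.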
Data Retention (days)
The number of days that your data is stored in our database before it is automatically deleted.
Do not worry: the system only deletes the data of completed sessions, so the retention period starts ticking when a digger finishes its job.
If you download your data, you can keep it in your own storage as long as you want; data is deleted only from our cloud.
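Since retention is counted from job completion rather than job start, the deletion date can be worked out like this (the 30-day period below is only an illustrative value; your actual retention depends on your plan):

```python
from datetime import datetime, timedelta

def deletion_date(finished_at: datetime, retention_days: int) -> datetime:
    """Retention starts when the session completes, not when the job starts."""
    return finished_at + timedelta(days=retention_days)

# A session that finished on 2024-03-01 with 30-day retention
# would be purged from the cloud on 2024-03-31.
purge = deletion_date(datetime(2024, 3, 1), 30)
```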