Scraping fashion retail data for machine learning purposes from Bloomingdales

Bloomingdale’s is a multi-brand department store chain founded in April 1872 and currently owned by Macy’s, Inc. Using the scraper below, you can collect a wide range of fashion retail data, including prices and images, from the bloomingdales.com online store. This data can be used for brand research, computer vision, and other machine learning tasks.

Approximate number of goods: 350,000
Approximate number of page requests: 350,000
Recommended subscription plan: Medium

PLEASE NOTE! The number of requests can exceed the number of products, because data about variations, images, and similar details may be scraped from other resources, which requires additional requests. In addition, part of the product data may be delivered via XHR requests, which also increases the total number of page requests required.
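
As a rough illustration of how the request count can grow beyond the product count, the sketch below estimates the total number of page requests. The number of listing pages and the share of products needing an extra XHR call are assumptions for illustration, not measured values.

```python
# Rough, illustrative estimate of the total number of page requests.
# All per-crawl figures below are assumptions for illustration only;
# real numbers depend on the catalogue and on how many products need
# extra XHR calls for variations or images.

PRODUCTS = 350_000            # approximate number of goods (from above)
LISTING_PAGES = 4_000         # assumed number of category/listing pages to walk
EXTRA_XHR_SHARE = 0.2         # assumed share of products needing one extra XHR request

total_requests = LISTING_PAGES + PRODUCTS + int(PRODUCTS * EXTRA_XHR_SHARE)
print(f"Estimated page requests: {total_requests:,}")  # ~424,000 with these assumptions
```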

How to use the web scraper to extract product and price data from bloomingdales.com

To use the web scraper for the Bloomingdale’s store website, you must have an account with our Diggernaut service. Simply follow this step-by-step guide:

  1. Go through this registration link to open a free account with Diggernaut
  2. After registering and confirming your email address, log in to your account
  3. Create a project with any name and description, if you do not know how to do it, please refer to our documentation
  4. Switch to the created project and create a digger with any name, if you do not know how to do it, please refer to our documentation
  5. Copy the following digger configuration to the clipboard and paste it into the digger you created; if you do not know how to do it, please refer to our documentation
  6. PLEASE NOTE! Basic proxy servers may not work with this site, and you may need to use your own proxy servers. You will need to specify the proxy server in the place indicated by the comments in the digger configuration. If you feel unsure about this step, please contact us using the support system or our online chat, and we will be glad to help you. A quick way to pre-check a proxy locally is sketched after this list.
  7. Switch the mode of the digger from Debug to Active, if you do not know how to do it, please refer to our documentation
  8. Run your digger and wait until it completes, if you do not know how to do it, please refer to our documentation
  9. Download the scraped dataset in the format you need, if you do not know how to do it, please refer to our documentation
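
If you plan to use your own proxy servers (see the note in step 6), a quick local check like the sketch below can confirm that the proxy reaches the site before you run the digger. It uses the standard Python requests library and a placeholder proxy address; it is not part of the digger configuration itself.

```python
# Quick local check that a proxy can reach bloomingdales.com before you
# put it into the digger configuration. The proxy URL is a placeholder;
# substitute your own proxy address and credentials.
import requests

PROXY = "http://user:password@my-proxy.example.com:8080"  # placeholder proxy

try:
    resp = requests.get(
        "https://www.bloomingdales.com/",
        proxies={"http": PROXY, "https": PROXY},
        headers={"User-Agent": "Mozilla/5.0"},  # a browser-like UA is usually expected
        timeout=30,
    )
    print("Status code:", resp.status_code)  # 200 suggests the proxy is usable
except requests.RequestException as exc:
    print("Proxy check failed:", exc)
```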

You can also set up a schedule to run your scraper and collect data regularly.

Scraping configuration for the digger

Sample of scraped data

Below is a sample of a dataset with several products in JSON format, so you can easily review it and see the data structure. The dataset can also be downloaded as CSV, XLSX, XML, or any other text format using templates.
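
Once the dataset is downloaded, it can be loaded for analysis or model training with a few lines of Python. The sketch below is a minimal example; the file name and field names (such as price and image_url) are assumptions, so adjust them to match your actual export.

```python
# Minimal sketch of loading an exported dataset for analysis or ML work.
# Field names such as "price" and "image_url" are assumptions here;
# check the actual structure of your export before relying on them.
import pandas as pd

df = pd.read_json("bloomingdales_products.json")  # or pd.read_csv(...) for a CSV export
print(df.shape)
print(df.columns.tolist())

# Example: keep only products that have both a price and an image URL,
# e.g. as a starting point for a pricing or computer vision dataset.
if {"price", "image_url"}.issubset(df.columns):
    usable = df.dropna(subset=["price", "image_url"])
    print(f"{len(usable)} products with both price and image")
```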

