Extract data from XLS, XLSX and CSV files

Today, support for files in XLS, XLSX, and CSV formats has been added to the Diggernaut platform. It is implemented the same way as for other supported file types: you load a file into the digger using the walk command, the digger fetches the file, determines its type, and converts it to XML. You can then traverse the DOM structure, extract the necessary data, and build your dataset.

Let’s see exactly how it works with an example. To do so, we uploaded three files to our sandbox:
https://www.diggernaut.com/sandbox/sample.csv – CSV data file
https://www.diggernaut.com/sandbox/sample.xls – XLS data file (binary version)
https://www.diggernaut.com/sandbox/sample.xlsx – XLSX data file (XML version)

We’ll write a very simple digger configuration that fetches the file and shows us the source code of the converted data in debug mode.
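Here is a minimal sketch of such a configuration, assuming the usual Diggernaut meta-language layout with a config section and a do block; the debug level shown is just an example value.

    ---
    config:
        # debug mode: the digger writes the fetched (converted) page source to the log
        debug: 2
    do:
    # fetch the CSV file; the digger detects the format and converts it to XML
    - walk:
        to: https://www.diggernaut.com/sandbox/sample.csv
        do:
        # nothing to extract yet - we only want to inspect the converted source in the log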

If we run the digger in debug mode, we will see the XML source of the page with the converted data in the log.
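The listing is not reproduced here verbatim; with made-up cell values and an assumed class naming scheme, the converted CSV looks roughly like this:

    <sheet class="sheet_1">
        <row class="row_1">
            <column class="column_1">name</column>
            <column class="column_2">price</column>
        </row>
        <row class="row_2">
            <column class="column_1">Sample product</column>
            <column class="column_2">9.99</column>
        </row>
    </sheet>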

Since a CSV file contains only one sheet, the resulting structure has a single sheet element. In XLS/XLSX there can be many sheets, and each of them is kept in its own sheet element. It is quite easy to parse this structure: go through the sheet elements, then through the row elements, and extract the data from the column elements. The values in the class attributes correspond to the row and column numbers in the original file.
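As a rough illustration of such a parse, the walk's do block could be extended along the following lines. The find/parse and object_new/object_field/object_save commands are standard Diggernaut meta-language building blocks, but the class selectors (column_1, column_2) and the item/name/price naming are assumptions made for this sample.

    - walk:
        to: https://www.diggernaut.com/sandbox/sample.csv
        do:
        # iterate over every row of every sheet
        - find:
            path: sheet row
            do:
            - object_new: item
            # assumed class naming: first column of the current row
            - find:
                path: column.column_1
                do:
                - parse
                - object_field:
                    object: item
                    field: name
            # assumed class naming: second column of the current row
            - find:
                path: column.column_2
                do:
                - parse
                - object_field:
                    object: item
                    field: price
            # save the assembled record to the dataset
            - object_save:
                name: item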

Let’s now see how the XLS resource will be converted.
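The configuration stays the same; we only point the walk command at the XLS file instead of the CSV one:

    - walk:
        to: https://www.diggernaut.com/sandbox/sample.xls
        do:
        # again, nothing to extract - we just want the converted source in the log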

We get the following source code in the log.
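As before, the listing below is an illustration with made-up cell values rather than the literal sandbox output; the point is that there are now two sheet elements, each with its own rows and columns.

    <sheet class="sheet_1">
        <row class="row_1">
            <column class="column_1">name</column>
            <column class="column_2">price</column>
        </row>
    </sheet>
    <sheet class="sheet_2">
        <row class="row_1">
            <column class="column_1">sku</column>
            <column class="column_2">stock</column>
        </row>
    </sheet>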

As you can see, this file has two sheets, and the rest of the structure is basically the same as in the CSV case. If we load the XLSX file, we get exactly the same result as with XLS, so we omit that test.

How else can you use this functionality, besides parsing the final data from spreadsheets? One option is to use a spreadsheet as a feed of the resources your digger should scrape. For example, you add a list of links to products in a store to the sheet. Your scraper reads the sheet, picks up the list of URLs, puts them into the pool, and then the main logic of the scraper collects the data about the goods. Or imagine that you have a spreadsheet with data that must be extended with data from the web. Your scraper reads the sheet, goes through it line by line, and forms a new dataset; for each line it can visit some page and extract additional information to keep in the new dataset. This way the data from the spreadsheet and from the product page end up merged into a single entry. There are other ways to use spreadsheets, but we can talk about them next time.
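As a very rough sketch of the feed scenario: assume a hypothetical spreadsheet in which the first column of every row holds a product URL. The sketch below walks to each URL directly via the register (to: value) instead of going through the link pool, and the feed URL, class selectors, and field names on the product page are all made up for illustration.

    ---
    config:
        debug: 2
    do:
    # read the spreadsheet that serves as the feed of product URLs
    - walk:
        to: https://www.example.com/feeds/products.xlsx
        do:
        # hypothetical layout: the first column of every row contains a product URL
        - find:
            path: sheet row column.column_1
            do:
            - parse
            # walk to the URL we just parsed and scrape the product page
            - walk:
                to: value
                do:
                - object_new: product
                # illustrative selector on the product page
                - find:
                    path: h1.product-title
                    do:
                    - parse
                    - object_field:
                        object: product
                        field: title
                - object_save:
                    name: product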

