Basic Settings

Turning On Selenium

Most scraping tasks can be solved without a headless browser. In general, the standard page-fetching mechanics are preferable: they are faster, consume fewer resources, and give you more control over the fetching process. But there are cases when a real browser with JavaScript support is your only option.

For such cases, you can use Selenium with the Chrome Web Driver. Please note that when you request a single page, Selenium also fetches many supplementary resources, such as JavaScript files and AJAX requests. You are charged one page request for every successful (the server returns response code 200) document-type HTTP request, as well as for any other request that returns content with the MIME types text/html, text/plain, or application/json. Therefore, downloading one page with Selenium can cost you dozens of page requests. To enable Selenium, use the js_enabled option:

# TURNING ON SELENIUM
- config:
    js_enabled: "yes"

You can use the Walk command to navigate between pages. Keep in mind, however, that in Selenium this command supports only GET requests (including when iterating through the link pool).
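
For example, a minimal sketch of walking to a page while Selenium is enabled might look like this (the URL and selector below are placeholders):

# WALK TO A PAGE WITH SELENIUM ENABLED (URL AND SELECTOR ARE PLACEHOLDERS)
- walk:
    to: https://www.example.com/catalog
    do:
    - find:
        path: h1
        do:
        - parse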

Please note:
Working with the DOM structure in Selenium differs from working with Diggernaut's basic mechanics. You can use the Find command to navigate the DOM. However, when you iterate over a pool of found elements, they may be invalidated if the browser re-renders the DOM.
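
One way to reduce the chance of hitting stale elements is to perform the action first and then select the elements again with a fresh Find, instead of reusing the previously found pool. A rough sketch (the selectors are placeholders):

# CLICK A BUTTON THAT MAY RE-RENDER THE PAGE (SELECTORS ARE PLACEHOLDERS)
- find:
    path: button.load-more
    do:
    - click
# THE DOM MAY HAVE BEEN RE-RENDERED, SO SELECT THE ITEMS WITH A FRESH FIND
- find:
    path: div.item
    do:
    - parse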

When entering a block, you can use all commands of the block context, as well as some additional commands that work only in Selenium:

Command - Description
type - Simulates text input into the current element (block). Mainly used to fill out form fields.
submit - Simulates submitting the form from the selected element. Mostly used to send a completed form.
click - Simulates a mouse click on the current element. It is used to follow links, click buttons, and set the focus (cursor) on form text fields so they can be filled with text. Attention: when following a link, this command does not create a new page context but replaces the current one.
scrollto - Scrolls to the currently selected element. Useful when you need to click an element that is outside the visible area of the browser window.
execute - Executes the JavaScript snippet passed with the js argument. It can be used to manipulate page elements, e.g. to hide or show something.
fetch_content - Fetches the HTML content of the selected frame.
screenshot - Takes a screenshot of the browser window and puts the base64-encoded image into the register.

# FETCH IFRAME CONTENT AND SAVE IT TO A VARIABLE
- find:
    path: 'div[id*="google_ads_iframe"] > iframe, .onf-ad iframe, [id*=moneytag] iframe, [id*=scr_] iframe, #grumi-container iframe'
    do:
    - fetch_content
    - variable_set: ad
# HIDE A STICKY HEADER
- execute:
    js: "document.querySelector('[data-widget=\"webStickyProducts\"] > div').hidden = true;"

# FIND A TEXT FIELD
- find:
    path: 'input[name="username"]'
    do:
    # SCROLL TO THE FIELD
    - scrollto
    # SET THE FOCUS TO THE FIELD
    - click
    # TYPE IN THE TEXT
    - type: iamyouruser
    # SUBMIT THE FORM
    - submit

If you have enabled Selenium support, you can still work with the standard page collection mechanics: switch the current engine, and switch back to Selenium when it is needed again. This lets you work with different sources and APIs within the same digger. To switch the engine, use the set_engine command.

# SWITCH TO THE STANDARD ENGINE
- set_engine: surf
# SWITCH TO THE SELENIUM ENGINE
- set_engine: selenium
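
As an illustration only (the URLs and selectors are placeholders), an engine switch can be embedded into an ordinary crawl, for example fetching a static page with the standard engine and a JavaScript-heavy page with Selenium:

# CRAWL A STATIC PAGE WITH THE STANDARD ENGINE
- set_engine: surf
- walk:
    to: https://www.example.com/static-page
    do:
    - find:
        path: h1
        do:
        - parse
        - variable_set: static_title
# CRAWL A JS-HEAVY PAGE WITH SELENIUM
- set_engine: selenium
- walk:
    to: https://www.example.com/js-page
    do:
    - find:
        path: h1
        do:
        - parse
        - variable_set: js_title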