Burp Suite User Forum


Training Burp's crawler

Andreas | Last updated: Feb 07, 2019 12:54PM UTC

In the 1.x version, one approach to ensuring good coverage in complex apps was to add the site to the scope, start the spider, and then manually browse the site, so that components the spider couldn't find on its own would still be included and the spider could continue crawling from paths not otherwise reachable. How is this achieved in the 2.x version? With the "crawl and audit" task, there's no clear indication that the crawler is actually including the paths you manually followed, so it's unclear whether it relies purely on its own ability to reach certain parts of the application (such as unlinked pages).

Liam, PortSwigger Agent | Last updated: Feb 07, 2019 01:02PM UTC

Andreas, the "live passive crawl from Proxy (all traffic)" task should be enabled in the Dashboard > Tasks view. Also ensure that the task execution engine isn't paused.
