Training Burp's crawler
In the 1.x version, a common approach to ensuring good coverage in complex apps was to add the site to the scope, start the spider, and then browse the site manually, so that components the spider couldn't find on its own were still included and the spider could continue crawling from paths that were otherwise unreachable.
How is this achieved in the 2.x version? With the "crawl and audit" task, there's no clear indication that the crawler is actually including the paths you followed manually, so it is unclear whether it relies purely on its own ability to reach certain parts of the application (such as unlinked pages).
Andreas, the live passive crawl from Proxy (all traffic) task should be enabled in the Dashboard > Tasks view; it picks up the traffic from your manual browsing. Also ensure the task execution engine isn't paused.
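As a complement to the GUI approach above, Burp Suite Professional 2.x also exposes a REST API (enabled under User options, disabled by default, served on 127.0.0.1:1337) whose `POST /v0.1/scan` endpoint accepts seed URLs for a crawl-and-audit scan. The sketch below only builds the request body; the host, port, and the example target URLs are assumptions, not values from this thread:

```python
import json

# Assumption: Burp's REST API is enabled and listening on its
# default address. Adjust to match your own configuration.
BURP_SCAN_ENDPOINT = "http://127.0.0.1:1337/v0.1/scan"

def build_scan_request(seed_urls):
    """Build the JSON body for POST /v0.1/scan.

    Each URL in seed_urls acts as a starting point for the crawler,
    so paths you discovered manually can be fed in explicitly.
    """
    return json.dumps({"urls": list(seed_urls)})

# Hypothetical seed URLs for illustration only.
body = build_scan_request([
    "https://target.example/app/",
    "https://target.example/app/unlinked-page/",
])
print(body)

# To actually launch the scan, POST the body, e.g. with urllib:
#   import urllib.request
#   req = urllib.request.Request(
#       BURP_SCAN_ENDPOINT, data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```

This doesn't replace the live passive crawl task, but it gives you an explicit, repeatable way to hand the 2.x crawler the entry points you care about.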