Passive scanning of .js files is CPU intensive and keeps retrying the same file
I'm reporting a behavior that I've noticed since the new static code analysis was introduced.
Whenever there is a .js (or other) file that is large or contains complicated code, the passive scanner becomes very CPU intensive, and after hitting the analysis timeout for that file it seems to enter a loop where it keeps analyzing the same file over and over.
I think the problem here is the loop: there should be a retry limit. By defining a retry counter for each file, the scanner would hit that limit and move on to scanning another file.
Thanks for this report. We're aware of situations where sites that make heavy use of complex JS can cause the static code analysis to run very slowly. We are planning to provide a UI, similar to the active scan queue, where you can monitor Burp's code analysis of passively scanned items, cancel selected items, and so on. We also plan to provide options for configuring the thread pool used for passive scanning.
Regarding the current behavior, Burp processes each passively scanned item in turn; if a given item times out, its analysis is cancelled and Burp moves on. So even if there are lots of JS-heavy items that time out, Burp will eventually process (or time out) all of them, and shouldn't enter an infinite loop.
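For anyone curious about the pattern described above, here is a minimal sketch (not Burp's actual code; all names are illustrative) of processing a queue of items one at a time with a per-item timeout, cancelling the slow item and moving on rather than retrying it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PassiveAnalysisQueue {
    // Hypothetical stand-in for static code analysis of one item;
    // workMillis simulates how long the analysis takes.
    static String analyze(String item, long workMillis) throws InterruptedException {
        Thread.sleep(workMillis);
        return "done:" + item;
    }

    // Process items in turn; any item exceeding timeoutMillis is
    // cancelled and recorded as timed out, then we continue.
    public static List<String> processAll(List<String> items, List<Long> costs,
                                          long timeoutMillis) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<String> results = new ArrayList<>();
        try {
            for (int i = 0; i < items.size(); i++) {
                String item = items.get(i);
                long cost = costs.get(i);
                Future<String> f = pool.submit(() -> analyze(item, cost));
                try {
                    results.add(f.get(timeoutMillis, TimeUnit.MILLISECONDS));
                } catch (TimeoutException e) {
                    f.cancel(true);                  // interrupt the slow analysis...
                    results.add("timeout:" + item);  // ...record it, and move on
                } catch (ExecutionException e) {
                    results.add("error:" + item);
                }
            }
        } finally {
            pool.shutdownNow();
        }
        return results;
    }
}
```

The key point is that the timed-out item is never resubmitted, so a heavy JS file costs at most one timeout interval before the scanner continues with the next item.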