We have come across situations where a web server is configured to protect itself against DoS (Denial of Service) attacks, or against being overloaded by too many requests from a single client, and this protection is triggered when Zoom spiders the site with all 10 threads.
If set up properly, this mechanism should simply stop responding to further requests, and Zoom will report that the server is not responding.
In some cases, the server's mechanism is overly drastic and terminates the connection midway through a request. This can cause Zoom to stop responding, because it is in turn waiting on its own HTTP component (curl), which never returns.
In either case, you can avoid this problem by configuring Zoom to use fewer threads (e.g. 1 or 2), or to insert a delay between requests. You can find these settings under "Configure" -> "Spider options". A sketch of why this helps follows below.
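As a rough illustration (this is not Zoom's actual implementation; it is a hypothetical single-threaded fetch loop in Python, with placeholder URLs and a made-up delay value), the sketch below shows the effect of spacing out requests so the per-client request rate stays under the server's threshold. Note the request timeout, which also guards against the "connection cut midway" case described above:

```python
import time
import urllib.request

# Hypothetical example values -- substitute your own site and limits.
urls = ["http://www.example.com/page%d.html" % i for i in range(1, 6)]
delay_between_requests = 2.0  # seconds; mirrors Zoom's delay-between-requests setting

for url in urls:
    try:
        # A timeout ensures we give up on a connection that the server
        # terminates midway and that would otherwise never return.
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read()
            print("Fetched %s (%d bytes)" % (url, len(body)))
    except Exception as exc:
        print("Failed to fetch %s: %s" % (url, exc))
    # Wait before the next request so the combined request rate stays
    # below the server's per-client limit.
    time.sleep(delay_between_requests)
```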
It may also help to talk to your web host or server administrator to find out what policy is in place, i.e. how many connections within how many seconds will provoke this behaviour, so you can tune your crawling configuration accordingly.
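For example, if your administrator can tell you the limit (all figures below are invented for illustration), you can estimate a safe per-thread delay with some simple arithmetic:

```python
# Hypothetical policy reported by the server administrator:
max_requests = 30    # requests allowed...
window_seconds = 10  # ...within this many seconds, per client

threads = 2          # number of spider threads you plan to use

# Each thread should wait at least this long between requests so that
# the combined rate (threads / delay) stays under the server's limit
# (max_requests / window_seconds).
min_delay = window_seconds * threads / max_requests
print("Use a delay of at least %.2f seconds per request" % min_delay)
```

With these example figures, two threads would each need a delay of at least 0.67 seconds between requests.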