Hi all,

We've been stress testing Zoom quite a bit lately, putting it through its paces by indexing massive amounts of data. It has handled this admirably, and as a result of our testing we have put together a collection of tips and advice for anyone who is indexing (or thinking about indexing) very large sites.

You can read about this here:
http://www.wrensoft.com/zoom/support...rge_sites.html

Our primary source of data has been the entirety of human knowledge (that is, Wikipedia). It was a big job, but we managed it after 3.8 days of continuous indexing. You can read more about this real-life example here:
http://www.wrensoft.com/zoom/support...html#wikipedia

We hope this is of use and provides a clear demonstration of what Zoom is capable of. Remember that this is just a single node of Zoom; you can, in fact, aggregate multiple nodes together using Zoom MasterNode to provide federated search capabilities.

Please let us know if you have any questions or comments!