User talk:Chrisgagne

Hi!

It's my understanding that using a web spider such as wget is frowned upon because it can place a heavy load on the servers, and that downloading a database dump is considered the more efficient route.

However, my application does not warrant pulling down a full dump. I'd be interested in pulling 50 articles plus each article that those 50 articles link to. I'm guessing that this would be on the order of 10,000-20,000 articles, well below the size of the entire Wikipedia.
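
In case it helps to check that estimate, here is a minimal sketch of how the linked articles could be enumerated through the MediaWiki API (action=query with prop=links) rather than by spidering the rendered pages. This is only an illustration of the idea above; the User-Agent string, contact address, and seed titles are placeholders, not anything endorsed by the helpers here.

 import time
 import requests

 API = "https://en.wikipedia.org/w/api.php"
 # Placeholder identification; a real crawler should give its own contact details.
 HEADERS = {"User-Agent": "ExampleLinkLister/0.1 (contact: you@example.org)"}

 def linked_titles(title):
     """Yield the article-namespace titles linked from one page."""
     params = {
         "action": "query",
         "format": "json",
         "titles": title,
         "prop": "links",
         "plnamespace": 0,    # main (article) namespace only
         "pllimit": "max",
     }
     while True:
         data = requests.get(API, params=params, headers=HEADERS, timeout=30).json()
         for page in data["query"]["pages"].values():
             for link in page.get("links", []):
                 yield link["title"]
         if "continue" not in data:
             break
         params.update(data["continue"])   # follow the plcontinue cursor
         time.sleep(1)                      # stay near one request per second

 # Rough size check for the 50-article case (seed list here is hypothetical).
 seeds = ["Wget", "Web crawler"]
 print(len({t for seed in seeds for t in linked_titles(seed)}))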

Therefore, would it be broadly tolerated if I used wget or a similar spider, on the condition that I ran it at a rate of only one page per second? It seems like all of the anti-spider notes are about people trying to download all of Wikipedia at 50 pages per second.
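
To make the proposed throttle concrete, here is a minimal sketch of a fetcher limited to roughly one page per second, using Python's requests and time.sleep in place of wget (wget itself has a --wait=1 option that enforces the same delay). The User-Agent string is again a placeholder.

 import time
 import requests

 # Placeholder identification; replace with your own project name and contact.
 HEADERS = {"User-Agent": "ExamplePoliteFetcher/0.1 (contact: you@example.org)"}

 def fetch_pages(titles, delay=1.0):
     """Fetch each article's rendered HTML, pausing between requests."""
     pages = {}
     for title in titles:
         url = "https://en.wikipedia.org/wiki/" + title.replace(" ", "_")
         resp = requests.get(url, headers=HEADERS, timeout=30)
         if resp.ok:
             pages[title] = resp.text
         time.sleep(delay)   # one page per second, as proposed above
     return pages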

Your question is a bit too technical for most of the people who answer {{helpme}} requests. I would suggest asking it at the village pump or, if you use IRC, in #wikimedia-tech. Mr.Z-man 02:29, 3 September 2007 (UTC)