[Koha] robots.txt

MJ Ray mjr at phonecoop.coop
Tue Nov 3 05:32:59 NZDT 2009


Owen Leonard <oleonard at myacpl.org> wrote:
> When we were hosting our own Koha installation we had to start
> excluding search engine bots (Googlebot in particular) because our
> server was getting hit too hard and it was slowing everything down. I
> think LibLime blocks everything by default for its customers now. I'd
> certainly prefer to be able to let Google in. I'd like the contents of
> the OPAC to be discoverable in search engines.

That's pretty much what our librarians have told me when I've asked.
To some of them, more eyeballs means more borrowers means more loans
and pretty directly means more funding.

It is possible to use things like Google Webmaster Tools and even
iptables to slow the search engine bots down if/when they become a
problem.  I don't know if that will override settings like LibLime's
blocking.
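For the iptables route, something like the following could work as a
starting point.  This is only a sketch using the "recent" match module;
the port, time window and hit count are illustrative and would need
tuning for your own traffic:

```shell
# Track new connections to the OPAC's HTTP port per source address...
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --set --name HTTP
# ...and drop a source that opens more than 30 new connections
# in 60 seconds.  Polite bots back off; so do runaway ones.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --update --seconds 60 --hitcount 30 --name HTTP -j DROP
```

Note this throttles by IP, so a crawler spread across many addresses
will still get through at the aggregate rate.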

In general, I feel it sucks to be doing the search engine's work for
them and they should tread lightly by default, but that's the
trade-off if you want the OPAC to be indexed at the moment.
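If blocking everything outright is too heavy-handed, a middle ground is
a robots.txt that slows well-behaved bots and keeps them off the
expensive search pages while leaving record pages indexable.  A sketch
(the path assumes a standard Koha OPAC install; Crawl-delay is
non-standard and Googlebot in particular ignores it, its rate being set
in Webmaster Tools instead):

```
# Sketch only; adjust paths to your install.
User-agent: *
Crawl-delay: 10
# Keep bots off the CPU-hungry search script:
Disallow: /cgi-bin/koha/opac-search.pl
```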

Hope that helps,
-- 
MJ Ray (slef)  Webmaster and LMS developer at     | software
www.software.coop http://mjr.towers.org.uk        |  .... co
IMO only: see http://mjr.towers.org.uk/email.html |  .... op

