I installed Koha 2.0.0RC1 the other day. I decided to use the Z39.50 search feature to add a book. The first search I tried apparently didn't find anything; however, I was able to find items by Eric Raymond when I searched for him by name, so it does manage to search the US Library of Congress (the only Z39.50 server I have configured is the default Library of Congress one). However, I seem to have an issue with the other searches I tried: the client won't give up searching for them (excerpts from the log file below). It has tried many (hundreds?) of times for each search over the last 30 minutes and is still trying. When I killed the daemon and started it back up, it kept on trying the same searches. Is there a way to tell the client to give up searching for something if it isn't found after X attempts?

Edward Corrado

BTW: I have no idea how the z39.50 client decided to search for me (but I did have only one record in the database at the time of this z39.50 search, and it had me as the author).

Processing isbn="0131411551" at Library of Congress z3950.loc.gov:7090 voyager USMARC (11 forks)
z3950.loc.gov:7090 done.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing isbn="0131411*" at Library of Congress z3950.loc.gov:7090 voyager USMARC (11 forks)
z3950.loc.gov:7090 done.
DBD::mysql::st execute failed: Lost connection to MySQL server during query at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 123.
DBD::mysql::st fetchrow failed: fetch() without execute() at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 124.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing isbn="0131411*" at Library of Congress z3950.loc.gov:7090 voyager USMARC (11 forks)
z3950.loc.gov:7090 done.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
starting loop
Library of Congress ==> answered : 10000 found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 170.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing author="Corrado, Edward" at Library of Congress z3950.loc.gov:7090 voyager USMARC (11 forks)
z3950.loc.gov:7090 done.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing isbn="0-13-141155-1" at Library of Congress z3950.loc.gov:7090 voyager USMARC (12 forks)
z3950.loc.gov:7090 done.
starting loop
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing author="Corrado, Edward" at Library of Congress z3950.loc.gov:7090 voyager USMARC (11 forks)
z3950.loc.gov:7090 done.
DBD::mysql::st execute failed: Lost connection to MySQL server during query at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 123.
DBD::mysql::st fetchrow failed: fetch() without execute() at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 124.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing isbn="0-13-141155-1" at Library of Congress z3950.loc.gov:7090 voyager USMARC (12 forks)
z3950.loc.gov:7090 done.
starting loop
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing author="Corrado, Edward" at Library of Congress z3950.loc.gov:7090 voyager USMARC (11 forks)
z3950.loc.gov:7090 done.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Processing isbn="0-13-141155-1" at Library of Congress z3950.loc.gov:7090 voyager USMARC (12 forks)
z3950.loc.gov:7090 done.
starting loop
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
Library of Congress ==> USMARC at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 156.
Library of Congress ==> answered : no records found at /usr/local/koha/intranet/scripts/z3950daemon/processz3950queue line 168.
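[Editor's note: the give-up-after-X-attempts behaviour asked about above could be sketched as follows. This is an illustrative Python sketch, not Koha's actual Perl processz3950queue; the queue shape, MAX_ATTEMPTS, search_server, and process_queue are all invented for the example.]

```python
MAX_ATTEMPTS = 3  # give up on a query after this many tries

def search_server(query):
    # Placeholder for a real Z39.50 search; here nothing is ever found.
    return False

def process_queue(queue):
    """One pass over the pending-search queue. Entries that keep failing
    are dropped once they reach MAX_ATTEMPTS instead of being retried
    forever, which is what the daemon currently does."""
    still_pending = []
    for entry in queue:
        if entry["attempts"] >= MAX_ATTEMPTS:
            # Give up: stop hammering the server with this query.
            continue
        entry["attempts"] += 1
        if not search_server(entry["query"]):
            still_pending.append(entry)  # retry on a later pass
    return still_pending
```

Each failed search is re-queued with a bumped attempt counter, so even a daemon restart (which re-reads the queue) would not reset the count.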
On 2003-12-30 16:10:28 +0000 Edward M. Corrado <ecorrado@athena.rider.edu> wrote:
the client won't give up searching for them
I didn't notice this and offended a library sysadmin (256000 requests or so). If someone can tell me how to solve it, I would be grateful, but I disabled the daemon for now and plan to replace it later. -- MJR/slef My Opinion Only and possibly not of any group I know. Please http://remember.to/edit_messages on lists to be sure I read http://mjr.towers.org.uk/ gopher://g.towers.org.uk/ slef@jabber.at Creative copyleft computing services via http://www.ttllp.co.uk/
MJ Ray wrote:
On 2003-12-30 16:10:28 +0000 Edward M. Corrado <ecorrado@athena.rider.edu> wrote:
the client won't give up searching for them
I didn't notice this and offended a library sysadmin (256000 requests or so). If someone can tell me how to solve it, I would be grateful, but I disabled the daemon for now and plan to replace it later.
* Manually: truncate the z3950_results table.
* Automatically: I think in processqueue we could use startdate to calculate an expiry date: a request must not last more than 2 minutes. Maybe even less.
-- Paul POULAIN, independent consultant in free software, French-language coordinator for Koha (a free ILS, http://www.koha-fr.org)
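[Editor's note: Paul's startdate-based expiry could be sketched like this. Illustrative Python only, assuming each queue entry carries the Unix timestamp at which it was enqueued; the real table lives in MySQL and is handled from Perl/DBI, and prune_expired is an invented name.]

```python
import time

EXPIRY_SECONDS = 120  # "a request must not last more than 2 minutes"

def prune_expired(queue, now=None):
    """Drop queue entries whose startdate is older than the expiry
    window, so abandoned searches stop being retried."""
    now = time.time() if now is None else now
    return [e for e in queue if now - e["startdate"] <= EXPIRY_SECONDS]
```

In the real daemon the same effect could be had with a single DELETE ... WHERE startdate < NOW() - INTERVAL 2 MINUTE at the top of each queue pass.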
On 2004-01-05 14:29:47 +0000 paul POULAIN <paul.poulain@free.fr> wrote:
* automatically : I think in processqueue, we could use startdate to calculate an expiry date : a request must not last more than 2 minutes. Maybe even less.
One reason I was given for the daemon method was the long time that Z39.50 searches can take. If we impose a 2 minute limit, there seems little reason not to use traditional CGI scripts. Maybe you meant some longer limit, like 20 minutes. How long will librarians actually wait for search results?
On Mon, 5 Jan 2004, MJ Ray wrote:
On 2004-01-05 14:29:47 +0000 paul POULAIN <paul.poulain@free.fr> wrote:
* automatically : I think in processqueue, we could use startdate to calculate an expiry date : a request must not last more than 2 minutes. Maybe even less.
One reason I was given for the daemon method was the long time that z39.50 searches can take. If we impose a 2 minute limit, there seems little reason not to use traditional CGI scripts. Maybe you meant some longer limit, like 20 minutes.
How long will librarians actually wait for search results?
20 minutes seems like an awfully long time to me (but then again I am not a cataloguer). Maybe instead of hard-coding it, the length of time could be a variable that each site could easily customize if they wish. I'd probably like to test (or hear from people who actually catalogue a lot of records) to see how long it should go on for, but somewhere between two and five minutes seems like a reasonable default to me. Ed C.
MJ Ray wrote:
On 2004-01-05 14:29:47 +0000 paul POULAIN <paul.poulain@free.fr> wrote:
* automatically : I think in processqueue, we could use startdate to calculate an expiry date : a request must not last more than 2 minutes. Maybe even less.
One reason I was given for the daemon method was the long time that z39.50 searches can take. If we impose a 2 minute limit, there seems little reason not to use traditional CGI scripts. Maybe you meant some longer limit, like 20 minutes.
How long will librarians actually wait for search results?
I think 2 minutes is enough. But... when you search 4 different servers, if each request needs 2 minutes and is synchronous, you need 8 minutes to get an answer. With the daemon, it's asynchronous, so you can have answers in 2 minutes. That's the main reason for the daemon. With another one, imho: if you set your webserver timeout low (something like 10 seconds), the daemon becomes mandatory.
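[Editor's note: Paul's arithmetic (4 servers at 2 minutes each, searched one after another, is 8 minutes; searched in parallel it is roughly 2) can be illustrated with a thread pool. A Python sketch only; search_all and search_fn are invented names, not Koha code.]

```python
from concurrent.futures import ThreadPoolExecutor

def search_all(servers, query, search_fn):
    """Fire one search per server concurrently. Total wall-clock time
    is roughly that of the slowest single server, not the sum of all
    of them, which is the whole point of the asynchronous daemon."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = {server: pool.submit(search_fn, server, query)
                   for server in servers}
        return {server: f.result() for server, f in futures.items()}
```

With 4 servers each taking up to 2 minutes, this returns in about 2 minutes instead of 8.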
On 2004-01-05 15:37:24 +0000 paul POULAIN <paul.poulain@free.fr> wrote:
With the daemon, it's asynchronous, so you can have answers in 2 minutes. That's the main reason for the daemon.
How many Z39.50 servers do most people use? Is it impossible to write an asynchronous CGI? These are design questions, so please reply to -devel if you want.
if you set your webserver timeout low (something like 10 seconds), the daemon becomes mandatory.
If you set your timeout that low, I think you are likely to have worse problems unless you have a fast machine. The CGI could just return whatever answers come back in <10 seconds. Is it reasonable to use the daemon by default just to give better support to odd setups?
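[Editor's note: the "return whatever answers come back in <10 seconds" idea could look like this. Illustrative Python only; partial_results is an invented name, and as the comment notes, a real daemon would also need a way to abandon the requests still in flight.]

```python
import concurrent.futures

def partial_results(servers, query, search_fn, deadline=10.0):
    """Collect whatever answers arrive before the deadline. Servers
    that have not replied yet are simply left out of the result."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(servers))
    futures = {pool.submit(search_fn, s, query): s for s in servers}
    done, _not_done = concurrent.futures.wait(futures, timeout=deadline)
    # Don't block on servers that missed the deadline. (Python can't
    # forcibly kill the still-running threads; a real daemon would
    # need real connection timeouts as well.)
    pool.shutdown(wait=False)
    return {futures[f]: f.result() for f in done}
```

The caller could render the answers it got and mark the remaining servers as "still searching" or "timed out".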
On Mon, 2004-01-05 at 16:27, MJ Ray wrote:
On 2004-01-05 15:37:24 +0000 paul POULAIN <paul.poulain@free.fr> wrote:
With the daemon, it's asynchronous, so you can have answers in 2 minutes. That's the main reason for the daemon.
How many Z39.50 servers do most people use? Is it impossible to write an asynchronous CGI? These are design questions, so please reply to -devel if you want.
I use three enabled by default, and up to 6 for difficult queries.
if you set your webserver timeout low (something like 10 seconds), the daemon becomes mandatory.
If you set your timeout that low, I think you are likely to have worse problems unless you have a fast machine. The CGI could just return whatever answers come back in <10 seconds. Is it reasonable to use the daemon by default just to give better support to odd setups?
I'm very happy with the daemon approach and I'd be very unhappy with having to wait for a response to each query individually. I normally type in queries for 5-10 books at a time, and then go back and enter the returned values. This works very well for me. Of course, I'm only doing at most 25 books per day, although I would have thought that someone doing even more books would use a similar method. I do find that if I haven't had a result back in 2 minutes or so from a particular query, then it is unlikely that I'm going to get an answer back at all. Just my two penn'orth. Nigel -- Nigel Titley <nigel@titley.com>
On 2004-01-05 20:44:11 +0000 Nigel Titley <nigel@titley.com> wrote:
I'm very happy with the daemon approach and I'd be very unhappy with having to wait for a response for each query individually.
Do you not encounter the "requesting forever" problem? Wouldn't having each request in its own browser tab be as easy, or easier?
I do find that if I haven't had a result back in 2 minutes or so from a particular query, then it is unlikely that I'm going to get an answer back at all.
Interesting. Do others find this too?
On Tue, 2004-01-06 at 12:18, MJ Ray wrote:
On 2004-01-05 20:44:11 +0000 Nigel Titley <nigel@titley.com> wrote:
I'm very happy with the daemon approach and I'd be very unhappy with having to wait for a response for each query individually.
Do you not encounter the "requesting forever" problem? Wouldn't having each request in its own browser tab be as easy, or easier?
I don't seem to get the "requesting forever" problem. All my searches eventually terminate, although it may be many minutes. This could be because over time I have selected only those sources which are fast and reliable. I wouldn't want to have to kick off several searches individually.
I do find that if I haven't had a result back in 2 minutes or so from a particular query, then it is unlikely that I'm going to get an answer back at all.
Interesting. Do others find this too?
Again, it may be due to selection of fast, reliable sources.
MJ Ray wrote:
On 2004-01-05 15:37:24 +0000 paul POULAIN <paul.poulain@free.fr> wrote:
With the daemon, it's asynchronous, so you can have answers in 2 minutes. That's the main reason for the daemon.
How many Z39.50 servers do most people use? Is it impossible to write an asynchronous CGI? These are design questions, so please reply to -devel if you want.
The z3950 client is used for 2 different things:
* Quick cataloguing: the feature currently used in Koha.
* "True multi-catalogue search": the z3950 standard is/should be first used to find a given book in more than one catalogue. Koha doesn't handle this feature yet. But when it does, the asynchronous daemon will become more and more important: some libraries have 10+ links with other libraries. Thus, if you search for "The theory of mathematical structures", which is very rare (and may not exist, I just invented it :-D ), you search in 10+ libraries and may get only 1 positive result.
if you set your webserver timeout low (something like 10 seconds), the daemon becomes mandatory.
If you set your timeout that low, I think you are likely to have worse problems unless you have a fast machine. The CGI could just return whatever answers come back in <10 seconds. Is it reasonable to use the daemon by default just to give better support to odd setups?
If you do so, it means you tell the CGI what the Apache timeout is. Quite dirty, imho. Anyway, I agree that the daemon must be improved.
participants (4)
- Edward M. Corrado
- MJ Ray
- Nigel Titley
- paul POULAIN