Take a look at the bug: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11096 and you will understand why you need to upgrade :-D

On Thu., 22 Aug. 2019 at 22:51, Tomas Cohen Arazi (<tomascohen@gmail.com>) wrote:
Back in 2013, in Reno, we found that Zebra was picky about record sizes when using the DOM filter and sending the results in USMARC.
On Thu., 22 August 2019 at 20:31, Paul A <paul.a@navalmarinearchive.com> wrote:
Arturo -- Many thanks (and Katrin, thanks for your email.)
We have also divided serials up into decades (or other period lengths) where we know that the size will be problematic. Here, one of our cataloguers added additional chronologically missing items to an existing biblio -- and there was no warning, no log entries. The only inkling that something was wrong (apart from the record not being in Zebra) was that many/most search pages brought up a "No title" entry that was completely blank. Finding the biblionumber and the itemnumbers (via a direct MySQL query) still did not allow staff to do anything.
So finally (do NOT do this if you are faint of heart in MySQL), I fixed it all directly in MySQL, along the lines of:
-- Create two new biblios first (one for issues up to 1984, one for 1985
-- onwards), then move the items. Note that enumchron is a plain string
-- column, so compare its leading year rather than using a '%' wildcard
-- with <= / >= (the '%' is only a wildcard with LIKE):
UPDATE items SET biblionumber = 42382, biblioitemnumber = 42382
  WHERE itemnumber >= 45000 AND LEFT(enumchron, 4) <= '1984' AND biblionumber = 34612;
UPDATE items SET biblionumber = 42381, biblioitemnumber = 42381
  WHERE itemnumber >= 45000 AND LEFT(enumchron, 4) >= '1985' AND biblionumber = 34612;
-- String values need quoting:
UPDATE items SET itemcallnumber = 'MEGN-MART-7' WHERE biblionumber = 42381;
DELETE FROM biblio WHERE biblionumber = 34612;
DELETE FROM biblioitems WHERE biblionumber = 34612;
-- Then re-index biblios:
./bin/migration_tools/rebuild_zebra.pl -b -r -v -x
It took about 15 minutes and worked well. (And for Katrin: we're still using 3.8.24. I know... but we're not a lending library -- things would be very different if we were -- and every year we try the "latest stable" but fail to get staff and OPAC up to the same speed as "old reliable.")
Again, my thanks to you both for replying, Paul
On 2019-08-22 11:45 a.m., Arturo Longoria wrote:
Hi Paul,
Our library also ran into this issue with really large bib records for long-running periodicals. This was back in 2017, and this bug was filed where the issue was discussed: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15399
If I remember correctly, there may be ways to fine-tune your Koha settings to allow indexing records larger than the ~1 MB size limit, but doing so breaks the ability to run your Koha as a Z39.50 server. The discussion in the bug I linked to explains this in more detail.
So, for our library, since we are set up to use Z39.50, it was a no-go.
We opted instead to break up our huge bib records into more manageable records that don't bump up against the size limit. So, our long-running periodical records are now broken up by decades/time periods and we use the MARC 780/785 fields to indicate the previous/next record in the series, like so: https://catalog.sll.texas.gov/cgi-bin/koha/opac-detail.pl?biblionumber=385
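For anyone unfamiliar with those linking fields, here is a minimal sketch of what the 780 (preceding entry) and 785 (succeeding entry) fields look like in a decade-split record; the title and record control numbers below are made up for illustration, not taken from the catalog record linked above:

```text
780 00 $t Hypothetical Law Journal (1975-1984) $w 42382
785 00 $t Hypothetical Law Journal (1995-2004) $w 42384
```

With indicator values "00", the OPAC can label these as "Continues" and "Continued by", and subfield $w carries the control number of the linked record.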
I hope this helps.
Arturo Longoria Reference Librarian/Web Manager Texas State Law Library www.sll.texas.gov
-----Original Message----- From: Koha <koha-bounces@lists.katipo.co.nz> On Behalf Of Paul A Sent: Thursday, August 22, 2019 10:21 To: koha@lists.katipo.co.nz Subject: [Koha] Koha/Zebra record lengths
We've lost a record (opac and staff) biblio + 422 items. It's there in
all its glory in MySQL, but Zebra won't show it. The Koha wiki mentions:
"Record length of 101459 is larger than the MARC spec allows (99999 bytes) -- This will show up if you are trying to index a record that
has a large number of items (common with serials, for example), or just has a lot of text in the record itself. /.../ Koha can do the indexing by using the MARCXML format rather than ISO 2709, and this gets around the problem. If you add '-x' to the rebuild_zebra.pl command when indexing biblios, it will do this"
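[Editor's note: for context, the 99999-byte cap quoted above comes from the ISO 2709 leader, whose first five characters encode the total record length as decimal digits. A minimal sketch (the function name is my own, not a Koha API):

```python
# ISO 2709 (binary MARC) reserves leader positions 0-4 for the total
# record length, written as five decimal digits, so the largest
# encodable record is 99999 bytes. MARCXML has no such leader field,
# which is why indexing with '-x' sidesteps the limit.
def fits_in_marc_leader(record_length: int) -> bool:
    """Return True if the length can be encoded in the 5-digit leader."""
    return 0 <= record_length <= 99999

print(fits_in_marc_leader(99999))   # → True
print(fits_in_marc_leader(101459))  # → False (the size from the error above)
```
]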
We have always used '-x' for biblios (did it again a couple of minutes ago). No joy.
I can (I hope!) just delete the last item (or two or three) from MySQL
and reindex, but before I go there, does anyone have previous experience with this problem?
Many thanks --Paul _______________________________________________ Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha
-- Tomás Cohen Arazi Theke Solutions (http://theke.io) ✆ +54 9351 3513384 GPG: B2F3C15F