[Koha] Recovering from corrupt innodb tables

Chris Hobbs chobbs at nhusd.k12.ca.us
Sat Mar 12 09:04:43 NZDT 2011

  OK, all is not as wonderful as I had hoped - it turns out that mysql 
is smarter than I am. All of the relational integrity between the 
original biblio table (which I renamed to biblio_orig) and other tables 
was maintained. So, for example, the referential links between biblio 
and biblioitems are now between biblio_orig and biblioitems.

Any ideas on how I fix this cleanly, re-linking everything to my rebuilt 
biblio table?
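For reference, one common way to re-point the constraints is to drop each foreign key that followed the rename and recreate it against the rebuilt table. A minimal sketch for biblioitems, assuming a stock constraint name (the actual name should be verified with SHOW CREATE TABLE biblioitems, and the same step repeated for any other table now referencing biblio_orig):

```sql
-- Find the real constraint name first:
--   SHOW CREATE TABLE biblioitems;
-- "biblioitems_ibfk_1" below is an assumption, not verified
-- against this installation.
ALTER TABLE biblioitems DROP FOREIGN KEY biblioitems_ibfk_1;

-- Recreate the key so it references the rebuilt biblio table.
-- The ON DELETE/ON UPDATE actions should match whatever
-- SHOW CREATE TABLE reported for the original key.
ALTER TABLE biblioitems
  ADD CONSTRAINT biblioitems_ibfk_1
    FOREIGN KEY (biblionumber) REFERENCES biblio (biblionumber)
    ON DELETE CASCADE ON UPDATE CASCADE;
```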

Thanks in advance!


On 3/10/11 9:29 PM, Chris Hobbs wrote:
> Hi all,
> Just got done pulling off a (very) minor miracle, but thought I'd 
> document it in the hopes that someone else in the same boat finds it 
> useful.
> We've been having performance problems for over a week, and I finally 
> got around to checking the correct logs, and discovered that we had a 
> corrupt page in one of those lesser-used Koha tables, something called 
> biblio :-/
> Of course, since I'd let it slide for over a week, bringing biblio 
> back from backup would have meant lots of re-work for any cataloging 
> done since the corruption began.
> Because of the corrupt page, whenever a request was made against that 
> particular page, the mysql database would restart, dropping the 
> connections and causing general chaos. And thanks to Google's robot, 
> that corrupt page was getting hit pretty regularly as it turns out.
> Some googling gave me an idea of how to retrieve what I could from the 
> table: a binary search over biblionumber ranges to find the edges of 
> the corrupt page. I 
> ended up being able to recover all of the biblio records from 1-30227, 
> and from 30313-91859, leaving a gap of 85 missing biblios.
> I could have made our library technician (hi Katie) rebuild those 
> records by hand, but biblioitems has all of the required information 
> stored in the marcxml field. Thanks to prior questions being asked and 
> answered on the list, I was able to put together the following query 
> to recreate the missing records.
> insert into biblio
>   select biblionumber,
>     '' framework,
>     extractValue(marcxml, "record/datafield[@tag='100'][1]/subfield[@code='a']") author,
>     extractValue(marcxml, "record/datafield[@tag='245'][1]/subfield[@code='a']") title,
>     extractValue(marcxml, "record/datafield[@tag='440'][1]/subfield[@code='a']") REGEXP '[a-zA-Z]' series,
>     extractValue(marcxml, "record/datafield[@tag='440'][1]/subfield[@code='a']") series_title,
>     substring(extractValue(marcxml, "record/datafield[@tag='260'][1]/subfield[@code='c']"), 2, 4) copyright,
>     NULL,
>     '2010-07-23',
>     extractValue(marcxml, "record/datafield[@tag='520'][1]/subfield[@code='a']") abstract
>   from biblioitems
>   where biblionumber >= 30228 and biblionumber <= 30312;
> Hope somebody else finds this useful!
> -- 
> Chris Hobbs
> Director, Technology
> New Haven Unified School District
> -- This message has been checked by ESVA and is believed to be clean.
> _______________________________________________
> Koha mailing list  http://koha-community.org
> Koha at lists.katipo.co.nz
> http://lists.katipo.co.nz/mailman/listinfo/koha
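The binary search described above can be sketched as a probe query over halved biblionumber ranges: a scan of a clean range returns normally, while a scan that touches the corrupt page crashes mysqld, so repeatedly halving the range isolates the bad span. The queries below are a hypothetical illustration using the bounds from the post, not the exact commands used:

```sql
-- Probe one half of the current range; if mysqld crashes, the
-- corrupt page lies in this half, otherwise in the other half.
SELECT COUNT(*) FROM biblio
  WHERE biblionumber BETWEEN 1 AND 45929;

-- Once the bad span is isolated (30228-30312 here), dump the
-- readable ranges; output paths are illustrative.
SELECT * FROM biblio WHERE biblionumber <= 30227
  INTO OUTFILE '/tmp/biblio_low.txt';
SELECT * FROM biblio WHERE biblionumber >= 30313
  INTO OUTFILE '/tmp/biblio_high.txt';
```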

Chris Hobbs
Director, Technology
New Haven Unified School District
