paul POULAIN wrote:
The good solution is to enter the books in the "breeding farm". It's a place where biblios are stored while waiting for inclusion into the active DB. When you catalogue, if you choose "New biblio" (with MARC=on in parameters), you can enter a title or an ISBN to search the breeding farm before creating the biblio. The next screen shows the list of matching biblios already in your active DB (in case you forgot you already had it), and the list of corresponding biblios in your breeding farm. If you find what you need, click it, and the next screen is the Biblio Editor, pre-filled with the biblio imported from the breeding farm. There you can complete or modify the biblio, then create items.
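The lookup step described above (normalize the entered ISBN, search the active DB first, then fall back to the breeding farm before creating a new record) can be sketched roughly as below. This is a hypothetical illustration, not Koha code: the dictionaries stand in for what are SQL tables in Koha, and the function names are invented.

```python
def normalize_isbn(raw):
    """Strip hyphens/spaces and uppercase a trailing 'x' check digit."""
    return "".join(ch for ch in raw if ch.isalnum()).upper()

def find_biblio(isbn, active_db, breeding_farm):
    """Search the active DB first, then the breeding farm (reservoir).

    Returns a (source, record) pair, or (None, None) when the biblio
    has to be created from scratch.  Both stores are plain dicts here;
    in Koha they would be database tables.
    """
    key = normalize_isbn(isbn)
    if key in active_db:
        return ("active", active_db[key])        # already catalogued
    if key in breeding_farm:
        return ("breeding", breeding_farm[key])  # import into the editor
    return (None, None)

# Hypothetical sample data:
active = {"0596000278": {"title": "Programming Perl"}}
farm = {"0201835959": {"title": "The Mythical Man-Month"}}

print(find_biblio("0-201-83595-9", active, farm))
```

Searching the active DB before the reservoir matches the screen order Paul describes: duplicates in the live catalogue are surfaced before any import happens.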
That's the right way to use biblios from a z39.50 site, imho. That's how I did it for Dombe Abbey, with 40,000 biblios from BNC (Canada).
dear paul! I have been lurking for about half a year now, trying to get a feel for which solution to use for, e.g., a school library with 16,000+ items. The choices are ils (obviously dead), obiblio, Koha, and of course some local commercial beasts I hope to avoid because of their "closed shop" philosophy :-)

The metadata quality of the items/books in every candidate library is extremely bad, and none of them has the resources to do the COMPLETE work interactively within a reasonable time, since they cannot close down for months. On top of that, some major reorganization of topology/subjects/etc. is desired while making such changes. SO it is essential to prepare the data outside of, e.g., Koha and import it, or at least a *big* part of it, AUTOMATICALLY (I still have no clue how this works or could be done in Koha). Keep in mind the psychological benefit of doing so, too: people are happier to key in "just" half of the mass than all of it.

To prepare fuzzy data, Perl is imho the first choice, and z39.50 queries are certainly easier to handle than reprocessing the HTML output of the various OPACs hanging around.

I appreciate your and your colleagues' work very much, and as a Perl hacker I could contribute heavily, but it is VERY hard to do pure reverse engineering, because no architectural or other docs helpful for analyzing the beast are available besides the bare code. Also, CVS does not seem to hold the latest *common* effort, or am I wrong about that?

--- cu wolfgang
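Wolfgang's point about preparing "fuzzy data" before an automated import usually comes down to reducing each title to a crude matching key so near-duplicates can be detected. He mentions Perl; the sketch below shows the same idea in Python, purely as an illustration of one possible normalization (fold accents, lowercase, strip punctuation and a leading article), not anything Koha actually does.

```python
import re
import unicodedata

def normalize_title(title):
    """Reduce a title to a crude deduplication key: fold accents to
    ASCII, lowercase, drop punctuation, strip a leading article, and
    collapse whitespace.  Hypothetical pre-import cleanup."""
    t = unicodedata.normalize("NFKD", title)
    t = t.encode("ascii", "ignore").decode("ascii")  # drop accents
    t = t.lower()
    t = re.sub(r"[^a-z0-9 ]+", " ", t)               # punctuation -> space
    # Drop one leading English/French/German article.
    t = re.sub(r"^(the|a|an|le|la|les|der|die|das)\s+", "", t.strip())
    return re.sub(r"\s+", " ", t).strip()

print(normalize_title("The Émigrés: A History"))
```

Records whose keys collide can then be flagged for the "just half of the mass" of interactive review, instead of keying everything in by hand.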