Greetings!

>> Koha allows you to add an arbitrary number of z3950 sources and search them
>> individually or in arbitrary groups. But the type of "shotgun" approach you
>> describe is not currently supported by Koha.
>>
>> I suspect it is not the kind of workflow a professional cataloger would
>> undertake, mainly because there is no oversight of the quality or accuracy
>> of data injected into the database. Occasionally ISBNs do get reused, so
>> without other criteria or human review, you could conceivably import bogus
>> data. That being said, I like the idea of a mass import by ISBN from a text
>> file, breaking it apart into two results:
>>
>> a group of staged records where the ISBN is confidently matched, and
>> a text file with the subset of unmatched or ambiguous ISBNs

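For what it's worth, here's roughly how I picture that split working, sketched in Python for brevity. The z3950_search() helper below is a hypothetical stand-in for whatever Z39.50 client actually does the lookup, not a real Koha API:

    def z3950_search(isbn):
        """Hypothetical stand-in: query the configured Z39.50 targets
        and return a list of raw MARC records matching the ISBN."""
        raise NotImplementedError("wire this up to a real Z39.50 client")

    def split_isbn_batch(isbn_path, staged_path, leftovers_path):
        """Read ISBNs one per line; stage confident matches, and write
        unmatched or ambiguous ISBNs back out for human review."""
        with open(isbn_path) as f:
            isbns = [line.strip() for line in f if line.strip()]
        staged, leftovers = [], []
        for isbn in isbns:
            hits = z3950_search(isbn)
            if len(hits) == 1:
                staged.append(hits[0])    # exactly one hit: confident match
            else:
                leftovers.append(isbn)    # no hit, or a reused/ambiguous ISBN
        with open(staged_path, "wb") as f:
            for record in staged:
                f.write(record)           # raw MARC, ready for the staging tool
        with open(leftovers_path, "w") as f:
            f.write("\n".join(leftovers) + "\n")
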
I'd love to see something go one better than this and check the record lengths. The caveat is that fields are sometimes unnecessarily repeated where a cataloguer has accidentally cut and pasted too often. This has always seemed like the sort of brute-force operation that a computer could perform with relative ease and a person could lend an eye to later, much like the authorities deduplication feature that eXC has worked out.
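
To make that concrete, here is one way to score records so that accidental copy-and-paste repeats don't win on raw length, shown on a simplified (tag, value) representation of a record rather than real MARC:

    def effective_length(fields):
        """Score a record by how much distinct content it carries.
        fields: list of (tag, value) pairs; duplicated pairs, e.g. from
        an over-enthusiastic cut and paste, are counted only once."""
        distinct = set(fields)
        return sum(len(tag) + len(value) for tag, value in distinct)

    def richest_record(candidates):
        """Pick the candidate record with the most distinct content,
        leaving the final call to a human reviewer."""
        return max(candidates, key=effective_length)
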
Two better would add ranking: both relevancy ranking and optional community feedback that ranks sources by library. For instance, it was very easy for me to tell staff to check MaineInfoNet first, since that consortium had a lot of very high quality records of local interest, so there was high recall and high precision.
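
A sketch of how that per-library feedback could combine with relevancy; the weight table and its numbers are invented for illustration:

    # Invented weights for illustration: a consortium known for
    # high-quality records of local interest gets boosted.
    LIBRARY_WEIGHT = {
        "MaineInfoNet": 2.0,
    }
    DEFAULT_WEIGHT = 1.0

    def rank_hits(hits):
        """Order candidate records by relevancy scaled by the searching
        library's trust in each source.
        hits: list of (source_library, relevancy, record) tuples."""
        def score(hit):
            source, relevancy, _record = hit
            return relevancy * LIBRARY_WEIGHT.get(source, DEFAULT_WEIGHT)
        return sorted(hits, key=score, reverse=True)
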
Cheers,
Brooke