Re: [Koha] ElasticSearch Not Running
Hi Peter,

On the server itself, if you curl the ES server, do you get a response?

  curl -XGET http://es:9200/

If the server is there, you can run compare_es_to_db.pl to identify missing records; sometimes they are records with problems in the MARC:

  /usr/share/koha/bin/maintenance/compare_es_to_db.pl

You can also check that the indexer is running:

  sudo koha-es-indexer --status kohadev

You should email the general list so that you can get a broader range of answers as well.

-Nick

On Sat, Sep 7, 2024 at 5:55 PM Peter Kinyua <peterkmwara@gmail.com> wrote:
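For illustration only: the heart of what compare_es_to_db.pl reports can be pictured as a set difference between the biblionumbers in the database and the document ids in the Elasticsearch index. This is a minimal sketch in plain Python, not the script's actual code, and the sample ids are invented:

```python
def find_missing(db_ids, index_ids):
    """Return ids present in the database but absent from the search index."""
    return sorted(set(db_ids) - set(index_ids))

# Invented sample data: six biblionumbers in the biblio table,
# only four documents actually present in the index.
db_ids = [1, 2, 3, 4, 5, 6]
index_ids = [1, 2, 4, 5]
print(find_missing(db_ids, index_ids))  # [3, 6]
```

Records that show up in this difference are the ones to inspect for MARC problems and to reindex individually.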
Greetings Nick,

My name is Peter Kinyua, working as a Librarian for a private university in Kenya. I recently upgraded Koha from version 22 to 24.05.01 and finally to 24.05.03. After switching from Zebra to Elasticsearch I was able to search books. However, the server information page indicates Elasticsearch is not running, while the system information page indicates "Records are not indexed in Elasticsearch - 6 record(s) missing on a total of 5837 in index koha_teau_library_biblios." How can I solve this issue? Please advise.
Regards,
Peter
-- Nick Clemens ByWater Solutions bywatersolutions.com Phone: (888) 900-8944 Pronouns: (he/him/his) Timezone: Eastern
I have a few questions, and the first one is, "How can I re-create an intranet user account?" A few days ago I zealously deleted almost everything in my implementation's MySQL database. In doing so, I not only deleted all of my bibliographic and authority records, but I also deleted my cool home screen messages. Just as importantly, I am no longer able to log into the Koha intranet at <instance>.org:8080. I believe my account and the account named koha_catalog were deleted in the process. How can I recreate a backend user without having access to the... backend? -- Eric Morgan
Hi Eric You wrote:
I have a few questions, and the first one is, "How can I re-create an intranet user account?"
A few days ago I zealously deleted almost everything in my implementation's MySQL database. In doing so, I not only deleted all of my bibliographic and authority records, but I also deleted my cool home screen messages. Just as importantly, I am no longer able to log into the Koha intranet at <instance>.org:8080. I believe my account and the account named koha_catalog were deleted in the process.
How can I recreate a backend user without having access to the... backend?
To me this sounds like you had better restore your database from the backup that you hopefully made before changing or deleting things... However, if you really just need to create new users, these scripts might help you (here xxx is your instance name):

  $ sudo koha-shell -c '/usr/share/koha/bin/admin/set_password.pl --help' xxx
  $ sudo koha-shell -c '/usr/share/koha/bin/devel/create_superlibrarian.pl --help' xxx

Hope this helps.

Best wishes: Michael

-- Managing Director · Certified Librarian BBS, IT Specialist with Swiss Federal Certificate Admin Kuhn GmbH · Pappelstrasse 20 · 4123 Allschwil · Switzerland T 0041 (0)61 261 55 61 · E mik@adminkuhn.ch · W www.adminkuhn.ch
What are some of the best practices for Zebra indexing and re-indexing of MARC records; ought my MARC records include unique identifiers in some 9xx field?

I am in the process of curating about 0.7 million MARC records, putting them into Koha, and providing access to them via both the traditional catalogue as well as the Search/Retrieve via URL (SRU) interface. I am in a constant process of improving the records in one way or another. Adding date values. Adding subject headings. Adding content notes. Removing duplicates. Etc.

After creating an improved set of records, I have been zealously deleting bibliographic records using the command line, but this process also deletes things I don't want to be deleted. See: https://bit.ly/3XkMeKV

I know I can use bulkmarcimport.pl to delete records, but the process is very slow, especially when I want to delete hundreds of thousands of items.

A few days ago I learned about the koha-rebuild-zebra command, and I believe I saw something about Zebra identifiers in 9xx fields flashing by on the screen. Maybe, if I put identifiers in a 9xx field, I can re-index things more quickly? If so, then how?

Maybe, if my records have magic 9xx fields, then, when I use bulkmarcimport.pl to import things, Zebra will really overwrite my existing records? That would be nice.

After I create a new set of improved MARC records, how can I efficiently reindex them without deleting them from the MySQL database? -- Eric Morgan
Hi Eric,

the search with Zebra and Elasticsearch only works when your records include a unique identifier that links the record in the index with the record in your database. This is achieved by adding the biblionumber to the MARC record automatically. For MARC21, field 999 is used. These fields and mappings should not be changed. If you import records, the biblionumber will automatically be added. If you want to carry over an identifier from your old system to Koha, in MARC21 you could use 035$a with a prefix, or 001/003.

You can't speed up the indexing process by adding anything to your MARC data. In general, indexing with Elasticsearch will be much quicker than with Zebra for this number of records. You can always do another full reindex without deleting. But if you load new improved records, you will need to reindex them again.

Hope that helps,

Katrin

On 09.09.24 20:11, Eric Lease Morgan wrote:
What are some of the best practices for Zebra indexing and re-indexing of MARC records; ought my MARC records include unique identifiers in some 9xx field?
I am in the process of curating about 0.7 million MARC records, putting them into Koha, and providing access to them via both the traditional catalogue as well as the Search/Retrieve via URL (SRU) interface. I am in a constant process of improving the records in one way or another. Adding date values. Adding subject headings. Adding content notes. Removing duplicates. Etc.
After creating an improved set of records, I have been zealously deleting bibliographic records using the command line, but this process also deletes things I don't want to be deleted. See: https://bit.ly/3XkMeKV
I know I can use bulkmarcimport.pl to delete records, but the process is very slow, especially when I want to delete hundreds of thousands of items.
A few days ago I learned about the koha-rebuild-zebra command, and I believe I saw something about Zebra identifiers in 9xx fields flashing by on the screen. Maybe, if I put identifiers in a 9xx field, I can re-index things more quickly? If so, then how?
Maybe, if my records have magic 9xx fields, then, when I use bulkmarcimport.pl to import things, Zebra will really overwrite my existing records? That would be nice.
After I create a new set of improved MARC records, how can I efficiently reindex them without deleting them from the MySQL database?
-- Eric Morgan
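As an aside on Katrin's point above: in a MARC21 Koha installation the biblionumber lives in 999$c (with the biblioitemnumber in 999$d), and that subfield is the link between a database row and its index document. Here is a minimal plain-Python sketch of reading that subfield; the dict layout is invented for illustration and is not a real MARC library:

```python
def get_biblionumber(record):
    """Pull the biblionumber out of 999$c, or return None if absent."""
    for tag, subfields in record.get("fields", []):
        if tag == "999":
            return subfields.get("c")
    return None

# Invented toy record: (tag, subfields) pairs standing in for MARC fields.
record = {
    "fields": [
        ("245", {"a": "Example title"}),
        ("999", {"c": "1234", "d": "1234"}),  # c = biblionumber, d = biblioitemnumber
    ]
}
print(get_biblionumber(record))  # 1234
```

This is why the 999 field and its mappings must not be edited or stripped during record cleanup: without 999$c, Koha cannot match an incoming record to the row it should overwrite.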
_______________________________________________
Koha mailing list http://koha-community.org Koha@lists.katipo.co.nz Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
participants (4)
- Eric Lease Morgan
- Katrin Fischer
- Michael Kuhn
- Nick Clemens