During the next four years the Digital Development Team will join the Application project, a temporary organization whose mission is to provide systems, tools and services for the new city library of Oslo due to open in 2018.
The main task will be to provide modern tools for the library staff that make it easier to disseminate and mediate content, both inside the library building and on the Internet. The library system and its related tools have been identified as the core of this work.
From ILS to LS.XT
Oslo Public Library has chosen the free and open Koha Integrated Library System as a basis for our new library system. This is an ILS, but we see Koha as one of several modules that will form our future library system. We aim to take the I completely out of ILS, and our working title is LS.XT – Library system Extended.
We do not see our library system as a single stand-alone component that will do «everything». First and foremost we want to use Koha for what it is good at – circulation. Koha comes with good APIs out of the box and the beauty of open source is that we can expand and contribute to create richer APIs that will serve our needs.
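As a small, hedged illustration of what building on Koha's APIs can look like, the sketch below queries a Koha installation through the DLF ILS-DI web service interface it ships with. The base URL is a placeholder, and the services we end up extending or adding may well look different.

```javascript
// Minimal sketch: ask a Koha installation whether a title is available,
// using the DLF ILS-DI web service interface that Koha ships with.
// The base URL is a placeholder; requires Node 18+ (built-in fetch).
const KOHA_BASE = 'https://koha.example.org';

async function getAvailability(biblionumber) {
  const url = `${KOHA_BASE}/cgi-bin/koha/ilsdi.pl` +
    `?service=GetAvailability&id=${biblionumber}&id_type=biblio`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Koha responded with HTTP ${res.status}`);
  return res.text(); // a small XML document describing item availability
}

getAvailability(1234).then(xml => console.log(xml));
```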
Our catalogue data will be stored as RDF linked data, a rich metadata model better suited to our future needs than the library-specific MARC format. RDF enables us to use the same metadata format for our physical collection and our digital content, and it gives us a good foundation for search, presentation and integration with other content.
While using RDF as our primary format, we still have to provide a MARC representation of our data. That means converting our RDF records into slim MARC records in Koha, mainly for circulation and for exchanging data with other libraries. This is a small paradox: our RDF records will be free and open, but in order for other libraries to actually be able to use our data, we have to convert it and lock it into a format that is nearly 50 years old.
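To make the idea of a slim MARC record a bit more concrete, here is a rough sketch of such a conversion. The predicates, MARC fields and values are illustrative assumptions, not our actual mapping rules.

```javascript
// Rough sketch of an RDF-to-slim-MARC conversion: pick a few properties from
// an RDF description and emit a minimal MARCXML record for Koha.
// Predicates, field/subfield choices and values are illustrative only.
const resource = {
  'dcterms:identifier': '978-82-00-00000-0',   // placeholder ISBN
  'dcterms:creator': 'Hamsun, Knut',
  'dcterms:title': 'Sult'
};

function toSlimMarcXml(r) {
  const esc = s => s.replace(/&/g, '&amp;').replace(/</g, '&lt;');
  const datafield = (tag, ind1, ind2, code, value) =>
    `  <datafield tag="${tag}" ind1="${ind1}" ind2="${ind2}">\n` +
    `    <subfield code="${code}">${esc(value)}</subfield>\n` +
    `  </datafield>`;
  return [
    '<record xmlns="http://www.loc.gov/MARC21/slim">',
    datafield('020', ' ', ' ', 'a', r['dcterms:identifier']), // ISBN
    datafield('100', '1', ' ', 'a', r['dcterms:creator']),    // main author
    datafield('245', '1', '0', 'a', r['dcterms:title']),      // title
    '</record>'
  ].join('\n');
}

console.log(toSlimMarcXml(resource));
```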
We hope someone shares our dream of a full-fledged RDF library system. Please get in touch if you do! :)
RDF opens up a new world of possibilities for connecting our metadata with data from other relevant resources. We will pursue data harvesting from other sources, which means we can add value to our core content, as well as harvesting of basic bibliographic data to facilitate the cataloguing process.
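As a sketch of what such harvesting could look like in practice, the snippet below asks an external SPARQL endpoint for extra information about an author already in the catalogue. DBpedia is used here purely as an example source; the actual sources and queries are yet to be decided.

```javascript
// Illustrative sketch: enrich a catalogue record with data harvested from an
// external SPARQL endpoint. DBpedia is used here only as an example source.
const ENDPOINT = 'https://dbpedia.org/sparql';

async function fetchAbstract(authorUri) {
  const query = `
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <${authorUri}> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    } LIMIT 1`;
  const res = await fetch(`${ENDPOINT}?query=${encodeURIComponent(query)}`, {
    headers: { Accept: 'application/sparql-results+json' }
  });
  const data = await res.json();
  return data.results.bindings[0]?.abstract.value;
}

fetchAbstract('http://dbpedia.org/resource/Knut_Hamsun').then(console.log);
```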
A viable system architecture
One of the challenges is to construct a systems architecture that is stable, durable and modular – an architecture that will last for years and withstand the phasing out of old, outdated and rarely used services and tools, while being flexible enough to support the development of new and more modern services on new platforms.
We don’t know all that much about which gadgets and interfaces we need to create services for in the future, but we know it will be something different than what we have today.
Planning for this is conquering the future! :)
This sounds like a great direction!
At the National Library of Sweden (KB), we’re deep into the process of replacing our current, MARC-based library cataloguing system with a new infrastructure based on RDF (including the defining, finding and linking of entities that this entails).
We have given some presentations on the subject.
In Swedish:
* http://librisbloggen.kb.se/2014/05/28/ny-katalog-nytt-format-pionjararbetet-med-libris-xl/
* http://librisbloggen.kb.se/2014/02/07/film-om-katalogprojektet/
In English:
From ELAG 2014:
* http://elag2014.org/programme/elag-workshops-list-page/11-3/ (abstract)
* http://goo.gl/EQWxrZ (slides)
* (A video of this presentation should be available soon.)
From ELAG 2013:
* https://www.youtube.com/watch?v=0o6ZgkAV5Pw&feature=youtu.be (video)
All of this is done as (and based on) open source, available at:
* https://github.com/libris/librisxl (infrastructure)
* https://github.com/libris/kitin (cataloguing tool)
A beta of the system is available for testing at:
* http://devkat.libris.kb.se/
Don’t hesitate to contact us for more information!
Hi Niklas!
Thank you for your kind words and all your links and information!
We have seen your presentation given at the ELAG conference this year and we have read about your Libris XL project with great interest.
It is very cool that two small neighbouring Scandinavian countries have chosen to do much the same!
KB's linked data work has been an inspiration for us, and it would be interesting to meet up and discuss our slightly different approaches to reaching the same goal – a workable linked data catalogue.
Maybe we could meet for a joint workshop this autumn/winter? :)
Hi!
At the Special Collections at UiB we are trying to plan for a post-Protégé cataloguing system. We have just started using Protégé and it works quite well, but support for the server-client setup has ended.
Your plans, as well as Kitin, look great. We would be interested in testing whether Armillaria could fit our needs too. I really do not see why an RDF-based tool shouldn't.
http://marcus.uib.no
Hi Terje!
Nothing would make us happier than more users of this system :) – either in parts or as a whole.
The RDF cataloguing interface is based around user-defined «profiles», which define which fields you want in a form, which RDF predicates they correspond to, how to interact with external sources, relations to other resources and so on. The software should be fairly agnostic to what kind of data you are modelling, and not tied specifically to public libraries, even though that's what we are working with now.
It's still at a very early stage though, and the software is probably too much in flux to try out, except for those who don't mind spending some time fiddling and figuring things out, and keeping up with changing APIs.
The profiles are written in JavaScript and will require some technical skill. There will probably not be much documentation of the format until it has become stable, but I'll go over the build instructions on GitHub to make sure it's possible to get the system up and running easily. The repo includes example profiles for «person», «place», «manifestation» and a few others.
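To give a feel for the idea, here is a purely hypothetical sketch of what a profile might express. It is not the actual profile format – please look at the example profiles in the repo for the real thing.

```javascript
// Purely hypothetical sketch of the profile idea: form fields mapped to RDF
// predicates, plus hints about relations and external lookup sources.
// This is NOT the actual profile format; see the example profiles in the repo.
var personProfile = {
  label: 'person',
  fields: [
    { id: 'name',       predicate: 'foaf:name',     required: true },
    { id: 'birthYear',  predicate: 'dbo:birthYear', datatype: 'xsd:gYear' },
    // a relation pointing to another resource type in the store
    { id: 'birthPlace', predicate: 'dbo:birthPlace', searchTypes: ['place'] }
  ],
  // endpoints to query for candidate matches while cataloguing (placeholder)
  externalSources: ['http://data.example.org/sparql']
};
```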
If you decide to try it out already, feel free to ask questions here, by mail, or by opening an issue on GitHub. Any feedback would certainly be appreciated.
Hi Petter!
Great – we trust you not to make this into an overly library-focused application. We will look into it during the autumn.
But, will it be closely tied to Virtuoso or will other triplestores work?
No, any RDF store with SPARQL 1.1 support should work, but we are only testing against Virtuoso atm. We don’t make use of any vendor-specific features, as far as I can tell.
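Roughly speaking, all interaction with the store goes through the standard SPARQL 1.1 protocol, something like the sketch below (the endpoint URL is a placeholder, and this is a simplified illustration rather than our actual code):

```javascript
// Simplified illustration of store-agnostic access: a plain HTTP POST to a
// SPARQL 1.1 Update endpoint, with nothing vendor-specific involved.
// The endpoint URL is a placeholder.
const UPDATE_ENDPOINT = 'https://triplestore.example.org/sparql';

async function insertTriple(subject, predicate, object) {
  const update =
    `INSERT DATA { GRAPH <http://example.org/catalogue> { ${subject} ${predicate} ${object} . } }`;
  const res = await fetch(UPDATE_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/sparql-update' },
    body: update
  });
  if (!res.ok) throw new Error(`SPARQL update failed: HTTP ${res.status}`);
}

insertTriple(
  '<http://example.org/work/1>',
  '<http://purl.org/dc/terms/title>',
  '"Sult"@no'
);
```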