Working with Koha at Oslo Public Library

About one year ago, Deichmanske bibliotek — Oslo Public Library — made the decision to build its new library system on open source software, choosing Koha as the major component. This sparked both excitement and controversy, and now that the migration project is well underway, it is also drawing significant interest from both domestic and international libraries. This post is an attempt to bring our friends up to speed on what we are trying to do, what we've done so far, and which challenges and possibilities we see going forward.

Before we start looking at our goals and the ways we're trying to get there, we'd like to state the following clearly: choosing Koha and our proposed model for creating an up-to-date library system was a decision that had to be made at that specific time, given the constraints and timelines of the current building project in Oslo and all the strategic and organisational challenges that arose from it. Given these facts, it's not clear that Koha would be the best choice for everyone. Based on time alone, it's quite possible that a shorter deadline would have forced us to choose a commercial vendor, just as a longer timeline could have resulted in a joint effort with several other libraries to create something totally new. Either of these options should be carefully considered by any library looking to upgrade or replace its current systems.

Anyway, we have chosen to use Koha as the foundation for our future library system. We also believe that MARC is not a good data format for new and user-friendly content discovery and dissemination tools — for the last couple of years, we've been looking at RDF for that. This is really the key assumption behind our work: that we can combine an RDF store holding all relevant metadata with Koha as a subsystem that caters for patron management, circulation, interfacing with self-service machinery and the like. Now, this is mostly uncharted territory, and although we're still optimistic, we are not quite sure we have a complete solution for joining up all the dots. For that, we need at the very least a functional prototype, and we have not yet begun building one.
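To make the idea a little more concrete, here is a minimal sketch of how bibliographic metadata in an RDF store could point back to the Koha record that owns items and circulation. This is not our actual data model — the namespace, properties and identifiers are invented for illustration — and it assumes the Python rdflib library:

```python
# A sketch of the RDF-store-plus-Koha split: discovery queries run
# against triples like these, while holdings, items and circulation
# stay in Koha, reachable via a plain identifier property.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

DEICH = Namespace("http://data.deichman.no/")  # hypothetical namespace

g = Graph()
work = DEICH["work/w1"]
g.add((work, RDF.type, DEICH.Work))
g.add((work, RDFS.label, Literal("Sult", lang="no")))
# The bridge to Koha: a property pointing at the biblio record
# (the biblio number is made up for this example).
g.add((work, DEICH.kohaBiblioId, Literal(4242)))

print(g.serialize(format="turtle"))
```

The point of the sketch is the separation of concerns: the RDF side can be remodelled freely for discovery purposes, as long as the link to the Koha record is preserved.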

So what have we been doing since we decided to make the jump? First and foremost, we've acquired help. We decided early on that we wanted on-site expertise to help us realise our ambitions, and we chose to bring in external consultants (to minimise the risk of a complicated hire/fire process if things turned pear-shaped). Because of the public-tender process, this took us almost exactly six months.

During those six months, the existing team wrote proof-of-concept code for integrations with our current RFID equipment and with self-service machinery (based on SIP2), and started work on migration scripts and on integration between Koha and the national library card database. During this time, most business processes were documented and suggestions for improvement were written up.
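For those unfamiliar with SIP2, the rough shape of the protocol is simple: short field-delimited text messages over a socket, as defined in the 3M SIP2 specification. The sketch below sends a Patron Status Request; the host, port and identifiers are made up, and real deployments also add the sequence-number and checksum fields (AY/AZ), which are omitted here:

```python
# A rough illustration of the SIP2 wire format used by self-service
# machines. Not our production code; details are simplified.
import socket
from datetime import datetime

def patron_status_request(institution: str, patron_id: str) -> bytes:
    # 18-character transaction date: YYYYMMDD, 4 blanks, HHMMSS
    timestamp = datetime.now().strftime("%Y%m%d    %H%M%S")
    # "23" = Patron Status Request, "000" = language code (unknown);
    # AO = institution id, AA = patron id, AC/AD = passwords (empty).
    msg = f"23000{timestamp}AO{institution}|AA{patron_id}|AC|AD|\r"
    return msg.encode("ascii")

with socket.create_connection(("sip.example.org", 6001)) as conn:
    conn.sendall(patron_status_request("DEICH", "N000123456"))
    response = conn.recv(4096).decode("ascii")
    print(response)  # a "24" Patron Status Response, delimited by |
```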

Since we became a (more or less) complete team in August, we have prioritised getting accustomed to working within a test-driven framework. Most of the ways we go about things are based on “Growing Object-Oriented Software, Guided by Tests” by Freeman & Pryce. What this means in practice is a far broader topic than can be covered in this blog post, but a key element is that automated tests should cover all functional requirements, and that the tests should be written before any code.
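In miniature, "tests before code" looks something like the following. The example is invented (it is not from our codebase), and the imported module deliberately does not exist yet — the first test run fails, and that failure is what drives the implementation:

```python
# Written before any implementation: these tests pin down the
# behaviour we want from a hypothetical loan-period rule.
import pytest
from loans import loan_period_days  # does not exist yet, by design

def test_books_circulate_for_four_weeks():
    assert loan_period_days("book") == 28

def test_films_circulate_for_one_week():
    assert loan_period_days("film") == 7

def test_unknown_material_is_rejected():
    with pytest.raises(ValueError):
        loan_period_days("time machine")
```

Only once these are red do we write the minimal code that turns them green, then refactor with the tests as a safety net.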

A lot of work has also gone into creating the framework for our workflows, so that we can build our development, test and “production” environments in a fully automated way (so that making changes, and if need be reverting them, is as easy and therefore as cheap as possible). We have invested time in setting up the infrastructure needed to facilitate test-driven development and continuous delivery, including testing tools like Cucumber and WebDriver, provisioning tools like SaltStack and Docker, and tools for system monitoring, like Logstash/Kibana and Graphite. All the code for this can be found on GitHub. We've named the core project LS.ext, short for Library System Extended.

The majority of the work carried out so far has gone into creating a situation where work can be done, and done properly, within a framework that produces repeatable, tested results. As a consequence, the more innovative work of creating LS.ext — the parts related to RDF and APIs — is still to come.

While we welcome all the interest and input we've received, we feel it is a bit premature to start preaching about what we are trying to do. We are, however, looking for ways of sharing our experiences and lessons learned with several libraries, and we expect to learn a lot from that dialogue as well.

If you have any comments or questions, please let us know in the comments below!


-Arve

About Arve Søreide

Program Manager, Applications, Nye Deichman