What is Umlaut anyway?
Umlaut is software for libraries (you know, the kind with books) that deals with advertising services for specific known citations. It runs as a Ruby on Rails application, via an engine gem.
What is Umlaut?
Umlaut could be called an 'open source front-end for a link resolver'. Umlaut accepts requests in OpenURL format, commonly used in academic licensed databases, but has no knowledge base of its own; it can be used as a front-end for an existing knowledge base. (Currently SFX, but other plugins can be written.)
And that describes Umlaut's historical origin and one of its prime use cases. But in using and further developing Umlaut, I've come to realize that it has a more general purpose, as a new kind of infrastructural component.
Better, although a bit buzzword-laden:
Umlaut is a just-in-time aggregator of "last mile" specific citation services, taking input as OpenURL, and providing an HTML UI as well as an API suite for embedding Umlaut services in other applications.
(In truth, that's just a generalization of what your OpenURL link resolver does now, but considered from a different, more flexible vantage.)
Let's take some of those buzzwords apart to get at what Umlaut is:
Last Mile, Specific Citation
Umlaut is not concerned with the search/discovery part of user research. Umlaut's role begins when a particular item has been identified, with a citation in machine-accessible form (i.e., title, author, journal, page number, etc., all in separate elements).
Umlaut's role is to provide the user with services that apply to the item of interest: services provided by the hosting institution, licensed by the hosting institution, or free services the hosting institution wishes to advertise/recommend to its users.
Umlaut strives to supply links that take the user in as few clicks as possible to the service listed, without ever listing 'blind links' that you first have to click on to find out whether they are available. Umlaut pre-checks things when necessary so that it only lists services, with any needed contextual info, such that the user knows what they will get when they click. Save the time of the user.
One of the most important services for most environments is full text availability. Other services can include:
- local print/physical document delivery
- location/availability information for physical copies in the library
- 'search inside' the item, even if no full text available
- metadata about the item of interest ('cited by', similar items, abstract, cover image, etc)
Or anything else an Umlaut service plugin is written for. Umlaut uses service plugins to look up such services from multiple sources -- Umlaut has no knowledge base of its own. Umlaut provides a plugin architecture to make it easy to add additional sources of information. See Writing a Service Plugin.
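To illustrate the plugin idea in miniature (this is a hypothetical sketch, not Umlaut's real plugin API -- class names, method names, and URLs here are all made up): a plugin is essentially an object that takes a citation and returns zero or more service responses, and the aggregator just collects responses from every registered plugin.

```ruby
# Hypothetical sketch of the service-plugin idea -- NOT Umlaut's real API.
# A plugin takes a citation (a plain hash here) and returns service responses.

class CoverImagePlugin
  def handle(citation)
    return [] unless citation[:isbn]  # nothing to offer without an ISBN
    [{ type: :cover_image,
       url: "https://covers.example.org/#{citation[:isbn]}.jpg" }]
  end
end

class FullTextPlugin
  def handle(citation)
    # A real plugin would call an external API here; we fake a lookup table.
    known = { "9780316769488" => "https://fulltext.example.org/catcher" }
    url = known[citation[:isbn]]
    url ? [{ type: :fulltext, url: url }] : []
  end
end

# The aggregator collects the responses from every registered plugin.
def aggregate(citation, plugins)
  plugins.flat_map { |p| p.handle(citation) }
end

responses = aggregate({ isbn: "9780316769488" },
                      [CoverImagePlugin.new, FullTextPlugin.new])
```

Adding a new source of services then means writing one more small object like these, without touching the aggregator or the other plugins.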
Umlaut aggregates all these services in a single place, and can also embed all these aggregated services into other apps (see below). Umlaut rationalizes your architecture: a single application responsible for 'last mile' services.
Umlaut has a somewhat sophisticated (and unfortunately sometimes complex under the hood) architecture allowing services to be executed concurrently in waves, to try to minimize user wait time even when consulting many external sources, and to allow some results to be shown while other results are still coming in.
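The "waves" idea can be sketched with plain Ruby threads (a deliberately simplified illustration; Umlaut's real internals are more involved, and the service names and lambdas below are made up). Each wave runs its services in parallel; the results of a fast first wave can be rendered while slower waves are still running:

```ruby
# Simplified sketch of concurrent execution in waves -- illustration only.
# Each wave is a map of service name => a callable that fetches results.

def run_wave(services, citation)
  threads = services.map do |name, fetcher|
    Thread.new { [name, fetcher.call(citation)] }
  end
  threads.map(&:value).to_h   # wait for every service in this wave
end

citation = { issn: "1234-5678", title: "Some Article" }

wave_one = {                   # cheap/local lookups run first
  "link_resolver" => ->(c) { "fulltext link for #{c[:issn]}" },
}
wave_two = {                   # slower third-party lookups run afterwards
  "google_books"  => ->(c) { "preview for #{c[:title]}" },
  "amazon"        => ->(c) { "cover for #{c[:title]}" },
}

results = run_wave(wave_one, citation)   # could already be shown to the user
results.merge!(run_wave(wave_two, citation))
```

Within a wave the fetches overlap in time, so total wait is roughly the slowest service in each wave rather than the sum of all services.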
note: To really utilize Umlaut, you're going to want a service plugin for your OPAC/ILS with physical availability and document delivery. At present, however, Umlaut only comes with built-in services for a very limited set of OPAC/ILS products. You'll probably have to write one; see Writing a Service Plugin.
Just In Time
In the current library tech environment, in order to show all these services to a user, multiple sources of information likely need to be consulted.
A prime source of information is of course your local link resolver knowledge base product. However, many of us have information relevant to specific item services which is in the local 'catalog' database but not in the link resolver knowledge base.
There are other sources of information which are not local at all -- for instance, finding out if there is a full text copy in Google Books, Internet Archive, or HathiTrust; or if there is a limited preview from Amazon. You likely haven't loaded all this information about third party free resources in the local catalog (it's not entirely clear how to do so in a standard machine-accessible way), and you shouldn't need to in order to advertise them to users.
Umlaut has no 'knowledge base' of its own -- instead it calls out to other sources, using APIs, to determine service availability. That includes local knowledge bases (catalog, link resolver) and third party remote knowledge bases (Amazon, Google, etc). Umlaut gets this information 'just in time', 'on demand', as a request for a particular item of interest comes in.
An Umlaut service plugin can be written for any source that has a suitable API.
Umlaut takes its input (citations described in a machine-actionable way) as an OpenURL, specifically an OpenURL using one of the 'traditional' scholarly item formats (*:book, *:journal, and *:dissertation from the registered metadata formats).
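Concretely, a 1.0 KEV ('key/encoded-value') OpenURL carries the citation as ordinary query parameters, which any consumer can split back into separate elements. The `rft.*` keys below are from the registered journal metadata format; the article itself is made up:

```ruby
require 'cgi'

# A made-up OpenURL 1.0 KEV query string describing a journal article.
query = "url_ver=Z39.88-2004" \
        "&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal" \
        "&rft.atitle=Some+Article+Title" \
        "&rft.jtitle=Journal+of+Examples" \
        "&rft.volume=12&rft.spage=34&rft.issn=1234-5678"

# CGI.parse returns key => [values]; flatten the single-valued entries.
citation = CGI.parse(query).transform_values(&:first)

citation["rft.jtitle"]  # => "Journal of Examples"
citation["rft.spage"]   # => "34"
```

Every citation element (article title, journal title, volume, start page, ISSN) arrives as its own addressable field, which is exactly the machine-actionable form Umlaut needs.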
OpenURL is not necessarily a format I advocate in general, or am a huge fan of. As a standard, I think it ended up over-abstracted, over-engineered, and difficult to work with in its 1.0 incarnation.
In part, Umlaut uses OpenURL because of its legacy (and still important) use case as a front-end for an OpenURL link resolver knowledge base; using the same input schema for Umlaut as for the link resolver it will talk to lessens impedance.
But OpenURL also comes with a huge 'external' advantage -- the large installed base of platforms that support linking out to a local institutional OpenURL resolver...
Most libraries (at least academic), already have a whole bunch of third party sites generating OpenURL links pointing to a local institutional service. You can point all these to an Umlaut installation instead to make Umlaut the 'front door' to your library's services for items identified elsewhere.
This is exactly what Umlaut is for. (You can change the OpenURL base URL configuration at every service, or just change your DNS so the already registered URL points at Umlaut now.)
A properly set up and localized Umlaut will serve just as well as the 'front door' for monographs as it does for journal articles, and you may wish to point additional configurable third party platforms at it that weren't previously pointed at your link resolver (e.g. "ISBN" searches from LibX or WorldCat to Umlaut, etc.).
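Generating such a link is cheap for any platform: it just appends KEV citation parameters to whatever resolver base URL it has been configured with. A sketch, where the base URL and the citation are invented for illustration:

```ruby
require 'uri'

# Hypothetical Umlaut install acting as the institutional resolver base URL.
UMLAUT_BASE = "https://umlaut.example.edu/resolve"

def openurl_for(citation)
  params = { "url_ver" => "Z39.88-2004" }.merge(citation)
  "#{UMLAUT_BASE}?#{URI.encode_www_form(params)}"
end

# A monograph citation using the registered book metadata format keys.
link = openurl_for(
  "rft_val_fmt" => "info:ofi/fmt:kev:mtx:book",
  "rft.btitle"  => "Some Book Title",
  "rft.isbn"    => "9780316769488"
)
```

Because every generating platform only knows the base URL, repointing that one URL (in each platform's config, or in DNS) is all it takes to make Umlaut the destination for all of them at once.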
APIs, the other end of aggregation
Once you have Umlaut aggregating 'just in time' specific item services from multiple sources, you start to realize that there are other interfaces where known items with machine-actionable descriptions exist, and you might want to embed these aggregated services there.
Umlaut has a suite of APIs to make this possible. There is an XML/JSON Data API Response that delivers everything in an Umlaut response in machine-actionable format.
But what has proven more useful are Umlaut services that deliver HTML snippets: individual chunks of HTML representing sections of an Umlaut page. These complete HTML sections can then be placed onto a web page of another application.
There is an XML/JSON HTML Snippet API Response for these HTML snippets. But generally, what you'll use instead is Umlaut's JQuery Content Utility, which lets you place Umlaut content on a page in very few lines of JQuery, with hooks to allow custom transformation and placement logic. That's all you need to add Umlaut content to another app's item detail page.
This shows how Umlaut is a key infrastructural component, that keeps you from having to implement the same things on multiple platforms. With an Umlaut set up consuming many services and delivering to multiple platforms, adding a new service is something you do once in Umlaut, and it shows up on all platforms delivered to. Delivering to a new platform is something you do once, and it'll get all future services you add automatically too. Umlaut's role as a service aggregator really shines there.
I embed Umlaut services into item detail pages on my catalog and federated search product. Future ideas include a LibX-like browser add-on that might embed Umlaut service HTML even onto third party pages.
Try it out
(To be done, a screencast. Interested in seeing it? Let me know).
One easy way to see what Umlaut can do is to use Google Scholar, setting an Umlaut implementer institution as a "Library Links" preference. My own institution is "Johns Hopkins University" -- choose that as a preference, then search for things on Google Scholar, and click on the "Find It @ JH" link to be redirected to Umlaut. Try it for both article and monograph hits. Look for the more subtle 'Find It @ JH' link in the row of links underneath a hit, as well as the prominent right-column link.
You can also just go ahead and install Umlaut, point it at your SFX, possibly configure other services that require API keys, and see what you think: Installation
Why not just use SFX?
Some people who already have the SFX OpenURL link resolver product ask: okay, but why not just trick SFX out to do all these things? It's very customizable; you probably could.
Indeed, you probably could. Here is why I chose instead to decouple the functionality I wanted from SFX:
- Many of the functions I wanted would require 'plugins' on SFX. Writing plugins for SFX is a bit tricky, and not enjoyable to me.
- Plugin results are not generally available in SFX's own API, which rules out using SFX, as I do Umlaut, as a central aggregator of services distributed to multiple platforms.
- SFX does not have an architecture like Umlaut's for supporting multi-threaded, concurrent/background execution of fetches from third party sources, which I wanted in order to support more sources of info while keeping a responsive UI.
- If you tried to custom build extensive functionality into SFX, you are really tying yourself into the SFX platform.
- If Ex Libris replaces SFX with a future product, or a future version of SFX on a different architecture, all your work is going to need to be redone.
- If you decide to switch to a different link resolver, perhaps because a different product has a better knowledge base, all your work is going to have to be redone (and it may or may not be possible to do so, depending on the product).
- Instead, with the Umlaut approach, we have decoupled our front end service aggregator from our link resolver knowledge base. We still (at present) use SFX -- for its knowledge base, not its interface or any other value-added services SFX provides. We have an Umlaut service plugin for SFX, to consult its knowledge base. If we later switch to a different link resolver knowledge base provider (from Ex Libris or any vendor), we simply have to write a new service plugin (provided the new product offers a suitable API to make that possible). We don't need to rewrite any other service plugins, nor any code for embedding Umlaut services in other applications like our catalog, and our users should barely notice the change. We can evaluate our link resolver knowledge base options solely on the quality of the knowledge base (as well as price, performance, etc.), and pick best of breed there, without needing to worry about the native link resolver interface. At least that's the idea.
Of course, there are always trade-offs and downsides. The risk in choosing Umlaut here is the same as with many open source projects: you are running an application not supported by a vendor, you will need to host and maintain this application yourself, and you will need to count on a volunteer community to maintain and enhance the product, or have in-house capacity to do so. But keep in mind that if you added enough local custom code and templates to SFX to do what Umlaut does, you'd have a lot of non-vendor-supported code down that route, too.