BIBFRAME

Bibliographic Framework Initiative (Library of Congress)

The BIBFRAME Implementation Register lists BIBFRAME implementations that exist, are in development, or are planned, and is maintained here. Any organization implementing a BIBFRAME project or application can be listed; an organization with multiple BIBFRAME projects may have each listed as a separate entry. See the Implementation Register Guidelines.


Columbia University Libraries

  • Description: As part of the 2CUL Technical Services Initiative, a collaboration between Cornell and Columbia University Libraries, we are testing the BF vocabulary and data model by converting MARC and MODS records for all formats from Columbia’s broad collections. We will also assess available BIBFRAME tools and test the BIBFRAME Editor for the creation of new BIBFRAME descriptions. We will provide feedback on conversion issues encountered using the tools, the BF vocabulary, and the data model. The intent of Columbia’s participation is to contribute to vocabulary and tool development.
  • Implementation status: Formation of a team of staff members with rare book, serial, law, and non-MARC expertise. Training in progress. Record conversion in progress. (A progress report can be expected in early 2015.)

  • Contact: Melanie Wacker (mw2064@columbia.edu)

  • Added to Register: October 14, 2014

Princeton University Library

  • Application: NjP BIBFRAME Analysis, second phase

  • Description: Princeton is converting and reviewing existing MARC records with a focus on music material, rare books, and non-Roman-script records. Princeton will also test the BIBFRAME Editor for creation of new bibliographic data. The intent of Princeton’s experimentation will be to analyze the BIBFRAME vocabulary and model with regard to cataloging standards.
  • Implementation status: We tested individual and bulk record conversion and reported conversion issues to LC. We are working to establish a connection between the BF editor and a data store and will provide feedback. We are evaluating the possibility of using the editor for production work, rather than just testing, by creating original descriptive data for small projects. We have begun discussing use cases that could demonstrate the benefits of linked data for our users, and we are experimenting with enriching our bibliographic data with URIs in preparation for large-scale conversion.
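    One way to picture the URI-enrichment step described above is the minimal Python sketch below. It assumes the id.loc.gov "known label" lookup service and an exact-match heading; the heading value and error handling are illustrative only, not Princeton's actual workflow.

      # Illustrative sketch only: look up an id.loc.gov URI for an authorized
      # name heading so it can be added to a bibliographic record before
      # conversion. Relies on the id.loc.gov "known label" service, which
      # answers an exact label match with an X-URI header on the redirect.
      import requests
      from urllib.parse import quote

      def lookup_lc_name_uri(heading):
          url = "https://id.loc.gov/authorities/names/label/" + quote(heading)
          resp = requests.get(url, allow_redirects=False, timeout=10)
          return resp.headers.get("X-URI")  # None if there is no exact match

      if __name__ == "__main__":
          print(lookup_lc_name_uri("Twain, Mark, 1835-1910"))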

  • Contact: Joyce Bell (joyceb@princeton.edu)

  • Added to Register: May 20, 2014    Last Update: October 2, 2014

George Washington University

  • Description:
    1. Implement Editor input and indexing criteria.
    2. Test BF vocabulary and data modeling using GW collections.
    3. Incorporate controlled vocabularies and ontologies into the BF core data set.
    4. Establish practitioner work and best practices.
    5. Design a user interface.
    6. Explore additional enhancements beyond the library environment.
  • Activities to date:
    1. Training curriculum in progress.
    2. Formation of a team of four members, representing the three campus libraries, that will plan and direct BIBFRAME-related activities at GWU during the testing period.
    3. Creation of a test server on which to store project-related work products: http://gwbibframe.wrlc.org/
    4. Distribution of a survey to determine current cataloging, metadata, and technology skills and
      preferred learning styles of select staff and librarians; analysis of results.
    5. Establishment of a BIBFRAME-specific internal listserv as a means of communication for a
      large group of staff and librarians distributed across the three campus libraries.
    6. Establishment of an internal wiki on which BIBFRAME documentation, learning resources, and updates on local activities are shared.
    7. Discussion of other potential idea-sharing tools, such as IdeaScale, flow.io, and SharePoint.
  • Implementation Status: As of August 2014, GW has completed the following.
    • Converted a selected set of MARC data in a variety of formats, in particular records that contain subfield zero ($0) (see the sketch after this list).
    • Installed the BIBFRAME Editor (BFE) from GitHub and began using it to create individual records manually.
    • Began to capture best practices for the following classes: Work, Instance, Annotation.
    • Reviewed and critiqued the converted data for which URIs were generated.
    • Installed a Solr/Blacklight instance to experiment with possible BF data displays.
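    The subfield $0 selection mentioned above could be pictured with a minimal Python sketch like the one below, using pymarc. The input file name is illustrative, and the exact selection criteria GW used are not stated in this entry.

      # Illustrative sketch: keep only MARC records that carry at least one
      # $0 subfield (an authority identifier/URI), since those records are
      # the most likely to convert into well-linked BIBFRAME data.
      from pymarc import MARCReader

      def has_subfield_zero(record):
          for field in record.get_fields():
              if field.is_control_field():
                  continue
              if field.get_subfields("0"):
                  return True
          return False

      if __name__ == "__main__":
          kept = 0
          with open("gw_sample.mrc", "rb") as fh:   # hypothetical input file
              for record in MARCReader(fh):
                  if record is not None and has_subfield_zero(record):
                      kept += 1
          print(kept, "records contain subfield $0")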

  • Contact: Jackie Shieh

  • Added to Register: February 10, 2014    Last Update: September 30, 2014

German National Library

  • Application: BIBFRAME prototype in DNB OPAC

  • Description: The full-record display in the OPAC of the German National Library now offers an action called "BIBFRAME-Repräsentation dieses Datensatzes" ("BIBFRAME representation of this record"). Clicking the link converts the record from PICA+ into BIBFRAME (in RDF/XML) and offers it for download.
    Behind the link is a conversion process triggered "on the fly", based on a mapping from PICA+ to BIBFRAME.
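    A downloaded BIBFRAME representation can be inspected with a few lines of Python using rdflib. This is a minimal sketch; the local file name is illustrative, and the actual file would come from the OPAC action described above.

      # Illustrative sketch: parse one exported RDF/XML file and list the
      # typed resources (Work, Instance, Annotation, ...) it contains.
      from rdflib import Graph, RDF

      g = Graph()
      g.parse("dnb_record_bibframe.rdf", format="xml")  # hypothetical download

      for subject, rdf_type in g.subject_objects(RDF.type):
          print(subject, "->", rdf_type)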

  • Link: http://www.dnb.de/katalog
    Demo: http://de.slideshare.net/sollbruchstelle/2014-0126-bibframeheuvelmann

  • Status: This is a starting point for further developments, notably synchronization with the "Vocabulary 1.0" published by the Library of Congress in January 2014.

  • Contact: Reinhold Heuvelmann

  • Added to Register: February 6, 2014     Verified: October 1, 2014

Cornell University Library

  • Description: Beyond our use of BIBFRAME for LD4L (see also Stanford’s registry entry), Cornell is assessing the BIBFRAME Converter as a tool for manually creating original BIBFRAME data.
  • Contact: Steven Folsom (sf433@cornell.edu)

  • Implementation status: (A progress report can be expected around the New Year.)

  • Added to Register: June 24, 2014          Verified: October 2, 2014

Biblioteca Nacional de Cuba “José Martí” (BNJM)

  • Application: Retrospective conversion using BIBFRAME

  • Description: BNJM is working to implement BIBFRAME as part of a strategy to finish the retrospective conversion of its printed catalogs. The current online catalogs include bibliographic records created since 1998; earlier records remain in printed card catalogs. The library will also test the conversion of existing MARC records (from 1998 onward) to the BIBFRAME linked data model using the conversion made available by the Library of Congress (LC).
    We are digitizing the printed card catalogs and creating a virtual catalog of card images, or CIPAC (Card Images Public Access Catalog). In order to link the cards with the Web of Data, we are using a hybrid approach to annotate their data. The annotation combines OCR techniques, crowd and specialist annotation, the use of VIAF, ISNI, and other LOD sources, and quality reviews, translating the annotations to RDF. The creation of linked data is consistent with the BIBFRAME model.

    We encourage the idea of including catalog cards as a class of annotations of Work instances, similar to the Cover Art Annotation class in the BIBFRAME model. In fact, we have been modeling the Catalog Card class in a way similar to that of the Cover Art Annotation class, which could be a solution for this use case while respecting the standardization efforts of BIBFRAME (see the sketch following this description).

    We plan to publish a first set of BIBFRAME data for our holdings as part of the Linked Open Data space in the first quarter of 2015. We are very interested in testing interoperability with other libraries and agencies.
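    The catalog-card modeling described above can be sketched in Python with rdflib as follows. This is only an illustration: the bf:CatalogCard class is the proposed class discussed in this entry (not part of the published vocabulary), the property choices simply follow the BIBFRAME 1.0 Annotation pattern, and all URIs are placeholders rather than BNJM identifiers.

      # Illustrative sketch of a catalog-card annotation modeled on the
      # Cover Art pattern. bf:CatalogCard is a *proposed* class; the
      # example.org URIs are placeholders.
      from rdflib import Graph, Namespace, URIRef, Literal, RDF

      BF = Namespace("http://bibframe.org/vocab/")
      g = Graph()
      g.bind("bf", BF)

      card = URIRef("http://example.org/cipac/card/12345")
      instance = URIRef("http://example.org/instance/67890")

      g.add((card, RDF.type, BF.Annotation))
      g.add((card, RDF.type, BF.CatalogCard))      # proposed class
      g.add((card, BF.annotates, instance))
      g.add((card, BF.annotationBody,
             URIRef("http://example.org/cipac/card/12345.jpg")))
      g.add((card, BF.label,
             Literal("Card image, Cuban Collection, 19th century")))

      print(g.serialize(format="turtle"))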

  • Implementation status: We have already created the CIPAC for the printed card catalog of books in the nineteenth-century Cuban Collection and have begun the annotation process using the tool mentioned above. We would welcome the definition of the annotation class mentioned above in order to advance the creation of linked data using the BIBFRAME model.

  • Contact: Pedro Urra (pedro.urra@bnjm.cu, pedro.urra@infomed.sld.cu)

  • Added to Register: June 27, 2014

The National Library of Medicine

  • Application: NLM BIBFRAME Analysis, Phase 2

  • Description: NLM is preparing to initiate a two-pronged BIBFRAME Phase 2 implementation experiment:  one effort will focus on conversion of all NLM’s existing MARC Bibliographic records to the BIBFRAME model using the conversion made available by the Library of Congress (LC), and a second effort will use the BIBFRAME editor tool to create new bibliographic metadata in the BIBFRAME format.  The intent of NLM’s experimentation will be to analyze the BIBFRAME vocabulary and model in regard to cataloging standards and internal metadata practices.
    1. Conversion of NLM legacy data to BIBFRAME
      NLM will use the LC conversion program available on GitHub to convert all of its MARC records to BIBFRAME format.  NLM will implement a simple index and UI (possibly Solr/Blacklight) to facilitate review and analysis of the converted data, and make recommendations to LC regarding the vocabulary and the model based on that analysis (see the indexing sketch below).
    2. Experimentation with the BIBFRAME editor tool
      When it becomes available, NLM will install the BIBFRAME editor tool for use by a specified group of NLM cataloging staff to
      • evaluate the utility and completeness of the available BIBFRAME vocabulary and relationships by inputting bibliographic data;
      • experiment with finding and inserting linked data from internal and external sites (e.g., MeSH, LCNAF, VIAF);
      • assess the compatibility of the data output with RDA and the ramifications of those findings; and
      • assess the ability to re-use the output RDF/XML in NLM’s Digital Collections repository.
  • Implementation status: A test conversion of a sample set of legacy MARC records is in process. Implementation of a UI for search and display of the legacy metadata is being investigated.
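    The "simple index" idea in item 1 above could look roughly like the following Python sketch using pysolr. The Solr URL, core name, and field names are all illustrative and not NLM's actual configuration.

      # Illustrative sketch: flatten a few values from a converted BIBFRAME
      # record into a Solr document so staff can search and review it.
      import pysolr

      solr = pysolr.Solr("http://localhost:8983/solr/bibframe_review", timeout=10)

      docs = [{
          "id": "http://example.org/work/1",            # placeholder resource URI
          "type_s": "Work",
          "title_t": "Title taken from the converted record",
          "source_s": "marc2bibframe conversion",
      }]
      solr.add(docs)
      solr.commit()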

  • Contact: Nancy Fallgren (fallgrennj@mail.nih.gov)
  • Added to Register: April 9, 2014

Stanford University

  • Application: Linked Data for Libraries (LD4L)

  • Description: The goal of Linked Data for Libraries (LD4L) is to create a Scholarly Resource Semantic Information Store (SRSIS) model that works both within individual institutions and through a coordinated, extensible network of Linked Open Data to capture the intellectual value that librarians and other domain experts and scholars add to information resources when they describe, annotate, organize, select, and use those resources, together with the social value evident from patterns of usage.  LD4L plans to use BIBFRAME as the common model for all bibliographic data.  The initial focus will be on MARC data conversion but the conversion of other standards (e.g. MODS) will be a focus as well.  The three universities involved (Cornell, Harvard, Stanford) plan to develop a common data store and shared light abstraction layer.  The anticipated discovery layer will be Blacklight.

  • Link: https://wiki.duraspace.org/display/ld4l

  • Implementation status: Initial focus is on MARC record conversion and the development of an ontology for usage data. The group is conducting a large-scale test of the BIBFRAME converter, examining all MARC fields and formats, and reporting technical issues on GitHub and conceptual issues to the BIBFRAME testbed.

  • Contact: Philip Schreur, pschreur@stanford.edu

  • Added to Register: April 2, 2014

Colorado College

  • Application: TIGER Web Catalog & Flask-BIBFRAME Extension

  • Description: BIBFRAME is a critical Linked Data vocabulary used in the open-source project Flask-BIBFRAME (https://github.com/jermnelson/flask_bibframe), a component of the Cataloging Pull Platform. On February 27th, Colorado College is releasing a Minimum Viable Product (MVP) web catalog, called TIGER (source code available at https://github.com/jermnelson/tiger-catalog), that uses the Flask-BIBFRAME extension. TIGER builds Linked Data entities from source records in the college's MARC-based legacy ILS and Fedora Commons digital repository. The TIGER web catalog’s user interface is based on Aaron Schmidt’s design at http://www.walkingpaper.org/5979 and uses the following technologies: the Flask micro web framework for the web server and middleware; Solr for search; Redis for analytics, result caching, and messaging; and a MongoDB document store for JSON-LD representations of BIBFRAME entities using RDA, Schema.org, MARC, and other vocabularies.
    Other BIBFRAME experiments that preceded the development of the TIGER web catalog include the Summer 2013 Union Library discovery app, which uses an earlier version of the BIBFRAME vocabulary and is available at http://tuttdemo.coloradocollege.edu/. A later BIBFRAME prototype was developed that directly used Schmidt’s UI design from his blog with the 40,000+ RDF records from Project Gutenberg and can be viewed at http://tuttdemo.coloradocollege.edu/catalog/. The source code for both BIBFRAME prototypes is available at https://github.com/jermnelson/aristotle-library-apps
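    A very small Python sketch of the MongoDB/Flask part of such a stack might look like the following. The database name, collection name, and route are illustrative and not TIGER's actual code, which is linked above.

      # Illustrative sketch: serve JSON-LD representations of BIBFRAME
      # entities stored as MongoDB documents through a Flask route.
      from flask import Flask, abort, jsonify
      from pymongo import MongoClient

      app = Flask(__name__)
      entities = MongoClient()["tiger"]["bibframe_entities"]  # placeholder names

      @app.route("/entity/<entity_id>")
      def get_entity(entity_id):
          doc = entities.find_one({"_id": entity_id}, {"_id": 0})
          if doc is None:
              abort(404)
          return jsonify(doc)  # the stored document is already JSON-LD

      if __name__ == "__main__":
          app.run(debug=True)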

Library of Congress

  • Description:
    • MARC to BIBFRAME Transformation Service
      The transformation tool uses LC's MARC2BIBFRAME transformation to convert an existing MARCXML record or collection to BIBFRAME. Records can be pasted in or addressed by URL (a small input-preparation sketch follows the comparison service below).
      The transformed records are shown in Exhibit, with the ability to see the original MARCXML, BIBFRAME RDF, and JSON. Records can be re-transformed if the MARC2BIBFRAME code has been updated.
      Link: http://bibframe.org/tools/transform/start      Status: Available for use.

    • MARC to BIBFRAME Comparison Service
      The comparison tool takes a record number (MARC 001) and transforms that LC record, showing either MARCXML or BIBFRAME results. Possible new development: LCCN as the input.
      Link: http://bibframe.org/tools/compare     Status: Available for use.
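    Since the transformation service accepts MARCXML pasted in or addressed by URL, a local MARC file can be prepared for it with a short Python sketch like the one below, using pymarc; the file names are illustrative.

      # Illustrative sketch: convert a binary MARC file to a MARCXML
      # collection suitable as input for the transformation service.
      from pymarc import MARCReader, record_to_xml

      with open("sample.mrc", "rb") as fh, \
           open("sample.xml", "w", encoding="utf-8") as out:
          out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
          out.write('<collection xmlns="http://www.loc.gov/MARC21/slim">\n')
          for record in MARCReader(fh):
              xml = record_to_xml(record, namespace=True)
              if isinstance(xml, bytes):    # pymarc versions differ here
                  xml = xml.decode("utf-8")
              out.write(xml + "\n")
          out.write("</collection>\n")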

  • Status Report

  • Contact: Nate Trail

  • Added to Register: January 31, 2014             Last Update: October 21, 2014