I just finished all the features and fixes for release 1.2. Now it is time to test, test, test. The features included in 1.2 are listed on GitHub under the Release 1.2 milestone:
You can download and try the unstable 1.2 beta from the software download page. Just select the unstable version for your operating system.
We’ve created an About the Curator’s Workbench page in an effort to centralize and highlight the documentation that has been produced about the Curator’s Workbench thus far. Keep an eye out, as this page will be updated with new information about the Workbench as it is created.
A screencast demonstration of the Curator’s Workbench software tool is now available on our GitHub wiki. The demo takes you through a sample project, staging and capturing targeted folders, creating a MODS crosswalk with tabular metadata, and exporting a submission METS file in XML. You can also follow this link directly to YouTube.
The Curator’s Workbench Guide v2 expands on the existing documentation and provides more specific details on setting up a staging area, creating and matching metadata crosswalks, and wrapping up projects.
The updated guide is available on the download page.
We are anticipating a new release of Curator’s Workbench software later this month. In the meantime, there is a beta available for download from the software site.
The new version includes these goodies:
- Designate a folder as a collection.
- Set access control policies anywhere in the arranged objects tree.
- Link folders and collections to surrogate images or objects.
- Stage files faster on iRODS grids, thanks to staging performance improvements.
There are various improvements to the crosswalk editor:
- Map to the full range of XML elements and attributes defined or allowed in the MODS schema.
- Nest output elements to an arbitrary depth in a schema-driven manner.
- Set a default value for any text element or attribute.
- Pick a default value for XML attributes with constrained values, such as authority.
Plenty more fixes and tweaks are included as well. More details are on GitHub under the Release 1.2 milestone. If you do test the beta build, please let us know how it works for you.
I just finished the first draft of a Prezi presentation on the Curator’s Workbench. It was fun to make, and I hope that will also make it fun to present. If you wish to pan and zoom in a world of ideas, follow the link.
I’m pleased to announce that workbench source code is now hosted at github.com. I’ll be adding more licensing and build information (Apache 2) soon. This is my first git-hosted project, so I am still learning the ropes. However, I hope that git will facilitate community development on the project, especially of repository or discipline-specific plugins.
The project git page is here:
Before this can be very useful I’ll need to add some more developer documentation. For now I’ll just mention that the build is orchestrated by Maven 3 and the Tycho plugin. This means that even though the project uses the Eclipse framework, it can be built from the Maven command line and in continuous integration environments. A continuous integration server is in the works, and setting it up will help me diagnose any lingering build issues in the trunk. Also coming soon are nightly snapshot and stable builds, which I’ll link to on the download page.
The workbench is designed to update itself and any plugins via update sites. This means that the workbench will detect when newer versions are available and prompt the user for download/install. The primary update site and the workbench menu options to support updates are in the works.
If there are any questions about workbench code or functions, please don’t hesitate to post a comment.
The Carolina Digital Repository has been an active service since April 2009. We started off with three pilot collections in a pre-soft launch mode. In September 2009, we moved towards an official soft-launch status with more collections and enhanced workflows for collection ingest. We now have over 15,000 objects in the repository and we are constantly growing.
The Curator’s Workbench, a pre-ingest workflow tool for digital objects, is now in use in a beta state. We have made the software available for download to get feedback from the larger community. Please check out our earlier blog post about this.
We’re working on a new user interface. This will consist of a new look and feel, as well as a rebuilt Solr index that will allow for faceted searching and browsing. The interface will provide a rich full record display and improved discovery.
In September, we deployed Shibboleth for authentication to restricted content. We’re investigating a more holistic approach to access control through Fedora Enhanced Security Layer (FeSL) to enable more granular access control capabilities.
We’re constantly working with potential depositors to acquire digital content from faculty, staff and students. In this period of new collection growth, we’re building up workflows and clear timelines that depositors can understand.
The repository has been fortunate to have a long-standing Steering Committee comprised of library administrators, repository staff and School of Information and Library Science faculty. The Steering Committee helps set big-picture direction and goals for the repository. This year, we formed a committee for the CDR that reports to the Library Technology Council. This committee can help review and approve more specific development plans and near-term planning documents that will help guide technical staff and collecting efforts.
We prepared a poster for the International Digital Curation Conference (IDCC 2010) on the Curator’s Workbench. 60 cm by 80 cm is not a lot of room, but we did our best. For more information, come find Erin O’Meara at the poster session. You might also talk to Cal Lee or Helen Tibbo, both on our CDR steering committee.
Here is the poster in PDF.
I am proud to announce this new desktop tool, which is definitely the coolest software I’ve worked on this year. It solves several problems we faced in the submission workflow, and we hope it can dramatically speed up processing for large collections with custom metadata. The features break down into three loosely overlapping categories: capture, rearrangement, and description.
Here are some screenshots of the interface:
This screenshot shows the project tree to the left and a MODS editor on the right. The user is editing the MODS elements for a single folder called “TUCASI”. The attributes of the selected MODS name element are editable in the properties view in the lower right quadrant.
The most novel feature, and the one I most want to highlight, is batch metadata crosswalks. The screenshot above shows a crosswalk editor, which consists of a canvas and a palette of widgets. The end user can construct a pretty sophisticated mapping of custom metadata to MODS by “visual programming”. By dropping widgets on the canvas and linking them together, the user defines how a field becomes an element. Presently the editor only supports tab-separated metadata sources, but as time allows we plan to extend the feature to support any delimited file and XML sources.
Whenever a crosswalk definition is saved, it is used to generate or regenerate a set of MODS records. These MODS records can be automatically associated with files and folders through a matcher widget on the canvas, which works as long as you have file and folder names in your custom metadata. Otherwise you can drag and drop a MODS record onto the appropriate item in the arrangement.
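To make the idea concrete, here is a minimal sketch of what a saved crosswalk effectively does: turn one tab-separated row into a MODS record. This is not the Workbench’s actual code, and the field positions and class name are illustrative assumptions; only the MODS namespace and element names (titleInfo, name, namePart) come from the real schema.

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

class CrosswalkSketch {
    static final String MODS_NS = "http://www.loc.gov/mods/v3";

    // Map one tab-separated row (filename <TAB> title <TAB> creator)
    // to a serialized MODS record. The column layout is hypothetical.
    static String rowToMods(String row) throws Exception {
        String[] fields = row.split("\t");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element mods = doc.createElementNS(MODS_NS, "mods");
        doc.appendChild(mods);

        // Column 2 becomes mods/titleInfo/title.
        Element titleInfo = doc.createElementNS(MODS_NS, "titleInfo");
        Element title = doc.createElementNS(MODS_NS, "title");
        title.setTextContent(fields[1]);
        titleInfo.appendChild(title);
        mods.appendChild(titleInfo);

        // Column 3 becomes mods/name/namePart, with a type attribute.
        Element name = doc.createElementNS(MODS_NS, "name");
        name.setAttribute("type", "personal");
        Element namePart = doc.createElementNS(MODS_NS, "namePart");
        namePart.setTextContent(fields[2]);
        name.appendChild(namePart);
        mods.appendChild(name);

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}
```

In the Workbench this mapping is built visually rather than coded, and regenerating records on save means a fixed column or renamed field ripples through the whole batch at once.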
This visual programming and automation of crosswalks saves a lot of valuable time on the part of curators and programmers, who would otherwise be engaged to create custom scripts for each new custom metadata format. Since we are collecting data from disparate parts of the university, each collection may come with a unique descriptive metadata format, often manually created spreadsheets or discipline-specific XML. It’s just not resource efficient to create custom scripts for most incoming collections. The crosswalk feature lets us migrate literally thousands of descriptive records at a time and link them to data objects without new software development.
The last feature to mention today is staging of files. I designed the workbench to process large numbers of files and folders in one submission. However, repository ingest happens via a web interface, which is not the most reliable way of transmitting thousands of large files, let alone a SIP containing that many. So we needed to stage files in advance. The diagram above shows how data flows from incoming data through staging, archival, and access storage. Individual users have accounts in a staging area within our iRODS grid. Files placed there by the workbench are readable by Fedora at ingest time, when they are copied into archival storage.
This approach comes with several advantages:
- No data transmission failures at submission time.
- Transmission of files to staging can be incremental, controlled, and “paranoid”, with a checksum comparison.
- The workbench can inform users of staging issues as they arise, so they can be addressed before submission.
- Files are staged in the background while you work on arrangement and description.
- Ingest is more efficient, since files are copied from a staging grid location to an archival grid location.
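The “paranoid” checksum comparison amounts to hashing the local file and matching it against the checksum the staging grid reports. Here is a minimal sketch of that idea using only the standard library; the iRODS side is stubbed out as a passed-in string, and the class and method names are illustrative, not the Workbench’s actual API.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class StagingVerifier {
    // Compute an MD5 checksum of a local file, streamed in chunks so
    // large files never need to fit in memory.
    static String md5(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                digest.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Compare the local checksum against the one reported by the staging
    // grid; a mismatch means the transfer should be flagged and retried.
    static boolean verifyStaged(Path localFile, String remoteChecksum)
            throws IOException, NoSuchAlgorithmException {
        return md5(localFile).equalsIgnoreCase(remoteChecksum);
    }
}
```

Because verification is per-file, a failed transfer only forces a retry of that one file, which is what makes incremental, background staging practical for very large submissions.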
Some Notes on the Software Technology
The workbench is built upon a considerable pile of open source code and standards, including the following:
- Eclipse Rich Client Platform (RCP)
- Eclipse Modeling Framework (EMF) and Graphical Modeling Framework (GMF)
- METS XML for project definition files and submission files
- MODS XML
- iRODS jargon client libraries
The Eclipse RCP is extensible via the OSGi framework. This means that parts of the tool can be made modular and/or mashable to better fit non-UNC environments. This will require some refactoring that we need to do anyway, but OSGi already provides most of the groundwork.
One module that I’d like to see is a way to integrate Google Refine into workflows. This seems like a natural fit for cleaning up custom metadata and normalizing various sources before crosswalks are applied.
Another modular area would be export for submission. The current implementation transforms our internal METS project definition into a submission METS for ingest into the CDR. Needless to say, this submission METS is in a CDR-specific profile. So a natural extension point would be to support other export modules for other repositories.
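Such an extension point could be as simple as a small interface that each repository target implements. The sketch below is purely hypothetical (no such interface exists in the code yet); the names and method signatures are illustrative assumptions about what a pluggable exporter might look like.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical extension point: each repository target contributes an
// exporter that turns the internal METS project definition into a
// repository-specific submission package.
interface SubmissionExporter {
    /** Human-readable profile name shown in an export wizard. */
    String getProfileName();

    /** Transform the project METS file into a submission package
     *  written under outputDir. */
    void export(Path projectMets, Path outputDir) throws IOException;
}

// A trivial stand-in implementation for illustration. A real CDR
// exporter would apply an XSLT or model transform to produce the
// CDR-profile submission METS; this stub just copies the file through.
class CdrMetsExporter implements SubmissionExporter {
    public String getProfileName() {
        return "CDR METS";
    }

    public void export(Path projectMets, Path outputDir) throws IOException {
        Files.copy(projectMets, outputDir.resolve(projectMets.getFileName()));
    }
}
```

With OSGi, each such exporter could ship as its own bundle and be discovered at runtime, so supporting a new repository would not require touching the core tool.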
The beta software is available for download, experimentation, and use. We cannot provide any support, but we do welcome your comments here, or you can contact us directly. Oh yeah, you download and use the software at your own risk. See our download page.