Friday, March 29, 2013

Week 26: cb2bib or text2bib?

I didn't do many things this week, but I really got into the Vict Bib project.  Earlier in the week I began a list of Vict Bib fields based on a spreadsheet and an examination of possible fields on the website.  This was a loooooong process because there isn't a way to have the website automatically display all fields.  I had to scroll through many records (pages and pages, actually) to make sure that I had identified all of the fields. 

Later in the week I started reading documentation on cb2Bib and text2bib.  The cb2Bib is a free, open-source, multiplatform application for rapidly extracting unformatted or unstandardized bibliographic references from email alerts, journal Web pages, and PDF files.  Text2Bib is a PHP script for converting references to BibTeX format.  However, it seems like it cannot detect some of the document types that Vict Bib uses.  Lastly, I read quite a few forum posts on the subject of data extraction.  So, again, this week was not much "doing", but a lot of preparation for what's to come. 
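For anyone unfamiliar with the target format, here is what a typical BibTeX entry looks like (the entry key and all field values below are made up for illustration):

```bibtex
@article{eliot1856silly,
  author  = {Eliot, George},
  title   = {Silly Novels by Lady Novelists},
  journal = {Westminster Review},
  year    = {1856},
  volume  = {66},
  pages   = {442--461}
}
```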

Reading #7

Weir, R. O. (2012).  Gathering, Evaluating and Communicating Statistical Usage Information for Electronic Resources. In Managing Electronic Resources (pp. 87-119). Chicago: American Library Association.
When evaluating e-resources, it is vital to take a close look at the usefulness patrons derive from them compared to the investment, such as purchasing and licensing.  The importance of usage data and statistics in making and justifying e-resource renewal decisions is substantial.  The challenges faced in gathering data, creating and processing statistics, and reaching responsible conclusions are quickly increasing.  Prior to tackling these challenges, an evaluator must be certain of what he or she wants to achieve and how to tackle it, deciding upon a scale of usage analysis that is meaningful and sustainable in each library’s individual context.  The library and user communities must be aware of the subtle variations that exist in vendors' definitions of usage.  Understanding the attempts to achieve standards, and the nuances of specific data items, equips the librarian to use usage data wisely. 

Friday, March 22, 2013

Week 25: More possibilities

This week I think I have finished the Schematron work.  Really, I do!  I have the encoder checks, the editor checks, and checks for the biographical and critical introductions.  I have put them up on the Wiki, and after Michelle looks at them I can put them on Xubmit so that encoders can later download them to check their work. 
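For a sense of what these checks look like, here is a simplified, hypothetical Schematron rule of the same general shape (it is not one of the actual checks, which live on the Wiki, and the element and attribute names are invented):

```xml
<!-- A Schematron rule flags any document node matching the 'context'
     that fails the 'test' expression. -->
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="//div[@type='criticalIntroduction']">
      <assert test="head">
        Every critical introduction must begin with a head element.
      </assert>
    </rule>
  </pattern>
</schema>
```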

I also had a nearly two-hour meeting with Michelle to talk about METS Navigator work and a new project assisting with migrating the data from the Victorian Studies Bibliography.  Right now all of the data is on a website, but due to a lack of support and time to devote to its maintenance, the website will be shutting down.  In order to preserve the data, the plan is to move it to Zotero.  Michelle would like me to do two things: first, map the Vict Bib fields to BibTeX fields (BibTeX is a format that Zotero supports); and second, find a tool that can extract the data from the CSV file into BibTeX. 
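Since the field mapping and the extraction step go together, here is a sketch of what the conversion might look like (in JavaScript, just because that's what I've been learning; the Vict Bib column names below are placeholders, not the real fields):

```javascript
// Toy field map from CSV columns to BibTeX fields. The column names
// (Author, Title, Year, Publication) are invented stand-ins for the
// actual Vict Bib fields I still need to map.
const fieldMap = {
  Author: "author",
  Title: "title",
  Year: "year",
  Publication: "journal",
};

// Turn one parsed CSV row (an object) into a BibTeX entry string.
// Actual CSV parsing would be handled by a CSV library.
function rowToBibTeX(row, key, entryType = "article") {
  const fields = Object.entries(fieldMap)
    .filter(([src]) => row[src]) // skip empty columns
    .map(([src, dst]) => `  ${dst} = {${row[src]}}`)
    .join(",\n");
  return `@${entryType}{${key},\n${fields}\n}`;
}

const entry = rowToBibTeX(
  { Author: "Eliot, George", Title: "Silly Novels by Lady Novelists",
    Year: "1856", Publication: "Westminster Review" },
  "eliot1856silly"
);
console.log(entry);
```

A real tool would also need to pick the right entry type per record, which is exactly where text2bib seems to fall short for Vict Bib's document types.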

I also attended the Digital Library Brown Bag on the William V.S. Tubman Photograph Collection. The presentation reviewed the history of the project, from removing old, decaying photographs from a damp library in Liberia to providing public access to these photos via Image Collections Online.  The speakers stressed the large role of IU's Digital Library Program through its contributions of digital library infrastructure and related tools.  This presentation was further confirmation for me of what excites me about the possibilities of digital libraries and digital collections. 

Friday, March 8, 2013

Reading #6

Caplan, P. (2003). MOA2 and METS. In Metadata Fundamentals for All Librarians (pp. 161-165). Chicago: American Library Association.
This book’s section on METS begins with its history: in 1997 the Digital Library Federation began a project called Making of America II.  Then, in 2001, a workshop was convened to discuss modification of MOA2, and a result of the workshop was a successor format, known as METS.  The section continues on to describe the differences between MOA2 and METS.  METS, unlike MOA2, has a header section containing information about the METS file itself, as well as a behavior section detailing information about behaviors associated with the object.  Another major difference is the ability either to reference an external metadata record or to embed metadata from a non-METS namespace within a wrapper element.  The author recommends the external metadata record as a way to avoid maintenance issues that could arise due to changes in standards or project focus.  The many parts of a METS file are then listed, along with a brief explanation of each of the subparts.  For example, the ‘fileSec’ groups information about files related to the digital object, and its ‘FLocat’ subelement can be used to link to the actual file.  By outlining METS' history and providing sample file excerpts that show its many potential uses, the author demonstrates well the impressive flexibility and extensibility (through the extension schemas) of METS.
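To make that structure concrete, here is a minimal METS skeleton of my own (not one of the book's excerpts), showing a header, an external descriptive metadata reference, and a fileSec with an FLocat link; the IDs, dates, and URLs are invented:

```xml
<!-- metsHdr describes the METS file itself; dmdSec either points to an
     external record (mdRef, shown here) or wraps embedded metadata
     (mdWrap); fileSec groups the files, with FLocat linking to each one. -->
<mets xmlns="http://www.loc.gov/METS/"
      xmlns:xlink="http://www.w3.org/1999/xlink">
  <metsHdr CREATEDATE="2013-03-08T12:00:00"/>
  <dmdSec ID="dmd1">
    <mdRef LOCTYPE="URL" MDTYPE="MODS"
           xlink:href="http://example.org/records/mods1.xml"/>
  </dmdSec>
  <fileSec>
    <fileGrp USE="master">
      <file ID="file1" MIMETYPE="image/tiff">
        <FLocat LOCTYPE="URL"
                xlink:href="http://example.org/images/page1.tif"/>
      </file>
    </fileGrp>
  </fileSec>
</mets>
```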

Week 24: Library of Congress fellowship?

So, last week I found out about this Library of Congress fellowship.  It's a nine-month position where you get to work at one of several D.C. digital libraries/humanities institutions.  I think I may apply because it seems like it may be interesting and a good stepping stone.  To prepare, I spent a few hours this week researching the Folger Shakespeare Library and MITH.  I also did some work on improving my resume. 

Again, I attended the Digital Library Brown Bag.  Stacey Konkiel, the E-Science Librarian, and Heather Coates (remotely, from IUPUI) gave the talk.  It was on the university-wide suite of data services that librarians have developed to address the need for academic libraries to evolve to include research data curation and management services.  Most of these services seem to be used primarily by science faculty and students, but a small number of services exist to provide support and platforms for social science and humanities scholars.  I admired how the speakers emphasized the need for campus-specific resources, given that the Bloomington and IUPUI campuses tend to have slightly different research and output emphases. 

To round out the week, I continued the JavaScript tutorial. 

Friday, March 1, 2013

Reading #5

Liu, J. (2007). Metadata implementation.  In Metadata and its Applications in the Digital Library: Approaches and Practices (pp. 124-125). Westport: Libraries Unlimited.

Within the Metadata Implementation chapter, there is a small section on crosswalking that begins with a quote from the Dublin Core Metadata Glossary stating that “crosswalks help promote interoperability”.  The author then defines a crosswalk as a “high level mapping table for conversion”.  Such a method leads to an inevitable sacrifice of data, because it is almost impossible to match all of the elements in the original schema with those of the other schema; no two metadata standards are completely equivalent.  While this variety makes it possible to pick and choose the schema that best suits a project, it can also be detrimental when trying to transfer data from one metadata standard to another.  The author mentions several projects that have used crosswalks extensively, such as Michigan State University’s Innovative Interfaces XML Harvester, a tool that helps convert the collection-level information in EAD into MARC.  METS is described as an example of a standard that avoids transformation problems because it accepts descriptive metadata in any format, regardless of local rules or standards.  In a time when standards are constantly shifting and advancing, METS is an invaluable tool. 
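A crosswalk really is just a mapping table, so a toy version is easy to sketch (in JavaScript; the EAD-to-MARC correspondences below are a few commonly cited ones, greatly simplified, and the sample record values are invented):

```javascript
// Toy crosswalk table: a handful of EAD elements mapped to MARC fields.
// Real crosswalks, like the one behind the XML Harvester, are far more
// detailed and handle subfields, repetition, and context.
const eadToMarc = {
  unittitle: "245",    // title
  origination: "100",  // main entry / creator
  scopecontent: "520", // summary note
  physdesc: "300",     // physical description
};

// Convert a record; anything without a mapping is simply lost,
// which is exactly the "sacrifice of data" the author describes.
function crosswalk(record) {
  const out = {};
  const lost = [];
  for (const [element, value] of Object.entries(record)) {
    const field = eadToMarc[element];
    if (field) out[field] = value;
    else lost.push(element);
  }
  return { out, lost };
}

const { out, lost } = crosswalk({
  unittitle: "Family papers, 1850-1901",
  bioghist: "Biographical sketch...", // no mapping in our toy table
});
```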

Week 23: XSLT, METS, JavaScript, oh my!!

So, this week I dabbled with a bunch of things.  I continued reading 'XSLT Quickly' by Bob DuCharme and got as far as adding and deleting elements.  But as it turns out, I actually won't be writing the XSLT for the transformation; it seems the programmers want to use Java and do the transformation themselves.  So, I will still write a draft XSLT for the practice, although it won't be used. 
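For my own notes, the 'adding and deleting elements' technique boils down to an identity transform plus overriding templates; the element names in this sketch are invented, not from the book or the project:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity template: copy everything through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Delete: an empty template removes every 'draftNote' element -->
  <xsl:template match="draftNote"/>

  <!-- Add: wrap each 'title' in a new 'header' element -->
  <xsl:template match="title">
    <header>
      <xsl:copy-of select="."/>
    </header>
  </xsl:template>

</xsl:stylesheet>
```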

I then got my METS experience for the week by updating the METS comparison documentation ever so slightly so that the wording is a little clearer. 

Then, since I have to wait to hear from Michelle about my next step, I decided to finally sit down and do some JavaScript tutorial work.  I want to eventually use JavaScript for my Hildegard project, so since I had the time I decided it was the perfect moment.  I spent about 3 hours working on that, using the W3Schools site: www.w3schools.com/js/js_intro.asp .  I really like their tutorials.  They're not super comprehensive, but they are good basic starting points. 

I also attended the Digital Library Brown Bag.  This week, the DLP's own Julie Hardesty gave the talk.  She is the Metadata Analyst/Librarian here, and she and I worked together on the ICO/Photocat project.  Her presentation was on the IU Libraries' use of CSS media queries to offer mobile-ready access to online digital collections.  I liked how she had us take out our smartphones (if we had them) and compare how the page was rendered on the different displays.  Apparently, there are two ways to deliver a web page to a mobile phone.  One is to create a completely separate mobile site; you can tell a site is a mobile site by whether an 'm.' appears at the front of the URL.  Alternatively, the CSS can be written to automatically adapt the display to fit the screen, whether on an iPhone, Android, or iPad.  This was an interesting talk, and at the end Julie read a funny poem she wrote called 'Making Mobile Meaningful', written using only words starting with 'm'.  Clever!
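A tiny example of the second approach, as I understood it (the breakpoint, selectors, and values are invented, not from Julie's actual stylesheets):

```css
/* Default (desktop) layout for a collection page */
.collection-grid {
  width: 960px;
  margin: 0 auto;
}

/* On narrow screens, the same page reflows: full width, bigger tap targets */
@media (max-width: 600px) {
  .collection-grid {
    width: 100%;
  }
  .collection-grid a {
    padding: 12px;
  }
}
```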