Tuesday, 28 January 2014

IS 289: Week 4 (January 28, 2014)

IS 289 Community-based Archives
Guest Lecturer
University of California at Los Angeles – January 28, 2014

As a result of my role as facilitator for the Mayme A. Clayton Library and Museum’s Collection Advisory Board, I was asked to be a guest lecturer in Dr. Anne Gilliland’s course on community-based archives at UCLA’s Graduate School of Education and Information Studies. When I arrived at the sunlight-filled classroom, I was pleased to find a nice group of fresh-faced students listening to their instructor, taking notes, with no PowerPoint presentation in sight. I took a seat in the back and listened as Dr. Gilliland talked about strategic planning for community archives, using examples from diverse archives around the world to illustrate her points. I found myself taking notes on the information that would help me articulate ideas at MCLM and trying to capture the details of the institutions that I hoped to visit in the near future. She talked about how the Japanese American National Museum is located on the site from which thousands of Japanese Americans were deported to internment camps, to demonstrate how the location of an archive can resonate powerfully with its mission. She mentioned how the municipal archives in Cologne, Germany collapsed into the metro station below because proper floor-load measurements were not considered. In terms of raising money, she shared how the Arab American National Museum in Dearborn, Michigan suffered a detrimental hit to its fundraising efforts when its campaign rollout was scheduled within days of the events of 9/11. The widespread negative perception of Arab Americans forced the museum to revisit its strategy for securing funds.

When it was time for me to speak, I decided to forgo my written notes and let the photographs on my PowerPoint keep me on track for the presentation. I shared how Mayme’s interview with The HistoryMakers and IMLS funding facilitated my move to Los Angeles to work on Mayme’s collection in 2012. I told Mayme’s story about wanting people to know that “black people had done great things,” how she spent her whole life collecting evidence of that simple fact, and how her collection arrived in an empty courthouse in Culver City, CA. I relished the opportunity to spend some time discussing how MCLM has been able to capitalize on Mayme’s history of community engagement to enlist community buy-in and meet the minimum expenses of keeping the door open. I talked about Black Talkies on Parade (film festivals), Student and Independent Filmmakers Awards, annual awards programs, and celebrity golf tournaments from the late ’70s to the early 2000s, all of which are documented within Mayme’s papers. I went on to talk about our current challenges, as I saw them: mainly a lack of adequate staff and the absence of strategic planning. I gave examples of the negative impact of excessive reliance on volunteers, and how we have to redo tasks when they were not done consistently over time. I mentioned our Collection Advisory Board as a strategy to help us consider multiple angles before decisions are made at the museum. I finished with a slide from Kate Theimer’s 2009 SAA presentation on Archives 1.0 versus Archives 2.0, and how we can bring MCLM into 2.0 territory. The students asked very perceptive questions about the museum’s accessions, collaboration with other black archives, and how we manage volunteers. Overall, it was a very successful presentation, and I plan to visit their class for my own edification as time permits.


As I sat through the rest of the class, I listened as Dr. Gilliland brought up complex philosophical questions about the function of community archives. One point that has stuck with me is her questioning of how description standards are implemented against the needs of our users. She gave an example of a former student, now working as a metadata specialist at UCLA, who is trying to make an English finding aid accessible to a group of older Armenian community members. There is no doubt that the collection would be of use to those individuals, but even if she managed to have the finding aid translated into Armenian (with the appropriate script), who is to say that the words we use are the words they would use to describe the content of that collection? In my LIB 122 class, we talk about data schemas and standards (semantics) as the best way to share information among different institutions and make it accessible to their users. Dr. Gilliland encouraged us to ask community members what they would call a given item and compare it to what an archivist would call it, in order to determine how critical the problem is for a given archive. When I consider that little test, I think it is valuable for the staff of MCLM to aspire to the standard; the proper names, material types, and subject matter of Mayme’s collection are not so far removed from the mainstream as to warrant their own classification. Not that community archives are designed to be controlled by the hegemony, but I do like the idea of being versed enough in the language of the standards to be a “crosswalk” for the community archives in my sphere of influence.
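Dr. Gilliland’s “little test” could even be sketched in code. Below is a minimal sketch, in Python, of comparing community-supplied terms against an archivist’s authorized terms; every term in it is a hypothetical example for illustration, not a result from an actual survey.

```python
# A minimal sketch of the "little test": compare what community members call
# an item against the archivist's authorized terms. All terms below are
# hypothetical examples, not results from an actual survey.

def vocabulary_overlap(community_terms, authorized_terms):
    """Return shared terms and the terms unique to each group."""
    community = {t.strip().lower() for t in community_terms}
    authorized = {t.strip().lower() for t in authorized_terms}
    return {
        "shared": sorted(community & authorized),
        "community_only": sorted(community - authorized),
        "authorized_only": sorted(authorized - community),
    }

# Hypothetical responses for a single photograph in the collection.
result = vocabulary_overlap(
    community_terms=["church program", "homecoming", "snapshot"],
    authorized_terms=["Church program", "Snapshots", "Gelatin silver prints"],
)
print(result)
# A large "community_only" set suggests the finding aid's language may not
# match the words the community would actually use to search.
```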

Monday, 27 January 2014

LIB 122: Week 2 (January 21, 2014)

This week we learned about how metadata works to help us manage and discover content, covered the different types of metadata, and had an introduction to metadata standards. One critical technological component for metadata to be effective is the existence of a digital asset management system; an Excel spreadsheet will not cut it. The digital asset management system is the central requirement that connects the “dark” storage archive (.tif files), the institution’s integrated library system, and the web interface through which users access materials. It also allows changes to be made in one place and updated everywhere else; to fully unlock the power of metadata, this system is critical. We also discussed descriptive, structural, administrative, and preservation metadata. When we began to talk about metadata standards, I made note of a chart that will serve me well into the future:


                 Museum   Library   Archives
Data Structure   CDWA     MARC      EAD
Data Standard    CCO      AACR2     DACS
Data Format      XML      XML       XML
   CDWA = Categories for the Description of Works of Art, CCO = Cataloging Cultural Objects,
   MARC = Machine-Readable Cataloging, AACR2 = Anglo-American Cataloguing Rules,
   EAD = Encoded Archival Description, DACS = Describing Archives: A Content Standard

We also discussed different data structures/schemas, like MODS and Dublin Core. One of the more interesting aspects for me was the idea of semantics, or data standards, within each schema. How can we standardize our descriptions if we aren’t speaking the same language? During my fellowship, I kept talking to my executive director about appraising records; because he did not have an archival background, he would get frustrated, thinking I was talking about monetary appraisal. We were not using the same “data standard,” or semantics. During class, we practiced creating different types of metadata elements and entering metadata values for a wide variety of digital objects. Even though my community archive is missing some of the equipment and software required to fully utilize metadata, we can capture information according to the standards as we catalog our assets.
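The in-class exercise translates naturally into code. Here is a minimal sketch of the idea: mapping a local record with homegrown field names onto shared Dublin Core element names, so two institutions end up “speaking the same language.” The field names and values are hypothetical, not an actual MCLM record.

```python
# A minimal sketch of mapping a local description, with homegrown field
# names, onto unqualified Dublin Core elements. Fields and values are
# hypothetical examples.

LOCAL_TO_DC = {
    "photo_caption": "title",
    "photographer":  "creator",
    "taken_on":      "date",
    "topics":        "subject",
}

def to_dublin_core(local_record):
    """Re-key a local record using shared Dublin Core element names."""
    return {LOCAL_TO_DC[field]: value
            for field, value in local_record.items()
            if field in LOCAL_TO_DC}

local = {
    "photo_caption": "Black Talkies on Parade film festival",
    "photographer":  "Unknown",
    "taken_on":      "1982",
    "topics":        "Film festivals; African American filmmakers",
}
print(to_dublin_core(local))
# {'title': ..., 'creator': 'Unknown', 'date': '1982', 'subject': ...}
```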

Tuesday, 21 January 2014

LAPNet Earthquake Workshop

Earthquakes: What are they and how do they affect our Buildings and Collections?
Glendale Public Library – January 21, 2014

I was very excited to attend this workshop about earthquakes at the Glendale Public Library this morning. It was especially relevant to me because, having grown up in Phoenix, Arizona, I had no concept of how it feels to live in a veritable earthquake hotbed. Now that I reside in Los Angeles County, it is impossible to ignore that we all live within 10 miles of an earthquake fault line. Dr. Lucy Jones, a seismologist from the United States Geological Survey, spoke for an hour about the inevitable earthquake that will strike Los Angeles in the near future and what the ramifications for our urban lifestyles will be. Everything from running water to cell phone towers and other infrastructure could be unavailable for unspecified lengths of time. Using Hurricane Katrina and New Orleans as an example, Dr. Jones shared that, as a result of poor preparation and recovery efforts, the population of New Orleans has been reduced to half of its pre-Katrina numbers. How would Los Angeles deal with this type of blow to its population and economy? There is a short video, The ShakeOut Scenario, that describes what Los Angeles would be like in the event of a high-magnitude earthquake along the San Andreas fault, which just happens to run through the fastest-growing region of Southern California, the Inland Empire.

The next speaker, Anders Carlson, a professor at the USC School of Architecture, shared extensive data about how buildings have responded to the movement of the earth beneath them in previous earthquakes. He shared that most engineers and architects design new buildings with preventative measures in anticipation of earthquakes, but older buildings are in dire need of seismic retrofitting. There are building codes that require every new and existing building, especially K-12 public schools, to have a minimum level of life-safety elements, but the performance of the building under distress is not mandated. Even if no one dies in a building ill-prepared for an earthquake, the building could knock into neighboring structures and cause damage, or become damaged beyond repair and displace residents and workers; these problems are ultimately more expensive to fix down the road than retrofitting the building now. There is a website, created by structural engineers, dedicated to identifying these dangerous buildings in an effort to make building owners do the preventative maintenance.

The last speaker, Tony Gardner, the former head of Special Collections and Archives at California State University, Northridge, talked about his experience with the Northridge earthquake of January 17, 1994. Twenty years ago, an earthquake ripped through his campus and left 400 million dollars in damage, the highest amount done to an American college campus since the wartime arsons of the Civil War. The library’s two wings, one of which housed the archival materials, were damaged beyond repair and had to be demolished and reconstructed. Mr. Gardner shared photographs and gave a detailed narrative of the steps taken by the librarians and archivists to save their materials. I imagined the great patience and resilience demonstrated by his staff as they were moved from one temporary location to the next, discovering new problems at every turn, from theft to water damage. They also had to inventory and pack away materials quickly, often in the dark and wearing hard hats, as the structure was not stable in the weeks after the earthquake. The images of the lightweight industrial shelves twisted out of shape like pipe cleaners and the bankers’ boxes hanging at an angle or fallen to the floor were unsettling, as many of the shelves at MCLM are set up just like that. I also know that some of Mr. Gardner’s “lessons learned” have been implemented through the California Preservation Program, including having a point of contact in case of an emergency and step-by-step instructions. Every volunteer at MCLM has an emergency plan with maps and phone numbers behind their mandatory name tag.


As a result of my attendance at this workshop, I have an increased awareness of the earthquake threat to myself and the places where I work. I was so grateful for the audience member who asked what an individual should do in the case of an earthquake; the answer: don’t run, find a table and crawl under it. As MCLM is working on a new collection storage space, I can also be much more knowledgeable about the positive and negative aspects of the decisions being made. Although this session was not about archives directly, I believe that we, as archivists, have a responsibility to understand and mitigate the risks, both within and beyond our control, in the environments of our collections.

LIB 122: Week 1 (January 14, 2014)

I returned to Pasadena City College for the second semester of my certificate program in Digitization for Cultural Heritage Institutions. This course focuses exclusively on metadata, and our first class delved into why it is important: it drives people to our websites. This is the reason that high-quality, standardized metadata is critical. When metadata principles are understood and applied, local repositories can make their metadata records harvestable, and federated search engines like Google can include us in search results. Not only does this help repositories attract a wide range of users, it helps to justify our existence with data that demonstrates usage and relevance, which becomes critical as competition for financial and human resources increases. Linda showed us images on a local library’s website, and then searched for those images on oaister.worldcat.org (no hits) to demonstrate how the library is losing opportunities to engage users. Toward the end of class, we were asked to go around the room to share our biggest concerns about the class and what we hoped to gain from this semester. I said that I was looking forward to learning new metadata schemas and controlled vocabularies, but my concern was a little more complex. I explained that after my experience last semester with scanning and CONTENTdm, everything seemed so simple with the proper hardware and software; what happens in institutions that can’t afford these resources? Linda said that even when you have resources, making metadata records harvestable can be time-consuming and difficult, but there are things that you can do at the outset to streamline the process and be prepared when the opportunities become available. That was very encouraging to me, because whether we (MCLM) have a sophisticated content management system or not, our materials need to be described. The more I understand about metadata, the more valuable I can be in a leadership position at the museum, and when the resources become available we can focus on importing descriptions rather than re-creating them from scratch.
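In practice, “harvestable” usually means exposing records over OAI-PMH, the protocol aggregators like OAIster use to collect metadata. As a minimal sketch of what a harvester does, assuming a hypothetical repository endpoint (the URL below is a placeholder):

```python
# A minimal sketch of an OAI-PMH harvest request, the mechanism aggregators
# like OAIster use to collect records. The endpoint URL is a placeholder;
# a real repository would publish its own base URL.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "https://example.org/oai"  # hypothetical OAI-PMH endpoint

params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{BASE_URL}?{params}") as response:
    tree = ET.parse(response)

# Dublin Core elements carry their own XML namespace.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(f"{DC}title"):
    print(title.text)
```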

Sunday, 5 January 2014

LIB 121: Week 15-16 (December 3, 2013)

Our final project in this class was to use CONTENTdm to attach metadata to the 25 images that we had scanned for the midterm. We also had to add information about our process and metadata to the Digital Project summary document. I knew that the most time-consuming aspect of the project would be writing descriptions and assigning controlled vocabulary to each image. During the weekend before class, I drafted a spreadsheet that contained all of the metadata elements required for the assignment and filled in the data, so that I could spend more class time working with the functionality of CONTENTdm.

Essentially, when I arrived in class and began working, different aspects of my work were held in three different places. My metadata spreadsheet and images were on my jump drive; from there, I uploaded images and entered metadata into the “Project Client” held on the computer lab desktop; and then I uploaded those images and metadata to the CONTENTdm server for public viewing. I could see how, in a regular archive environment, CONTENTdm workflows and permissions would need to be established early on to ensure accurate information and prevent redundancy. Once information was uploaded to the server, I had to verify that I wanted those images and that vocabulary added to my collection, and then the program “indexed” the content and made it available for viewing.

The system was relatively simple for me, except for a few snags. On my first attempt to move images from the jump drive to the desktop, I did not elect to sync the Project Client files to my jump drive, which meant that the images and metadata I had not yet uploaded to the server were deleted from the Project Client when I closed the application. Luckily, it was fewer than 10 images, and I had the metadata saved in my spreadsheet. My other issue involved the use of commas rather than semicolons to separate terms in my controlled vocabulary: each comma-separated list of terms was treated as a single unique term in the controlled vocabulary list. On the web server, I had to go into the controlled vocabulary edit field, delete all of the long strings, and enter the individual terms. Then I had to go into each image and separate the terms with semicolons. Once again, my spreadsheet helped me see which terms I had assigned without opening every image in the first step.
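In hindsight, a few lines of scripting could have caught the delimiter problem before anything was uploaded. A minimal sketch, assuming the spreadsheet is exported as a CSV with a hypothetical “Subject” column (the filenames are also invented):

```python
# A minimal sketch of fixing the delimiter problem before upload: convert
# comma-separated controlled vocabulary terms to the semicolons CONTENTdm
# expects. The filenames and the "Subject" column are hypothetical.
import csv

with open("metadata.csv", newline="", encoding="utf-8") as infile, \
     open("metadata_fixed.csv", "w", newline="", encoding="utf-8") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Split on commas, trim whitespace, and rejoin with semicolons so
        # each term registers as its own controlled vocabulary entry.
        terms = [t.strip() for t in row["Subject"].split(",") if t.strip()]
        row["Subject"] = "; ".join(terms)
        writer.writerow(row)
```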


We had weeks 15 and 16 to use the computer lab to finish our project; with the work I was able to do at home, I finished within 10 minutes on our last day of class. I was grateful for the early session, and left Pasadena City College with the satisfaction of having made a worthwhile investment in this program. My completed collection can be found at http://cdm16693.contentdm.oclc.org/, under my last name, Powell.

LIB 121: Week 14 (November 26, 2013)

I had used CONTENTdm while working on the Sacks Collection at the Arizona Historical Foundation in Tempe, but my responsibilities were mainly data entry. As we learned about all of the capabilities of the content management system, I began to understand how many decisions were made before I was even hired to input text. Linda instructed us in how CONTENTdm can be incorporated into digital preservation workflows, and in its functionality as a tool for collection searching and display. The trick is to set it up in a way that meets the needs of the repository as well as those of potential users. All of the lectures prior to this one gave us a blueprint for how we could configure CONTENTdm for our final project.

We talked about the importance of field names, Dublin Core mappings, and various data types. Other field properties included whether the text should wrap or stay on a single line, whether the field was required, whether it should be full-text searchable, and whether it should adhere to a controlled vocabulary. CONTENTdm comes with nine controlled vocabularies pre-loaded: the Art & Architecture Thesaurus, the Dublin Core Metadata Initiative Type Vocabulary, the Getty Thesaurus of Geographic Names, Guidelines on Subject Access to Individual Works of Fiction, Drama, Etc., Maori Subject Headings, Medical Subject Headings, the Newspaper Genre List, the Thesaurus for Graphic Materials, and the Union List of Artist Names. Linda went over so many details about the capabilities of CONTENTdm, and showed us so many examples (such as the Claremont Colleges) during our three-hour lecture, that I stopped taking notes. Toward the end of class, she explained that she did not expect us to remember every detail, just the capabilities; this way, we would be encouraged to do the research to figure out how to make the software work for us.
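To make those field properties concrete, here is a minimal sketch expressing them as plain data; it mirrors the concepts from the lecture, not CONTENTdm’s actual configuration format or API, and the field definitions are hypothetical.

```python
# A sketch of the kinds of field properties we discussed, expressed as plain
# Python data. This mirrors the lecture's concepts, not CONTENTdm's actual
# configuration format or API.
field_definitions = [
    {"name": "Title", "dc_map": "Title", "data_type": "Text",
     "wrap": True, "required": True, "searchable": True, "vocab": None},
    {"name": "Subject", "dc_map": "Subject", "data_type": "Text",
     "wrap": True, "required": False, "searchable": True,
     "vocab": "Thesaurus for Graphic Materials"},
]

def missing_required(record, definitions):
    """Flag required fields absent from a metadata record."""
    return [d["name"] for d in definitions
            if d["required"] and not record.get(d["name"])]

print(missing_required({"Subject": "Golf tournaments"}, field_definitions))
# ['Title'] -- this record would fail the required-field check
```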

LIB 121: Week 13 (November 19, 2013)

This lecture started with an explanation of the various types of software used to work with collections: collection management systems, digital asset management software, digital library software, and preservation systems.

CollectionSpace is an open-source digital content manager that the Museum of the Moving Image (New York, NY) and the Phoebe A. Hearst Museum of Anthropology (Berkeley, CA) are using. After clicking through the demonstration, it looks like a great resource for item-level description, especially for digital files. One of Linda’s warnings about open source was confirmed when I looked at the FAQs: “Installing CollectionSpace requires someone comfortable with a command line interface and package manager (for Linux and Mac installations), and who has some familiarity with editing text files.” These skills were not an element of my graduate education, and as the lone archivist in a community archive without an IT staff, implementing this “easy and free” software could present a bit of a challenge.

PastPerfect is a very popular software option for small museums and archives. It has object, photograph, archive, and library modules that allow for the cataloging of diverse collections. It can also manage other aspects of museum operations, including item loans and donor and volunteer information. The leadership at the Mayme A. Clayton Library and Museum is strongly considering the adoption of PastPerfect, but I would caution them to clearly define the organizational structure of their collections first, because provenance and consistency could easily get lost in the midst of all of those fields and modules.

Greenstone Digital Library Software has a more international presence and can be used in at least six different languages, but various American institutions, like the Allen Park Veterans Administration Hospital Archives and the Detroit Public Library’s E. Azalia Hackley Collection, use it as well. Both websites incorporate search “buttons” across the top of the screen that allow you to browse by relevant categories: titles, people, organizations, lyrics, etc. The catalog records are diverse (oral histories, photographs, sheet music) and clearly displayed. In the case of the Detroit Public Library, there are images alongside the item descriptions.

Fedora, an acronym for Flexible Extensible Digital Object Repository Architecture, is being used by Arizona State University Libraries and Grinnell College, among others. The software seems best utilized as a digital repository showcasing faculty output, conference papers, data sets, and learning objects. Fedora is open source and supported by DuraSpace, the organization behind DSpace; DSpace functions more like a content management system and works collaboratively with Fedora. My dive into the website confirmed that Fedora has a complicated infrastructure with the power to support high volumes and diverse types of digital objects. It does not seem like an appropriate application for a community archive with a small staff.

Simple DL is a brand of digital library software that allows organizations to publish and describe the contents of their collections online. Mount St. Mary’s College and the University of South Alabama, among others, are currently using it. The interface is web-based, which allows you to work from any computer, and the product includes a 30-day trial and bulk upload of files. The pricing ranges from $80.00 to $160.00 per month, depending on how much data you require. One public library in Nevada has used Simple DL to post oral histories, periodicals, and photographs. This software could be a good choice for a place like the Mayme A. Clayton Library and Museum.

Omeka is an open-source content management system designed for publishing digital collections. Omeka is not as intense as Fedora or DSpace and uses the unqualified Dublin Core metadata standard. Rather than hosting entire organizational collections, Omeka is extremely powerful for storytelling through online exhibits; examples include the “Bracero History Archive” and the “Frontier to Heartland” digital exhibits. I have seen some incredible websites created through Omeka, and I can think of so many exhibits to create at the Mayme A. Clayton Library and Museum: for example, a digital exhibit dedicated to Marilyn White and her experience as a black female athlete in the 1940s, or C. Jerome Woods’ collection telling the story of the Black LGBT community in Los Angeles from the 1960s to the present.

Readers could probably tell that I had a specific perspective while learning about these topics in class. When I first began working in archives, I assumed that I could plug myself into any university, corporate, or library archive, but as time goes on, I feel like I have found my calling with the perpetual underdog: small to medium-sized community archives. I am drawn to the tools that allow these institutions to use their resources strategically to educate themselves and those around them.