I’m a Graduate Research Assistant at DILAC and a first-year MS-HCI (Human-Computer Interaction) student. Since my first semester, I have been fortunate to work on a project with Juan Carlos Rodriguez, a Georgia Tech professor of Spanish who has documented, through interviews, a 40-year struggle against the U.S. military presence in Vieques, Puerto Rico. In 2004 he went to the island and captured 107 hours of video interviews with people who were part of the struggle against the Navy. In the summer of 2018, with the help of the Georgia Tech Library, he began digitizing the archive. Most of it is housed at the library and on YouTube. The main purpose of this development project is to make the archive publicly available in an interface that facilitates efficient and innovative research.
Our first goal was to create a prototype that visually represents these testimonies and this history. Xinyi Chen, another GRA, and I started doing some humanities research on the history of Vieques, the relationship between the U.S. (specifically the Navy) and Puerto Rico, and the 1999 protests. Below is an image of the timeline I created with TimelineJS from some of the interview clips Juan Carlos had shared, beginning with the major events that took place during the same period.
There are four components to the website project. YouTube hosts the video interviews. The Oral History Metadata Synchronizer (OHMS) is where we save metadata about the interviews and perform operations such as indexing and attaching and syncing transcripts. The OHMS Viewer is a separate application meant to run on its own web server; this is where the interviews can be viewed along with their indexes and transcripts. Finally, Omeka, an open-source digital collection/exhibit content management system, will house the public site. Syncing all these parts and making them work together the way we need has been a bit of a challenge.
Since the archives are strongly thematic and character-driven, it is important for users to be able to search the transcripts for specific tags or keywords and to link different parts of the videos in order to connect topics. For instance, there are interviews in which people speak about their experience of the U.S. Navy's presence. Ideally, a viewer would be able to link such a story with other events in a meaningful way and perhaps even see a juxtaposition of two related interviews.
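To make the idea of segment-level tag search concrete, here is a minimal sketch in Python of an inverted index from tags to video segments. Everything in it (the interview IDs, timestamps, and tags) is a made-up placeholder; it does not use any of the platforms' actual APIs.

```python
from collections import defaultdict

# Hypothetical segment records: (interview_id, start_seconds, end_seconds, tags).
# None of these names or values come from OHMS or Omeka; they only sketch the idea.
segments = [
    ("interview_01", 120, 340, {"navy", "protest"}),
    ("interview_01", 900, 1100, {"fishing", "navy"}),
    ("interview_02", 45, 300, {"protest", "arrests"}),
]

def build_index(segments):
    """Map each tag to every segment that carries it."""
    index = defaultdict(list)
    for interview, start, end, tags in segments:
        for tag in tags:
            index[tag].append((interview, start, end))
    return index

def search(index, tag):
    """Return all segments carrying a tag, so related clips can be linked or juxtaposed."""
    return index.get(tag, [])

index = build_index(segments)
```

A query like `search(index, "navy")` would surface every segment on that topic across all the interviews, which is the linking behavior described above.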
After some research and usability testing of the four components with Steve Hodges, a DILAC Steering Committee member and the Head of IT for the Ivan Allen College, we found that although these platforms were well suited to many of the functions we want to perform, there were still limitations, specifically around tagging specific segments of the hour-plus videos and then searching for all segments that carry those tags/keywords. Given our time and resource constraints, our workaround was to clip the videos into pieces and treat each clip as a single object to tag and categorize. This makes the search functionality we are looking for possible.
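The clipping step itself can be scripted. As a sketch, assuming ffmpeg is used for the cuts, the following Python builds one cut command per segment; the file names and timestamps are hypothetical placeholders.

```python
def clip_commands(source, segments):
    """Build one ffmpeg command per (start, end, label) segment.

    -ss and -to select the time range; -c copy cuts without re-encoding,
    which is fast but snaps cut points to the nearest keyframe.
    """
    commands = []
    for start, end, label in segments:
        commands.append(
            ["ffmpeg", "-i", source, "-ss", start, "-to", end, "-c", "copy", f"{label}.mp4"]
        )
    return commands

# Hypothetical example: one clip about the Navy cut from the first interview.
cmds = clip_commands(
    "interview_01.mp4",
    [("00:02:00", "00:05:40", "interview_01_navy")],
)
```

Each resulting clip can then be uploaded and tagged as its own object, so a tag search returns only the relevant segments.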
Since the interviews are in Spanish, the next phase of the project is to have them transcribed by a professional team. Once that is complete, we can perform a final import of all the data, including the transcriptions, into Omeka. In the meantime, I am implementing the Omeka Neatline Timeline plugin on the site to experiment with an out-of-the-box solution for adding visualization and a thematic way of searching across the videos and interviews. The plugin will let us choose and link the items we want to see on the timeline by parameters such as date, user, tag, and collection. Our goal is for users to be able to keep using this system to add items and to link topics and collections across videos and interviews. This is a new way for me to think about search functionality: the intention is not just to locate the same word or phrase across a static body of text, but to perform a meaningful search that links topics and displays the results as a visual narrative.
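This is not Neatline's actual API, but as an illustration of the underlying idea of selecting items by a parameter such as tag and ordering them by date, a small Python sketch with made-up items might look like:

```python
from datetime import date

# Made-up items; the field names are placeholders, not Omeka's or Neatline's schema.
items = [
    {"title": "Example clip A", "date": date(1999, 5, 8), "tags": {"protest", "navy"}},
    {"title": "Example clip B", "date": date(1978, 2, 6), "tags": {"fishing", "protest"}},
    {"title": "Example clip C", "date": date(2000, 1, 15), "tags": {"navy"}},
]

def timeline(items, tag):
    """Select the items carrying a tag and order them chronologically."""
    return sorted(
        (item for item in items if tag in item["tags"]),
        key=lambda item: item["date"],
    )
```

Chaining such selections (by tag, then by collection, and so on) is the kind of linked, visual search the project is aiming for.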