I am the principal investigator for this Mellon-funded initiative (a $50,000 sub-grant through the University of Nevada, Las Vegas's Collections as Data: Part to Whole program).

Photographs, with their dual role as documents and pictures, possess unique persuasive power. Their wide-ranging use, from tokens of memory to government records, from social media to scientific findings, from artistic endeavors to forensic evidence, invests them with an authority that crosses many disciplines. Yet the cultural heritage institutions that collect and preserve them, whether libraries, museums, historical societies, or art galleries, often work in silos, with the subject matter of a particular collection determining its processing and destination. With born-digital collections, these divisions are amplified at scale. As institutions take in ever-larger collections of born-digital images, traditional item-level processing becomes impracticable at both the local and the collective level.
Another major challenge for archives, museums, and libraries is metadata creation at scale. This challenge has been exacerbated as archives in different institutional settings seek to diversify and decolonize their collections. Many of the mechanisms through which we provide access to collections rely on free-form and faceted search, and the ascendancy of free-form natural language search, as popularized by Google, has shaped the search and research patterns of many scholars. Generating metadata, however, is expensive, time-consuming, and laborious. Assigning keywords, ontologies, and schemas to images requires painstaking work by catalogers and metadata specialists, who must describe each image individually. As a result, a collection may be under-described or lack item-level descriptions entirely. Even after a collection has been described, it often needs to be re-described: the kinds of descriptive metadata that matter shift with new developments in data, information, and library science, with new areas of scholarly inquiry, and with changes in audiences. Yet re-describing a collection is usually cost-prohibitive.
Machine-based computational methods are opening up new avenues for large-scale image analysis and retrieval.
The project “Images as Data: Processing, Exploration, and Discovery at Scale” provides a model for creating, searching, and assessing data about images at large scale.
The scope of the grant includes:
- Demonstrating how computer vision can provide descriptive metadata (text data) for born-digital and digitized materials at large scale using the Distant Viewing Toolkit, a Python package built by the University of Richmond Distant Viewing Lab and funded by a National Endowment for the Humanities Digital Humanities Advancement Grant (a sketch of this kind of automated tagging appears after this list).
- Generating visual data for aggregating and analyzing visual patterns across extremely large corpora of retro-digitized and born-digital images.
- Developing user-driven recommender systems that give scholars content-based image retrieval, along with a model for image-based search (see the second sketch below).
- Providing a model for navigating rights and access with sensitive audiovisual material.
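
To make the first scope item concrete, here is a minimal sketch of computer-vision-generated descriptive metadata. It uses an off-the-shelf object detector from torchvision (Faster R-CNN trained on COCO) purely as a stand-in for illustration: the grant's actual pipeline is built on the Distant Viewing Toolkit, whose own API is not shown here, and the `images/` directory and confidence threshold are hypothetical.

```python
# Illustrative sketch: generating descriptive tags for a batch of images
# with an off-the-shelf object detector (torchvision's Faster R-CNN trained
# on COCO). This stands in for the kind of text metadata the project derives
# with the Distant Viewing Toolkit; it is not that toolkit's API.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # COCO class names


def tag_image(path: Path, threshold: float = 0.8) -> list[str]:
    """Return high-confidence object labels for one image."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        output = model([preprocess(image)])[0]
    keep = output["scores"] >= threshold  # drop low-confidence detections
    return sorted({categories[i] for i in output["labels"][keep].tolist()})


# Batch over a (hypothetical) directory, collecting label sets as
# candidate descriptive metadata for review by catalogers.
records = {p.name: tag_image(p) for p in Path("images").glob("*.jpg")}
for name, tags in records.items():
    print(name, tags)
```

At collection scale the point is not any single prediction but the batch: machine-generated tags arrive as candidates that metadata specialists can review and correct, rather than as descriptions written from scratch.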
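The second sketch shows the retrieval idea behind the recommender work: embed every image with a pretrained network, then rank the collection by cosine similarity to a query image. The model choice, file paths, and brute-force search here are illustrative assumptions, not the project's design.

```python
# Illustrative sketch of content-based image retrieval: embed images with a
# pretrained CNN, then rank the collection by cosine similarity to a query.
# Paths and model choice are hypothetical.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torchvision.models import ResNet50_Weights, resnet50

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()  # drop the classifier; keep 2048-d features
backbone.eval()
preprocess = weights.transforms()


def embed(path: Path) -> np.ndarray:
    """L2-normalized feature vector for one image."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        vec = backbone(preprocess(image).unsqueeze(0)).squeeze(0).numpy()
    return vec / np.linalg.norm(vec)


paths = sorted(Path("images").glob("*.jpg"))  # hypothetical collection
index = np.stack([embed(p) for p in paths])   # (N, 2048) feature matrix


def most_similar(query: Path, k: int = 5) -> list[tuple[str, float]]:
    """Top-k collection images by cosine similarity to the query image."""
    scores = index @ embed(query)  # cosine similarity: vectors are unit-norm
    top = np.argsort(scores)[::-1][:k]
    return [(paths[i].name, float(scores[i])) for i in top]


print(most_similar(Path("query.jpg")))
```

At the corpus sizes the grant targets, the exact dot product over the whole collection would typically give way to an approximate nearest-neighbor index, but the retrieval logic is the same: similarity in a learned feature space, searched by image rather than by keyword.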