Short Paper
Assessing Large Language Models: Architectural Archive Metadata and Transcription
Hannah Chavez Moutran, Devon Murphy, Karina Sanchez, Willem Borkgren, Katie Pierce Meyer, Josh Conrad
Abstract
Our research explores whether Large Language Models (LLMs) can offer a solution for improving the efficiency of developing detailed, rich metadata for large digitized collections. We tested the ability of seven widely available LLMs to complete four metadata generation tasks for a selection of pages from the Southern Architect and Building News (1882-1932): assigning subject headings; creating short content summaries; extracting named entities; and writing transcriptions. Our cross-departmental team evaluated the quality of the outputs, the cost, and the time efficiency of using LLMs for metadata workflows. To do so, we developed a metadata quality rubric and scoring schematic to ground our results. Analysis suggests that models can perform interpretive metadata tasks well, but lack the accuracy needed for assigning terms from controlled vocabularies. With careful implementation, thorough testing, and creative design of workflows, these models can be applied with precision to significantly enhance metadata for digitized collections.
Author information
Hannah Chavez Moutran
University of Texas at Austin Libraries, US
Cite this article
- Location: University of Barcelona, Barcelona, Spain
- Dates: October 22-25, 2025