
Shared AI Models for Iterative Preservation and Creation

Published on Apr 22, 2019

Visible storage in the Henry R. Luce Center for the Study of American Art. Image courtesy of the Met Museum.

On the mezzanine of The Met is a section of visible storage housed in the Henry R. Luce Center for the Study of American Art. Rows upon rows of glass cabinets contain a physical archive of objects arranged by material (oil paintings, sculpture, furniture and woodwork, glass, ceramics, and metalwork), then by form and chronology. Visible storage raises the question of the relationship between the objects chosen for museum display and the broader space of possible creation. Structure, collectively cultivated over generations, underlies that creation. In the Museum’s open stacks, organized variations suggest a structured feature space that could be used to characterize objects found elsewhere in the Museum, as well as work outside the collection. We can use machine learning algorithms to infer those characteristic features from a collection of existing work and build AI models of its underlying structure. A single artwork manifests an instance of that structure; learning executable models of it enables the creation of manifold instances, and quick iteration. These models and their outputs deserve rich contextualization in the modeler’s choice of training data. An algorithm that knows little else about the world can learn, from the combined features of its inputs, a recipe for new creation, ranging from homages to a particular period to chimeras of geography and style.

The experience of walking the aisles of visible storage is similar to that of exploring a generative model: take a parameter corresponding to each feature of an object, tweak it, and create another instance. The explorer sees rows of silver bowls seemingly identical except for handles of varying curvature, or silverware sets with increasingly intricate patterning. I like to think of generated images in a similar manner: organizable constellations of possibility, variations on a cultural theme, suggestions of what we could create in the future, based on what we already have.
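To make the analogy concrete, here is a minimal sketch of that kind of exploration in code. Everything in it is illustrative: the generator is an untrained stand-in (in practice it would be a GAN generator trained on a digitized collection such as The Met’s Open Access images), and the latent dimension being walked is chosen arbitrarily. The point is only the mechanics of holding an object’s features fixed and tweaking one of them.

```python
import torch

# Stand-in generator: an untrained network mapping a latent vector z
# to a flattened 64x64 RGB image. In practice this would be a GAN
# generator trained on a digitized collection.
LATENT_DIM = 128
generator = torch.nn.Sequential(
    torch.nn.Linear(LATENT_DIM, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 3 * 64 * 64),
    torch.nn.Tanh(),
)

z = torch.randn(LATENT_DIM)              # the features of one "object"
image = generator(z).reshape(3, 64, 64)  # one instance

# Walk a single latent dimension, holding the rest fixed: rows of
# silver bowls identical except for the curvature of their handles.
variations = []
for delta in torch.linspace(-3.0, 3.0, steps=7):
    z_tweaked = z.clone()
    z_tweaked[42] += delta               # dimension chosen arbitrarily
    variations.append(generator(z_tweaked).reshape(3, 64, 64))
```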

Pairs of real images from The Met collection (L), and generated images (R). Images courtesy of the author.

Here we find one great beauty of artificial intelligence (AI) intermingling with art: the emergence of a toolkit for collaborating with a model of cultural history on a timeline allowing experimentation and rapid evolution in the present. The toolkit should be broadly and openly accessible, allowing everyone to take part in cultivating the collective structure that underlies culture and creation. Applying these AI models to digital collections of cultural works gives us the ability to interpolate, and to iterate. Interpolating lets us explore the interstices of a collection to discover, in the space between existing works, echoes of works that could have been created but never were. Iterating lets us explore thousands of variations and select feature combinations to inspire the next generation of outputs, evolving collections forward.
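To sketch what interpolation means in practice: given latent codes for two existing works (assumed already recovered, for example by projecting each image into the model’s latent space), codes evenly spaced between them decode into images from the space between the originals. The names below (generator, z_vase, z_bowl) are illustrative placeholders, not part of any released API.

```python
import torch

def interpolate_latents(z_a: torch.Tensor, z_b: torch.Tensor, steps: int = 10):
    """Return latent codes evenly spaced between two works' codes.

    Decoding each intermediate code with a trained generator yields an
    image from the space between the two originals: an echo of a work
    that could have been created but never was.
    """
    alphas = torch.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_a + a * z_b for a in alphas]

# Illustrative usage, assuming a trained generator and latent codes
# recovered for two existing works:
# frames = [generator(z) for z in interpolate_latents(z_vase, z_bowl)]
```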

A prototype of this vision came to life during the recent Met x Microsoft x MIT collaboration. To experiment with archive interpolation and evolution, the group I joined implemented machine learning models of the structure underlying different categories in The Met collection, and built an interactive web studio where visitors can explore and experiment with that structure. Gen Studio places existing Met images on a map of an associated latent space, a space of features that can be used to describe the structure and appearance of existing artwork. Those features can be recombined to create new images using trained neural networks known as generative adversarial networks (GANs). As visitors explore the map, they see new images generated from the features of The Met images, weighted by their distance on the map from each one. This experience was designed to be generist: conveying both the uniqueness of each image and its potential to be iterated. As a final step, visitors can search for works in The Met collection that are visually similar to a generated image. You can read more about the project in an interview by Will Fenstermaker, an editor in The Met's Digital Department, try out an online version of the web studio, and explore a set of guides and demos for setting up your own studio.

The experience of Gen Studio on the web. Visitors navigate the map to generate images, and can then explore visually similar objects in The Met collection. Video courtesy of the author.
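The precise weighting Gen Studio applies isn’t spelled out above, so the sketch below assumes a simple inverse-distance scheme: the latent codes of the Met images anchored on the map are blended according to how close the visitor’s position sits to each anchor, and the blended code is what the generator decodes into a new image.

```python
import torch

def blend_latents(cursor_xy: torch.Tensor,
                  anchors_xy: torch.Tensor,
                  anchor_latents: torch.Tensor,
                  eps: float = 1e-6) -> torch.Tensor:
    """Blend anchor latent codes by inverse distance from the cursor.

    cursor_xy:      (2,)   the visitor's position on the 2-D map
    anchors_xy:     (N, 2) map positions of existing Met images
    anchor_latents: (N, D) their latent codes
    Returns a (D,) latent code for the generator to decode.
    """
    dists = torch.linalg.norm(anchors_xy - cursor_xy, dim=1)  # (N,)
    weights = 1.0 / (dists + eps)        # nearer anchors dominate
    weights = weights / weights.sum()    # normalize to a convex blend
    return weights @ anchor_latents
```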

On February 4, we presented this project in a showcase in the Great Hall of The Met Fifth Avenue. We invited visitors to step into the latent space of The Met collection via an immersive, projected map of existing images, their nearest neighbors in latent space, and sequences of images interpolating between them. In the sets of interstitial images, we find a dreamlike complement to The Met collection: its features manifest in new forms, with combinations both strange and familiar. In the models, we find potential energy: the suggestion, just beneath the surface, that an infinite number of such images could be created, then taken as inspiration to refine the models themselves. A feedback loop befitting a twenty-first-century articulation of iterative creativity.
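Both the visual-similarity search in Gen Studio and the nearest-neighbor maps at the showcase reduce to the same operation: finding the collection images whose latent codes lie closest to a query code. Here is a minimal version, assuming codes have been precomputed for the collection; the function name and Euclidean metric are my own choices, not the project’s.

```python
import torch

def nearest_met_works(query_latent: torch.Tensor,
                      collection_latents: torch.Tensor,
                      k: int = 5) -> torch.Tensor:
    """Return indices of the k works whose latent codes are closest
    (in Euclidean distance) to the query code.

    query_latent:       (D,)   code of a generated or existing image
    collection_latents: (N, D) precomputed codes for collection images
    """
    dists = torch.cdist(query_latent.unsqueeze(0), collection_latents)  # (1, N)
    return torch.topk(dists.squeeze(0), k=k, largest=False).indices
```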

Sarah Schwettmann and SJ Klein at their Generist Maps demo at the Met on February 4, 2019. Images courtesy of Victor Castro.

Culture is iteration; existing works inspire future creation. Reducing barriers to iteration enables long chains and complex networks of collaboration, transcending the notion of sole authorship. In an era where AI models can accelerate this process, we need infrastructure and norms around contributing to shared models, and around the idea of models as a shared resource. Together with colleagues at the Cyberlaw Clinic at Harvard's Berkman Klein Center, I recently released a pair of legal templates for collaborations where AI models are trained on existing art. These templates are an invitation to collaborate in a way that is approachable and fair, and a first step toward an ecosystem of models for iterative creation.

In such an ecosystem, AI models could rekindle an ancient relationship to making, where the creation of beauty involves its enactment through a collectively refined process. Our collaborator on the Gen Studio project, Kim Benzel, curator in charge of the Department of Ancient Near Eastern Art at The Met, highlighted similarities between ritualized creation of objects in her collection and the use of AI models as tools for repeating and modulating a creative process. Both emphasize process over product, where outputs are linked more strongly to the creative heritage they embody than to an individual author, and exist as manifestations of models for making that can be run again. In addition to cataloguing collections of creative work, archives of the future could identify and describe a variety of corresponding world models for generating new entries in a collection. That is, built into such archives could be a recipe for their evolution. We are excited about the potential for a future where these models become an explicit part of shared heritage, and participation in training and evolving them becomes a shared cultural practice.


Related Content

Read more about how The Met is exploring artificial intelligence in an article by Chief Digital Officer Loic Tallon: "Sparking Global Connections to Art through Open Data and AI."
The Met's Collection API is available on GitHub.
Read more about the Generist Project.
The Generist Maps web studio is available online at Gen.Studio.
Download legal template agreements for artist-AI collaborations.
Learn more about The Met's Open Access initiative, the tagging initiative, and the prototypes developed at The Met x Microsoft x MIT hackathon.
