Transfer Learning, 2022
Digital print, 25″ × 64″
This print illustrates the process of transfer learning, a machine learning technique in which a model trained on one task is reused, usually to save time and computing resources, for another related task. Because transfer learning is often a hidden process, computer scientists such as Timnit Gebru, concerned with algorithmic bias, advocate for transparency in model training.
In this print, a series of uncanny images reveals a machine learning model’s transition from a training dataset to a primary one. In this instance, a StyleGAN model initially trained on Flickr-Faces-HQ (a commonly used dataset of 70,000 face photographs scraped from Flickr) begins training on the UVM Art + AI Research Group’s Athena Dataset (a series of artist-made glyphs). Each image documents a successive training step in the transition.
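The underlying idea can be sketched in a few lines of code. This is not the artist’s StyleGAN pipeline, just an illustrative toy: a tiny linear model is first trained on one task (the “training dataset”), and its learned weight is then reused as the starting point for a related task (the “primary dataset”). All of the names, data, and learning rates below are invented for the example.

```python
# Illustrative sketch of transfer learning (NOT the StyleGAN process
# described above): pretrain a one-parameter model, then fine-tune it
# on a related task instead of starting from scratch.

def train(w, xs, ys, lr=0.01, steps=200):
    """Gradient descent on mean squared error for the model y ≈ w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [0.5, 1.0, 1.5, 2.0]

# Task A (stand-in for the original training dataset): y = 2.0 * x
ys_a = [2.0 * x for x in xs]
w_pretrained = train(0.0, xs, ys_a)

# Task B (stand-in for the related primary dataset): y = 2.2 * x
ys_b = [2.2 * x for x in xs]

# With only 20 steps, the pretrained weight adapts to task B far better
# than a weight trained from scratch, because the tasks are related.
w_finetuned = train(w_pretrained, xs, ys_b, steps=20)
w_scratch = train(0.0, xs, ys_b, steps=20)

print(loss(w_finetuned, xs, ys_b) < loss(w_scratch, xs, ys_b))  # True
```

The savings in steps is the “time and computing energy” the label refers to; in the print, each intermediate weight state corresponds to one of the uncanny in-between images.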
Ultimately, I discarded this model’s output because I did not like the results. Transfer Learning documents the failed experiment and illustrates data’s hidden role in training AI models.