

This is produced by the StyleGAN at the beginning of training.

StyleGAN has been updated a few times, and in 2020 StyleGAN2-ADA was released, which now allows us to train a network on very few images (as few as 500–1000, as opposed to the 10,000s needed just a couple of years ago), and it will learn to produce good quality fakes in a very short period of time (a few hours of training). This advancement allowed me to feel like I could attempt to train my own, as 1000 images was something I could get, and I felt like I could train the model during the course of a hack day and get some kind of result to show at the end.

The biggest (active) part of this project time-wise was collecting the images and making them look roughly uniform by cropping them to the same size and having the subject roughly in the same spot in each one. This was important as, when training this model, I wanted the network to focus on the subject of the image itself and not, for example, on where in the image the subject was, as that would waste precious training time.

I looked through the BHL Flickr and decided to use scientific illustrations of birds. I love birds and the style of scientific illustrations, and I was excited at the idea of the new and weird birds my StyleGAN model might produce! I chose illustrations in which the bird was in the centre of the page and the rest of the space was ideally empty. Due to time constraints, I did allow some illustrations with more than one bird in them and some with background features such as grass. This wasn’t ideal for the above reasons, but I decided to give it a go and see what the results would be like anyway.

A contact sheet of “reals”, or real images taken from my dataset from the BHL Flickr.
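The cropping step described above can be sketched with Pillow. The directory names, the `*.jpg` glob, and the 1024×1024 target size are my assumptions for illustration; the important point, as described above, is that every image ends up the same square size with the subject centred.

```python
# A minimal sketch of uniform dataset preprocessing, assuming Pillow.
# Paths, the JPEG extension, and the 1024x1024 target are placeholder choices.
from pathlib import Path
from PIL import Image

SIZE = 1024  # assumed target; every image must share one square size

def preprocess(src_dir: str, dst_dir: str) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB")
        # Centre-crop to a square so the subject sits in roughly the same spot.
        side = min(img.size)
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img = img.resize((SIZE, SIZE), Image.LANCZOS)
        img.save(out / path.name, quality=95)
```

In practice the author did this curation partly by hand; a script like this only handles the mechanical crop-and-resize part, not choosing which illustrations qualify.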
The purpose of a StyleGAN is to train and learn from input images and produce its own “fake” images back. You may have seen the site, which features fake photos of people which a StyleGAN has produced. This network has been trained on thousands of images of people’s faces and has in turn learned to produce new images which (for the most part) look like real photographs of people’s faces.

Three images: two successful, one less so…

A StyleGAN does this by processing through the set of real images it is given numerous times, during which it also produces its own images. The “adversarial” part of the name refers to the fact that a StyleGAN actually comprises two Neural Networks which work against each other. The first network produces the fake images, and the second will look at those images and try to determine whether it thinks they are “real” images or “fake.” The longer each network trains, the better each gets at its job, and the more likely the images produced will trick the second network, and ultimately also humans, into thinking they’re real.
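The two-network idea above can be made concrete with a toy training loop. This is emphatically not StyleGAN itself: the tiny fully-connected networks, the dimensions, and the optimiser settings below are placeholder assumptions, used only to show the adversarial back-and-forth between a generator and a discriminator in PyTorch.

```python
# Toy sketch of adversarial training, assuming PyTorch.
# Network shapes and sizes are illustrative stand-ins, not the StyleGAN code.
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64  # assumed toy dimensions

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, img_dim), nn.Tanh())        # generator: makes fakes
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                          # discriminator: judges them

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor):
    batch = real.size(0)
    # 1) Train the discriminator to label real images 1 and fakes 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Train the generator to produce fakes the discriminator calls "real".
    z = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to `train_step` is one round of the contest: the discriminator improves at spotting fakes, then the generator improves at fooling it, which is exactly the dynamic described above.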

StyleGAN stands for Style Generative Adversarial Network and was originally developed by NVLabs.
Hack days allow us to get the creative juices flowing and to further our agenda of innovation. The theme this past hack day at Cogapp was “Museum APIs,” but the looser interpretation was that we were to use open data provided by museums in our projects. I was inspired by the Biodiversity Heritage Library’s Flickr, which is a massive collection of free-to-use scientific images. I immediately knew I wanted to utilise this resource, as I love scientific illustrations of nature. I’ve also had an interest in Machine Learning (ML) for a while, and I recently discovered Derrick Schultz and his YouTube channel Artificial Images. Here, he publishes videos of the Machine Learning courses he runs for people who want to use ML for creative purposes. I watched Derrick’s tutorials on training a StyleGAN Neural Network and the things he was saying made a degree of sense to me; plus, he had published a handy Google Colab notebook with step-by-step code, so I decided it was something I might be able to have a go at.

AI generated illustrations of birds, created with illustrations from the BHL Flickr using machine learning with a StyleGAN.

I work as a web developer for the agency Cogapp, which is based in Brighton, UK. We create websites and other digital services for museums, art galleries, archives and the like, but every couple of months we hold a “hack day.” A hack day involves spending a day working on projects which generally revolve around a particular theme and which ideally we can do in one day.
