This AI network can turn any landscape into a sun-bleached, breezy anime backdrop while retaining key details.
Opinions may vary, but personally I’ve always wished real life was just a touch more like it was in animated movies. Imagine tucking into a steaming slice of pie as rendered by Hayao Miyazaki in Kiki’s Delivery Service, or walking through the searing sunlight of Makoto Shinkai’s picturesque offerings. While the late Satoshi Kon’s content is less peaceful, there’s still something hypnotic about his swooping scenery and dream-like aesthetics. This is presumably why films from all three creatives were specially selected to train an AI program to convert photographs into anime-style art.
Jie Chen, Gang Liu and Xin Chen, students at Wuhan University and Hubei University of Technology, worked together to produce AnimeGAN, a new generative adversarial network (or GAN) designed to fix the issues that plague existing methods for converting photographs into art-like images. As they state in their original paper, manually creating anime can be “laborious”, “difficult” and “time-consuming”, so having the option of converting photography would reduce the workload and maybe even inspire more people to try their hand at producing anime.
▼ A photograph…
▼ And here’s how it looks after being put through AnimeGAN’s “Hayao” filter.
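For the technically curious, the “adversarial” part of a GAN boils down to two networks playing against each other: a generator that converts photographs into anime-styled images, and a discriminator that tries to tell those conversions apart from genuine anime frames. Below is a minimal, heavily simplified sketch of that idea in PyTorch; the layer sizes, plain GAN losses and dummy data are illustrative assumptions of mine, not the authors’ actual TensorFlow implementation (which also adds content and colour losses so the converted photo stays recognizable).

```python
# Minimal sketch of the adversarial setup behind a photo-to-anime GAN.
# NOT the AnimeGAN code itself; sizes and losses are simplified assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a photograph to an anime-styled image of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),  # output pixels in [-1, 1]
        )

    def forward(self, photo):
        return self.net(photo)

class Discriminator(nn.Module):
    """Scores how much each patch of an image looks like a real anime frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # one realism score per patch
        )

    def forward(self, image):
        return self.net(image)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batches standing in for real training data: ordinary photographs
# and frames taken from an anime film.
photos = torch.rand(4, 3, 256, 256) * 2 - 1
anime_frames = torch.rand(4, 3, 256, 256) * 2 - 1

# One adversarial step. First the discriminator learns to tell real anime
# frames (label 1) from converted photos (label 0)...
fake = G(photos)
real_logits = D(anime_frames)
fake_logits = D(fake.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# ...then the generator learns to fool it. (The real AnimeGAN also adds
# content and colour losses here to keep the input photo's details intact.)
gen_logits = D(fake)
g_loss = bce(gen_logits, torch.ones_like(gen_logits))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```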
August 6 saw the release of AnimeGAN 2, which has been improved in various ways. The library of images the network draws upon to create a new interpretation has been updated with a host of new Blu-ray quality images, and high-frequency artifacts in the output have been reduced. Furthermore, the effects shown by the team in their original paper are reportedly much easier to reproduce.
Where the network really shines is in its ability to retain the features of its source images. When comparing AnimeGAN with existing state-of-the-art networks, it's apparent that some aspects of the photographs, such as trees or windows, are smoothed and blurred so much as to become unrecognizable. AnimeGAN not only retains these finer details but takes less time to do so, as long as it's been adequately trained!
The best part is that you can test a version of the first AnimeGAN right now, with no need to download any additional software. This online demo doesn't let you choose between the three filters, but it still makes for a pretty cute picture in its own right.
▼ As seen here, on my beautiful and photogenic cat.
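If you'd rather run a trained model on your own machine than rely on the online demo, inference amounts to loading a generator, normalizing a photo, and saving the converted output. The sketch below assumes a hypothetical TorchScript export of a trained generator (“hayao_generator.pt”) and hypothetical filenames; the actual AnimeGAN repository ships its own TensorFlow test scripts instead.

```python
# Hedged sketch of running a trained photo-to-anime generator on one image.
# The checkpoint and filenames are hypothetical stand-ins.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                       # PIL image -> tensor in [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # -> [-1, 1], matching a Tanh-output generator
])

# Hypothetical TorchScript export of a trained generator (e.g. a "Hayao"-style one).
generator = torch.jit.load("hayao_generator.pt").eval()

photo = preprocess(Image.open("cat_photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    anime = generator(photo)

# Map the output from [-1, 1] back to [0, 1] before writing it to disk.
save_image(anime * 0.5 + 0.5, "cat_anime.png")
```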
I must admit, I can't help but wonder about the naming of the filters. Obviously, each one refers to the image bank it was trained on: “Hayao” is sourced from Hayao Miyazaki’s The Wind Rises, while “Paprika” comes from Satoshi Kon’s film of the same name; stills from Makoto Shinkai’s Your Name make up the “Shinkai” version. While alternating between a director’s given name, a surname and a film title is artsy, it does have the unintended effect of muddying both the connection between the three filters and the credit due to the films’ directors.
Still, since the network is apparently easy to train, there's nothing stopping people from making their own Kon, Totoro, or Makoto filters in the future; heck, maybe you could even render reality in the pastel pigments of Sailor Moon if you fed it enough images? The sky, and indeed your own photographic skill, is the limit!
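As a rough idea of what “feeding it enough images” might involve, the sketch below shows one way to wrap a folder of extracted stills as a training dataset. The folder name and FrameDataset class are illustrative assumptions, not anything shipped with the AnimeGAN project.

```python
# Hedged sketch: turning a folder of grabbed frames into a style dataset.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class FrameDataset(Dataset):
    """Loads still frames (e.g. screenshots grabbed from one show) as training images."""

    def __init__(self, folder):
        self.paths = sorted(Path(folder).glob("*.png"))
        self.transform = transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),
            transforms.Normalize([0.5] * 3, [0.5] * 3),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        return self.transform(Image.open(self.paths[idx]).convert("RGB"))

# Hypothetical folder of extracted stills; more (and more varied) frames
# generally give the network a clearer picture of the target style.
frames = DataLoader(FrameDataset("sailor_moon_frames/"), batch_size=4, shuffle=True)
```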
Source: GitHub/TachibanaYoshino/AnimeGAN (1, 2, 3)
Top image: GitHub/TachibanaYoshino/AnimeGAN
Insert images: GitHub/TachibanaYoshino/AnimeGAN (1, 2)
Cat images ©SoraNews24
● Want to hear about SoraNews24’s latest articles as soon as they’re published? Follow us on Facebook and Twitter