Since last year, a group of artists has been using an AI image generator called Midjourney to create still photos of films that don’t exist. They call the trend “AI cinema.” We spoke to one of its practitioners, Julie Wieland, and asked her about her synthetic photography technique, which she calls “synthography.”
Origins of “AI cinema” as a still image art form
In the past year, image synthesis models such as DALL-E 2, Stable Diffusion, and Midjourney have made it possible for anyone with a textual description (called a “prompt”) to create a still image in a wide variety of styles. The technique has been controversial among some artists, but others have embraced the new tools and are working with them.
While anyone with a prompt can create an AI-generated image, it soon became clear that some people had a special talent for guiding these new AI tools to produce more compelling work. As with painting or photography, the human creative spark is still needed to consistently produce notable results.
Shortly after single-image generation arrived, some artists began creating sets of AI-generated images on a shared theme, rendered in a wide, cinematic aspect ratio. Together, the images attempted to tell a story, and the artists posted them on Twitter with the hashtag #aicinema. Due to technological limitations, the images did not move (yet), but each set gave the aesthetic impression that every frame came from the same movie.
The most interesting part: these films do not exist.
The first tweet we could find that included the #aicinema tag and the now-familiar set of four movie-style images on a related theme came from John Finger on September 28, 2022. Wieland, a graphic designer who has been practicing AI cinema for several months, acknowledges Finger’s pioneering role in the art form, along with another artist. “I probably saw it first from John Meta, and another from John Finger,” she says.
It is worth noting that the AI cinema movement, in its current still-image form, may be short-lived as text2video models such as Runway Gen-2 become more capable and widespread. In the meantime, we’ll try to capture the zeitgeist of this brief AI moment.
Julie Wieland’s AI art story
For more information on the #aicinema movement, we spoke to Wieland, who lives in Germany and has a large following on Twitter, where she posts attention-grabbing works of art generated by Midjourney. We previously covered her work in an article about Midjourney v5, a recent model update that adds more realism.
AI art has been a fruitful field for Wieland, who believes that Midjourney not only gives her a creative outlet but also speeds up her professional workflow. This interview was conducted via direct messages on Twitter, and her responses have been edited for clarity and length.
Ars: What inspired you to create film stills with the help of AI?
Wieland: It all started with the DALL-E app, when I finally got access after several weeks on the waiting list. To be honest, I didn’t much like the “drawn astronaut dog in space” aesthetic that was very popular in the summer of 2022, so I wanted to see what else was out there in the AI universe. With Midjourney I was able to get good results, and I used them pretty quickly in my day-to-day job as a graphic designer, for moodboards and presentations.
With Midjourney, I have cut the time I spend hunting for inspiration on Pinterest and stock sites from two days of work to maybe 2-4 hours, because I can create exactly the feeling I need to convey to clients so they know how the result will “feel.” From there, briefing illustrators, photographers, and videographers becomes even easier.