

Artists amaze with AI-generated film stills from a parallel universe



Enlarge / An AI-generated image from the #aicinema photo series titled “Vinyl Vengeance” by Julie Wieland, created with Midjourney.

Since last year, a group of artists has been using an artificial intelligence image generator called Midjourney to create still photos from films that don’t exist. They call the trend “AI cinema.” We spoke with one of its practitioners, Julie Wieland, and asked her about her synthetic photography technique, which she calls “synthography.”

Origins of “AI cinema” as a still image art form

Over the past year, image synthesis models such as DALL-E 2, Stable Diffusion, and Midjourney have made it possible for anyone with a text description (called a “prompt”) to create a still image in a wide variety of styles. The technique has been controversial among some artists, but others have embraced the new tools and are working with them.

While anyone with a prompt can create an AI-generated image, it soon became clear that some people have a special talent for coaxing better results out of these new AI tools. As with painting or photography, the human creative spark is still needed to consistently produce notable work.

Shortly after the novelty of single-image creation arrived, some artists began creating sets of AI-generated images with a shared theme, rendered in a wide, cinematic aspect ratio. Together the images attempted to tell a story, and the artists posted them on Twitter with the hashtag #aicinema. Due to technical limitations the images did not move (yet), but each group of images gave the aesthetic impression that they had all been pulled from the same movie.

The most interesting thing is that these films do not exist.

The earliest tweet we could find that included the #aicinema tag, along with the now-familiar set of four movie-style images on a related theme, came from John Finger on September 28, 2022. Wieland, a graphic designer by day who has been practicing AI cinema for several months, credits Finger with pioneering the art form along with another artist. “I probably saw it first from John Meta and John Fingers,” she says.

It is worth noting that the AI cinema movement in its current still-image form may be short-lived as text2video models such as Runway’s Gen-2 become more capable and widespread. In the meantime, we’ll try to capture the zeitgeist of this brief AI moment.

Julie Wieland’s AI art story

To learn more about the #aicinema movement, we spoke to Wieland, who lives in Germany and has a sizable following on Twitter, where she posts eye-catching artwork generated with Midjourney. We previously covered her work in an article about Midjourney v5, a recent model update that adds more realism.

AI art has been a fruitful field for Wieland, who says Midjourney not only gives her a creative outlet but also speeds up her professional workflow. The interview was conducted via direct messages on Twitter, and her responses have been edited for clarity and length.

Ars: What inspired you to create frames from films with the help of AI?

Wieland: It all started with the DALL-E app when I finally got access after several weeks on the waiting list. To be honest, I didn’t care much for the “illustrated astronaut dog in space” aesthetic that was so popular in the summer of 2022, so I wanted to see what else the AI universe had to offer. I got good results and quickly started using them in my daily job as a graphic designer, for moodboards and presentations.

With Midjourney, I’ve cut the time I spend hunting for inspiration on Pinterest and stock sites from two days of work down to maybe 2-4 hours, because I can create exactly the feeling I need to convey to clients so they know how it will “feel.” From there, briefing illustrators, photographers, and videographers has become even easier.


Messenger adds multiplayer games that can be played during video calls



Facebook Gaming, a division of Meta, has announced that you can now play games during video calls in Messenger. At launch, 14 free games are available in Messenger video calls on iOS, Android, and the web, including popular titles such as Words With Friends, Card Wars, Exploding Kittens, and Mini Golf FRVR.

To access the games, start a video call in Messenger and press the group mode button in the center, then tap the Play icon. From there, you can browse the game library. The company notes that you must have two or more people on the call to play.

“Facebook Gaming is pleased to announce that you can now play your favorite games during video calls on Messenger,” the company said in a blog post. “This new shared experience on Messenger makes it easy to play games with friends and family during a video call, allowing you to strengthen bonds while engaging in conversation and play.”

The company says it is working on adding more free games to the platform this year. Facebook Gaming invites developers interested in integrating this feature into their games to contact the company.

Image Credits: Facebook Gaming

The news comes after Facebook shut down its standalone Facebook Gaming app last October. The app launched in April 2020, at the start of the pandemic, to let users watch their favorite streamers, play instant games, and participate in gaming groups. At the time, Facebook noted that users would still be able to find their games, streamers, and groups by visiting Gaming in the main Facebook app.

Although Facebook has experimented with games in Messenger over the past few years, the ability to play games quickly and easily while video chatting could be a welcome addition for some users.

The launch comes after Facebook recently announced that it is testing letting users access their Messenger inbox within the Facebook app. Back in 2016, Facebook removed messaging capabilities from its mobile web app to push people toward the Messenger app, which annoyed many users. The company is now testing a reversal of that decision.



Opinion: Are ChatGPT and other AI models really threatening our work?



Could a bot write this intro? The buzz around AI has pushed the question to the forefront of public discussion. Conversational bots like ChatGPT and image generators like OpenAI’s DALL-E are popping up everywhere.

Despite a few optimistic case studies of their potential, current models are limited, and the results they provide are deeply flawed. But it doesn’t seem to matter that the tech behind the AI boom isn’t ready for prime time. The models only have to tell a compelling story to the people signing the checks, and they do.

Microsoft, which developed its own Bing chatbot, has invested $13 billion in OpenAI. Early-stage venture capital firms poured $2.2 billion into generative AI last year alone, and this year Salesforce announced a $250 million fund to invest in the space. Headlines and institutions announce AI as the future of work, ready, among other things, to replace writers and artists. That these breathless predictions outpace the quality of the technology itself says a lot about our cultural moment and our longstanding tendency to devalue creative work.

The supposed promise of AI’s future is efficient and abundant content. Office workers can now generate entire presentations with a query or a click. Creative agencies use image generators to mock up client concepts. Even literary magazines report being bombarded with AI-generated submissions, and yes, editors are publishing AI-generated articles and illustrations.

But AI models have proved time and again that they perpetuate bias, misunderstand cultural context, and prioritize sounding persuasive over telling the truth. They rely on datasets of human creative work, an approach that might otherwise be called plagiarism or data collection. And the models that drive ChatGPT and DALL-E are black boxes, so it’s technically impossible to trace the origin of the data.

Today, these and other models require people (with their own biases) to train them toward “good” results and then to check their work at the end. Because the tools are built for pattern matching, their results are often repetitive and derivative: an aesthetic of sameness rather than invention.

So the incentive to replace human workers doesn’t come from the technology’s capabilities. It stems from years of companies large and small, especially in publishing, tech, and media, tightening the screws on creative work in order to spend less and less on the people who do it.

In today’s financial downturn, even tech companies are cutting costs through mass layoffs (including of AI ethics teams) while funding and selling AI tools. But the situation is more dire for writers, artists, and musicians, who have long struggled to earn a living.

Pay for writers, editors, and illustrators in this country has stagnated over the past two decades. Some countries have begun to treat art as a public good: Ireland is experimenting with paying artists to create art, and other countries subsidize audiences for the arts. But in the US, public arts funding falls embarrassingly short compared to other wealthy Western countries, and it dropped even further during the pandemic. Many artists must hop from one social media app to another to build an audience for their work and generate income.

Meanwhile, the omnipresent stream of subscriptions and algorithmic feeds, with their laser focus on maximizing engagement, has turned creativity into an endless scroll of mediocre, recurring styles: an unsustainable model for original work.

Automation is the next chapter in this story of ever-cheaper content. How low can the cost of art go? Why pay artists a living wage when machines can be programmed to churn out interchangeable pieces of content?

Because these models will not replace human creative work. If we want to break out of repetitive patterns, dismantle biases, and create new possibilities, the work must come from people.

The danger of reducing creative work to outsourceable widgets is that we lose the stages of reflection and iteration that forge new connections. The large language models that underlie chatbots are designed to deliver a single authoritative answer that encapsulates the world within the data they have already ingested.

The human brain, on the other hand, has a unique capacity for recursive processing that lets us interpret ideas beyond a fixed set of rules. Each step of the creative process, no matter how slow, small, or boring, is itself a creative act: taking a concept somewhere new and imagining a wider world than the one that exists today.

An AI takeover is not inevitable, despite what some business and technology leaders say. This is not the first tech hype cycle, and some regulators, unions, and artists are already pushing back. After the cryptocurrency crash, the Federal Trade Commission established an Office of Technology to support enforcement in emerging technical areas, and the agency has issued several public warnings that false claims about AI products’ capabilities will be challenged.

The Writers Guild of America, which is preparing to strike, has proposed safeguards and standards for the use of AI in screenwriting. SAG-AFTRA, the union representing film, television, and radio performers, has said that if studios want to use AI to simulate actors’ performances, they will have to negotiate with the union. Some researchers are building tools to protect visual artists’ work from being swallowed up by image-generator models, and others have launched open source systems to highlight biases in AI models.

But the broader call to action is cultural: to recognize that creative work is not just product or content but a necessary and highly skilled practice that deserves solid funding and support. Creativity is how meaning gets made in a culture. That is a task machines cannot accomplish, and it should not be controlled by the companies that make them. A bot could quickly write the end of this story, but we have to ask ourselves: whose voices do we really want?

Rebecca Ackermann has written about technology and culture for MIT Technology Review, Slate, and other outlets. She previously worked as a designer at technology companies including Google and NerdWallet.



Twitter: BBC opposes ‘publicly funded media’ label



The corporation says it wants the issue resolved after one of its main accounts received the new label.
