TECH

Messenger adds multiplayer games that can be played during video calls.

Facebook Gaming, a division of Meta, announced that you can now play games during video calls in Messenger. At launch, 14 free games are available in Messenger video calls on iOS, Android, and the web, including popular titles such as Words With Friends, Card Wars, Exploding Kittens, and Mini Golf FRVR.

To access the games, start a video call in Messenger, press the group mode button in the center, then tap the Play icon. From there, you can browse the game library. The company notes that a call must have two or more people on it to play the games.

“Facebook Gaming is pleased to announce that you can now play your favorite games during video calls on Messenger,” the company said in a blog post. “This new shared experience on Messenger makes it easy to play games with friends and family during a video call, allowing you to strengthen bonds while engaging in conversation and play.”

The company says it is working on adding more free games to the platform this year. Facebook Gaming invites developers interested in integrating this feature into their games to contact the company.

Image credits: Facebook Gaming

The news comes after Facebook shut down its standalone Facebook Gaming app last October. The app was launched in April 2020, at the start of the pandemic, to let users watch their favorite streamers, play instant games, and participate in gaming groups. At the time, Facebook noted that users would still be able to find their games, streamers, and groups by visiting Gaming in the main Facebook app.

Although Facebook has experimented with games in Messenger over the past few years, a quick and easy way to play games while video chatting could be a welcome addition for some users.

The launch comes after Facebook recently announced that it is testing the ability for users to access their Messenger inbox from within the Facebook app. Back in 2016, Facebook removed messaging capabilities from its mobile web app to push people to the Messenger app, which annoyed many users. The company is now testing a reversal of that decision.

TECH

Arkansas sues TikTok, Meta over privacy and child safety concerns

LITTLE ROCK, Ark. (AP) — The state of Arkansas filed a lawsuit Tuesday against TikTok and Facebook parent company Meta, alleging that the social networks mislead consumers about the safety of children on their platforms and the protection of users’ privacy.

The state filed two lawsuits against TikTok and its Chinese parent company, ByteDance, and a third against Meta, which also owns Instagram, accusing the companies of violating the state’s deceptive trade practices law.

“TikTok is a wolf in sheep’s clothing,” says one lawsuit filed in state court. “As long as TikTok is allowed to deceive and mislead Arkansas consumers about the risks to their data, those consumers and their privacy are easy prey.”

Gov. Sarah Huckabee Sanders and Attorney General Tim Griffin, both Republicans, announced the legal action against TikTok amid growing questions about the security of its users’ data. Both FBI and FCC officials have warned that ByteDance may share TikTok user data with China’s authoritarian government.

One of Sanders’ first steps after she was sworn in as governor of Arkansas in January was to sign an executive order to ban TikTok from government devices.

One of the lawsuits alleges that TikTok fails to take appropriate steps to protect minors using the platform from inappropriate content, including sexual content and material depicting drug or alcohol use. TikTok did not immediately respond to an email asking for comment on Tuesday.

“Over the past decade, we have seen one social media company after another exploit our children for profit and escape government scrutiny,” Sanders said in a statement released by her office. “My administration will not tolerate this unfortunate status quo.”

The lawsuit against Meta accuses the company of manipulating Facebook to maximize the amount of time young people spend on the platform, which it says has contributed to mental health issues among the state’s youth.

On Tuesday, Meta revealed the steps it has taken to protect teens on its platforms, including age verification technology and technology that finds and removes content related to suicide, self-harm or eating disorders.

“These are complex issues, but we will continue to work with parents, experts, and regulators such as state attorneys general to develop new tools, features, and policies that meet the needs of teens and their families,” Antigone Davis, Meta’s head of safety, said in a statement.

Indiana filed a similar lawsuit against TikTok last year, alleging that the video-sharing platform misleads its users, particularly children, about the level of inappropriate content and the security of consumer information. In January, Seattle’s public school district also sued the tech giants behind TikTok, Instagram, Facebook, YouTube, and Snapchat, seeking to hold them accountable for the youth mental health crisis.

Sanders also supports a bill moving through the Arkansas Legislature that would require parental permission for those under 18 to use social media sites. The proposal, approved by a Senate committee on Tuesday, would require social media sites to verify users’ ages.

Last week, Utah became the first state to enact such a requirement, but experts have questioned how such rules might be enforced and whether they could lead to unintended consequences.


TECH

Opinion: Are ChatGPT and other AI models really threatening our work?

Could a bot have written this intro? The buzz around AI has pushed that question to the forefront of public discussion. Conversational bots like ChatGPT and automatic image generators like OpenAI’s DALL-E are popping up everywhere.

Despite a few optimistic case studies of their potential, current models are limited and the results they produce are deeply flawed. But it doesn’t seem to matter that the tech behind AI isn’t ready for prime time. The models only have to tell a compelling story to the people signing the checks, and they do.

Microsoft, which built its own Bing chatbot, has invested $13 billion in OpenAI. Early-stage venture capital firms poured $2.2 billion into generative AI last year alone, and this year Salesforce announced a $250 million fund to invest in the space. Headlines and institutions alike declare AI the future of work, ready to replace writers and artists, among others. That these breathless predictions outpace the quality of the technology itself says a lot about our cultural moment and our longstanding tendency to devalue creative work.

The supposed promise of the AI future is efficient, abundant content. Office workers can now generate entire presentations with a query or a click. Creative agencies use image generators to mock up client concepts. Even literary magazines report being inundated with AI-generated submissions, and yes, outlets are publishing AI-generated articles and illustrations.

But AI models have proved time and again that they perpetuate bias, misunderstand cultural context, and prioritize sounding persuasive over telling the truth. They rely on datasets of human creative work, an approach that might otherwise be called plagiarism or data scraping. And the models behind ChatGPT and DALL-E are black boxes, so it’s technically impossible to trace the origins of their data.

Today, these and other models require people (with their own biases) to train them toward “good” results and then to check their work at the end. Because the tools are built to match patterns, their results are often repetitive and derivative: an aesthetic of sameness rather than invention.

So the incentive to replace human workers does not come from the capabilities of the technology. It stems from years in which companies large and small, especially in publishing, technology, and media, have tightened the screws on creative work in order to spend less and less on workers.

In today’s economic downturn, even tech companies are cutting costs through mass layoffs (including of AI ethics teams) while funding and selling AI tools. But the situation is more dire for writers, artists, and musicians, who have struggled to earn a living for a long time.

Pay for writers, editors, and illustrators in this country has stagnated over the past two decades. Some countries have begun to treat art as a public good: Ireland is experimenting with paying artists to create art, and other countries subsidize audiences for the arts. But in the US, public funding for the arts falls embarrassingly short compared with other wealthy Western countries, and it dropped even further during the pandemic. Many artists must hop from one social media app to another to build an audience for their work and generate income.

Meanwhile, the omnipresence of streaming subscriptions and algorithmic feeds, with their laser focus on capturing the most engagement, has turned creativity into an endless scroll of mediocre, recurring styles: an unsustainable model for original work.

Automation is the next chapter in this ever-cheaper content story. How much can the cost of art go down? Why pay artists a living wage when machines can be programmed to produce interchangeable pieces of content?

Because these models will not replace human creative work. If we are to break out of repetitive patterns, push against bias, and create new possibilities, the work must come from people.

The danger of reducing creative work to widgets for outsourcing is that we lose the stages of reflection and iteration that forge new connections. The large language models underlying chatbots are designed to provide a single authoritative answer that encapsulates the world within the information they have already ingested.

The human brain, by contrast, has a uniquely recursive processing ability that allows us to interpret ideas outside a fixed set of rules. Each step of the creative process, no matter how slow, small, or boring, is a generative act that takes a concept somewhere new and imagines a wider world than the one that exists today.

An AI takeover is not inevitable, despite what some business and technology leaders say. This is not the first cycle of tech hype, and some regulators, unions, and artists are already pushing back. After the cryptocurrency crash, the Federal Trade Commission established an Office of Technology to support enforcement in emerging technical areas, and the agency has issued several public warnings that false claims about AI products’ capabilities will be challenged.

The Writers Guild of America, which is preparing for a possible strike, has proposed safeguards and standards for the use of AI in screenwriting. SAG-AFTRA, the union representing film, television, and radio performers, has said that if studios want to use AI to simulate actors’ performances, they will have to negotiate with the union. Some researchers are building tools to protect visual artists’ work from being swallowed up by image-generator models, and others have launched open-source systems to surface biases in AI models.

But the broader call to action is cultural: to recognize that creative work is not just product or content but a necessary and highly skilled practice that deserves robust funding and support. Creativity is how meaning gets made in a culture. That is work machines cannot do, and it should not be controlled by the companies that make them. A bot could quickly write an ending to this story, but we have to ask ourselves: whose voices do we really want?

Rebecca Ackermann has written about technology and culture for MIT Technology Review, Slate, and other outlets. She previously worked as a designer at technology companies including Google.
