

Mammoth Meatball: Startup Makes Meat From a Long-Extinct Species



AMSTERDAM — Throw another mammoth on the barbie?

An Australian company on Tuesday lifted the glass cover on a meatball made of lab-grown, cultured meat using the genetic sequence of the long-extinct mammoth, saying it was meant to spark public debate about the high-tech delicacy.

The launch at the Amsterdam science museum came just days before April 1, so there was an elephant in the room: was it real?

“This is not an April Fools’ joke,” said Tim Noakesmith, founder of Australian startup Vow. “This is real innovation.”

Cultivated meat, also known as cultured or cell-based meat, is made from animal cells. Its production does not require the slaughter of livestock, which advocates say is better not only for the animals but also for the environment.

Vow took the mammoth’s publicly available genetic information, filled in the missing pieces with genetic data from its closest living relative, the African elephant, and inserted it into a sheep cell, Noakesmith said. Under the right conditions in the lab, the cells multiplied until there were enough of them to roll into meatballs.

More than 100 companies around the world are working on cultured meat products, many of them start-ups like Vow.

Experts say that if the technology is widely adopted, it could significantly reduce the environmental impact of global meat production in the future. Currently, billions of acres of land are used for agriculture around the world.

But don’t expect it to be on plates around the world anytime soon. At the moment, tiny Singapore is the only country that has approved cell-based meat for consumption. Vow hopes to sell its first product there, cultured Japanese quail meat, later this year.

The mammoth meatball is a one-off that has not even been tasted by its creators, and there are no plans to put it into commercial production. Instead, it was presented as a way to get people talking about the future of meat.

“We wanted people to be excited that the future of food can be different from what we had before. That there are things that are unique and better than the meat we eat now, and we thought the mammoth would be a conversation starter and get people excited about this new future,” Noakesmith told The Associated Press.

“But also the woolly mammoth has traditionally been a symbol of loss. We now know it died out as a result of climate change. And so we wanted to see if we could create something that would symbolize a more exciting future that is better not only for us, but for the planet,” he added.

Seren Kell, science and technology manager at the Good Food Institute, a non-profit that promotes plant-based and cell-based alternatives to animal products, said he hopes the project will “open up new conversations about the extraordinary potential of cultivated meat to produce more sustainable food, reduce the climate impact of our current food system and free up land for less intensive agriculture.”

He said the mammoth project, with its unconventional source of genes, was an outlier in the cultivated meat sector, which usually focuses on conventional livestock such as cattle, pigs and poultry.

“By cultivating beef, pork, chicken and seafood, we can make the biggest impact in terms of reducing emissions from conventional livestock production and meeting growing global demand for meat while hitting our climate goals,” he said.

The mammoth meatball on display in Amsterdam, somewhere between a softball and a volleyball in size, was for display purposes only and was coated in a glaze to ensure it wouldn’t be damaged on the trip from Sydney.

But when it was cooked – first slow-baked, then finished on the outside with a blowtorch – it smelled good.

“People who were there said the aroma was similar to another prototype we had produced before, which was crocodile,” Noakesmith said. The mammoth meat, he added, had “a completely unique and new flavor that we as a population haven’t tasted in a very long time.”


Associated Press reporter Laura Ungar contributed from Louisville, Kentucky.


FDIC should act like a real insurer



Bank counter in Westminster, Colorado on November 3, 2009


Rick Wilking/REUTERS

The spectacular collapse of Silicon Valley Bank exposed a fatal flaw in the American deposit-insurance system: the $250,000 FDIC coverage limit is evidently insufficient to prevent a bank run. But raising or eliminating that limit, as some commentators have suggested, would create enormous moral hazard. Instead, the FDIC should mitigate risk as an insurer would: by pricing and spreading it.

Deposit insurance is the backbone of the American financial sector. In our fractional-reserve system, banks lend out almost all deposits. If all depositors showed up and demanded their money at the same time, there would not be enough to satisfy everyone. Even a rumor can start a run, especially given how quickly ideas spread in the age of Twitter.

Deposit insurance reduces the chance of runs by giving insured depositors peace of mind, but many depositors hold funds in excess of the FDIC insurance limit. Businesses in particular often need to keep more than $250,000 in the bank at any given time to make payroll and pay other bills. If their bank seems unstable, these depositors have every reason to pull their money.

This is exactly what happened to Silicon Valley Bank. The media made much of the fact that more than 90% of SVB’s deposits were uninsured; businesses recognized the risk and fled, pulling out $42 billion in a single day. The FDIC stepped in and decided to cover all deposits anyway, but making that the rule would remove almost all of banks’ responsibility for minimizing risk.

There is a simpler and safer solution. The FDIC should encourage—or require—banks to distribute the amounts they hold in excess of insurance limits to other institutions. Suppose Jane deposits $500,000 in Regional Bank A and Joe deposits $500,000 in Regional Bank B. If the FDIC had a system whereby Bank A exchanged half of Jane’s deposit for half of Joe’s, it would eliminate the uninsured risk for both clients without appreciable risk or cost to either bank. And it could do all this while letting Jane and Joe access their funds from their primary banks using their existing debit cards and checkbooks.
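The Jane-and-Joe arithmetic can be sketched as a toy calculation (a sketch only: the even split and the `LIMIT` constant reflect the article’s figures, not actual FDIC mechanics or any real bank’s implementation):

```python
LIMIT = 250_000  # FDIC insurance limit per depositor, per bank (illustrative)

def split_reciprocal(deposit: int, n_banks: int) -> list[int]:
    """Split one deposit evenly across n partner banks so each slice
    stays at or under the insurance limit (toy model only)."""
    base, rem = divmod(deposit, n_banks)
    slices = [base + (1 if i < rem else 0) for i in range(n_banks)]
    if any(s > LIMIT for s in slices):
        raise ValueError("need more partner banks to fully insure this deposit")
    return slices

# Jane's $500,000, split between Bank A and partner Bank B:
jane = split_reciprocal(500_000, 2)
print(jane)                             # [250000, 250000]
print(all(s <= LIMIT for s in jane))    # True -- every slice fully insured
```

In this toy model, Jane’s total balance is unchanged; only the booking of the excess moves to a partner bank, which is the whole point of a reciprocal arrangement.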

The FDIC would not have to build such a system from scratch. Several large firms and a number of fintech startups already help depositors extend FDIC coverage by spreading their deposits across multiple FDIC-insured banks, in what are called “sweep” or “reciprocal deposit” arrangements. The problem is that few people know about these systems or use them.

In fact, there are already many ways for savers to insure more than $250,000 under existing FDIC rules. The limit applies per depositor, per ownership category – clients get $250,000 of coverage for each of 14 account types they can hold at a bank, and if an account has several beneficiaries, each of them gets that much. A married couple can insure up to $1 million by splitting their money between separate accounts for each spouse and a joint account. This complexity of joint accounts and pass-through insurance likely meant that SVB actually had many more insured deposits than reported. Unfortunately, few people know about these aspects of deposit insurance.
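The per-category arithmetic behind the $1 million figure can be illustrated with a toy tally (a simplification: real FDIC coverage calculations involve more rules than “limit times number of co-owners,” and the account labels here are invented for illustration):

```python
LIMIT = 250_000  # per depositor, per bank, per ownership category (illustrative)

def insured(accounts):
    """accounts: list of (label, balance, n_owners) tuples.
    Each account is covered up to LIMIT per co-owner (simplified model)."""
    total = 0
    for _label, balance, n_owners in accounts:
        total += min(balance, LIMIT * n_owners)
    return total

# A married couple at one bank: one single account per spouse plus a joint account.
couple = [
    ("single-spouse-1", 250_000, 1),
    ("single-spouse-2", 250_000, 1),
    ("joint",           500_000, 2),  # joint account: LIMIT per co-owner
]
print(insured(couple))  # 1000000 -- the $1 million figure in the text
```

The same function also shows the downside case: a single depositor with $400,000 in one account is only covered for $250,000, which is the gap the reciprocal-deposit idea addresses.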

This is where the FDIC can step in. It has unique insight into deposits, their structure and their insurance status across all 4,237 banks it covers. The FDIC knows when depositors and banks could benefit from distributing funds under a reciprocal deposit agreement. It could encourage banks to adopt the practice by setting a threshold for uninsured deposits above which institutions would pay higher premiums unless they use reciprocal deposit agreements to mitigate their risk. If the FDIC wanted to be more assertive, it could build reciprocal deposit agreements into its supervisory expectations for risk management. Banks that failed to use the available tools to reduce uninsured deposits would face enforcement action.

This approach is far preferable to raising or eliminating the deposit-insurance limit, which would entail moral hazard and costs for taxpayers. It would also reduce the need for bank bailouts. And it would give regional banks, which provide important local services and spread risk through the system, a fighting chance.

Regulators usually reach for familiar tools, whether or not those tools have worked in the past. It might be worth trying something different this time.

Mr. Brooks is a former Acting Comptroller of the Currency and a member of the Board of Directors of the FDIC. Mr. Henderson is a professor of law at the University of Chicago and a visiting fellow at the Hoover Institution.

Copyright © 2022 Dow Jones & Company, Inc. All rights reserved.



Nothing Phone (2) Leak Hints at OnePlus’ Spiritual Successor



The Nothing Phone (2) has been spotted on the website of India’s Bureau of Indian Standards, according to 91mobiles, a phone blog focused on the local market. That suggests production may be complete and a launch imminent.

This is good news for anyone interested in the Nothing Phone, not just Indian fans, as Nothing CEO Carl Pei has promised to bring the upcoming Nothing Phone (2) to more markets, making the US a real priority this time rather than a late-round beta-testing zone.

We recently heard from chatty Qualcomm executives that the Nothing Phone (2) will likely use the company’s late-2022 flagship mobile platform, the Snapdragon 8 Plus Gen 1. That isn’t the newer Snapdragon 8 Gen 2, but the chipset is powerful enough to power phones like the Galaxy Z Fold 4 foldable. That makes it a major upgrade for Nothing.

Nothing Phone (1) with Nothing Ears (1) (Image credit: Peter Hoffman)

The 91mobiles leak also suggests the phone could get 12GB of RAM as standard. Combined with a fast chipset, that would make the Nothing Phone (2) a serious performer. If Nothing can deliver the phone at the original’s bargain price, it will be an interesting contender.



How to Spot an AI-Generated Image Like the “Balenciaga Pope”



For several years now, the public has been warned about the risk posed by images created by artificial intelligence, also known as deepfakes. But until recently, it was relatively easy to tell an AI-generated image from a photograph.

Not anymore. Over the weekend, an AI-generated image of Pope Francis wearing a Balenciaga down jacket went viral online, fooling many internet users.

In just a few months, publicly available AI image-generation tools have become powerful enough to create photorealistic images. While the image of the Pope did contain clear signs of fakery, it was convincing enough to fool many internet users, including the celebrity Chrissy Teigen. “I thought the Pope’s down jacket was real and didn’t give it a second thought,” she wrote. “No way am I surviving the future of technology.”


While AI-generated images have gone viral before, none has fooled as many people as quickly as the image of the Pope. The picture was bizarre, and it should be noted that much of its virality came from people deliberately sharing it for laughs. But history may record the Balenciaga Pope as the first truly viral misinformation event fueled by deepfake technology, and a sign of worse to come.

While deepfake victims, especially women who have been targeted by non-consensual deepfake pornography, have been warning about the risks of this technology for years, in recent months image-generation tools have become far more accessible and powerful, producing better fakes of every kind. As AI advances rapidly, it becomes increasingly difficult to tell whether an image or video is real. This could have a significant impact on public susceptibility to foreign influence operations, targeted harassment of individuals, and trust in the news.

Here are some tips to help you recognize AI-generated images today, and avoid being fooled by the even more convincing generations of the technology to come.

How to recognize an image created by AI today

If you look closely at the image of the Pope in Balenciaga, you will find several clear signs that it was created with AI. The crucifix hanging on his chest is inexplicably suspended in midair, with only a white down jacket where the rest of the chain should be. In his right hand he holds what looks like a vague coffee cup, but his fingers are closed around thin air, not the cup itself. His eyelid somehow merges with his glasses, which in turn blend into their own shadow.

Original image courtesy of @art_is_2_inspire via Instagram

Today, the best way to identify an AI-generated image is often to look at these intricate details. AI image generators are essentially pattern replicators: they have learned what the Pope looks like and what a Balenciaga down jacket might look like, and they can mash the two together. But they do not (yet) understand the laws of physics. They have no idea that a crucifix can’t float in the air without a chain supporting it, or that glasses and the shadow behind them aren’t one single object. It is in these often peripheral parts of an image that humans can intuitively notice inconsistencies the AI cannot.

But AI technology will soon improve enough to correct these kinds of errors. Just a few weeks ago, Midjourney, the AI tool used to create the image of the Pope, could not generate realistic human hands. To check whether an image of a person was AI-generated, you could look for seven blurry fingers or some other alien appendage. Not anymore. The newest version of Midjourney can generate realistic-looking human hands, removing perhaps the easiest way to identify an AI image. Since AI image generators are evolving so quickly, it’s reasonable to assume the advice above could soon become outdated.

How not to be deceived in the future

For now, media-literacy techniques may be your best defense against AI-generated images. Asking these questions won’t help you spot 100% of fake images, but it will help you spot more of them, and make you more resilient to other forms of misinformation. Remember to ask: where did this image come from? Who is sharing it, and why? Does it conflict with other reliable information you have access to?

When it comes to viral AI-generated images, it’s also worth checking what others are saying. Google and other search engines offer reverse image search tools that let you see where an image has already been posted online and what people are saying about it. With these tools, you can find out whether experts or trusted publications have determined an image to be fake, and track down where it was first posted. If an image supposedly taken by a news photographer was first posted by a pseudonymous user on a social network, that’s reason to doubt its authenticity.

If you’re a Twitter user, the community notes feature can often give you more information about an image, though often not until some time after a tweet is first posted. After the image of the Pope had already gone viral, a note was appended to the original post. “This image of Pope Francis is an AI-generated image, not a real one,” the note reads. “The image was created in the Midjourney AI imaging app.”

Are there technological solutions?

There is plenty of software on the market that claims to detect deepfakes, including an offering from Intel that it says is 96% accurate at detecting deepfake videos. But few free online tools can reliably tell you whether an image was created by artificial intelligence. One free AI image detector, hosted on the AI platform Hugging Face, correctly judged with 69% confidence that the Balenciaga Pope image was AI-generated. But presented with an AI-generated image of Elon Musk, also created with the latest version of Midjourney, the tool gave the wrong answer, saying it was 54% sure the image was genuine.

When AI researchers at Nvidia and the University of Naples set out to find out how difficult it would be to build an AI-generated-image detector, they found several limitations, according to a paper they published in November 2022. They found that AI image generators do leave invisible telltale marks on the images they create, and that these “hidden artifacts” look slightly different depending on which program produced the image. The bad news is that these artifacts tend to become harder to detect when an image is resized or its quality is lowered – as often happens when images are posted and re-posted on social media. According to Annalisa Verdoliva, one of the paper’s co-authors, the researchers built a tool that could determine that the Balenciaga Pope image was AI-generated. But while the tool’s code is available online, it hasn’t been built into a web app, which means it is hard for the average user to access.

Another method, first described in a paper published earlier this month, may generalize better. A team of researchers found that when an advanced AI image generator known as a diffusion model was fed an AI-generated image, it could easily produce a near-exact copy of the input. By contrast, the tool struggled to reproduce even a rough copy of a real photograph. The finding has not yet been turned into an accessible online tool, but it offers hope that applications will one day be able to reliably determine whether an image was created by artificial intelligence.
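The idea behind that method — a diffusion model reconstructs images from its own distribution almost perfectly but struggles with real photos — can be sketched as a simple threshold rule on reconstruction error (a toy sketch: `near_copy` and `rough_copy` are synthetic stand-ins for a diffusion model’s actual reconstructions, and the 0.01 threshold is arbitrary):

```python
import numpy as np

def reconstruction_error(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between an image and the model's reconstruction of it."""
    return float(np.mean((original - reconstructed) ** 2))

def looks_ai_generated(error: float, threshold: float = 0.01) -> bool:
    """Per the paper's finding: near-perfect reconstruction (low error)
    suggests the image came from the generator's own distribution."""
    return error < threshold

np.random.seed(0)
img = np.random.rand(8, 8)  # toy "image"
# Stand-ins for model behavior: AI images reconstruct almost exactly,
# real photos only roughly.
near_copy = img + np.random.normal(0, 0.001, img.shape)
rough_copy = img + np.random.normal(0, 0.3, img.shape)

print(looks_ai_generated(reconstruction_error(img, near_copy)))   # True
print(looks_ai_generated(reconstruction_error(img, rough_copy)))  # False
```

The real system would obtain the reconstruction by running the image through the diffusion model itself; everything else about the classification step is this simple.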


Write to Billy Perrigo:



Copyright © 2023 Independent Post Media.